Although ray tracing is simple, powerful, and easy to implement, it handles many real-world effects poorly. Two examples are color bleeding and caustics, the patterns formed where light is focused by reflection or refraction onto a diffuse surface.
Until photon mapping was proposed in 1993, ray tracing could not solve these two problems effectively; photon mapping handles both well. Color bleeding and caustics are both caused by indirect illumination arriving via non-specular surfaces, and with photon mapping this illumination can be estimated from a precomputed photon map. Extending ray tracing with photon mapping yields a method that can account for any direct or indirect illumination. Photon mapping can also handle participating media, and it parallelizes easily. A global illumination algorithm based on photon mapping has two passes: in the first pass, photons are emitted from the light sources into the scene, and whenever they hit a non-specular surface they are stored, building up the photon map. In the second pass, statistical techniques are used to estimate the incident flux and reflected radiance at any point in the scene from the photon map. The photon map is completely decoupled from the scene representation; this is what lets photon mapping handle very complex scenes, including tens of millions of triangles, instanced geometry, and complex procedural objects.
Compared with finite-element radiosity methods, the advantage of photon mapping is that it requires no meshing. Radiosity can be very fast in simple scenes, but once the scene becomes complex it lags far behind photon mapping. Moreover, photon mapping can also handle non-diffuse surfaces and caustics, which radiosity cannot.
Compared with Monte Carlo ray tracing methods such as path tracing, bidirectional path tracing, and Metropolis light transport, which can simulate all global illumination effects with very little memory overhead, the biggest advantage of photon mapping is efficiency, at the cost of the additional memory needed to hold the photon map. The photon mapping algorithm is fast for most scenes, and the result often looks better than a pure Monte Carlo image, because most of the error introduced by photon mapping appears as low-frequency noise that is hard to notice, while Monte Carlo error tends to be high-frequency.
Another benefit, from an economic perspective, is that the photon mapping method is patent-free (this is Jensen's joke, of course). The difference between photon tracing and ray tracing is that ray tracing gathers radiance, while photon tracing gathers flux.
This difference matters, because the interaction of a ray with a material is not the same as the interaction of a photon with it. An obvious example is refraction: radiance changes across a refractive boundary (it scales with the square of the relative index of refraction), while photon tracing is unaffected, because what the photon carries is flux. When a photon hits a surface it is either reflected, transmitted (refracted), or absorbed, depending on the parameters of the surface. The standard technique for deciding among reflection, refraction, and absorption is Russian roulette: a random choice determines whether the photon survives and continues to the next step of photon tracing, or is absorbed. Which photon-surface interactions should be stored? Photons are stored only when they hit a diffuse, or more precisely non-specular, surface. Storing photons on a specular surface gives no useful information: the probability of a photon arriving exactly along the mirror direction is zero. The right way to render specular reflection accurately is to use ray tracing, tracing rays in the mirror-reflection direction.
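A minimal sketch of the Russian-roulette decision, assuming a surface described only by its diffuse and specular reflectances (`rho_d` and `rho_s` are illustrative names, not from the original text):

```python
import random

def russian_roulette(rho_d, rho_s, rng=random.random):
    """Decide a photon's fate at a surface with diffuse reflectance
    rho_d and specular reflectance rho_s (rho_d + rho_s <= 1).
    Returns 'diffuse', 'specular', or 'absorbed'."""
    xi = rng()  # uniform random number in [0, 1)
    if xi < rho_d:
        return "diffuse"      # photon is reflected diffusely
    elif xi < rho_d + rho_s:
        return "specular"     # photon is reflected specularly
    else:
        return "absorbed"     # photon path terminates here
```

Because the survival probability equals the reflectance, the photon's power does not have to be scaled at every bounce; the energy balance is correct on average.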
A photon can be stored several times along its path, including at the point where it is finally absorbed by a diffuse surface. Each time a photon hits a surface its position, power, and incident direction are stored, together with a flag used later when building the lookup structure.
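A photon record might look like the following sketch (the field layout is illustrative; Jensen's actual representation packs power and direction into a few bytes for compactness):

```python
from dataclasses import dataclass

@dataclass
class Photon:
    position: tuple   # (x, y, z) hit point on the surface
    power: tuple      # (r, g, b) flux carried by the photon
    direction: tuple  # unit vector of the incident direction
    flag: int = 0     # splitting-plane flag, set when the kd-tree is balanced

# During photon tracing the map is just an append-only list:
photon_map = []
photon_map.append(Photon((0.0, 1.0, 0.0), (0.1, 0.1, 0.1), (0.0, -1.0, 0.0)))
```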
During the photon tracing pass the photon map is kept as a flat array of photons; before rendering, the array is reorganized into a balanced kd-tree for efficiency.
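A minimal sketch of balancing a point set into a kd-tree by median splits (positions only; a real photon map would split along the axis of greatest extent and store the balanced tree as a heap-like array):

```python
def build_kdtree(points, depth=0):
    """Recursively build a balanced kd-tree over 3-D points.
    Each node is a tuple (point, axis, left_subtree, right_subtree)."""
    if not points:
        return None
    axis = depth % 3                        # cycle through x, y, z
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2                     # median element becomes the node
    return (pts[mid], axis,
            build_kdtree(pts[:mid], depth + 1),
            build_kdtree(pts[mid + 1:], depth + 1))
```

Median splitting guarantees the tree is balanced, so nearest-neighbor queries run in logarithmic expected time regardless of the photon distribution.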
It is generally assumed that photon interactions occur at object surfaces, and that the interior of an object has no effect on photons. In fact it is not difficult to extend photon mapping to participating media; Jensen published a paper on volume photon mapping in 1998.
In fact, photons can be emitted not only from points and surfaces but also from volumes. A candle flame, for example, can be simulated by emitting photons from a flame-shaped volume.
When a photon passes through a medium, it may be scattered or absorbed; the probability depends on the density of the medium and the distance the photon travels through it. For inhomogeneous media, photon mapping can conveniently use ray marching [Jensen98]. A simple ray marcher divides the medium into many small steps; each step accumulates the optical depth (the integral of the extinction coefficient) and then, based on a precomputed probability, decides whether the photon is scattered, absorbed, or continues to the next step. Building the photon map happens entirely during photon tracing; during rendering, the photon map is a static data set used to estimate the flux and reflected radiance at points in the scene.
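A minimal ray-marching sketch under these assumptions: `sigma_t(p)` is a caller-supplied extinction-coefficient function (an illustrative name), and the photon travels from `origin` along unit vector `direction` until a pre-sampled optical depth of `-ln(xi)` is accumulated:

```python
import math

def march_photon(origin, direction, sigma_t, xi, step=0.01, max_dist=100.0):
    """March a photon through an inhomogeneous medium.
    Returns the distance at which an interaction (scatter/absorb) occurs,
    or None if the photon leaves the medium without interacting.
    sigma_t(p): extinction coefficient at point p; xi: uniform in (0, 1)."""
    tau_target = -math.log(xi)  # optical depth at which the interaction happens
    tau = 0.0
    t = 0.0
    while t < max_dist:
        p = tuple(o + t * d for o, d in zip(origin, direction))
        tau += sigma_t(p) * step    # accumulate optical depth over one step
        if tau >= tau_target:
            return t                # photon scatters or is absorbed here
        t += step
    return None                     # photon exits the medium
```

Whether the interaction is a scatter or an absorption would then be decided by a second Russian-roulette test against the medium's scattering albedo.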
During rendering, the photon map is repeatedly searched for the photons nearest a given point, so the data structure used to reorganize the photon map has a large impact on the efficiency of the algorithm. The structure must above all be compact, and it must support nearest-photon queries as quickly as possible. A balanced kd-tree satisfies both requirements; Jensen96a compares balanced and unbalanced kd-trees. The radiance at a point in the scene is estimated from the flux of the N photons nearest that point. The simplest method is to find the smallest sphere around the point that encloses N photons. The total flux of those N photons divided by the projected area of the sphere (that is, pi*r^2) approximates the irradiance; multiplying by the BRDF then gives the radiance reflected from that point toward the eye, which becomes the color of the corresponding pixel in the final image.
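A sketch of this radiance estimate for a diffuse surface, assuming the N nearest photons have already been gathered and carry RGB powers (for a Lambertian surface the BRDF is rho/pi; `rho` is an illustrative parameter):

```python
import math

def radiance_estimate(photon_powers, radius, rho=(1.0, 1.0, 1.0)):
    """Estimate reflected radiance at a point on a diffuse surface.
    photon_powers: (r, g, b) powers of the N photons inside the search sphere.
    radius: radius of the smallest sphere enclosing them.
    rho: diffuse reflectance; the Lambertian BRDF is rho / pi."""
    area = math.pi * radius * radius              # projected area pi * r^2
    flux = [sum(p[c] for p in photon_powers) for c in range(3)]
    irradiance = [f / area for f in flux]         # total flux per unit area
    return [e * rho[c] / math.pi for c, e in enumerate(irradiance)]
```

Note the division by the *projected* area pi*r^2, not the sphere's surface area: the estimate assumes the photons landed on a locally flat piece of surface inside the sphere.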
Besides a sphere, discs or ellipsoids can be used to locate the N photons, which reduces aliasing where surfaces meet, and different filters can be chosen to suppress the noise caused by insufficient photon density. Two techniques are commonly used to produce the final image: splatting (photons are spread according to their power and then blurred, or clustered and then spread) and final gathering (for each visible point, nearby photons are collected and their contributions combined in a weighted average). The reason is that visualizing the photon map directly requires a very large number of photons to obtain a high-quality image, whereas combining it with ray tracing produces a more accurate image from far fewer photons. A pixel's radiance is approximated by the average of a set of samples, each obtained by tracing a sight ray from the eye through the pixel into the scene.
The radiance value returned is the outgoing radiance along the sight ray at the first point where the ray intersects a non-specular surface.