
Physically Based Rendering - Material

I learned the shading magic the way most others did: starting from the classic Blinn-Phong model and moving on to more complex ones like the Cook-Torrance model. But my understanding was always confined to the common practices shaped by our slow computers and our eager desire for more realistic results. With these compromises, tricks, and hacks we can achieve lots of cool and amazing stuff at first, but if we don’t have a clear bird’s-eye view (for example, people like me who are always struggling among the formulae), then we lose ourselves in shader code and strange artifacts, unable to find solutions without painful testing and doubting.

Then I was thinking: why not implement and learn something from basic physics again? Let’s forget for a moment what I’ve already written and construct a shading pipeline from scratch, following the physical interpretations, then look for solutions that run within each 16 ms. Let’s go back to the start.

So, what is light?

“…In physical optics, light is modeled as an electromagnetic transverse wave, a wave that oscillates the electric and magnetic fields perpendicularly to the direction of its propagation…” - pg. 293, ”Real-Time Rendering”, 4th Edition.

(This book has become my bible: it’s a nice working reference, a balanced textbook, and a well-formed dictionary. I’ll quote a lot from it in this ctrl-c + ctrl-v style post (Why not?))

Light, a kind of electromagnetic wave whose properties are shaped mostly by its wavelength, is one of the fundamental beings of our universe, and all our beautiful rendering work always comes down to calculating its physical transmission. But consider thinking of it this way: only because it can be received by our eyes (a psychophysical phenomenon) did it get the special name “light”. If we could treat it as nothing special in our theoretical discussion, would things become easier and more general to handle?

Let’s say the basic electromagnetic mechanism works for light too, with no exceptions (of course it does). Then we could model a general solution in the computer and feed it the light wavelengths $\lambda$ we emit from some origins, plus the relevant properties of the objects that stop the light’s propagation, to calculate the result (it may later also include the influence of the space they are in; for now let’s just work inside an ideal static vacuum). The classical physical model of electromagnetism is successfully described by Maxwell’s equations, and with them we could figure out how the electric field propagates through space, which means getting the correct electromagnetic wave phenomena in our simulation box.

But unfortunately, we don’t have the huge computational resources to run a field-propagation simulation in real time on a personal computer, and representing a non-analytical electric field requires at least a 3D data structure, so we must accept some compromises. Luckily, physicists have already narrowed the scope of the problem for us: instead of the original electromagnetic-field methods, we can use radiative methods that focus on the energy alone, and this brings the simplification of considering only two things, the emitter and the receiver of the electromagnetic wave.

A new measurement

“…Light waves are emitted when the electric charges in an object oscillate. Part of the energy that caused the oscillations—heat, electrical energy, chemical energy—is converted to light energy, which is radiated away from the object. …” - pg. 296, ”Real-Time Rendering”, 4th Edition.

Then we arrive at Radiometry, which deals with the measurement of electromagnetic radiation. For common usage there are several radiometric quantities that measure electromagnetic radiation energy with respect to other basic physical quantities such as time, distance, area, and angle. I’ll list the ones most important to our rendering business below, with references from the book and Wikipedia.

| Quantity | Symbol | Unit | Unit symbol | Notes |
| --- | --- | --- | --- | --- |
| Radiant energy | $Q_e$ | joule | $J$ | Energy of electromagnetic radiation |
| Radiant flux | $\Phi_e$ | joule per second, or watt | $\frac{J}{s}$ or $W$ | Also called radiant power |
| Radiant intensity | $I_e$ | watt per steradian | $\frac{W}{sr}$ | The steradian is the solid-angle analogue of the planar angle |
| Irradiance / flux density | $E_e$ or $M_e$ | watt per square metre | $\frac{W}{m^2}$ | “Ir-radiance”: the flux density received by a surface |
| Radiance | $L_e$ | watt per steradian per square metre | $\frac{W}{m^2 \cdot sr}$ | Flux per solid angle per projected area |

We won’t calculate radiant energy directly: we usually handle objects that have a shape and events that happen over a period, so it’s more convenient to cancel those quantities out first, which means we’d better choose radiant intensity, irradiance/flux density, and radiance.

To evaluate how a light source emits light, or energy, we first need to define what a light source is. In this blog post I’ll discuss only the ideal punctual light source, which has an infinitely small shape, the same as a point in space. I’ll also idealize it with an omnidirectional radiation characteristic, meaning the radiation doesn’t vary across different solid angles.

We can deduce the definition of radiant intensity first: imagine a unit sphere surrounding the point light source; each unit solid angle emits some energy per second, so $I_e = \frac{d\Phi}{d\omega}$.

Diagram

Similarly, we get irradiance/flux density (or radiant exitance if we want to emphasize emitting rather than receiving, which assumes an area light source): $M_e = \frac{d\Phi}{dA}$.

Diagram

And then radiance is $L_e = \frac{d^2\Phi}{d\omega\, dA \cos\theta}$. We now need to consider the angle between the surface normal and the solid-angle direction, because the actual effective area is not the same as the original unit area; it is projected ($A\cos\theta$ is called the projected area), so we add a cosine here. This kind of cosine-weighted distribution is called Lambert’s cosine law.

Diagram
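
To make these definitions concrete, here is a small worked example, with numbers chosen purely for illustration. Take an isotropic point source emitting a radiant flux of $\Phi = 100\,W$; the flux spreads over the full sphere of $4\pi$ steradians, so

$I_e = \frac{\Phi}{4\pi} = \frac{100}{4\pi} \approx 7.96\ W/sr,$

and a small patch at distance $r = 2\,m$, tilted by $\theta = 60^\circ$ from the light direction, receives the inverse-square, cosine-weighted irradiance

$E_e = \frac{I_e \cos\theta}{r^2} = \frac{7.96 \times 0.5}{4} \approx 0.99\ W/m^2.$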

Through time and space, once and forever?

“…The oscillating electrical field pushes and pulls at the electrical charges in the matter, causing them to oscillate in turn. The oscillating charges emit new light waves, which redirect some of the energy of the incoming light wave in new directions. This reaction, called scattering, is the basis of a wide variety of optical phenomena. …” - pg. 297, ”Real-Time Rendering”, 4th Edition.

If we emit any light/energy in a vacuum, it never changes its direction or energy unless it meets some obstruction (for example molecules or small dust floating in space, a giant gas planet like Saturn, or our body/eyes); then it gets absorbed or scattered according to the characteristics of the obstruction. Actually, this is all we need to care about: the obstruction IS exactly the receiver! If we can model what the receiver is like, we solve the problem. But again, the limits of our computational resources don’t allow us to simulate every molecule; instead, we have to abstract them with some macro-scale rules. We call these single or grouped trivially shaped objects that influence the wave propagation particles, and the volume filled with particles a medium.

We live on Earth, where air and water are the most common noticeable media, which have inspired aesthetic creation for thousands of years. To measure how they influence light propagation, we need to define a kind of ratio between the original light and the affected light.

“…The ratio of the phase velocities of the original and new waves defines an optical property of the medium called the index of refraction (IOR) or refractive index, denoted by the letter $n$. Some media are absorptive. They convert part of the light energy to heat, causing the wave amplitude to decrease exponentially with distance. The rate of decrease is defined by the attenuation index, denoted by the Greek letter $\kappa$ (kappa). Both $n$ and $\kappa$ typically vary by wavelength. Together, these two numbers fully define how the medium affects light of a given wavelength, and they are often combined into a single complex number $n + i\kappa$, called the complex index of refraction. …” - pg. 298, ”Real-Time Rendering”, 4th Edition.

As the book introduces, we use the complex IOR to represent the characteristics of a medium, but in the practice of local illumination we often care only about the real part $n$, since attenuation matters mostly for conductive media and we typically assume air as our originating medium (it also matters in the volume-rendering business, but I won’t cover that topic here). Then we can simply get the IOR as $n = \frac{c}{v}$, where $c$ is the speed of light in vacuum and $v$ is the speed of light in the medium. For a further detailed discussion of the complex IOR, I recommend this post by Sébastien Lagarde, in which he talks through all the other possible medium-interface situations.

And then we have a physical law called Snell’s law, which relates the incident angle to the refracted angle via the IORs of the two media, written as $\sin(\theta_t) = \frac{n_1}{n_2}\sin(\theta_i)$, where $\theta_i$ is the angle between the interface normal and the incident light direction, and $\theta_t$ is the angle between the inverse of the interface normal and the refracted/transmitted light direction. We denote the index of refraction on the “outside” (the side where the incoming, or incident, wave originates) as $n_1$ and the index of refraction on the “inside” (where the wave will be transmitted after passing through the surface) as $n_2$.

Diagram
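
As a minimal sketch of how Snell’s law appears in shader code: GLSL’s built-in refract already implements it, so the manual version below (a helper of my own, only for illustration) just spells out the same math.

// Manual Snell's law: returns the refracted direction, or vec3(0.0) on
// total internal reflection. I is the incident direction (pointing toward
// the surface), N the normalized interface normal, n1/n2 the two IORs.
vec3 refractSnell(vec3 I, vec3 N, float n1, float n2)
{
    float eta = n1 / n2;
    float cosI = -dot(N, I);
    float sin2T = eta * eta * (1.0 - cosI * cosI); // sin^2(theta_t)
    if (sin2T > 1.0)
        return vec3(0.0);                          // total internal reflection
    float cosT = sqrt(1.0 - sin2T);
    return eta * I + (eta * cosI - cosT) * N;
    // equivalent to the built-in: refract(I, N, eta)
}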

Now, if we know the angles (here we abstract the light to a single monochromatic beam, which has an “angle” and a single wavelength, though we discussed before what it is like in reality) and the IORs of the media, can we calculate anything useful to display on screen? Well, with Snell’s law we can figure out the direction change of the light, but we don’t know how much its energy changes, and if the light is not monochromatic we don’t know which range of wavelengths gets reflected or refracted. The most important problem is that we receive light through our eyes, but not always directly from the light source: what we want to receive is the light “reflected” from various surfaces, rather than the light emitted straight from the sun or the bulbs!

So we actually need to handle this scenario: light is emitted from a source, meets some surfaces and changes, then somehow travels into our eyes. When the light “hits” a surface we can try to apply Snell’s law. But Snell’s law only describes how light changes at a medium interface in a very idealized situation; we don’t know what happens next. Since the conservation of energy still holds in this universe (up to the moment I wrote this line it remains valid), the reflected light must be weaker than the incident light, but by how much? Also, where does the refracted part go?

That requires further information about how light keeps traveling. As I mentioned before, a medium is made up of particles, so the size of the particles and the distances between them influence the light transmission. But since it’s impossible to calculate everything happening inside a medium, we choose to model only the region around the medium’s surface, which contributes far more to the light entering our eyes.

“…In rendering, we typically use geometrical optics, which ignores wave effects such as interference and diffraction. This is equivalent to assuming that all surface irregularities are either smaller than a light wavelength or much larger. …” - pg. 303, ”Real-Time Rendering”, 4th Edition.

“…surface irregularities much larger than a wavelength change the local orientation of the surface. When these irregularities are too small to be individually rendered—in other words, smaller than a pixel—we refer to them as microgeometry. …” - pg. 304, ”Real-Time Rendering”, 4th Edition.

“…For rendering, rather than modeling the microgeometry explicitly, we treat it statistically and view the surface as having a random distribution of microstructure normals. As a result, we model the surface as reflecting (and refracting) light in a continuous spread of directions. The width of this spread, and thus the blurriness of reflected and refracted detail, depends on the statistical variance of the microgeometry normal vectors—in other words, the surface microscale roughness. …” - pg. 304, ”Real-Time Rendering”, 4th Edition.

The book introduces the microgeometry theory, which balanced computational burden against the credibility of the result when it was adopted by the real-time rendering community. Since our screens have a limited number of discrete pixels, it’s meaningless and wasteful to calculate anything too accurately; instead we choose statistical models for a cheaper routine. You may have heard of statistical methods used in rendering before, such as the famous Monte Carlo method; here we follow a similar path to figure out the final light.

The Rendering Equation

Now we can introduce an almost accurate representation of the interaction between light and surfaces. For a single light of wavelength $\lambda$ at time $t$ that hits the surface over a unit area $A$ from a unit solid angle $\omega_i$, and is “changed” by the surface then finally “re-emitted” toward our eyes through a unit solid angle $\omega_o$, we can write: $L_o(A,\omega_o,\lambda,t) = f_r\, L_i(A,\omega_i,\lambda,t)$, which says the incident radiance is weighted by a factor $f_r$ that encodes the surface’s optical characteristics and then contributes to the outgoing radiance. We can then integrate over every possible incident direction in the hemisphere centered on the surface normal, and combine that with the light emitted by the surface itself, to get a more general formula. One summary formula for rendering in this context is The Rendering Equation: $L_o(p,\omega_o,\lambda,t) = L_e(p,\omega_o,\lambda,t) + \int\limits_{\Omega} f_r(p,\omega_i,\omega_o,\lambda,t)\, L_i(p,\omega_i,\lambda,t)\, (n \cdot \omega_i)\, d\omega_i$. Basically it is a transcript of what I described above, with the simplification that we ignore the surface area and shrink it to a point $p$.

Furthermore, we’d like to treat each light-surface interaction as an individual event in the time domain, so we can omit $t$. And because we will finally send RGB color data to the screen, a continuous spectral irradiance is fairly unnecessary, so we also cancel the spectrum by replacing it with three similar formulae of the same form, one per color channel; thus we can write just one of them as The Reflectance Equation: $L_o(p,\omega_o) = \int\limits_{\Omega} f_r(p,\omega_i,\omega_o)\, L_i(p,\omega_i)\, (n \cdot \omega_i)\, d\omega_i$.
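
For punctual lights the hemisphere integral collapses into a sum with one term per light. Here is a minimal GLSL sketch of my own (the struct, the array size, and the BRDF placeholder are assumptions, standing in for any $f_r$ model discussed below):

// Forward declaration: any f_r model from the sections below.
vec3 BRDF(vec3 N, vec3 V, vec3 L);

struct PunctualLight { vec3 direction; vec3 color; };

// The reflectance equation for punctual lights: the hemisphere integral
// degenerates into a sum of f_r * L_i * cos(theta_i), one term per light.
vec3 EvaluateReflectance(PunctualLight lights[4], int lightCount, vec3 N, vec3 V)
{
    vec3 Lo = vec3(0.0);
    for (int i = 0; i < lightCount; ++i)
    {
        vec3 L = normalize(-lights[i].direction);
        float NdotL = max(dot(N, L), 0.0);
        Lo += BRDF(N, V, L) * lights[i].color * NdotL;
    }
    return Lo;
}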

“…Local reflectance is quantified by the bidirectional reflectance distribution function (BRDF), denoted as $f(l, v)$. …” - pg. 310, ”Real-Time Rendering”, 4th Edition.

Now, if we know what the incident light is, we just need a weight $f_r$ that describes the complete optical characteristics of the medium, and we get the final result. This $f_r(p,\omega_i,\omega_o)$, as the book introduces, is called the BRDF; combined with the microgeometry method listed above, we finally have a chance to write some code. But first, let’s look back at the surface model used here: we say “surface”, not “interface”; “surface” indicates that we are inspecting a region around the “interface”, so it has more properties than what Snell’s law alone can describe. Let’s take a look:

Diagram

Since a medium won’t always absorb or scatter all of the refracted light inside itself (most non-conductors won’t, as there are too few free electrons inside to do the business), some of it leaves the medium and re-enters the previous medium; we call this phenomenon Subsurface Scattering. A typical practice is to split the BRDF into two parts: a reflection part as specular, and a local subsurface-scattering part as diffuse. I’ll use the notations $f_s$ and $f_d$ later.

Also, sometimes we need to calculate more general subsurface scattering, in which case we use the bidirectional scattering-surface reflectance distribution function (abbr. BSSRDF) instead; and for light that travels through the entire medium and leaves through another surface, it becomes the bidirectional transmittance distribution function (abbr. BTDF). Together these are called BxDFs.

To make a BxDF physically plausible, we need to satisfy 3 conditions:

  1. $f_r(\omega_i,\omega_o) \ge 0$: a BxDF never yields a negative weight;

  2. $f_r(\omega_i,\omega_o) = f_r(\omega_o,\omega_i)$: this is Helmholtz reciprocity; simply speaking, if we swap the incident and observation directions, the result stays the same;

  3. $\forall \omega_o,\ \int\limits_{\Omega} f_r(\omega_i,\omega_o)(n \cdot \omega_i)\, d\omega_i \le 1$: we must follow the energy conservation law; the integrated weight should never exceed 1, i.e. the relative outgoing light energy should never exceed the relative incoming light energy (see the worked check just below).
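
As a quick check of the third condition against the simplest BRDF we’ll meet below, the constant Lambertian $f_r = \frac{\rho}{\pi}$:

$\int\limits_{\Omega}\frac{\rho}{\pi}(n \cdot \omega_i)\, d\omega_i = \frac{\rho}{\pi}\int_0^{2\pi}\!\int_0^{\pi/2}\cos\theta\sin\theta\, d\theta\, d\phi = \frac{\rho}{\pi}\cdot\pi = \rho,$

so it conserves energy exactly when $\rho \le 1$.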

In practice there is a convenient way to evaluate whether a BxDF conserves energy, called the white furnace test; I’ll talk about it later.

BRDF

The BRDF can be thought of as $f_r(\omega_i,\omega_o) = \frac{dL_o(\omega_o)}{L_i(\omega_i)\cos\theta_i\, d\omega_i}$, which makes it possible to measure it in reality; MERL, for example, is one such BRDF database. We can also write the BRDF as $f_r(\omega_i,\omega_o) = f_d(\omega_i,\omega_o) + f_s(\omega_i,\omega_o)$, to indicate that we separate it into diffuse and specular parts and solve them independently.

Let’s use $n$ as the macro-surface normal vector, $m$ as the micro-surface normal vector, and $h = \frac{\omega_o + \omega_i}{\|\omega_o + \omega_i\|}$ as the normalized halfway vector between the view direction and the light direction.

Also, for the sake of discussing BRDFs more practically, I’d like to introduce the directional albedo, which measures how much of the light coming from a given direction is reflected at all, into any outgoing direction in the hemisphere around the surface normal: $R_s = \int\limits_{\Omega} f(l, v)(n \cdot s)\, ds$. If the BRDF is Helmholtz-reciprocal, we can substitute $s$ with $l$ or $v$ freely.
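
A brute-force way to inspect a directional albedo, and effectively to run the white furnace test mentioned above, is to integrate the BRDF numerically. Below is a rough GLSL sketch of my own with uniform hemisphere sampling (the hash-based sample generation and the BRDF stub are placeholder choices, not anyone’s production code):

// Forward declaration: the f_r under test, e.g. any model below.
vec3 BRDF(vec3 N, vec3 V, vec3 L);

const float PI = 3.14159265359;

// Cheap hash to generate sample points; a low-discrepancy sequence
// would behave better in practice.
vec2 hash2(float i)
{
    return fract(sin(vec2(i * 12.9898, i * 78.233)) * 43758.5453);
}

// Monte Carlo estimate of R(v) = integral of f(l, v) (n . l) dl over the
// hemisphere. Under uniform unit incoming radiance (a "white furnace"),
// an energy-conserving BRDF keeps this result <= 1.
float DirectionalAlbedo(vec3 N, vec3 V)
{
    const int SAMPLE_COUNT = 256;
    // build a tangent frame around N
    vec3 T = normalize(abs(N.x) < 0.9 ? cross(N, vec3(1.0, 0.0, 0.0))
                                      : cross(N, vec3(0.0, 1.0, 0.0)));
    vec3 B = cross(N, T);
    float sum = 0.0;
    for (int i = 0; i < SAMPLE_COUNT; ++i)
    {
        vec2 u = hash2(float(i) + 0.5);
        // uniform hemisphere sampling, pdf = 1 / (2 * PI)
        float cosTheta = u.x;
        float sinTheta = sqrt(max(0.0, 1.0 - cosTheta * cosTheta));
        float phi = 2.0 * PI * u.y;
        vec3 L = sinTheta * cos(phi) * T + sinTheta * sin(phi) * B + cosTheta * N;
        sum += BRDF(N, V, L).x * cosTheta * (2.0 * PI); // divide by the pdf
    }
    return sum / float(SAMPLE_COUNT);
}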

Diffuse part

Simple Lambert model

$f_{Lambert} = \frac{\rho}{\pi}$, with $\rho$ the “color” of the surface; strictly speaking it is the subsurface-scattering part of the surface response under a particular lighting circumstance. The $\pi$ comes from treating the surface as a Lambertian surface: the BRDF is constant regardless of view and light direction, so the cosine-weighted integral over the hemisphere yields exactly $\pi$, which the denominator cancels to keep the directional albedo at $\rho$.

Let’s visualize its directional albedo: (BRDF Explorer/Octave WIP)

This simple Lambert diffuse model ignores the micro-scale variation of the surface. Combined with Lambert’s cosine law we already get $L_o(\omega_o) = \int\limits_{\Omega}\frac{\rho}{\pi} L_i(\omega_i)\cos\theta_i\, d\omega_i$. For a single punctual light the integral collapses, and if we define the light’s intensity so that the remaining $\pi$ cancels against the BRDF’s (a common convention for punctual lights), we are left with $L_o(\omega_o) = \rho \cos\theta_i$: exactly what we learned first in real-time rendering class 101!

Cook-Torrance model

The lack of micro-scale detail in the simple Lambert model limits how realistic our results can get. Luckily, researchers and scientists have worked on this for decades, and we already have some advanced replacements; the one most commonly used in the real-time rendering community today is the microgeometry theory, which we’ll refer to as microfacet theory here.

R. L. Cook and K. E. Torrance [CT82] wrote a paper in the early ’80s that is the root of the most widely adopted microfacet model today. With the other nice references [Hei14][LR14] we can state the general Cook-Torrance model as $f_{CookTorrance}=\frac{1}{|n\cdot\omega_o||n\cdot\omega_i|}\int\limits_{\Omega} f_m(\omega_o,\omega_i,m)\, G(\omega_o,\omega_i,m)\, D(m,\alpha)\, \langle\omega_o\cdot m\rangle \langle\omega_i\cdot m\rangle\, dm$. It emphasizes that the macro BRDF is correlated with the micro BRDF and can be calculated by integrating over the microfacet normals $m$, with the additional weighting functions $G$ and $D$ keeping the micro-macro mapping relationship correct.

The $D$ function is called the distribution function or normal distribution function (abbr. NDF); it gives the spatial/statistical distribution of the micro normals $m$ around the macro normal $n$. The $\alpha$ here is a user-controlled variable describing how “rough” the surface is, so in practice we call it roughness or smoothness (typically we build a non-linear mapping between the user-controlled roughness and the actual $\alpha$). We use statistical functions in practice since that’s the only feasible way to compute this in real time; for how to map from a spatial function to a statistical one, I recommend reading [Hei14] for a detailed understanding.

The $G$ function is called the geometry function or masking-shadowing function, though strictly speaking we’d better call it $V$, the visibility function, since the geometry function is usually just one ingredient of the visibility function; in the literature the terms are often used interchangeably. It weights how the microfacets influence each other by masking along the view direction and shadowing along the incident light direction, and it should be derived consistently with the chosen $D$ function.

I’ll list some commonly used $D$ and $G$ functions below:

$D$ functions
Gaussian $D$ function [CT82]

$D_{Gaussian}=c\,e^{-(\alpha/m)^2}$

Beckmann $D$ function [CT82]

$D_{Beckmann}=\frac{1}{m^2\cos^4\alpha}\, e^{-(\tan\alpha/m)^2}$

For these two $D$ functions, $c$ is an optional scaling factor, $\alpha$ is the angle between $n$ and $h$, and $m$ is the RMS slope of the microfacets. Since they are computationally expensive, it’s rare to see them in real products, but we sometimes treat them as offline references.

Berry $D$ function [Bur12]

$D_{Berry}=\frac{c}{(n\cdot h)^2(\alpha^2-1)+1}$

GGX/Trowbridge-Reitz $D$ function [Bur12]

$D_{TR}=\frac{c}{((n\cdot h)^2(\alpha^2-1)+1)^2}$

Generalized GGX/Trowbridge-Reitz $D$ function [Bur12]

$D_{GTR}=\frac{c}{((n\cdot h)^2(\alpha^2-1)+1)^\gamma}$

For these three $D$ functions, $c$ is an optional scaling factor, $\alpha$ is the roughness, and $\gamma$ is an optional exponent. Choosing $\gamma=10$ gets fairly close to the Beckmann $D$ function. These are the most commonly used $D$ functions as far as I know, giving a “long-tailed” visual appearance. A GLSL transcription follows below.
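
Here is a rough GLSL transcription of the GTR family, a sketch of my own following the formulas above; note the normalization constant used is only exact for the GGX case $\gamma=2$ (the general constant is derived in [Bur12]):

const float PI = 3.14159265359;

// Generalized Trowbridge-Reitz distribution [Bur12].
// gamma = 1 -> Berry, gamma = 2 -> GGX/Trowbridge-Reitz.
float D_GTR(float NdotH, float alpha, float gamma)
{
    float a2 = alpha * alpha;
    float t = NdotH * NdotH * (a2 - 1.0) + 1.0;
    float c = a2 / PI; // normalization constant, exact for gamma = 2
    return c / pow(t, gamma);
}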

$G$ functions
Cook-Torrance $G$ function [CT82]

$G_{CookTorrance} = \min\left\{1,\ \frac{2(n\cdot h)(n\cdot\omega_o)}{\omega_o\cdot h},\ \frac{2(n\cdot h)(n\cdot\omega_i)}{\omega_o\cdot h}\right\}$

This one comes from the original paper itself, but I haven’t seen it in any production-level shader yet, since it can be replaced by other, more optimized versions.

Smith $G$ function [Smith67] [Hei14]

$G_{Smith}=\frac{\chi^+(\omega_o\cdot m)}{1 + \Lambda(\omega_o)}$

The original paper is not publicly available, so I list the derived version from a later reference. Here the numerator $\chi^+(u)$ is a Heaviside function: when $u>0$, $\chi^+(u)=1$, otherwise $\chi^+(u)=0$; it ensures sidedness. The $\Lambda$ function is an integral over the slopes of the microsurface, which gives the masking probability. So unless we provide a concrete $\Lambda$, this formula can’t be translated into a shader.

Schlick $G$ function [Sch94]

$G_{Schlick}=\frac{n\cdot\omega_o}{(n\cdot\omega_o)(1-k)+k}\cdot\frac{n\cdot\omega_i}{(n\cdot\omega_i)(1-k)+k}$

$k$ is derived from the user-controlled roughness; in practice we can remap roughness to $\alpha=\frac{Roughness+1}{2}$ [Bur12] and $k=\alpha^2/2$ [Kar13] to get a better non-linear response. The Schlick $G$ function approximates the Smith $G$ function over $[0,1]$, which is friendly for our application since its parameter requirements are modest.

Height-correlated Smith $G$ function [Hei14] [LR14]

$G_{CorrelatedSmith}=\frac{\chi^+(\omega_o\cdot h)\chi^+(\omega_i\cdot h)}{1+\Lambda(\omega_o)+\Lambda(\omega_i)}$

$\Lambda(m)=\frac{-1+\sqrt{1+\alpha^2\tan^2\theta_m}}{2}=\frac{-1+\sqrt{1+\frac{\alpha^2(1-\cos^2\theta_m)}{\cos^2\theta_m}}}{2}$ (this $\Lambda$ is for the GGX distribution). Because a microfacet masks and shadows at the same time, correlating the masking and shadowing parts with respect to height brings back some of the lost energy. The detailed derivation is math-heavy; I’d recommend reading the corresponding papers for further understanding. A GLSL transcription of this $\Lambda$ follows below.
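
Translating this $\Lambda$ into GLSL (a sketch of my own; compare with the Frostbite_V_SmithGGXCorrelated listing in the sample code further below, which folds the same terms into a visibility function):

// Smith Lambda for the GGX distribution, straight from the formula above;
// NdotS is cos(theta_m) for either the view or the light direction.
float Lambda_GGX(float NdotS, float alpha)
{
    float a2 = alpha * alpha;
    float cos2 = NdotS * NdotS;
    float tan2 = (1.0 - cos2) / max(cos2, 1e-5); // tan^2(theta_m)
    return 0.5 * (-1.0 + sqrt(1.0 + a2 * tan2));
}

// Height-correlated Smith G built from Lambda (the chi+ factors are
// handled by clamping the dot products to [0, 1] beforehand).
float G_SmithCorrelated(float NdotV, float NdotL, float alpha)
{
    return 1.0 / (1.0 + Lambda_GGX(NdotV, alpha) + Lambda_GGX(NdotL, alpha));
}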

Multiple-scattering Smith $G$ function [HHED16]

All the $G$ functions listed above handle only the single-scattering phenomenon, while in reality a rougher surface is more likely to bounce light between different microfacets, and it’s important to account for that part of the energy. The paper [HHED16] gives a stochastic-method-based ground truth, and later [IW17] gives an implementable compensation to approximate it; an alternative version of the general formula is given by the book as:

“… fms(l,v)=FRsF1π(1RsF1)(1F(1RsF1))(1RsF1(l))(1RsF1(v))f_{ms}(l, v) = \frac{\overline F \overline {Rs_{F_1}}}{\pi(1-\overline {Rs_{F_1}})(1-\overline F(1-\overline {Rs_{F_1}}))}(1-Rs_{F_1}(l))(1-Rs_{F_1}(v)) …” - pg. 346, ”Real-Time Rendering”, 4th Edition.

Let’s decompose this formula, starting from the Fresnel function $F$. As we discussed before, Snell’s law gives us the direction of the transmitted light and the ideal mirror reflection gives us the reflected direction, but we don’t know how much energy is reflected and how much is transmitted, which is quite an annoying problem. Luckily, it was already solved (under some restricted conditions) in the 19th century by the French scientist Augustin-Jean Fresnel, as the Fresnel equations. Long story short: since we only care about an idealized sandbox in our real-time approximations, we look for a good-enough Fresnel function to simulate the phenomenon. The Schlick Fresnel approximation is widely adopted nowadays; it has the form $F_{Schlick}=F_0 + (F_{90} - F_0)(1-\cos\theta)^5$. $F_0$ is the specular reflectance at normal incidence, i.e. when the view direction is parallel to the surface normal, $\omega_o \parallel n$; it is determined by the IORs of the two media. Similarly, $F_{90}$ is the reflectance at grazing incidence, $\omega_o \perp n$; some implementations assume $F_{90}$ is always 1, which covers most common conductor/dielectric material types. Finally, $\cos\theta = n\cdot\omega_o$ or $\cos\theta = n\cdot\omega_i$; another kind of cosine-weighted contribution appears here.
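
Since $F_0$ follows directly from the two IORs at normal incidence, a tiny helper function (my own, assuming a dielectric surrounded by air) makes the connection explicit:

// Specular reflectance at normal incidence from the two refractive
// indices: F0 = ((n1 - n2) / (n1 + n2))^2.
// Common dielectrics in air (n1 = 1.0) land around 0.02-0.05;
// e.g. glass with n2 = 1.5 gives F0 = 0.04.
float F0FromIOR(float n1, float n2)
{
    float r = (n1 - n2) / (n1 + n2);
    return r * r;
}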

The $\overline F$ here means the cosine-weighted average of $F$ over all incidence angles; if we use the Schlick Fresnel approximation mentioned above with $F_{90}=1$, it can simply be calculated as $\overline F = \frac{20}{21}F_0 + \frac{1}{21}$.
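
For the curious, the $\frac{1}{21}$ comes from the cosine-weighted average of the $(1-\cos\theta)^5$ term:

$\int_0^1 2\mu(1-\mu)^5\, d\mu = \frac{1}{21}, \qquad \overline F = \left(1-\frac{1}{21}\right)F_0 + \frac{1}{21}F_{90} = \frac{20}{21}F_0 + \frac{1}{21} \quad (F_{90}=1).$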

Then we move on to another term, $Rs_{F_1}$: it is the directional albedo of $f_{sF_1}$, the specular BRDF term with $F_0$ set to 1, which can be interpreted as how much a pure white surface reflects when illuminated by a unit directional light source; Stephen Hill wrote a nice blog series about it. Basically, once we have the single-scattering BRDF implemented, we can use a discrete numerical integration method (like importance sampling) to compute it and save it into a look-up table, much like the split-sum technique applied to IBL in [Kar13].
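
As a sketch of how this multiple-scattering term could be evaluated at runtime, assuming the directional albedo $Rs_{F_1}$ has been precomputed into a look-up texture (the LUT name, its layout, and the function itself are my own assumptions):

const float PI = 3.14159265359;

uniform sampler2D rsF1LUT; // assumed precomputed LUT: RsF1(cosTheta, roughness)

// Sketch of the multiple-scattering term from RTR4 pg. 346.
// FAvg and RsF1Avg are the averages discussed above, supplied per material.
vec3 MultiScatterSpecular(float NdotL, float NdotV, float roughness,
                          vec3 FAvg, float RsF1Avg)
{
    float rsL = texture(rsF1LUT, vec2(NdotL, roughness)).r;
    float rsV = texture(rsF1LUT, vec2(NdotV, roughness)).r;
    vec3 numerator = FAvg * RsF1Avg;
    vec3 denominator = PI * (1.0 - RsF1Avg) * (vec3(1.0) - FAvg * (1.0 - RsF1Avg));
    return numerator / denominator * (1.0 - rsL) * (1.0 - rsV);
}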

Using the simple Lambert model as the microfacet BRDF in the Cook-Torrance model [LR14]

$f_m(\omega_o,\omega_i,m) = f_{Lambert} = \frac{\rho}{\pi}$, and then $f_{CookTorrance}=\frac{\rho}{\pi}\frac{1}{|n\cdot\omega_o||n\cdot\omega_i|}\int\limits_{\Omega} G(\omega_o,\omega_i,m)\, D(m,\alpha)\max(0,\omega_o\cdot m)\max(0,\omega_i\cdot m)\, dm$

Still no analytical solution, but it gives a theoretical foundation for the problem we need to solve, and unifies everything inside one microsurface theory.

Oren-Nayar model [ON94]

$f_{OrenNayar}=\frac{\rho}{\pi}\left(A+B\,\max(0, \cos(\phi_i - \phi_o))\,\sin(\max(\theta_i, \theta_o))\,\tan(\min(\theta_i,\theta_o))\right)$, with $A=1-\frac{\alpha}{2\alpha+0.66}$, $B=\frac{0.45\,\alpha}{\alpha+0.09}$, and $\alpha$ the roughness ($\phi$ being the azimuthal and $\theta$ the polar angle of each direction). It’s an approximation of the general Cook-Torrance model; when $\alpha = 0$ we get the simple Lambert model back, so we can treat Oren-Nayar as a kind of generalization of the simple Lambert model.

Disney model [Bur12]

Another advanced Lambert-based diffuse model, which considers the Fresnel effect: $f_{Disney}=\frac{\rho}{\pi}\left(1+(F_{d90}-1)(1-(n \cdot \omega_i))^5\right)\left(1+(F_{d90}-1)(1-(n \cdot \omega_o))^5\right)$, or written as $f_{Disney}=\frac{\rho}{\pi}F_{Schlick}(1, F_{d90}, n, \omega_o)\cdot F_{Schlick}(1, F_{d90}, n, \omega_i)$, where $F_{d90}=0.5+2(h\cdot\omega_i)^2\alpha$ and $\alpha$ is the roughness.

Normalized Disney model [LR14]

For the sake of energy conservation, we can renormalize the original Disney model: $f_{normalizedDisney}=c\cdot\frac{\rho}{\pi}F_{Schlick}(1, F_{d90}, n, \omega_o)\cdot F_{Schlick}(1, F_{d90}, n, \omega_i)$. Here $c=\mathrm{lerp}\left(1, \frac{1}{1.51}, \alpha\right)=1-\frac{0.51}{1.51}\alpha$ is a scaling factor (spelled out here for comparison with the original version; it matches the energyFactor term in the Frostbite code below), and now $F_{d90}=0.5\alpha+2(h\cdot\omega_i)^2\alpha$ (the energyBias and retro-reflection terms below). $\alpha=Roughness^\gamma$; in practice [LR14] chooses $\gamma=2$, and finds that $\gamma=4$ gets close to the result in [Sch14], where $\alpha=(0.3+0.7\,Roughness)^6$.

Specular part

Phong model

fPhong=(rωo)αf_{Phong} = (r·\omega_o)^\alpha, rr is the reflection direction of ωi\omega_i, it’s the most famous and commonly used specular model in last few decades, and even programmed inside the graphics hardware, it needs the exponential factor α\alpha as the user-controlled parameter.

Normalized Phong model

fnormalizedPhong=c(rωo)αf_{normalizedPhong} = c·(r·\omega_o)^\alpha, c=α+12πc=\frac{\alpha+1}{2\pi}, actually Phong model gives a DD function in a microsurface view of point, here α\alpha thus could be thought as the roughness.

Blinn-Phong model

fBlinnPhong=(nh)αf_{Blinn-Phong} = (n·h)^\alpha, an optimization of Phong model, in practice if we choose mapping αblinnphong=4αphong\alpha_{blinn-phong}=4\alpha_{phong} then Blinn-Phong model would looks like Phong model [Wiki1].

Normalized Blinn-Phong model

fnormalizedBlinnPhong=c(nh)αf_{normalizedBlinn-Phong} = c·(n·h)^\alpha, c=α+24π(22α2)c=\frac{\alpha+2}{4\pi(2-2^{\frac{-\alpha}{2}})}.

Cook-Torrance model [CT82] [Hei14]

fcooktorrance=F(ωo,h,f0,f90)D(h,α)G(ωo,ωi,h)4nωonωif_{cook-torrance} = \frac{F(\omega_o, h , f_0, f_{90})D(h, \alpha)G(\omega_o, \omega_i, h)}{4|n·\omega_o||n·\omega_i|}, the new kids (popular from ~2012) in town! Everything we’ve talked before, the denominator is deduced from the Jacobian Matrix when we change the space from the microfacet space to macro and makes it’s quite elegant, we use the microfacet theory but calculate in macro!

Some sample code

a. Simple Lambert model + Blinn-Phong model

vec3 CalcDirectionalLight(dirLight light, vec3 normal, vec3 diffuse, vec3 specular, vec3 viewPos, vec3 fragPos)
{    
    vec3 N = normalize(normal);
    vec3 L = normalize(-light.direction);
    vec3 V = normalize(viewPos - fragPos);
    vec3 H = normalize(V + L);

    float NdotH = max(dot(N , H), 0.0);
    float NdotL = max(dot(N , L), 0.0);
    
    // ambient color
    vec3 ambientColor = diffuse * light.color * 0.04;

    // diffuse color
    vec3 diffuseColor = diffuse * NdotL * light.color;
    
    // specular color (Blinn-Phong lobe)
    float alpha = 32.0;
    vec3 specularColor = specular * pow(NdotH, alpha) * light.color;
    
    return (ambientColor + diffuseColor + specularColor);
}

b. Oren-Nayar model + Normalized Blinn-Phong model

// Oren-Nayar diffuse BRDF
// ----------------------------------------------------------------------------
float orenNayarDiffuse(float LdotV,  float NdotL, float NdotV, float roughness) 
{
  float s = LdotV - NdotL * NdotV;
  float t = mix(1.0, max(NdotL, NdotV), step(0.0, s));

  float sigma2 = roughness * roughness;
  float A = 1.0 - (0.5 * sigma2 / (sigma2 + 0.33));
  float B = 0.45 * sigma2 / (sigma2 + 0.09);

  return max(0.0, NdotL) * (A + B * s / t);
}

vec3 CalcDirectionalLight(dirLight light, vec3 normal, vec3 diffuse, vec3 specular, float roughness, vec3 viewPos, vec3 fragPos)
{    
    vec3 N = normalize(normal);
    vec3 L = normalize(-light.direction);
    vec3 V = normalize(viewPos - fragPos);
    vec3 H = normalize(V + L);
    float LdotV = max(dot(L , V), 0.0);
    float NdotH = max(dot(N , H), 0.0);
    float NdotL = max(dot(N , L), 0.0);
    float NdotV = max(dot(N , V), 0.0);

    // ambient color
    vec3 ambientColor = diffuse * light.color * 0.04;

    // diffuse color
    float Fd = orenNayarDiffuse(LdotV, NdotL, NdotV, roughness);
    vec3 diffuseColor = diffuse * Fd * light.color;
    
    // specular color (normalized Blinn-Phong)
    float alpha = 32.0;
    float normalizedScaleFactor = (alpha + 2.0) / (4.0 * PI * (2.0 - pow(2.0, -alpha / 2.0)));
    vec3 specularColor = specular * (1.0 - Fd) * pow(NdotH, alpha) * normalizedScaleFactor * light.color;
    
    return (ambientColor + diffuseColor + specularColor);
}

c. Normalized Disney model + Cook-Torrance (specular) model, using the $D_{TR}$ + $G_{CorrelatedSmith}$ + $F_{Schlick}$ combination

(reference: Frostbite engine [LR14])

// Frostbite Engine model
// ----------------------------------------------------------------------------
// Specular/Diffuse BRDF Fresnel Component
// ----------------------------------------------------------------------------
vec3 Frostbite_fresnelSchlick(vec3 f0, float f90, float u)
{
    return f0 + (f90 - f0) * pow(1.0 - u, 5.0);
}
// Diffuse BRDF
// ----------------------------------------------------------------------------
float Frostbite_DisneyDiffuse(float NdotV, float NdotL, float LdotH, float linearRoughness)
{    
    float energyBias = mix(0.0, 0.5, linearRoughness);
    float energyFactor = mix(1.0, 1.0 / 1.51, linearRoughness);
    float fd90 = energyBias + 2.0 * LdotH * LdotH * linearRoughness;
    vec3 f0 = vec3 (1.0, 1.0, 1.0);
    float lightScatter = Frostbite_fresnelSchlick(f0, fd90, NdotL).r;
    float viewScatter = Frostbite_fresnelSchlick(f0, fd90, NdotV).r;
    return lightScatter * viewScatter * energyFactor;
}
// Specular BRDF Geometry Component
// ----------------------------------------------------------------------------
float Frostbite_V_SmithGGXCorrelated(float NdotL , float NdotV , float alphaG)
{
    float alphaG2 = alphaG * alphaG;
    float Lambda_GGXV = NdotL * sqrt(NdotV * NdotV * (1.0 - alphaG2) + alphaG2);
    float Lambda_GGXL = NdotV * sqrt(NdotL * NdotL * (1.0 - alphaG2) + alphaG2);
    return 0.5 / max((Lambda_GGXV + Lambda_GGXL), 0.00001);
}
// Specular BRDF Distribution Component
// ----------------------------------------------------------------------------
float Frostbite_D_GGX(float NdotH , float roughness)
{
    // remapping to quadratic curve
    float m = roughness * roughness;
    float m2 = m * m;
    float f = (NdotH * m2 - NdotH) * NdotH + 1.0;
    // note: the 1/PI factor of the GGX NDF is applied at the end of
    // Frostbite_CalcDirectionalLightRadiance
    return m2 / (f * f);
}
// ----------------------------------------------------------------------------
vec3 Frostbite_CalcDirectionalLightRadiance(dirLight light, vec3 albedo, float metallic, float roughness, vec3 normal, vec3 viewPos, vec3 fragPos, vec3 F0)
{
    vec3 N = normalize(normal);
    vec3 L = normalize(-light.direction);
    vec3 V = normalize(viewPos - fragPos);
    vec3 H = normalize(V + L);

    float NdotV = max(dot(N , V), 0.0);
    float LdotH = max(dot(L , H), 0.0);
    float NdotH = max(dot(N , H), 0.0);
    float NdotL = max(dot(N , L), 0.0);

    // Specular BRDF
    float f90 = 1.0;
    vec3 F = Frostbite_fresnelSchlick(F0, f90, LdotH);
    float G = Frostbite_V_SmithGGXCorrelated(NdotV, NdotL, roughness);
    float D = Frostbite_D_GGX (NdotH, roughness);
    vec3 Fr = F * G * D;

    // Diffuse BRDF
    float Fd = Frostbite_DisneyDiffuse(NdotV, NdotL, LdotH ,roughness * roughness); 
    
    // the 1/PI of both the diffuse term and the GGX NDF is factored out here
    return (Fd * albedo + Fr) * light.color * NdotL / PI;
}

d. Simple Lambert model + Cook-Torrance (specular) model, using the $D_{TR}$ + $G_{Schlick}$ + $F_{Schlick}$ combination

(reference: Unreal Engine 4 [Kar13])

// Unreal Engine model
// ----------------------------------------------------------------------------
// Specular BRDF Distribution Component
// ----------------------------------------------------------------------------
float Unreal_DistributionGGX(float NdotH, float roughness)
{
    float a = roughness*roughness;
    // remapping to Quadratic curve
    float a2 = a * a;
    float NdotH2 = NdotH*NdotH;

    float nom   = a2;
    float denom = (NdotH2 * (a2 - 1.0) + 1.0);
    denom = denom * denom;

    return nom / denom;
}
// Specular BRDF Geometry Component
// ----------------------------------------------------------------------------
float Unreal_GeometrySchlickGGX(float NdotV, float roughness)
{
    float r = (roughness + 1.0);
    float k = (r*r) / 8.0;

    float nom   = NdotV;
    float denom = NdotV * (1.0 - k) + k;

    return nom / denom;
}
// ----------------------------------------------------------------------------
float Unreal_GeometrySmith(float NdotV, float NdotL, float roughness)
{
    float ggx2 = Unreal_GeometrySchlickGGX(NdotV, roughness);
    float ggx1 = Unreal_GeometrySchlickGGX(NdotL, roughness);

    return ggx1 * ggx2;
}
// Specular BRDF Fresnel Component
// ----------------------------------------------------------------------------
vec3 Unreal_fresnelSchlick(float cosTheta, vec3 F0)
{
    return F0 + (1.0 - F0) * pow(1.0 - cosTheta, 5.0);
}
// ----------------------------------------------------------------------------
vec3 Unreal_CalcDirectionalLightRadiance(dirLight light, vec3 albedo, float metallic, float roughness, vec3 normal, vec3 viewPos, vec3 fragPos, vec3 F0)
{
    vec3 N = normalize(normal);
    vec3 L = normalize(-light.direction);
    vec3 V = normalize(viewPos - fragPos);
    vec3 H = normalize(V + L);

    float NdotV = max(dot(N , V), 0.0);
    float NdotH = max(dot(N,  H), 0.0);
    float HdotV = max(dot(H , V), 0.0);
    float NdotL = max(dot(N , L), 0.0);
        
    // Specular BRDF
    vec3 F = Unreal_fresnelSchlick(HdotV, F0);
    // pass the precomputed dot products, not the raw vectors
    float G = Unreal_GeometrySmith(NdotV, NdotL, roughness);
    float D = Unreal_DistributionGGX(NdotH, roughness);

    vec3 numerator = D * G * F;
    float denominator = 4.0 * NdotV * NdotL;
    vec3 specular = numerator / max(denominator, 0.00001);
    
    // for energy conservation       
    vec3 kS = F;
    vec3 kD = vec3(1.0) - kS;  
    kD *= 1.0 - metallic;      
   
    // the 1/PI of both the Lambert term and the GGX NDF is factored out here
    return ((kD * albedo + specular) * light.color * NdotL) / PI;
}

To be continued.

Bibliography:

[Bur12] B. Burley. “Physically Based Shading at Disney”. In: Physically Based Shading in Film and Game Production, ACM SIGGRAPH 2012 Courses. SIGGRAPH ’12. Los Angeles, California: ACM, 2012, 10:1–7. isbn: 978-1-4503-1678-1. doi: 10.1145/2343483.2343493. url: http://selfshadow.com/publications/s2012-shading-course/.

[CT82] R. L. Cook and K. E. Torrance. “A Reflectance Model for Computer Graphics”. In: ACM Trans. Graph. 1.1 (Jan. 1982), pp. 7–24. issn: 0730-0301. doi: 10.1145/357290.357293. url: http://graphics.pixar.com/library/ReflectanceModel/.

[Hei14] E. Heitz. “Understanding the Masking-Shadowing Function in Microfacet-Based BRDFs”. In: Journal of Computer Graphics Techniques (JCGT) 3.2 (June 2014), pp. 32–91. issn: 2331-7418. url: http://jcgt.org/published/0003/02/03/.

[HHED16] E. Heitz, J. Hanika, E. d’Eon, and C. Dachsbacher. “Multiple-Scattering Microfacet BSDFs with the Smith Model”. In: ACM Transactions on Graphics 35.4 (July 2016). issn: 0730-0301. doi: 10.1145/2897824.2925943. url: https://eheitzresearch.wordpress.com/240-2/.

[HTSG91] Xiao D. He, Kenneth E. Torrance, François X. Sillion, and Donald P. Greenberg. “A Comprehensive Physical Model for Light Reflection”. In: ACM SIGGRAPH Computer Graphics 25.4 (July 1991), pp. 175-186. doi: 10.1145/127719.122738. url: https://www.graphics.cornell.edu/pubs/1991/HTSG91.pdf.
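
[IW17] C. Kulla and A. Conty. “Revisiting Physically Based Shading at Imageworks”. In: Physically Based Shading in Theory and Practice, ACM SIGGRAPH 2017 Courses. SIGGRAPH ’17. Los Angeles, California: ACM, 2017. url: https://blog.selfshadow.com/publications/s2017-shading-course/.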

[Kar13] B. Karis. “Real Shading in Unreal Engine 4”. In: Physically Based Shading in Theory and Practice, ACM SIGGRAPH 2013 Courses. SIGGRAPH ’13. Anaheim, California: ACM, 2013, 22:1–22:8. isbn: 978-1-4503-2339-0. doi: 10.1145/2504435.2504457. url: http://selfshadow.com/publications/s2013-shading-course/.

[LR14] S. Lagarde and C. de Rousiers. “Moving Frostbite to PBR”. In: Physically Based Shading in Theory and Practice, ACM SIGGRAPH 2014 Courses. SIGGRAPH ’14. Vancouver, Canada: ACM, 2014, 23:1–23:8. isbn: 978-1-4503-2962-0. doi: 10.1145/2614028.2615431. url: http://www.frostbite.com/2014/11/moving-frostbite-to-pbr/.
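
[ON94] M. Oren and S. K. Nayar. “Generalization of Lambert’s Reflectance Model”. In: Proceedings of SIGGRAPH ’94. ACM, 1994, pp. 239-246.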

[Sch94] C. Schlick. “An Inexpensive BRDF Model for Physically-based Rendering”. In: Computer Graphics Forum 13.3 (Sept. 1994), pp. 149-162. url: http://dept-info.labri.u-bordeaux.fr/~Schlick/DOC/eur2.html.

[Sch14] N. Schulz. “Moving to the Next Generation - The Rendering Technology of Ryse”. In: Game Developers Conference. 2014.
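
[Smith67] B. Smith. “Geometrical shadowing of a random rough surface”. In: IEEE Transactions on Antennas and Propagation 15.5 (Sept. 1967), pp. 668-671.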

[Wiki1] https://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_shading_model

Published Aug 12, 2018
