# Physically Based Rendering – Material

I learned the shading magic the way most people do: starting from the classic Blinn-Phong model and moving on to more complex ones like the Cook-Torrance model. But my understanding was always confined to the common practices shaped by slow hardware and our eagerness for ever more realistic results. With all these compromises, tricks, and hacks we can achieve lots of cool and amazing stuff at first, but without a clear bird's-eye view (I, for one, am always struggling among the formulae), we lose ourselves in shader code and strange artifacts, and can't find solutions without painful testing and doubting.

Then I was thinking: why not go back to the basics of physics and learn it again? Let's forget for a moment what I've already written, construct a shading pipeline from scratch following the physical interpretations, and then look for solutions that fit inside each 16 ms frame. Let's go back to the start.

## So, what is light?

“…In physical optics, light is modeled as an electromagnetic transverse wave, a wave
that oscillates the electric and magnetic fields perpendicularly to the direction of its
propagation…” – pg. 293, “Real-Time Rendering“, 4th Edition.

(The book has become my bible: a nice working reference, a balanced textbook, and a well-organized dictionary. I'll quote a lot from it in this ctrl-c + ctrl-v style post. Why not?)

Light, an electromagnetic wave whose properties are mostly shaped by its wavelength, is one of the fundamental entities of our universe, and all of our beautiful rendering work focuses on calculating its physical transport. But think about it this way: it only got the special name "light" because our eyes can receive it (a psychophysical phenomenon). If we stop treating it as special in the theoretical discussion, things may become easier and more general to handle.

Let's say the basic electromagnetic mechanism works for light too, with no exceptions (of course it does). Then we could model a general solution in the computer and feed it the wavelengths \lambda of the light we emit from some origins, plus the relevant properties of the objects that interrupt its propagation, to calculate the result (later we might also include the influence of the space they are in; for now let's work inside an ideal static vacuum). The classical physics of the electromagnetic mechanism is successfully described by Maxwell's equations, and with them we could figure out how the electromagnetic field propagates through space, which means reproducing the correct wave phenomena in our simulation box.

But unfortunately, we don't have the computational resources to run a field-propagation simulation in real time on a personal computer, and representing a non-analytical electromagnetic field requires at least a 3D data structure, so we must accept some compromises. Luckily, physicists have already narrowed the scope of the problem for us: instead of the original field-based methods, we can use radiometric methods that focus on the energy alone. This simplifies things to just two participants, the emitter and the receiver of the electromagnetic wave.

## A new measurement

“…Light waves are emitted when the electric charges in an object oscillate. Part of
the energy that caused the oscillations—heat, electrical energy, chemical energy—is
converted to light energy, which is radiated away from the object. …” – pg. 296, “Real-Time Rendering“, 4th Edition.

Then we arrive at Radiometry, which deals with the measurement of electromagnetic radiation. In common usage there are several radiometric quantities that measure radiant energy with respect to other basic physical quantities such as time, area, and solid angle. I'll list the ones most important to our rendering business below, with references from the book and Wikipedia.

| Quantity | Symbol | Unit | Notes |
| --- | --- | --- | --- |
| Radiant energy | {Q_e} | joule ({J}) | |
| Radiant flux | {\Phi_e} | joule per second, or watt ({\frac{J}{s}} or {W}) | also called Radiant power |
| Radiant intensity | {I_e} | watt per steradian ({\frac{W}{sr}}) | |
| Irradiance / Radiant exitance | {E_e} / {M_e} | watt per square metre ({\frac{W}{m^2}}) | flux density with respect to area |
| Radiance | {L_e} | watt per steradian per square metre ({\frac{W}{sr \cdot m^2}}) | |

We won't calculate Radiant energy directly: we usually deal with objects that have shape and events that span a period of time, so it's more convenient to divide those quantities out first. That means we'd rather use Radiant intensity, the flux densities, and Radiance.

To evaluate how a light source emits light or energy, we need to define what a light source is first. In this blog post I'll only discuss the ideal punctual light source, which is infinitely small, the same as a point in space. I'll also idealize it as omnidirectional, meaning the radiation doesn't vary across different solid angles.

We can deduce the definition of Radiant intensity first: imagine a unit sphere surrounding the point light source; each steradian carries some energy per second, so I_e = \frac{d\Phi}{d\omega}.

Similarly, we get the flux density (called Irradiance when we measure light arriving at a surface, and Radiant exitance when we emphasize emitting rather than receiving, which assumes an area light source): M_e = \frac{d\Phi}{dA}.

And then Radiance is L_e = \frac{d^2\Phi}{d\omega\, dA \cos\theta}. We now need to consider the angle between the surface normal and the direction of the solid angle, because the effective area is not the original unit area but its projection (A\cos\theta is called the projected area), so we add a cosine. This kind of cosine-weighted distribution is called Lambert's cosine law.
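To make the definitions concrete, here is a quick numeric sketch (the function names are my own): for an ideal omnidirectional point light, integrating I_e over the full sphere of 4\pi steradians gives \Phi = 4\pi I, and the irradiance at distance r falls off as I/r^2.

```python
import math

# For an idealized omnidirectional point light, the radiant intensity is
# constant over the full sphere (4*pi steradians), so Phi = 4*pi * I.
def radiant_intensity(radiant_flux):
    return radiant_flux / (4.0 * math.pi)

# Irradiance at distance r follows the inverse-square law: E = I / r^2.
def irradiance_at(radiant_flux, distance):
    return radiant_intensity(radiant_flux) / (distance * distance)
```

This is also where the familiar inverse-square falloff of punctual lights comes from: the same flux spreads over a sphere whose area grows as r^2.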

## Through time and space, once and forever?

“…The oscillating electrical field pushes and pulls at the electrical
charges in the matter, causing them to oscillate in turn. The oscillating charges emit
new light waves, which redirect some of the energy of the incoming light wave in new
directions. This reaction, called scattering, is the basis of a wide variety of optical
phenomena. …” – pg. 297, “Real-Time Rendering“, 4th Edition.

If we emit any light/energy in a vacuum, it never changes direction or loses energy unless it meets some obstruction (molecules or dust floating in space, giant gas planets like Saturn, or our bodies and eyes); then it gets absorbed or scattered depending on the characteristics of the obstruction. Actually, this is all we need to care about: the obstruction IS exactly the receiver! If we can model the receiver, we can solve the problem. But again, our limited computational resources don't allow simulating every molecule; instead, we have to abstract them with some macro-scale rules. We call the single or grouped trivially-shaped objects that influence wave propagation particles, and the volume filled with particles a medium.

We live on Earth, where air and water are the most common noticeable media, and they have inspired aesthetic creation for thousands of years. To measure how a medium influences light propagation, we need to define a ratio between the original light and the affected light.

“…The ratio of the phase velocities of the original and new waves defines an optical
property of the medium called the index of refraction (IOR) or refractive index,
denoted by the letter n. Some media are absorptive. They convert part of the light
energy to heat, causing the wave amplitude to decrease exponentially with distance.
The rate of decrease is defined by the attenuation index, denoted by the Greek letter
\kappa (kappa). Both n and \kappa typically vary by wavelength. Together, these two numbers
fully define how the medium affects light of a given wavelength, and they are often
combined into a single complex number n + i \kappa, called the complex index of refraction. …” – pg. 298, “Real-Time Rendering“, 4th Edition.

As the book introduced, we use the complex IOR to represent the characteristics of a medium, but in local-illumination practice we often only care about the real part n, since attenuation matters mostly for conductors and we typically assume air is the originating medium (the complex IOR also matters in volume rendering, a topic I won't cover here). Then we can simply get the IOR as n = \frac{c}{v}, where c is the speed of light in vacuum and v is the speed of light in the medium. For a further detailed discussion of the complex IOR, I recommend this post by Sébastien Lagarde, which covers all the other possible medium interfaces.

And then we have a physical law called Snell's law, which relates the incident angle to the refracted angle via the IORs of the two media: \sin(\theta_t) = \frac{n_1}{n_2}\sin(\theta_i), where \theta_i is the angle between the interface normal and the incident light direction, and \theta_t is the angle between the inverse of the interface normal and the refracted/transmitted light direction. We denote the index of refraction on the "outside" (the side where the incoming, or incident, wave originates) as n_1 and the index of refraction on the "inside" (where the wave will be transmitted after passing through the surface) as n_2.
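A minimal sketch of Snell's law in Python (function names are mine), including the total-internal-reflection case where \frac{n_1}{n_2}\sin\theta_i > 1 and no transmitted direction exists:

```python
import math

# Snell's law: returns the refracted angle in radians, or None when
# total internal reflection occurs (no transmitted wave).
def refract_angle(theta_i, n1, n2):
    s = (n1 / n2) * math.sin(theta_i)
    if s > 1.0:
        return None  # total internal reflection
    return math.asin(s)
```

For example, light entering water (n_2 ≈ 1.33) from air at 45° bends to roughly 32°, while light trying to leave water at a grazing 80° is totally internally reflected.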

Now, if we know the angles (here we abstract light as a single monochromatic beam, which has an "angle" and a single wavelength, although we've already discussed what the real situation looks like) and the IORs of the media, can we calculate anything useful to display on the screen? Well, with Snell's law we can figure out the direction change of the light, but not how much the energy changes; and if the light is not monochromatic, we don't know which range of wavelengths gets reflected or refracted. The most important problem is that we receive light through our eyes, but not always directly from the light source: what we want to receive is the light "reflected" from various surfaces, rather than the light emitted directly by the sun or a bulb!

So actually we need to handle this scenario: light is emitted from a source, meets some surfaces and changes, then somehow travels into our eyes. When the light "hits" a surface we can try to apply Snell's law. But Snell's law only says how light changes at the medium interface in a very idealized situation; we don't know what happens next. Since conservation of energy always holds in this universe (at the moment of writing it is still valid), the reflected light must be weaker than the incident light, but by how much? And where does the refracted part go?

That requires further information about how the light keeps travelling. As I mentioned before, a medium is made of particles, so the size of the particles and the distances between them influence light transmission. But since it's impossible to calculate everything that happens inside the medium, we choose to model only the region around the surface of the medium, which contributes much more to the light that reaches our eyes.

“…In rendering, we typically use geometrical optics, which ignores wave effects such as
interference and diffraction. This is equivalent to assuming that all surface irregularities
are either smaller than a light wavelength or much larger. …” – pg. 303, “Real-Time Rendering“, 4th Edition.

“…surface irregularities much larger than a wavelength
change the local orientation of the surface. When these irregularities are too small
to be individually rendered—in other words, smaller than a pixel—we refer to them
as microgeometry. …” – pg. 304, “Real-Time Rendering“, 4th Edition.

“…For rendering, rather than modeling the microgeometry explicitly, we treat it statistically
and view the surface as having a random distribution of microstructure
normals. As a result, we model the surface as reflecting (and refracting) light in a
continuous spread of directions. The width of this spread, and thus the blurriness of
reflected and refracted detail, depends on the statistical variance of the microgeometry
normal vectors—in other words, the surface microscale roughness. …” – pg. 304, “Real-Time Rendering“, 4th Edition.

The book introduces the microgeometry theory, which balances computational cost against believable results and was adopted by the real-time rendering community. Since our screens have a limited number of discrete pixels, it's meaningless and wasteful to calculate anything too accurately; instead we choose some statistical models for a cheaper routine. You may have heard of statistical methods in rendering before, like the famous Monte Carlo method; here we'll follow a similar path to figure out the final light.

## The Rendering Equation

Now we can introduce an almost accurate representation of the interaction between light and surfaces. For a single light of wavelength \lambda at time t that hits the surface over a unit area A from a unit solid angle \omega_i, is "changed" by the surface, and is finally "re-emitted" towards our eyes through a unit solid angle \omega_o, we can write L_o(A,\omega_o,\lambda,t) = f_r L_i(A,\omega_i,\lambda,t): the incident radiance is weighted by a factor f_r that encodes the surface's optical characteristics. Then we can integrate over every possible incident direction on the hemisphere centered at the surface normal, and add the light emitted by the surface itself, to get a more general formula. The summary formula in this context is The Rendering Equation, L_o(p,\omega_o,\lambda,t) = L_e(p,\omega_o,\lambda,t) + \int\limits_{\Omega} f_r(p,\omega_i,\omega_o,\lambda,t) L_i(p,\omega_i,\lambda,t) (n \cdot \omega_i) d\omega_i. It is basically a transcript of what I described above, with the simplification that we ignore the surface area and shrink it to a point p.

Furthermore, we'd like to treat each light-surface interaction as an independent event in time, so we can drop t. And because we ultimately send RGB color data to the screen, a continuous spectral radiance is unnecessary; we cancel the spectrum by replacing it with three similar formulae of the same form, one per color channel, so we just write one of them as The Reflectance Equation: L_o(p,\omega_o) = \int\limits_{\Omega} f_r(p,\omega_i,\omega_o) L_i(p,\omega_i) (n \cdot \omega_i) d\omega_i.

“…Local reflectance
is quantified by the bidirectional reflectance distribution function (BRDF), denoted
as f(l, v). …” – pg. 310, “Real-Time Rendering“, 4th Edition.

Now, if we know the incident light, we just need to supply an f_r weight describing the full optical characteristics of the medium and we get the final result. This f_r(p,\omega_i,\omega_o), as the book introduced, is called the BRDF; combined with the microgeometry method listed above, we finally have a chance to write some code. But first, let's look back at the surface model used here. We say "surface", not "interface": "surface" indicates we're inspecting a region around the interface, so it has more properties than Snell's law can describe. Let's take a look:

Since a medium won't always absorb and scatter all of the refracted light internally (for example, most non-conductors won't, since there are too few free electrons inside to do the job), some of it leaves the medium and re-enters the previous one; we call this phenomenon Subsurface Scattering. A typical practice is to separate the BRDF into two parts: a reflection part, the specular, and a local subsurface-scattering part, the diffuse. I'll use the notation f_s and f_d later.

Also, when we need to calculate more general subsurface scattering, we use the bidirectional scattering-surface reflectance distribution function (abbr. BSSRDF) instead, and for light that travels through the entire medium and leaves from another surface it becomes the bidirectional transmittance distribution function (abbr. BTDF). Together they are called BxDFs.

To make a BxDF physically correct, we need to satisfy 3 conditions:

1. f_r(\omega_i,\omega_o) \ge 0: a BxDF never yields a negative weight;

2. f_r(\omega_i,\omega_o) = f_r(\omega_o,\omega_i): this is called Helmholtz reciprocity; simply speaking, if we swap the incident and observation directions the result must stay the same;

3. \forall \omega_o, \int\limits_{\Omega}f_r(\omega_i,\omega_o)(n \cdot \omega_i) d\omega_i \le 1: this is the energy conservation law; the weighted integral should never exceed 1, i.e. the relative outgoing light energy should never exceed the relative incoming light energy.

In practice there is a convenient way to evaluate whether a BxDF conserves energy, called the White furnace test; I'll talk about it later.
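As a small taste of that idea, here is a Monte Carlo sketch (not a full white furnace test; function names are mine) that checks condition 3 for the simple Lambert BRDF f_r = \rho/\pi, using uniform hemisphere sampling with pdf 1/(2\pi):

```python
import math, random

# Monte Carlo estimate of the directional albedo of the Lambert BRDF
# f_r = rho / pi: sample directions uniformly over the hemisphere
# (pdf = 1 / (2*pi); cos(theta) is then uniform in [0, 1]) and average
# f_r * cos(theta) / pdf. The result should converge to rho <= 1.
def directional_albedo_lambert(rho, samples=100000, seed=7):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        cos_theta = rng.random()
        total += (rho / math.pi) * cos_theta / (1.0 / (2.0 * math.pi))
    return total / samples
```

For any albedo \rho \le 1 the estimate stays below 1, so the Lambert BRDF passes the energy-conservation condition.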

## BRDF

The BRDF can be thought of as f_r(\omega_i,\omega_o) = \frac{dL_o(\omega_o)}{L_i(\omega_i)\cos\theta_i d\omega_i}, which makes it possible to measure it in the real world; MERL, for example, is one such measured-BRDF database. We can also write the BRDF as f_r(\omega_i,\omega_o) = f_d(\omega_i,\omega_o) + f_s(\omega_i,\omega_o), to indicate that we separate it into diffuse and specular parts and solve them independently.

Let's use n as the macro surface normal vector and m as the micro surface normal vector, and h = \frac{\omega_o + \omega_i}{||\omega_o + \omega_i||} as the normalized halfway vector between the view direction and the light direction.

Also, to discuss the BRDF more practically, I'd like to introduce the Directional albedo, which measures how much of the light coming from a given direction is reflected at all, into any outgoing direction in the hemisphere around the surface normal: R(s) = \int\limits_{\Omega}f(l, v)(n·s)ds. If the BRDF is Helmholtz-reciprocal we can substitute s with l or v freely.

### Diffuse part

#### Simple Lambert model

f_{Lambert} = \frac{\rho} {\pi}, with \rho as the "color" of the surface; strictly speaking it is the subsurface-scattering part of the surface response under a particular lighting circumstance. The \pi comes from treating the surface as a Lambertian surface: the BRDF doesn't change with the view or light direction, so integrating the cosine term over the hemisphere yields \pi, which we divide out.

Let’s visualize its directional albedo: (BRDF Explorer/Octave WIP)

This simple Lambert diffuse model ignores the micro-scale surface variation. Combined with Lambert's cosine law we already get L_o(\omega_o) = \int\limits_{\Omega}\frac{\rho} {\pi}L_i(\omega_i)\cos\theta_i d\omega_i. For a punctual light, the usual convention of folding a factor of \pi into the light's intensity cancels the \pi in the BRDF, so we end up with L_o(\omega_o) = \rho \cos\theta_i, exactly what we learned first in the real-time rendering class 101!

#### Cook-Torrance model

The lack of micro-scale detail in the simple Lambert model limits how realistic we can get. Luckily, researchers and scientists have worked on this for decades and we already have some advanced replacements; the one most commonly used today in the real-time rendering community is microgeometry theory, which we'll refer to as microfacet theory here.

R. L. Cook and K. E. Torrance [CT82] wrote a paper in the 80s which is the root of the most widely adopted microfacet model today. With the other nice references [Hei14][LR14] we can state the general Cook-Torrance model as f_{Cook-Torrance}=\frac{1}{|n·\omega_o||n·\omega_i|}\int\limits_{\Omega}f_m(\omega_o,\omega_i,m)G(\omega_o,\omega_i,m) D(m,\alpha)\langle \omega_o·m\rangle \langle \omega_i·m\rangle dm. It emphasizes that the macro BRDF is correlated with the micro BRDF and can be calculated by integrating over the microfacet normals m, with the additional weighting functions G and D keeping the micro-to-macro mapping correct.

The D function, called the Distribution function or Normal Distribution Function (abbr. NDF), gives the statistical distribution of the micro normals m around the macro normal n. The \alpha here is a user-controlled variable describing how "rough" the surface is, so in practice we call it roughness or smoothness (typically we build a non-linear mapping between the user-facing roughness and the actual \alpha). We use statistical functions in practice since that's the only feasible way to compute in real time; for how to map from a spatial description to a statistical one, I recommend reading [Hei14] for a detailed understanding.

The G function is called the Geometry function or Masking-shadowing function; strictly speaking we'd better call the combined term V, the Visibility function, since the Geometry function is usually one ingredient of the Visibility function, although the two names are often used interchangeably. It weights how the microfacets occlude each other: masking along the view direction and shadowing along the incident light direction. It should be derived consistently with the D function we chose.

I'll list some commonly used D and G functions below:

##### D function
###### Gaussian D function [CT82]

D_{Gaussian}=ce^{-(\alpha/m)^2}

###### Beckmann D function [CT82]

D_{Beckmann}=\frac{1}{m^2\cos^4\alpha}e^{-[ \,(\tan\alpha)/m ] \,^2}

For these two D functions, c is an optional scaling factor, \alpha is the angle between n and h, and m is the RMS slope of the microfacets. Since they are computationally expensive, it's rare to see them in real products, but we sometimes treat them as offline references.

###### Berry D function [Bur12]

D_{Berry}=\frac{c}{((n·h)^2(\alpha^2-1)+1)}

###### GGX/Trowbridge-Reitz D function [Bur12]

D_{TR}=\frac{c}{((n·h)^2(\alpha^2-1)+1)^2}

###### Generalized GGX/Trowbridge-Reitz D function [Bur12]

D_{GTR}=\frac{c}{((n·h)^2(\alpha^2-1)+1)^\gamma}

For these three D functions, c is an optional scaling factor, \alpha is the roughness, and \gamma is an optional exponent. Choosing \gamma=10 gets fairly close to the Beckmann D function. These are the most commonly used D functions as far as I know, and they give a "long-tailed" visual appearance.
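As an illustration, the GGX/Trowbridge-Reitz D can be sketched like this (a Python sketch; choosing the scaling factor c=\alpha^2/\pi, the common normalization that makes the NDF integrate to 1 against the projected area):

```python
import math

# GGX / Trowbridge-Reitz NDF with c = alpha^2 / pi.
# n_dot_h is the cosine between the macro normal n and the half vector h.
def d_ggx(n_dot_h, alpha):
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)
```

Note the limit behavior: at \alpha=1 the distribution is a constant 1/\pi (fully rough), while for small \alpha the lobe concentrates sharply around n·h = 1.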

##### G function
###### Cook-Torrance G function [CT82]

G_{Cook-Torrance} = \min\left(1,\ \frac{2(n·h)(n·\omega_o)}{\omega_o·h},\ \frac{2(n·h)(n·\omega_i)}{\omega_o·h}\right)

This one comes from the original paper itself, but I haven't seen it in any production-level shader yet, since other, more optimized versions can be derived from it.

###### Smith G function [Smith67] [Hei14]

G_{Smith}=\frac{\chi^+(\omega_o·\omega_m)}{1 + \Lambda(\omega_o)}

The original paper is not publicly available, so I list the derived version from a later reference. Here the numerator \chi^+(u) is the Heaviside function: \chi^+(u)=1 when u>0, otherwise \chi^+(u)=0, which ensures sidedness; the \Lambda() function is an integral over the slopes of the microsurface, which gives the masking probability. So unless we provide a concrete \Lambda() function, this formula can't be translated into a shader.

###### Schlick G function [Sch94]

G_{Schlick}=\frac{n·\omega_o}{(n·\omega_o)(1-k)+k}·\frac{n·\omega_i}{(n·\omega_i)(1-k)+k}

k is derived from the user-controlled roughness; in practice we can remap \alpha=\frac{Roughness+1}{2} [Bur12], k=\alpha^2/2 [Kar13] to get a better non-linear response. The Schlick G function is an approximation of the Smith G function within [0,1], which is friendly for our application because its parameter requirements are acceptable.
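A small sketch of the Schlick G function with this remapping (function names are my own):

```python
# One Schlick G1 term: n_dot_x is n.v for masking or n.l for shadowing.
def g1_schlick(n_dot_x, k):
    return n_dot_x / (n_dot_x * (1.0 - k) + k)

# Full Schlick G with the [Bur12]/[Kar13] remapping of roughness to k.
def g_schlick(n_dot_v, n_dot_l, roughness):
    alpha = (roughness + 1.0) / 2.0
    k = alpha * alpha / 2.0
    return g1_schlick(n_dot_v, k) * g1_schlick(n_dot_l, k)
```

At normal incidence (n·v = n·l = 1) there is no masking or shadowing and G evaluates to exactly 1, as expected.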

###### Height correlated Smith G function [Hei14] [LR14]

G_{CorrelatedSchlick}=\frac{\chi^+(\omega_o·h)\chi^+(\omega_i·h)}{1+\Lambda(\omega_o)+\Lambda(\omega_i)}

\Lambda(m)=\frac{-1+\sqrt{1+\alpha^2\tan^2(\theta_m)}}{2}=\frac{-1+\sqrt{1+\frac{\alpha^2(1-\cos^2(\theta_m))}{\cos^2(\theta_m)}}}{2}. Because the microfacets mask and shadow at the same time, correlating the masking and shadowing terms with respect to height brings some of the lost energy back. The detailed derivation is math-heavy; I recommend reading the corresponding papers for further understanding.
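Since this \Lambda has a closed form for the GGX distribution, the height-correlated G is shader-ready. A small Python sketch (names are mine; the dot products are assumed strictly positive):

```python
import math

# Smith Lambda for GGX, from the tan^2 closed form above.
# cos_theta must be > 0 (back-facing directions are rejected elsewhere).
def lambda_ggx(cos_theta, alpha):
    cos2 = cos_theta * cos_theta
    tan2 = (1.0 - cos2) / cos2
    return (-1.0 + math.sqrt(1.0 + alpha * alpha * tan2)) / 2.0

# Height-correlated Smith G: masking and shadowing share one denominator.
def g_height_correlated(n_dot_v, n_dot_l, alpha):
    return 1.0 / (1.0 + lambda_ggx(n_dot_v, alpha) + lambda_ggx(n_dot_l, alpha))
```

Compare with the uncorrelated Smith form G1(v)·G1(l) = 1/((1+\Lambda(v))(1+\Lambda(l))): sharing a single denominator makes G larger, which is exactly the recovered energy.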

###### Multi-scattering Smith G function [HHED16]

All of the G functions I listed above only account for single scattering, while in reality a rougher surface is more likely to bounce light between different microfacets multiple times, and it's important to account for that part of the energy. The paper [HHED16] gives a stochastic-method-based ground truth, and later [IW17] gives an implementable compensation to achieve it; an alternative version of the general formula is given by the book as:

“… f_{ms}(l, v) = \frac{\overline F \overline {Rs_{F_1}}}{\pi(1-\overline {Rs_{F_1}})(1-\overline F(1-\overline {Rs_{F_1}}))}(1-Rs_{F_1}(l))(1-Rs_{F_1}(v)) …” – pg. 346, “Real-Time Rendering“, 4th Edition.

Let's decompose this formula, starting with the Fresnel function F. As we discussed before, Snell's law gives us the direction of the transmitted light and ideal mirror reflection gives the reflected direction, but we don't know how much energy is reflected and how much is transmitted, which is quite an annoying problem. Luckily it was already solved (under some restricted conditions) in the 19th century by the French scientist Augustin-Jean Fresnel, as the Fresnel equations. Long story short, since we only care about an idealized sandbox in our real-time approximations, we look for a good-enough Fresnel function to simulate this phenomenon. The Schlick Fresnel approximation is widely adopted nowadays: F_{Schlick}=F_0 + (F_{90} - F_0)(1-\cos\theta)^5, where F_0 is the specular reflectance at normal incidence, i.e. when the view direction is parallel to the surface normal (\omega_o \parallel n); similarly, F_{90} is the reflectance at grazing incidence (\omega_o \perp n). Many implementations assume F_{90} is always 1, which covers most common conductor/dielectric material types. Finally, \cos\theta=n·\omega_o or \cos\theta=n·\omega_i; another kind of cosine-weighted contribution appears here.

The \overline F here means the cosine-weighted average of F over all angles; if we use the Schlick Fresnel approximation mentioned above, it works out to \overline F = \frac{20}{21}F_0 + \frac{1}{21}.
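We can sanity-check that closed form numerically; a midpoint-rule sketch of \overline F = 2\int_0^1 F(\mu)\,\mu\, d\mu with F_{90}=1 (function names are mine):

```python
# Schlick Fresnel with F90 = 1.
def f_schlick(f0, cos_theta):
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Cosine-weighted average of F over the hemisphere,
# F_avg = 2 * Int_0^1 F(mu) * mu dmu, via the midpoint rule.
def f_average(f0, steps=100000):
    total = 0.0
    for i in range(steps):
        mu = (i + 0.5) / steps
        total += 2.0 * f_schlick(f0, mu) * mu
    return total / steps
```

The integral is analytic: 2\int_0^1 \mu(1-\mu)^5 d\mu = \frac{1}{21}, which is exactly where the \frac{20}{21}F_0 + \frac{1}{21} comes from.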

Then we'd move on to the other average, \overline{Rs_{F_1}}. Rs_{F_1} is the directional albedo of the specular term f_s with F_0 set to 1, which can be interpreted as the reflectance of a pure white surface illuminated by a unit directional light source; Stephen Hill wrote a nice blog series about it. Basically, once we have a single-scattering BRDF implementation, we can use a discrete numerical integration method (like importance sampling) to compute it and save it to a look-up table, much like the split-sum technique for IBL in [Kar13].

#### Use Simple Lambert model as the microfacet BRDF in Cook-Torrance model [LR14]

f_m(\omega_o,\omega_i,m) = f_{Lambert} = \frac{\rho} {\pi} and then f_{cook-torrance}=\frac{\rho}{\pi}\frac{1}{|n·\omega_o||n·\omega_i|}\int\limits_{\Omega}G(v,l,m) D(m,\alpha)\max(0,\omega_o·m)\max(0,\omega_i·m)dm

Still no analytical solution, but it gives a theoretical foundation for the problem we need to solve, and unifies everything inside one microsurface theory.

#### Oren-Nayar model [ON94]

f_{Oren-Nayar}=\frac{\rho} {\pi}\big(A+B\,\max(0, \cos (\phi_i - \phi_o))\,\sin(\max(\theta_i, \theta_o))\,\tan(\min(\theta_i,\theta_o))\big)
A=1-\frac{\alpha}{2\alpha+0.66}, B=0.45\frac{\alpha}{\alpha+0.09}, where \theta_i, \theta_o are the polar angles and \phi_i, \phi_o the azimuthal angles of the incident and outgoing directions, and \alpha is the roughness. It's an approximation of the general Cook-Torrance model; when \alpha = 0 we get back the simple Lambert model, so we can treat the Oren-Nayar model as a generalization of the simple Lambert model.

#### Disney model [Bur12]

Another advanced Lambert-based diffuse model, which accounts for the Fresnel effect: f_{Disney}=\frac{\rho} {\pi}(1+(F_{d90}-1)(1-(n · \omega_i))^5)(1+(F_{d90}-1)(1-(n · \omega_o))^5),
or written as f_{Disney}=\frac{\rho} {\pi}F_{Schlick}(1, F_{d90}, n, \omega_o)·F_{Schlick}(1, F_{d90}, n, \omega_i), where F_{d90}=0.5+2(h·\omega_i)^2\alpha and \alpha is the roughness.

#### Normalized Disney model [LR14]

For the sake of energy conservation, we can renormalize the original Disney model into [0,1]: f_{normalizedDisney}=c·\frac{\rho} {\pi}F_{Schlick}(1, F_{d90}, n, \omega_o)·F_{Schlick}(1, F_{d90}, n, \omega_i), where c=\frac{1}{1.51}+\frac{0.51}{1.51}\alpha is a scaling factor (I derived it in this form to better compare with the original version), and now F_{d90}=0.5+(2(h·\omega_i)^2-0.5)\alpha, with \alpha=Roughness^\gamma. In practice the original paper chooses \gamma=2, and finds that \gamma=4 comes close to the result in [Sch14], where \alpha=(0.3+0.7\,Roughness)^6.

### Specular part

#### Phong model

f_{Phong} = (r·\omega_o)^\alpha, where r is the reflection of \omega_i about the normal. It has been the most famous and commonly used specular model of the last few decades, even hard-wired into graphics hardware; it takes the exponent \alpha as the user-controlled parameter.

#### Normalized Phong model

f_{normalizedPhong} = c·(r·\omega_o)^\alpha, with c=\frac{\alpha+1}{2\pi}. Actually, the Phong model gives a D function from a microsurface point of view, so \alpha here can be thought of as encoding the roughness (the higher \alpha, the smoother the surface).
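We can verify the constant numerically: the Phong lobe integral over the hemisphere around r is \int_\Omega \cos^\alpha\theta\, d\omega = \frac{2\pi}{\alpha+1}, so multiplying it by c should give 1. A midpoint-rule sketch (function names are mine):

```python
import math

# Integrate cos(theta)^alpha over the hemisphere around r:
# Int_0^{2pi} Int_0^{pi/2} cos^alpha(theta) sin(theta) dtheta dphi.
# The azimuthal integral contributes the 2*pi factor.
def phong_lobe_integral(alpha, steps=100000):
    total = 0.0
    for i in range(steps):
        theta = (i + 0.5) / steps * (math.pi / 2.0)
        total += math.cos(theta) ** alpha * math.sin(theta)
    return 2.0 * math.pi * total * (math.pi / 2.0) / steps
```

With \alpha=32, for example, the integral is 2\pi/33, and scaling by c=(\alpha+1)/(2\pi) brings the lobe back to unit total weight.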

#### Blinn-Phong model

f_{Blinn-Phong} = (n·h)^\alpha, an optimization of the Phong model; in practice, choosing the mapping \alpha_{Blinn-Phong}=4\alpha_{Phong} makes the Blinn-Phong model look close to the Phong model [Wiki1].

#### Normalized Blinn-Phong model

f_{normalizedBlinn-Phong} = c·(n·h)^\alpha, c=\frac{\alpha+2}{4\pi(2-2^{\frac{-\alpha}{2}})}.

#### Cook-Torrance model [CT82] [Hei14]

f_{Cook-Torrance} = \frac{F(\omega_o, h, f_0, f_{90})\,D(h, \alpha)\,G(\omega_o, \omega_i, h)}{4|n·\omega_o||n·\omega_i|}: the new kid in town (popular since ~2012)! It combines everything we've discussed; the denominator comes from the Jacobian of the change of variables from microfacet space to macro space, and it's quite elegant: we use microfacet theory but compute at the macro scale.
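Putting the pieces together at a single point, here is a hedged sketch using the GGX D, the height-correlated Smith G, and the Schlick F (all names, and the c=\alpha^2/\pi normalization for D, are my own choices; the dot products are assumed strictly positive):

```python
import math

# Cook-Torrance specular BRDF evaluated at one set of dot products,
# with GGX D, height-correlated Smith G, and Schlick F (F90 = 1) inlined.
def cook_torrance_specular(n_dot_v, n_dot_l, n_dot_h, v_dot_h, f0, alpha):
    a2 = alpha * alpha
    # GGX / Trowbridge-Reitz NDF with c = alpha^2 / pi
    d_denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    d = a2 / (math.pi * d_denom * d_denom)
    # height-correlated Smith G via the GGX Lambda closed form
    def lam(c):
        return (-1.0 + math.sqrt(1.0 + a2 * (1.0 - c * c) / (c * c))) / 2.0
    g = 1.0 / (1.0 + lam(n_dot_v) + lam(n_dot_l))
    # Schlick Fresnel with F90 = 1
    f = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5
    return f * d * g / (4.0 * n_dot_v * n_dot_l)
```

At normal incidence everything collapses: G = 1, F = F_0, D = 1/(\pi\alpha^2), so the BRDF value is F_0/(4\pi\alpha^2), which shows directly how the specular peak sharpens as roughness decreases.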

### Some sample code

a. Simple Lambert model + Blinn-Phong model

vec3 CalcDirectionalLight(dirLight light, vec3 normal, vec3 diffuse, vec3 specular, vec3 viewPos, vec3 fragPos)
{
    vec3 N = normalize(normal);
    vec3 L = normalize(-light.direction);
    vec3 V = normalize(viewPos - fragPos);
    vec3 H = normalize(V + L);

    float NdotH = max(dot(N, H), 0.0);
    float NdotL = max(dot(N, L), 0.0);

    // ambient color
    vec3 ambientColor = diffuse * light.color * 0.04;

    // diffuse color (simple Lambert)
    vec3 diffuseColor = diffuse * NdotL * light.color;

    // specular color (Blinn-Phong)
    float alpha = 32.0;
    vec3 specularColor = specular * pow(NdotH, alpha) * light.color;

    return (ambientColor + diffuseColor + specularColor);
}


b. Oren-Nayar model + Normalized Blinn-Phong model

// Oren-Nayar diffuse BRDF
// ----------------------------------------------------------------------------
float orenNayarDiffuse(float LdotV, float NdotL, float NdotV, float roughness)
{
    float s = LdotV - NdotL * NdotV;
    float t = mix(1.0, max(NdotL, NdotV), step(0.0, s));

    float sigma2 = roughness * roughness;
    float A = 1.0 - (0.5 * sigma2 / (sigma2 + 0.33));
    float B = 0.45 * sigma2 / (sigma2 + 0.09);

    return max(0.0, NdotL) * (A + B * s / t);
}

vec3 CalcDirectionalLight(dirLight light, vec3 normal, vec3 diffuse, vec3 specular, float roughness, vec3 viewPos, vec3 fragPos)
{
    const float PI = 3.14159265359;

    vec3 N = normalize(normal);
    vec3 L = normalize(-light.direction);
    vec3 V = normalize(viewPos - fragPos);
    vec3 H = normalize(V + L);
    float LdotV = max(dot(L, V), 0.0);
    float NdotH = max(dot(N, H), 0.0);
    float NdotL = max(dot(N, L), 0.0);
    float NdotV = max(dot(N, V), 0.0);

    // ambient color
    vec3 ambientColor = diffuse * light.color * 0.04;

    // diffuse color (Oren-Nayar)
    float Fd = orenNayarDiffuse(LdotV, NdotL, NdotV, roughness);
    vec3 diffuseColor = diffuse * Fd * light.color;

    // specular color (normalized Blinn-Phong)
    float alpha = 32.0;
    float normalizedScaleFactor = (alpha + 2.0) / (4.0 * PI * (2.0 - pow(2.0, -alpha / 2.0)));
    vec3 specularColor = specular * (1.0 - Fd) * pow(NdotH, alpha) * normalizedScaleFactor * light.color;

    return (ambientColor + diffuseColor + specularColor);
}


c. Normalized Disney model + Cook-Torrance (specular) model, using the D_{TR}+G_{SmithGGXCorrelated}+F_{Schlick} combination

(reference: Frostbite Engine [LR14])

```glsl
// Frostbite Engine model
// ----------------------------------------------------------------------------
// Specular/Diffuse BRDF Fresnel Component
// ----------------------------------------------------------------------------
vec3 Frostbite_fresnelSchlick(vec3 f0, float f90, float u)
{
    return f0 + (f90 - f0) * pow(1.0 - u, 5.0);
}
// Diffuse BRDF
// ----------------------------------------------------------------------------
float Frostbite_DisneyDiffuse(float NdotV, float NdotL, float LdotH, float linearRoughness)
{
    float energyBias = mix(0.0, 0.5, linearRoughness);
    float energyFactor = mix(1.0, 1.0 / 1.51, linearRoughness);
    float fd90 = energyBias + 2.0 * LdotH * LdotH * linearRoughness;
    vec3 f0 = vec3(1.0, 1.0, 1.0);
    float lightScatter = Frostbite_fresnelSchlick(f0, fd90, NdotL).r;
    float viewScatter = Frostbite_fresnelSchlick(f0, fd90, NdotV).r;
    return lightScatter * viewScatter * energyFactor;
}
// Specular BRDF Geometry Component
// ----------------------------------------------------------------------------
float Frostbite_V_SmithGGXCorrelated(float NdotL, float NdotV, float alphaG)
{
    float alphaG2 = alphaG * alphaG;
    float Lambda_GGXV = NdotL * sqrt(NdotV * NdotV * (1.0 - alphaG2) + alphaG2);
    float Lambda_GGXL = NdotV * sqrt(NdotL * NdotL * (1.0 - alphaG2) + alphaG2);
    return 0.5 / max(Lambda_GGXV + Lambda_GGXL, 0.00001);
}
// Specular BRDF Distribution Component
// ----------------------------------------------------------------------------
float Frostbite_D_GGX(float NdotH, float roughness)
{
    // the 1.0 / PI factor is applied once at the end of the radiance function
    float m = roughness * roughness;
    float m2 = m * m;
    float f = (NdotH * m2 - NdotH) * NdotH + 1.0;
    return m2 / (f * f);
}
// ----------------------------------------------------------------------------
vec3 Frostbite_CalcDirectionalLightRadiance(dirLight light, vec3 albedo, float metallic, float roughness, vec3 normal, vec3 viewPos, vec3 fragPos, vec3 F0)
{
    vec3 N = normalize(normal);
    vec3 L = normalize(-light.direction);
    vec3 V = normalize(viewPos - fragPos);
    vec3 H = normalize(V + L);

    float NdotV = max(dot(N, V), 0.0);
    float LdotH = max(dot(L, H), 0.0);
    float NdotH = max(dot(N, H), 0.0);
    float NdotL = max(dot(N, L), 0.0);

    // Specular BRDF
    float f90 = 1.0;
    vec3 F = Frostbite_fresnelSchlick(F0, f90, LdotH);
    float G = Frostbite_V_SmithGGXCorrelated(NdotL, NdotV, roughness);
    float D = Frostbite_D_GGX(NdotH, roughness);
    vec3 Fr = F * G * D;

    // Diffuse BRDF
    float Fd = Frostbite_DisneyDiffuse(NdotV, NdotL, LdotH, roughness * roughness);

    return (Fd * albedo + Fr) * light.color * NdotL / PI;
}
```

d. Simple Lambert model + Cook-Torrance (specular) model, using the D_{TR}+G_{Schlick}+F_{Schlick} combination

(reference: Unreal Engine 4 [Kar13])

```glsl
// Unreal Engine model
// ----------------------------------------------------------------------------
// Specular BRDF Distribution Component
// ----------------------------------------------------------------------------
float Unreal_DistributionGGX(float NdotH, float roughness)
{
    float a = roughness * roughness;
    float a2 = a * a;
    float NdotH2 = NdotH * NdotH;

    float nom = a2;
    float denom = NdotH2 * (a2 - 1.0) + 1.0;
    denom = denom * denom;

    return nom / denom;
}
// Specular BRDF Geometry Component
// ----------------------------------------------------------------------------
float Unreal_GeometrySchlickGGX(float NdotV, float roughness)
{
    float r = roughness + 1.0;
    float k = (r * r) / 8.0;

    float nom = NdotV;
    float denom = NdotV * (1.0 - k) + k;

    return nom / denom;
}
// ----------------------------------------------------------------------------
float Unreal_GeometrySmith(float NdotV, float NdotL, float roughness)
{
    float ggx2 = Unreal_GeometrySchlickGGX(NdotV, roughness);
    float ggx1 = Unreal_GeometrySchlickGGX(NdotL, roughness);

    return ggx1 * ggx2;
}
// Specular BRDF Fresnel Component
// ----------------------------------------------------------------------------
vec3 Unreal_fresnelSchlick(float cosTheta, vec3 F0)
{
    return F0 + (1.0 - F0) * pow(1.0 - cosTheta, 5.0);
}
// ----------------------------------------------------------------------------
vec3 Unreal_CalcDirectionalLightRadiance(dirLight light, vec3 albedo, float metallic, float roughness, vec3 normal, vec3 viewPos, vec3 fragPos, vec3 F0)
{
    vec3 N = normalize(normal);
    vec3 L = normalize(-light.direction);
    vec3 V = normalize(viewPos - fragPos);
    vec3 H = normalize(V + L);

    float NdotV = max(dot(N, V), 0.0);
    float NdotH = max(dot(N, H), 0.0);
    float HdotV = max(dot(H, V), 0.0);
    float NdotL = max(dot(N, L), 0.0);

    // Specular BRDF
    vec3 F = Unreal_fresnelSchlick(HdotV, F0);
    float G = Unreal_GeometrySmith(NdotV, NdotL, roughness);
    float D = Unreal_DistributionGGX(NdotH, roughness);

    vec3 numerator = D * G * F;
    float denominator = 4.0 * NdotV * NdotL;
    vec3 specular = numerator / max(denominator, 0.00001);

    // for energy conservation
    vec3 kS = F;
    vec3 kD = vec3(1.0) - kS;
    kD *= 1.0 - metallic;

    // the 1.0 / PI of both the Lambert diffuse term and the (here
    // un-normalized) GGX distribution is applied once at the end
    return ((kD * albedo + specular) * light.color * NdotL) / PI;
}
```

To be continued.

Bibliography:

[Bur12] B. Burley. “Physically Based Shading at Disney”. In: Physically Based Shading in Film and Game Production, ACM SIGGRAPH 2012 Courses. SIGGRAPH ’12. Los Angeles, California: ACM, 2012, 10:1–7. isbn: 978-1-4503-1678-1. doi: 10.1145/2343483.2343493. url: http://selfshadow.com/publications/s2012-shading-course/.

[CT82] R. L. Cook and K. E. Torrance. “A Reflectance Model for Computer Graphics”. In: ACM Trans. Graph. 1.1 (Jan. 1982), pp. 7–24. issn: 0730-0301. doi: 10.1145/357290.357293. url: http://graphics.pixar.com/library/ReflectanceModel/.

[Hei14] E. Heitz. “Understanding the Masking-Shadowing Function in Microfacet-Based BRDFs”. In: Journal of Computer Graphics Techniques (JCGT) 3.2 (June 2014), pp. 32–91. issn: 2331-7418. url: http://jcgt.org/published/0003/02/03/.

[HHED16] E. Heitz, J. Hanika, E. d’Eon, and C. Dachsbacher. “Multiple-Scattering Microfacet BSDFs with the Smith Model”. In: ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2016) 35.4 (July 2016). issn: 0730-0301. doi: 10.1145/2897824.2925943. url: https://eheitzresearch.wordpress.com/240-2/.

[HTSG91] X. D. He, K. E. Torrance, F. X. Sillion, and D. P. Greenberg. “A Comprehensive Physical Model for Light Reflection”. In: ACM SIGGRAPH Computer Graphics 25.4 (July 1991), pp. 175–186. New York, NY: ACM. doi: 10.1145/127719.122738. url: https://www.graphics.cornell.edu/pubs/1991/HTSG91.pdf.

[Kar13] B. Karis. “Real Shading in Unreal Engine 4”. In: Physically Based Shading in Theory and Practice, ACM SIGGRAPH 2013 Courses. SIGGRAPH ’13. Anaheim, California: ACM, 2013, 22:1–22:8. isbn: 978-1-4503-2339-0. doi: 10.1145/2504435.2504457. url: http://selfshadow.com/publications/s2013-shading-course/.

[LR14] S. Lagarde and C. de Rousiers. “Moving Frostbite to PBR”. In: Physically Based Shading in Theory and Practice, ACM SIGGRAPH 2014 Courses. SIGGRAPH ’14. Vancouver, Canada: ACM, 2014, 23:1–23:8. isbn: 978-1-4503-2962-0. doi: 10.1145/2614028.2615431. url: http://www.frostbite.com/2014/11/moving-frostbite-to-pbr/.

[Sch94] C. Schlick. “An Inexpensive BRDF Model for Physically-based Rendering”. In: Computer Graphics Forum 13.3 (Sept. 1994), pp. 149–162. url: http://dept-info.labri.u-bordeaux.fr/~Schlick/DOC/eur2.html.

[Sch14] N. Schulz. “Moving to the Next Generation – The Rendering Technology of Ryse”. In: Game Developers Conference. 2014.