I learned the shading magic the way most others did: starting from the classic Blinn-Phong model and moving on to more complex ones like the Cook-Torrance model. But my understanding was always confined to the common practices shaped by our slow computers and our eagerness for more realistic results. With these compensations, tricks, and hacks we can achieve lots of cool and amazing things at first, but without a clear bird's-eye view (I, for one, am always struggling among the formulae), we easily lose ourselves in shader code and strange artifacts, unable to find solutions without painful testing and doubting.
So I thought: why not implement and learn things again from basic physics? Let's forget for a moment what I've already written and construct a shading pipeline from scratch, following the physical interpretations, then look for solutions that fit inside each 16 ms frame. Let's go back to the start.
“…In physical optics, light is modeled as an electromagnetic transverse wave, a wave
that oscillates the electric and magnetic fields perpendicularly to the direction of its propagation…” - pg. 293, ”Real-Time Rendering”, 4th Edition.
(The book has become my bible: a nice working reference, a balanced textbook, and a well-organized dictionary. I'll quote a lot from it in this ctrl-c + ctrl-v style post. Why not?)
Light, a kind of electromagnetic wave whose properties are mostly shaped by its wavelength, is one of the fundamental entities in our universe, and all of our beautiful rendering work ultimately focuses on calculating its physical transport. But think about it this way: it only got the special name "light" because our eyes can receive it (a psychophysical phenomenon). If we stop treating it as special in our theoretical discussion, things may become easier and more general to handle.
Let's say the basic electromagnetic mechanism works for light too, with no exceptions (of course it does). Then we could model a general solution in the computer, feed it with the wavelengths we emit from some origins plus the relevant properties of the objects that obstruct the propagation, and compute the result (later this may also include the influence of the space they occupy; for now let's work inside an ideal static vacuum). The classical physics of electromagnetism is successfully described by Maxwell's equations, and with them we could figure out how the electromagnetic field propagates through space, and thus obtain the correct wave phenomena inside our simulation box.
Unfortunately, we don't have the computational resources to run a field-propagation simulation in real time on a personal computer, and representing a non-analytical electromagnetic field requires at least a 3D data structure. We must make some compromises. Luckily, physicists have already simplified the scope of the problem for us: instead of the original field-based methods, we can use radiative methods that focus on the energy alone, which reduces the problem to considering only two things, the emitter and the receiver of the electromagnetic wave.
“…Light waves are emitted when the electric charges in an object oscillate. Part of
the energy that caused the oscillations—heat, electrical energy, chemical energy—is converted to light energy, which is radiated away from the object. …” - pg. 296, ”Real-Time Rendering”, 4th Edition.
Then we arrive at Radiometry, which deals with the measurement of electromagnetic radiation. For common usage there are several radiometric quantities that measure the radiated energy with respect to other basic physical quantities such as time, distance, area, and solid angle. I'll list the ones most important to our rendering business below, with references from the book and Wikipedia.
| Quantity | Symbol | Unit | Unit symbol | Notes |
|---|---|---|---|---|
| Radiant energy | $Q$ | joule | $\mathrm{J}$ | Energy of electromagnetic radiation |
| Radiant flux | $\Phi$ | joule per second or watt | $\mathrm{J/s}$ or $\mathrm{W}$ | Also called radiant power |
| Radiant intensity | $I$ | watt per steradian | $\mathrm{W/sr}$ | The steradian is the solid-angle analogue of the planar angle in 2D |
| Irradiance (flux density) | $E$ | watt per square metre | $\mathrm{W/m^2}$ | "Ir-radiance": the "ir-" prefix indicates radiation received by a surface |
| Radiance | $L$ | watt per steradian per square metre | $\mathrm{W/(sr \cdot m^2)}$ | |
We won't directly calculate radiant energy: we usually deal with objects that have a shape and with events that happen over a period of time, so it's more convenient to cancel those quantities out first. That means we'd better work with radiant intensity, irradiance, and radiance.
To evaluate how a light source emits light/energy, we first need to define what a light source is. In this blog post I'll only discuss the ideal punctual light source, which has an infinitely small shape, the same as a point in space. I'll also idealize it with an omnidirectional radiation characteristic, meaning the radiation doesn't vary across different solid angles.
We can deduce the definition of radiant intensity first: imagine a unit sphere surrounding the point light source; through each unit solid angle, some amount of energy is emitted per second, so $I = \frac{d\Phi}{d\omega}$.
Similarly, we get irradiance (or radiant exitance, if we want to emphasize emitting rather than receiving, which assumes an area light source): $E = \frac{d\Phi}{dA}$.
And then radiance is $L = \frac{d^2\Phi}{d\omega \, dA \cos\theta}$. Here we need to consider the angle $\theta$ between the surface normal and the solid-angle direction, because the actual effective area is not the same as the original unit area; it is projected ($dA \cos\theta$ is called the projected area), so we add a cosine to it. This kind of cosine-weighted distribution is called Lambert's cosine law.
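To make these quantities concrete, here is a minimal sketch of how a punctual light's radiometric quantities chain together in a shader. The `pointLight` struct and the choice to specify the light by its total flux are my assumptions, not from the book: for an omnidirectional point light, the intensity is $I = \Phi / 4\pi$, and the irradiance arriving at a surface at distance $r$ follows the inverse-square law weighted by Lambert's cosine.
// Hypothetical punctual light described by its total radiant flux (watts).
struct pointLight
{
    vec3 position;
    vec3 flux; // radiant flux per RGB channel
};
const float PI = 3.14159265359;
// Irradiance received at a surface point from an ideal omnidirectional point light.
vec3 PunctualLightIrradiance(pointLight light, vec3 fragPos, vec3 N)
{
    vec3 toLight = light.position - fragPos;
    float r2 = dot(toLight, toLight);     // squared distance
    vec3 L = normalize(toLight);
    vec3 I = light.flux / (4.0 * PI);     // radiant intensity: flux spread over the full sphere
    float cosTheta = max(dot(N, L), 0.0); // Lambert's cosine law (projected area)
    return I * cosTheta / r2;             // inverse-square falloff
}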
“…The oscillating electrical field pushes and pulls at the electrical
charges in the matter, causing them to oscillate in turn. The oscillating charges emit new light waves, which redirect some of the energy of the incoming light wave in new directions. This reaction, called scattering, is the basis of a wide variety of optical phenomena. …” - pg. 297, ”Real-Time Rendering”, 4th Edition.
If we emit light/energy in a vacuum, it never changes direction or energy unless it meets some obstruction (some molecules or small dust floating in space, a giant gas planet like Saturn, or our body/eyes), where it gets absorbed or scattered according to the characteristics of the obstruction. Actually, this is all we need to care about: the obstruction IS exactly the receiver! If we can model the receiver, we can solve the problem. But again, our limited computational resources don't allow us to simulate every molecule; instead, we have to abstract them with some macro-scale rules. We call these single or grouped trivially-shaped objects that influence the wave propagation particles, and the volume filled with particles a medium.
We live on Earth, where air and water are the most common noticeable media, and they have inspired aesthetic creation for thousands of years. To measure how a medium influences light propagation, we need to define a kind of ratio between the original light and the affected light.
“…The ratio of the phase velocities of the original and new waves defines an optical property of the medium called the index of refraction (IOR) or refractive index, denoted by the letter $n$. Some media are absorptive. They convert part of the light energy to heat, causing the wave amplitude to decrease exponentially with distance. The rate of decrease is defined by the attenuation index, denoted by the Greek letter $\kappa$ (kappa). Both $n$ and $\kappa$ typically vary by wavelength. Together, these two numbers fully define how the medium affects light of a given wavelength, and they are often combined into a single complex number $n + i\kappa$, called the complex index of refraction. …” - pg. 298, ”Real-Time Rendering”, 4th Edition.
As the book introduces, we use the complex IOR to represent the characteristics of a medium, but in the practice of local illumination we often only care about the real part $n$, since the attenuation matters mostly for conductive media, and we typically assume the air is our originating medium (or consider it in the volume-rendering business, which I won't cover here). Then we can simply get the IOR as $n = c/v$, where $c$ is the speed of light in vacuum and $v$ is the speed of light in the medium; for example, light travels at roughly $0.75c$ in water, giving $n \approx 1.33$. For a further detailed discussion of the complex IOR, I recommend this post by Sébastien Lagarde, where he covers all the other possible medium interfaces.
Then we have a physical law called Snell's law, which relates the incident angle to the refracted angle through the IORs of the two media: $n_1 \sin\theta_i = n_2 \sin\theta_t$, where $\theta_i$ is the angle between the interface normal and the incident light direction, and $\theta_t$ is the angle between the inverse of the interface normal and the refracted/transmitted light direction. We denote the index of refraction on the "outside" (the side where the incoming, or incident, wave originates) as $n_1$ and the index of refraction on the "inside" (where the wave will be transmitted after passing through the surface) as $n_2$.
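GLSL already ships Snell's law as the built-in `refract()`, which takes the ratio $\eta = n_1 / n_2$; a minimal sketch (the air/water constants are just example values):
// Refract a view ray entering water from air using Snell's law.
// refract() expects the incident vector pointing toward the surface
// and eta = n1 / n2; it returns vec3(0.0) on total internal reflection.
vec3 RefractIntoWater(vec3 incident, vec3 normal)
{
    const float n1 = 1.0;  // air
    const float n2 = 1.33; // water (example value)
    return refract(normalize(incident), normalize(normal), n1 / n2);
}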
Now, if we know the angles (here we abstract the light as a single monochromatic beam, which has an "angle" and a single wavelength, although we've discussed before how it behaves in reality) and the IORs of the media, can we calculate anything useful to display on the screen? With Snell's law we can figure out the direction change of the light, but not how much the energy changes; and if the light is not monochromatic, we don't know which range of wavelengths gets reflected or refracted. The most important problem is that we receive light through our eyes, yet not always directly from the light source: what we want to receive is the light "reflected" from different surfaces, rather than the light emitted straight from the sun or a bulb!
So we actually need to handle this scenario: light is emitted from a source, meets some surfaces and changes, then somehow travels into our eyes. When the light "hits" a surface we can try to apply Snell's law. But Snell's law only describes how light changes at the medium interface in a very idealized situation; we don't know what happens next. Since the conservation of energy always holds in this universe (at the moment I write this line it's still valid), the reflected light must be weaker than the incident light, but by how much? And where does the refracted part go?
That requires further information about how the light keeps traveling. As mentioned before, a medium is made of particles, so the size of the particles and the distances between them should influence the light transport. Since it's impossible to calculate everything that happens inside the medium, we choose to model only the region around the surface of the medium, which contributes much more to the light that reaches our eyes.
“…In rendering, we typically use geometrical optics, which ignores wave effects such as
interference and diffraction. This is equivalent to assuming that all surface irregularities are either smaller than a light wavelength or much larger. …” - pg. 303, ”Real-Time Rendering”, 4th Edition.
“…surface irregularities much larger than a wavelength
change the local orientation of the surface. When these irregularities are too small to be individually rendered—in other words, smaller than a pixel—we refer to them as microgeometry. …” - pg. 304, ”Real-Time Rendering”, 4th Edition.
“…For rendering, rather than modeling the microgeometry explicitly, we treat it statistically
and view the surface as having a random distribution of microstructure normals. As a result, we model the surface as reflecting (and refracting) light in a continuous spread of directions. The width of this spread, and thus the blurriness of reflected and refracted detail, depends on the statistical variance of the microgeometry normal vectors—in other words, the surface microscale roughness. …” - pg. 304, ”Real-Time Rendering”, 4th Edition.
The book introduces the microgeometry theory, which balances the computational burden against the credibility of the result, and has been adopted by the real-time rendering community. Since our screens have a limited number of discrete pixels, it's meaningless and wasteful to calculate anything too accurately; instead we choose statistical models as a cheaper route. You may have heard of statistical methods used in rendering before, like the famous Monte Carlo method; here we follow a similar path to figure out the final light.
Now we can introduce an almost accurate representation of the interaction between light and surfaces. For a single light of wavelength $\lambda$ at time $t$ that hits the surface on a unit area from a unit solid angle $\omega_i$, is "changed" by the surface, and is finally "re-emitted" toward our eyes through a unit solid angle $\omega_o$, we can write: $dL_o(\omega_o) = f(\omega_i, \omega_o) \, L_i(\omega_i) \cos\theta_i \, d\omega_i$, which says the incident radiance is weighted by a factor $f$ that encodes the surface's optical characteristics before contributing to the outgoing radiance. Then we can integrate over every possible incident direction on the hemisphere centered at the surface normal, and combine it with the light emitted by the surface itself, to get a more general formula. One summarizing formula for rendering in this context is the Rendering Equation,
$$L_o(p, \omega_o) = L_e(p, \omega_o) + \int_{\Omega} f(p, \omega_i, \omega_o) \, L_i(p, \omega_i) \, (\mathbf{n} \cdot \omega_i) \, d\omega_i$$
which is basically a transcript of what I said above, with the simplification that we ignore the surface area and shrink it to a point $p$.
Furthermore, we'd like to treat each light-surface interaction as an individual event in the time domain, so we can omit $t$. And because we finally send RGB color data to the screen, a continuous spectral radiance is unnecessary: we can also cancel the spectrum $\lambda$ by replacing the formula with three similar ones of the same form, one per color channel, and just write a single one as the Reflectance Equation:
$$L_o(p, \mathbf{v}) = \int_{\Omega} f(\mathbf{l}, \mathbf{v}) \, L_i(p, \mathbf{l}) \, (\mathbf{n} \cdot \mathbf{l}) \, d\mathbf{l}$$
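For punctual lights the integral collapses into a sum over the light directions, which is what a real-time shader actually evaluates. A minimal sketch, assuming a hypothetical `BRDF()` function standing in for the $f(\mathbf{l}, \mathbf{v})$ weighting factor, plus a `dirLight` struct like the one used by the code later in this post and a `lights` array of `N_LIGHTS` entries (all of these names are mine):
// The reflectance equation for punctual lights: the hemisphere integral
// degenerates into a sum of f(l, v) * Li * (n . l) terms, one per light.
// BRDF(), lights[] and N_LIGHTS are hypothetical placeholders.
vec3 EvaluateReflectance(vec3 N, vec3 V)
{
    vec3 Lo = vec3(0.0);
    for (int i = 0; i < N_LIGHTS; ++i)
    {
        vec3 L = normalize(-lights[i].direction);
        float NdotL = max(dot(N, L), 0.0);
        Lo += BRDF(L, V, N) * lights[i].color * NdotL;
    }
    return Lo;
}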
“…Local reflectance is quantified by the bidirectional reflectance distribution function (BRDF), denoted as $f(\mathbf{l}, \mathbf{v})$. …” - pg. 310, ”Real-Time Rendering”, 4th Edition.
Now if we know what the incident light is, we just need the weight $f(\mathbf{l}, \mathbf{v})$ describing the entire optical characteristic of the medium to get the final result. This $f$, as the book introduces, is called the BRDF; combined with the microgeometry method listed above, we finally have a chance to write some code. But first, let's look back at the surface model used here: we say "surface", not "interface". "Surface" indicates that we are inspecting a region around the interface, so it has more properties than Snell's law can describe. Let's take a look:
Since a medium won't always absorb or scatter all the refracted light inside (most non-conductors won't, as there are too few free electrons inside to do that business), some of the light leaves the medium and re-enters the previous medium; we call this kind of phenomenon subsurface scattering. A typical practice is to separate the BRDF into two parts: a reflection part as specular and a local subsurface-scattering part as diffuse. I'll use the notation $f_{\text{spec}}$ and $f_{\text{diff}}$ later.
Sometimes we also need to calculate more general subsurface scattering, and then we use the bidirectional scattering-surface reflectance distribution function (abbr. BSSRDF) instead; for light that travels through the entire medium and leaves through another surface there is the bidirectional transmittance distribution function (abbr. BTDF). Together they are called BxDFs.
To make a BxDF physically correct, we need to meet 3 conditions:
1. $f(\mathbf{l}, \mathbf{v}) \geq 0$: a BxDF never yields a negative weight;
2. $f(\mathbf{l}, \mathbf{v}) = f(\mathbf{v}, \mathbf{l})$: this is called Helmholtz reciprocity; simply speaking, if we swap the incident and observation directions the result must stay the same;
3. $\forall \mathbf{l}, \ \int_{\Omega} f(\mathbf{l}, \mathbf{v}) \, (\mathbf{n} \cdot \mathbf{v}) \, d\mathbf{v} \leq 1$: this is the energy conservation law; the weight must never exceed 1, so the relative outgoing light energy never exceeds the relative incoming light energy.
In practice there is a convenient way to evaluate whether a BxDF conserves energy, called the white furnace test; I'll talk about it later.
A BRDF can be thought of as a 4D function $f(\theta_i, \phi_i, \theta_o, \phi_o)$, which makes it possible to measure in the real world; MERL, for example, is one such measured-BRDF database. We can also write the BRDF as $f = f_{\text{spec}} + f_{\text{diff}}$ to indicate that we'd like to separate it into the specular and diffuse parts and solve them independently.
Let's use $\mathbf{n}$ as the macrosurface normal vector, $\mathbf{m}$ as the microsurface normal vector, and $\mathbf{h}$ as the normalized halfway vector between the view direction $\mathbf{v}$ and the light direction $\mathbf{l}$.
Also, for the sake of convenience when discussing BRDFs more practically, I'd like to introduce the directional albedo, which measures the amount of light coming from a given direction that is reflected at all, into any outgoing direction in the hemisphere around the surface normal; in formula: $R(\mathbf{l}) = \int_{\Omega} f(\mathbf{l}, \mathbf{v}) \, (\mathbf{n} \cdot \mathbf{v}) \, d\mathbf{v}$. If the BRDF is Helmholtz-reciprocal, then $\mathbf{l}$ and $\mathbf{v}$ can be swapped freely.
The Lambertian BRDF: $f_{\text{diff}} = \frac{c_{\text{diff}}}{\pi}$. $c_{\text{diff}}$, the "color" of the surface, is strictly speaking the subsurface-scattering part of the surface response under a particular lighting circumstance; the $\pi$ comes from the fact that we treat the surface as a Lambertian surface, whose BRDF doesn't change with the view or light direction, so the cosine-weighted integral over the hemisphere yields $\pi$.
Let’s visualize its directional albedo: (BRDF Explorer/Octave WIP)
This simple Lambert diffuse model ignores the microscale variation of the surface. Combined with Lambert's cosine law, we get $R(\mathbf{l}) = \int_{\Omega} \frac{c_{\text{diff}}}{\pi} (\mathbf{n} \cdot \mathbf{v}) \, d\mathbf{v}$; since the cosine-weighted integral of the directions over the hemisphere is $\pi$, it cancels the $\pi$ in the BRDF, and we simply get $c_{\text{diff}}$, exactly what we learned first in real-time rendering class 101!
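The cancellation is easy to verify by writing the hemisphere integral in spherical coordinates:
$$\int_{\Omega} (\mathbf{n} \cdot \mathbf{v}) \, d\mathbf{v} = \int_0^{2\pi} \int_0^{\pi/2} \cos\theta \sin\theta \, d\theta \, d\phi = 2\pi \cdot \frac{1}{2} = \pi, \qquad \text{so} \quad R(\mathbf{l}) = \frac{c_{\text{diff}}}{\pi} \cdot \pi = c_{\text{diff}}.$$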
The lack of microscale detail in the simple Lambert model keeps us from more realistic results. Luckily, researchers and scientists have worked on this for decades, and we already have some advanced replacements for the simple Lambert model; the one most commonly used in today's real-time rendering community is the microgeometry theory, which we'll refer to as microfacet theory here.
R. L. Cook and K. E. Torrance [CT82] wrote a paper in the '80s that is the root of the most widely adopted microfacet model today. With the other nice references [Hei14][LR14] we can state the general Cook-Torrance model as
$$f(\mathbf{l}, \mathbf{v}) = \int_{\Omega} f_{\mu}(\mathbf{l}, \mathbf{v}, \mathbf{m}) \, G(\mathbf{l}, \mathbf{v}, \mathbf{m}) \, D(\mathbf{m}) \, \frac{(\mathbf{m} \cdot \mathbf{l})^{+} (\mathbf{m} \cdot \mathbf{v})^{+}}{(\mathbf{n} \cdot \mathbf{l}) (\mathbf{n} \cdot \mathbf{v})} \, d\mathbf{m}$$
where $x^{+}$ denotes clamping to zero. It emphasizes that the macro BRDF is correlated with the micro BRDF $f_{\mu}$, and can be calculated through an integral over the microfacet normals $\mathbf{m}$, with the additional weighting functions $D$ and $G$ keeping the micro-macro mapping relationship correct.
The $D$ function is called the distribution function or normal distribution function (abbr. NDF). It gives the spatial/statistical distribution of the micro normals $\mathbf{m}$ around the macro normal $\mathbf{n}$; the $\alpha$ appearing in it is a user-controlled variable that describes how "rough" the surface is, so we call it roughness or smoothness in practice (typically we build a non-linear mapping between the user-controlled roughness and the actual $\alpha$). We use statistical functions in practice since that's the only feasible way to calculate in real time; for how to map from a spatial description to a statistical one, I recommend reading the paper [Hei14] for a detailed understanding.
The $G$ function is called the geometry function or masking-shadowing function, but strictly speaking we'd better call it the visibility function, since the geometry function is usually used to compose the visibility function; in the literature, though, the terms are often used interchangeably. It gives a weight for how the microfacets occlude one another: masking along the view direction and shadowing along the incident light direction. It should be derived consistently with the $D$ function we chose.
I'll list some commonly used $D$ and $G$ functions below:
The Gaussian and Beckmann NDFs: $D_{\text{Gaussian}}(\mathbf{m}) = c \, e^{-(\theta_m / m)^2}$ and $D_{\text{Beckmann}}(\mathbf{m}) = \frac{1}{\pi m^2 \cos^4 \theta_m} e^{-\tan^2 \theta_m / m^2}$. For these two functions, $c$ is an optional scaling factor, $\theta_m$ is the angle between $\mathbf{n}$ and $\mathbf{m}$, and $m$ is the RMS slope of the microfacets. Since they are computationally expensive, it's rare to see them in real products, but we sometimes treat them as offline references.
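As an offline-reference sketch, the normalized Beckmann NDF translates to GLSL directly; the parameter names `NdotM` (for $\cos\theta_m$) and `m` (for the RMS slope) are mine, and a `PI` constant is assumed to be defined:
// Beckmann normal distribution function (normalized form).
// NdotM = cos(theta_m), m = RMS microfacet slope.
float D_Beckmann(float NdotM, float m)
{
    float c2 = NdotM * NdotM;   // cos^2(theta_m)
    float t2 = (1.0 - c2) / c2; // tan^2(theta_m)
    float m2 = m * m;
    return exp(-t2 / m2) / (PI * m2 * c2 * c2);
}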
The Blinn-Phong, GGX (Trowbridge-Reitz), and generalized Trowbridge-Reitz (GTR) NDFs: $D_{\text{Blinn-Phong}}(\mathbf{m}) = c \, (\mathbf{n} \cdot \mathbf{m})^{\alpha_p}$, $D_{\text{GGX}}(\mathbf{m}) = \frac{\alpha^2}{\pi \left( (\mathbf{n} \cdot \mathbf{m})^2 (\alpha^2 - 1) + 1 \right)^2}$, and $D_{\text{GTR}}(\mathbf{m}) = \frac{c}{\left( (\mathbf{n} \cdot \mathbf{m})^2 (\alpha^2 - 1) + 1 \right)^{\gamma}}$. For these three functions, $c$ is an optional scaling factor, $\alpha$ is the roughness, and $\gamma$ is an optional exponential factor. If we choose $\alpha_p = 2\alpha^{-2} - 2$, Blinn-Phong is fairly close to the Beckmann function. They are the most commonly used functions as far as I know; GGX in particular gives a "long-tailed" visual appearance.
The Cook-Torrance geometry function: $G(\mathbf{l}, \mathbf{v}, \mathbf{h}) = \min\left( 1, \frac{2 (\mathbf{n} \cdot \mathbf{h})(\mathbf{n} \cdot \mathbf{v})}{\mathbf{v} \cdot \mathbf{h}}, \frac{2 (\mathbf{n} \cdot \mathbf{h})(\mathbf{n} \cdot \mathbf{l})}{\mathbf{v} \cdot \mathbf{h}} \right)$. This one comes from the original paper itself, but I haven't seen it in any product-level shader yet, since it can be replaced by other, more optimal versions.
The Smith geometry function: $G_1(\mathbf{m}, \mathbf{v}) = \frac{\chi^{+}(\mathbf{m} \cdot \mathbf{v})}{1 + \Lambda(\mathbf{v})}$. The original paper is not publicly available, so I list the derived version from a later reference. Here the numerator is a Heaviside function, $\chi^{+}(x) = 1$ when $x > 0$ and $0$ otherwise, which ensures the sidedness effect, while the $\Lambda$ function is an integral over the slopes of the microsurface, giving the masking probability. So unless we provide a concrete $\Lambda$ function, this formula can't be translated into a shader.
The Schlick-GGX function: $G_1(\mathbf{v}) = \frac{\mathbf{n} \cdot \mathbf{v}}{(\mathbf{n} \cdot \mathbf{v})(1 - k) + k}$, where $k$ is derived from the user-controlled roughness. In practice we can remap the roughness, e.g. $\alpha = \text{roughness}^2$ [Bur12] or $k = (\text{roughness} + 1)^2 / 8$ [Kar13], to get a better non-linear response. The Schlick function approximates the Smith function with $k = \alpha \sqrt{2/\pi}$, which is quite friendly for our application scene since its parameter requirements are modest.
The height-correlated Smith function: $G_2(\mathbf{l}, \mathbf{v}, \mathbf{m}) = \frac{\chi^{+}(\mathbf{m} \cdot \mathbf{v}) \, \chi^{+}(\mathbf{m} \cdot \mathbf{l})}{1 + \Lambda(\mathbf{v}) + \Lambda(\mathbf{l})}$. Because the microfacets mask and shadow at the same time, correlating the masking and shadowing parts with respect to height brings back some of the energy that would otherwise be lost. The detailed derivation is math-heavy; I recommend reading the corresponding papers for further understanding.
All of the functions listed above only account for single scattering, while in reality a rougher surface has a higher chance of bouncing light between different microfacets, and it's important to account for that part of the energy. The paper [HHED16] gives a stochastic-method-based ground truth, and later [IW17] gives an implementable way to compensate for it; an alternative version of the general formula is given by the book as:
“…$$f_{\text{ms}}(\mathbf{l}, \mathbf{v}) = \frac{\bar{F} \, \bar{R}_{\text{sF1}}}{\pi \left( 1 - \bar{R}_{\text{sF1}} \right) \left( 1 - \bar{F} \left( 1 - \bar{R}_{\text{sF1}} \right) \right)} \left( 1 - R_{\text{sF1}}(\mathbf{l}) \right) \left( 1 - R_{\text{sF1}}(\mathbf{v}) \right)$$ …” - pg. 346, ”Real-Time Rendering”, 4th Edition.
Let's start to decompose this formula from the Fresnel function $F$. As we discussed before, Snell's law gives us the direction of the transmitted light, and the ideal mirror reflection gives us the reflected light direction, but we don't know how much energy is reflected and how much is transmitted, which is quite an annoying problem. Luckily it was already solved (under some restricted conditions) in the 19th century by the French scientist Augustin-Jean Fresnel, as the Fresnel equations. Long story short, since we only care about an idealized sandbox in our real-time compensations, we look for a good-enough Fresnel function to simulate this phenomenon. The Schlick Fresnel approximation is widely adopted nowadays; it has the form $F(\mathbf{n}, \mathbf{l}) \approx F_0 + (F_{90} - F_0) \left( 1 - (\mathbf{n} \cdot \mathbf{l})^{+} \right)^5$. $F_0$ is the specular reflectance at normal incidence; it encodes the IOR when the view direction is perpendicular to the surface, as $F_0 = \left( \frac{n_1 - n_2}{n_1 + n_2} \right)^2$. Similarly, $F_{90}$ is the reflectance at grazing incidence, when $\theta_i = 90°$; some implementations assume $F_{90}$ is always 1, which covers most common conductor/dielectric material types. Finally, the $(1 - \cos\theta_i)^5$ term is another kind of cosine-weighted contribution appearing here.
The $\bar{F}$ here means the cosine-weighted average of $F$ over all angles, $\bar{F} = 2 \int_0^1 F(\mu) \, \mu \, d\mu$ with $\mu = \cos\theta$. If we just use the Schlick Fresnel approximation mentioned above (with $F_{90} = 1$), it can simply be calculated as $\bar{F} = \frac{20 F_0 + 1}{21}$.
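The closed form falls out of a short integral (a worked check, assuming $F_{90} = 1$):
$$\bar{F} = 2 \int_0^1 \left[ F_0 + (1 - F_0)(1 - \mu)^5 \right] \mu \, d\mu = F_0 + 2 (1 - F_0) \cdot \frac{1}{42} = \frac{20 F_0 + 1}{21}$$
since $\int_0^1 (1 - \mu)^5 \, \mu \, d\mu = \frac{1! \, 5!}{7!} = \frac{1}{42}$.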
Then we move on to another average, $\bar{R}_{\text{sF1}}$, built from $R_{\text{sF1}}$: the directional albedo of $f_{\text{sF1}}$, the specular BRDF term with $F_0$ set to 1. It can be interpreted as the reflected response of a pure white surface when illuminated by a unit directional light source, and Stephen Hill wrote a nice blog series about it. Basically, once we have the single-scattering BRDF implemented, we can use a discrete numerical integration method (like importance sampling) to calculate it and save it into a look-up table, exactly like the split-sum technique for IBL applied in [Kar13].
And then $\bar{R}_{\text{sF1}} = 2 \int_0^1 R_{\text{sF1}}(\mu) \, \mu \, d\mu$, its cosine-weighted average over the hemisphere.
Still no analytical solution, but this gives a theoretical foundation for the problem we need to solve, and unifies all of it inside one microsurface theory.
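As a sketch of the look-up-table idea: $R_{\text{sF1}}$ depends only on $(\mathbf{n} \cdot \mathbf{v})$ and the roughness, so we can importance-sample the GGX NDF and bake the result into a 2D texture, much like the split-sum BRDF LUT in [Kar13]. The function names below are mine; `Hammersley` and `ImportanceSampleGGX` follow their commonly published forms, and a `PI` constant plus GLSL 3.30+ (for `uint`) are assumed:
// Low-discrepancy 2D sample point (Hammersley sequence, Van der Corput bit reversal).
vec2 Hammersley(uint i, uint N)
{
    uint bits = (i << 16u) | (i >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return vec2(float(i) / float(N), float(bits) * 2.3283064365386963e-10);
}
// Sample a halfway vector proportional to the GGX NDF (tangent space, N = +Z).
vec3 ImportanceSampleGGX(vec2 Xi, float roughness)
{
    float a = roughness * roughness;
    float phi = 2.0 * PI * Xi.x;
    float cosTheta = sqrt((1.0 - Xi.y) / (1.0 + (a * a - 1.0) * Xi.y));
    float sinTheta = sqrt(1.0 - cosTheta * cosTheta);
    return vec3(cos(phi) * sinTheta, sin(phi) * sinTheta, cosTheta);
}
// Directional albedo R_sF1(NdotV, roughness): the GGX specular BRDF with F = 1,
// integrated by importance sampling; bake the result into a 2D LUT.
float IntegrateRsF1(float NdotV, float roughness)
{
    vec3 V = vec3(sqrt(1.0 - NdotV * NdotV), 0.0, NdotV);
    float R = 0.0;
    const uint SAMPLE_COUNT = 1024u;
    for (uint i = 0u; i < SAMPLE_COUNT; ++i)
    {
        vec3 H = ImportanceSampleGGX(Hammersley(i, SAMPLE_COUNT), roughness);
        vec3 L = normalize(2.0 * dot(V, H) * H - V); // reflect V about H
        float NdotL = max(L.z, 0.0);
        float NdotH = max(H.z, 0.0);
        float VdotH = max(dot(V, H), 0.0);
        if (NdotL > 0.0)
        {
            // Smith Schlick-GGX with the IBL remapping k = alpha / 2 [Kar13].
            float k = (roughness * roughness) / 2.0;
            float G = (NdotL / (NdotL * (1.0 - k) + k))
                    * (NdotV / (NdotV * (1.0 - k) + k));
            // Estimator weight after dividing by the GGX sampling pdf.
            R += G * VdotH / (NdotH * NdotV);
        }
    }
    return R / float(SAMPLE_COUNT);
}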
The Oren-Nayar model: $f_{\text{diff}} = \frac{c_{\text{diff}}}{\pi} \left( A + B \max(0, \cos(\phi_i - \phi_o)) \sin\alpha \tan\beta \right)$, with $A = 1 - \frac{0.5 \sigma^2}{\sigma^2 + 0.33}$, $B = \frac{0.45 \sigma^2}{\sigma^2 + 0.09}$, $\alpha = \max(\theta_i, \theta_o)$, $\beta = \min(\theta_i, \theta_o)$, and $\sigma$ is the roughness. It's an approximation built on the general Cook-Torrance (microfacet) idea; when $\sigma = 0$ we get the simple Lambert model back, so we can treat the Oren-Nayar model as a kind of generalization of the simple Lambert model.
Another advanced Lambert-based diffuse model, which considers the Fresnel effect, is the Disney diffuse model: $f_{\text{diff}} = \frac{c_{\text{diff}}}{\pi} \left( 1 + (F_{D90} - 1) \left( 1 - (\mathbf{n} \cdot \mathbf{l}) \right)^5 \right) \left( 1 + (F_{D90} - 1) \left( 1 - (\mathbf{n} \cdot \mathbf{v}) \right)^5 \right)$, where $F_{D90} = 0.5 + 2 \, \alpha \, (\mathbf{l} \cdot \mathbf{h})^2$ and $\alpha$ is the roughness [Bur12].
For the sake of energy conservation, [LR14] renormalizes the original Disney model: $F_{D90}$ is remapped to $F_{D90} = \text{energyBias} + 2 \, \alpha \, (\mathbf{l} \cdot \mathbf{h})^2$, where energyBias interpolates from 0 to 0.5 with roughness, and the whole term is multiplied by an energyFactor interpolating from 1 to $1/1.51$; I lay it out this way to better compare with the original version. The presentation also finds that the renormalized response lands close to the corresponding result in [Sch14].
The Phong model: $f_{\text{spec}} \propto (\mathbf{r} \cdot \mathbf{v})^{\alpha_p}$, where $\mathbf{r}$ is the reflection direction of $\mathbf{l}$ about $\mathbf{n}$. It's the most famous and most commonly used specular model of the last few decades, even programmed inside the fixed-function graphics hardware; it takes the exponent $\alpha_p$ as the user-controlled parameter.
The normalized Phong model: $f_{\text{spec}} = \frac{\alpha_p + 2}{2\pi} (\mathbf{r} \cdot \mathbf{v})^{\alpha_p}$. Actually, the Phong lobe acts as a cosine distribution function from a microsurface point of view, so $\alpha_p$ here can be thought of as the roughness.
The Blinn-Phong model: $f_{\text{spec}} \propto (\mathbf{n} \cdot \mathbf{h})^{\alpha_p}$, an optimization of the Phong model; in practice, if we choose the mapping $\alpha_p' = 4 \alpha_p$, the Blinn-Phong model looks like the Phong model [Wiki1].
The normalized Blinn-Phong model: $f_{\text{spec}} = \frac{\alpha_p + 2}{4\pi \left( 2 - 2^{-\alpha_p / 2} \right)} (\mathbf{n} \cdot \mathbf{h})^{\alpha_p}$; this is the normalization factor used in the code below.
The Cook-Torrance specular model: $f_{\text{spec}} = \frac{D(\mathbf{h}) \, F(\mathbf{v}, \mathbf{h}) \, G(\mathbf{l}, \mathbf{v}, \mathbf{h})}{4 (\mathbf{n} \cdot \mathbf{l}) (\mathbf{n} \cdot \mathbf{v})}$, the new kid in town (popular since ~2012)! It combines everything we've talked about before; the denominator $4 (\mathbf{n} \cdot \mathbf{l}) (\mathbf{n} \cdot \mathbf{v})$ is derived from the Jacobian of the change from microfacet space to macrosurface space, and that makes it quite elegant: we use microfacet theory but calculate at the macro level!
a. Simple Lambert model + Blinn-Phong model
vec3 CalcDirectionalLight(dirLight light, vec3 normal, vec3 diffuse, vec3 specular, vec3 viewPos, vec3 fragPos)
{
    vec3 N = normalize(normal);
    vec3 L = normalize(-light.direction);
    vec3 V = normalize(viewPos - fragPos);
    vec3 H = normalize(V + L);
    float NdotH = max(dot(N, H), 0.0);
    float NdotL = max(dot(N, L), 0.0);
    // ambient color (an arbitrary small constant term)
    vec3 ambientColor = diffuse * light.color * 0.04;
    // diffuse color (Lambert)
    vec3 diffuseColor = diffuse * NdotL * light.color;
    // specular color (Blinn-Phong)
    float alpha = 32.0;
    vec3 specularColor = specular * pow(NdotH, alpha) * light.color;
    return (ambientColor + diffuseColor + specularColor);
}
b. Oren-Nayar model + Normalized Blinn-Phong model
// Oren-Nayar diffuse BRDF
// ----------------------------------------------------------------------------
float orenNayarDiffuse(float LdotV, float NdotL, float NdotV, float roughness)
{
    float s = LdotV - NdotL * NdotV;
    float t = mix(1.0, max(NdotL, NdotV), step(0.0, s));
    float sigma2 = roughness * roughness;
    float A = 1.0 - (0.5 * sigma2 / (sigma2 + 0.33));
    float B = 0.45 * sigma2 / (sigma2 + 0.09);
    return max(0.0, NdotL) * (A + B * s / t);
}
vec3 CalcDirectionalLight(dirLight light, vec3 normal, vec3 diffuse, vec3 specular, float roughness, vec3 viewPos, vec3 fragPos)
{
    vec3 N = normalize(normal);
    vec3 L = normalize(-light.direction);
    vec3 V = normalize(viewPos - fragPos);
    vec3 H = normalize(V + L);
    float LdotV = max(dot(L, V), 0.0);
    float NdotH = max(dot(N, H), 0.0);
    float NdotL = max(dot(N, L), 0.0);
    float NdotV = max(dot(N, V), 0.0);
    // ambient color
    vec3 ambientColor = diffuse * light.color * 0.04;
    // diffuse color (Oren-Nayar)
    float Fd = orenNayarDiffuse(LdotV, NdotL, NdotV, roughness);
    vec3 diffuseColor = diffuse * Fd * light.color;
    // specular color (normalized Blinn-Phong; a PI constant is assumed to be defined)
    float alpha = 32.0;
    float normalizedScaleFactor = (alpha + 2.0) / (4.0 * PI * (2.0 - pow(2.0, -alpha / 2.0)));
    vec3 specularColor = specular * (1.0 - Fd) * pow(NdotH, alpha) * normalizedScaleFactor * light.color;
    return (ambientColor + diffuseColor + specularColor);
}
c. Normalized Disney model + Cook-Torrance (specular) model, using the GGX $D$ + height-correlated Smith $V$ + Schlick $F$ combination
(reference: Frostbite Engine [LR14])
// Frostbite Engine model
// ----------------------------------------------------------------------------
// Specular/Diffuse BRDF Fresnel Component
// ----------------------------------------------------------------------------
vec3 Frostbite_fresnelSchlick(vec3 f0, float f90, float u)
{
    return f0 + (f90 - f0) * pow(1.0 - u, 5.0);
}
// Diffuse BRDF (renormalized Disney diffuse)
// ----------------------------------------------------------------------------
float Frostbite_DisneyDiffuse(float NdotV, float NdotL, float LdotH, float linearRoughness)
{
    float energyBias = mix(0.0, 0.5, linearRoughness);
    float energyFactor = mix(1.0, 1.0 / 1.51, linearRoughness);
    float fd90 = energyBias + 2.0 * LdotH * LdotH * linearRoughness;
    vec3 f0 = vec3(1.0, 1.0, 1.0);
    float lightScatter = Frostbite_fresnelSchlick(f0, fd90, NdotL).r;
    float viewScatter = Frostbite_fresnelSchlick(f0, fd90, NdotV).r;
    return lightScatter * viewScatter * energyFactor;
}
// Specular BRDF Geometry Component: height-correlated Smith visibility,
// which already includes the 1 / (4 * NdotL * NdotV) Cook-Torrance denominator
// ----------------------------------------------------------------------------
float Frostbite_V_SmithGGXCorrelated(float NdotL, float NdotV, float alphaG)
{
    float alphaG2 = alphaG * alphaG;
    float Lambda_GGXV = NdotL * sqrt(NdotV * NdotV * (1.0 - alphaG2) + alphaG2);
    float Lambda_GGXL = NdotV * sqrt(NdotL * NdotL * (1.0 - alphaG2) + alphaG2);
    return 0.5 / max((Lambda_GGXV + Lambda_GGXL), 0.00001);
}
// Specular BRDF Distribution Component (GGX; its 1/PI is factored out below)
// ----------------------------------------------------------------------------
float Frostbite_D_GGX(float NdotH, float alphaG)
{
    float m2 = alphaG * alphaG;
    float f = (NdotH * m2 - NdotH) * NdotH + 1.0;
    return m2 / (f * f);
}
// ----------------------------------------------------------------------------
vec3 Frostbite_CalcDirectionalLightRadiance(dirLight light, vec3 albedo, float metallic, float roughness, vec3 normal, vec3 viewPos, vec3 fragPos, vec3 F0)
{
    vec3 N = normalize(normal);
    vec3 L = normalize(-light.direction);
    vec3 V = normalize(viewPos - fragPos);
    vec3 H = normalize(V + L);
    float NdotV = max(dot(N, V), 0.0);
    float LdotH = max(dot(L, H), 0.0);
    float NdotH = max(dot(N, H), 0.0);
    float NdotL = max(dot(N, L), 0.0);
    // remapping from perceptual roughness to alpha ("Quadratic curve")
    float alphaG = roughness * roughness;
    // Specular BRDF
    float f90 = 1.0;
    vec3 F = Frostbite_fresnelSchlick(F0, f90, LdotH);
    float Vis = Frostbite_V_SmithGGXCorrelated(NdotL, NdotV, alphaG);
    float D = Frostbite_D_GGX(NdotH, alphaG);
    vec3 Fr = D * F * Vis;
    // Diffuse BRDF (takes the perceptual/linear roughness)
    float Fd = Frostbite_DisneyDiffuse(NdotV, NdotL, LdotH, roughness);
    // the common 1/PI of both BRDFs is applied once here
    return (Fd * albedo + Fr) * light.color * NdotL / PI;
}
d. Simple Lambert model + Cook-Torrance (specular) model, using the GGX $D$ + Smith Schlick-GGX $G$ + Schlick $F$ combination
(reference: Unreal Engine 4 [Kar13])
// Unreal Engine model
// ----------------------------------------------------------------------------
// Specular BRDF Distribution Component (GGX; its 1/PI is factored out below)
// ----------------------------------------------------------------------------
float Unreal_DistributionGGX(float NdotH, float roughness)
{
    // remapping from perceptual roughness to alpha ("Quadratic curve")
    float a = roughness * roughness;
    float a2 = a * a;
    float NdotH2 = NdotH * NdotH;
    float num = a2;
    float denom = (NdotH2 * (a2 - 1.0) + 1.0);
    denom = denom * denom;
    return num / denom;
}
// Specular BRDF Geometry Component
// ----------------------------------------------------------------------------
float Unreal_GeometrySchlickGGX(float NdotV, float roughness)
{
    // k remapping for analytic (direct) lights [Kar13]
    float r = (roughness + 1.0);
    float k = (r * r) / 8.0;
    float num = NdotV;
    float denom = NdotV * (1.0 - k) + k;
    return num / denom;
}
// ----------------------------------------------------------------------------
float Unreal_GeometrySmith(float NdotV, float NdotL, float roughness)
{
    float ggx2 = Unreal_GeometrySchlickGGX(NdotV, roughness);
    float ggx1 = Unreal_GeometrySchlickGGX(NdotL, roughness);
    return ggx1 * ggx2;
}
// Specular BRDF Fresnel Component
// ----------------------------------------------------------------------------
vec3 Unreal_fresnelSchlick(float cosTheta, vec3 F0)
{
    return F0 + (1.0 - F0) * pow(1.0 - cosTheta, 5.0);
}
// ----------------------------------------------------------------------------
vec3 Unreal_CalcDirectionalLightRadiance(dirLight light, vec3 albedo, float metallic, float roughness, vec3 normal, vec3 viewPos, vec3 fragPos, vec3 F0)
{
    vec3 N = normalize(normal);
    vec3 L = normalize(-light.direction);
    vec3 V = normalize(viewPos - fragPos);
    vec3 H = normalize(V + L);
    float NdotV = max(dot(N, V), 0.0);
    float NdotH = max(dot(N, H), 0.0);
    float HdotV = max(dot(H, V), 0.0);
    float NdotL = max(dot(N, L), 0.0);
    // Specular BRDF
    vec3 F = Unreal_fresnelSchlick(HdotV, F0);
    float G = Unreal_GeometrySmith(NdotV, NdotL, roughness);
    float D = Unreal_DistributionGGX(NdotH, roughness);
    vec3 numerator = D * G * F;
    float denominator = 4.0 * NdotV * NdotL;
    vec3 specular = numerator / max(denominator, 0.00001);
    // for energy conservation: kS + kD <= 1, and metals have no diffuse
    vec3 kS = F;
    vec3 kD = vec3(1.0) - kS;
    kD *= 1.0 - metallic;
    // the common 1/PI of the Lambert diffuse and the GGX D is applied once here
    return ((kD * albedo + specular) * light.color * NdotL) / PI;
}
To be continued.
Bibliography:
[Bur12] B. Burley. “Physically Based Shading at Disney”. In: Physically Based Shading in Film and Game Production, ACM SIGGRAPH 2012 Courses. SIGGRAPH ’12. Los Angeles, California: ACM, 2012, 10:1–7. isbn: 978-1-4503-1678-1. doi: 10.1145/2343483.2343493. url: http://selfshadow.com/publications/s2012-shading-course/.
[CT82] R. L. Cook and K. E. Torrance. “A Reflectance Model for Computer Graphics”. In: ACM Trans. Graph. 1.1 (Jan. 1982), pp. 7–24. issn: 0730-0301. doi: 10.1145/357290.357293. url: http://graphics.pixar.com/library/ReflectanceModel/.
[Hei14] E. Heitz. “Understanding the Masking-Shadowing Function in Microfacet-Based BRDFs”. In: Journal of Computer Graphics Techniques (JCGT) 3.2 (June 2014), pp. 32–91. issn: 2331-7418. url: http://jcgt.org/published/0003/02/03/.
[HHED16] E. Heitz, J. Hanika, E. d'Eon, and C. Dachsbacher. “Multiple-Scattering Microfacet BSDFs with the Smith Model”. In: ACM Transactions on Graphics 35.4 (July 2016). issn: 0730-0301. doi: 10.1145/2897824.2925943. url: https://eheitzresearch.wordpress.com/240-2/.
[HTSG91] X. D. He, K. E. Torrance, F. X. Sillion, and D. P. Greenberg. “A Comprehensive Physical Model for Light Reflection”. In: ACM SIGGRAPH Computer Graphics 25.4 (July 1991), pp. 175–186. doi: 10.1145/127719.122738. url: https://www.graphics.cornell.edu/pubs/1991/HTSG91.pdf.
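[IW17] C. Kulla and A. Conty. “Revisiting Physically Based Shading at Imageworks”. In: Physically Based Shading in Theory and Practice, ACM SIGGRAPH 2017 Courses. SIGGRAPH ’17. Los Angeles, California: ACM, 2017. url: https://blog.selfshadow.com/publications/s2017-shading-course/.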
[Kar13] B. Karis. “Real Shading in Unreal Engine 4”. In: Physically Based Shading in Theory and Practice, ACM SIGGRAPH 2013 Courses. SIGGRAPH ’13. Anaheim, California: ACM, 2013, 22:1–22:8. isbn: 978-1-4503-2339-0. doi: 10.1145/2504435.2504457. url: http://selfshadow.com/publications/s2013-shading-course/.
[LR14] S. Lagarde and C. de Rousiers. “Moving Frostbite to PBR”. In: Physically Based Shading in Theory and Practice, ACM SIGGRAPH 2014 Courses. SIGGRAPH ’14. Vancouver, Canada: ACM, 2014, 23:1–23:8. isbn: 978-1-4503-2962-0. doi: 10.1145/2614028.2615431. url: http://www.frostbite.com/2014/11/moving-frostbite-to-pbr/.
[Sch94] C. Schlick. “An Inexpensive BRDF Model for Physically-based Rendering”. In: Computer Graphics Forum 13.3 (Sept. 1994), pp. 149–162. url: http://dept-info.labri.u-bordeaux.fr/~Schlick/DOC/eur2.html.
[Sch14] N. Schulz. “Moving to the Next Generation - The Rendering Technology of Ryse”. In: Game Developers Conference. 2014.
[Wiki1] https://en.wikipedia.org/wiki/Blinn%E2%80%93Phong_shading_model