Physically Based Shading and Image Based Lighting


Light is complicated, and we really don’t have a full equation that accurately models light in the real world. This might sound confusing because of all the recent strides in CG and visual technology. Well, it’s all approximate – it’s just that we select functions which approximate really well. The unfortunate truth is: There is no one equation for light – only approximations.

Blinn-Phong is an approximation. If you've been following my recent blog posts, you might have noticed that the lighting model I have been using since the start is Blinn-Phong. But let's face it, Blinn-Phong hasn't really improved with age – the paper on this algorithm was originally published in '77. At the time of writing, it's an almost 40-year-old approximation for lighting. Can't we do better?

Well, yes actually!

Physically Based Shading


Important: The following article has a ton of before and after pictures. When something is paired with a direction (Left), that refers to the image that takes up the left side of the view.

Modern shading models are often referred to as Physically Based. They feature a more complicated lighting model which separates into multiple equations with three interchangeable specular factors – the whole equation forms a Bidirectional Reflectance Distribution Function (BRDF) known as the Cook-Torrance Model. A BRDF is essentially a function which models the amount of reflected light across the surface of an object. Bidirectional means that if the light and the view were to switch places, the equation would produce the same result. Reflectance is just what it sounds like: some factor representing the amount of light reflected. Distribution refers to how that reflected light is spread over the object – much like cumulative distribution functions in probability, we expect the sum of all its parts to equal 1 (conservation of energy, in our case). And Function – it is a function. :)

The Cook-Torrance Model can be expressed as follows:
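One common way to write the specular portion of the Cook-Torrance model (with n the surface normal, l the direction to the light, v the direction to the viewer, and h the half vector between l and v) is:

f_{spec}(l, v) = \frac{D(h)\, F(v, h)\, G(l, v, h)}{4\,(n \cdot l)(n \cdot v)}

A Lambertian diffuse term (base color over π) is added on top of this specular lobe, as you'll see later in the irradiance calculation.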

This model represents the amount of light reflected from an object (similar to Blinn-Phong), but with an approximation that takes into account the microscopic levels of detail on the surface of the object. The three functions F, G, and D are the specular factors which represent (respectively) Fresnel, Geometric Occlusion, and Normal (Microfacet) Distribution. The power of this kind of BRDF is that different specular functions can be swapped out with whatever approximation you see fit (so long as they correspond to the same geometric meaning). What I mean by this is that there are several approximations to each of these functions; you only need to choose one, but you have the freedom to select whichever you want.

Let’s discuss the factors in more detail.

Fresnel Factor


Slide to compare Fresnel Off (Left) and Fresnel On (Right). 

Fresnel is the amount of light that reflects based on the angle of incidence between the light and the normal. As the angle of incidence becomes increasingly large, the amount of light that reflects into our eyes becomes greater. At a 90° Angle of Incidence (AOI), the amount of light that reflects is 100%. An interesting fact about the Fresnel factor is that every known material has some reflection – yes, even the ones you wouldn't expect. If you look towards a light where you and the light have an increasing angle of incidence, you can force out this specular factor. It stands to reason that no object completely absorbs light – that wouldn't be physically plausible.

However, not everything reflects the same amount of light at all angles – the base value at a 0° angle of incidence is known as F0. Different types of materials have different values of F0, ranging between roughly 0.01 and 0.95, and practically nothing falls outside of that range. (Silver is the most reflective metal, with a base F0 of about 0.95; to my knowledge ice is among the lowest, at about 0.018.)

Sc0tt Games has a pretty good table of non-metal reflective indices.

Geometric Occlusion Factor


Slide to compare Smith-Schlick-Beckmann (Left) and Smith-Ggx (Right). 

The next factor represents the amount of the surface – at a microscopic level – that is self-occluding. This parameter should ideally only affect rough objects. As an object becomes more rough, the amount of microfacet self-occlusion increases, so the amount of specular light observed decreases.

If we try to imagine a perfectly smooth surface, we can identify that there are still impurities in it at a microscopic level. Because of this, we can say that there is some amount of self-shadowing going on, even if it's small. Smith-Schlick-Beckmann, Smith-Ggx, and Cook-Torrance all seem to have pretty good equations for Geometric Occlusion.

Normal Distribution Factor


Slide to compare Beckmann (Left) and Ggx (Right). 

This factor is rather unfortunately named, because it's often confused with regular, mathematical Normal Distributions (like what we used to blur Exponential Shadow Maps in the previous blog post). In our case, though, it describes the distribution of microfacet normals – so the name is appropriate.

The Normal Distribution is a function which determines the probability that the faces on a microfacet surface are oriented towards the normal of the surface. This tends to control the spread and falloff of the specular term. You often see Ggx used because it has a much wider tail to the specular reflection – which is pretty pleasing to the eye.

Microfacet BRDF Equations

So this wouldn't be an experiment with all the different shading equations without a laundry list of equations to try. I'm just going to list the functions I found, and at the end of each section talk a little bit about my favorite combinations.

Definitions
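A quick summary of the symbols used throughout the equations below (these are the conventional definitions; my exact notation may differ slightly):

n – the surface normal
l – the normalized direction from the surface to the light
v – the normalized direction from the surface to the viewer
h = \frac{l + v}{\lVert l + v \rVert} – the half vector between light and view
\alpha = \text{roughness}^2 – the re-parametrized roughness (see the Material Structure section)
n \cdot l,\; n \cdot v,\; n \cdot h,\; v \cdot h – the usual dot products, clamped to be non-negative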










Fresnel Equations
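The one I end up sticking with (see the Comparison section below) is Schlick's Approximation, whose standard form is:

F_{\text{Schlick}}(v, h) = F_0 + (1 - F_0)\,(1 - v \cdot h)^5

where F0 is the base reflectivity at a 0° angle of incidence discussed earlier.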


 


 


 

Geometry Equations
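As one example, Cook-Torrance's own geometric attenuation term (one of the options mentioned earlier) is commonly written as:

G_{\text{Cook-Torrance}}(l, v, h) = \min\!\left(1,\; \frac{2\,(n \cdot h)(n \cdot v)}{v \cdot h},\; \frac{2\,(n \cdot h)(n \cdot l)}{v \cdot h}\right)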





 

Note: The general form of the Smith equations is to take the product of the function called twice – once with arguments (l, h) and once with arguments (v, h). As such:
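G_{\text{Smith}}(l, v, h) = G_1(l, h)\; G_1(v, h)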

In the following equations, the variable i is the placeholder for whichever variable is plugged in first (l or v).

Geometry Equations (Smith)
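Two of the G1 terms compared above, in their standard forms (using i as the placeholder described in the note), are:

G_{1,\text{Schlick-Beckmann}}(i, h) = \frac{n \cdot i}{(n \cdot i)(1 - k) + k}, \qquad k = \alpha\sqrt{\tfrac{2}{\pi}}

G_{1,\text{Ggx}}(i, h) = \frac{2\,(n \cdot i)}{(n \cdot i) + \sqrt{\alpha^2 + (1 - \alpha^2)(n \cdot i)^2}}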

 




Distribution Equations
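The Beckmann and Ggx distributions compared above have the standard forms:

D_{\text{Beckmann}}(h) = \frac{1}{\pi\, \alpha^2\, (n \cdot h)^4}\, \exp\!\left(\frac{(n \cdot h)^2 - 1}{\alpha^2\, (n \cdot h)^2}\right)

D_{\text{Ggx}}(h) = \frac{\alpha^2}{\pi \left((n \cdot h)^2 (\alpha^2 - 1) + 1\right)^2}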




Cumulative Distribution (Sample Skewing)


Phong:


Beckmann:


Ggx:
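For reference, the standard sampling angles for all three (with ξ₁, ξ₂ a uniform random pair, φ = 2πξ₁, θ measured from the surface normal, and α_p the Phong specular power) work out to:

\theta_{\text{Phong}} = \arccos\!\left(\xi_2^{\,1/(\alpha_p + 2)}\right), \qquad \theta_{\text{Beckmann}} = \arctan\!\sqrt{-\alpha^2 \ln(1 - \xi_2)}, \qquad \theta_{\text{Ggx}} = \arctan\!\left(\alpha\,\sqrt{\frac{\xi_2}{1 - \xi_2}}\right)

These follow the derivations in the Microfacet Models for Refraction through Rough Surfaces paper linked at the bottom of the article.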

Comparison

Generally GGX, or some mixture of Smith/GGX, is very popular. I tend to like different ones depending on the scene and light composition. I stick with Schlick's Approximation for Fresnel. For Geometric Occlusion I prefer either Smith/Ggx, Smith-Schlick-Beckmann, or Cook-Torrance. And for Normal Distribution, Ggx has a longer specular tail – I tend to prefer that. For Importance Sampling, I tend to mix and match (even though mathematically this is incorrect) by using Beckmann sampling with the Ggx Normal Distribution. But you can see how the terms work together to produce pretty impressive results.

A sample showing interpolations between different Metallic and Roughness values.

What's most impressive about the above picture is that every object here is white. The only changing parameters are Metallic and Roughness.

With these two variables alone we can represent a wide spectrum of materials. Towards the top we can see metals ranging from brushed and rough to smooth and reflective. Moving down the metal spectrum, we hit a wall where objects seem to maintain some of their own diffuse color – these are called dielectrics. These objects range from glossy, crystalline materials to smooth plastics. At the rough end of the spectrum you can spot matte surfaces and rubber materials.

In order to compare differences in specular factors, I have implemented all of the functions above as shader subroutines (OpenGL 4.0+), which allows me to dynamically switch factors for the BRDF without recompiling shaders. It's definitely not as efficient as writing a compact implementation of the entire BRDF, but it allows us to see all of the possibilities with great ease. One interesting anomaly is that Smith-Beckmann didn't seem to play nicely with any other factor aside from the Beckmann distribution. You'll notice white speckles where the reflection is over-pronounced when Smith-Beckmann is paired improperly.
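As a rough sketch of what that subroutine setup can look like (the names here, like DistributionTerm, dGgx, and dBeckmann, are illustrative rather than my exact code):

// Subroutine type for the normal distribution factor D.
subroutine float DistributionTerm(float NoH, float alpha);

subroutine(DistributionTerm)
float dGgx(float NoH, float alpha)
{
  // Ggx normal distribution.
  float a2 = alpha * alpha;
  float d  = NoH * NoH * (a2 - 1.0) + 1.0;
  return a2 / (3.14159265 * d * d);
}

subroutine(DistributionTerm)
float dBeckmann(float NoH, float alpha)
{
  // Beckmann normal distribution.
  float a2   = alpha * alpha;
  float NoH2 = NoH * NoH;
  return exp((NoH2 - 1.0) / (a2 * NoH2)) / (3.14159265 * a2 * NoH2 * NoH2);
}

// The application picks the active factor with glUniformSubroutinesuiv,
// so swapping D (or F, or G) never requires a shader recompile.
subroutine uniform DistributionTerm uDistribution;

The Fresnel and geometry factors get the same treatment, each with its own subroutine type.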

Material Structure

The material structure I’ve settled on is a simplified version of Unreal’s material system (Base Color, Metallic, and Roughness).

The Base Color is the color which we use for the diffuse portion of our lighting equation. It also doubles as the specular tint for metallic objects. So if we have an object that falls in the range of dielectrics, this color is used for the diffuse term; if it falls in the range of metals, it's multiplied in as the specular tint. Metallic is simply the F0 value for the material, and it is clamped to the range [0.02, 0.99] (Cook-Torrance's Fresnel equation didn't play nice with an F0 of 1, and everything should have at least some specular). Roughness is a term used in several of the Microfacet BRDF functions above; in order to make the distribution of rough/smooth more linear, we re-parametrize roughness by squaring it (as outlined above in the Definitions section), and we enforce a minimum roughness of 0.01. (Materials with infinitely smooth surfaces can exist in a vacuum, but due to Cold Welding this is a short-lived experience.)

As I pointed out, when travelling along the spectrum of metals we hit a wall where diffuse is no longer applied. This “wall” is what separates the dielectrics (non-conductive materials) from the metals (conductive materials). This section of F0 is more commonly referred to as semiconductors (somewhat conductive materials). An interesting fact about measurements of different metals is that they tend to have absolutely no diffuse term to them. So what I do for convenience's sake is split the materials into two separate calculations – Dielectrics and Metals. We interpolate between those calculations through the lesser-seen semiconductor range.

Very few materials fall within the semiconductor range [0.2, 0.45]. But for ease of implementation, and to allow some form of physical blending, I do allow these ranges. This semiconductor range is where I interpolate between the two blend models. So, starting from the base F0 of the semiconductors up to the top-most value, we interpolate between the results of the two different blend modes. Here is some shader code showing this interpolation:
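Here is a minimal sketch of how such a BlendMaterial helper could look (the constant names and exact blend are illustrative assumptions based on the description above):

// Assumed semiconductor range, per the discussion above.
const float kSemiconductorMin = 0.2;
const float kSemiconductorMax = 0.45;

// Blend the dielectric and metallic results based on Metallic (the material's F0).
// Kdiff and Kspec are the diffuse and specular terms already computed for this fragment.
vec3 BlendMaterial(vec3 Kdiff, vec3 Kspec, vec3 baseColor, float metallic)
{
  vec3 dielectric = Kdiff + Kspec;      // dielectrics keep their diffuse term
  vec3 metal      = Kspec * baseColor;  // metals tint the specular and lose diffuse
  float t = smoothstep(kSemiconductorMin, kSemiconductorMax, metallic);
  return mix(dielectric, metal, t);
}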

The idea is pretty simple. Whenever you have a diffuse and a specular term (pretty much always), plug them into the BlendMaterial function to make sure that the blending is done properly.

Note: How you wish to blend your materials is entirely up to you. Another method is to blend between the minimum and maximum metallic values for a wide spectrum of materials.

Importance Sampled Image Based Lighting

Another thing we can do to make our application more appealing is some approximation of the light within the environment. Image Based Lighting (IBL) is a nice way of approximating many light samples from an environment for scenes with highly complex lighting, which would otherwise take a lot of time to calculate. Look around the room you're in – it's likely there is more than one light on. It's also possible that there are many lights of different shapes and sizes contributing to the environment. Actually modelling all of this lighting in real-time can be a burden.

Image Based Lighting means taking a picture of the environment (for lack of a better description) and referencing that picture for ambient light, instead of the single, global attenuated ambient approximation we were using. In previous posts, you might notice that the global ambient light is just some constant value which attenuates into the distance until nothing is left. IBL basically replaces that constant global lighting with a good approximation that uses an image as reference.


Slide to compare 256 Samples (Left) with 20 Samples (Right). 

The problem is that, depending on the roughness of the object, we may need many samples to properly calculate the reflected light. Otherwise we end up sampling parts of the environment which aren't impacting the integral of our reflection by much. This produces gritty, spotty images where you can see that the sampling was not ideal. Modern engines will preprocess that information into multiple textures so that the entire ambient term of a pixel can be applied with a single texture lookup. But what if we don't want to spend tons of time preprocessing that information?

Importance Sampling is the idea that when we do have to take multiple samples, we skew the samples towards the reflection of the view vector across the normal. This sounds complicated, but imagine an entire set of vectors all generally pointing the same direction – and rotating the whole group to point towards a light – that's what's happening here. The initial Importance Sample vectors are calculated using the material's Normal Distribution function's integral (the Cumulative Distribution functions above). So all we need to do is generate some predictable, low-discrepancy set of random numbers and skew them based on that distribution.


Slide to compare 256 Samples (Left) with 20 Importance Samples (Right). 

The Importance Sampled version is much faster. Instead of 256 samples per fragment over the entire hemisphere, we are looking at 20 samples per fragment for a fairly equivalent-looking image. Of course there will be differences since we're running an approximation, but you can still make out the light sources as well as their size and color – which is the important part. All you really need to do to be “Importance Sampling” is skew your initial samples by one of the Cumulative Distribution Sampling functions above. Another thing you can do to improve the predictable randomness is rotate the samples using the Alchemy XOR rotation (skew the sample points, and rotate the skew based on the pixel coordinates). This noise makes it harder to find repeating patterns in the sampling, which allows us to take fewer samples and get similar results.

Note: The way I learned about Hammersley points in detail was through an excellent blog post discussing “Points on a Hemisphere” by Holger Dammertz. I cannot even begin to do his post justice, so if you need more information on the magic Hammersley function below, look no further. I did make one minor alteration to his source – since I’m using OpenGL 4, I’ve opted to use GLSL’s bitfieldReverse instead of using the one Holger provides.

Here is a sample of the resulting GLSL:
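A minimal sketch of the two pieces described above – the Hammersley sequence built on GLSL's bitfieldReverse, and a Ggx-skewed sample direction – might look like this (function names and the tangent-basis construction are my own):

const float PI = 3.14159265358979;

// i-th point of an N-point Hammersley set; the radical inverse is just the
// bit-reversed index scaled back into [0, 1).
vec2 hammersley(uint i, uint N)
{
  float radicalInverse = float(bitfieldReverse(i)) * 2.3283064365386963e-10; // 1 / 2^32
  return vec2(float(i) / float(N), radicalInverse);
}

// Skew a uniform sample xi toward the Ggx lobe around the normal N.
// alpha is the squared roughness from the Definitions section.
vec3 importanceSampleGgx(vec2 xi, float alpha, vec3 N)
{
  float phi      = 2.0 * PI * xi.x;
  float cosTheta = sqrt((1.0 - xi.y) / (1.0 + (alpha * alpha - 1.0) * xi.y));
  float sinTheta = sqrt(1.0 - cosTheta * cosTheta);

  // Spherical -> cartesian, in tangent space around +Z.
  vec3 H = vec3(sinTheta * cos(phi), sinTheta * sin(phi), cosTheta);

  // Build a tangent basis around N and rotate the sample into it.
  vec3 up       = abs(N.z) < 0.999 ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
  vec3 tangentX = normalize(cross(up, N));
  vec3 tangentY = cross(N, tangentX);
  return tangentX * H.x + tangentY * H.y + N * H.z;
}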

Putting it All Together

The last thing I want to show you is that the entire ambient light pass can be done in one go, with a single full-screen quad. The multiple samples – though reduced – are still too many to be taking for every object, especially since objects might be occluded by other objects. So the idea here is that we render a single fullscreen quad, and at each fragment either write the environment to the screen (depth == 1) or perform the lookup/calculation of the ambient light for the object at that point (depth != 1). The following sample shader shows how this is done; this is essentially the ambient pass which happens before any other light is calculated – and since we're overwriting values, we don't need to clear the light buffer.
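Here is a stripped-down sketch of that fullscreen pass; the G-buffer layout, uniform names, and the diffuse-only ambient term are assumptions for illustration, and the real pass also runs the importance-sampled specular loop from the previous snippet per fragment:

#version 400

uniform sampler2D depthTexture;    // scene depth; 1.0 where nothing was drawn
uniform sampler2D normalTexture;   // world-space normals from the geometry pass
uniform sampler2D albedoTexture;   // base color from the geometry pass
uniform sampler2D environmentMap;  // sphere-map of the environment
uniform sampler2D irradianceMap;   // heavily blurred sphere-map used as irradiance
uniform mat4 uInverseViewProjection;
uniform vec3 uCameraPosition;

in vec2 vUv;
layout(location = 0) out vec4 fColor;

const float PI = 3.14159265358979;

// Classic sphere-map parameterization: direction -> uv in [0, 1].
vec2 sphereUv(vec3 d)
{
  float m = 2.0 * sqrt(d.x * d.x + d.y * d.y + (d.z + 1.0) * (d.z + 1.0));
  return d.xy / m + vec2(0.5);
}

// Reconstruct the world-space position of this fragment from its depth.
vec3 worldPosition(vec2 uv, float depth)
{
  vec4 ndc   = vec4(uv, depth, 1.0) * 2.0 - 1.0;
  vec4 world = uInverseViewProjection * ndc;
  return world.xyz / world.w;
}

void main()
{
  float depth   = texture(depthTexture, vUv).r;
  vec3  viewDir = normalize(worldPosition(vUv, depth) - uCameraPosition);

  if (depth == 1.0)
  {
    // Nothing was rendered here: just show the environment behind the scene.
    fColor = vec4(texture(environmentMap, sphereUv(viewDir)).rgb, 1.0);
  }
  else
  {
    // Ambient (diffuse irradiance) contribution for the surface at this fragment;
    // the importance-sampled specular term would be accumulated here as well.
    vec3 N         = normalize(texture(normalTexture, vUv).xyz);
    vec3 baseColor = texture(albedoTexture, vUv).rgb;
    vec3 Kdiff     = texture(irradianceMap, sphereUv(N)).rgb * baseColor / PI;
    fColor = vec4(Kdiff, 1.0);
  }
}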

Note that textureSphereLod is just a custom function which samples a sphere-map by taking a given direction and translating it to uv coordinates. The function you call here will differ depending on whether you use sphere-maps, cube-maps, or dual-paraboloid maps. I used sphere-maps purely out of convenience. I've heard cube-maps have the lowest distortion of any method, but I haven't tested this. In this sample, the distortion from the sphere-map was not pronounced enough to be noticeable.

Final Renders

That's all there is to it! It's a little tough to wrap your head around Microfacet BRDFs, but the results are promising. It also requires that you and your artists are in sync with one another, as it provides artists with different material parameters. But the whole idea is that now we can introduce complicated lighting scenarios, and the material won't have to be tweaked or altered to fit a different scene with different lighting. Any object should be able to be placed in a new environment, and the lighting should make the object look like it belongs in that environment – that's the end goal. What's more impressive is that this end goal is feasible in real-time at 60fps. In modern titles which try to model the real world, there is no reason not to adopt a more advanced BRDF – even smartphones can afford such technology if it's implemented properly. Below are a few renders I did using different environments for ambient light – hope you enjoy Physically Based Shading!

Looking up at the ceiling lights in a theatre.



Showing off Per-Fragment Motion Blur a little here. The camera is backing away from the objects.


A very solemn, rough metal structure. Bear-Girl provided by Liz Pulanco.


A Z-Brush sculpture of a cat in the same environment as above. Cat provided by Mandi Aka.



A silver, brushed metal dragon sculpture. This was rendered in an environment without proper tone mapping, hence the over-sampling. Nevertheless, I thought the results were kind of cool, so I kept it!


Believe it or not – this is just a different angle of the “Comparisons” shot from the article above. Shows how much of an impact a different angle can give.

References

Note: Links with an asterisk (*) by them indicate a source that was integral to my understanding of Physically Based Shading. I highly recommend these sources.



12 thoughts on “Physically Based Shading and Image Based Lighting”

  • Dimitris

    Hello,

    First of all I want to congratulate you for this post. It helped me a lot to figure out how PBR works.

    I do have one question though. I’m building a small game engine (mostly to practice on CG) and I’m trying to implement PBR. I set up my BRDF and my material struct and I’m pretty sure everything works fine (or close to fine). The problem is that I do not understand what to do with ambient light (environment) contribution.

    Right now I compute light using the Cook-Torrance model, but that's for point lights. That returns me a result. To that result, I figured I should add the environment contribution (reflections from the skybox) somehow.

    Could you please give me any pointers to help me figure out what to do with ambient?

    • Trent Post author

      Hey Dimitris,

      So IBL is supposed to be a better approximation to the ambient contribution entirely. And since we have an image with light intensities, we can even approximate the reflection off an object by some approximation to the integral over the contributing light (this is done in the above code through the function radiance). Without having significant light directions with contribution intensities, we just have a global ambient term, like in any other shading model.

      All global ambient light represents is the amount of light just existing within the environment at any given point. So it doesn’t vary, it doesn’t cause shading, it doesn’t even introduce specular highlights (because there is no light origin or direction, it just exists). Many game engines implement a distance attenuation which kind of causes something that looks like shading, not sure if you want to consider this, or how common it really is any more. However, I think just simply multiplying the diffuse colour of the object with the global lighting contribution should be enough (like “fragColor = material.diffuse * global_ambient; {rest of lighting calculation after…}”) A quick Google search pulls up this article, which seems like it paints a pretty good picture of what a simple scalar ambient term represents: http://www.tomdalling.com/blog/modern-opengl/07-more-lighting-ambient-specular-attenuation-gamma/

      Global ambient light is really kind of a hack. IBL is just a better approximation (still a hack) of a million distant lights within the environment, where global ambient is just kind of: “eh, there is no light, but this much light just exists everywhere without direction”. You can kind of see this diffuse contribution being applied via the following code from above: “vec3 Kdiff = irrMap * baseColor() / pi;” but again, since we have more dynamic information via the spheremap surrounding the scene at an infinite distance with infinite “points” of light, we can also add a nice handy specular term that wouldn’t normally be possible with _just_ the ambient scalar.

      Without IBL, a general specular is usually added via a directional light (that is, a light that just exists casting light everywhere at some defined-by-the-user angle.) You may want to consider this minimally if you don’t want to introduce IBL, so that all objects can just have specular if they are shiny, even without being within “range” of a light.

      Hope this helps! :)

      EDIT: This was a very roundabout response. The simple version is: IBL is the ambient term entirely. Specifically the contribution of “light just existing” is the irradiance map. I would say you either use IBL, or you use some global constant scalar. Using both might look weird in cases (e.g. global ambient “red”, in an IBL scene predominantly “blue”, might look weird.) You could try just add-scale (global_ambient * material.diffuse) into the object’s diffuse, but I can’t promise this’ll look too great in all cases.

      • Dimitris

        Hello and sorry for the late response.

        Thank you for your reply. It really helped me understand. At first I tried to use IBL like I use the global ambient, just add it to diffuse color. Which didn’t work obviously, it wasn’t right.

        So just to be sure, you’re saying that IBL is yet another light source. So if I have 3 point lights and IBL I compute my BRDF for all of them one by one and I should end up with something like this:

        final_color = light1_after_brdf + light2_after_brdf + light3_after_brdf + ambient_IBL_after_env_brdf;

  • Karsten

    Hey man, I’ve been reading your stuff to improve my knowledge in OpenGL using Qt. You make everything seem so simple. Your post about physically based shading has helped me to understand the basics of it. I’ve got a question though, I hope you can answer it.

    In their course notes, Epic Games choose D * NoH / (4 * VoH) as their pdf to resolve the integral. However, this is part of the denominator, so you actually just remove the D from the equation, which I thought to be an important part of the Cook-Torrance shading model?

    I see that you still compute the distribution so how is your approach different than theirs?

    Thanks for your help ;)

    • Trent Post author

      Hey Karsten,

      When doing importance sampling, if you select a pdf which aligns you with the “shape of samples” (for lack of a better phrase) of the distribution, then it becomes unnecessary to add in the occlusion from the normal distribution function (D). This allows us to omit certain parts of the equation altogether, since we’re technically sampling our light source along that distribution.

      I actually do the same thing, see above:
      fColor += F_ * G_ * LColor * VoH / (NoH * NoV);

      I like to describe it that way, I think it makes more sense to think that the shape of our sampling distribution implicitly “creates” the normal distribution. The equations I pulled were from the Microfacet Models for Refraction through Rough Surfaces (https://www.cs.cornell.edu/~srm/publications/EGSR07-btdf.pdf) as linked at the bottom of the article.

      Now, for why I still use the distribution function – that is a part that Epic alludes to, but they don't implement in their course notes. Notice how they have code to do environment map sampling, but they always pass in 0 for their level of detail? Well, using the distribution function is part of the equation for calculating the LoD for the environment map to reduce samples taken at runtime. (see: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch20.html)

      From their notes:
      “The sample count can be reduced significantly by using mip maps [3], but counts still need to be greater than 16 for sufficient quality.”

      We see this to be true in my case – I needed about 20 samples to make things look decent. However, as they mention, a better implementation may be to pre-calculate using the split-sum approximation (I did not implement this). I can’t speak much on the split-sum approach, but I wanted to do things the manual way before adopting the split-sum approach. Mostly because I wanted to learn what’s going on as best as possible.

      Hope this helps! :)

      • Karsten

        Thanks for the answer,
        now everything is clearer to me.
        Good that you mentioned GPU gems, I was desperately trying to understand how Mip Maps fit into this all, maybe you should link it too ;)

        I know, I think I have to go the same route because my materials are changing color / roughness etc. often so precalculating wouldn’t be that efficient. (It seems that you have to integrate twice as much)

  • idovelemon

    Hello!

    Very nice article about PBR. I learn a lot from your post.

    I just have a question about IBL when preintegrate EnvironmentBRDF.

    In , there is a function IntegrateBRDF(float Roughness, float nDotv).

    In this function, it calls ImportanceSampleGGX(Xi, Roughness, N). I do not know where this ‘N’ comes from.

    Is this a fixed value – for example, does this N always equal (0, 1, 0) – or does it come from somewhere outside of IntegrateBRDF?

    Could you tell me how to get this “N” value ?

  • 6ip.biz

    I'm happy about this, since it represents a movement away from hacking around with magic numbers and formulae in shaders, towards focusing on the underlying material and lighting models.

  • jacksparow

    Now the mainstream engines' approach is to use image-based lighting to achieve approximate global illumination, but such a technique seems to be only applicable to outdoor scenes. What kind of technology is used in indoor scenes to achieve global illumination? For example, in a room with natural light, or in a completely enclosed indoor environment.