Light is complicated, and we don’t have a single equation that accurately models light in the real world. This might sound surprising given all the recent strides in CG and visual technology, but it’s all approximate – we simply select functions which approximate reality really well. The unfortunate truth is: there is no one equation for light – only approximations.
Blinn-Phong is an approximation. If you’ve been following my recent blog posts, you might have identified that the lighting model I have been using since the start is Blinn-Phong. But let’s face it, Blinn-Phong hasn’t really improved with age – the original paper on this algorithm was published in 1977. At the time of writing, it’s almost a 40-year-old approximation for lighting. Can’t we do better?
Well, yes actually!
Physically Based Shading
Important: The following article has a ton of before and after pictures. When something is paired with a direction (e.g. Left), that refers to the image that takes up that side of the view.
Modern shading models are often referred to as Physically Based. They feature a more complicated lighting model which separates into multiple equations with three special interchangeable factors – the whole equation forms a Bidirectional Reflectance Distribution Function (BRDF) known as the Cook-Torrance Model. A BRDF is essentially a function which models the amount of reflected light across the surface of an object. Bidirectional means that if the light and the view were to switch places, the equation would produce the same results. Reflectance is just what it sounds like: some factor representing the amount of light reflected. Distribution describes how the reflected light is spread over the object; much like cumulative distribution functions in probability, we expect the sum of all its parts to equal 1 (conservation of energy, in our case). And Function – it is a function.
The Cook-Torrance Model can be expressed as follows:

f(l, v) = D(h) · F(v, h) · G(l, v, h) / (4 (n·l)(n·v))

This model represents the amount of light reflected from an object (similar to Blinn-Phong) but with an approximation that takes into account the microscopic levels of detail on the surface of the object. The three functions F, G, and D are the specular factors which represent (respectively) Fresnel, Geometric Occlusion, and (Microfacet) Normal Distribution. The power of this kind of BRDF is that different specular functions can be swapped out with whatever approximation you see fit (so long as they correspond to the same geometric meaning). What I mean by this is that there are several approximations to each of these functions; you only need to choose one, but you have the freedom to select whichever you want.
Let’s discuss the factors in more detail.
Fresnel Factor
Slide to compare Fresnel Off (Left) and Fresnel On (Right). 
Fresnel is the amount of light that reflects based on the current angle of incidence between the light and the normal. As the angle of incidence becomes increasingly large, the amount of light that reflects into our eyes becomes greater. At a 90° angle of incidence (AOI), the amount of light that reflects is 100%. An interesting fact about the Fresnel factor is that every known material has some reflection – yes, even the ones you wouldn’t expect. If you look towards a light where you and the light have an increasing angle of incidence, you can force out this specular factor. This makes physical sense: no object completely consumes light.
However, not everything reflects the same amount of light at all angles – in fact, the base reflectance at an angle of incidence of 0° is known as F0. Different types of materials have different values of F0, ranging between roughly 0.01 and 0.95 – absolutely nothing outside of that range. (Silver is the most reflective metal with a base F0 of 0.95; to my knowledge, ice is the lowest with 0.018.)
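As a concrete example, here is a minimal GLSL sketch of Schlick’s Approximation (listed with the other Fresnel equations below), which interpolates from F0 up to full reflectance as the angle of incidence grows – the function name is mine:

// Schlick's approximation of the Fresnel factor.
// F0 is the base reflectance at a 0° angle of incidence;
// VoH is the (clamped) dot product of the view and half vectors.
float fresnelSchlick(float F0, float VoH)
{
  return F0 + (1.0 - F0) * pow(1.0 - VoH, 5.0);
}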
Sc0tt Games has a pretty good table of non-metal reflective indices.
Geometric Occlusion Factor
Slide to compare Smith-Schlick-Beckmann (Left) and Smith-GGX (Right).
The next factor represents the amount of the surface – at a microscopic level – that is self-occluding. This parameter should ideally only affect rough objects: as an object becomes rougher, the amount of microfacet self-occlusion increases, so the amount of specular light observed decreases.
If we try to imagine a perfectly smooth surface, we can identify that there are still impurities on it at a microscopic level. Because of this, we can say that there is some amount of self-shadowing going on, even if it’s small. Smith-Schlick-Beckmann, Smith-GGX, and Cook-Torrance all seem to have pretty good equations for Geometric Occlusion.
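As an example, the Smith-Schlick-Beckmann form can be sketched in GLSL like this (names mine; as described in the Smith note below, the full term is the product of two G₁ calls):

// Schlick-Beckmann G1 term; NoI is the (clamped) dot product between
// the normal and either the light or the view vector.
// alpha is the reparametrized roughness (roughness squared).
float gSchlickBeckmannG1(float NoI, float alpha)
{
  float k = alpha * sqrt(2.0 / 3.141592653589793);
  return NoI / (NoI * (1.0 - k) + k);
}

// Smith form: evaluate G1 for both the light and view directions.
float gSmithSchlickBeckmann(float NoL, float NoV, float alpha)
{
  return gSchlickBeckmannG1(NoL, alpha) * gSchlickBeckmannG1(NoV, alpha);
}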
Normal Distribution Factor
Slide to compare Beckmann (Left) and GGX (Right).
This factor is rather unfortunately named, because it’s often confused with the regular, mathematical Normal Distribution (like what we used to blur Exponential Shadow Maps in the previous blog post). In our case, though, the name refers to the distribution of microfacet normals – so the name is appropriate.
The Normal Distribution is a function which determines the probability that the faces on a microfacet surface are oriented towards the normal of the surface. This tends to control the spread and falloff of the specular term. You often see GGX used because it has a much wider tail to the specular reflection – which is pretty pleasing to the eye.
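For reference, a minimal GLSL sketch of the GGX distribution (the function name is mine; alpha is again the reparametrized roughness, i.e. roughness squared):

// GGX (Trowbridge-Reitz) normal distribution function.
// NoH is the (clamped) dot product of the normal and half vectors.
float dGgx(float NoH, float alpha)
{
  float a2 = alpha * alpha;
  float denom = NoH * NoH * (a2 - 1.0) + 1.0;
  return a2 / (3.141592653589793 * denom * denom);
}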
Microfacet BRDF Equations
So this wouldn’t be an experiment with all the different shading equations without a laundry list of equations to try. I’m just going to list the functions I found, and afterwards talk a little bit about my favorite combinations.
Definitions
Throughout the equations below, n is the surface normal, l is the light vector, v is the view vector, and h = (l + v) / ‖l + v‖ is the half vector between them. Dot products such as n·l are clamped to [0, 1], and α = roughness² is the reparametrized roughness (see the Material Structure section below).
Fresnel Equations
Schlick:
F(v, h) = F0 + (1 − F0)(1 − v·h)⁵
Cook-Torrance:
c = v·h, η = (1 + √F0) / (1 − √F0), g = √(η² + c² − 1)
F(v, h) = ½ ((g − c) / (g + c))² (1 + ((c(g + c) − 1) / (c(g − c) + 1))²)
Geometry Equations
Cook-Torrance:
G(l, v, h) = min(1, 2(n·h)(n·v) / (v·h), 2(n·h)(n·l) / (v·h))
Note: The general form of the Smith equations is to take the product of the function called twice – once with arguments (l, h) and once with arguments (v, h). As such:

G(l, v, h) = G₁(l, h) · G₁(v, h)

In the following equations, the variable i is a placeholder for whichever vector is plugged in first (l or v).
Geometry Equations (Smith)
Smith-Beckmann:
c = (n·i) / (α √(1 − (n·i)²))
G₁(i, h) = c < 1.6 ? (3.535c + 2.181c²) / (1 + 2.276c + 2.577c²) : 1
Smith-Schlick-Beckmann:
k = α √(2 / π)
G₁(i, h) = (n·i) / ((n·i)(1 − k) + k)
Smith-GGX:
G₁(i, h) = 2(n·i) / ((n·i) + √(α² + (1 − α²)(n·i)²))
Distribution Equations
Blinn-Phong:
D(h) = ((α_p + 2) / 2π) (n·h)^α_p, where α_p is the Phong exponent
Beckmann:
D(h) = (1 / (π α² (n·h)⁴)) exp(((n·h)² − 1) / (α² (n·h)²))
GGX:
D(h) = α² / (π ((n·h)²(α² − 1) + 1)²)
Cumulative Distribution (Sample Skewing)
Given low-discrepancy values ξ₁, ξ₂ in [0, 1), the skewed sample angles (θ, φ) are:
Phong:
θ = arccos(ξ₁^(1 / (α_p + 2))), φ = 2π ξ₂
Beckmann:
θ = arctan(√(−α² ln(1 − ξ₁))), φ = 2π ξ₂
GGX:
θ = arctan(√(α ξ₁ / (1 − ξ₁))), φ = 2π ξ₂
Comparison
Generally GGX, or some mixture of Smith/GGX, is very popular. I tend to like different combinations depending on the scene and light composition. I stick with Schlick’s Approximation for Fresnel. For Geometric Occlusion I prefer either Smith-GGX, Smith-Schlick-Beckmann, or Cook-Torrance. And for Normal Distribution, GGX has a longer specular tail – I tend to prefer that. For Importance Sampling, I tend to mix and match (even though mathematically this is incorrect) by using Beckmann sampling with the GGX Normal Distribution. But you can see how the terms work together to produce pretty impressive results.
What’s most impressive about the above picture is that every object here is white. The only changing parameters are Metallic and Roughness.
By these two variables alone we can represent a wide spectrum of materials. Towards the top we can see metals ranging from brushed and rough to smooth and reflective. Moving down the metal spectrum we hit a wall where objects seem to maintain some of their own diffuse color – these are called dielectrics. These objects range from glossy, crystalline materials to smooth plastics. At the rough end of the spectrum you can spot matte surfaces and rubber materials.
In order to compare differences in specular factors, I have implemented all of the functions above as shader subroutines (OpenGL 4.0+), which allows me to dynamically switch factors of the BRDF without recompiling shaders. It’s definitely not as efficient as writing a compact implementation of the entire BRDF, but it allows us to see all of the possibilities with great ease. One interesting anomaly is that Smith-Beckmann didn’t seem to play nicely with any factor aside from the Beckmann distribution. You’ll notice white speckles where the reflection is over-pronounced when Smith-Beckmann is paired improperly.
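For readers who haven’t used shader subroutines, the pattern looks roughly like this (a minimal sketch; the subroutine type and function names here are mine, not the engine’s actual declarations):

// A subroutine type for the (swappable) distribution factor.
subroutine float DistributionType(float NoH);

subroutine(DistributionType)
float dGgxFactor(float NoH)
{
  float a = roughness() * roughness(); // roughness() reads the GBuffer
  float a2 = a * a;
  float d = NoH * NoH * (a2 - 1.0) + 1.0;
  return a2 / (pi * d * d); // pi is defined in Math.glsl
}

subroutine(DistributionType)
float dBeckmannFactor(float NoH)
{
  float a = roughness() * roughness();
  float NoH2 = NoH * NoH;
  return exp((NoH2 - 1.0) / (a * a * NoH2)) / (pi * a * a * NoH2 * NoH2);
}

// The active factor is selected from the CPU side via
// glUniformSubroutinesuiv() - no shader recompilation required.
subroutine uniform DistributionType D;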
Material Structure
The material structure I’ve settled on is a simplified version of Unreal’s material system (Base Color, Metallic, and Roughness).
The Base Color is the color which we use for the diffuse portion of our lighting equation. It also doubles as the specular tint for metallic objects. So if we have an object that falls in the range of dielectrics, this color is used for the diffuse term; if it falls in the range of metals, it’s multiplied in as the specular tint. Metallic is simply the F0 value for the material, and it is clamped to be within the range [0.02, 0.99] (Cook-Torrance’s Fresnel equation didn’t play nice with an F0 of 1, and everything should have at least some specular). Roughness is a term which is used in several of the Microfacet BRDF functions above. In order to make the distribution of rough/smooth more linear, we have reparametrized roughness by squaring it (as outlined above in the Definitions section), and there is a minimum roughness of 0.01 (materials with infinitely smooth surfaces can exist in a vacuum, but due to Cold Welding this is a short-lived experience).
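In code form, conditioning the raw material values might look like this (a minimal sketch; the function names are mine, and in the engine these values actually come from the GBuffer):

// Clamp F0 so Cook-Torrance's Fresnel stays stable and everything
// keeps at least a little specular.
float conditionMetallic(float rawMetallic)
{
  return clamp(rawMetallic, 0.02, 0.99);
}

// Reparametrize roughness (alpha = roughness^2) for a more linear
// rough/smooth response, and enforce the minimum roughness.
float conditionRoughness(float rawRoughness)
{
  float r = max(rawRoughness, 0.01);
  return r * r;
}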
As pointed out above, when travelling along the spectrum of metals we hit a wall where diffuse is no longer applied. This “wall” is what separates the dielectrics (non-conductive materials) from the metals (conductive materials). This section of F0 is more commonly referred to as semiconductors (somewhat conductive materials). An interesting fact about measurements of different metals is that they tend to have absolutely no diffuse term to them. So what I do for convenience’s sake is split the materials into two separate calculations – Dielectrics and Metals – and interpolate between them across the lesser-seen semiconductor range.
Very few materials fall in the range of semiconductors [0.2, 0.45], but for ease of implementation, and to allow some form of physical blending, I do allow these ranges. This semiconductor range is where I interpolate between the two blend models: starting from the base F0 of the semiconductors up to the topmost value, we interpolate between the results of the two models. Here is some shader code showing this interpolation:
// Blend between dielectric and metallic materials.
// Note: The range of semiconductors is approx. [0.2, 0.45]
vec3 BlendMaterial(vec3 Kdiff, vec3 Kspec, vec3 Kbase)
{
  float scRange = smoothstep(0.2, 0.45, metallic());
  vec3 dielectric = Kdiff + Kspec;
  vec3 metal = Kspec * Kbase;
  return mix(dielectric, metal, scRange);
}
The idea is pretty simple. Whenever you have a diffuse and a specular term (pretty much always), plug them into the BlendMaterial function to make sure that the blending is done properly.
Note: How you wish to blend your materials is entirely up to you. Another method is to blend between the minimum and maximum metallic values for a wide spectrum of materials.
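For illustration, that alternative might look something like this (my own sketch, assuming the metallic clamp range from above; this is not code from the engine):

// Alternative blend: interpolate across the full metallic range
// [0.02, 0.99] instead of only the semiconductor band.
vec3 BlendMaterialFullRange(vec3 Kdiff, vec3 Kspec, vec3 Kbase)
{
  float t = smoothstep(0.02, 0.99, metallic());
  return mix(Kdiff + Kspec, Kspec * Kbase, t);
}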
Importance Sampled Image Based Lighting
Another thing we can do to make our application more appealing is some approximation of the light within the environment. Image Based Lighting (IBL) is a nice way of approximating multiple samples on an environment for scenes with highly complex lighting which would otherwise take a lot of time to calculate. Look around the room you’re in – it’s likely there is more than one light on. It’s also possible that there are many lights of different shapes and sizes contributing to the environment. Actually modelling all of this lighting in real time can be a burden.
Image Based Lighting means taking a picture of the environment (for lack of a better phrase) and referencing that picture for ambient light instead of the single, global approximation we were using. In previous work, you might notice that the global ambient light is just some constant value which attenuates into the distance until nothing is left. IBL basically replaces that constant global lighting with a good approximation using an image as reference.
Slide to compare 256 Samples (Left) with 20 Samples (Right). 
The problem is that, depending on the roughness of the object, we may need many samples to properly calculate the reflected light. Otherwise we end up sampling parts of the environment which don’t impact the integral of our reflection by much. This produces gritty, spotty images where you can see that the sampling was not ideal. Modern engines will preprocess that information into multiple textures so that the entire ambient term of a pixel can be applied with a single texture lookup. But what if we don’t want to spend tons of time preprocessing that information?
Importance Sampling is the idea that when we do have to take multiple samples, we skew the samples towards the direction of the view vector reflected across the normal. This sounds complicated, but imagine an entire set of vectors all generally pointing in the same direction – and rotating the whole group to point towards a light – that’s what’s happening here. The initial Importance Sample vectors are calculated by using the integral of the material’s Normal Distribution function (the Cumulative Distribution functions above). So all we need to do is generate some predictable but low-discrepancy set of random numbers, and skew them based on that integral.
Slide to compare 256 Samples (Left) with 20 Importance Samples (Right). 
The Importance Sampled version is much faster. Instead of 256 samples per fragment over the entire hemisphere, we are looking at 20 samples per fragment for a fairly equivalent-looking image. Of course there will be differences since we’re running an approximation, but you can still make out the light sources as well as their size and color – which is the important part. All you really need to do to be “Importance Sampling” is skew your initial data by one of the Cumulative Distribution sample functions above. Another thing you can do to improve the predictable randomness is rotate each sample using the Alchemy XOR rotation (skew the sample points, and rotate the skew based on the pixel coordinates). This noise makes it harder to find repeating patterns in the sampling, which allows us to take fewer samples and get similar results.
Note: The way I learned about Hammersley points in detail was through an excellent blog post, “Points on a Hemisphere”, by Holger Dammertz. I cannot even begin to do his post justice, so if you need more information on the magic Hammersley function below, look no further. I did make one minor alteration to his source – since I’m using OpenGL 4, I’ve opted to use GLSL’s bitfieldReverse instead of the implementation Holger provides.
Here is the resulting GLSL:
// Hammersley function (returns random low-discrepancy points)
vec2 Hammersley(uint i, uint N)
{
  return vec2(
    float(i) / float(N),
    float(bitfieldReverse(i)) * 2.3283064365386963e-10
  );
}

// Random rotation based on the current fragment's coordinates
float randAngle()
{
  uint x = uint(gl_FragCoord.x);
  uint y = uint(gl_FragCoord.y);
  return float(30u * x ^ y + 10u * x * y);
}

// Example function, skewing the sample point and rotating
// Note: E is two values returned from Hammersley function above,
// from within the same loop.
vec2 DGgxSkew(vec2 E)
{
  float a = roughness() * roughness();
  E.x = atan(sqrt((a * E.x) / (1.0 - E.x)));
  E.y = pi2 * E.y + randAngle();
  return E;
}

// Example function, turn a skewed sample into a 3D vector.
// This results in a vector that is looking somewhere in
// the +Z Hemisphere.
vec3 MakeSample(vec2 E)
{
  float SineTheta = sin(E.x);
  float x = cos(E.y) * SineTheta;
  float y = sin(E.y) * SineTheta;
  float z = cos(E.x);
  return vec3(x, y, z);
}
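Putting those helpers together, the sampling loop looks roughly like this (an illustrative fragment; the full version appears in the environment shader in the next section):

// Illustrative: combine Hammersley, skew, and MakeSample per sample.
const uint NumSamples = 20u;
for (uint i = 0u; i < NumSamples; ++i)
{
  vec2 Xi = Hammersley(i, NumSamples);  // low-discrepancy point in [0,1)^2
  vec2 E  = DGgxSkew(Xi);               // skew towards the GGX distribution
  vec3 Li = MakeSample(E);              // direction in the +Z hemisphere
  // ... rotate Li into the surface's tangent frame and accumulate light ...
}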
Putting it All Together
The one last thing I want to show you is that the entire ambient light pass can be done in one go, with a single fullscreen quad. The multiple samples – though reduced – are still too many for us to be taking for every object, especially since objects might be occluded by other objects. So the idea here is that we render a single fullscreen quad, and at each fragment we either write the environment to the screen (depth == 1) or perform the lookup/calculation of the ambient light for the object at that point (depth != 1). The following sample shader shows how this is done. This is essentially the ambient pass which happens before any other light is calculated – and since we’re overwriting values, we don’t need to clear the lightbuffer.
/*******************************************************************************
 * lighting/environment.frag
 *
 * Apply the lighting calculation to a given fragment of incident light.
 * Uses GBuffer information to access statistics about the scene itself.
 ******************************************************************************/

#include <GBuffer.ubo>
#include <Math.glsl>
#include <GlobalBuffer.ubo>
#include <Bindings.glsl>
#include <Physical.glsl>

layout(binding = K_TEXTURE_0) uniform sampler2D environment;
layout(binding = K_TEXTURE_1) uniform sampler2D irradiance;
uniform uvec2 Dimensions;

// Light Output
layout(location = 0) out highp vec4 fFragColor;

// Computes the exact mipmap to reference for the specular contribution.
// Accessing the proper mipmap allows us to approximate the integral for this
// angle of incidence on the current object.
float compute_lod(uint NumSamples, float NoH)
{
  float dist = D(NoH); // Defined elsewhere as subroutine
  return 0.5 * (log2(float(Dimensions.x * Dimensions.y) / NumSamples) - log2(dist));
}

// Calculates the specular influence for a surface at the current fragment
// location. This is an approximation of the lighting integral itself.
vec3 radiance(vec3 N, vec3 V)
{
  // Precalculate rotation for +Z Hemisphere to microfacet normal.
  vec3 UpVector = abs(N.z) < 0.999 ? ZAxis : XAxis;
  vec3 TangentX = normalize(cross(UpVector, N));
  vec3 TangentY = cross(N, TangentX);

  // Note: I ended up using abs() for situations where the normal is
  // facing a little away from the view to still accept the approximation.
  // I believe this is due to a separate issue with normal storage, so
  // you may only need to saturate() each dot value instead of abs().
  float NoV = abs(dot(N, V));

  // Approximate the integral for lighting contribution.
  vec3 fColor = vec3(0.0);
  const uint NumSamples = 20;
  for (uint i = 0; i < NumSamples; ++i)
  {
    vec2 Xi = Hammersley(i, NumSamples);
    vec3 Li = S(Xi); // Defined elsewhere as subroutine
    vec3 H = normalize(Li.x * TangentX + Li.y * TangentY + Li.z * N);
    vec3 L = normalize(reflect(V, H));

    // Calculate dot products for BRDF
    float NoL = abs(dot(N, L));
    float NoH = abs(dot(N, H));
    float VoH = abs(dot(V, H));
    float lod = compute_lod(NumSamples, NoH);

    float F_ = F(VoH);                // Defined elsewhere as subroutine
    float G_ = G(NoL, NoV, NoH, VoH); // Defined elsewhere as subroutine
    vec3 LColor = textureSphereLod(environment, L, lod).rgb;

    // Since the sample is skewed towards the Distribution, we don't need
    // to evaluate all of the factors for the lighting equation. Also note
    // that this function is calculating the specular portion, so we absolutely
    // do not add any more diffuse here.
    fColor += F_ * G_ * LColor * VoH / (NoH * NoV);
  }

  // Average the results
  return fColor / float(NumSamples);
}

void main()
{
  vec3 V = normalize((Current.ViewToWorld * vec4(viewPosition(), 0.0)).xyz);
  vec3 color;

  // No object, instead display the environment.
  if (depth() == 1.0)
  {
    color = textureSphereLod(environment, V, 0.0).rgb;
  }
  // Object, approximate the ambient light.
  else
  {
    vec3 N = normalize((Current.ViewToWorld * vec4(normal(), 0.0)).xyz);
    vec3 L = normalize(reflect(V, N));
    float NoV = saturate(dot(N, V));
    float NoL = saturate(dot(N, L));

    // Calculate different portions of the color
    vec3 irrMap = textureSphereLod(irradiance, N, 0.0).rgb;
    vec3 Kdiff = irrMap * baseColor() / pi;
    vec3 Kspec = radiance(N, V);

    // Mix the materials (baseColor() supplies the metal's specular tint)
    color = BlendMaterial(Kdiff, Kspec, baseColor());
  }

  fFragColor = vec4(color, 1.0);
}
Note that textureSphereLod is just a custom function which samples a spheremap by taking a given direction and translating it to UV coordinates. The function you call here will differ depending on whether you use spheremaps, cubemaps, or dual-paraboloid maps. I used spheremaps purely out of convenience. I’ve heard cubemaps have the lowest distortion of any method, but I haven’t tested this. In this sample, the distortion from the spheremap was not significant enough to be noticeable.
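For the curious, a spheremap lookup along those lines can be written like this (a sketch of one common spheremap mapping, under my own assumptions – not necessarily the exact mapping my engine uses):

// Convert a direction into spheremap UV coordinates and sample.
vec4 textureSphereLod(sampler2D map, vec3 dir, float lod)
{
  float m = 2.0 * sqrt(dir.x * dir.x
                     + dir.y * dir.y
                     + (dir.z + 1.0) * (dir.z + 1.0));
  vec2 uv = dir.xy / m + 0.5;
  return textureLod(map, uv, lod);
}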
Final Renders
That’s all there is to it! It’s a little tough to wrap your head around Microfacet BRDFs, but the results are promising. It also requires that you and your artists are in sync with one another, as it provides artists with different material parameters. But the whole idea is that now we can introduce complicated lighting scenarios, and the material won’t have to be tweaked or altered to fit a different scene with different lighting. Any object should be able to be placed in a new environment, and the lighting should make the object look like it belongs in that environment – that’s the end goal. What’s more impressive is that this end goal is feasible in real time at 60 FPS. For modern titles which try to model the real world, there is no reason not to adopt a more advanced BRDF – even smartphones can afford such technology if it’s implemented properly. Below are a few renders I did using different environments for ambient light – hope you enjoy Physically Based Shading!
References
Note: Links with an asterisk (*) by them indicate a source that was integral to my understanding of Physically Based Shading. I highly recommend these sources.
 Any of the wonderful Siggraph courses (2012, 2013, 2014)
 *Physics and Math of Shading (Naty Hoffman)
 *Understanding the Shadow Masking Function (Eric Heitz)
 Moving Frostbite to PBR (Sébastien Lagarde, Charles de Rousiers)
 Real Shading in Unreal Engine 4 (Brian Karis) [Slides]
 Crafting a NextGen Material Pipeline for The Order: 1886 (David Neubelt, Matt Pettineo)
 *Microfacet Models for Refraction through Rough Surfaces (Bruce Walter, Stephen Marschner, Hongsong Li, Kenneth Torrance)
 PhysicallyBased Shading at Disney (Brent Burley)
Hello,
First of all I want to congratulate you for this post. It helped me a lot to figure how PBR works.
I do have one question though. I’m building a small game engine (mostly to practice on CG) and I’m trying to implement PBR. I set up my BRDF and my material struct and I’m pretty sure everything works fine (or close to fine). The problem is that I do not understand what to do with ambient light (environment) contribution.
Right now I compute light using Cook Torrance model but that’s for point lights. That returns me a result. To that result I figured I should add the environment contribution (reflections from skybox) somehow.
Could you please give me any pointers to help me figure out what to do with ambient?
Hey Dimitris,
So IBL is supposed to be a better approximation to the ambient contribution entirely. And since we have an image with light intensities, we can even approximate the reflection off an object by some approximation to the integral over the contributing light (this is done in the above code through the function radiance). Without having significant light directions with contribution intensities, we just have a global ambient term, like in any other shading model.
All global ambient light represents is the amount of light just existing within the environment at any given point. So it doesn’t vary, it doesn’t cause shading, it doesn’t even introduce specular highlights (because there is no light origin or direction, it just exists). Many game engines implement a distance attenuation which kind of causes something that looks like shading; not sure if you want to consider this, or how common it really is any more. However, I think simply multiplying the diffuse colour of the object with the global lighting contribution should be enough (like “fragColor = material.diffuse * global_ambient; {rest of lighting calculation after…}”). A quick Google search pulls up this article, which seems to paint a pretty good picture of what a simple scalar ambient term represents: http://www.tomdalling.com/blog/modern-opengl/07-more-lighting-ambient-specular-attenuation-gamma/
Global ambient light is really kind of a hack. IBL is just a better approximation (still a hack) of a million distant lights within the environment, where global ambient is just kind of: “eh, there is no light, but this much light just exists everywhere without direction”. You can kind of see this diffuse contribution being applied via the following code from above: “vec3 Kdiff = irrMap * baseColor() / pi;” but again, since we have more dynamic information via the spheremap surrounding the scene at an infinite distance with infinite “points” of light, we can also add a nice handy specular term that wouldn’t normally be possible with _just_ the ambient scalar.
Without IBL, a general specular is usually added via a directional light (that is, a light that just exists, casting light everywhere at some user-defined angle). You may want to consider this minimally if you don’t want to introduce IBL, so that all objects can have specular if they are shiny, even without being within “range” of a light.
Hope this helps!
EDIT: This was a very roundabout response. The simple version is: IBL is the ambient term entirely. Specifically, the contribution of “light just existing” is the irradiance map. I would say you either use IBL, or you use some global constant scalar. Using both might look weird in cases (e.g. a global ambient “red” in an IBL scene that is predominantly “blue” might look weird). You could try just adding (global_ambient * material.diffuse) into the object’s diffuse, but I can’t promise this’ll look too great in all cases.
Hello and sorry for the late response.
Thank you for your reply. It really helped me understand. At first I tried to use IBL like I use the global ambient – just adding it to the diffuse color – which obviously didn’t work; it wasn’t right.
So just to be sure, you’re saying that IBL is yet another light source. So if I have 3 point lights and IBL I compute my BRDF for all of them one by one and I should end up with something like this:
final_color = light1_after_brdf + light2_after_brdf + light3_after_brdf + ambient_IBL_after_env_brdf;
Yup, this is how my application works.
What happens is there is a “light accumulation buffer” that continues to have light added to it, and then during the presentation phase we normalize the colour information and present it on the screen.
I believe this is the file that does that logic: https://github.com/TReed0803/QtOpenGL/blob/master/resources/shaders/gbuffer/viewport.frag
You can see, all that happens is we accept light information from the accumulation buffer, do some tone mapping, and then apply a linear-to-RGB approximation.
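In rough outline, such a presentation pass might look like this (a minimal sketch under my own assumptions – the tone map operator and names are illustrative, not the actual viewport.frag):

// Present the accumulated light buffer: tone map, then linear -> sRGB.
layout(binding = 0) uniform sampler2D lightbuffer; // accumulated HDR light
layout(location = 0) out vec4 fColor;

void main()
{
  vec3 hdr = texelFetch(lightbuffer, ivec2(gl_FragCoord.xy), 0).rgb;
  vec3 mapped = hdr / (hdr + vec3(1.0));             // illustrative Reinhard tone map
  fColor = vec4(pow(mapped, vec3(1.0 / 2.2)), 1.0);  // linear-to-RGB approximation
}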
EDIT: l2rgb is defined in Math.glsl (https://github.com/TReed0803/QtOpenGL/blob/master/resources/shaders/Math.glsl)
Hey man, I’ve been reading your stuff to improve my knowledge in OpenGL using Qt. You make everything seem so simple. Your post about physically based shading has helped me to understand the basics of it. I’ve got a question though, I hope you can answer it.
In their course notes, Epic Games choose D · NoH / (4 · VoH) as their pdf to resolve the integral. However, D then appears in the denominator, so you actually just cancel the D out of the equation – which I thought to be an important part of the Cook-Torrance shading model?
I see that you still compute the distribution so how is your approach different than theirs?
Thanks for your help
Hey Karsten,
When doing importance sampling, if you select a pdf which aligns you with the “shape of samples” (for lack of a better phrase) of the distribution, then it becomes unnecessary to multiply in the normal distribution function (D) again. This allows us to omit certain parts of the equation altogether, since we’re technically sampling our light source along that distribution.
I actually do the same thing, see above:
fColor += F_ * G_ * LColor * VoH / (NoH * NoV);
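To spell out the cancellation (my own expansion, using the Cook-Torrance terms from the article above), each sample contributes the BRDF times the light, weighted by (n·l) and divided by the pdf:

(D F G / (4 (n·l)(n·v))) · LColor · (n·l) / (D (n·h) / (4 (v·h)))
= F · G · LColor · (v·h) / ((n·h)(n·v))

The D in the BRDF cancels against the D in the pdf, and what remains is exactly the line of shader code above.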
I like to describe it that way; I think it makes more sense to think that the shape of our sampling distribution implicitly “creates” the normal distribution. The equations I pulled were from Microfacet Models for Refraction through Rough Surfaces (https://www.cs.cornell.edu/~srm/publications/EGSR07-btdf.pdf) as linked at the bottom of the article.
Now, for why I still use the distribution function – that is a part that Epic alludes to, but doesn’t implement in their course notes. Notice how they have code to do environment map sampling, but they always pass in 0 for the level of detail? Well, using the distribution function is part of the equation for calculating the LoD of the environment map lookup, which reduces the samples needed at runtime. (see: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch20.html)
From their notes:
“The sample count can be reduced significantly by using mip maps [3], but counts still need to be greater than 16 for sufficient quality.”
We see this to be true in my case – I needed about 20 samples to make things look decent. However, as they mention, a better implementation may be to precalculate using the split-sum approximation (I did not implement this). I can’t speak much on the split-sum approach; I wanted to do things the manual way before adopting it, mostly because I wanted to learn what’s going on as best as possible.
Hope this helps!
Thanks for the answer,
now everything is clearer to me.
Good that you mentioned GPU Gems – I was desperately trying to understand how mip maps fit into all this; maybe you should link it too.
I know, I think I have to go the same route, because my materials change color / roughness etc. often, so precalculating wouldn’t be that efficient. (It seems that you have to integrate twice as much.)
Hey Reed,
I hope you can answer me one last question maybe?
Did you precalculate the Irradiance map?
Thanks
Yup, I am using a precalculated irradiance map – sorry for the late response, Karsten!