Exponential Shadow Maps

Shadows are a real problem.

They’re expensive, must be split into several shadow maps/levels, and they involve multiple passes.

It’s something we take for granted every day – shadows are just kind of there; they’re part of lighting. But when we simulate lighting mathematically, the lighting model (regardless of its complexity) simply cannot account for shadow casting within the environment on its own. Shadow mapping has proven to scale well, but it’s not without its problems. In this article we will discuss an approximate shadow mapping technique known as Exponential Shadow Mapping – and see the role it plays in modern shadow algorithms.

Traditional Shadow Maps

Before we can discuss Exponential Shadow Maps, it’s imperative that you understand traditional shadow mapping. A traditional shadow map is essentially the scene rendered from the perspective of the light (as if we could see everything the light touches), with depth information captured for later calculations.

After we generate a shadow map, we then render the light volume from the point of view of the camera (how we did traditional lighting in my last post, Deferred Rendering Pipeline). The big difference when shadowing is involved is that we reference the shadow map by translating our point into the light’s perspective-space, and access the depth value stored there. Given the depth from the light’s perspective, and the calculated depth of the fragment, we can see whether or not a point is in shadow.


The scene rendered with traditional shadow mapping. At the top-left we can see what the light sees.

To recap, the main workflow is something like this:

  1. Render the scene from the light’s perspective.
    (For efficiency reasons, you should only render what the light volume actually touches).
  2. Render the light from the camera’s perspective.
    1. For each fragment, move the view position of the pixel into the light’s perspective-space.
      (Light.ViewToLightPersp * viewPos)
    2. Offset and scale the light perspective point by 0.5 so that it is in texture lookup coordinates [0, 1].
      (Matrices.PerspToTexture * lightPerspPos)
      Note: Matrices.PerspToTexture looks like this:

      Warning: This matrix is column-major, which is the format GLSL expects you to present your matrices in. If you want to construct this matrix row-major, you must transpose the matrix.
    3. Query the shadow map for the depth from the light’s perspective.
      (texture(shadowMap, texturePos.xy) if un-projected, textureProj(shadowMap, texturePos.xyw) otherwise)
    4. Compare the queried value to the actual depth value.
      (found by performing the perspective divide on the light-perspective position, remapped into [0, 1] to match the stored depth; e.g. texturePos.z / texturePos.w)
    5. If the queriedLightDepth is less than or equal to the calculatedLightDepth, then the current pixel is in shadow.
      (Apply no lighting)
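The steps above can be sketched in GLSL as follows. The matrix below is the Matrices.PerspToTexture matrix referenced in step 2.2; the function and parameter names are illustrative, not the post’s original code:

```glsl
// Bias matrix mapping clip space [-1, 1] to texture space [0, 1].
// Column-major, as GLSL expects; transpose it if you build it row-major.
const mat4 perspToTexture = mat4(
    0.5, 0.0, 0.0, 0.0,  // column 0
    0.0, 0.5, 0.0, 0.0,  // column 1
    0.0, 0.0, 0.5, 0.0,  // column 2
    0.5, 0.5, 0.5, 1.0); // column 3

float shadowFactor(sampler2D shadowMap, mat4 viewToLightPersp, vec3 viewPos)
{
  vec4 lightPerspPos = viewToLightPersp * vec4(viewPos, 1.0); // step 2.1
  vec4 texturePos    = perspToTexture * lightPerspPos;        // step 2.2

  float queriedLightDepth    = textureProj(shadowMap, texturePos.xyw).r; // step 2.3
  float calculatedLightDepth = texturePos.z / texturePos.w;              // step 2.4

  // Step 2.5: 0.0 when the fragment is in shadow, 1.0 when it is lit.
  return (queriedLightDepth <= calculatedLightDepth) ? 0.0 : 1.0;
}
```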

To learn in more detail about shadow mapping, I recommend the following tutorial:

Problems With Shadow Maps

In theory, everything above sounds perfect. A little inefficient, but mathematically sound. But when first implemented, you’ll be greeted with a pretty ugly result.

Let’s take the following fragment shader into consideration:
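The shader below is a hedged reconstruction of the kind of naive fragment shader that produces the result shown; the uniform and varying names (uShadowMap, uViewToLightPersp, uPerspToTexture, vViewPos, uLightColor) are illustrative:

```glsl
#version 330 core

uniform sampler2D uShadowMap;    // depth rendered from the light's perspective
uniform mat4 uViewToLightPersp;  // view space -> light clip space
uniform mat4 uPerspToTexture;    // clip space -> [0, 1] texture space
uniform vec3 uLightColor;

in vec3 vViewPos;                // fragment position in view space
out vec4 fragColor;

void main()
{
  vec4 lightPerspPos = uViewToLightPersp * vec4(vViewPos, 1.0);
  vec4 texturePos    = uPerspToTexture * lightPerspPos;

  float queriedLightDepth    = textureProj(uShadowMap, texturePos.xyw).r;
  float calculatedLightDepth = texturePos.z / texturePos.w;

  // Naive comparison, no bias: this is what produces "shadow acne".
  float visibility = 1.0;
  if (queriedLightDepth <= calculatedLightDepth)
    visibility = 0.0;

  fragColor = vec4(uLightColor * visibility, 1.0);
}
```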

Sweet Jesus NO! Oh man What HAPPENED to you Sphere?

The above artifact is known as “shadow acne”. The problem is that the resolution of the shadow map isn’t sufficient to provide a unique depth sample for every point we render from our perspective. And so we get points which are mathematically “in shadow”, even though they should appear lit. Another way of wording it is that the pixels appear to be ever so slightly more distant than the queried depth value. The core problem (at some level) is that our shadow map is of insufficient precision to satisfy our current projection.

But we can’t just allocate a bigger shadow map, that doesn’t actually fix the problem – it may reduce noise, but it probably won’t reduce it by much. The problem is we would need an infinitely precise shadow map to get the values we actually expect – and that’s not possible.

Let’s hack our way out of this problem. A shadow map is already a simulation of lighting, maybe we can fudge the numbers a little to get more accurate appearing results.

For our first hack, we can introduce some epsilon which allows our depths to be off a little. The epsilon will have to differ based on the orientation and perspective of the light, since the amount of possible error changes with the view/light perspective. A general solution is to introduce a bias scaled by the tangent of the angle between the surface normal and the light direction. If you consider how the tangent function acts, this means that the closer the surface comes to being parallel with the light direction (grazing angles), the more steeply the bias grows. As you can imagine, this is only an approximation of the error, but it tends to work in most cases. You may also want to clamp the bias value between some appropriate min/max range.
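A sketch of such a slope-scaled bias in GLSL (the 0.005 scale and the clamp range are illustrative values, not prescriptive):

```glsl
// N: surface normal, L: direction from surface to light (both normalized).
float cosTheta = clamp(dot(N, L), 0.0, 1.0);
float bias = 0.005 * tan(acos(cosTheta)); // grows sharply at grazing angles
bias = clamp(bias, 0.0, 0.01);

// Shift our computed depth toward the light before comparing.
float visibility = 1.0;
if (queriedLightDepth <= calculatedLightDepth - bias)
  visibility = 0.0;
```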

Much nicer, this is more what we expect.

This looks nicer, but there are still problems. First is the presence of an if statement in the shader code. If statements are very disruptive to GPUs, and can cause major slowdown for fragments which must branch in execution. One way to combat this is to set an edge at which the value moves from 0 to 1. The best function for a hard edge like this is GLSL’s step function. Though you may think it does the same thing as your if statement, it’s actually far more efficient about it. You should always prefer numerically adjusting values over branching statements.
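For example, the biased branch from earlier can be collapsed into a single step call (step(edge, x) returns 0.0 when x < edge and 1.0 otherwise):

```glsl
// Branching version:
//   float visibility = 1.0;
//   if (queriedLightDepth <= calculatedLightDepth - bias)
//     visibility = 0.0;

// Branch-free version: 1.0 when the queried depth lies beyond our biased
// depth (lit), 0.0 when it lies in front of it (shadowed).
float visibility = step(calculatedLightDepth - bias, queriedLightDepth);
```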

Okay it’s more efficient, but the original problem that introduced the shadow acne still exists in another fashion.

Up close, I can approach the shadow in such a way that I can easily identify the shadow’s hard edge. This is not how real shadows look, since an environment isn’t ever lit by a single point of light, but rather by a volume which distributes light around the room. These area lights are difficult to model, so we tend to represent light as if it comes from a single point for a first pass on lighting. If you look at a shadow in the real world, you’ll notice that the shadow has some area where it’s darkest, and then some blurry transition from the shadowed to the lit area (known as the penumbra).

Traditional shadow maps present us with the following problem (notice the pixelation):

Notice the hard shadow cast by the sphere? It’s jarring compared to the spotlight’s falloff.

Exponential Shadow Mapping

A simple way of tackling these hard-edged shadows is to take many samples, or to estimate, based on some random variable distribution, how far inside/outside of the shadow we are. One of the easiest implementations is Percentage Closer Filtering (PCF), which can be implemented utilizing the hardware by using sampler2DShadow uniform types. Another method is to do the sampling yourself using a Poisson distribution. However, Exponential Shadow Maps go an entirely different route.
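As a point of comparison, hardware PCF is nearly a one-liner once the depth texture has GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_REF_TO_TEXTURE and is bound as a sampler2DShadow (a sketch; names are illustrative):

```glsl
uniform sampler2DShadow uShadowMap;

// textureProj divides by .w, compares the resulting z against the stored
// depth, and (on most hardware) returns a bilinearly filtered result in [0, 1].
float visibility = textureProj(uShadowMap, texturePos);
```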

Exponential Shadow Maps store a part of the approximation of the shadow casting test, and allows us to recombine it later for the actual approximation.

The whole equation for Exponential Shadow Maps is as follows:

visibility = clamp( e^(c·z) · e^(−c·d), 0, 1 ) = clamp( e^(c·(z − d)), 0, 1 )

In the above algorithm, c is some constant chosen by the user, z is the actual rendered depth from the light’s perspective, and d is the depth which we are testing against.

The derivation for this is pretty straightforward (and covered in the original paper linked to above) – so I won’t cover it here. However, the key factor is that this new representation of the shadow test allows us to split an important part of the algorithm into the shadow mapping pass. With our previous shadow mapping technique, we could not filter or change the shadow map; the only way we could find the “edge” of a shadow was by taking multiple samples. In this case, the test is split partly into the rendered shadow map, and since the final approximation is simply scaled by the shadow map value, we have more freedom to filter the values in the shadow map.

You can see how the math works out: if we have values for z and d that approach each other, the product of the two factors approaches 1 (lit). The further apart these two values get, the closer the product approaches 0 (shadowed). For transparent objects, it’s possible to get a value greater than 1 – but that isn’t much to worry about; we can simply clamp it. However, this convenient distance check introduces another problem not previously known to shadow mapping – light bleeding. That’s what the c parameter is for: a larger c causes a larger potential difference between the two factors. It doesn’t really eliminate the problem, it just masks it in a semi-appropriate way.

The big problem is that as c becomes larger, the value stored in the shadow map increases, well, exponentially! This means that it’s very easy to run ourselves out of precision, and be stuck with shadow maps which simply don’t work. In your implementation, you may also try to scale the end result in the shadow casting fragment shader so that it hides the light bleeding a bit – I haven’t tested it, but I’d imagine it could do some work without ruining your precision.

The core shadow mapping algorithm remains largely unchanged, except that the value we save from the mapping pass isn’t simply the depth (e.g. we need a fragment shader rather than the depth attachment alone), and the calculation of the shadow caster visibility is different.
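A minimal sketch of both halves, assuming c is passed in as the uniform uExponent (all names are illustrative): the mapping pass writes exp(c·z) into a floating-point color target, and the shading pass multiplies the fetched value by exp(−c·d):

```glsl
// --- Pass 1: shadow map fragment shader, stores exp(c * z) ---
uniform float uExponent; // the constant c
out vec4 fragColor;

void main()
{
  float z = gl_FragCoord.z;              // depth from the light, in [0, 1]
  fragColor = vec4(exp(uExponent * z));  // requires a float target (e.g. R32F)
}

// --- Pass 2: shading, recombine with the receiver depth d ---
// float occluder   = texture(uShadowMap, texturePos.xy / texturePos.w).r; // exp(c*z)
// float d          = texturePos.z / texturePos.w;
// float visibility = clamp(occluder * exp(-uExponent * d), 0.0, 1.0);
```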

The visibility in the above fragment will be some value in [0, 1] representing the approximate amount of unobstructed light at this point. This is basically a 1:1 correspondence with the exponential shadow map equation listed above. The biggest thing to keep in mind is that the equation is only an approximation of a shadow. Now we can blur the shadow map to give a soft shadow effect.

Compute Shader Gaussian Blur

Update (3/22/15):

I was going to provide some reasoning for the values used in the compute shader, as well as information about when to prefer compute shaders to fragment shaders. But it’s a separate topic, so for now we will simply show that a compute shader is possible. We will later cover when to prefer a compute shader, and what some of the compute shader best practices are. I would like to give it more time than this article would allow.

I haven’t tested whether it would be faster to use a compute shader or a regular fragment shader to apply this blur. Regardless, I figured it was probably a good time to become acquainted with compute shaders. A compute shader is a special kind of shader whose only input is through the form of uniforms, shader storage buffer objects, bound textures, and dispatch sizes. It’s basically a way to program the GPU to do some general computing for you, and hopefully return the result in less time.

The biggest issue with compute shaders is knowing when it’s appropriate. Sometimes the setup time of a compute shader is enough that doing it on your own (on the CPU, or within the fragment shader) would actually be more efficient! So, efficiency testing aside, we’re going to delve into compute shaders 101.

General GPU Layout

Many programmers are used to thinking in terms of the few cores they have on the CPU. The kernel handles scheduling, so most of the time we don’t even need to worry about which core we run on. Writing code which executes on a GPU is a little different. Let’s look at a very simplified memory model for the modern GPU.


Of course, scale this to hundreds of threads in each workgroup, and hundreds of workgroups in each GPU. Essentially, this is what you’re programming for. The closer to the thread that you put the memory, the faster access to that memory will be. The general idea for a GPU is that the threads in a workgroup all work together on a problem, and then the answer is put back in some range of memory that nothing but that workgroup is allowed to touch.

So the idea here is that we will separate parts of the picture out to be blurred by each workgroup, and each workgroup will help build the final blurred image. We want to store temporary information in the shared memory space so that we can efficiently calculate a result, and then we will commit the information back to the global memory space for access. That’s a really quick overview of what a GPU (approximately) looks like. Another important fact is that workgroups cannot communicate with one another.

Local Workgroup Layout

As mentioned, compute shaders don’t actually have any input values. Unlike vertex and fragment shaders, which have very obvious in/out keywords, compute shaders don’t allow them for general variables. There is one main input that the user can provide – the local workgroup size. This basically says how big the requested workgroup is, or rather, how many threads we want to populate for a particular work item. We can select any dimension of local thread size; the total number of executed threads for that workgroup is all of the sizes multiplied together. The GLSL for this workgroup size is a special nameless in attribute.
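It looks like this (a 16×16 workgroup – 256 threads total – chosen arbitrarily for illustration):

```glsl
// Nameless "in" declaration: requests a 16 x 16 x 1 local workgroup.
layout(local_size_x = 16, local_size_y = 16, local_size_z = 1) in;
```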

If you want only a 1-dimensional workgroup, you can provide just the local_size_x value; in this case the other values are implied to be 1. If a size greater than the maximum workgroup size allowed by the hardware is requested, a compiler error will occur.

Shared Keyword

The shared keyword declares a variable that is to be shared within the workgroup. Creating such a variable is as simple as adding the keyword shared to the beginning of any declaration. The shared memory cannot be initialized to anything, though (which thread would get to initialize it?) – so if you want the values to start at some number, the threads have to work together to initialize them.

As mentioned above, the idea is that shared memory access for the workgroup is faster than global memory access. So we can load information into the shared workgroup space, everyone can reference and write to it, and then everyone sends information back to the global memory space.
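A sketch of that pattern (the array size and names are illustrative): each thread loads one element, then a barrier guarantees the whole array is populated before anyone reads a neighbor’s entry:

```glsl
layout(local_size_x = 128) in;

shared float cache[128]; // one slot per thread in the workgroup

void main()
{
  uint i = gl_LocalInvocationID.x;
  cache[i] = float(i); // cooperative initialization: each thread fills its slot
  barrier();           // wait until every thread in the workgroup has written
  // ... now any thread may safely read cache[j] for any j ...
}
```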

Predefined Variables

Like fragment and vertex shaders, compute shaders define some variables that allow a thread to see information about its execution: where it is in the global execution queue, which unique identifier the local thread has, and the size of the current workgroup.

  • uvec3 gl_WorkGroupSize
    • Defines the size requested by the Local Workgroup Layout. (local_size_x, local_size_y, local_size_z)
  • uvec3 gl_NumWorkGroups
    • Defines the values passed into the C call to glDispatchCompute(x, y, z). (x, y, z)
  • uvec3 gl_WorkGroupID
    • Defines the unique identifier of the workgroup, based on the executions requested by the C call to glDispatchCompute().
    • Within the range (0, 0, 0) -> (gl_NumWorkGroups.x - 1, gl_NumWorkGroups.y - 1, gl_NumWorkGroups.z - 1)
  • uvec3 gl_LocalInvocationID
    • Identifies the current executing thread’s unique identifier within this workgroup.
    • Within the range (0, 0, 0) -> (gl_WorkGroupSize.x - 1, gl_WorkGroupSize.y - 1, gl_WorkGroupSize.z - 1)
  • uvec3 gl_GlobalInvocationID
    • Identifies the globally unique identifier for the current thread, based on a combination of the above variables.
    • Can be calculated as gl_WorkGroupID * gl_WorkGroupSize + gl_LocalInvocationID
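For an image-processing shader, gl_GlobalInvocationID is usually all you need to map one thread to one pixel (a sketch; the image binding and format are illustrative):

```glsl
layout(local_size_x = 16, local_size_y = 16) in;
layout(r32f, binding = 0) uniform image2D uImage;

void main()
{
  // One thread per texel: the global id doubles as the pixel coordinate.
  ivec2 pixel = ivec2(gl_GlobalInvocationID.xy);
  vec4 value = imageLoad(uImage, pixel);
  imageStore(uImage, pixel, value); // identity pass-through
}
```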

Final Compute Shader

Though I haven’t covered all of the intricacies, the rest of the information can be found pretty easily in the comments and code provided. This is my final blur compute shader.
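Below is a hedged reconstruction of a one-dimensional (horizontal) Gaussian blur pass in the same spirit; the 128-thread row, the 32-weight limit, and the shared-memory staging mirror the constraints described in the text, but the uniform names and binding points are assumptions:

```glsl
#version 430
layout(local_size_x = 128) in;

layout(r32f, binding = 0) uniform readonly  image2D uInput;
layout(r32f, binding = 1) uniform writeonly image2D uOutput;

// Normalized Gaussian weights; uWidth <= 32 since shared memory below
// needs a compile-time bound. (Note: std140 pads float array elements to
// 16 bytes, so the host-side buffer must account for that.)
layout(std140, binding = 0) uniform BlurData
{
  int   uWidth;
  float uWeights[32];
};

// One row segment plus a 32-pixel halo on each side.
shared float sRow[128 + 2 * 32];

void main()
{
  ivec2 size  = imageSize(uInput);
  ivec2 pixel = ivec2(gl_GlobalInvocationID.xy);
  uint  local = gl_LocalInvocationID.x;

  // Each thread loads its own pixel (offset past the left halo).
  sRow[local + 32] = imageLoad(uInput, clamp(pixel, ivec2(0), size - 1)).r;

  // The first 32 threads also fetch the left and right halo pixels.
  if (local < 32)
  {
    ivec2 left  = pixel + ivec2(-32, 0);
    ivec2 right = pixel + ivec2(128, 0);
    sRow[local]            = imageLoad(uInput, clamp(left,  ivec2(0), size - 1)).r;
    sRow[local + 32 + 128] = imageLoad(uInput, clamp(right, ivec2(0), size - 1)).r;
  }
  barrier(); // the whole row segment is now staged in shared memory

  // Weighted sum over [-uWidth, +uWidth], all reads from fast shared memory.
  float sum = uWeights[0] * sRow[local + 32];
  for (int i = 1; i <= uWidth; ++i)
    sum += uWeights[i] * (sRow[local + 32 - i] + sRow[local + 32 + i]);

  if (pixel.x < size.x && pixel.y < size.y)
    imageStore(uOutput, pixel, vec4(sum));
}
```

A second dispatch with the roles of x and y swapped produces the vertical pass; together the two passes yield the separable Gaussian blur.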

The Gaussian Blur weights provided can be generated by using the Normal Distribution equation, with some given input parameter for the standard deviation. I’m fairly new to random distributions – in fact I just started learning about them this year – so I’m unsure if I’m implementing them in the best way possible. What I opted to do was simply create a class which generates the distribution data and can be written to a Uniform Buffer Object for use within the compute shader. I’ve also opted to limit the blur to a width of 32 (the compute shader requires some constant size for its shared memory declarations).


Normal Distribution

f(x) = 1 / (σ · √(2π)) · e^(−(x − μ)² / (2σ²))


I’ve simply coded up the Normal Distribution function, and allow variables for the sample position, mean, and standard deviation to be passed into it. Values at certain points are then calculated, and in the end we divide each value in our normal distribution by the sum of the values (normalizing it so that the sum is 1). In general, we don’t want to offset the normal distribution, so my “BlurDataBuilder” really only accepts a width and a standard deviation, passing 0 into the normal distribution function for the mean. How I generate such weights is as listed:
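The weight generation is just the formula above evaluated at integer offsets and then normalized. Expressed in GLSL for consistency with the other listings (the original builder is host-side code; the names here are illustrative):

```glsl
const float PI = 3.14159265358979;

float normalDistribution(float x, float mean, float stdDev)
{
  float a = (x - mean) / stdDev;
  return exp(-0.5 * a * a) / (stdDev * sqrt(2.0 * PI));
}

// Fills weights[0..width] with a normalized half-kernel (mean fixed at 0).
void buildBlurWeights(out float weights[32], int width, float stdDev)
{
  float sum = 0.0;
  for (int i = 0; i <= width; ++i)
  {
    weights[i] = normalDistribution(float(i), 0.0, stdDev);
    // Off-center taps get applied on both sides of the pixel, so count twice.
    sum += (i == 0) ? weights[i] : 2.0 * weights[i];
  }
  for (int i = 0; i <= width; ++i)
    weights[i] /= sum; // normalize so the full kernel sums to 1
}
```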

Putting it All Together

The idea is that you do two passes which blur the screen both horizontally and vertically, producing a Gaussian blurred image. I’ve noticed that in general I want my width to match my standard deviation, or else the weights provide little-to-no difference past the standard deviation’s “bounds”. Below are a few pictures showing the exponential shadow map (in order to display it we take the log of each value and divide by the exponential constant c). One picture shows an unfiltered map (i.e. no blur); the other shows a filtered map with a width of 8 and a standard deviation of 8.

Exponential Shadow Map (Unfiltered)
Exponential Shadow Map (Filtered [Width = 8, Deviation = 8])

Final Renders

Exponential Shadow Maps are a pretty simple addition to any existing graphics framework. They just require a little understanding of the underlying topics, and of what the variables in the Exponential Shadow Mapping equation stand for. The hardest part of any shadow mapping algorithm is setting the framework up for the multiple passes with different render properties. After that, it’s just implementing variations of different shadow mapping techniques. Though I dislike the approximation that Exponential Shadow Maps provide, I understand that it’s extremely convenient to be able to soften shadows in such a trivial way – so I will likely live with the algorithm’s shortcomings (they’re too small to really care about).


 — And the below pictures are properly gamma corrected to allow for multiple lights —

This image re-enables proper gamma correction, which is not on for the other images.

Again, gamma corrected, with the bunny model instead of the sphere.
