# Screen Space Ambient Occlusion

Previously we updated our BRDF to use Physically Based Shading (PBS) and gave the ambient term an overhaul with Importance Sampled Image Based Lighting (IBL). These changes alone went a long way toward making our real-time renderer near-realistic (relative to real-time rendering). The advanced ambient term has one issue, though – the ambient contribution is independent of the geometry and of how much of the environment is actually visible. For a more realistic render, we must approximate the amount of the environment that each fragment sees. The way we do this is through a process known as Ambient Occlusion.

# Ambient Occlusion

Ambient Occlusion is the process of, well – occluding the ambient term. As simple as it sounds, remember that our ambient term is an approximation of its own. We’re not actually raycasting, so we cannot rely on the implied probability of light making its way into cracks and crevices for our occlusion term. That being said, we do have enough information to give a pretty good estimate of the occlusion term for each fragment.

*Slide to compare SSAO Disabled (Left) and SSAO Enabled (Right).*

We usually want to work with our algorithms in screen space to reduce the likelihood of precision errors if we happen to be far away from the origin. So whenever you see “screen-space”, you should be pretty interested already. For Ambient Occlusion, this is no different, and as such we will prefer Screen Space Ambient Occlusion to alternatives.

## Alchemy Ambient Occlusion

I looked at a few different kinds of ambient occlusion, but the results from the Alchemy paper stood out as the best to me. Alchemy defines the obscurance estimation as follows:

$A \approx \max \left( 0,\ 1 - \frac{2\sigma}{s} \sum_{i=1}^{s}\frac{\max(0,\ \overrightarrow{v_i}\cdot \widehat{n}+z_c\beta)}{\overrightarrow{v_i}\cdot\overrightarrow{v_i} + \epsilon} \right )^k$

Where…

$\overrightarrow{v_i}$ is the vector from the fragment position to the $i^{th}$ sample position.
$\widehat{n}$ is the fragment’s normal.
$z_c$ is the depth in camera space of the fragment.
$\beta$ is a threshold modifier for fixing depth-based precision issues and light bleeding.
$\sigma$ is the shadow scalar (typically for increasing the power of the occlusion).
$k$ is a power modifier for contrast control of the shadow.
$\epsilon$ is a small epsilon for preventing divide by 0. (Typically 0.0001)

That all sounds pretty complicated, but if you think about what it means geometrically, it’s fairly simple. We sample $s$ texels around the current fragment – for each sample, we dot the fragment’s normal $\widehat{n}$ with the vector $\overrightarrow{v_i}$ from the fragment to the sampled point. This gives us an intensity of occlusion (the partial magnitude of $\overrightarrow{v_i}$ along the normal), turned into a falloff-weighted percentage by dividing by the squared magnitude of $\overrightarrow{v_i}$ (plus $\epsilon$). You could imagine that a very powerful obscurance factor is introduced by a sample which is the maximum sampled distance away from our fragment, with the resulting $\overrightarrow{v_i}$ along the fragment’s normal (by being overhanging geometry – for example, a fragment underneath a car samples the bottom of the car itself).
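Writing a single summand out geometrically (ignoring $\beta$ and $\epsilon$ for a moment) makes this explicit:

$\frac{\max(0,\ \overrightarrow{v_i}\cdot\widehat{n})}{\overrightarrow{v_i}\cdot\overrightarrow{v_i}} = \frac{\|\overrightarrow{v_i}\|\cos\theta_i}{\|\overrightarrow{v_i}\|^2} = \frac{\cos\theta_i}{\|\overrightarrow{v_i}\|}$

where $\theta_i$ is the angle between the normal and the sample vector. A nearby sample straight along the normal (the overhang case) contributes the most occlusion, while samples in or below the tangent plane contribute nothing.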

The depth modifier $\beta$ allows us to apply an additive offset to handle cases where the distance introduces error between our normal $\widehat{n}$ and vector $\overrightarrow{v_i}$. In these cases, $\beta$ is used to scale the shadow when we are within a corner or crack. The scalar $\sigma$ changes the intensity of the shadowed parts by being multiplied into the final summation value. I’ve found that some modification of $k$ – which controls the overall contrast between the shadowed and the non-shadowed areas – is necessary for crisper shadows. This is very similar to the contrast term in tone mapping algorithms, which is also applied as a power modification.

The algorithm above was derived from an earlier algorithm:

$A \approx 1-\frac{2\pi\sigma}{s}\sum_{i=1}^{s}\frac{\max(0,\ \overrightarrow{v_i} \cdot \widehat{n})\cdot X^+(r-\|\overrightarrow{v_i}\|)}{\max(\sigma^2,\ \overrightarrow{v_i}\cdot\overrightarrow{v_i})}$

Which contains the Heaviside function $X^+$ (GLSL’s step function) for ignoring points outside of the radius $r$. I’ve found that the control over the shadow’s intensity via $\sigma$ was too important to remove, and dividing by the potentially larger $\sigma^2$ (depending on the magnitude of $\overrightarrow{v_i}\cdot\overrightarrow{v_i}$) made it unusable as a shadow scalar – I found myself constantly modifying $k$ for added contrast. This was not ideal, so I stuck with the removal of $\sigma^2$ from the denominator, but I did not remove the Heaviside function $X^+$ (geometrically, I could not make sense of their decision to remove it). My reasoning is that I don’t wish to take into account points which lie far outside of the allowable ambient occlusion radius. These modifications bring us to the following function. (I’ve hard-coded $\epsilon$ as the recommended 0.0001, as its only purpose is to prevent division by 0 – though it doesn’t need to be hard-coded and could be configurable.)

$A \approx \max \left(0,\ 1-\frac{2\sigma}{s}\sum_{i=1}^{s}\frac{\max(0,\ \overrightarrow{v_i} \cdot \widehat{n})\cdot X^+(r-\|\overrightarrow{v_i}\|)}{\overrightarrow{v_i} \cdot \overrightarrow{v_i} + \epsilon} \right)$

Note: I attempted to do smooth blending with an $R_{Inner}$ and $R_{Outer}$ which would GLSL-smoothstep between their respective values. But I found that I often set $R_{Inner} = R_{Outer}$, removing the need for the smoothstep. It makes sense that only the Heaviside is needed, because the amount of occlusion is already taken into account by $\overrightarrow{v_i} \cdot \widehat{n}$ in the numerator. It’s not the same physical property (it doesn’t represent separation allowing light to bleed in; it more represents an overhang), but I accepted the overhang-only approach, as the shadows looked more reasonable with lesser values.

The other change I made was an alteration suggested by the paper Scalable Ambient Obscurance – applying a random rotation based on an XOR hash of the x and y pixel coordinates.
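That hash looks roughly like the following (a hedged sketch after the Scalable Ambient Obscurance paper; the exact constants may differ from the article’s original listing):

```glsl
// "Predictably random" per-pixel rotation angle, hashed from the integer
// x and y pixel coordinates with an XOR (after Scalable Ambient Obscurance).
// Note: ^ binds looser than + and * in GLSL, so the expression parses as
// (3 * x) ^ (y + x * y), matching the paper's one-liner.
float randAngle()
{
  int x = int(gl_FragCoord.x);
  int y = int(gl_FragCoord.y);
  return float((3 * x ^ y + x * y) * 10);
}
```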

This rotation is utilized in the random disk-selection of our sample points, applying a “predictably random” rotation to the initial sample point selection. This is done so that pixels near each other with similar surrounding geometry don’t sample the same points multiple times – which, unfiltered, may be identifiable as “repeats” in the shadow-sampled geometry. I mentioned this briefly in my previous blog post, Physically Based Shading and Importance Sampled Image Based Lighting, but I did not show an example of the artifacts it produces. Below you can see an example of sampling without applying some kind of random rotation to each fragment’s sample points.

### Example of the Error (See below for what proper sampling looks like)

Notice how the geometry seems to “repeat” within the shadows? This is less visible after the Bilateral Blur is performed – but if we “randomize” the pattern we get better initial data.

I also use an arithmetic “if” (GLSL’s step function) to make sure the current sample is within the visible region. The reason for this is to avoid the edges bleeding into the SSAO buffer as “shadowed” area, which is incorrect. You can see these artifacts for yourself by simply omitting the EdgeError variable in your calculations (below), and having a perfectly smooth surface which goes off one of the edges.

Putting this together, my final ambient occlusion shader looks like this:
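A hedged sketch of such a shader, consistent with the estimator above – `viewPosition()`, `viewNormal()`, `randAngle()`, and the uniform names are assumed helpers and parameters, not necessarily the original code:

```glsl
#version 430

// Hedged sketch of an Alchemy-style SSAO fragment shader. The helpers
// viewPosition(), viewNormal(), randAngle(), and all parameter names
// are assumptions standing in for the original listing.
struct CurrentData { vec2 Dimensions; };
uniform CurrentData Current;     // window dimensions in pixels
uniform float ScreenRadius;      // max sample offset in pixels
uniform float SampleRadius;      // view-space rejection radius r
uniform float ShadowScalar;      // sigma
uniform float ShadowContrast;    // k
uniform float DepthThreshold;    // beta
uniform int   NumSamples;        // s
const float Epsilon = 0.0001;
const float pi2 = 6.28318530718; // 2 * pi

out float AmbientOcclusion;

void main()
{
  vec2  uv  = gl_FragCoord.xy / Current.Dimensions;
  vec3  P   = viewPosition(uv); // view-space position of this fragment
  vec3  N   = viewNormal(uv);   // view-space normal of this fragment
  float phi = randAngle();      // per-pixel random rotation (XOR hash)
  float occlusion = 0.0;

  for (int i = 0; i < NumSamples; ++i)
  {
    // Spiral disk sampling around the fragment, rotated by phi.
    float alpha    = (float(i) + 0.5) / float(NumSamples);
    float theta    = pi2 * alpha * 7.0 + phi;
    vec2  offset   = alpha * ScreenRadius * vec2(cos(theta), sin(theta));
    vec2  sampleUv = uv + offset / Current.Dimensions;

    // Arithmetic "if" standing in for the EdgeError guard: samples that
    // fall off-screen must not count as occluders.
    float onScreen = step(0.0, sampleUv.x) * step(sampleUv.x, 1.0)
                   * step(0.0, sampleUv.y) * step(sampleUv.y, 1.0);

    vec3 Pi = viewPosition(sampleUv);
    vec3 vi = Pi - P;

    // Heaviside X+: ignore samples outside the view-space radius r.
    float inRange = step(length(vi), SampleRadius);

    occlusion += onScreen * inRange
               * max(0.0, dot(vi, N) + P.z * DepthThreshold)
               / (dot(vi, vi) + Epsilon);
  }

  float A = max(0.0, 1.0 - (2.0 * ShadowScalar / float(NumSamples)) * occlusion);
  AmbientOcclusion = pow(A, ShadowContrast);
}
```

Note that the screen-space step (`ScreenRadius`, in pixels) and the view-space rejection radius (`SampleRadius`) are kept as separate knobs in this sketch; the Scalable Ambient Obscurance paper instead derives the pixel radius from the projected view-space radius.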

## Bilateral Blur

Much like with IBL, we notice artifacting and spottiness when we take few samples. In this case we must blur the sampled image in order to hide the under-sampling. A bilateral blur can’t technically be done as a separable two-pass filter, but we will ignore that and do it anyway, because for the most part any error contributed by the lack of separability is hard to notice.

*Slide to compare Unfiltered (Left) and Filtered (Right).*

The algorithm for blurring an image while preserving hard cuts and edges is very similar to our original Gaussian blur from the Exponential Shadow Mapping article. The only difference is that we will now weight the blur contribution (inversely) of each texel by the difference in view depth between the blurring point, and each contributing point within the blur. Much like the previous blurring algorithm, we will also do this two-pass (vertical and horizontal). Recall that this is technically incorrect – but the results are generally indistinguishable from a gameplay perspective – it’s best to accept the error this produces and proceed with the simplest method.

The algorithm looks something like this:

$\widehat{I}(x) = \frac{\sum_{x_i\in\Omega}W(x_i,x)\,I(x_i)}{\sum_{x_i\in\Omega}W(x_i,x)}$

Where…

$W(x_i,x) = R(x_i,x)\,S(x_i,x)$
$R(x_i,x) = \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(d_i-d)^2}{2\sigma^2}}$

$S(x_i,x) = \textrm{Normal Distribution (spatial Gaussian) Weights}$

Effectively, the only change past our previous blur compute shader is to compute the range modifier function $R$. $\sigma$ represents the standard deviation of our normal distribution, while $\sigma^2$ is its variance. Our values $d$, $d_i$, $N$, $N_i$ come from either the current fragment’s depth/normal, or the $i^{th}$ comparison point’s depth/normal (all of these values in view-space). To help make things a little faster, I loaded this data (the view depths and the view normals) into shared memory, so that after everything was loaded, the workgroup could simply number-crunch and return the result.
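The per-tap weight can be sketched in GLSL like this (hedged; the function and parameter names are mine, not the original compute shader’s):

```glsl
// Hedged sketch of the bilateral weight W = R * S for one neighbor tap.
// spatialWeight is the precomputed Gaussian weight S for this tap;
// d and di are the center and neighbor view-space depths; sigma is the
// standard deviation of the range kernel R. The 1/(sqrt(2*pi)*sigma)
// factor cancels when dividing by sum(W), so it is dropped here.
float bilateralWeight(float spatialWeight, float d, float di, float sigma)
{
  float diff  = di - d;
  float range = exp(-(diff * diff) / (2.0 * sigma * sigma));
  return spatialWeight * range;
}

// Accumulate sum(W * I) and sum(W) over the window, then divide:
//   blurred = weightedSum / weightTotal;
```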

Note: It was mentioned to me that there is an error with the compute shader from the Exponential Shadow Map tutorial – bounds were not being checked properly. For Spotlight Exponential Shadow Maps, these errors were generally introduced within the range which is shadowed entirely by the spotlight-factor (so I didn’t bother fixing them). Though generally this can prove to be troublesome for other types of lights. For blurring the ambient occlusion, this is definitely a problem – we see and use all of the ambient occlusion map. So another change is that you must check your bounds and do whatever you can to reduce banding from blurring the edges.

Again, I unfortunately have not compared the compute shader’s performance with a fragment shader. I simply want to become comfortable with compute shaders – at a later date I will hopefully get around to comparing the performance of a good compute shader vs. a good fragment shader for filtering. I’d like to give it more thought than the time/space that these posts will currently allow. (Most likely, I’ll get into identifying more bottlenecks when I get around to a major refactor of the framework I’ve developed.)

## Final Images

And with just one fullscreen quad, 20 samples/fragment, clocking in at 9.6 million samples for a traditional 800×600 application – Screen Space Ambient Occlusion is born! This generally requires less filtering and sampling than IBL (unless you’ve pre-computed your IBL, in which case SSAO is definitely the hog here). The results are beautiful, and it really helps to place the object within the environment. For the following set of screenshots, I determined the best way to sample the effect would be with Disabled/Enabled sliders, and I have the SSAO turned way up in terms of shadow intensity and contrast. However, the computation is still the same, and they’re all still 20 samples – they just look pretty darn good like that.


## 8 thoughts on “Screen Space Ambient Occlusion”

• Michael Speer

Trent, wonderful article! Really solid work, and expertly explained. But I think it might be the case that the sliders are backwards, at least in Google Chrome? I seem to be getting the SSAO enabled images (and filtered dragon) when the slider is to the left.

• Trent Post author

Oh odd, I don’t seem to be having this issue. Thanks for bringing it to my attention, though. Would you mind sending a screenshot of the issue to treed0803@gmail.com so I can confirm it? Everything seems solid on my end (Chrome).

• Magnus

Great article! Just a minor comment that in the initial formula, the power term, “k”, is missing (it looks like it’s been cropped out of the image).

• Trent Post author

I approved this message quite a long time ago, and completely forgot to fix the article. :v
Sorry about that, and thanks for the correction!

• Olaf

Hey! I know that it’s been quite some time since you wrote the code. I have a few questions. What is viewPosition()? You seem to get that function from a buffer object, and it can take samples as an input? Are the Current.Dimensions just the window’s X and Y dimensions? What are pi and pi2 in line 36? And lastly: Is there a workaround if I don’t have a normal buffer? I was trying to get this working in Vizard, whose OpenGL implementation is kind of outdated.

• Trent Post author

Hey Olaf,

viewPosition() returns the position in 3D space of the current executing fragment in view-space. This can be calculated or reverse engineered with some clever math using only the current executing uv-offset for the fullscreen quad, the depth value (however you store it), and the Perspective-To-View matrix, which moves us backwards through the graphics pipeline from perspective-space to view-space. Or you can just have the view-space position of the fragment ready in some manner.
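That reconstruction can be sketched like this (hedged; `DepthTexture` and `PerspectiveToView` are assumed uniform names, and this assumes the default OpenGL [0, 1] depth range):

```glsl
// Hedged sketch: reconstruct the view-space position from the depth buffer.
// uv is the fullscreen-quad texture coordinate in [0, 1]; PerspectiveToView
// is the inverse of the projection matrix.
uniform sampler2D DepthTexture;
uniform mat4 PerspectiveToView;

vec3 viewPosition(vec2 uv)
{
  float depth = texture(DepthTexture, uv).r;
  vec4 clip = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0); // to NDC [-1, 1]
  vec4 view = PerspectiveToView * clip; // undo the projection
  return view.xyz / view.w;             // perspective divide
}
```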

Yup, Current.Dimensions is used to normalize gl_FragCoord, since the values returned are in [0, Dimension]. Actually – there might be a tiny bit of error here, because gl_FragCoord is sampled at pixel centers, so I think the values really fall in [0.5, Dimension − 0.5] unless you redeclare it with certain specifiers, so I’m not completely sure it’s 100% accurate – might want to investigate. (https://www.opengl.org/sdk/docs/man/html/gl_FragCoord.xhtml)

pi is PI, as in 3.14159… etc. etc. – You’d think pi2 would be pi-halves, but it’s not – looks like it’s just good-old 2*pi (https://github.com/TReed0803/QtOpenGL/blob/e12eb713759c296a6791df247cc8bf92970423c7/resources/shaders/Math.glsl) Not sure what I was thinking with the naming scheme there.

I’m not sure what Vizard is, but I can’t imagine there would be an old enough OpenGL implementation to not allow you to store data in a texture representing the normals (that’s all I do here, nothing fancy). If you can store buffers of stuff, see if there’s a way to generate a buffer of normals, and attach that as input – otherwise I’m not sure what to say.

Sorry for the late reply, by the way – Hope this information helps, though!