So, shadows. The eternal challenge with rasterised rendering.
There are many use cases for shadows, which accounts for the many ways of approximating them: several shadow mapping techniques exist for real-time shadows from dynamic geometry, baked lighting handles static geometry, screen space ambient occlusion covers ambient occlusion between both dynamic and static geometry, and blob shadows work for highly ambient-lit environments where the geometry just needs to look grounded. The last case is the one we'll concern ourselves with in this post.
When creating 3D applications with many (20 to 10,000) dynamically placed static objects, targeted at low-end devices (e.g. indoor architectural visualisation in a browser), you can't depend on having resources for fancy screen space effects or lots of rendering passes to create shadow maps, first-bounce GI or SSAO. What is usually done instead is to create blob shadows for the static assets located on even surfaces and use those blob shadows to ground the objects in the scene. These blob shadows are created by artists by rendering the silhouette of the object and then blurring it by hand in e.g. Photoshop. The blurring can either be done by simply applying a uniform blur filter to the entire silhouette, or by blurring it by feel to achieve a hard shadow where the geometry is close to the floor and a soft shadow where it is further away. An example of uniform blurring vs. context-sensitive blurring can be seen in the images below.
In figure 1a we see the hard silhouette of the bike projected onto the floor below it, and in 1b the silhouette has been blurred six times using a fixed blur radius.
In figure 2a the silhouette is still hard, but it has been created with an exponential intensity falloff based on the depth of the shadow. Figure 2b then shows the silhouette from 2a, blurred six times using a blur radius that depends logarithmically on the intensity of the pixel currently being processed.
The difference is clearly noticeable: in figure 1b the naive blob shadow of the bike looks like the blob shadow of a shark or a rocket, and while it does help to ground the bike in the scene, it doesn't give the viewer any impression of the vertical shape of the bike. In figure 2b we get a much better blob shadow: here the shadow is strong where the bike's geometry is close to the floor and drops off as parts of the bike get further away. This is especially obvious around the wheels, where the shadow's decrease in intensity can be seen to follow the curvature of the wheel.
Having seen artists spend days on creating these blob shadows, with varying success, and with the prospect of them spending/wasting even more time on it in the future, I decided to try to create a blob shadow creator extension for Unity, a Blob-O-Matic if you will.
The process is relatively easy. First the model is rendered to a rendertexture using an orthographic camera located right below the model. The distance to the far plane is set to the distance that we want the shadows to be cast. The model is rendered using a replacement shader that outputs the geometry's distance to the near plane instead of its color. This gives us the silhouette seen above in figure 2a. Alternatively, the depth buffer could have been used to grab the distance of the vertex from the near plane. This would allow the plugin to support arbitrary static vertex shaders, but the current method is faster and the codebase simpler, and if you're performing vertex deformation of a static model in the vertex shader, you should probably consider preprocessing the model anyway.
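The depth-to-intensity mapping can be sketched on the CPU with numpy. This is not the actual shader from the extension, just a minimal illustration of the idea: pixels where geometry sits on the near plane get full shadow intensity, and intensity falls off exponentially with distance. The `falloff` constant is an assumption for illustration, not a value from the extension.

```python
import numpy as np

def depth_silhouette(depth, near_to_far, falloff=3.0):
    """Map per-pixel distance from the near plane to shadow intensity.

    depth: 2D array of distances from the near plane; np.inf where no
           geometry was hit (background).
    near_to_far: distance between the near and far plane, i.e. the
                 maximum shadow-casting distance.
    falloff: steepness of the exponential falloff (illustrative only).
    Returns intensity in [0, 1]: 1 = fully dark shadow, 0 = no shadow.
    """
    hit = np.isfinite(depth)
    t = np.clip(depth / near_to_far, 0.0, 1.0)   # normalised depth
    return np.where(hit, np.exp(-falloff * t), 0.0)
```

Geometry touching the floor (depth 0) gives intensity 1, background pixels give 0, and everything in between fades off like the wheels in figure 2a.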
Once the depth-sensitive silhouette has been rendered, multiple blur passes are performed on it. Each pass uses a varying blur kernel size, where dark pixels are assigned a low blur radius and brighter pixels a higher one. The result is that dark shadows, created where the model is close to the plane, stay dark, while brighter shaded shadows get blurrier and receive a larger shadow penumbra. To reduce banding artefacts from the blur passes I opted not to use Unity's cone tap technique and instead blur by simply computing an offset into the silhouette's mipmap levels. The downside to using mipmaps for blurring is that I can't specify my own blur kernels (Gaussian, spline-based), which is why the extension lets you specify multiple blur passes to approximate a Gaussian blur.
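The mip-offset blur can also be sketched on the CPU. Again, the real extension does this per pixel in a shader; the sketch below just shows the principle: build a box-filtered mip chain of the silhouette, then for each pixel pick a mip level from its intensity, so strong (dark) shadow pixels stay sharp while faint ones sample an increasingly blurry level. The level-selection formula is an illustrative linear mapping, not the extension's exact logarithmic one.

```python
import numpy as np

def build_mip_chain(img, levels):
    """Box-downsample the silhouette repeatedly, like a mipmap chain."""
    chain = [img]
    for _ in range(levels):
        h, w = chain[-1].shape
        if h < 2 or w < 2:
            break
        m = chain[-1][:h - h % 2, :w - w % 2]
        m = 0.25 * (m[0::2, 0::2] + m[1::2, 0::2]
                    + m[0::2, 1::2] + m[1::2, 1::2])
        chain.append(m)
    return chain

def variable_blur(img, max_level=3):
    """Per pixel, sample a mip level chosen from the pixel's intensity
    (1 = fully dark shadow samples level 0, fainter pixels sample
    higher, blurrier levels)."""
    chain = build_mip_chain(img, max_level)
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            level = int(round((1.0 - img[y, x]) * max_level))
            level = min(level, len(chain) - 1)
            m = chain[level]
            my = min(y >> level, m.shape[0] - 1)
            mx = min(x >> level, m.shape[1] - 1)
            out[y, x] = m[my, mx]
    return out
```

Running several such passes in sequence, as the extension does, smooths the blocky mip sampling towards a Gaussian-looking penumbra.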
The end result can be seen and experimented with in this Unity webplayer.
I am currently putting together an editor extension where models and prefabs can be selected in the Inspector, allowing the user to bake a drop shadow into them. A picture of the extension can be seen below, with shadows generated for the famous Utah teapot and Mitsuba's material preview model.
As future work on the extension I would like to add support for depth-independent transparent geometry. This would require two passes: the first would render opaque geometry as is done now; the second would collect the opacity of the transparent geometry and combine it as is done in standard Whitted ray tracers.
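The combination step would amount to the usual Whitted-style transparency accumulation: each transparent layer crossed lets a fraction (1 - alpha) of light through, so the blocked fraction is one minus the product of the transmittances. A minimal sketch of that idea (function name and inputs are hypothetical, for illustration only):

```python
def combined_shadow_opacity(opaque_hit, transparent_alphas):
    """Combine opaque and transparent layers along one shadow 'ray'.

    opaque_hit: True if opaque geometry blocks this pixel entirely.
    transparent_alphas: opacities (0..1) of transparent layers crossed.
    Returns the fraction of light blocked at this pixel.
    """
    if opaque_hit:
        return 1.0
    transmittance = 1.0
    for a in transparent_alphas:
        transmittance *= (1.0 - a)   # each layer lets (1 - alpha) through
    return 1.0 - transmittance
```

Two 50% opaque panes would thus cast a 75% shadow rather than a 100% one, which is the behaviour the second pass would need to reproduce.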
The current version of the plugin also requires a Unity layer to be set aside for the extension, as all models are placed in and rendered from the scene. As far as I can see this is the only solution that allows me to render combined models into an arbitrarily sized rendertexture and let the user rotate them. AssetPreview.GetAssetPreview is simply too limited right now. If anyone knows of a better solution I would love to hear it, as the current one is far from perfect and can easily result in hidden memory leaks, although those only exist at editor time.
The GUI itself could use a major overhaul, but for now it does what it is supposed to.