Skin rendering in Unity

A wet skin effect can be achieved by lowering the roughness parameter

Continuing my shader studies, I have decided to implement something more complex this time. I was always impressed by the results achieved in the GPU Gems 3 article “Advanced Techniques for Realistic Real-Time Skin Rendering”, so that is what I chose to implement. Once again I’ve used Unity, mainly for how easy it is to write custom shaders for it.

The rendering method described in the publication in question uses a sum of Gaussians to approximate the diffusion profile of human skin. It also uses a variation of Translucent Shadow Maps (TSM) to render light being transmitted through thin surfaces, such as ears and nostrils. The article is very well written, and for the most part the implementation was straightforward, but I did have a few problems achieving the desired result, and that is what I will focus on here. You can check the complete source code here.

Rendering in texture space

The technique requires an irradiance texture to be rendered in texture space. In order to do that, we must calculate the vertices’ output coordinates using their UV coordinates. That is pretty trivial in a vertex shader: we only need to remap the UVs from the (0, 1) interval to (-1, 1) and invert the y coordinate.
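In Unity’s Cg, a minimal sketch of that vertex function could look like the following (the struct layout and the extra world-space outputs passed to the fragment stage are assumptions about the rest of the shader):

    // Inside a CGPROGRAM block that includes UnityCG.cginc.
    struct v2f
    {
        float4 pos      : SV_POSITION;
        float3 worldPos : TEXCOORD0;
        float3 normal   : TEXCOORD1;
    };

    v2f vert(appdata_base v)
    {
        v2f o;
        // Remap the UVs from (0, 1) to (-1, 1) and invert y, so the mesh
        // is rasterized in texture space instead of the camera view.
        float2 uvClip = v.texcoord.xy * 2.0 - 1.0;
        uvClip.y = -uvClip.y;
        o.pos = float4(uvClip, 0.0, 1.0);
        // World-space position and normal are still needed to evaluate the
        // lighting that gets written into the irradiance texture.
        o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
        o.normal   = UnityObjectToWorldNormal(v.normal);
        return o;
    }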

Extra parameters

When reading the thickness from the blurred irradiance textures, I have found that using that value directly does not lead to the expected results. Instead, I multiply it by a variable, which I called _TSMSpread. As the value of that variable gets lower, the light being transmitted through thin surfaces gets more spread out.
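Roughly, the factor enters like this (a sketch only; apart from _TSMSpread, the texture layout and the transmittance helper are assumed):

    // Thickness stored with the blurred irradiance data, scaled before
    // evaluating the transmittance (TransmittanceProfile is a hypothetical
    // helper standing in for the article's profile evaluation).
    float thickness    = tex2D(_BlurredIrradiance, uv).a * _TSMSpread;
    float3 transmitted = TransmittanceProfile(thickness);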

I have also found that the calculated blur scale was too small, and almost no changes could be seen in the blurred textures. To solve that, I have added a divisor to the scale calculated in the Gaussian convolution shaders, called _BlurStepScale. The lower its value, the more accentuated the blur becomes.
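A sketch of where the divisor ends up in one pass of the separable blur (everything except _BlurStepScale uses assumed names; the kernel below is just a normalized 7-tap Gaussian similar to the one in the article):

    // U-direction pass of the separable Gaussian convolution.
    static const float w[7] = { 0.006, 0.061, 0.242, 0.382, 0.242, 0.061, 0.006 };
    float2 stretch = tex2D(_StretchTex, uv).rg;
    // Lower _BlurStepScale => larger step => more pronounced blur.
    float stepU = _GaussWidth * stretch.x / _BlurStepScale;
    float4 sum = 0;
    for (int k = -3; k <= 3; k++)
        sum += w[k + 3] * tex2D(_IrradianceTex, uv + float2(k * stepU, 0.0));
    return sum;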

The authors also mention that the stretch coordinates must be multiplied by some constant. I have listed that constant as a shader property (_StretchScale) in the stretch shader, and found that a value of 0.001 works well for the models used.
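The stretch pass itself follows the article; a sketch of its fragment computation, with the constant exposed as _StretchScale:

    // Screen-space derivatives of the world position give the texel-to-world
    // stretching, which later scales the blur steps.
    float3 du = ddx(i.worldPos);
    float3 dv = ddy(i.worldPos);
    float2 stretch = float2(_StretchScale / length(du),
                            _StretchScale / length(dv));
    // Alpha is set to 1 for every rendered fragment; it becomes the seam
    // mask described further down.
    return float4(stretch, 0.0, 1.0);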

Model space normals

In order to calculate the thickness of the object, the incident normals are required. If the more common tangent space normal maps were used, we would need to calculate two different change of basis matrices in the fragment shader, one for the fragment normal and one for the incident normal. However, we would not have access to the incident vertex normal and tangent unless they were rendered to textures. Instead, it’s easier to just use model space normals (unless the models are animated, which is not the case here).
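With a model space normal map, both lookups reduce to a single rotation by the object-to-world matrix; a sketch (texture names and the TSM channel layout are assumptions, and uniform scaling is assumed):

    // Fragment normal from the model space normal map at the current UV,
    // and incident normal at the UV stored in the TSM for the point that
    // faces the light.
    float3 nFrag = tex2D(_NormalMapMS, uv).xyz * 2.0 - 1.0;
    float3 nIn   = tex2D(_NormalMapMS, tsm.yz).xyz * 2.0 - 1.0;
    nFrag = normalize(mul((float3x3)unity_ObjectToWorld, nFrag));
    nIn   = normalize(mul((float3x3)unity_ObjectToWorld, nIn));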

TSM

TSM requires the scene to be rendered from the light’s point of view. In order to get the light’s view and projection matrices, I’ve decided to attach an actual camera object to the light. The downside is that the camera has to be manually positioned and pointed in the direction of the object being rendered. For now, only directional lights are supported.
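On the shader side, the fragment’s world position is projected with that camera’s view-projection matrix to look up the TSM (a sketch; _LightViewProj and _TSMTex are assumed names, with the matrix uploaded from a script on the light camera):

    // Project into the light camera's clip space and remap to (0, 1) UVs.
    float4 lightClip = mul(_LightViewProj, float4(i.worldPos, 1.0));
    float2 tsmUV = lightClip.xy / lightClip.w * 0.5 + 0.5;
    // The TSM stores the depth and the texture coordinates of the surface
    // closest to the light.
    float4 tsm = tex2D(_TSMTex, tsmUV);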

Attenuating seams

Naively implementing this technique causes seams to appear in areas that are adjacent in the model but disconnected in texture space. To attenuate that effect, I rendered a simple mask to a texture, with a value of 1 for each rendered fragment and 0 for the background. Since I needed to blur that mask using the same widths and variances used for the irradiance texture, I stored it in the alpha channel of the stretch texture. The next step is to multiply the masks from all the blurred textures together to form a new “seam mask”. Finally, when calculating the diffuse skin contribution, I interpolated between the irradiance value and a simple local light calculation, using the seam mask as the weight (sketched below). The GPU Gems article describes other methods that might be used to remove the seams.

Seam attenuation

Without and with seam attenuation
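In code, the blend described above boils down to something like this (variable names and the number of blur levels are assumptions):

    // Multiply the masks carried along with the blurred textures into a
    // single seam mask, then use it to fall back to a plain local diffuse
    // term where the blurred irradiance is unreliable.
    float seamMask = blur1.a * blur2.a * blur3.a * blur4.a * blur5.a;
    float3 diffuse = lerp(localDiffuse, blurredIrradiance, seamMask);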

TSM also results in seams at the edges of the light projection. I removed those seams by scaling the vertices along their normals by a small amount, as described in the article “Real-Time Approximations to Subsurface Scattering”, in the first GPU Gems book.
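A sketch of that adjustment in the vertex function used for the TSM pass (_NormalOffset is an assumed property name, and the sign may need flipping depending on the model):

    // Offset each vertex slightly along its normal before projecting, so
    // the light-space silhouette no longer coincides exactly with the
    // visible edges of the projection.
    v.vertex.xyz += v.normal * _NormalOffset;
    o.pos = UnityObjectToClipPos(v.vertex);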

Integrating with Unity’s shadows

Probably the main problem I had was integrating the shaders with Unity’s built-in shadows. Unity uses cascaded shadow maps to render its shadows, but it does not provide custom shaders access to the actual depth texture. Instead, it gathers all the shadow attenuation information into a screen space texture, which shaders can access. However, since the irradiance texture had to be rendered in texture space, I could not simply sample that screen space shadow map, as it would have no information about the geometry not facing the camera.

To solve this problem, I decided to create my own texture space shadow map, using the depth information present in the TSM. This was done in 3 steps: the first renders the shadows to a texture, the second blurs that texture, so I can have soft shadows, and the third applies the blurred shadows to the irradiance texture. In my implementation, I have left a parameter to turn these shadows on or off, but disabling them leaves some light artifacts in areas that have normals aligned with the light direction but should be shadowed by the head itself.
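The first step is essentially a depth comparison against the TSM, done for every texel of the texture space render (a sketch; the names, the bias, and the comparison direction are assumptions that depend on platform conventions):

    // Project the fragment into light space and compare its depth with the
    // depth stored in the TSM; the result is written to the texture space
    // shadow texture and blurred in the next step.
    float4 lightClip = mul(_LightViewProj, float4(i.worldPos, 1.0));
    float2 tsmUV     = lightClip.xy / lightClip.w * 0.5 + 0.5;
    float  fragDepth = lightClip.z / lightClip.w;
    float  tsmDepth  = tex2D(_TSMTex, tsmUV).x;
    float  shadow    = (fragDepth > tsmDepth + _ShadowBias) ? 0.0 : 1.0;
    return float4(shadow, shadow, shadow, 1.0);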
