
Shadow Mapping in 3D Graphics

It only took me 3 years, but I finally decided to get shadow mapping done.

I remember a graphics programmer once saying, 'If you CAN add shadows, do it, it will instantly make everything cooler.' Usually we cheat with shadows by putting blobs under the characters or by pre-baking something into the environment, but one of the ways to add real-time shadows to a scene is to implement shadow mapping, a technique that requires a good knowledge of your 3D API.

So this is the scene I started with:
start.png


Because I want to show off that these are real-time shadows, this is actually an animated scene. The cube rotates about all three axes, and the wall on the right moves from below the floor to just above it, looping that animation.

The light source is on the right-hand side, just off the edge of the screen. This is what the light is looking at:
light_orth.png

WOAH THAT LOOKS FUNKY

So the light I'm simulating here is light from the sun. To emulate this I had to remove as much perspective as possible, because the sun is really big and really far away. If I were to do this at 1/10000000000th of the distance and depth that it should be, then the sun's view would be this:
sun10.png

Not very useful. It's easier to just remove the need to deal with depth by using an orthographic camera matrix (which removes depth perspective); essentially it takes distance out of the equation and looks at the scene as if the field of view were nearly zero degrees.

Handy!
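For the curious, an orthographic projection matrix is nothing exotic. Here's a minimal sketch of how one can be built in C, using the same column-major layout as the old glOrtho call (the function name and raw float array are my own, not from the project):

C:
void Matrix_Orthographic( float m[16], float left, float right, float bottom, float top, float nearZ, float farZ )
{
    // Scale the viewing box down to the -1..1 cube; there is no division by
    // distance anywhere, which is exactly why depth perspective disappears
    m[0] = 2.0f / ( right - left );
    m[1] = 0.0f;
    m[2] = 0.0f;
    m[3] = 0.0f;

    m[4] = 0.0f;
    m[5] = 2.0f / ( top - bottom );
    m[6] = 0.0f;
    m[7] = 0.0f;

    m[8]  = 0.0f;
    m[9]  = 0.0f;
    m[10] = -2.0f / ( farZ - nearZ );
    m[11] = 0.0f;

    m[12] = -( right + left ) / ( right - left );
    m[13] = -( top + bottom ) / ( top - bottom );
    m[14] = -( farZ + nearZ ) / ( farZ - nearZ );
    m[15] = 1.0f;
}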

Now to get into the meat of things. Shadows in real life appear because light is blocked by an object. If we were to break this down, we could say "Object A's surface is closer to the light source than Object B at this point, so Object B has a shadow on it".
If we wanted super accurate, realistic shadows, we would simulate the rays/waves/particles (depending on your discipline!) of light from the light source and illuminate wherever they hit, but that is incredibly expensive to do on a computer in real time. It is faster to presume everything is hit by light and then paint on some pretty shadows where things are blocking other things.

I'm pretty sure everyone has seen something like this in their physics classes at school:
phys.png


But what we want to do to emulate this is this:
wantthis.png


Something that the GPU has built in that handles depth and object occlusion...THE DEPTH BUFFER!
Sweet, so we can use the depth buffer to find out if the object is being occluded from the light or not! All we need to do is draw a depth buffer from the light's POV in the direction that it is looking and then we have the information on what surfaces are CLOSEST to the light!
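For reference, this is roughly what setting up that depth-only render target looks like in OpenGL 3.3. A minimal sketch; the variable names are mine, not from the actual project:

C:
GLuint shadowFramebuffer, shadowDepthTexture;

// A texture that stores only depth, to be rendered from the light's POV
glGenTextures( 1, &shadowDepthTexture );
glBindTexture( GL_TEXTURE_2D, shadowDepthTexture );
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );

// A framebuffer with no colour attachment at all, just that depth texture
glGenFramebuffers( 1, &shadowFramebuffer );
glBindFramebuffer( GL_FRAMEBUFFER, shadowFramebuffer );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowDepthTexture, 0 );
glDrawBuffer( GL_NONE ); // We only want depth, no colour output
glReadBuffer( GL_NONE );
glBindFramebuffer( GL_FRAMEBUFFER, 0 );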

This is the depth buffer of the funky orthographic view that the light source has on the scene (compare it with the one above):
sundepth.png

OpenGL (the API I am using here) encodes the depth buffer with 0.0 being closest to the camera's near plane and 1.0 being furthest, at the camera's far plane. We can represent that in colour with 0.0 being black and 1.0 being white.
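That encoding makes a depth buffer trivial to visualise: sample it and output the depth as a grey value. A minimal sketch of such a fragment shader (the uniform and varying names are my assumptions, in the same naming style as the rest of this article):

GLSL:
#version 330

uniform sampler2D uniform_depthTexture;
in vec2 glFragment_texCoord;
out vec4 outColour;

void main()
{
    float d = texture( uniform_depthTexture, glFragment_texCoord ).r; // 0.0 = near plane (black), 1.0 = far plane (white)
    outColour = vec4( d, d, d, 1.0 );
}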

This is the depth buffer of our main camera looking at the scene.
maindepth.png


Okay, so we have these two depth buffers: one is generated by the light looking at the scene with an orthographic matrix, and the other is our camera looking at the scene with its perspective matrix.

During the stage where we render the scene from the camera's point of view (and generate that depth buffer), we can read from the light source's depth buffer. We use the light source's orthographic view matrix to transform the object we're rendering into the light's perspective a second time, but this time we look at the light source's depth buffer and judge whether the object is behind the depth buffer's closest value.

That sounds like a lot to take in (and it is when you have no idea what you're doing), so let me simplify it as much as I can.

AFTER drawing that orthographic depth buffer of what the light sees when looking at the scene, we move to the perspective of our main camera and start drawing the world from that perspective, keeping the light source's point of view and depth buffer on hand as a reference.
When we draw an object, we briefly move back to the perspective of the light, check how deep the object is from the light, and then compare that with the light's depth buffer we made earlier (because we are reusing the light's matrix, both depth values land on the same XY pixel coordinate; well, not exactly, but it's easy to translate between them).
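On the CPU side, a frame therefore boils down to two passes. A rough sketch, assuming the depth-only framebuffer from earlier and a hypothetical DrawScene helper (none of these names are from the actual project):

C:
// Pass 1: render the scene's depth from the light's POV into the shadow map
glBindFramebuffer( GL_FRAMEBUFFER, shadowFramebuffer );
glViewport( 0, 0, 1024, 1024 );
glClear( GL_DEPTH_BUFFER_BIT );
DrawScene( lightOrthographicMatrix );

// Pass 2: render normally from the camera, with the light's depth buffer bound
// as a texture so the fragment shader can do its shadow comparison
glBindFramebuffer( GL_FRAMEBUFFER, 0 );
glViewport( 0, 0, windowWidth, windowHeight );
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
glBindTexture( GL_TEXTURE_2D, shadowDepthTexture );
DrawScene( cameraPerspectiveMatrix );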

Here's all that in GLSL 3.3 shader code:
GLSL:
// Vertex shader

gl_Position = uniform_cameraMatrix * vec4( glVertex_position, 1.0 ); // Our object's position from the camera's point of view

glFragment_shadowCoord = uniform_shadowMatrix * vec4( glVertex_position, 1.0 ); // Our object's position from the light's point of view

GLSL:
// Fragment shader

vec4 shadowCoordinateWdivide = glFragment_shadowCoord / glFragment_shadowCoord.w; // First step is the perspective divide: dividing by the w component turns the homogeneous coordinate into a normalised device coordinate (for an orthographic projection w is already 1.0, but this keeps it correct for any projection matrix)

float distanceFromLight = texture( uniform_shadowDepthBuffer, shadowCoordinateWdivide.st ).r; // We then read the light source's depth buffer to get the original depth at the same co-ordinate (a depth texture stores its value in the first component)

// Is our object further away from the light than the depth that is on the depth buffer at the same screen coord?

if ( shadowCoordinateWdivide.z > distanceFromLight ) {

    // It's in shadow!

} else {

    // It's not in shadow...

}

NICE! Now that we know we can do this, let's plug it all in and try it out!
borken.png

Ah shit, it didn't work...

As 3D rasterisation is pretty much a massive hack (I mean, the depth buffer exists, proof enough?) we need to do a bit of tweaking to this perfect idea to make it actually work. First of all, that orthographic matrix used for the light? We need to combine it with a bias matrix that scales and offsets everything by 0.5, remapping the light's coordinates from the -1.0 to 1.0 range of the projection into the 0.0 to 1.0 range that texture coordinates and depth values live in.

This is how I did it with my C code:
C:
mat4_t biasMatrix = {
    { 0.5f, 0.0f, 0.0f, 0.0f },
    { 0.0f, 0.5f, 0.0f, 0.0f },
    { 0.0f, 0.0f, 0.5f, 0.0f },
    { 0.5f, 0.5f, 0.5f, 1.0f }
};

mat4_t biasShadowProjectionMatrix;

Matrix_Mul( &biasShadowProjectionMatrix, &shadowProjectionMatrix, &biasMatrix, MAT4 );
So this biasShadowProjectionMatrix is our new orthographic light point-of-view matrix thingy. Let's upload that to the GPU for our camera's POV rendering.
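Uploading it is an ordinary uniform upload; something like this, assuming the camera pass's shader object is called shaderProgram (a name I've made up):

C:
GLint shadowMatrixLocation = glGetUniformLocation( shaderProgram, "uniform_shadowMatrix" );
glUseProgram( shaderProgram );
glUniformMatrix4fv( shadowMatrixLocation, 1, GL_FALSE, (const GLfloat *)&biasShadowProjectionMatrix );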

biasA.png

HURRAH! WE HAVE SHADOWS FINALLY JEEZ >:C

But...It's so ugly...

What's happening here is that the shadow itself is Z-fighting with the object that the shadow should be landing on; we need more of a bias to correct this! Doh! Back to the hackery...

Actually, this one has a simple fix that uses some knowledge of physics.

What if I told you that the shadows cast by this object:
cullback.png

Are the same as the shadows cast by this object:
cullfrom.png


You can test this yourself: get a bowl and shine a light towards the bottom of the bowl and look at the size of the shadow, then flip the bowl around so the inside is facing the light source and check the size of the shadow again. It's the same. You can basically invert anything in the world and it will cast the same shadow.

OpenGL by default culls the back faces of objects (because, to be fair, you never see the back of things). To add a bias that is still physically correct, we can invert the objects for the light source's POV by culling front faces instead. The light now records the depth of each object's BACK (which is further away than the front, physically true for anything!), so when we render the scene the "correct" way, the surface we're drawing will be closer to the light than the depth the light source believes it has, putting it fully into light.

In OpenGL we can do this simply with:
C:
glCullFace( GL_FRONT );
// Draw from light's POV

glCullFace( GL_BACK );
// Draw, with shadows, from camera's POV

awesome.png

AWESOME. BUT WHAT THE HELL MAN YOU DIDN'T FIX THE THING ON THE RIGHT???

So we've inverted the shapes, but now we're having the same Z-fighting problem with the self-shadowing. Simply said, because we flipped the shapes around, the depth of a surface that faces away from the light source is now exactly the depth recorded in the light source's depth buffer.

There are two ways to fix this: you can either disable self-shadowing on the same surface and use another method for doing plane-based shadows (such as the method that's been around forever, vertex/fragment lighting with the dot product...), or you can stick with shadow mapping and do some hackery.

The hackery method involves punching physics in the face and slapping on a bias of 0.0005.

GLSL:
// Fragment shader

vec4 shadowCoordinateWdivide = glFragment_shadowCoord / glFragment_shadowCoord.w;

shadowCoordinateWdivide.z += 0.0005; // Because why not

sweet.png

Okay now it's working. Except for one problem.

wut.png

What the hell is happening here!?!?

Well, simply said, the shadow map's resolution is not high enough. Because the depth buffer is a block of computer memory that records fragment depth (it's pretty much a fixed-size image), we lose all sorts of information about the depth between its pixels.

Look at this image.
beams.png

Those yellow beams of light are hitting the inverse of the objects, so they hit the back of the cube. What the light source sees is coloured red; what the camera sees is in green. So those beams of light? Imagine they are not beams of light: imagine each beam in that image is one pixel on the light source's depth buffer, say a 512x512 image. Okay, but how do we know whether the areas between the "beams" of light are in shadow? We don't. This code presumes everything is within light (remember when I said that at the beginning? Yes I did.), so when it can't check between the fragments (pixels) on the depth buffer, it defaults to presuming they're in light.

This is the point where shadow mapping becomes insanely complex. The code so far is relatively small, but to fix that final problem you can end up with one MASSIVE block of fragment shader: checking the neighbouring fragments on the light source's depth buffer, finding the average, judging whether an AREA is within shadow (no longer whether a fragment of an object is), interpolating between spaces, etc, etc. It just becomes crazy.
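For the curious, the usual first step down that road is percentage-closer filtering (PCF): sample a small neighbourhood of the light's depth buffer and average the shadow tests, turning the hard in/out decision into a soft value. A minimal sketch, reusing the names from the fragment shader above and assuming a 1024x1024 shadow map:

GLSL:
// PCF: average the shadow test over a 3x3 block of the light's depth buffer
float shadow = 0.0;
vec2 texelSize = vec2( 1.0 / 1024.0 ); // One texel on a 1024x1024 shadow map

for ( int x = -1; x <= 1; x++ ) {
    for ( int y = -1; y <= 1; y++ ) {
        float neighbourDepth = texture( uniform_shadowDepthBuffer, shadowCoordinateWdivide.st + vec2( x, y ) * texelSize ).r;
        shadow += ( shadowCoordinateWdivide.z > neighbourDepth ) ? 1.0 : 0.0;
    }
}

shadow /= 9.0; // 0.0 = fully lit, 1.0 = fully in shadow, anything between = soft edge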

The solution that I prefer? Turn off self-shadowing and do it a different way. Remember that += 0.0005 in the fragment shader above? Change it to -= 0.0005 and the self shadows are gone.
dumb.png

It looks pretty dumb now, but what we can do is implement a different algorithm for self shadows (the dot product between the light direction and the object's normal!) and let the large distances between objects handle the real-time shadow mapping.
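That dot product method is the classic Lambertian diffuse term. A minimal sketch of what it looks like in the fragment shader, assuming the surface normal and a normalised direction towards the light are available (both names are mine):

GLSL:
uniform vec3 uniform_lightDirection; // Normalised, pointing from the surface towards the light
in vec3 glFragment_normal;

// Surfaces facing away from the light get zero diffuse light, which is
// exactly the self-shadowing we just gave up on shadow mapping for
float diffuse = max( dot( normalize( glFragment_normal ), uniform_lightDirection ), 0.0 );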

But that's for everyone else to deal with.

Here's a video of everything in action:
http://www.youtube.com/watch?v=Dc8QyLyHzIw
 
As a bonus, here are some shots of the cube's shadow as you change the resolution of the light source's depth buffer. The shadow map used in this article was 1024x1024.

mapA.png

mapB.png

mapC.png

mapD.png


By increasing the resolution, you increase the accuracy of the shadow.
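A quick way to reason about that accuracy: the world-space size of one shadow map texel is the width covered by the light's orthographic view divided by the map's resolution. A back-of-the-envelope sketch with made-up numbers:

C:
float orthoWidth = 20.0f; // Hypothetical: the light's view covers 20 world units

float texelSize512  = orthoWidth / 512.0f;  // ~0.039 world units per shadow texel
float texelSize1024 = orthoWidth / 1024.0f; // ~0.020 world units per shadow texel
float texelSize2048 = orthoWidth / 2048.0f; // ~0.010 world units per shadow texel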
 

Thank you for viewing
