Friday, November 15, 2013

World Position from Depth - What NOT to do !!

Ah yes, the ever-famous question of the deferred rendering world: "How do I reconstruct the world position from the depth?" There are lots of blogs and pages on this topic. Here are a few that I read through:

http://www.phasersonkill.com
http://gamedev.stackexchange.com
http://mynameismjp.wordpress.com

Today I'm going to talk a little bit about this. The following is the standard method:

  1. Use texture coords to read depth sample
  2. Convert texture coords [0, 1] to clip space [-1, 1]
  3. Convert depth sample to linear depth
  4. Calculate View Space coordinates
  5. Use Inverse View Matrix to convert from View Space to World Space

CPU Side:
A = farPlane / (farPlane - nearPlane)
B = (-farPlane * nearPlane) / (farPlane - nearPlane)
AdjustX = (projectionMatrix(0,2) + 1) / projectionMatrix(0,0)
AdjustY = (projectionMatrix(1,2) + 1) / projectionMatrix(1,1)

In Shader:
linearDepth = B / (depth - A)
viewSpacePosition = float3(clipSpacePos.x * AdjustX, clipSpacePos.y * AdjustY, 1.0f) * linearDepth
worldSpacePosition = mul(viewSpacePosition, InverseViewMatrix)
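
Putting the pieces together, here's a minimal HLSL sketch of all five steps. The binding and function names are my own, and the y-flip in step 2 assumes D3D-style texture coordinates:

// Minimal sketch of the method above. Binding names are assumptions;
// A, B, AdjustX and AdjustY come from the CPU-side setup shown earlier.
Texture2D DepthTexture : register(t0);
SamplerState PointSampler : register(s0);
float4x4 InverseViewMatrix; // view -> world
float A, B, AdjustX, AdjustY;

float3 WorldPosFromDepth(float2 texCoord)
{
    // 1. Read the depth sample
    float depth = DepthTexture.SampleLevel(PointSampler, texCoord, 0).r;

    // 2. Texture coords [0, 1] -> clip space [-1, 1] (y flipped for D3D)
    float2 clipSpacePos = float2(texCoord.x * 2.0f - 1.0f,
                                 (1.0f - texCoord.y) * 2.0f - 1.0f);

    // 3. Post-projection depth -> linear view space depth
    float linearDepth = B / (depth - A);

    // 4. View space position
    float3 viewSpacePosition =
        float3(clipSpacePos.x * AdjustX, clipSpacePos.y * AdjustY, 1.0f) * linearDepth;

    // 5. View space -> world space
    return mul(float4(viewSpacePosition, 1.0f), InverseViewMatrix).xyz;
}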

I wasn't using this initially. I was using something much simpler that turned out to be almost right, without even knowing the math. It was only after chatting with my colleagues that I realized that my method could have been wrong. And so, I decided to implement the above (mathematically correct) method and compare it to my supposedly incorrect one.

Here's what I did initially (WRONG WAY !!):
  1. Use texture coords to read depth sample
  2. Convert texture coords [0, 1] to clip space [-1, 1]
  3. Use Inverse View Projection Matrix to convert float3(clipSpace.xy, depthSample) to someSpace
  4. Convert from someSpace to worldSpace by dividing each component by w !!!!!!
I somehow hacked away and found that this worked for me. My reasoning was that we normally multiply the position by the WVP matrix and then divide by w; if we go backwards with the inverse WVP, we get a position in some space and bring it back by dividing by that position's w (it feels like this undoes the initial divide by w). I could be totally wrong on the explanation, but the results look pretty similar. I ran a test where both methods calculate their own world space positions and the difference is rendered as the result. By doing that, I really got the picture of why my method was wrong, even though it looked "correct" when used for lighting. Here's an example of the error; it changes based on position/angle.
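
For reference, here's the same kind of sketch for this inverse view-projection method (again, the binding names are my own):

// Sketch of the method described above, for comparison.
Texture2D DepthTexture : register(t0);
SamplerState PointSampler : register(s0);
float4x4 InverseViewProjection; // inverse of view * projection

float3 WorldPosFromDepth_InvVP(float2 texCoord)
{
    // 1. Read the depth sample
    float depth = DepthTexture.SampleLevel(PointSampler, texCoord, 0).r;

    // 2. Texture coords [0, 1] -> clip space [-1, 1] (y flipped for D3D)
    float2 clipSpacePos = float2(texCoord.x * 2.0f - 1.0f,
                                 (1.0f - texCoord.y) * 2.0f - 1.0f);

    // 3. Transform float3(clipSpace.xy, depthSample) by the inverse
    //    view-projection matrix to get a position in "someSpace"...
    float4 someSpacePos = mul(float4(clipSpacePos, depth, 1.0f), InverseViewProjection);

    // 4. ...and divide by w to (seemingly) undo the perspective divide
    return someSpacePos.xyz / someSpacePos.w;
}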

So, if you're implementing world space reconstruction from depth, please use the method described at the top rather than the one at the bottom.

Wednesday, November 6, 2013

Encapsulating a spot light with an approximated cone mesh for Deferred Lights

I've been improving my deferred renderer by using basic primitive meshes to render each of the lights during the lighting pass, as an optimization. Here is the link I am using as reference:

Keith Judge's blog on Stencil Buffer Optimizations for Deferred Lights

For point lights I use a sphere, and for spot lights I use a cone. I wanted to use an approximated (low-polygon) mesh for rendering each light.

The problem with using an approximated mesh without any adjustments is that the edges of the light get cut off.

To illustrate the issue, here's what happens:

Let's assume the following:
  • 's' = size (slant length) of the cone
  • 'b' = angle of the cone, measured from its axis
  • 'r' = base radius of cone = s * sin(b)
  • 'h' = height of cone = s * cos(b)
If we build the mesh with the original angle and size of the cone, we notice that the mesh ends up inside the spot light. Looking at the base of the cone mesh, we see a regular polygon (representing the cone mesh) surrounded by a circle (representing the actual spot light). Only the vertices of the polygon fall on the circle; the areas colored in red show what gets cut off.

We can adjust the mesh slightly to encapsulate the spot light; we just need to figure out what modifications to make. One option is to increase the radius and the angle of the cone mesh by some amount. If we used a constant amount, it would have to be a large value to cover all cases. So instead we calculate a new angle and radius for each mesh and use those for the cone mesh. Here is the math to figure out those values. Let's assume the cone mesh has 'n' sides. Our aim is to get the base of the cone mesh to encapsulate the base of the spotlight circle. To do that:

From the diagram, I've marked the following:
  • r = original base radius of cone
  • nr = new radius, the distance from the center to a vertex of the polygon
  • a = angle between r and nr = π / n, since each of the polygon's 'n' sides spans an angle of 2π / n at the center
We can use the following formula to get nr:
nr = r / cos(a)
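
For example, with a cone mesh of n = 8 sides, a = 180° / 8 = 22.5°, so nr = r / cos(22.5°) ≈ 1.082 * r; the base only needs to grow by about 8% to encapsulate the circle.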

Next, we calculate the new angle and new radius of the cone mesh.
From the above diagram, we can calculate the following:
  • b = original angle of cone
  • h = height of cone = s * cos(b)
  • c = new angle of cone = atan(nr / h)
  • t = new size of cone = nr / sin(c)
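
In code, the whole adjustment boils down to a few lines. Here's a small sketch (the function and parameter names are my own) that takes the original size, angle and side count and returns the adjusted values:

static const float PI = 3.14159265f;

// Computes the angle and size to use for an n-sided cone mesh so that
// it fully encapsulates a spot light of size s and angle b (where b is
// measured between the cone's axis and its slant).
void AdjustConeMesh(float s, float b, int n, out float c, out float t)
{
    float r  = s * sin(b);  // original base radius
    float h  = s * cos(b);  // height of the cone
    float a  = PI / n;      // half the angle each polygon side spans
    float nr = r / cos(a);  // new radius: polygon now encapsulates the circle
    c = atan(nr / h);       // new angle of the cone
    t = nr / sin(c);        // new size, i.e. sqrt(h * h + nr * nr)
}
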
Once you calculate 'c' and 't', you can use them as the angle and the size to render the cone mesh. The end result you get is this:

Hooray, no more cuts in the light source.