Monday, August 17, 2020

Path Trace Visualization - Part 1

It's been some time (again) since I wrote my last post, as I've been super busy with work and family. With the pandemic, I finally got some time to work on my very own personal GPU path tracer. I've been tweeting regular updates on Twitter under the handle @createthematrix. One particular update caught the attention of many folks there, and that is visualizing the path trace itself. Here is the link to that post. So I decided to write a blog post about the implementation, as I feel it could help many folks and could be applied to other purposes as well. This feature is very useful for catching issues and for understanding how different BRDFs, sampling schemes, and so on behave in a path tracer.

This is by far not the best way to implement the visualization, but it is an implementation. I have implemented it in C++/HLSL for both the Vulkan and DX12 APIs; my engine abstracts the explicit Vulkan/DX12 API calls behind a graphics interface. Hopefully this page explains the details of the implementation in such a way that you can apply it to your engine as well.

I'll start by explaining how to do a capture from the current camera for a certain number of frames. I'm going to assume that the reader already has a path tracer implemented as a compute shader dispatch where each thread represents a pixel of the output. I am also assuming that you have the framework to send data to the GPU via constant buffers.

This is the data used for the path entry:

struct PathCaptureEntry
{
    float3 startPosition;
    uint pathId;
    float3 endPosition;
    uint bounce;
    float3 color;
    float alpha;
};
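If you mirror this struct on the CPU side for buffer creation, note that the float3/uint interleaving keeps it tightly packed at 48 bytes. Here's a minimal sketch, assuming a packed 12-byte float3 type from your math library (the placeholder float3 below is just for illustration):

#include <cstdint>

//minimal packed vector; substitute your math library's 12-byte float3
struct float3 { float x, y, z; };

//hypothetical CPU-side mirror of the HLSL PathCaptureEntry, used when
//creating/reading back the structured buffer
struct PathCaptureEntry
{
    float3   startPosition;
    uint32_t pathId;
    float3   endPosition;
    uint32_t bounce;
    float3   color;
    float    alpha;
};
static_assert(sizeof(PathCaptureEntry) == 48, "layout must match the HLSL struct");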

And here are the resources used in the shader:

struct PathTraceVisualizationConstants
{
    uint2 resolution;
    uint2 mousePosition;
    uint maxPathEntryCount;
    uint maxPathFrameCollection;
    uint pathIdFrameNumber;
    int pathDebugId;
    int bounceDebugId;
    //add padding if needed
};

ConstantBuffer<PathTraceVisualizationConstants> constantsCB : register(b0);
RWStructuredBuffer<PathCaptureEntry> pathCaptureEntriesUav : register(u0);
RWBuffer<uint> pathCaptureEntryCountUav : register(u1);
RWTexture2D<float> distanceBufferUav : register(u2);

I've excluded all the other resources from the path tracer itself as we're focusing only on the visualization portion. The idea is to generate path entries during the path trace, and then draw them.

Resources Needed

1. Large structured buffer that stores a PathCaptureEntry for each line segment in the path. 

2. "Counter" buffer to maintain how many line segments are there. 

3. Indirect arguments buffer for the instanced indirect line draw call. Initialize the values to {2, 0, 0, 0}: 2 is the vertex count, the first 0 is the instance count, and the last two 0s are the vertex/instance offsets (see the sketch after this list).

4. Distance buffer to store the distance of the first hit from the camera. I'm not calling it a "depth buffer" as it stores distance from the camera, not nonlinear projected depth.
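For reference, those four values map directly onto DX12's D3D12_DRAW_ARGUMENTS (Vulkan's VkDrawIndirectCommand has the same four fields). A minimal sketch of the CPU-side initialization, assuming the DX12 path:

#include <d3d12.h>

//the {2, 0, 0, 0} initialization maps onto D3D12_DRAW_ARGUMENTS:
//{VertexCountPerInstance, InstanceCount, StartVertexLocation, StartInstanceLocation}
D3D12_DRAW_ARGUMENTS initialArgs = {};
initialArgs.VertexCountPerInstance = 2; //two vertices per line segment
initialArgs.InstanceCount          = 0; //patched on the GPU every frame
initialArgs.StartVertexLocation    = 0;
initialArgs.StartInstanceLocation  = 0;
//upload initialArgs into the indirect arguments buffer at creation time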

Storing the path entries

Here is the high-level algorithm for writing out the path entries to the buffer:

1. Reset the counter to 0 at the start of the frame. 

2. Select a pixel on the screen (mouse click/text entry/hard coded) and pass that info to the shader. During path tracing, when you get a hit and the compute shader thread id matches the selected pixel position, add the path entry to the buffer if there's enough space in it. The pathId can simply be the frame index since the capture started (which is what pathIdFrameNumber in the constants is for). The following function does the append:

void AddPathEntry(uint2 threadId, PathCaptureEntry entry)
{
    //this will ensure only 1 thread writes to the instance count and appends to the list
    if (all(threadId.xy == constantsCB.mousePosition))
    {
        if (entry.pathId < constantsCB.maxPathFrameCollection) //I set this to 1000
        {
            uint currentIndex = 0;
            InterlockedAdd(pathCaptureEntryCountUav[0], 1, currentIndex);
            if (currentIndex < constantsCB.maxPathEntryCount)
            {
                pathCaptureEntriesUav[currentIndex] = entry;
            }
        }
    }
}

3. Also write out the distance from the camera to the first hit position into distanceBufferUav. This will be used later for the depth test; a sketch follows below.
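A minimal sketch of that write, assuming every thread stores its own pixel on the primary ray; hitInfo.hasHit, cameraPosition, and bounce are placeholder names for whatever your path tracer already tracks:

//on the primary ray (bounce 0), store the camera-to-hit distance for this pixel;
//use a huge value on a miss so the debug lines always pass the depth test there
if (bounce == 0)
{
    float dist = hitInfo.hasHit
        ? length(hitInfo.worldPosition - cameraPosition)
        : 1e30f;
    distanceBufferUav[threadId.xy] = dist;
}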

Now you can add path entries whenever you have a hit position like this:

PathCaptureEntry hitEntry; 
hitEntry.startPosition = ray.startPos; 
hitEntry.endPosition = hitInfo.worldPosition; 
hitEntry.bounce = i; 
hitEntry.pathId = pathId; 
hitEntry.alpha = 1.0f; 
hitEntry.color = float3(1.0f, 1.0f, 0.0f); 
AddPathEntry(threadId.xy, hitEntry);

For a miss, it's this: 

PathCaptureEntry missEntry;
missEntry.startPosition = ray.startPos;
missEntry.endPosition = ray.startPos + ray.direction * 100.0f; //or any distance you want
missEntry.bounce = i;
missEntry.pathId = pathId;
missEntry.alpha = 1.0f;
missEntry.color = float3(0.0f, 1.0f, 1.0f);
AddPathEntry(threadId.xy, missEntry);

For surface normals you could have this:

PathCaptureEntry surfaceNormalEntry;
surfaceNormalEntry.startPosition = ray.startPos;
surfaceNormalEntry.endPosition = ray.startPos + hitInfo.worldNormal * 0.5f; //or any distance you want
surfaceNormalEntry.bounce = i;
surfaceNormalEntry.pathId = pathId;
surfaceNormalEntry.alpha = 1.0f;
surfaceNormalEntry.color = float3(1.0f, 0.0f, 1.0f);
AddPathEntry(threadId.xy, surfaceNormalEntry);

If you have reached this far, then at the end of your path trace dispatch you should have the counter set to some value, and buffer entries for up to constantsCB.maxPathFrameCollection frames, provided you selected a pixel on screen with valid info. 

Here's a screenshot of the path entries buffer in RenderDoc:


This is the total count (an example):

And here is the distance buffer:

Now how do we render this info?

Rendering the path entries

This is done through an instanced indirect draw call using lines as the primitive (please do not leave it as a triangle primitive type). Since the indirect args already contain {2, 0, 0, 0}, we just need to update the instance count. The following compute shader does just that:

//resources for this pass (bindings are up to your engine)
Buffer<uint> pathCaptureEntryCount;
RWBuffer<uint> indirectPathDrawArgsUav;

[numthreads(1,1,1)]
void CS_UpdateCapturePathIndirectArgs(uint3 threadId : SV_DispatchThreadID)
{
    //copy the line segment count into the instance count slot of the indirect args
    indirectPathDrawArgsUav[1] = pathCaptureEntryCount[0];
}

As an option, you could increment indirectPathDrawArgsUav[1] directly in AddPathEntry, but I decided to keep the counter and the indirect arguments separate for easier debugging.
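If you did go that route, the append in AddPathEntry would look something like this (a sketch; it assumes indirectPathDrawArgsUav is also bound during the path trace dispatch):

//bump the instance count of the indirect args directly instead of a separate counter
uint currentIndex = 0;
InterlockedAdd(indirectPathDrawArgsUav[1], 1, currentIndex);
if (currentIndex < constantsCB.maxPathEntryCount)
{
    pathCaptureEntriesUav[currentIndex] = entry;
}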

Now that we have the indirect args, the path entries buffer, and a distance buffer, we can do an instanced indirect draw with lines. I added this draw after the tonemapping pass, directly on the swapchain itself. The distance buffer is used to discard pixels in the pixel shader if they fail the depth test. You could generate a nonlinear depth buffer similar to how the graphics pipeline does it and then use the HW depth test, but in my case I kept it simple. The pathId/bounce stored during the path trace come into use here for isolation: you can specify which path or bounce you want to see. Following are the shader details:

//resources for the draw pass (read-only here; bindings are up to your engine)
StructuredBuffer<PathCaptureEntry> pathCaptureEntries;
Texture2D<float> distanceBuffer;

struct VS_PathDrawInput
{
    uint vertexID : SV_VertexID;
    uint instanceID : SV_InstanceID;
};

struct PS_PathDrawInput
{
    float4 pos : SV_POSITION;
    float4 worldPos : POSITION0;
    float4 col : COLOR0;
};

PS_PathDrawInput VS_PathDraw(VS_PathDrawInput input)
{
    PathCaptureEntry pathEntry = pathCaptureEntries[input.instanceID];
    //choose between start/end based on vertexID
    float3 position = (input.vertexID & 1) ? pathEntry.endPosition : pathEntry.startPosition;
    //collapse filtered-out lines to a degenerate point so they don't rasterize
    if (constantsCB.pathDebugId >= 0 && constantsCB.pathDebugId != pathEntry.pathId)
        position = 0.0f;
    if (constantsCB.bounceDebugId >= 0 && constantsCB.bounceDebugId != pathEntry.bounce)
        position = 0.0f;
    PS_PathDrawInput output;
    output.pos = mul(float4(position, 1.0f), CameraConstantsCB.viewProjectionMtx);
    output.worldPos = float4(position, 1.0f);
    output.col = float4(pathEntry.color, pathEntry.alpha);
    return output;
}

float4 PS_PathDraw(PS_PathDrawInput input) : SV_Target
{
    float dist = length(input.worldPos.xyz - CameraConstantsCB.eye.xyz); //more optimal to use distSqr
    uint2 pixelPos = (uint2)input.pos.xy;
    float distanceSample = distanceBuffer.Load(uint3(pixelPos, 0)).x; //depth test in shader
    if (distanceSample < dist)
        discard;
    return input.col;
}
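And for completeness, the host-side draw is a single indirect call. Here's a sketch of the Vulkan version, assuming cmd and indirectArgsBuffer are your engine's command buffer and args buffer handles, and the pipeline was created with VK_PRIMITIVE_TOPOLOGY_LINE_LIST:

#include <vulkan/vulkan.h>

//VkDrawIndirectCommand is {vertexCount, instanceCount, firstVertex, firstInstance},
//i.e. the same {2, 0, 0, 0} layout we initialized earlier; the compute pass above
//has already patched instanceCount, so one call draws every captured line
vkCmdDrawIndirect(cmd, indirectArgsBuffer, /*offset*/ 0, /*drawCount*/ 1,
                  sizeof(VkDrawIndirectCommand));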

Here's a screenshot of all the captured rays:

With path isolation:



If you have followed these steps, you'll be able to get results similar to the above. While capturing the frames, it's best not to move the camera, as the starting point of the rays will change. We'll talk about how to fix this, and how to get a continuous capture going with a detached camera, in Part 2.

If you have any questions/feedback (or find any issues), feel free to add comments. Thanks for taking the time to read this.
