It's been some time (again) since I wrote my last post, as I've been super busy with work and family. With the pandemic keeping me at home, I finally got some time to work on my very own personal GPU path tracer. I've been tweeting regular updates on Twitter under the handle @createthematrix. One particular update caught the attention of many folks on Twitter: visualizing the path trace itself. Here is the link to that post. So I decided to write a blog post about the implementation, as I feel it could help many folks and could be applied for other purposes as well. This feature is very useful for catching issues in a path tracer and for understanding how different BRDFs, sampling schemes, and so on behave.
This is by no means the best way to implement the visualization, but it is an implementation. I have implemented this in C++/HLSL for both the Vulkan and DX12 APIs. My engine abstracts the explicit Vulkan/DX12 API calls behind a graphics interface. Hopefully this page explains the details of the implementation in such a way that you can apply it to your engine as well.
I'll start by explaining how to do a capture from the current camera for a certain number of frames. I'm going to assume that the reader already has a path tracer implemented as a compute shader dispatch where each thread represents a pixel of the output, and that you have a framework to send data to the GPU via constant buffers.
This is the data used for the path entry:
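A sketch of what this struct can look like, with field names and types inferred from how they are used in the snippets below (the exact layout and packing are up to you):

```hlsl
struct PathCaptureEntry
{
    float3 startPosition; // world-space start of the line segment
    float3 endPosition;   // world-space end of the line segment
    uint   bounce;        // bounce index within the path
    uint   pathId;        // which captured path this segment belongs to
    float3 color;         // debug color for the segment
    float  alpha;         // debug alpha for the segment
};
```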
And here are the resources used in the shader:
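A sketch of the declarations, inferred from the identifiers used in the shaders below. The struct names and register assignments here are placeholders; only the members actually referenced in this post are listed:

```hlsl
// Constants referenced in this post (your layout will differ)
struct PathDebugConstants
{
    uint2 mousePosition;          // selected pixel
    uint  maxPathFrameCollection; // how many paths to capture
    uint  maxDebugEntryCount;     // capacity of the entry buffer
    int   pathDebugId;            // -1 = draw all paths
    int   boundDebugId;           // -1 = draw all bounces
};
struct CameraConstants
{
    float4x4 viewProjectionMtx;
    float4   eye;                 // camera position
};
ConstantBuffer<PathDebugConstants> constantsCB       : register(b0);
ConstantBuffer<CameraConstants>    CameraConstantsCB : register(b1);

// Path tracing pass
RWStructuredBuffer<PathCaptureEntry> pathCaptureEntriesUav    : register(u0);
RWBuffer<uint>                       pathCaptureEntryCountUav : register(u1);

// Indirect-args update and draw passes
Buffer<uint>                       pathCaptureEntryCount   : register(t0);
RWBuffer<uint>                     indirectPathDrawArgsUav : register(u2);
StructuredBuffer<PathCaptureEntry> pathCaptureEntries      : register(t1);
Texture2D<float>                   distanceBuffer          : register(t2);
```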
I've excluded all the other resources from the path tracer itself as we're focusing only on the visualization portion. The idea is to generate path entries during the path trace, and then draw them.
Resources Needed
1. Large structured buffer that stores a PathCaptureEntry for each line segment in the path.
2. "Counter" buffer to track how many line segments there are.
3. Indirect arguments buffer for the instanced indirect line draw call. Initialize the values to {2, 0, 0, 0} (2 is the vertex count and 0 is the instance count; the last two 0's are the vertex/instance offsets).
4. Distance buffer to store the distance of the first hit from the camera. I'm not calling it a "depth buffer" as it stores the distance from the camera and not the nonlinear projected depth.

Storing the path entries
Here is the high-level algorithm for writing out the path entries to the buffer:
1. Reset the counter to 0 at the start of the frame.
2. Select a pixel on the screen (mouse click, text entry, or hard-coded) and pass that info to the shader. During path tracing, when you get a hit and the compute shader thread ID matches the selected pixel position, add the path entry to the buffer if there's enough space in it. The following function does this:
void AddPathEntry(uint2 threadId, PathCaptureEntry entry)
{
    //this will ensure only 1 thread writes to the instance count and appends to the list
    if (all(threadId.xy == constantsCB.mousePosition))
    {
        if (entry.pathId < constantsCB.maxPathFrameCollection) //i set this to 1000
        {
            uint currentIndex = 0;
            InterlockedAdd(pathCaptureEntryCountUav[0], 1, currentIndex);
            if (currentIndex < constantsCB.maxDebugEntryCount)
            {
                pathCaptureEntriesUav[currentIndex] = entry;
            }
        }
    }
}

In the path tracing loop, fill in an entry on each hit and append it:
PathCaptureEntry hitEntry;
hitEntry.startPosition = ray.startPos;
hitEntry.endPosition = hitInfo.worldPosition;
hitEntry.bounce = i;
hitEntry.pathId = pathId;
hitEntry.alpha = 1.0f;
hitEntry.color = float3(1.0f, 1.0f, 0.0f);
AddPathEntry(threadId.xy, hitEntry);
For surface normals you could have this:
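Here is a sketch of such a normal-visualization entry, assuming `hitInfo.worldNormal` holds the shading normal and `normalVizScale` is a made-up constant for the drawn length of the normal:

```hlsl
PathCaptureEntry normalEntry;
normalEntry.startPosition = hitInfo.worldPosition;
normalEntry.endPosition   = hitInfo.worldPosition + hitInfo.worldNormal * normalVizScale;
normalEntry.bounce        = i;
normalEntry.pathId        = pathId;
normalEntry.alpha         = 1.0f;
normalEntry.color         = float3(0.0f, 1.0f, 0.0f); // green for normals
AddPathEntry(threadId.xy, normalEntry);
```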
Rendering the path entries

First, a one-thread compute shader copies the captured segment count into the instance count slot (index 1) of the indirect arguments buffer:

[numthreads(1, 1, 1)]
void CS_UpdateCapturePathIndirectArgs(uint3 threadId : SV_DispatchThreadID)
{
    indirectPathDrawArgsUav[1] = pathCaptureEntryCount[0];
}
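For reference, the four values in the indirect arguments buffer follow the standard non-indexed draw layout, which is the same for D3D12 (`D3D12_DRAW_ARGUMENTS`) and Vulkan (`VkDrawIndirectCommand`). A small host-side illustration (the struct and function names here are mine, not from the engine):

```cpp
#include <cstdint>

// CPU-side mirror of the indirect draw argument layout
// (D3D12_DRAW_ARGUMENTS / VkDrawIndirectCommand have the same four fields).
struct DrawIndirectArgs
{
    uint32_t vertexCountPerInstance; // 2: a line segment has two vertices
    uint32_t instanceCount;          // one instance per captured segment
    uint32_t startVertexLocation;
    uint32_t startInstanceLocation;
};

// What CS_UpdateCapturePathIndirectArgs does on the GPU: copy the
// captured entry count into slot 1 (the instance count).
inline void patchInstanceCount(DrawIndirectArgs& args, uint32_t capturedSegments)
{
    args.instanceCount = capturedSegments;
}
```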
The segments are then rendered with an instanced, non-indexed line-list draw: two vertices per instance, one instance per captured segment. The vertex shader picks the start or end position based on the vertex ID:

struct VS_PathDrawInput
{
    uint vertexID : SV_VertexID;
    uint instanceID : SV_InstanceID;
};

struct PS_PathDrawInput
{
    float4 pos : SV_POSITION;
    float4 worldPos : POSITION0;
    float4 col : COLOR0;
};
PS_PathDrawInput VS_PathDraw(VS_PathDrawInput input)
{
    PathCaptureEntry pathEntry = pathCaptureEntries[input.instanceID];
    //choose between start/end based on vertexID
    float3 position = (input.vertexID & 1) ? pathEntry.endPosition : pathEntry.startPosition;
    //collapse segments filtered out by the path/bounce selection to a degenerate line
    if (constantsCB.pathDebugId >= 0 && constantsCB.pathDebugId != pathEntry.pathId)
        position = 0.0f;
    if (constantsCB.boundDebugId >= 0 && constantsCB.boundDebugId != pathEntry.bounce)
        position = 0.0f;

    PS_PathDrawInput output;
    output.pos = mul(float4(position, 1.f), CameraConstantsCB.viewProjectionMtx);
    output.worldPos = float4(position, 1.0f);
    output.col = float4(pathEntry.color, pathEntry.alpha);
    return output;
}
The pixel shader performs a manual depth test against the distance buffer written during path tracing, so that the line segments are correctly occluded by scene geometry:

float4 PS_PathDraw(PS_PathDrawInput input) : SV_Target
{
    float dist = length(input.worldPos.xyz - CameraConstantsCB.eye.xyz); //more optimal to use distSqr
    uint2 pixelPos = (uint2)input.pos.xy;
    float distanceSample = distanceBuffer.Load(uint3(pixelPos, 0)).x;
    // depth test in shader
    if (distanceSample < dist)
        discard;
    return input.col;
}
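Putting it all together, the per-frame sequence on the CPU side looks roughly like this. This is pseudocode against a hypothetical graphics-interface abstraction (the `gfx` object, its method names, and the 8x8 thread group size are made up; map them to your engine's wrappers around Dispatch/ExecuteIndirect or vkCmdDispatch/vkCmdDrawIndirect):

```
// Pseudocode: 'gfx' and these method names are hypothetical.
gfx.ClearBufferUint(pathCaptureEntryCount, 0);        // 1. reset the counter
gfx.Dispatch(pathTraceCS, width / 8, height / 8, 1);  // 2. path trace, calls AddPathEntry
gfx.Dispatch(updateIndirectArgsCS, 1, 1, 1);          // CS_UpdateCapturePathIndirectArgs
gfx.Barrier(indirectPathDrawArgs, IndirectArgument);  // make the args visible to the draw
gfx.SetPrimitiveTopology(LineList);
gfx.DrawInstancedIndirect(pathDrawPipeline, indirectPathDrawArgs); // 2 verts x N instances
```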