This is a continuation of Part 1. The main limitation of the results from Part 1 is that you can only capture a fixed number of frames. Also, if you move the camera during the capture, the rays get recorded from a new origin based on where the camera currently is. For this part I will focus on getting a continuous capture and on detaching the camera during visualization. There are some slight modifications to the algorithm from Part 1. The aim is to replicate the results shown in this video.
Resources Needed
1. Two large structured buffers, each storing a PathCaptureEntry for every line segment in the path.
2. Two "counter" buffers that track how many line segments there are.
3. An indirect arguments buffer for the instanced indirect line draw call, initialized to {2, 0, 0, 0} (2 is the vertex count, 0 is the instance count; the last two 0s are the vertex/instance offsets).
4. A linear depth buffer that stores the depth of the first hit from the camera.
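Concretely, the shaders below assume resource declarations along these lines. This is a minimal sketch: the struct layout is inferred from how the fields are used in this post, and the exact types are guesses.

struct PathCaptureEntry
{
    float3 startPosition;
    float3 endPosition;
    float3 color;
    float alpha;
    uint pathId;
    uint bounce;
};

//ping-ponged pair: the SRV views last frame's data, the UAV this frame's
StructuredBuffer<PathCaptureEntry> pathCaptureEntries;
RWStructuredBuffer<PathCaptureEntry> pathCaptureEntriesUav;
StructuredBuffer<uint> pathCaptureEntryCount;
RWStructuredBuffer<uint> pathCaptureEntryCountUav;
RWBuffer<uint> indirectPathDrawArgsUav; //{vertexCount, instanceCount, vertexOffset, instanceOffset}
Texture2D<float> linearDepth;           //first-hit depth from the camera
StructuredBuffer<uint2> debugMousePosition; //selected pixel, in capture-camera coordinates
RWStructuredBuffer<uint2> debugMousePositionUav;

Continuous capture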
1. If this is the first frame of the capture, we clear pathCaptureEntryCountUav[0] to 0. Otherwise we ping-pong between the two buffers/counters: one holds the previous frame's entries and one receives the current frame's. Every entry from the previous frame has its alpha reduced by "pathCaptureFade", and if the alpha reaches 0 or less it gets rejected; the rest are appended to the current buffer. The following compute shader does that.
[numthreads(64, 1, 1)]
void CS_PathTraceUpdateCapturedPaths(uint3 threadId : SV_DispatchThreadID)
{
    if (threadId.x < pathCaptureEntryCount[0])
    {
        PathCaptureEntry entry = pathCaptureEntries[threadId.x];
        entry.alpha -= constantsCB.pathCaptureFade;
        //we don't care about the pathId if it's a continuous capture
        entry.pathId = (constantsCB.flags & PATHTRACE_FLAGS_CONTINUOUSCAPTURE) ? 0 : entry.pathId;
        //do not copy the entry if alpha <= 0.0f
        if (entry.alpha > 0.0f)
        {
            uint currentIndex = 0;
            InterlockedAdd(pathCaptureEntryCountUav[0], 1, currentIndex);
            if (currentIndex < constantsCB.maxDebugEntryCount)
            {
                pathCaptureEntriesUav[currentIndex] = entry;
            }
        }
    }
}
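One detail worth calling out: this pass reads the previous frame's buffer/counter through the SRVs and appends into the current frame's through the UAVs, so the two sets of bindings swap every frame. Dispatch enough 64-thread groups to cover the previous frame's entry count (for example, sized for maxDebugEntryCount).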
2. Select a pixel on the screen (mouse click/text entry/hard coded) and pass that info to the shader. During path tracing, when you get a hit and the compute shader thread ID matches the pixel position, add the path entry to the buffer if there is enough space for it. This is the same as Part 1.
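For reference, in Part 1 the check keys off the mouse position passed in through the constant buffer. Roughly (my sketch; the exact Part 1 code may differ):

if (all(threadId.xy == constantsCB.mousePosition))
{
    //append the entry (see AddPathEntry further down)
}

The current frame's counter also has to be copied into the indirect argument buffer's instance count every frame, while the other (previous-frame) counter is cleared for reuse: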
[numthreads(1, 1, 1)]
void CS_UpdateCapturePathIndirectArgs(uint3 threadId : SV_DispatchThreadID)
{
    pathCaptureEntryCountUav[0] = 0; //clear the previous frame's counter for reuse
    indirectPathDrawArgsUav[1] = pathCaptureEntryCount[0]; //copy the current frame's counter into the instance count
}
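Element 1 of the indirect args is the instance count, so with the {2, 0, 0, 0} initialization the line draw renders one two-vertex instance per captured segment. The draw itself goes through your API's indirect draw call (e.g. DrawInstancedIndirect in D3D11 or ExecuteIndirect in D3D12).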
Detaching the camera
3. Path trace the scene again, but from the path capture camera. Skip writing out color/distance, but still call the AddPathEntry function.
Steps 2/3 can reuse the same function with an additional bool: void CS_PathTrace_Common(uint3 threadId, bool capturePaths). Inside it, the capture is guarded:

if (capturePaths)
{
    AddPathEntry(threadId.xy, entry);
}
[numthreads(8, 8, 1)]
void CS_PathTrace(uint3 threadId : SV_DispatchThreadID)
{
    CS_PathTrace_Common(threadId, false);
}

[numthreads(8, 8, 1)]
void CS_PathTrace_CapturePaths(uint3 threadId : SV_DispatchThreadID)
{
    CS_PathTrace_Common(threadId, true);
}
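To make the split concrete, here is a skeleton of what CS_PathTrace_Common could look like. It is only a sketch: Ray, HitInfo, GenerateCameraRay, TraceScene, NextBounce, outputColorUav, and constantsCB.maxBounceCount are hypothetical stand-ins for whatever Part 1's path tracing loop actually uses; the point is where the capture call and the skipped color write sit.

void CS_PathTrace_Common(uint3 threadId, bool capturePaths)
{
    Ray ray = GenerateCameraRay(threadId.xy); //hypothetical helper
    float3 color = 0.0f;
    for (uint bounce = 0; bounce < constantsCB.maxBounceCount; ++bounce)
    {
        HitInfo hit = TraceScene(ray); //hypothetical helper
        if (!hit.valid)
            break;
        if (capturePaths)
        {
            PathCaptureEntry entry;
            entry.startPosition = ray.origin;
            entry.endPosition = hit.position;
            entry.bounce = bounce;
            //...fill in pathId/color/alpha...
            AddPathEntry(threadId.xy, entry);
        }
        //...accumulate color, pick the next direction...
        ray = NextBounce(ray, hit); //hypothetical helper
    }
    if (!capturePaths)
    {
        //the capture pass skips the color/distance writes
        outputColorUav[threadId.xy] = float4(color, 1.0f);
    }
}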
The vertex shader for the line draw fetches one PathCaptureEntry per instance and picks the start or end position per vertex:

PS_PathDrawInput VS_PathDraw(VS_PathDrawInput input)
{
    PathCaptureEntry pathEntry = pathCaptureEntries[input.instanceID];
    //choose between start/end based on vertexID
    float3 position = (input.vertexID & 1) ? pathEntry.endPosition : pathEntry.startPosition;
    if (constantsCB.pathDebugId >= 0 && constantsCB.pathDebugId != pathEntry.pathId)
        position = 0.0f;
    if (constantsCB.boundDebugId >= 0 && constantsCB.boundDebugId < pathEntry.bounce) //this rejects bounces after the specified one
        position = 0.0f;
    PS_PathDrawInput output;
    output.pos = mul(float4(position, 1.f), CameraConstantsCB.viewProjectionMtx);
    output.worldPos = float4(position, 1.0f);
    output.col = float4(pathEntry.color, pathEntry.alpha);
    return output;
}
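For completeness, a minimal matching pixel shader (my sketch; it ignores the worldPos output) just passes the interpolated color through:

float4 PS_PathDraw(PS_PathDrawInput input) : SV_Target
{
    return input.col;
}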
Start capture while detached
When the camera is detached, the pixel you click is in the detached camera's view, so it has to be remapped to the capture camera's pixel coordinates. The compute shader below reconstructs the clicked world-space position from the linear depth buffer, projects it with the capture camera, and stores the resulting pixel position on the GPU.
[numthreads(1, 1, 1)]
void CS_UpdateMousePosition(uint3 threadId : SV_DispatchThreadID)
{
    //all this can probably be simplified.
    const CameraData camera = CameraConstantsCB;
    const CameraData cameraCapture = CameraCaptureConstantsCB;

    //convert mouse pos to a world space selected position
    float2 mousePos = (float2)constantsCB.mousePosition + 0.5f;
    float2 ndcPos = mousePos * constantsCB.invResolution;
    ndcPos.y = 1.0f - ndcPos.y;     //flip y
    ndcPos = ndcPos * 2.0f - 1.0f;  //convert from [0 1] to [-1 1]
    ndcPos.x *= camera.aspectRatio; //apply aspect ratio
    ndcPos *= camera.tanFOV;        //apply field of view
    float3 viewSpaceRay = float3(ndcPos, 1.0f);
    viewSpaceRay = normalize(viewSpaceRay);
    float3 worldSpaceRay = mul(viewSpaceRay, (float3x3)camera.inverseViewMtx);
    float linearDepthSample = linearDepth.Load(uint3(mousePos, 0)).x;
    float3 worldSpacePos = camera.eye.xyz + worldSpaceRay * linearDepthSample;

    //convert the world space position to capture camera space
    float4 capturePos = mul(float4(worldSpacePos, 1.0f), cameraCapture.viewProjectionMtx);
    capturePos.xy /= capturePos.w;
    capturePos.xy = capturePos.xy * 0.5f + 0.5f;
    capturePos.y = 1.0f - capturePos.y;
    float2 mouseCapturePos = capturePos.xy * PathTraceConstantsCB.resolution;

    //update the mouse position used by AddPathEntry
    debugMousePositionUav[0].xy = uint2(mouseCapturePos);
}
Now that the transformed pixel position is stored on the GPU, we need to update the AddPathEntry function to use the GPU resource instead (the updated part is called out in the comment below).

void AddPathEntry(uint2 threadId, PathCaptureEntry entry)
{
    //this will ensure only 1 thread writes to the instance count and appends to the list
    if (all(threadId.xy == debugMousePosition[0])) //instead of constantsCB.mousePosition
    {
        if (entry.pathId < constantsCB.maxPathFrameCollection) //I set this to 1000
        {
            uint currentIndex = 0;
            InterlockedAdd(pathCaptureEntryCountUav[0], 1, currentIndex);
            if (currentIndex < constantsCB.maxDebugEntryCount)
            {
                pathCaptureEntriesUav[currentIndex] = entry;
            }
        }
    }
}
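Because the selected pixel now lives in a GPU buffer that CS_UpdateMousePosition rewrites every frame, the capture keeps working while detached: whatever pixel you pick in the free camera gets remapped into the capture camera's coordinates before the capture pass runs.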