
Ray traced shadow bias without normal

Published on 2020-12-06 06:07:01

I'm following Infinity Ward's approach to ray-traced shadows, where they reconstruct the world position from the depth buffer and cast a ray from it towards the light. I get really bad shadow acne with this method at 1 ray per pixel, probably because of numerical precision errors. The obvious fix is to move the world position a small amount along the normal, but I don't have access to normals. I figure the acne might go away once I shoot multiple rays per pixel, but for performance reasons I'm trying to stick to 1. Do I have any options, or am I just doomed without access to normals?

#version 460
#extension GL_NV_ray_tracing : require

// Declarations assumed by the snippet; binding points are illustrative.
layout(binding = 0) uniform accelerationStructureNV AS;
layout(binding = 1) uniform sampler2D depthTexture;
layout(binding = 2, rgba8) uniform image2D shadowTexture;
layout(push_constant) uniform PushConstants {
    mat4 inverseViewProjection;
    vec4 light_direction;
} pc;

layout(location = 0) rayPayloadNV vec3 payload;

void main() {
    const vec2 pixelCenter = vec2(gl_LaunchIDNV.xy) + vec2(0.5);
    const vec2 uv = pixelCenter / vec2(gl_LaunchSizeNV.xy);

    uint rayFlags = gl_RayFlagsOpaqueNV | gl_RayFlagsSkipClosestHitShaderNV | gl_RayFlagsTerminateOnFirstHitNV;
    float tMin = 0.001;
    float tMax = 10000.0;

    // sample the current depth
    float depth = texture(depthTexture, uv).r;

    // reconstruct the world position of the pixel from the depth buffer
    vec3 origin = reconstructPosition(uv, depth, pc.inverseViewProjection);

    // the ray direction points toward the light, i.e. opposite the light's direction
    vec3 direction = normalize(-pc.light_direction.xyz);

    // everything is considered in shadow until the miss shader is called
    payload = vec3(0);

    traceNV(AS, rayFlags, 0xFF, 0, 0, 0, origin, tMin, direction, tMax, 0);

    // store either the original 0, or 1 if the miss shader executed;
    // if the miss shader executes, the light is 'reachable' from this pixel's position
    imageStore(shadowTexture, ivec2(gl_LaunchIDNV.xy), vec4(payload, 1.0));
}
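
For reference, a minimal sketch of what the reconstructPosition helper might look like; this is not from the original post, and it assumes a Vulkan-style depth range of [0, 1] with pc.inverseViewProjection being the inverse of the combined view-projection matrix:

// Hypothetical helper, assumed rather than taken from the post:
// unprojects a depth-buffer sample back to world space.
vec3 reconstructPosition(vec2 uv, float depth, mat4 inverseViewProjection) {
    // uv in [0, 1] maps to NDC x/y in [-1, 1]; Vulkan depth is already
    // in [0, 1]. Depending on your conventions a y-flip may be needed.
    vec4 ndc = vec4(uv * 2.0 - 1.0, depth, 1.0);
    vec4 world = inverseViewProjection * ndc;
    return world.xyz / world.w; // perspective divide
}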
Questioner: Nico van Bentum
Answered by Firnor on 2020-12-06 16:51:42

You could reconstruct an approximation of the normal by looking up the reconstructed world positions of neighboring pixels (e.g. offset by 1 in the x and y directions) and computing the cross product of the direction vectors from the current pixel's position to those neighbors.

So for example:

vec3 P  = reconstructPositionFromPixel(pixelCenter);                  // current pixel's world position
vec3 Px = reconstructPositionFromPixel(pixelCenter + vec2(1.0, 0.0)); // right neighbor
vec3 Py = reconstructPositionFromPixel(pixelCenter + vec2(0.0, 1.0)); // bottom neighbor
vec3 Tx = normalize(Px - P); // approximation of the tangent
vec3 Ty = normalize(Py - P); // approximation of the bitangent
vec3 N  = normalize(cross(Tx, Ty)); // approximate geometric normal
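
With that approximate normal, the 'obvious fix' from the question becomes available: offset the ray origin a small amount along N before tracing. A minimal sketch, where the bias value is an illustrative guess that needs tuning to the scene's scale:

// Nudge the reconstructed world position (the 'origin' from the question's
// raygen shader) along the approximate normal to escape self-intersection.
const float shadowBias = 0.01; // illustrative value, tune per scene
vec3 biasedOrigin = P + N * shadowBias;
traceNV(AS, rayFlags, 0xFF, 0, 0, 0, biasedOrigin, tMin, direction, tMax, 0);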

Note that this produces quite good results as long as the depth values change smoothly. However, it fails completely at a depth edge, i.e. where two very different depth values meet (e.g. two objects at different distances from the camera). You can detect this case by measuring the distance from the current position P to Px and Py and, based on a heuristic threshold, simply rejecting the approximate normal; see the sketch below. Also, this method cannot recover normal information from normal maps; it only reconstructs the geometric normal implied by the depth buffer.
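
That rejection heuristic could look roughly like this (the threshold is an assumed, scene-dependent value):

// Hypothetical depth-edge test: if a neighbor's reconstructed position is
// too far from the current one, the normal estimate straddles an edge and
// should not be trusted.
const float edgeThreshold = 0.5; // world-space units, tune per scene
bool depthEdge = distance(P, Px) > edgeThreshold ||
                 distance(P, Py) > edgeThreshold;
if (depthEdge) {
    // fall back to the unbiased origin (or a slightly larger tMin)
    biasedOrigin = P;
}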