Moving triangle along normal to the next closest depth buffer value in vertex shader

We can assume modern OpenGL for this problem, though I don’t know if that changes anything.

My problem revolves around two quads that are at the exact same location, but one has to be rendered on top of the other. Obviously this is z-fighting waiting to happen.

I want to transform it in the shader by moving the two triangles from the ‘front quad’ along its normal by some distance D until it’s in front of the other one such that no z-fighting happens.

Since this unfortunately happens to a lot of the triangles in the engine (this is beyond my control due to the nature of the map data), my solution was as follows:

  • Put the ‘behind’ triangles into their own VBO, render them, and be done with them
  • Put the triangles that need to be in front into another VBO and give each vertex its normal
  • Calculate the would-be depth value from the (uniformly set) camera position
  • Use that value and somehow(???) find the next smallest depth value that won’t fight with the depth buffer
  • Move the two triangles along their normal by this small distance so the front quad has no z-fighting problems (see the sketch after this list)
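In vertex-shader terms I’m picturing something like this (u_mvp and u_offsetWorld are just placeholder names I made up; how to actually choose u_offsetWorld is exactly what I’m asking):

[code]
// Hypothetical vertex shader for the "front" VBO (stored here as a C++
// string for glShaderSource). Every vertex is pushed along its normal by a
// small world-space distance before projection; picking that distance so it
// reliably lands on the next representable depth value is the open problem.
const char* frontLayerVS = R"(
#version 330 core
layout(location = 0) in vec3 a_position;
layout(location = 1) in vec3 a_normal;

uniform mat4  u_mvp;          // model-view-projection matrix
uniform float u_offsetWorld;  // world-space push distance D (the unknown)

void main()
{
    vec3 pushed = a_position + normalize(a_normal) * u_offsetWorld;
    gl_Position = u_mvp * vec4(pushed, 1.0);
}
)";
[/code]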

Everything is straightforward except for that last conversion: finding the next smaller depth value, the one that will stop z-fighting for the specific camera position we provided. Is there some easy way to do this? Getting the distance from the camera to the vertex and converting that to a depth value isn’t hard. What I’m struggling with is how to find out how much I need to move without accidentally making the movement so small that it doesn’t change the value at all.

Since I’m potentially going to be running this on a ton of triangles (let’s say 50k–100k per frame), I can’t be doing anything too taxing. Maybe that’s trivial for GPUs nowadays, though.

I want them to be as close together as possible so the user can’t tell that they are separate. Up close a tiny offset should look fine, and far away, even though the depth buffer’s precision forces a larger offset, the quads will be distant enough that the user won’t be able to notice.

How can I do this? Or is there another way?

I don’t know if there’s some kind of easy approximation I can do to avoid screwing around with ULP/numerical method stuff (which is not my strength).

Check out Eric Lengyel’s projection matrix trick that does this. Some links on that here:

[ul]
[li]Re: How to draw one line on top of another in OpenGL without Z-fighting [/li][/ul]

[quote]My problem revolves around two quads that are at the exact same location, but one has to be rendered on top of the other. Obviously this is z-fighting waiting to happen.

I want to transform it in the shader by moving the two triangles from the ‘front quad’ along its normal by some distance D until it’s in front of the other one such that no z-fighting happens.[/quote]

Now what you’re talking about starts to sound like the Normal Offset shadow mapping technique. Link:

[ul]
[li]GDC_Poster_NormalOffset.png [/li][/ul]
In case you haven’t realized it yet, a bunch of prior techniques have been cooked up to address the self-shadowing problem in shadow mapping (which leads to “shadow acne”), and they are right in line with what you’re asking for. Here’s a good illustration of the self-shadowing problem: Shadow acne (referencing page).

Just do a web search and you’ll find many of them, including: polygon offset, constant bias, normal offset, depth gradient, dual depth layer, midpoint, Lengyel’s projection matrix trick, etc.

If the quads have identical vertex coordinates, then each fragment will have identical depth values in both quads, and you’ll end up with the fragments from either the first quad drawn (with glDepthFunc(GL_LESS)) or the last quad drawn (with glDepthFunc(GL_LEQUAL)). Z-fighting occurs for primitives which are coplanar but with different vertices.
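For instance (a sketch with placeholder VAO handles and vertex counts, assuming the two layers really are vertex-for-vertex identical so their depths tie exactly):

[code]
#include <GL/glew.h>   // or whatever GL loader/header you already use

// Sketch under the assumption above: both quads produce bit-identical depth
// values, so the depth comparison alone decides which one "wins".
void drawCoincidentQuads(GLuint backVAO, GLsizei backCount,
                         GLuint frontVAO, GLsizei frontCount)
{
    glEnable(GL_DEPTH_TEST);

    // Back layer first, with the usual comparison.
    glDepthFunc(GL_LESS);
    glBindVertexArray(backVAO);
    glDrawArrays(GL_TRIANGLES, 0, backCount);

    // Front layer second: equal depths now pass, so its fragments replace
    // the back layer's instead of fighting with them.
    glDepthFunc(GL_LEQUAL);
    glBindVertexArray(frontVAO);
    glDrawArrays(GL_TRIANGLES, 0, frontCount);

    glDepthFunc(GL_LESS);   // restore the default comparison
}
[/code]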

Why along its normal? Moving it toward the viewer is simpler; it doesn’t need the normal and avoids the need to scale the distance depending upon the angle with the viewer.

If you move them in the -Z direction in NDC, you avoid the issues with depth precision varying with Z. If you’re using multiple draw calls, the simplest solution is to change either the depth transformation (glDepthRange()) or the projection transformation (adding a positive offset to m[2][2] will add a constant negative offset to all depth values). For a 24-bit depth buffer, you want to add roughly -2^-24 ~= -6e-8 to all depth values (if you’re modifying the projection matrix, double this value, as converting from Z in [-1,1] to depth in [0,1] halves the scale). You may need to increase that value to account for rounding errors, but that’s the appropriate order of magnitude.
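As a sketch of the projection-matrix option (using glm purely for illustration; glm indexes matrices as [column][row], so proj[2][2] is the element that maps eye-space Z into clip-space Z):

[code]
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch: build one projection matrix per layer. The fov/near/far values
// are placeholders.
void buildLayerProjections(float aspect, glm::mat4& projBack, glm::mat4& projFront)
{
    projBack = glm::perspective(glm::radians(60.0f), aspect, 0.1f, 1000.0f);

    // Adding a small positive value to [2][2] shifts every NDC Z drawn with
    // this matrix by roughly -1.2e-7, i.e. about -6e-8 (one 24-bit depth
    // step) after the [-1,1] -> [0,1] depth conversion. Increase it if
    // rounding still lets the layers fight.
    projFront = projBack;
    projFront[2][2] += 1.2e-7f;
}

// Draw the back VBO with projBack and the front VBO with projFront
// (uploaded with glUniformMatrix4fv); the shaders themselves don't change.
[/code]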

Or you could achieve the same effect as modifying the projection matrix by modifying gl_Position directly, in which case you need roughly:
[var]gl_Position.z += -1.2e-7 * gl_Position.w;[/var]

See glPolygonOffset.
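For example (a sketch with placeholder VAO names; -1/-1 is just a common starting point for the factor and units arguments, not something tuned for your data):

[code]
#include <GL/glew.h>   // or whatever GL loader/header you already use

// Sketch: draw the back layer normally, then pull the front layer toward the
// viewer in depth only. glPolygonOffset's first argument scales with the
// polygon's depth slope, the second with the smallest resolvable depth step.
void drawWithPolygonOffset(GLuint backVAO, GLsizei backCount,
                           GLuint frontVAO, GLsizei frontCount)
{
    glBindVertexArray(backVAO);
    glDrawArrays(GL_TRIANGLES, 0, backCount);

    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(-1.0f, -1.0f);       // negative = closer to the viewer
    glBindVertexArray(frontVAO);
    glDrawArrays(GL_TRIANGLES, 0, frontCount);
    glDisable(GL_POLYGON_OFFSET_FILL);
}
[/code]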