Early depth testing

OpenGL says that if the depth function is GL_LESS and the layout qualifier is depth_less, then OpenGL will perform the early depth test.
Now, if the value already in the depth buffer is 0.5 and the interpolated depth for a particular pixel is 0.8, it will fail the early depth test. But if the shader modifies the value to 0.4, it should not fail. How does this work in this case?

Where? That seems backwards.

layout(depth_less) tells the implementation to assume that the value written to gl_FragDepth will be less than or equal to the depth value obtained by interpolation (i.e. the value which would be written to the depth buffer if the program didn’t statically assign to gl_FragDepth).

Suppose that the depth-test function is GL_LESS. If the interpolated value is less than the value in the depth buffer, the test is guaranteed to pass so the fragment shader must be run. If the interpolated value is greater than the value in the depth buffer, the value written to gl_FragDepth might still be less than the value in the depth buffer, so again the fragment shader must be run.

Conversely, if the depth test was GL_GREATER, in the case where the interpolated value is less than the value in the depth buffer, the layout(depth_less) qualifier indicates that the value written to gl_FragDepth will also be less than the value in the depth buffer, so there’s no need to run the shader.

Essentially, the implementation needs to distinguish between “will fail” and “could pass”, not between “will pass” and “could fail”. Any “don’t know” cases must be treated as a potential pass, with the definitive depth test being performed once the fragment shader has been run and calculated the final value of gl_FragDepth.

[QUOTE=GClements;1272296]Essentially, the implementation needs to distinguish between “will fail” and “could pass”, not between “will pass” and “could fail”. Any “don’t know” cases must be treated as a potential pass, with the definitive depth test being performed once the fragment shader has been run and calculated the final value of gl_FragDepth.[/QUOTE]

Thanks for the clarification. I was under the assumption that the depth test occurs only once, so if there is an early depth test it won’t occur again after the fragment shader.

If the fragment shader uses layout(early_fragment_tests) (originally from ARB_shader_image_load_store), the stencil and depth tests are performed prior to executing the fragment shader, and stencil and depth updates and occlusion query sample counts are based upon the early tests. Even if the fragment shader is run, the value written to the depth buffer is the interpolated value used for the early depth test, not from any value written to gl_FragDepth.

AIUI, layout(depth_less) etc (from AMD_conservative_depth) are supposed to be transparent so long as the actual value written to gl_FragDepth obeys the declared relationship with the interpolated value. In particular, if the fragment shader is actually run, the value written to the depth buffer will be that written to gl_FragDepth, regardless of whether it obeys the declared relationship.