How to display contents of p-buffer depth buffer?

What is the easiest way to display the contents of a p-buffer’s depth buffer for debugging purposes?

After I render a scene to a p-buffer I want to visualize the contents of the depth buffer.

Assume I have a display buffer and a p-buffer, both created with 24 bits of color, 8 bits of alpha, and 24 bits of depth, on recent NVIDIA hardware and drivers.

Thank you.

Use glReadPixels to read the depth buffer, then display the result as a monochrome image.
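Something along these lines, roughly (a sketch; the ShowDepth name is made up, and the width/height and context switching are whatever your app uses):

#include <GL/gl.h>
#include <vector>

// Sketch: with the p-buffer context current, read the depth buffer,
// then draw it into the display context as a grayscale image.
// GL converts the [0,1] depth values to bytes during the read.
void ShowDepth(int width, int height)
{
    std::vector<GLubyte> depth(width * height);
    glReadPixels(0, 0, width, height,
                 GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, &depth[0]);

    // ... make the display context current here ...

    glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, &depth[0]);
}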

Mikael

Mikael,

Thank you for the quick response. I’ll use glReadPixels.

robo

I must be doing something wrong, because all I get is a white screen.

First I make the PBuffer context current, then render a scene into it.
To read from the “current” PBuffer:
glReadPixels( 0, 0, bufferWidth, bufferHeight, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, (void*)pPixels );

Then I make the display context current and use the following to write to the back buffer:
glDrawPixels( bufferWidth, bufferHeight, GL_LUMINANCE, GL_UNSIGNED_BYTE, pPixels );

Then I swap the double-buffered display. Nothing but white shows up.

I’ve confirmed the single-buffered PBuffer is created with a pixel format that has 24 depth bits (in addition to the color and alpha bits).

I’ve also confirmed the near/far clipping planes on the projection are set correctly to just barely encompass the scene geometry (to get the most contrast out of the depth bits).
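For example, something like this (values made up for illustration):

#include <GL/gl.h>
#include <GL/glu.h>

// Hypothetical values: a tight near/far range spreads the scene's
// depth across more of the [0,1] range the depth buffer stores.
void SetTightProjection()
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, 1.0, /*zNear=*/5.0, /*zFar=*/15.0);
    glMatrixMode(GL_MODELVIEW);
}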

What am I missing?

Thank you.

Hi!

The docs say the depth value is always clamped to [0, 1], so it may not work with GL_UNSIGNED_BYTE. Try GL_FLOAT and see if that works; in that case you may have to scale the values to unsigned bytes yourself.
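A sketch of that approach (the ReadDepthAsBytes name is just for illustration; the min/max stretch is optional and only there to maximize contrast):

#include <GL/gl.h>
#include <algorithm>
#include <vector>

// Sketch: read the depth buffer as floats, then rescale to
// unsigned bytes by hand before handing them to glDrawPixels.
void ReadDepthAsBytes(int width, int height, std::vector<GLubyte>& out)
{
    std::vector<GLfloat> depth(width * height);   // note: a float buffer
    glReadPixels(0, 0, width, height,
                 GL_DEPTH_COMPONENT, GL_FLOAT, &depth[0]);

    GLfloat lo = *std::min_element(depth.begin(), depth.end());
    GLfloat hi = *std::max_element(depth.begin(), depth.end());
    GLfloat range = (hi > lo) ? (hi - lo) : 1.0f;

    out.resize(depth.size());
    for (size_t i = 0; i < depth.size(); ++i)
        out[i] = (GLubyte)(255.0f * (depth[i] - lo) / range);
}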

Mikael

I forgot to mention that I had tried GL_FLOAT on both the read and draw before trying GL_UNSIGNED_BYTE. No difference.

I’ll try reading with GL_FLOAT and scaling/converting to GL_UNSIGNED_BYTE myself before drawing.

(sorry if this is a duplicate - the forum server started kicking out errors during my post, so I’m re-posting).

I wrote a quick loop just to give me the min/max of the values read. Stepping through the loop (ignoring min/max for the moment), the results are really odd.

When I do the glReadPixels from the PBuffer depth buffer using GL_UNSIGNED_INT, I get all 255 values. When I do the same read using GL_FLOAT, I get a repeating pattern of 128, 63, 0, 0 throughout the entire float array.

Any clues why that is? (not to mention my glReadPixels is REALLY slow - 1024x1024 takes almost two seconds to complete the read, even though everything else is hardware accelerated)

Embarrassingly enough, I solved the mystery of the repeating pattern when reading in floats: 128, 63, 0, 0 are the individual bytes that make up the floating-point value 1.0 when you accidentally read the floats into a buffer of bytes.
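A tiny standalone check (no GL required) shows those bytes on a little-endian x86 machine:

#include <cstdio>
#include <cstring>

// The bytes of the float 1.0f (0x3F800000) viewed as unsigned chars.
// On little-endian x86 this prints: 0 0 128 63 0 0 128 63
int main()
{
    float depth[2] = { 1.0f, 1.0f };
    unsigned char bytes[sizeof(depth)];
    std::memcpy(bytes, depth, sizeof(depth));
    for (unsigned i = 0; i < sizeof(depth); ++i)
        std::printf("%u ", (unsigned)bytes[i]);
    std::printf("\n");
    return 0;
}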

Okay, that still doesn’t solve the problem of the missing depth buffer data. Whether read as GL_UNSIGNED_BYTE or as GL_FLOAT, I still get all white, with no variation on my min/max check.

My glReadPixels/glDrawPixels code worked fine on the RGBA buffers.

Am I missing something obvious? Surely a PBuffer can have a depth buffer; otherwise the ChoosePixelFormat call would fail with the depth bits and the hardware-acceleration bit set in the attributes.
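For reference, an attribute list along those lines looks roughly like this (a sketch using the WGL_ARB_pixel_format / WGL_ARB_pbuffer tokens; assumes wglChoosePixelFormatARB has already been loaded):

#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>  // WGL_ARB_pixel_format / WGL_ARB_pbuffer tokens

// Sketch: ask for a hardware-accelerated p-buffer format with a
// 24-bit depth buffer (matching the 24/8/24 format described above).
const int attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
    WGL_ACCELERATION_ARB,    WGL_FULL_ACCELERATION_ARB,
    WGL_COLOR_BITS_ARB,      24,
    WGL_ALPHA_BITS_ARB,      8,
    WGL_DEPTH_BITS_ARB,      24,
    0                        // end of list
};

// int format; UINT count;
// wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &count);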

Any clues are much appreciated.

Thank you.

Mystery solved!

I had GL_DEPTH_TEST disabled. I never realized (before tonight) that turning off the depth test also turns off updating of the depth buffer, in addition to the testing itself.

That’s almost as intuitive as glDepthMask having to be TRUE to not mask and FALSE to mask.
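For anyone who finds this thread later, here’s the state that has to be on before rendering (a minimal sketch):

#include <GL/gl.h>

// Depth values are written only when the depth test is enabled and
// depth writes are unmasked; disabling GL_DEPTH_TEST skips both the
// test and the write. Assumes a current GL context.
void PrepareDepthWrites()
{
    glEnable(GL_DEPTH_TEST);   // enables testing *and* depth-buffer updates
    glDepthMask(GL_TRUE);      // GL_TRUE = writes allowed, GL_FALSE = masked
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}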

Anyway, problem solved. The PBuffer depth buffer is working fine. Of course, all the ways we discussed to display the contents of the depth buffer worked; there was just nothing being written to the depth buffer because GL_DEPTH_TEST was disabled.

Thank you for your help, Mikael! (And all of you frequent contributors who make this forum so helpful.)