depth buffer -> depth texture -> depth buffer

i’m trying to draw a scene, capture the color and depth buffers to separate textures (using glCopyTexImage2D), then later draw the color and depth info in the textures back into the buffers so i don’t have to redraw the scene (which is expensive) but can still draw other geometry into the scene correctly.

this obviously works fine using a combination of glReadPixels/glDrawPixels, but is much slower.

my question is: how do you use the ARB_depth_texture extension with glCopyTexImage2D and GL_DEPTH_COMPONENT to update the depth buffer? the color part is easy. but with the depth part, right now what’s happening is the POT screen-sized quad i draw with the depth texture sets the depth buffer to the depth of the quad, not the values in the texture.
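
for reference, the capture side looks roughly like this (colorTex/depthTex/texW/texH are just placeholders, POT and at least window-sized):

```c
/* after rendering the expensive scene into the back buffer */
glBindTexture(GL_TEXTURE_2D, colorTex);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
                 0, 0, texW, texH, 0);

glBindTexture(GL_TEXTURE_2D, depthTex);              /* needs ARB_depth_texture */
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB,
                 0, 0, texW, texH, 0);
```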

i saw one thread in the forum on this topic, but it was never resolved. i remember an nvidia demo that drew depth sprites (a bunch of pre-rendered spheres composited on top of each other and correctly depth tested), but i can’t find it.

i can’t seem to find in the specs how to write the depth info stored in the texture into the depth buffer! help!

ps
if possible, i’d like to find a non-pixel-shader solution.

I’d be interested in a reply to this also. Is there any way to eliminate the copy by doing something like rendering to a pbuffer and binding depth as a texture?

edit - I see that nvidia has an extension to do this (WGL_NV_render_depth_texture, I believe), but apparently nothing for ati… what would be nice is a way to detach a depth buffer from one color buffer and attach it to another…

You’ll have to use a fragment program that, besides writing the color, also writes the depth. (With the depth you read from your depth texture, obviously :smiley: )
I think it is also possible on a GF4 to write the depth, but you’ll have to set up the texture shaders correctly (if I remember correctly there is a dot-product depth-replace texture shader operation that does the depth writing.)
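
Something along these lines, I think (a sketch only - it assumes the depth texture is on unit 0 with compare mode off so it samples as luminance, the color texture is on unit 1, and the usual wglGetProcAddress plumbing for the ARB entry points):

```c
static const char depth_restore_fp[] =
    "!!ARBfp1.0\n"
    "TEMP d, c;\n"
    "TEX d, fragment.texcoord[0], texture[0], 2D;\n"   /* saved depth */
    "TEX c, fragment.texcoord[0], texture[1], 2D;\n"   /* saved color */
    "MOV result.color, c;\n"
    "MOV result.depth.z, d.x;\n"                       /* replace the fragment depth */
    "END\n";

GLuint prog;
glGenProgramsARB(1, &prog);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)strlen(depth_restore_fp), depth_restore_fp);

glEnable(GL_FRAGMENT_PROGRAM_ARB);
glDepthFunc(GL_ALWAYS);            /* depth isn't written at all if the test is off */
/* ... draw the screen-sized quad with texcoords on unit 0 ... */
glDepthFunc(GL_LESS);
glDisable(GL_FRAGMENT_PROGRAM_ARB);
```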

Charles

Code Monkey, I believe WGL_ARB_buffer_region is more appropriate for what you’re trying to do.

http://oss.sgi.com/projects/ogl-sample/registry/ARB/wgl_buffer_region.txt
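
Rough usage would be something like this (hdc/width/height are placeholders; the entry points come from wglGetProcAddress):

```c
/* one region covering the back color buffer and the depth buffer */
HANDLE region = wglCreateBufferRegionARB(hdc, 0,
                    WGL_BACK_COLOR_BUFFER_BIT_ARB | WGL_DEPTH_BUFFER_BIT_ARB);

/* right after the expensive scene render */
wglSaveBufferRegionARB(region, 0, 0, width, height);

/* on later frames, instead of redrawing the scene */
wglRestoreBufferRegionARB(region, 0, 0, width, height, 0, 0);

/* cleanup */
wglDeleteBufferRegionARB(region);
```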

Greetz,

Nico

A thought - Is it possible to copy zbuffer to zbuffer by using glCopyPixels() and setting the source/dest buffers appropriately with glReadBuffer/glDrawBuffer?

This would be better than glReadPixels/glDrawPixels and could be used on ATI cards which don’t seem to support WGL_ARB_buffer_region.
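
Something like this, I suppose (srcX/srcY/width/height are placeholders; the GL_ALWAYS depth func matters because the copied depth fragments still go through the depth test, and depth isn’t written at all while the test is disabled):

```c
glReadBuffer(GL_BACK);
glDrawBuffer(GL_BACK);

/* identity matrices so glRasterPos2f(-1,-1) lands at window (0,0) */
glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();
glRasterPos2f(-1.0f, -1.0f);

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_ALWAYS);                            /* force the copied depth to be written */
glCopyPixels(srcX, srcY, width, height, GL_DEPTH);
glCopyPixels(srcX, srcY, width, height, GL_COLOR);
glDepthFunc(GL_LESS);

glMatrixMode(GL_MODELVIEW);  glPopMatrix();
glMatrixMode(GL_PROJECTION); glPopMatrix();
```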

thanks nico, my student discovered that too and it does do what we need.
we were also trying to get pixel buffers to work with glReadPixels/glDrawPixels and had problems, so this turned out to be a good solution.

too bad something similar doesn’t exist within GLX…

stephen - what you suggest also works, but only within the same context (although i think there is a wgl call for copying between contexts, which is what we had problems with when using pbuffers). sadly this is not a good solution for our app, since we would have to have a window twice as big, for example, just to hold the temporary depth and color buffers (which are needed every frame but rarely updated themselves).

Something not mentioned before… Maybe you are pumping too much data across your AGP bus. That could considerably slow down your implementation.

I got what Stephen suggested working… sort of. I posted about it here. It only worked for single-buffered windows.

Unfortunately, the workaround was going to be painful for me in other parts of my program. Fortunately, I got ARB_buffer_region working correctly, although it’s not supported on ATI cards, so my users will have to use something else for the time being.

I have the exact same question. Unfortunately, I’m using GLX and I do not have that WGL extension, although I do have Kinetix’s GL_KTX_buffer_region. The specs on that one are hard to find, though, and it doesn’t seem like an extension I can rely on if I want maximum portability.

I want to save/restore the depth buffer state but glReadPixels/glDrawPixels is just way too slow. And, like Codemonkey said, using textures doesn’t work because the depth buffer ends up filled with the quad’s depth values, not the depth components stored in the texture.

I feel like there HAS to be a fast way to do this that doesn’t involve fragment shaders but I can’t figure it out. Has anybody ever managed to write to the depth buffer without using the WGL/KTX extensions or Read/DrawPixels?

Looks like this new extension may do what you want:
http://oss.sgi.com/projects/ogl-sample/registry/EXT/framebuffer_blit.txt

(When it becomes supported)
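
For reference, once drivers expose it the restore would be roughly this (savedFbo/w/h are placeholders for an FBO holding the saved color and depth attachments):

```c
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, savedFbo);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, 0);   /* 0 = the window framebuffer */
glBlitFramebufferEXT(0, 0, w, h,                    /* source rectangle      */
                     0, 0, w, h,                    /* destination rectangle */
                     GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT,
                     GL_NEAREST);                   /* depth blits must use GL_NEAREST */
```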