Could OpenGL render to main memory directly?

I want to save the color buffer in my OpenGL program. It takes me 0.6 seconds to get a 1024x768x16 image to main memory using glReadPixels. Is there another way to get the rendered image quickly? If the GeForce 256II could render to main memory directly, all my problems would disappear.

Thanks

Hmm… With glReadPixels, can I minimize my program and have it read the pixels from another OpenGL app?

The spec says that reading data from areas of a graphics context which are not exposed results in undefined values.

In plain English: you cannot expect correct data from areas which are overlapped by another window, pushed off the desktop, etc.

So minimized windows don't deliver anything. [Aha, saw the post in the other forum. It's the other way round.
If you know the window handle of the other program's window displaying the content to be snapped, you could get the HDC and probably the OpenGL rendering context from there. But this is highly unlikely to work for OpenGL. It will work with GDI on everything you see on the desktop, though.]

For the performance of glReadPixels: the performance may depend on the driver version. Always try the newest release (~5.30).

Another thing is that 16-bit color resolution needs more conversions than 32-bit, though at 32 bits it's twice the data to read without conversions.

Another method you could try (works only for the front buffer) would be a GDI BitBlt to a device-independent bitmap in host memory. That should be pretty fast.
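For reference, a rough sketch of that BitBlt approach, assuming Win32 GDI; the helper name and the hwnd/width/height parameters are just placeholders:

```c
#include <windows.h>

/* Copy a window's visible (front-buffer) contents into a 32-bit
   DIB section in host memory via GDI BitBlt. Error checks omitted. */
void *CaptureFrontBuffer(HWND hwnd, int width, int height, HBITMAP *outDib)
{
    HDC winDC = GetDC(hwnd);                /* DC of the window to snap */
    HDC memDC = CreateCompatibleDC(winDC);  /* in-memory target DC */

    BITMAPINFO bmi;
    ZeroMemory(&bmi, sizeof(bmi));
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;  /* negative: top-down rows */
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void *pixels = NULL;
    HBITMAP dib = CreateDIBSection(winDC, &bmi, DIB_RGB_COLORS,
                                   &pixels, NULL, 0);
    HBITMAP old = (HBITMAP)SelectObject(memDC, dib);

    /* Copy the window's client area into the DIB. */
    BitBlt(memDC, 0, 0, width, height, winDC, 0, 0, SRCCOPY);

    SelectObject(memDC, old);
    DeleteDC(memDC);
    ReleaseDC(hwnd, winDC);

    *outDib = dib;   /* caller frees with DeleteObject() */
    return pixels;   /* BGRA rows, top-down */
}
```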


  1. glReadPixels() works (and SHOULD work) fine with overlapped windows
  2. glReadPixels() is a slow function
  3. Stick to GL_UNSIGNED_BYTE with GL_RGB or GL_BGRA formats for 32-bit, and GL_UNSIGNED_SHORT with GL_RGB for 16-bit (see the sketch below).
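
A minimal sketch of point 3 for the 32-bit case, assuming a Windows build; GL_BGRA comes from the GL_EXT_bgra extension on drivers of that era, and the width/height parameters are placeholders:

```c
#include <windows.h>
#include <GL/gl.h>
#include <stdlib.h>

#ifndef GL_BGRA_EXT
#define GL_BGRA_EXT 0x80E1      /* from the GL_EXT_bgra extension */
#endif

/* Read the back buffer with a format/type combination the driver
   can usually return without per-pixel conversion. */
void ReadBackBuffer(int width, int height)
{
    unsigned char *buf = malloc((size_t)width * height * 4);

    glReadBuffer(GL_BACK);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* no row padding */

    /* 32-bit framebuffer: BGRA bytes usually match the native layout. */
    glReadPixels(0, 0, width, height, GL_BGRA_EXT, GL_UNSIGNED_BYTE, buf);

    /* ... write 'buf' to disk ... */
    free(buf);
}
```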

>>>1. glReadPixels() works (and SHOULD work) fine with overlapped windows<<<

You mean windows which have the WS_OVERLAPPEDWINDOW style?
OK, but it will definitely not work on window areas which are hidden by another window. The glReadPixels results for those areas are undefined and implementation dependent.

Are you on Windows? If so, it is pretty easy to get OpenGL to render to an in-memory device-independent bitmap. The steps to do this are:

  • Use CreateCompatibleDC to create an in-memory device context. This will have a 1x1 monochrome bitmap selected into it by default, but you’ll change this.
  • Use CreateDIBSection to create an in-memory bitmap that has the color depth and dimensions that you want.
  • Select the bitmap into the device context that you created in step 1.
  • Now, do your normal OpenGL setup on the in-memory DC, including picking the appropriate pixel format and creating a rendering context on the in-memory DC. Picking the pixel format is really important. I found that the ChoosePixelFormat function is inadequate for this purpose, so you'll probably need to write something that picks the appropriate pixel format for your purpose.

  • Now, you can make the in-memory DC's rendering context current and render your scene directly to the DIB. The main advantage of this method is that you can control the resolution of the OpenGL rendering by modifying the size of the DIB, which allows you to get greater-than-screen resolution. Make sure you size the viewport to take up the entire bitmap.
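
Putting the steps above together, a minimal sketch assuming a 32-bit target (error checking omitted, and as noted, ChoosePixelFormat may not pick a suitable format for you):

```c
#include <windows.h>
#include <GL/gl.h>

/* Set up OpenGL rendering into an in-memory DIB section. */
void *RenderToDIB(int width, int height, HDC *outDC, HGLRC *outRC)
{
    /* Step 1: in-memory device context. */
    HDC memDC = CreateCompatibleDC(NULL);

    /* Step 2: DIB section with the desired depth and dimensions.
       A bottom-up DIB matches OpenGL's bottom-left origin. */
    BITMAPINFO bmi;
    ZeroMemory(&bmi, sizeof(bmi));
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = height;
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;
    void *bits = NULL;
    HBITMAP dib = CreateDIBSection(memDC, &bmi, DIB_RGB_COLORS,
                                   &bits, NULL, 0);

    /* Step 3: select the bitmap into the DC. */
    SelectObject(memDC, dib);

    /* Step 4: a pixel format that can draw to a bitmap. Note
       PFD_DRAW_TO_BITMAP; only the generic (software) implementation
       advertises such formats. You may have to enumerate formats with
       DescribePixelFormat if ChoosePixelFormat's choice is unsuitable. */
    PIXELFORMATDESCRIPTOR pfd;
    ZeroMemory(&pfd, sizeof(pfd));
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_BITMAP | PFD_SUPPORT_OPENGL | PFD_SUPPORT_GDI;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;
    int pf = ChoosePixelFormat(memDC, &pfd);
    SetPixelFormat(memDC, pf, &pfd);

    /* Make the context current and size the viewport to the bitmap. */
    HGLRC rc = wglCreateContext(memDC);
    wglMakeCurrent(memDC, rc);
    glViewport(0, 0, width, height);

    *outDC = memDC;
    *outRC = rc;
    return bits;   /* rendered pixels land here after glFinish() */
}
```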

Originally posted by ben:
Are you on windows? If so, it is pretty easy to get OpenGL to render to an in-memory device independent bitmap.

As far as I know this is not accelerated by the graphics card, so rendering into a bitmap will take longer than rendering accelerated into the back buffer and reading it back with glReadPixels.