Render to Bitmap and OpenGL specific functions

Hello again, one more question from me…
How should I use OpenGL functions like glBindTexture, glTexImage2D, glActiveTexture when the rendering context is a bitmap? I have a rendering context bound to the main window, and I'm using it to get proc addresses. But when I use the bitmap context for binding textures/rendering, nothing seems to work, and glGetError returns 1280 (GL_INVALID_ENUM) after calling glTexImage2D/glTexParameteri/glActiveTexture, et cetera.

For the example of rendering to a bitmap that I used, look here:

http://stackoverflow.com/questions/40529…4201472#4201472

I think it will make things clear, because I've adapted its code (the part related to bitmap rendering) for my program. The problem is that the author of that example doesn't use the functions mentioned above.

Short version: can I do it? How? Or maybe it should work straightforwardly and I made a mistake somewhere else?

Rendering to a bitmap is not accelerated; you probably end up using Microsoft's software GL 1.1 implementation.
Instead, render to an FBO (or the old fragile way: render normally without calling SwapBuffers), then glReadPixels the data. A PBO can help with high-performance asynchronous transfers if needed; see http://www.opengl.org/wiki/Pixel_Buffer_Object

FBO : http://www.opengl.org/wiki/Framebuffer_Object
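
A minimal sketch of the FBO route, assuming the GL_EXT_framebuffer_object entry points have already been fetched with wglGetProcAddress (RenderScene, width, height and pixels are placeholder names, not from the code in this thread):

// Create an offscreen color renderbuffer, attach it to an FBO, render into it, read it back.
GLuint fbo = 0, colorRB = 0;
glGenFramebuffersEXT(1, &fbo);
glGenRenderbuffersEXT(1, &colorRB);

glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, colorRB);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, width, height);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                             GL_RENDERBUFFER_EXT, colorRB);

if(glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) == GL_FRAMEBUFFER_COMPLETE_EXT)
{
	glViewport(0, 0, width, height);
	RenderScene();                                                        // drawn offscreen, no SwapBuffers involved
	glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels); // BGRA matches Windows DIB layout
}

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0); // back to the window framebuffer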

To be more clear: what I'm trying to do is an OpenGL layered window with transparency, but more complicated than the example above.
And it should work even on dinosaurs like the GF2/Radeon 8xxx; as far as I know, FBO/PBO aren't even close to being supported by every adapter.

But according to the OpenGL Extension Viewer database, GL_ARB_pixel_buffer_object is supported by the GF2 and Radeon 8xxx. Is that true? (Yeah, I'm a naive idiot. I'll be lucky if it really works on an R300.)

PBO is a feature that doesn't need specific hardware support. Provided it is correctly implemented in the drivers, it should work.
FBO needs almost no specific hardware support and could be available on the GF2 and R300 … if exposed by the driver.
As I said, FBO and PBO are a “bonus”, not mandatory.
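
Whether a particular driver exposes them can be checked at runtime against the extension string. A minimal sketch of such a check (IsExtensionSupported is a hypothetical helper, not part of any posted code):

#include <windows.h>
#include <GL/gl.h>
#include <cstring>

// Returns true if "name" appears as a complete token in the GL extension string.
// Requires a current GL context; partial matches like "..._object2" are rejected.
bool IsExtensionSupported(const char* name)
{
	const char* ext = (const char*)glGetString(GL_EXTENSIONS);
	if(!ext || !name || !*name)
		return false;

	const size_t len = strlen(name);
	for(const char* p = ext; (p = strstr(p, name)) != 0; p += len)
	{
		if((p == ext || p[-1] == ' ') && (p[len] == ' ' || p[len] == '\0'))
			return true;
	}
	return false;
}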

Thanks for the good news.

I found an example of PBO usage here:
http://www.codesampler.com/tag/pixel-buffer-object/

But it also uses an FBO. I haven't fully researched that code yet, and I'm already wondering: do I necessarily need to use an FBO to render to a bitmap? According to the GLExt Viewer, it is only supported starting from the GF6200/R9550.

A PBO can be used for transfers from/to a normal framebuffer too:
http://www.songho.ca/opengl/gl_pbo.html#pack
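
For reference, the pack-PBO “ping-pong” readback that article describes boils down to roughly this. A sketch, assuming the ARB buffer entry points are loaded and two PBOs were created with glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, w*h*4, NULL, GL_STREAM_READ_ARB); pbo, index, nindex, w, h and dst are placeholder names:

// Start an asynchronous glReadPixels into one PBO while mapping the other,
// which still holds the previous frame's pixels.
index  = (index + 1) % 2;
nindex = (index + 1) % 2;

glReadBuffer(GL_BACK); // read the freshly rendered back buffer, before SwapBuffers

glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo[index]);
glReadPixels(0, 0, w, h, GL_BGRA, GL_UNSIGNED_BYTE, 0); // last argument is an offset into the PBO

glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo[nindex]); // previous frame's PBO
void* src = glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY_ARB);
if(src)
{
	memcpy(dst, src, w * h * 4); // copy out; the mapped pointer must not be freed
	glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB);
}
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);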

Oh, thanks (I had this article open in my browser, waiting to check out tomorrow, but now I'll bookmark it). I've also just checked GL_EXT_framebuffer_object; it is obviously more widely supported than GL_ARB_framebuffer_object, which I checked in my previous post. But GL_EXT_pixel_buffer_object has almost every GeForce in its supported list, while for Radeon it's only the HD2xxx and up. It's weird that it is supported by the RIVA TNT/GF1.

There are some problems… I used the example code from the Asynchronous Read-back section, but it reads nothing into the array (or reads zeros). I render to a hidden window, then try to read the pixels using a PBO and glReadPixels, and then BitBlt them to a visible non-OpenGL window. If I read the pixels directly into an array without a PBO, it does read values that seem valid (but as far as I know, that will not work everywhere).

init:

if(glIsExtSupported("GL_ARB_pixel_buffer_object"))
{
	glGenBuffersARB = (PFNGLGENBUFFERSARBPROC) wglGetProcAddress("glGenBuffersARB");
	glBindBufferARB = (PFNGLBINDBUFFERARBPROC) wglGetProcAddress("glBindBufferARB");
	glBufferDataARB = (PFNGLBUFFERDATAARBPROC) wglGetProcAddress("glBufferDataARB");
	glDeleteBuffersARB = (PFNGLDELETEBUFFERSARBPROC) wglGetProcAddress("glDeleteBuffersARB");
	glBufferSubDataARB = (PFNGLBUFFERSUBDATAARBPROC) wglGetProcAddress("glBufferSubDataARB");
	glMapBufferARB = (PFNGLMAPBUFFERARBPROC) wglGetProcAddress("glMapBufferARB");
	glUnmapBufferARB = (PFNGLUNMAPBUFFERARBPROC) wglGetProcAddress("glUnmapBufferARB");

	glGenBuffersARB(2, PBO); // two pack PBOs for ping-pong readback
	glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, PBO[0]);
	glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, screenw*screenh*4, NULL, GL_STREAM_READ_ARB);

	glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, PBO[1]);
	glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, screenw*screenh*4, NULL, GL_STREAM_READ_ARB);

	glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
}

render:

wglMakeCurrent(hdc, hglrc); // Hidden window context
...Render...
SwapBuffers(hdc);
glReadBuffer(GL_FRONT);
index = (index + 1) % 2;
nindex = (index + 1) % 2;

glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, PBO[index]);

unsigned char* ptr;

glReadPixels(0, 0, screenw, screenh, GL_RGBA, GL_UNSIGNED_BYTE, 0);

glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, PBO[nindex]);
ptr = (unsigned char*)glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY_ARB);

if(ptr)
{
	hdc2 = CreateCompatibleDC(0);
	HBITMAP bm = CreateCompatibleBitmap(hdc2, screenw, screenh);
	HANDLE old = SelectObject(hdc2, bm);
	LOG << SetDIBits(hdc2, bm, 0, screenh, ptr, BmpI, DIB_RGB_COLORS) << endl; //LOGS 600(window height)

	hdc3 = GetDC(hwndvis); //DC of visible window
	BitBlt(hdc3, 0, 0, screenw, screenh, hdc2, 0, 0, SRCCOPY);
	ReleaseDC(hwndvis, hdc3);
}
else
	LOG << " ~PBO pointer is NULL:" << glGetError() << ";" << endl;

glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);

delete ptr;


wglMakeCurrent(0,0);

And yes, I don't really know how to BitBlt to a non-OpenGL window and make it render/update after that. Even if I use glReadPixels without a PBO, I still get a black screen.

A hidden window is a bad idea:
http://www.opengl.org/wiki/Common_Mistakes#The_Pixel_Ownership_Problem
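
As for getting the read-back pixels onto a plain GDI window: keep the pixel buffer around, paint it in that window's WM_PAINT handler, and invalidate the window whenever a new frame arrives. A sketch, assuming ptr, BmpI, screenw, screenh and hwndvis are the read-back buffer, the 32-bit BITMAPINFO, the dimensions and the visible window from your code:

// Inside the visible window's window procedure:
case WM_PAINT:
{
	PAINTSTRUCT ps;
	HDC dc = BeginPaint(hwndvis, &ps);
	// Draw the DIB directly; no CreateCompatibleBitmap/SelectObject dance needed.
	StretchDIBits(dc, 0, 0, screenw, screenh, 0, 0, screenw, screenh,
	              ptr, BmpI, DIB_RGB_COLORS, SRCCOPY);
	EndPaint(hwndvis, &ps);
	return 0;
}

// ...and after each new frame has been read back:
InvalidateRect(hwndvis, NULL, FALSE); // mark the client area dirty
UpdateWindow(hwndvis);                // force an immediate WM_PAINT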

So is that why I get a fully working scene copied to the layered window on Win7 with a GTX 295, but a piece of the desktop covered by the invisible window on XP with a GF6150? Or is that because I don't get how CreateCompatibleDC, CreateCompatibleBitmap and SelectObject work?

This code works under Win7 with the GTX 295 (wnd1 is the visible window's DC, wnd1c is the visible window's compatible DC):


HBITMAP bm = CreateCompatibleBitmap(wnd1, screenw, screenh);
HANDLE old = SelectObject(wnd1c, bm);
SetDIBits(wnd1c, bm, 0, screenh, ptr, BmpI, DIB_RGB_COLORS);
BitBlt(wnd1, 0, 0, screenw, screenh, wnd1c, 0, 0, SRCCOPY);
SelectObject(wnd1c, old);
DeleteObject(bm);		

I don't think it's really valid. It works together with the PBO/read-pixels code posted earlier. It started to work after I removed the binding of the second PBO before calling glMapBufferARB (why? I have no clue; I'm exhausted by this BS).

So if I add an FBO (I have no idea how, yet), is there any significant functional difference (at least for my purposes) between GL_EXT_framebuffer_object and GL_ARB_framebuffer_object? I mean, the first one has wider support, but can I rely on it?

And my PBO readback is fixed by allocating the destination buffer manually and copying the data like this:

ptr = new unsigned char[screenw*screenh*4]; // manually allocated destination buffer
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, PBO[nindex]);
memcpy(ptr, (unsigned char*)glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY_ARB), screenw*screenh*4);
glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB);

What a shame to make mistakes like the ones in my previously posted code.


And suddenly... I recalled an OpenGL layered window demo I'd seen a long time ago:

http://www.dhpoware.com/demos/glLayeredWindows.html

It uses WGL_ARB_pbuffer (supported even by your microwave oven). The sources are included. Could some pro take a quick look at them and confirm that this method will not fail, please? I kind of doubt it, because the author renders to a WS_EX_LAYERED window.
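
For reference, feeding a frame with per-pixel alpha into a WS_EX_LAYERED window generally looks like this, regardless of whether the pixels come from a pbuffer, an FBO or plain glReadPixels. A rough sketch, assuming ptr is a screenw*screenh*4 premultiplied-BGRA buffer and hwndlayered is a placeholder for the layered window (this is illustrative wiring, not the demo's own code):

// Copy the frame into a 32-bit DIB section and hand it to UpdateLayeredWindow.
BITMAPINFO bi = {};
bi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bi.bmiHeader.biWidth       = screenw;
bi.bmiHeader.biHeight      = screenh; // bottom-up, same row order as glReadPixels
bi.bmiHeader.biPlanes      = 1;
bi.bmiHeader.biBitCount    = 32;
bi.bmiHeader.biCompression = BI_RGB;

void* bits = 0;
HDC screen  = GetDC(0);
HDC memdc   = CreateCompatibleDC(screen);
HBITMAP dib = CreateDIBSection(screen, &bi, DIB_RGB_COLORS, &bits, 0, 0);
HGDIOBJ old = SelectObject(memdc, dib);
memcpy(bits, ptr, screenw * screenh * 4);

POINT srcpos = {0, 0};
SIZE  size   = {screenw, screenh};
BLENDFUNCTION blend = {AC_SRC_OVER, 0, 255, AC_SRC_ALPHA}; // per-pixel alpha
UpdateLayeredWindow(hwndlayered, screen, NULL, &size, memdc, &srcpos, 0, &blend, ULW_ALPHA);

SelectObject(memdc, old);
DeleteObject(dib);
DeleteDC(memdc);
ReleaseDC(0, screen);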