Full precision RenderBuffer

Hi guys,

I would like to render to a renderbuffer with “float” precision, but I get “unsigned char” precision. If I write a value of 0.1 to gl_FragColor, I read back 0.0980392.
Why 0.0980392 instead of 0.1?
Mapping 0.1 from [0, 1] to [0, 255] gives 25.5, which cannot be stored in a byte, so it ends up as 25. Converting 25 back to [0, 1] then produces the magic value 25 / 255 ≈ 0.0980392.

I don’t want a conversion to [0, 255]; I want the full precision, but I don’t know what I should do. Any help, please?

This is the code:

Initialization of renderbuffer and framebuffer:

GLuint renderBuffer;
GLuint frameBuffer;

glGenRenderbuffers( 1, &renderBuffer );
glBindRenderbuffer( GL_RENDERBUFFER, renderBuffer );
glRenderbufferStorage( GL_RENDERBUFFER, GL_RGBA, mWinWidth, mWinHeight );
CHECK_GL_ERROR();

glGenFramebuffers( 1, &frameBuffer );
glBindFramebuffer( GL_FRAMEBUFFER, frameBuffer );
glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderBuffer);
if( glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    CHECK_GL_ERROR();
glBindFramebuffer( GL_FRAMEBUFFER, 0 );

Preparing for drawing:

glBindFramebuffer( GL_FRAMEBUFFER, frameBuffer );
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glViewport( 0, 0, mWinWidth, mWinHeight );
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

Getting the pixels:

float *pixels = new float [4 * mWinWidth * mWinHeight];

glBindRenderbuffer( GL_RENDERBUFFER, renderBuffer );
glReadBuffer(renderBuffer);
glReadPixels(0, 0, mWinWidth, mWinHeight, GL_RGBA, GL_FLOAT, pixels);

Use a floating-point image format instead of an unsigned normalized one.

If I change this line:

glRenderbufferStorage( GL_RENDERBUFFER, GL_RGBA, mWinWidth, mWinHeight );

For this one:

glRenderbufferStorage( GL_RENDERBUFFER, GL_RGBA32F, mWinWidth, mWinHeight );

And this line:

glReadPixels(0, 0, mWinWidth, mWinHeight, GL_RGBA, GL_FLOAT, pixels);

For this one:

glReadPixels(0, 0, mWinWidth, mWinHeight, GL_RGBA32F, GL_FLOAT, pixels);

I get a GL_INVALID_ENUM error after this line:

glReadBuffer(renderBuffer);

And after this line:

glReadPixels(0, 0, mWinWidth, mWinHeight, GL_RGBA32F, GL_FLOAT, pixels);

Should I change something else? What is wrong?

Thanks

Because the fifth argument is the pixel format, which should be GL_RGBA, not GL_RGBA32F. GL_RGBA32F is a sized internal format; it belongs in glRenderbufferStorage, not in glReadPixels.

That solves the problem on this line:

glReadPixels(0, 0, mWinWidth, mWinHeight, GL_RGBA32F, GL_FLOAT, pixels);

But not on this line:

glReadBuffer(renderBuffer);

Thx

“renderBuffer” is not an enumerator; it is an object name. glReadBuffer tells OpenGL which attachment you want to read from, not which object: GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2, and so on.

Thanks, problem solved.

This is the final code:

Initialization of renderbuffer and framebuffer:

GLuint renderBuffer;
GLuint frameBuffer;

glGenRenderbuffers( 1, &renderBuffer );
glBindRenderbuffer( GL_RENDERBUFFER, renderBuffer );
glRenderbufferStorage( GL_RENDERBUFFER, GL_RGBA32F, mWinWidth, mWinHeight );
CHECK_GL_ERROR();

glGenFramebuffers( 1, &frameBuffer );
glBindFramebuffer( GL_FRAMEBUFFER, frameBuffer );
glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderBuffer);
if( glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    CHECK_GL_ERROR();
glBindFramebuffer( GL_FRAMEBUFFER, 0 );

Preparing for drawing:

glBindFramebuffer( GL_FRAMEBUFFER, frameBuffer );
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glViewport( 0, 0, mWinWidth, mWinHeight );
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

Getting the pixels:

float *pixels = new float [4 * mWinWidth * mWinHeight];

glBindRenderbuffer( GL_RENDERBUFFER, renderBuffer );
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, mWinWidth, mWinHeight, GL_RGBA, GL_FLOAT, pixels);