Hi guys,
I would like to render to a renderbuffer with float precision, but I am getting unsigned char precision instead: if I write a value of 0.1 to gl_FragColor, I read back 0.0980392.
Why 0.0980392 instead of 0.1?
My guess: to store 0.1 in an 8-bit channel it gets mapped from [0, 1] to [0, 255], so 0.1 × 255 = 25.5, which cannot be represented exactly and becomes 25. Converting 25 back to [0, 1] gives 25 / 255 = 0.0980392, the magic value.
I don't want a conversion to [0, 255]; I want the full float precision, but I don't know what I should do. Some help please?
This is the code:
Initialization of renderbuffer and framebuffer:
GLuint renderBuffer;
GLuint frameBuffer;
glGenRenderbuffers( 1, &renderBuffer );
glBindRenderbuffer( GL_RENDERBUFFER, renderBuffer );
glRenderbufferStorage( GL_RENDERBUFFER, GL_RGBA, mWinWidth, mWinHeight );
CHECK_GL_ERROR();
glGenFramebuffers( 1, &frameBuffer );
glBindFramebuffer( GL_FRAMEBUFFER, frameBuffer );
glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderBuffer);
if( glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
CHECK_GL_ERROR();
glBindFramebuffer( GL_FRAMEBUFFER, 0 );
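For comparison, the unsized GL_RGBA internal format lets the implementation choose the precision, and drivers typically pick 8 bits per channel. A sketch of the same storage call requesting a sized floating-point format instead (this assumes a GL 3.0+ context, where GL_RGBA32F is core; GL_RGBA16F is a cheaper alternative if half-float precision is enough):

```cpp
// Same setup as above, but with a sized float internal format instead of
// the unsized GL_RGBA, so the driver cannot fall back to 8-bit channels.
// Assumes a GL 3.0+ context (GL_RGBA32F is core there).
glBindRenderbuffer( GL_RENDERBUFFER, renderBuffer );
glRenderbufferStorage( GL_RENDERBUFFER, GL_RGBA32F, mWinWidth, mWinHeight );
```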
Preparing for drawing:
glBindFramebuffer( GL_FRAMEBUFFER, frameBuffer );
glDrawBuffer(GL_COLOR_ATTACHMENT0);
glViewport( 0, 0, mWinWidth, mWinHeight );
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
Getting the pixels:
float *pixels = new float [4 * mWinWidth * mWinHeight];
glBindRenderbuffer( GL_RENDERBUFFER, renderBuffer );
glReadBuffer(GL_COLOR_ATTACHMENT0); // glReadBuffer takes a color-buffer enum, not a renderbuffer name
glReadPixels(0, 0, mWinWidth, mWinHeight, GL_RGBA, GL_FLOAT, pixels);
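As a sanity check on what the driver actually allocated, you can query the renderbuffer after creating it; with an unsized GL_RGBA request this will typically report 8 bits per channel. A sketch (glGetRenderbufferParameteriv and these queries are core since GL 3.0):

```cpp
GLint redBits = 0, internalFormat = 0;
glBindRenderbuffer( GL_RENDERBUFFER, renderBuffer );
// Bits actually allocated for the red channel (8 for an 8-bit fallback):
glGetRenderbufferParameteriv( GL_RENDERBUFFER, GL_RENDERBUFFER_RED_SIZE, &redBits );
// The internal format the driver resolved the unsized GL_RGBA request to:
glGetRenderbufferParameteriv( GL_RENDERBUFFER, GL_RENDERBUFFER_INTERNAL_FORMAT, &internalFormat );
printf( "red bits: %d, internal format: 0x%x\n", redBits, internalFormat );
```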