Hi…
I lost all the text of my post to some stray manipulation, so I'll try to be quicker this time…
I'm running into trouble while trying to render to multiple slices of a 3D texture using FBO, GLSL and ARB_draw_buffers.
The data inside the 3D texture looks corrupted after reading it back to system memory.
For starters, it is entirely possible that the problem comes from the way I use the FBO extension,
so here is the source code for my FBO wrapper: http://pastebin.com/f62d8799b
I'm not getting any OpenGL error, and the FBO status check reports it complete.
I then use the renderTo3DTexture procedure this way:
distmaptex = new texture3d( axisLengths[0], axisLengths[1], axisLengths[2], TEX_RGB_F32, NULL );
distmaptex->filterNearest();
// pass 1 init
if (!shComputeDist2DSlice)
shComputeDist2DSlice = new shader( "shDistanceMap2D" );
shaderTextureSet *pass1Textures = new shaderTextureSet;
pass1Textures->add( depthPeelTexPX, "texturePX", 0 );
pass1Textures->add( depthPeelTexNX, "textureNX", 1 );
pass1Textures->add( depthPeelTexPY, "texturePY", 2 );
pass1Textures->add( depthPeelTexNY, "textureNY", 3 );
shComputeDist2DSlice->attachTextures( pass1Textures );
float *shComputeDist2DSlice_xInvSize = shComputeDist2DSlice->GetUniform( "xInvSize" );
if (shComputeDist2DSlice_xInvSize) *shComputeDist2DSlice_xInvSize = 1.f / axisLengths[0];
float *shComputeDist2DSlice_yInvSize = shComputeDist2DSlice->GetUniform( "yInvSize" );
if (shComputeDist2DSlice_yInvSize) *shComputeDist2DSlice_yInvSize = 1.f / axisLengths[1];
// pass 1
renderTo3DTexture( shComputeDist2DSlice, "layer", "nLayers", "layerInvSize", distmaptex );
// read the whole texture back (assuming distmaptex is still bound to GL_TEXTURE_3D)
float *testTex = new float [3*axisLengths[0]*axisLengths[1]*axisLengths[2]];
glGetTexImage( GL_TEXTURE_3D, 0, GL_RGB, GL_FLOAT, testTex );
for (int i=0; i<axisLengths[2]; i++)
{
    // first texel of slice i (3 floats per RGB texel)
    int ind = 3*axisLengths[0]*axisLengths[1]*i;
    fprintf( glslLog, "%f\n", testTex[ind] );
}
delete [] testTex;
assert( glGetError() == GL_NO_ERROR );
Now some shader code:
varying vec2 uv;
uniform float layer;
uniform float layerInvSize;
void main()
{
    gl_FragData[0].rgb = vec3( layer + 0.0 );
    gl_FragData[1].rgb = vec3( layer + 1.0 );
    gl_FragData[2].rgb = vec3( layer + 2.0 );
    gl_FragData[3].rgb = vec3( layer + 3.0 );
    gl_FragData[4].rgb = vec3( layer + 4.0 );
    gl_FragData[5].rgb = vec3( layer + 5.0 );
    gl_FragData[6].rgb = vec3( layer + 6.0 );
    gl_FragData[7].rgb = vec3( layer + 7.0 );
}
The expected result would be increasing float32 values from 0 to 255 (slice i holding the value i); here is however the actual result:
0.000000
0.000000
0.000000
0.000000
4.000000
0.000000
0.000000
0.000000
8.000000
8.000000
8.000000
8.000000
12.000000
8.000000
8.000000
8.000000
16.000000
16.000000
16.000000
16.000000
20.000000
16.000000
16.000000
16.000000
24.000000
24.000000
.....
224.000000
232.000000
232.000000
232.000000
232.000000
236.000000
232.000000
232.000000
232.000000
240.000000
240.000000
240.000000
240.000000
244.000000
240.000000
240.000000
240.000000
248.000000
248.000000
248.000000
248.000000
252.000000
248.000000
248.000000
248.000000
So… does anyone have an idea what I did wrong? :s
One last detail: the hardware is a Radeon 4870 under Windows 7 64-bit, using the latest drivers (9.12)…
Regards, and thanks in advance for any help…