Hi,
I understand that there are additional factors at play, but is it expected that a shader-based volume rendering algorithm performs faster on compressed textures than on uncompressed ones?
I played around with a shader-based volume rendering algorithm from VTK and changed it to compress the 3D texture for the volume during the upload to the graphics board. I wanted to see what the trade-off is between the additional data I can hold in texture memory and the performance drop during rendering. I was surprised to find that a maximum intensity projection of my sample data set (512 x 512 x 700 voxels) was about 20% faster with compressed textures. My graphics card is an NVIDIA G210M.
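For scale, the raw footprint of that volume is easy to estimate. The sketch below assumes 8-bit voxels and a hypothetical 4:1 compression ratio; the real ratio depends on which compressed internal format the driver actually selects:

```python
# Back-of-the-envelope footprint of the 512 x 512 x 700 volume.
# Assumes 8-bit voxels; the 4:1 compression ratio is an illustrative
# assumption, not a property of any specific format.
voxels = 512 * 512 * 700
uncompressed_mib = voxels / (1024 ** 2)   # 1 byte per voxel
compressed_mib = uncompressed_mib / 4     # assumed 4:1 ratio

print(f"uncompressed: {uncompressed_mib:.1f} MiB")  # 175.0 MiB
print(f"compressed:   {compressed_mib:.1f} MiB")    # 43.8 MiB
```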
I assumed that the shader doing the actual volume ray casting does not care whether a texture is compressed or not, and that the only difference is what happens when the tracer asks for the value of a voxel: in one case the result comes directly from the texture buffer, in the other the requested texel is decompressed on the fly (I understand the compression format is optimized for random access and on-the-fly decompression speed). So I still expected at least some performance drop.
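That random-access property comes from fixed-size block encoding: a texel coordinate maps directly to one small block, which the GPU can fetch and decode independently. A toy sketch, assuming a hypothetical DXT1-style layout of 4x4 texel blocks of 8 bytes each within a slice (actual 3D-texture block layouts vary by format and vendor):

```python
# Toy illustration of random access into block-compressed texture data.
# Assumed layout: 4x4 texel blocks, 8 bytes per block, row-major blocks
# within one slice. These numbers are illustrative, not a real spec.
BLOCK_DIM = 4
BLOCK_BYTES = 8

def block_offset(x, y, width):
    """Byte offset of the compressed block containing texel (x, y)."""
    blocks_per_row = (width + BLOCK_DIM - 1) // BLOCK_DIM
    bx, by = x // BLOCK_DIM, y // BLOCK_DIM
    return (by * blocks_per_row + bx) * BLOCK_BYTES

# Texel (10, 5) in a 512-texel-wide slice lands in block (2, 1):
print(block_offset(10, 5, 512))  # (1 * 128 + 2) * 8 = 1040
```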
I would understand the performance increase if the textures (compressed or uncompressed) were internally copied around while performing the ray casting, so that the reduced bandwidth requirements of the compressed textures made the difference, but I don't see where, or why, this should happen.
Or maybe, assuming that texture sampling is what takes most of the ray casting shader's time, the GPU caches some data (neighboring blocks of texels?) during decompression in on-chip memory, so that texture memory is accessed less often?
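If that hypothesis is right, the effect could be modeled as a fixed-size on-chip cache covering more of the volume when the data it holds is compressed, so neighboring ray samples hit cache instead of texture memory. A toy model (the 8 KiB cache size and 4:1 ratio are made-up illustrative numbers, not G210M specifications):

```python
# Toy model: how many voxels a fixed-size texture cache can cover,
# uncompressed vs. compressed. Cache size and compression ratio are
# assumptions for illustration only, not actual G210M hardware specs.
CACHE_BYTES = 8 * 1024

def voxels_covered(bytes_per_voxel, compression_ratio=1.0):
    """Voxels resident in cache at the given storage density."""
    return int(CACHE_BYTES * compression_ratio / bytes_per_voxel)

print(voxels_covered(1))       # uncompressed 8-bit: 8192 voxels
print(voxels_covered(1, 4.0))  # compressed 4:1:    32768 voxels
```

Four times the cached footprint would mean correspondingly fewer trips to texture memory for coherent rays, which is consistent with the speedup you measured.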
Any ideas?
Thanks,
Mark