Texture compression issues

Hi all,

I’ve got a couple of questions about texture compression:

  1. Can I dynamically create a texture (e.g. in a pbuffer or in the back buffer in a preprocessing step) then compress it on the fly and use it to replace a part of a compressed texture (w/ glCompressedTexSubImage)? And how fast would that be?
    I need it to be fast and I need compression because I want to update a huge texture (1024x1024 or 2048x2048 depending on the HW) small bits at a time. And in this case, I need this texture to be double-buffered, so this would take twice the amount of memory and can’t be done without compression.

  2. Can normal maps be compressed or would this have too much of an impact on the normals?

1: You can update both compressed and uncompressed textures with both compressed and uncompressed data. How fast this is depends on how much you update. Updating a single pixel is not likely to cost much compared to updating a whole 1k or 2k squared image. And we can’t say whether it’s fast enough for you. Only you can answer that, and I suggest you try it and see how well it works.
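For example, assuming the texture was created with a compressed internal format such as GL_COMPRESSED_RGB_S3TC_DXT1_EXT, both of these sub-updates are legal (just a sketch; xoff, yoff, w, h, rgbPixels, imageSize and dxt1Blocks are placeholders):

    /* Hand the driver ordinary uncompressed pixels; it recompresses the
       affected blocks itself. */
    glTexSubImage2D(GL_TEXTURE_2D, 0, xoff, yoff, w, h,
                    GL_RGB, GL_UNSIGNED_BYTE, rgbPixels);

    /* Or hand it data that is already compressed (ARB_texture_compression). */
    glCompressedTexSubImage2DARB(GL_TEXTURE_2D, 0, xoff, yoff, w, h,
                                 GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                                 imageSize, dxt1Blocks);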

And can you please explain what a double buffered texture is? Never heard of that one before.

2: A normal map is nothing special. It’s an ordinary image to OpenGL, but YOU, as a programmer, interpret the texels in a different way. Just remember that after compressing the normal map, most normals will no longer be normalized.

Thx for answering, Bob.

About the no longer normalized normals in the normal map, that’s exactly what I was referring to … my question should have been: is this acceptable?

For the double-buffered texture, what I mean is that I don’t think I can afford to update the entire 2k x 2k texture in one step. What I would like to do is update only, say, a 512x512 (or even smaller) part of the texture at a time. Obviously, a partially updated texture can’t be used, which is why I need two textures: one being displayed, the other being updated. Once the updated texture is ready, I display it and update the other one. This is what I call a double-buffered texture.
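In code it would look roughly like this (only a sketch; update_next_tile(), draw_scene() and back_texture_complete() are hypothetical helpers):

    GLuint tex[2];      /* tex[front] is displayed, tex[1 - front] is being updated */
    int front = 0;

    void frame(void)
    {
        int back = 1 - front;

        /* update a small piece of the back texture, e.g. a 512x512 tile */
        glBindTexture(GL_TEXTURE_2D, tex[back]);
        update_next_tile(tex[back]);              /* hypothetical */

        /* draw with the fully valid front texture */
        glBindTexture(GL_TEXTURE_2D, tex[front]);
        draw_scene();                             /* hypothetical */

        if (back_texture_complete())              /* hypothetical */
            front = back;                         /* swap roles */
    }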

The problem here is that I’m unsure whether I can update only a subregion of a compressed texture. As I understand the GL_ARB_texture_compression specification, it is not possible in general … though it might be for a given compression scheme.
What I would have to do is render, e.g., a 512x512 square in the back buffer, compress it on the fly into a compressed texture object, then copy this compressed texture into a subregion of my 2k x 2k texture.
Is it possible to do this with S3TC compression?
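In terms of GL calls, what I have in mind is roughly this (a sketch only, using the ARB_texture_compression entry points; scratchTex, bigTex, xoff and yoff are placeholders, and I have no idea yet how fast any of it would be):

    /* 1. Copy the freshly rendered 512x512 region of the back buffer into a
          scratch texture with a compressed internal format (the driver
          compresses here). */
    glBindTexture(GL_TEXTURE_2D, scratchTex);
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                     0, 0, 512, 512, 0);

    /* 2. Read the compressed blocks back to client memory. */
    GLint size;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
                             GL_TEXTURE_COMPRESSED_IMAGE_SIZE_ARB, &size);
    GLubyte *blocks = malloc(size);
    glGetCompressedTexImageARB(GL_TEXTURE_2D, 0, blocks);

    /* 3. Upload them into a sub-region of the 2k x 2k texture. */
    glBindTexture(GL_TEXTURE_2D, bigTex);
    glCompressedTexSubImage2DARB(GL_TEXTURE_2D, 0, xoff, yoff, 512, 512,
                                 GL_COMPRESSED_RGB_S3TC_DXT1_EXT, size, blocks);
    free(blocks);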

Just a comment regarding the non-normalized normals (lots of ‘n’ !): when you use your normal map with some filtering (say GL_LINEAR), your normals are not normalized anyway and you have to use a normalisation cube map if you want to normalize them again…

Is the error introduced by the compression much bigger than the one introduced by the filtering ?

Regards.

Eric

Originally posted by Eric:
[b]Just a comment regarding the non-normalized normals (lots of ‘n’ !): when you use your normal map with some filtering (say GL_LINEAR), your normals are not normalized anyway and you have to use a normalisation cube map if you want to normalize them again…

Is the error introduced by the compression much bigger than the one introduced by the filtering ?

Regards.

Eric[/b]

Eric,

I’ve just tried both: filtering the normal map without compressing it, and compressing it …
My advice: Do not compress normal maps.
The result is terrible (I used DXT1).
On the other hand, I couldn’t see a difference between GL_NEAREST and GL_LINEAR as filtering parameters for the normal map (when uncompressed).

For the texture update thing, I think I’ll use smaller uncompressed textures that I’ll update entirely each frame. I made some tests and visual quality seems good enough for my application.

Moz,

Thanks for the advice, that’s good to know !

Regards.

Eric

One of the key questions I see here is, “Will copy from a render target to a compressed texture be fast?” The answer is that it is reasonably unlikely with the common compression formats today.

-Evan

Originally posted by ehart:
[b]
One of the key questions I see here is, “Will copy from a render target to a compressed texture be fast?” The answer is that it is reasonably unlikely with the common compression formats today.

-Evan[/b]

The actual key question is “can I update only a subregion of an S3TC-compressed texture?”. After reading the extension’s spec, I’m not sure at all. If that were possible, I wouldn’t have to update the entire 2k x 2k texture, and I could achieve acceptable performance.

Hey Eric!

I just tried converting my Bump map demo to use GL_NEAREST for the normal map, and although the lighting itself did not change, you could see pixelation occurring in the shading.

It looks much better with GL_LINEAR, although this makes me wonder whether the normal coming from a bilinear-filtered normal map is of unit length.

Nutty

Originally posted by Nutty:
…makes me wonder whether the normal coming from a bilinear-filtered normal map is of unit length…

I bet it’s not !

Unless the driver “sees” that you are actually filtering a normal map and normalizes the result (but the specs would say so if that were the case…).

Anyway, a normal map is usually quite smooth regarding the direction changes of the normals. Hence, interpolating linearly between two neighbours should give an “almost-normalized” vector. Note that it’s not true if your normals describe a funky surface with a rapidly changing curvature !
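To put a number on it: the average of two unit normals an angle θ apart has length cos(θ/2), so for neighbouring normals 10° apart the filtered result is only about 0.4% too short (cos 5° ≈ 0.996), while for normals 60° apart it is already about 13% too short (cos 30° ≈ 0.87).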

Regards.

Eric

This is all very interesting (by the way, I too found that a filtered normal map gives better results; that’s what I’ve noticed on my terrain engine, whose lighting is done with a normal map), but what about my original question about updating only a small part of a compressed texture?

You can absolutely update a sub-region of an S3TC texture. It compresses blocks of 4x4 pixels, so you may have to align all your updates on 4x4 boundaries.
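To make the block arithmetic concrete, a rough sketch for DXT1 (xoff, yoff, w, h and blockData are placeholders):

    /* DXT1 stores each 4x4 block in 8 bytes (DXT3/DXT5 use 16). */
    GLsizei dxt1ImageSize(GLsizei w, GLsizei h)
    {
        return ((w + 3) / 4) * ((h + 3) / 4) * 8;
    }

    /* xoff, yoff, w and h should all be multiples of 4 (except where the
       region reaches the right/bottom edge of the texture). */
    glCompressedTexSubImage2DARB(GL_TEXTURE_2D, 0, xoff, yoff, w, h,
                                 GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                                 dxt1ImageSize(w, h), blockData);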

I’ve never understood what this means for MIP map levels where the size is smaller than 4x4, though. Does anyone care to clarify what is supposed to happen? My guess is that the driver will generate a 4x4 block even for lower mip map levels, and thus waste a small amount of texture memory, but it’s still supposed to work?

Thanks jwatte,

by the way, if I have mipmaps, how can I update them? Do I have to scale the image down and copy it to all the mip levels?

If you have mipmaps, you must scale the image and update each mipmap level. Otherwise the lower mipmap levels won’t be updated, and will hold old information. That is, one scale and one SubImage() for each mipmap level.

You can, of course, use the SGIS_generate_mipmap extension, which will automatically create/update all mipmap levels. The GeForce series supports it at least, dunno about TNT series.
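Roughly like this for the manual version, using gluScaleImage (just a sketch; the texture is assumed to be uncompressed RGB here, with all mipmap levels already allocated, and pixels holds the new full-resolution image):

    void updateAllLevels(int w, int h, const GLubyte *pixels)
    {
        GLubyte *cur  = malloc(w * h * 3);
        GLubyte *next = malloc(w * h * 3);
        memcpy(cur, pixels, w * h * 3);

        for (int level = 0; ; ++level) {
            glTexSubImage2D(GL_TEXTURE_2D, level, 0, 0, w, h,
                            GL_RGB, GL_UNSIGNED_BYTE, cur);
            if (w == 1 && h == 1)
                break;

            int nw = (w > 1) ? w / 2 : 1;
            int nh = (h > 1) ? h / 2 : 1;
            gluScaleImage(GL_RGB, w, h, GL_UNSIGNED_BYTE, cur,
                          nw, nh, GL_UNSIGNED_BYTE, next);

            /* ping-pong the two buffers and carry on with the smaller image */
            GLubyte *tmp = cur; cur = next; next = tmp;
            w = nw; h = nh;
        }
        free(cur);
        free(next);
    }

With SGIS_generate_mipmap it collapses to a texture parameter plus a single base-level update:

    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGB, GL_UNSIGNED_BYTE, pixels);  /* lower levels regenerated */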

Thanx, Bob

But will SGIS_generate_mipmap work for compressed textures?

You can use SGIS_generate_mipmap with compressed textures, and it will work with our drivers, but it will be slow enough to likely be useless.

On a related topic, SGIS_generate_mipmap will work for paletted textures, but averaging indices may not exactly be what you were looking for…

  • Matt

And would the “Scale + SubImage technique” give reasonably good performance?

Originally posted by jwatte:
[b]You can absolutely update a sub-region of an S3TC texture. It compresses blocks of 4x4 pixels, so you may have to align all your updates on 4x4 boundaries.

I’ve never understood what this means for MIP map levels where the size is smaller than 4x4, though. Does anyone care to clarify what is supposed to happen? My guess is that the driver will generate a 4x4 block even for lower mip map levels, and thus waste a small amount of texture memory, but it’s still supposed to work?[/b]

Yes, it’s still supposed to work. And yes, that’s a reasonably good guess about what hardware might do. The lowest mip levels are generally interesting things in hardware, but you should never need to know that.

Originally posted by Eric:

Anyway, a normal map is usually quite smooth regarding the direction changes of the normals. Hence, interpolating linearly between two neighbours should give an “almost-normalized” vector. Note that it’s not true if your normals describe a funky surface with a rapidly changing curvature !

But even if the normal surface is smoothly varying, with only minimal curvature, each 4x4 block of the texture can only represent 4 discrete values, and they are evenly spaced along a line between two RGB565 endpoints.
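For instance, here is how the four colours available to a block are derived from its two RGB565 endpoints (sketch of the usual c0 > c1 decode):

    /* Expand the two RGB565 endpoints and derive the two interpolated
       colours that sit at 1/3 and 2/3 along the line between them. */
    void dxt1Palette(unsigned short c0, unsigned short c1, unsigned char pal[4][3])
    {
        unsigned short ends[2] = { c0, c1 };
        for (int i = 0; i < 2; ++i) {
            pal[i][0] = (unsigned char)(((ends[i] >> 11) & 31) * 255 / 31);
            pal[i][1] = (unsigned char)(((ends[i] >>  5) & 63) * 255 / 63);
            pal[i][2] = (unsigned char)(( ends[i]        & 31) * 255 / 31);
        }
        for (int i = 0; i < 3; ++i) {
            pal[2][i] = (unsigned char)((2 * pal[0][i] + pal[1][i]) / 3);
            pal[3][i] = (unsigned char)((pal[0][i] + 2 * pal[1][i]) / 3);
        }
    }

Every texel in the 4x4 block then picks one of those four entries with a 2-bit index, which is why smoothly varying normals get quantized so harshly.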

It seems unlikely that you’ll be satisfied by DXT compressed normal maps.

My best advice: 1) test it yourself; 2) don’t expect every implementation to compress equally well. You may very well want to compress normal maps as a pre-process (and cache them). If you depend on the driver to compress them in real time, you have no real guarantee about the resulting quality.

regards,
J

In general, I strongly recommend that you do not generate compressed textures on the fly. Compression is a slow process. Driver compression is provided as a convenience, not something that you should really be using in a production-quality application.

  • Matt
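To illustrate the pre-compress-and-cache approach, a hypothetical sketch (save_to_disk() and cachedBlocks are placeholders):

    GLint isCompressed = GL_FALSE, internalFmt = 0, size = 0;

    /* Do the slow driver-side compression once, at tool or install time. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_ARB, w, h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, rgbPixels);

    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_ARB,
                             &isCompressed);
    if (isCompressed) {
        glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT,
                                 &internalFmt);
        glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
                                 GL_TEXTURE_COMPRESSED_IMAGE_SIZE_ARB, &size);
        void *blocks = malloc(size);
        glGetCompressedTexImageARB(GL_TEXTURE_2D, 0, blocks);
        save_to_disk(internalFmt, w, h, size, blocks);   /* placeholder */
        free(blocks);
    }

    /* At run time, read internalFmt, w, h, size and the cached blocks back
       from disk and skip the compression entirely: */
    glCompressedTexImage2DARB(GL_TEXTURE_2D, 0, internalFmt, w, h, 0,
                              size, cachedBlocks);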