Possible error in paletted texture spec

Hello,

Apologies if I’m missing something, but I don’t get table 3.17.1 in this spec. Shouldn’t there be 4 x 8-bit texels in the PALETTE8_xxx diagram rather than 8 x 4-bit?

Also, I humbly submit that showing the packing of 8-bit texels into a 32-bit word is misleading in any case: if I packed a word as shown and stored it to memory, wouldn’t the result depend on CPU endianness? Why does 32-bit packing come into this at all?
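For what it’s worth, here is a minimal C sketch of what I mean. It packs four 8-bit indices into a 32-bit word the way I read the diagram, with the first texel in the most significant byte (my assumption, not necessarily the spec’s intent), then looks at the bytes that actually land in memory:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Four 8-bit palette indices packed into one 32-bit word,
       first texel in the most significant byte (my reading of the figure). */
    uint8_t  idx[4] = { 0x11, 0x22, 0x33, 0x44 };
    uint32_t word   = ((uint32_t)idx[0] << 24) |
                      ((uint32_t)idx[1] << 16) |
                      ((uint32_t)idx[2] <<  8) |
                       (uint32_t)idx[3];

    /* Store the word and inspect the bytes that actually hit memory. */
    uint8_t mem[4];
    memcpy(mem, &word, sizeof word);

    /* Little-endian CPU: 44 33 22 11   Big-endian CPU: 11 22 33 44 */
    printf("%02X %02X %02X %02X\n", mem[0], mem[1], mem[2], mem[3]);
    return 0;
}
```

The bytes that end up in memory depend entirely on the CPU’s endianness, which is why I think the layout would be clearer defined as a byte stream rather than as packed 32-bit words.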

This extension has a couple of problems. Besides the issues you raise, the reference implementation aligns each data row to a byte boundary when 4-bit indices are used, and I have not been able to derive this alignment from the specification. Furthermore, the requirement that the palette formats be accepted as internal formats contradicts the statement in the main specification that only base internal formats, with an implementation-defined internal representation, need to be supported. I have asked the review board for clarification on the latter point.
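To make the alignment question concrete, here is a rough sketch of the two imageSize computations I can imagine for PALETTE4_RGB8_OES at level 0. The helper names are mine, and the row-aligned variant is only my guess at what the reference implementation is doing; nothing here is taken verbatim from either source:

```c
#include <stddef.h>

/* PALETTE4_RGB8_OES: 16 palette entries of 3 bytes each, followed by
   4-bit indices for the level-0 image. */

/* My reading of the spec: indices packed back to back, no row padding. */
static size_t palette4_rgb8_size_unpadded(size_t width, size_t height)
{
    size_t palette_bytes = 16 * 3;
    size_t index_bytes   = (width * height + 1) / 2;   /* 4 bits per texel */
    return palette_bytes + index_bytes;
}

/* My guess at the reference implementation: each row rounded up to a byte. */
static size_t palette4_rgb8_size_row_aligned(size_t width, size_t height)
{
    size_t palette_bytes = 16 * 3;
    size_t row_bytes     = (width + 1) / 2;
    return palette_bytes + row_bytes * height;
}
```

For a 3x3 image the two already disagree: 48 + 5 = 53 bytes unpadded versus 48 + 2 * 3 = 54 bytes with row alignment, so the expected data layout and imageSize differ.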

  • HM

Is the reference implementation GLESonGL?

It’s here: Downloads

Not sure if it’s exactly the same package, since it goes by a different name.

  • HM

Yep, that’s the one I was looking at.

As long as we’re sticking the knife in, I should point out that level > 0 and level < 0 are both handled incorrectly by GLESonGL’s glCompressedTexImage2D: level > 0 is accepted when it should be rejected, whereas level < 0 (which the extension uses to supply mipmap chains) just generates an error and fails.
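For reference, this is roughly the level check I would expect from my reading of the extension (just a sketch with a made-up helper name, not GLESonGL’s actual code): level must be zero or negative, where a negative value -n means the data contains mip levels 0 through n.

```c
/* Expected behaviour per my reading of OES_compressed_paletted_texture:
   level <= 0, where 0 means a single mip level and -n means the data
   holds levels 0..n.  Anything positive should be rejected. */
static int paletted_level_is_valid(int level, int width, int height)
{
    if (level > 0)
        return 0;                       /* GLESonGL wrongly accepts this */

    int max_dim   = width > height ? width : height;
    int max_level = 0;                  /* floor(log2(largest dimension)) */
    while (max_dim > 1) { max_dim >>= 1; ++max_level; }

    return -level <= max_level;         /* can't supply more mips than exist */
}
```

In other words, the two cases are reversed: the one GLESonGL accepts should fail, and the one it rejects is the form the extension actually defines for supplying mipmaps.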

Anyway, I just submitted all this stuff to the GLESonGL bug-reporting address.

Bump.
The reference implementation is still broken.

And I still think that defining texture layout in memory as endian-dependent is a bit weird.
