I have a lot of dense geometry to render.

Since the geometry is already chunked into a tile-like organization, I don't need the full range of a 32-bit float for my vertex coordinates. In some cases I could get by with a 16-bit short, or even an 8-bit unsigned byte (!). I would of course need to adjust my transformation matrices accordingly.
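To make the idea concrete, here is a minimal sketch of what I have in mind: quantize a tile's float positions into an `Int16Array` using the tile's bounding box, and compute the per-axis scale/offset that would have to be folded back into the tile's model matrix to undo the quantization. The function names and the symmetric mapping onto [-32767, 32767] are just illustrative assumptions, not any particular library's API.

```javascript
// Quantize tile-local float positions (x,y,z interleaved) to signed
// 16-bit integers. Assumes the tile's axis-aligned bounding box is known.
function quantizeTile(positions, min, max) {
  const quantized = new Int16Array(positions.length);
  const center = [], scale = [];
  for (let a = 0; a < 3; a++) {
    center[a] = (min[a] + max[a]) / 2;
    // Half-range per axis, mapped to the symmetric range [-32767, 32767]
    // (avoids the asymmetric -32768). Guard against a degenerate axis.
    scale[a] = ((max[a] - min[a]) / 2 || 1) / 32767;
  }
  for (let i = 0; i < positions.length; i++) {
    const a = i % 3;
    quantized[i] = Math.round((positions[i] - center[a]) / scale[a]);
  }
  // To dequantize on the GPU, fold the inverse into the model matrix:
  //   M' = M * translate(center) * scale(scale)
  return { quantized, center, scale };
}

// CPU-side dequantization of coordinate i (for checking / window queries).
function dequantize(q, i, center, scale) {
  const a = i % 3;
  return q[i] * scale[a] + center[a];
}
```

On the GL side the buffer would then presumably be bound with `gl.vertexAttribPointer(loc, 3, gl.SHORT, false, 0, 0)` and the scale/translate applied in the matrix (or `normalized = true` could be used instead, with a slightly different matrix).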

My questions are:
1) Is this supported at all?
2) Is it a "good idea" in general? Perhaps such usage is atypical and isn't handled by an optimized code path in the driver/GPU?

The idea is of course to save GPU memory. In addition, since the server could emit a much more compact binary encoding (I'm loading data via binary XHR), it would save network bandwidth as well. Thirdly, since I need to retain this data on the JavaScript side too (to do window queries and iterate geometry), I would save JavaScript VM memory as well.
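For the network side, decoding the binary payload client-side would be something like the sketch below. The wire layout (a 4-byte little-endian vertex count followed by interleaved 16-bit signed coordinates) is purely a hypothetical example format I made up for illustration; note also that the `Int16Array` view uses the host's byte order, so I'm assuming a little-endian client here.

```javascript
// Decode an ArrayBuffer (e.g. from an XHR with responseType =
// 'arraybuffer') into a typed-array view of 16-bit vertex coordinates.
// Assumed layout: uint32 vertex count (little-endian), then count * 3
// int16 coordinates. Byte offset 4 satisfies Int16Array's 2-byte alignment.
function decodeVertexBuffer(arrayBuffer) {
  const view = new DataView(arrayBuffer);
  const count = view.getUint32(0, true); // number of vertices
  return new Int16Array(arrayBuffer, 4, count * 3);
}
```

The same `Int16Array` could then be handed to `gl.bufferData` directly and kept around for the JavaScript-side queries, so the data is only ever held in its compact form.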

Comments welcome.