32F texture clamping?

I have a GPU terrain renderer in my project based on Harald Vistnes's one from Game Programming Gems 6 (chapter 5.5). Basically, it uploads a flat, square 17x17 patch to a VBO and deforms it in the vertex shader with a vertex-texture-fetched heightmap. The terrain is organized as a quadtree, and the same patch is rendered once per leaf with an offset and scale that depend on the LOD, passed in as uniforms. This means the same number of verts/triangles is used to cover varying surface areas.
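In rough outline it works like this (a simplified sketch, not Vistnes's actual code; the names and the exact shader are mine):

[code]
#include <GL/glew.h>

/* Illustrative vertex shader: displace the flat patch with a vertex texture fetch. */
static const char *terrain_vs =
    "uniform sampler2D heightmap;\n"
    "uniform vec2  patchOffset;    /* world-space XZ offset of this quadtree leaf */\n"
    "uniform float patchScale;     /* world-space size of the leaf                */\n"
    "uniform vec2  invTerrainSize; /* 1 / terrain extent, to derive texcoords     */\n"
    "void main()\n"
    "{\n"
    "    vec2  xz = gl_Vertex.xz * patchScale + patchOffset;\n"
    "    float h  = texture2DLod(heightmap, xz * invTerrainSize, 0.0).x;\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * vec4(xz.x, h, xz.y, 1.0);\n"
    "}\n";

/* Per quadtree leaf, with the shared 17x17 patch VBO/IBO already bound: */
static void draw_leaf(GLint offsetLoc, GLint scaleLoc,
                      float x, float z, float size, GLsizei indexCount)
{
    glUniform2f(offsetLoc, x, z);
    glUniform1f(scaleLoc, size);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);
}
[/code]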

Back on topic. I've got it working with 16-bit heightmaps (using a GL_LUMINANCE8_ALPHA8 hack, as not all hardware supports GL_LUMINANCE16), but I'd like to extend hardware compatibility to SM3.0 cards, which only support 32-bit float formats for VTF. The thing is, the heightmap gets clamped to the [0…1] range, and it doesn't even look like it's normalized against the highest value in the texture; the pixels look quite random.
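For reference, the 16-bit hack boils down to roughly this (simplified; the names are mine):

[code]
#include <GL/glew.h>

/* 16-bit hack: high byte of each height goes into luminance, low byte into alpha.
   The vertex shader recombines them as
       height = (texel.x * 256.0 + texel.w) * (255.0 / 65535.0);
   which restores the full 16-bit precision in [0,1]. */
static void upload_heightmap_16(const GLubyte *packedHiLo, int w, int h)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE8_ALPHA8, w, h, 0,
                 GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, packedHiLo);
}
[/code]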

Vistnes's original demo program for this technique uses Direct3D and a 32-bit float DDS texture for the heightmap, which means it's loaded natively and there is no clamping. I'm guessing it's OpenGL's pixel transfer that's ruining it for me: according to the spec, it's bound to clamp the incoming values to [0…1] no matter what, and I can't find any way to disable this behaviour.

So the solutions that come to my mind are:
[ul]
[li]either upload the texture bypassing glTexImage2D,[/li]
[li]or normalize the heightmap before uploading and pass the scale and bias to the shaders via uniforms.[/li]
[/ul]
Unless there is some way of disabling the clamping that I don't know of.

Now, of the two, the first approach seems more appealing to me. Are Pixel Buffer Objects the way to go here?

Any comments appreciated.

Ah, my bad. I hadn't analyzed Vistnes's code thoroughly enough: his heightmap is normalized and he uses a world matrix (the equivalent of the modelview) to scale it vertically. Sorry for the hassle.

Still, I would like to know if it’s possible to disable the clamping.

Take a look at GL_ARB_texture_float:
http://opengl.org/registry/specs/ARB/texture_float.txt

"
Overview
[…]
Floating-point components are clamped to the limits of the range
representable by their format.
[…]
"
i.e. no clamping to [0,1].
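So uploading the raw floats into one of the internal formats that extension defines should keep them unclamped, something like this (just a sketch, assuming the extension is supported; names are illustrative):

[code]
#include <GL/glew.h>

/* Float internal format from ARB_texture_float; with a fixed-point internal
   format such as plain GL_LUMINANCE the values would be clamped to [0,1] instead. */
static void upload_float_heightmap(const GLfloat *heights, int w, int h)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, w, h, 0,
                 GL_LUMINANCE, GL_FLOAT, heights);
}
[/code]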

Another extension you can consider:

GL_ARB_color_buffer_float:
http://opengl.org/registry/specs/ARB/color_buffer_float.txt

"OverView
[…]
Clamping control provides a way to disable certain color clamps
and allow programs, and the fixed-function pipeline, to deal in
unclamped colors. There are controls to modify clamping of vertex
colors, clamping of fragment colors throughout the pipeline, and
for pixel return data.
"

I know these extensions; I found them earlier today when googling for the answer. Apparently, by “format” in ARB_texture_float they didn't mean the internal format of the image. The float colour buffer is irrelevant, as I'm rendering to a fixed-point buffer.

If the float textures weren’t clamped, why would I be getting this?



Two screenshots from the same point of view, the second from a bit higher altitude to make it clearer that the terrain is flat. The verts are displaced by 1 unit at most.
Don’t mind the GNU image. :wink:

Besides, the glTexImage2D manual page says this:

GL_LUMINANCE
Each element is a single luminance value. The GL converts it to floating point, then assembles it into an RGBA element by replicating the luminance value three times for red, green, and blue and attaching 1 for alpha. Each component is then multiplied by the signed scale factor GL_c_SCALE, added to the signed bias GL_c_BIAS, and [b]clamped to the range [0,1][/b] (see glPixelTransfer).

The glPixelTransfer manual entry confirms this.

Any clues?

I’m going with normalizing the heightmap on load at the moment.
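Roughly like this (a sketch of what I mean; the actual loader differs in the details):

[code]
#include <GL/glew.h>
#include <float.h>

/* Normalize the heights into [0,1] before upload and hand the scale/bias back,
   so the vertex shader can reconstruct:  height = texel * heightScale + heightBias.
   Values inside [0,1] survive any pixel-transfer clamping unchanged. */
static void upload_normalized_heightmap(GLfloat *heights, int w, int h,
                                        GLfloat *outScale, GLfloat *outBias)
{
    int i, count = w * h;
    float lo = FLT_MAX, hi = -FLT_MAX;

    for (i = 0; i < count; ++i) {
        if (heights[i] < lo) lo = heights[i];
        if (heights[i] > hi) hi = heights[i];
    }

    float range = (hi > lo) ? (hi - lo) : 1.0f;
    for (i = 0; i < count; ++i)
        heights[i] = (heights[i] - lo) / range;

    /* Still a float internal format so VTF works on the SM3.0 cards. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, w, h, 0,
                 GL_LUMINANCE, GL_FLOAT, heights);

    *outScale = range;  /* pass these two to the shader as uniforms */
    *outBias  = lo;
}
[/code]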