Just a quick question about HDR

Does HDR have to use a float buffer? The effects of HDR can be done on a simple 8-bit-per-channel RGBA pbuffer, and the HDR examples in ATI RenderMonkey use 8-bit-per-channel RGBA render targets (GL2 version)… so I was wondering if I have to change my pbuffers and render targets to floating point. I don't want to, for compatibility reasons.

Twixn

If you take a closer look at the two versions in RenderMonkey (GL2 and DX) you will see that the DX version uses 64-bit floating-point buffers. The effect is different too: in the DX version you can still see the details of the teapot when it's in front of the sun, while in the GL version the teapot disappears.
I'm pretty sure you do need the 64-bit floating-point buffer for correct HDR rendering. The GL2 demo in RenderMonkey is a hack where they control the light bleeding using alpha values, incrementing those when they do the blur step.
You should be able to achieve a similar effect if you are content with the hacking approach, but then it's not true HDR rendering.
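For what it's worth, creating a true float render target mostly comes down to the internal format you ask for. Here is a minimal sketch, assuming your driver exposes ARB_texture_float (the GL_RGBA16F_ARB enum value and the 512x512 size are just illustrative):

```c
#include <GL/gl.h>

/* From ARB_texture_float; defined here in case the header predates it. */
#ifndef GL_RGBA16F_ARB
#define GL_RGBA16F_ARB 0x881A
#endif

/* Allocate a 64-bit (16 bits per channel) float texture to render into.
   Assumes a GL context is current. */
GLuint create_float_target(GLsizei w, GLsizei h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    /* Float internal format: stored values are not clamped to [0,1]. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, w, h, 0,
                 GL_RGBA, GL_FLOAT, NULL);
    return tex;
}
```

You would then render into it via your pbuffer (or copy into it) as usual; the only real difference from the 8-bit path is that intermediate values above 1.0 survive.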

Btw, you will only be able to see the DX9 effect if you have capable hardware, I guess :slight_smile:

Does anyone know why ATI did not do the same thing with OpenGL? Surely it should be possible to do true HDR with OpenGL too, but maybe they just wanted to showcase both the real deal and the hack?

I use 64-bit integer texture formats and buffers in OpenGL, but NVIDIA doesn't support those. :rolleyes:
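(That is presumably something like the GL_RGBA16 internal format: 16 bits of fixed-point precision per channel, so 64 bits per texel, but still clamped to [0,1]. A hypothetical allocation, for comparison with the float version above:)

```c
/* GL_RGBA16: 64 bits per texel, fixed point, values clamped to [0,1].
   Assumes a GL context is current and a texture object is bound. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_SHORT, NULL);
```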

Originally posted by RAMman:
Does anyone know why ATI did not do the same thing with OpenGL? Surely it should be possible to do true HDR with OpenGL too, but maybe they just wanted to showcase both the real deal and the hack?
RM does not support float textures or float render targets in GL.
Maybe next version?

Are you sure, V-Man? I have the newest version of RM, and I just updated the GL version so it is exactly like the DX version: it uses 64-bit float textures, and the result on screen is identical too. My RM version is 1.5.

HDR stands for “high dynamic range”. 8 bits of fixed-point precision isn’t very high dynamic range – about 49 dBV.

Floating-point values get high dynamic range from the exponent (rather than the mantissa), so even a 1.5.10 16-bit float (1 sign bit, 5 exponent bits, 10 mantissa bits) has a pretty large dynamic range: about 200 dBV (depending on how you count denormals, etc.).
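To put rough numbers on that (a back-of-the-envelope check, using the usual half-float limits: 65504 max, about 6.1e-5 smallest normal, about 6.0e-8 smallest denormal):

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* dBV-style dynamic range: 20 * log10(max / min representable step). */
    double fixed8 = 20.0 * log10(255.0 / 1.0);       /* ~48 dB              */
    double half   = 20.0 * log10(65504.0 / 6.1e-5);  /* ~181 dB, normals    */
    double denorm = 20.0 * log10(65504.0 / 5.96e-8); /* ~241 dB, w/denormals */

    printf("8-bit fixed:        %.0f dB\n", fixed8);
    printf("half float:         %.0f dB\n", half);
    printf("half w/ denormals:  %.0f dB\n", denorm);
    return 0;
}
```

So “about 200 dBV” is a fair middle figure, depending on where you stop counting denormals.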

Note that HDR does not mean “glow” or even “overbrightening”, although those effects are often used in HDR renderers. HDR means that you render a potentially large range of color values (say, between 0.0001 and 10,000 in the same scene) and then use some combination of mechanisms to present it to the user through the low-dynamic-range, fixed-precision frame buffer. Tone mapping + glow is a typical approach.
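As a concrete sketch of the tone-mapping step (hypothetical names: sceneTex would be the float texture holding the rendered HDR scene, and exposure a tunable uniform), here is a minimal Reinhard-style operator in GLSL, kept as a C string:

```c
/* Minimal Reinhard-style tone map: compresses HDR values in [0, inf)
   into [0, 1) for display in a fixed-precision frame buffer.
   Illustrative only; a real renderer would add a glow pass etc. */
static const char *tonemap_fs =
    "uniform sampler2D sceneTex;                                 \n"
    "uniform float exposure;                                     \n"
    "void main()                                                 \n"
    "{                                                           \n"
    "    vec3 hdr = texture2D(sceneTex, gl_TexCoord[0].xy).rgb;  \n"
    "    vec3 scaled = hdr * exposure;                           \n"
    "    gl_FragColor = vec4(scaled / (1.0 + scaled), 1.0);      \n"
    "}                                                           \n";
```

The division by (1 + scaled) is what maps any unbounded value into displayable range; a glow pass would typically threshold and blur the same HDR texture before this mapping.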

Hmmm… where does it say 64-bit in the RM 1.5 HDR example? I’m not seeing it here (which probably means it’s right in front of my face :slight_smile: ).

EDIT: The stuff I said above was about the GL version. If I load the DX9 version I get nothing but a black window and the message “failed to create renderable texture…”. So I guess whatever they are trying to do in this example, NVIDIA doesn’t support yet. Surely this 6800 is capable of supporting whatever this demo is requesting. It had better be, for 400 freaking dollars. :stuck_out_tongue: Meh.

-SirKnight

Thanks for that… So how does HDR work differently from a simple 32-bit buffer in terms of colour output? Are the colour components still clamped to [0,1], or do they operate differently?

Twixn

PS: Sorry for my lack of knowledge about HDR; so far I have not found much material on its inner workings.

Originally posted by RAMman:
Are you sure, V-Man? I have the newest version of RM, and I just updated the GL version so it is exactly like the DX version: it uses 64-bit float textures, and the result on screen is identical too. My RM version is 1.5.
I did a test and it looked like the values were clamped.
Also, a warning message appears when I try to use floating-point RGBA, something about a limitation.

Unless there is an error within the test…
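For reference, the kind of round-trip test I mean looks roughly like this (a sketch, assuming ARB_texture_float; if out[0] comes back as 1.0 instead of 2.0, something in the path is clamping):

```c
#include <GL/gl.h>
#include <stdio.h>

#ifndef GL_RGBA16F_ARB
#define GL_RGBA16F_ARB 0x881A
#endif

/* Upload unclamped values into a float texture and read them straight back.
   Assumes a GL context is current and a 2D texture object is bound. */
void clamp_test(void)
{
    GLfloat in[4]  = { 2.0f, 0.5f, 4.0f, 1.0f };
    GLfloat out[4] = { 0 };

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, 1, 1, 0,
                 GL_RGBA, GL_FLOAT, in);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, out);

    printf("wrote 2.0, read back %f\n", out[0]);
}
```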