Alpha blending for floating-point buffers

Can anyone tell me if there is any extension that now supports alpha blending for floating-point buffers? Thanks a lot!

No.

Only nVidia’s NV40-based hardware even supports it, and then only with 16-bit float buffers.

Some formal mechanism to check for this would be very useful.

Currently, we’re doing a per-card check to tell if we have to manually emulate blending and bilinear for float using shaders and multiple passes, or if we can use the fast path. This is obviously a Bad Thing.

Could we have a glGet() to query the maximum bit depth at which we can count on bilinear filtering and blending?
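Something along these lines, say (the enums below are made up; this is just what such a query might look like):

```c
/* hypothetical query -- these enums do not exist today */
GLint maxBlendBits, maxFilterBits;
glGetIntegerv(GL_MAX_BLENDABLE_COLOR_BITS, &maxBlendBits);     /* invented */
glGetIntegerv(GL_MAX_FILTERABLE_TEXTURE_BITS, &maxFilterBits); /* invented */

if (maxBlendBits >= 16 && maxFilterBits >= 16) {
    /* fast path: hardware blending and bilinear on fp16 */
} else {
    /* fall back to shader emulation and multiple passes */
}
```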

Pete

Thanks, Korval.
I am using the GeForce 6800. Can you give me more details on how blending works there? What function should I call?

Same function as usual :slight_smile:

You must render to a floating-point target (pbuffer).
There are no new functions or enums; you just treat floating-point targets as fixed-point ones.
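In other words, once the float pbuffer is current, the setup is exactly what you'd do for a fixed-point target; roughly (drawSlices() is just a placeholder for your own drawing code):

```c
/* assuming a floating-point pbuffer has already been created and made
   current (e.g. via WGL_ARB_pbuffer together with a float pixel format
   such as WGL_NV_float_buffer or WGL_ATI_pixel_format_float) */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

drawSlices();   /* placeholder: render as usual; on NV40 the blend
                   runs in hardware for fp16 targets */
```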

[edit]
@pete:
I haven’t tested it, but I think your problem can be solved in the following way: after loading a floating-point texture, set the filtering to linear. If the driver doesn’t support this filtering, you should get an error. Same with blending: set the blend mode to alpha blending and check glGetError.

At least I think so. :slight_smile:
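Something like this untested sketch (floatTex stands in for whatever float texture you loaded):

```c
/* untested: ask for linear filtering on the float texture, then see
   whether the driver complains */
glBindTexture(GL_TEXTURE_2D, floatTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
if (glGetError() != GL_NO_ERROR) {
    /* linear filtering on the float texture was rejected */
}

/* same idea for blending on the float target */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
if (glGetError() != GL_NO_ERROR) {
    /* blending on the float target was rejected */
}
```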

> I haven’t tested it, but I think your problem can be solved in the following way: after loading a floating-point texture, set the filtering to linear. If the driver doesn’t support this filtering, you should get an error. Same with blending: set the blend mode to alpha blending and check glGetError.
I’m pretty sure OpenGL doesn’t work that way. Filtering on float textures is specified to work under ARB_texture_float. What isn’t specified is whether or not this will have any reasonable performance (i.e. it may throw you back to software rasterization). The same is true of float buffers; you can turn blending on, but it’s up to the implementation whether it has to kick back into software to do it.

But I tried that. However, it doesn’t work. What we are trying to achieve is volume rendering using a pbuffer, so each new slice with a certain alpha is blended with the target texture.
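Roughly like this, back to front (the slice helpers are just placeholders for our own code):

```c
/* simplified back-to-front compositing of volume slices into the float
   pbuffer; bindSliceTexture() and drawSliceQuad() are placeholders */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

for (int i = numSlices - 1; i >= 0; --i) {
    bindSliceTexture(i);
    drawSliceQuad(i);
}
```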

Originally posted by Korval:

Filtering on float textures is specified to work under ARB_texture_float.

Hmm… I didn’t know about that… If I’m not mistaken, the question of implementation hints has been raised rather often.

However, there is another solution (more of a hack, really). SGI wrote an interesting library a long time ago whose purpose was to determine how “fast” certain features are. For example, they rendered a mesh with and without lighting; if the performance difference was less than 50%, they considered lighting to be fast.
This approach can be used here too. Say, if the performance of filtering on an fp texture is at least 50-70% of filtering on a normal texture, you can consider the feature to be hardware supported.
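As a rough sketch (timeFilteredQuads() is a placeholder that draws a batch of filtered quads with the given texture, calls glFinish(), and returns the elapsed time in seconds):

```c
/* time the same filtered rendering with a fixed-point and a float texture */
double tFixed = timeFilteredQuads(fixedPointTex, 1000);
double tFloat = timeFilteredQuads(floatTex, 1000);

/* if the float path runs at at least ~50% of the fixed-point speed,
   treat float filtering as hardware accelerated */
int floatFilteringIsFast = (tFloat > 0.0) && (tFixed / tFloat >= 0.5);
```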

It would be interesting to write such a library. You would provide it with your custom pixel format and let it test different features…

I think a lot of developers using OpenGL would like to have such a library…

Thanks for the suggestions. Unfortunately we see neither errors nor software emulation of float*, so the only fallback would be rendering a small texture with bilinear enabled, reading it back, and checking whether we got the expected filtering.
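Something like this, presumably (the helper and the threshold values are made up):

```c
/* readback test: magnify a tiny float texture with GL_LINEAR and check
   whether the result is actually interpolated or just nearest-neighbour;
   drawScreenQuad() is a placeholder, values/thresholds are made up */
glBindTexture(GL_TEXTURE_2D, smallFloatTex);   /* e.g. 2x2 fp16 checker of 0.0 and 1.0 */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
drawScreenQuad(16, 16);                        /* magnify onto a 16x16 viewport */

float pixel[4];
glReadPixels(8, 8, 1, 1, GL_RGBA, GL_FLOAT, pixel);

/* an interior pixel should land between the texel values if bilinear
   filtering actually happened */
int bilinearWorks = (pixel[0] > 0.1f && pixel[0] < 0.9f);
```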

> Filtering on float textures is specified to work under ARB_texture_float.

Since no shipping hardware can meet this requirement of the spec for all float modes, it would certainly help us out if a limitation value were exposed, as it is for a lot of other features, such as the number of texture units in multitexturing or the maximum texture size.
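i.e. the same sort of query we already rely on:

```c
/* the existing limit queries this would be analogous to */
GLint maxTexSize, maxTexUnits;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize);
glGetIntegerv(GL_MAX_TEXTURE_UNITS, &maxTexUnits);
```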

It seems like a similar case to those, or am I missing something?

thanks,
Pete

  • on OS X, ATI R3xx, that is.

> on OS X, ATI R3xx, that is.
Sounds like a driver bug. Not uncommon, when dealing with ATi hardware.

I was hoping it would be otherwise…

But it’s true. I was actually surprised when Korval pointed out the importance of software fallback; I don’t know of any driver (for consumer cards, that is; I’m not sure about professional ones) that would actually DO the fallback. Look at GLSL… So I hoped that drivers would be honest and report an error… We really need to push driver developers to make their drivers more “honest”. The preferred way: provide a software fallback; if you have no time or desire to implement that, at least give an error message…

A new extension for querying hardware support would be handy and very easy to implement,
something like glIsSupportedByHardware(cap).
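Usage could be as simple as the following (the function and enums are purely hypothetical, of course):

```c
/* purely hypothetical -- no such extension exists today */
if (glIsSupportedByHardware(GL_BLEND_FLOAT16) &&
    glIsSupportedByHardware(GL_LINEAR_FILTER_FLOAT16)) {
    /* take the fast hardware path */
} else {
    /* emulate blending/filtering in shaders */
}
```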