Upcoming pixel/register glEXT's, problems with DX8 Pixel Shaders, & more-

Does anyone know about upcoming OpenGL extensions for new hardware? There have been many references to a much-needed ‘render-to-texture’ extension, but little else on new OpenGL extensions, particularly those that expose the pixel-processing power of the upcoming chips that will ship with DX8 pixel shaders.

Considering that the consumer graphics accelerator business has consolidated (really just four big players, right? Nvidia, ATI, VIA (via S3), and Matrox), I hope that exposing cross-platform GL extensions will be a requisite for staying competitive. Although Windows is the *primary* 3D PC platform, the small market of non-DirectX platforms (Linux, Apple, cross-platform titles) should hold enough value for chip-makers to outweigh the costly engineering of driver/API-extension development.

Unfortunately, nobody except Nvidia has really developed their GL capabilities. Despite repeated email petitions, ATI has yet to expose any Radeon pixel/register processing beyond a ‘dot3’ texture environment, which is sad considering the potential of ‘limited dependent texture reads’ via their EMBM, as well as the priority_ID (for shadow maps) and any register capabilities.
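
(For anyone unfamiliar with it, that ‘dot3’ env amounts to roughly the following. This is only a minimal sketch, assuming the EXT_texture_env_combine tokens plus a dot3 combine mode are available, a range-compressed normal map is bound on the active texture unit, and the range-compressed light vector is in the primary color.)

    /* Rough sketch: per-pixel N.L via the 'dot3' combine mode.
       Assumes a current GL context and <GL/glext.h> for the EXT tokens. */
    #include <GL/gl.h>
    #include <GL/glext.h>

    void setup_dot3_env(void)
    {
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_EXT);
        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_EXT,  GL_DOT3_RGB_EXT);
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_EXT,  GL_TEXTURE);        /* normal map */
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB_EXT, GL_SRC_COLOR);
        glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_EXT,  GL_PRIMARY_COLOR_EXT); /* light vector */
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB_EXT, GL_SRC_COLOR);
    }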

Via NV_register_combiners, Nvidia exposed capabilities not available in DX that reduce the number of passes or enhance operations (additional attenuation, 16-bit compares for shadow maps, etc.). Hopefully this chip-specific exposure continues for NV20 & X-Box.
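
To illustrate the kind of thing I mean, here is roughly what a one-combiner N.L-times-attenuation setup looks like. This is only a sketch: it assumes a current context, that the NV entry points have already been fetched (wglGetProcAddress / glXGetProcAddress), a normal map on unit 0, an attenuation map on unit 1, and the light vector in the primary color.

    /* Sketch: general combiner 0 computes dot(normal map, light vector);
       the final combiner multiplies the result by an attenuation texture. */
    void setup_nl_times_attenuation(void)
    {
        glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

        /* A = expanded tex0 (normal map), B = expanded primary color (L) */
        glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                          GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
        glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                          GL_PRIMARY_COLOR_NV, GL_EXPAND_NORMAL_NV, GL_RGB);

        /* A dot B -> spare0, everything else discarded */
        glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                           GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                           GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE);

        /* final combiner: out.rgb = spare0 * tex1 (attenuation), out.a = 1 */
        glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_TEXTURE1_ARB,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO,
                               GL_UNSIGNED_IDENTITY_NV, GL_RGB);
        glFinalCombinerInputNV(GL_VARIABLE_G_NV, GL_ZERO,
                               GL_UNSIGNED_INVERT_NV, GL_ALPHA);

        glEnable(GL_REGISTER_COMBINERS_NV);
    }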

Since there are only four major chip-makers, it is simple for a software developer to write pixel-handling routines for each chipset or style of chip capabilities (in fact, this really needs to happen in most 3D games/environments anyway, just to balance the performance/visual-quality tradeoff). In the past, most chip vendors exposed an entire API (Glide, S3 Metal, RRedline, etc.), not just an optional extension to a common API (OpenGL).
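
Picking a per-chipset path at startup is not much more than the following (a rough sketch; the enum and function names are made up):

    #include <string.h>
    #include <GL/gl.h>

    /* Hypothetical dispatch: pick a pixel path based on which extensions
       the driver advertises.  has_ext() is a plain substring check on the
       GL_EXTENSIONS string (fine for a sketch; a real check should match
       whole, space-delimited names). */
    static int has_ext(const char *name)
    {
        const char *exts = (const char *) glGetString(GL_EXTENSIONS);
        return exts && strstr(exts, name) != NULL;
    }

    typedef enum { PATH_PLAIN_GL, PATH_ENV_COMBINE, PATH_REG_COMBINERS } PixelPath;

    PixelPath choose_pixel_path(void)
    {
        if (has_ext("GL_NV_register_combiners"))  return PATH_REG_COMBINERS;
        if (has_ext("GL_EXT_texture_env_combine")) return PATH_ENV_COMBINE;
        return PATH_PLAIN_GL;
    }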

I’m raising these issues because I need to stay with OpenGL, since it is the choice for any cross-platform work. Although we spent considerable time playing with the new DX8 materials, there isn’t enough new functionality to justify two codebases (one DX8 for Windows, one OpenGL for everything else). More importantly, the new DX8 pixel shaders forecast some difficulties, since they are costly to generate (i.e. pre-compiled use is recommended).

When rendering from a large database of dynamic 3D objects and materials, there is an enormous variety of possible shading methods. Many events in the world will change an object’s ‘shader’: adding a highlight, an x-ray movie projection, burning, getting wet, or consider a piece of shiny metal that gains a splattered dirt decal texture which also varies the specular shininess. The possible configurations are far too numerous to be precomputed (and compiled as a fixed set of DX shaders). But dynamically generating these shading methods into a set of register commands and/or multi-pass states, or a ‘run-time’ pixel shader, is feasible. That’s what we do now for plain vanilla GL, dual-pass EXT_texture_env_combine, NV_register_combiners, etc. For programmable shaders, we will just have to treat them like textures and keep a working cache of currently ‘uploaded & compiled’ shaders. Yet we fear the compilation time for creating/uploading a new shader will regularly interrupt the game world.
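
To be concrete, the cache I’m imagining is nothing fancier than this (a rough sketch; the hash and helper names are made up, and compile_shader_for() stands in for whatever upload/compile step the API ends up requiring):

    #include <stdlib.h>

    /* Hypothetical cache of compiled shader setups, keyed by a hash of the
       material state that actually affects pixel processing. */
    typedef struct {
        unsigned key;        /* hash of the material's shading state      */
        void    *compiled;   /* driver handle / display list / program id */
    } ShaderCacheEntry;

    #define CACHE_SIZE 256
    static ShaderCacheEntry cache[CACHE_SIZE];

    extern unsigned hash_material_state(const void *material); /* assumed helper */
    extern void    *compile_shader_for(const void *material);  /* assumed, the slow step */

    void *get_shader(const void *material)
    {
        unsigned key  = hash_material_state(material);
        unsigned slot = key % CACHE_SIZE;

        if (cache[slot].compiled == NULL || cache[slot].key != key) {
            /* Miss: pay the compile/upload cost once and reuse the result
               until it is evicted.  This is exactly the hitch we worry about. */
            cache[slot].key      = key;
            cache[slot].compiled = compile_shader_for(material);
        }
        return cache[slot].compiled;
    }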

Thus we are very curious whether anyone out there can share, or hint at, upcoming plans for GL exposure, and the pros/cons of programmable but highly dynamic pixel handling.

Thanks for reading this lonnnnnng post and for any thoughts-

[[PS to chip-makers: it’d be great to see you guys speaking out on this; it keeps the GL community enthusiastic about developing for your future chipsets. The 3D market is slowly evolving beyond just games!]]

We will continue to expose new features through extensions in OpenGL. There’s plenty of great stuff planned.

  • Matt

oooh Matt! Don’t leave me drooling like this! Tell us everything!!

Yeah Matt!

Please tell us a bit more, or at least whether there are any plans for Nvidia’s upcoming hardware to support dynamically created pixel programs (or just further/enhanced register combiners).

You’ll have to wait.

  • Matt

Matt…you’re evil.

What kind of adoption of these extensions will there be by other hardware vendors, though?

I mean, I can see how Matt’s bosses would not want him to tell ATI how to correctly implement their version of NV_vertex_program, but that’s really what has to happen for a technology that changes an engine to the core like that to take off with the majority.

You’ll have to wait.

Until when?

j

These vendor-specific extensions are the nightmare of the engine developer.
You must write code that works on all current gfx cards and achieves similar effects.
I think they had better speed up the process of registering an official (ARB, is it?) extension for a specific feature, and hardware manufacturers should use the existing extensions for existing features.
I don’t think manufacturer-specific extensions are good. No more ATI_ … or NV_ … !
Examples:
NV_texgen_reflection is now EXT_texgen_reflection and is used on both NVidia and RadeON cards.
ATI_vertex_blend is now ARB_vertex_blend.

Also, OpenGL should speed up releasing new OpenGL versions that officially include the latest extensions … like DirectX!

But anyway, I like OpenGL’s portability and will never use DirectX, for that reason alone.

I totally agree on this point. I think the ARB should start standardizing these new extensions as soon as they can, as well as coming out with newer versions of OpenGL.

Well, actually, EXT_texgen_reflection is a non-extension. We created NV_texgen_reflection and put it in our driver. ATI chose to rename it to EXT_texgen_reflection without even asking us first; if they had asked, we would have responded that there’s no point in doing so, because there are no functional changes to the extension. There is absolutely nothing gained by adding yet another name that creates more confusion and is not recognized by existing applications.

ARB extensions are slightly different, but just making an extension into an ARB extension doesn’t necessarily do anything other than say that the API is “approved” in some sense or another. It certainly does not force anyone to implement the extension. No one is going to implement an ARB extension that they can’t support well, and if an extension isn’t either supported widely now or expected to be supported widely in the future, no one on the ARB will vote for it.

Creating new OpenGL versions that roll in the “latest and greatest” extensions is also cause for trouble. If anyone doesn’t support those extensions, they’ll have to implement them completely in software.

The letters at the beginning of an extension’s name are really quite irrelevant.

  • Matt

Sorry Matt, but I can’t agree with you on this point …

I don’t think NVidia would be happy to use an “ATI_…” extension, or 3Dlabs an “NV_…” extension … marketing reasons.
And for the developer, it’s better to have officially approved extensions that unify the names and avoid this NV_ / ATI_ / 3DFX_ / 3DLABS_ stuff with different names and syntax doing the same thing.

I clearly understand NVidia would be happy to see little NV_s everywhere, but you don’t want to end up in a monopoly trial like MS, do you?

No, actually, we have no objections to using other vendors’ extension names. They’re just names!

There simply haven’t been many cases where we’ve even had the chance. We have one S3 extension and one IBM extension right now, and we would have had an ATI extension except that it was EXT-ified before it shipped (dot3).

I happen to think that the MS trial is a real shame, that we should stop prosecuting success, and that antitrust laws should be repealed, but that’s another story…

  • Matt

In fact, I believe the 6.31 drivers on my GeForce2 expose an ATIX_texture_env_dot3 extension, among SGI, IBM and S3 extensions too, I think.

Anyway, it would be very cool to have different vendors discuss and agree on adding extensions, especially if their products are targeted at the same market segments (for example, 3D gaming).

#1) I agree that it’d be great if there were a new OpenGL release, perhaps not a version 2.0, but an altogether different version, say ‘A’, that is a tight encapsulation of the well-used modern features: vertex/pixel processing, buffer/state management, capability/device querying, and extension handling. Having a small, functionality-tight version of OGL would make cross-platform driver development and enhancement much easier for all.

#2) However, I do think that vendor-specific extensions that expose the unique functionality of a given chipset (NV_register_combiners) are a very good thing. They prevent us from having to develop solely for the lowest common denominator of features and allow us to continually push the envelope. You simply can’t do that with a non-vendor-extensible API like DX, where MS controls what’s available (although they did let Matrox add a matrix-palette extension to DX7). If DX8 were cross-platform but didn’t support vendor extensions, I’d still be using OpenGL.

I totally agree that vendor extensions MUST exist! That’s the future, a door open to new features.
But why name it NV_register_combiners?
EXT_register_combiners would have been fine (validated by the OpenGL registry), and other hardware manufacturers would then already have a standardized version of the extension to use if they make a card that supports register combiners too!
In other words:
If ATI makes a register combiner extension in the Radeon 2 (only an example!), I’m afraid the extension will be validated because of its frequent use and renamed to EXT_register_combiners.
So all older software that tests for “NV_register_combiners” will say “extension not found” even though the extension exists under another name, and all new software will have to test both names.
I’m not angry at the card manufacturers for making new extensions (of course not), but please try to find a way for us programmers to find our way through this jungle more easily.
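
Something like this is what I’m afraid every new program will need (just a sketch; the EXT name is purely hypothetical, it’s only my example above):

    /* Sketch: checking for the same functionality under two names.
       "GL_EXT_register_combiners" is hypothetical; only the NV name exists today. */
    const char *exts = (const char *) glGetString(GL_EXTENSIONS);
    int have_combiners =
        (exts != NULL) &&
        (strstr(exts, "GL_NV_register_combiners")  != NULL ||
         strstr(exts, "GL_EXT_register_combiners") != NULL);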


I don’t think that ATI would like to simply copy the implementation and functionality of nVidia. If someone comes later with a feature, they will be focused on making it better than the already existing implementations, with some unique functions. However, it is possible to create extensions to extensions…

No, if someone else implements register combiners, they should call it NV_register_combiners, because that is what its name is!

If you want to promote an extension to EXT, you MUST have the permission of the vendor who originated the extension. Obviously, there is no one to enforce this rule, which is why it has been broken so many times (EXT_texture_edge_clamp, EXT_texgen_reflection, etc.), but it is a rule nonetheless.

  • Matt

Matt, that’s obviously not logical!
This means that if another vendor builds a feature similar to NV_register_combiners with its own technology, it will have to create a new extension if NVidia does not approve the use of their extension name! That makes two different extensions for the same job … and it also gives a specific vendor the opportunity to “block” a feature. And I thought OpenGL was freely usable software …

Never mind, we all know the world will never be perfect …


Paddy, I believe you misunderstood: Matt said that a vendor needs the originating vendor’s approval to replace the NV with EXT.

Though I believe the name doesn’t really matter. It is just the name of the first vendor that did it; the other vendors are free to implement it!

If a company wants to look good, they only have to think of and implement the extension first! It’s not Nvidia’s fault if the ATI folks lack creativity and change the name so they don’t look like they are actually behind and just trying to keep up!

But actually, their product would probably sell better if they supported all the Nvidia extensions!! The Radeon is a good performer, and it’s cheaper than most GeForce 2 cards I’ve seen.