I’m not sure why you always take this angle when people talk about feature requests.
Because that is the angle that ought to be taken. Rather than inundating hardware developers with random requests for functionality, any requested functionality should come with some justification behind it.
I could just as easily say that hardware should do shadows for you. However, the complicated nature of shadows in a scan converter, coupled with the question of how a fragment program would even access them, makes this too difficult to implement in hardware. As such, even making the request is unreasonable.
The existence of fragment and vertex programs is justified. The existence of “primitive” programs is justified. The existence of floating-point render targets and textures is justified. In each case, there is an argument for the feature that justifies it. If you can’t justify a feature, it shouldn’t be added.
There is not a single thing that one has to have this feature for. It is simply nice. It would be useful in 1000 different cases and essential in none.
Then name some cases. Justify the necessity of this functionality in the same way that other features are justified. Or do you simply want a feature for the sake of having it? That kind of thinking leads to a hardware nightmare, where you add an opcode because it sounded like a good idea at the time, rather than evaluating the need for the feature.
If you tell me that an entire class of advanced rendering techniques would use this functionality, that without it they would run 20x slower, and that they are crucial to the ultimate goal of photorealism, then there is sufficient justification for adding the feature. If you can’t do that for this feature, then there is no point in having it.
Because what we have doesn’t support high-precision blending.
Which hardware developers have promised to provide in the future (NV40/R420). So that point is moot.
What we have doesn’t support exponential light decay based on thickness (the thickness being obtained by reading the depth or stencil buffer of the current pixel in the fragment program).
Which, of course, could be passed in as the “alpha” given the above operations. So, once again, hardly a necessity.
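To make that concrete, here is a minimal sketch of how the decay factor could be folded into ordinary alpha blending. The helper names are hypothetical, and the per-pixel math is written as plain C rather than fragment-program code:

#include <math.h>
#include <GL/gl.h>

/* Hypothetical helper: what the fragment program would compute per pixel.
 * 'thickness' would come from a depth difference (e.g. back faces minus
 * front faces of the volume), and 'sigma' is the medium's absorption. */
static float decay_alpha(float thickness, float sigma)
{
    /* Exponential light decay: the farther light travels through the
     * medium, the less of the background survives. */
    return 1.0f - (float)exp(-sigma * thickness);
}

/* With that value written out as the fragment's alpha, the existing
 * fixed-function blender already does the rest. */
static void setup_decay_blend(void)
{
    glEnable(GL_BLEND);
    /* dst' = src * alpha + dst * (1 - alpha), so the background is
     * attenuated by exp(-sigma * thickness). */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}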
What we have doesn’t support using a different blend equation for color and alpha, or for each color channel.
How useful is this, compared to what you already have? And how often will this functionality be required?
Also, EXT_blend_func_separate exists, so at least RGB and ALPHA can be blended separately. Said functionality could be extended to offer an independent blend function for each color channel.
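For reference, a minimal sketch of what EXT_blend_func_separate already buys you, assuming the extension is present and the entry point has been fetched through the usual extension-loading mechanism:

#include <GL/gl.h>
#include <GL/glext.h>

/* Assumed to be resolved via wglGetProcAddress / glXGetProcAddressARB. */
extern PFNGLBLENDFUNCSEPARATEEXTPROC glBlendFuncSeparateEXT;

static void setup_separate_blend(void)
{
    glEnable(GL_BLEND);
    /* RGB gets ordinary source-alpha blending, while the destination
     * alpha is simply replaced by the source alpha. */
    glBlendFuncSeparateEXT(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                           GL_ONE, GL_ZERO);
}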
An analogy for you: the ability to do blending by reading the frame/depth/stencil buffer from a fragment program is to the current blending model as fragment programs are to register combiners. The former is completely general, whereas the latter is nothing more than flipping a few hard-wired switches.
That’s not justification; that’s explaining the current situation. Also, it presupposes a certain state of mind: that fixed-function is always bad, and that programmability is always good. This is not the case for all fixed-functionality. Should we start ripping out bilinear/trilinear/anisotropic filtering operations and just let the fragment shader do it? It can, so why waste the hardware? Except for the fact that the texture unit will always be much faster at it than a fragment program.
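For a sense of the cost, here is a rough CPU-side sketch of what a single bilinear fetch involves if you do it “by hand”; the single-channel texture layout is hypothetical and only there to count the work a texture unit does for free:

#include <math.h>

/* Four fetches plus three lerps per sample, for every fragment, versus
 * one filtered fetch from a dedicated texture unit. */
static float fetch(const float *tex, int w, int h, int x, int y)
{
    if (x < 0) x = 0;
    if (x > w - 1) x = w - 1;   /* clamp-to-edge addressing */
    if (y < 0) y = 0;
    if (y > h - 1) y = h - 1;
    return tex[y * w + x];
}

static float bilinear_sample(const float *tex, int w, int h, float u, float v)
{
    float x = u * (float)w - 0.5f;
    float y = v * (float)h - 0.5f;
    int   x0 = (int)floorf(x);
    int   y0 = (int)floorf(y);
    float fx = x - (float)x0;
    float fy = y - (float)y0;

    float t00 = fetch(tex, w, h, x0,     y0);
    float t10 = fetch(tex, w, h, x0 + 1, y0);
    float t01 = fetch(tex, w, h, x0,     y0 + 1);
    float t11 = fetch(tex, w, h, x0 + 1, y0 + 1);

    float top    = t00 + fx * (t10 - t00);
    float bottom = t01 + fx * (t11 - t01);
    return top + fy * (bottom - top);
}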
The justification for fragment programs is pretty obvious; a programmable model is needed to support the flexibility that modern, advanced graphics demands. Virtually any advanced graphics application will need fragment programs.
Most of these applications will be just fine with the regular alpha blending ops.
I’m just asking the questions that hardware developers ask. No more, no less. It is precisely these questions that led hardware vendors to tell Microsoft to remove the feature from DX Next.
[This message has been edited by Korval (edited 12-08-2003).]