Simplify

Delete fixed-pipeline support and kill ALL unneeded render states. Just make a few objects:

  1. Buffers -> Vertex, index buffers
  2. Textures -> simple textures, cubemaps, shadowmaps
  3. Shaders -> vertex/fragment shaders
  4. Essential render states -> raster mode, culling, antialiasing, z/stencil/alpha tests, scissor test.

Forget automatic texture coordinate generation, fixed pipeline transformations,
display lists, glVertex …
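
Something like the following is one purely hypothetical way to picture the minimal object set being proposed here; none of these types or fields come from OpenGL itself, they just restate the list above as code:

```c
/* Hypothetical sketch only -- a restatement of the wish-list above,
   not an actual or proposed OpenGL interface. */
typedef struct Buffer  Buffer;    /* vertex or index data                  */
typedef struct Texture Texture;   /* plain texture, cubemap, or shadow map */
typedef struct Shader  Shader;    /* vertex or fragment shader             */

typedef struct RenderState {
    int cull_mode;                /* none / front / back         */
    int raster_mode;              /* filled / wireframe / points */
    int multisample;              /* antialiasing on or off      */
    int depth_test, stencil_test, alpha_test, scissor_test;
} RenderState;
```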

In my opinion, there are like 200000 ways to do THE SAME THING. That can’t be. Kill the FP asap, along with its ridiculous extensions (who is still using the FP extension for reflection environment mapping when you can use the programmable pipeline instead… it makes no sense…).


Display lists are actually very useful (fast), especially in conjunction with vertex programs…

Display lists are actually very useful (fast), especially in conjunction with vertex programs…

To an extent, yes, but I wouldn’t be upset to see them go (on the assumption that GL 2.0 is a remake of the API). The original use for them was for texture objects. Once the texture object extension came about, they were useful for sending vertex data. But now we have VBOs. There isn’t much real use for display lists anymore.
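
For context, the VBO path being referred to looks roughly like this (a sketch using the core GL 1.5 entry points; verts and vertexCount are placeholder names):

```c
/* Upload static geometry once... */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

/* ...then draw from it each frame; no display list needed. */
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void *)0);  /* offset into the bound VBO */
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
```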

Korval,

Would you use display lists to store a set of OpenGL state in?

Barthold

Would you use display lists to store a set of OpenGL state in?

No. I have no idea how that would perform. It might be a good idea for some cards, but a bad one for others.

Let them go?

I thought it was a pretty smart feature. You can put almost anything in there.

I’m not sure why it’s called a display list. It should be called a command buffer. You create and execute command buffers.

I have to agree.

OpenGL is starting to get cluttered with an explosion in the number of (redundant) ways to perform tasks.

I’d love to see this sort of refinement in GL 2.0.

Anything useful but non-essential could be moved to GLU. Speaking of GLU, it’s starting to feel dated…

The only thing that I think should be removed from OpenGL is stippling.

Originally posted by barthold:
Korval,

Would you use display lists to store a set of OpenGL state in?

Barthold
That’s all I ever use display lists for, and I’d hate to see that usage model go away.

The only thing that I think should be removed from OpenGL is stippling.

Some (most?) CAD apps use stippling.

If you have alpha, why do you need stippling? What DECENT CAD graphics card doesn’t support alpha?

If you have alpha, why do you need stippling?

Non-additive z-buffered stippled transparency works with any drawing order.

It is not very pleasant to the eye, though.
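
For readers unfamiliar with the technique: this is "screen door" transparency, which in plain GL looks roughly like the sketch below (the checkerboard mask and drawTransparentObjects are illustrative placeholders):

```c
/* 50% checkerboard stipple: every other pixel is masked out, giving
   order-independent, z-buffered pseudo-transparency with no blending. */
GLubyte halftone[128];                          /* 32x32 bits = 128 bytes */
int i;
for (i = 0; i < 128; ++i)
    halftone[i] = ((i / 4) % 2) ? 0x55 : 0xAA;  /* alternate rows -> checkerboard */

glEnable(GL_POLYGON_STIPPLE);
glPolygonStipple(halftone);
drawTransparentObjects();                       /* hypothetical helper */
glDisable(GL_POLYGON_STIPPLE);
```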

Non-additive 1-bit alpha does too.


Originally posted by al_bob:
Some (most?) CAD apps use stippling.
Why can’t they just use texturing and alpha test, like everyone else? That’s functionally equivalent - and probably easier to implement/support, dunno. Stippling is completely redundant and might cause implementation problems due to orthogonality requirements. How does a non-native implementation of stippling interact with texturing? Alpha testing? Blending?

Look it up. Line stipple is a 16x1 1-bit “texture” with a kludgy interface. Polygon stipple is a 32x32 1-bit “texture” with yet another kludgy interface that uses fragment position in window space as its “texture coordinates”. Ugh.

First of all, I use stippling in some of my software, which I’m developing for my dissertation, to draw dotted and dashed lines on a graph. Line stippling is NOT just the same as 1-D texturing; in texturing, you have to specify the tex coordinates, and there is no suitable tex coord generation mode for stippling. To emulate line stippling, you would have to add up the length of all line segments you had drawn and use that sum to calculate the starting texture coordinate, which would put more work on the programmer, as opposed to the driver. I would be sad to see stippling go.
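
For what it’s worth, the emulation described above might look something like the following sketch (a 16-texel 1D alpha texture plus alpha test, with the accumulated line length used as the texture coordinate; the function and pattern are illustrative only):

```c
#include <GL/gl.h>
#include <math.h>

/* Draw a dashed 2D line strip by hand -- the bookkeeping real line
   stipple would otherwise do in the driver. */
static void draw_dashed_strip(const float *xy, int count, float factor)
{
    GLubyte pattern[16];
    GLuint tex;
    float s = 0.0f;
    int i;

    for (i = 0; i < 16; ++i)        /* 8 opaque texels, 8 transparent: a dash */
        pattern[i] = (i < 8) ? 255 : 0;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_1D, tex);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_ALPHA, 16, 0,
                 GL_ALPHA, GL_UNSIGNED_BYTE, pattern);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_REPEAT);

    glEnable(GL_TEXTURE_1D);
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.5f);

    glBegin(GL_LINE_STRIP);
    for (i = 0; i < count; ++i) {
        if (i > 0) {
            float dx = xy[2 * i]     - xy[2 * (i - 1)];
            float dy = xy[2 * i + 1] - xy[2 * (i - 1) + 1];
            s += sqrtf(dx * dx + dy * dy);   /* running length of the strip */
        }
        /* Note: the spec counts window-space fragments; measuring length
           here is only an approximation of real line stipple. */
        glTexCoord1f(s / (16.0f * factor));
        glVertex2f(xy[2 * i], xy[2 * i + 1]);
    }
    glEnd();

    glDisable(GL_ALPHA_TEST);
    glDisable(GL_TEXTURE_1D);
    glDeleteTextures(1, &tex);
}
```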

Why are so many people obsessed with removing GL features that they don’t use? So you switched over to using vertex and fragment programs, why does that mean that someone else can’t use the fixed function pipeline if they prefer it? Not everyone uses GL for cutting-edge 3d games. Removing the fixed function pipeline, display lists, immediate mode, etc. wouldn’t benefit you at all, but it would certainly hurt a large portion of users. I’m in favor of fixing little idiosyncrasies (like column-major matrices), but I don’t think features should be removed simply because some people think they’re not advanced enough.

I imagine a lot of people here started learning the API by writing a program that drew a spinning cube or something similar with glVertex* commands. Then you made it a little more advanced by adding lighting, texture mapping, etc. Imagine if, for your very first program, you had to write vertex and fragment shaders just to do these simple tasks. Seems kind of silly, doesn’t it?

I would be sad to see stippling go.

So you would have to do some almost trivial quantity of work to build stippling out of textures. That’s hardly an onerous task. It’s not like you’re being asked to…

Why are so many people obsessed with removing GL features that they don’t use?

In this case, it isn’t just a feature that we don’t use. It’s a feature that makes the spec more obfuscated and complex (interactions with stippling have to be dealt with on a per-fragment basis), and one that can easily be implemented by the user.

Removing the fixed function pipeline, display lists, immediate mode, etc. wouldn’t benefit you at all, but it would certainly hurt a large portion of users.

I think you need a light-weight fixed-function pipeline. No complicated lighting (just diffuse, maybe specular, and no attenuation). No texture coordinate generation. Maybe even no multi-texturing without the direct use of fragment programs. In that case, it would be purely for basic prototyping.

I don’t care much for display lists, and I don’t think they are particularly useful in terms of performance coding (their only real purpose) compared to VBOs. So I have no problem seeing them go.

but I don’t think features should be removed simply because some people think they’re not advanced enough.

If you’re talking about a major revision of the spec, where you are willing to break backwards compatibility, then every feature should be considered for the chopping block. Each feature needs to rationalize its existence, given a particular design paradigm for OpenGL 2.0.

Aaron, if you simply need to draw a cube, use GL 1.x, not GL 2.


Stippling isn’t as simple as “just” using a 1D texture and picking the right texture coordinates:

Polygon stippling is initially disabled. If it is enabled, a rasterized polygon fragment with window coordinates xw and yw is sent to the next stage of the GL if and only if the (xw mod 32)th bit in the (yw mod 32)th row of the stipple pattern is 1 (one).

and

It is incremented after each fragment of a unit width aliased line segment is generated, or after each i fragments of an i width line segment are generated. The i fragments associated with count s are masked out if

pattern bit (s / factor) mod 16

is zero, otherwise these fragments are sent to the frame buffer. Bit zero of pattern is the least significant bit.

I’d be interested in how you are all planning to replace it by 1D textures (or whatever else).
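
For reference, the two tests quoted above boil down to something like this (a sketch; the function names, the row layout of the polygon pattern, and the bit packing are simplified relative to the spec’s pixel-store unpacking rules):

```c
/* Polygon stipple: is the fragment at window coordinates (xw, yw) kept?
   Each of the 32 rows is stored here as one 32-bit word, bit 0 = x 0. */
int polygon_stipple_accept(const unsigned int pattern[32], int xw, int yw)
{
    return (pattern[yw & 31] >> (xw & 31)) & 1;
}

/* Line stipple: are the fragments associated with counter s kept?
   Bit zero of the 16-bit pattern is the least significant bit. */
int line_stipple_accept(unsigned short pattern, int s, int factor)
{
    return (pattern >> ((s / factor) & 15)) & 1;
}
```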

Stippling isn’t as simple as “just” using a 1D texture and picking the right texture coordinates:

True, 100% OpenGL spec-accurate stippling isn’t as simple as a 1D texture. But, a 1D texture is good enough to get the point across.

Originally posted by al_bob:
Stippling isn’t as simple as “just” using a 1D texture and picking the right texture coordinates:

I’d be interested in how you are all planning to replace it by 1D textures (or whatever else).
That’s exactly the point.
Line stippling breaks the GL paradigm of “self-contained primitives”, or whatever you’d like to call it. I can’t imagine how you’d build an implementation that can transform multiple vertices of a stippled line loop in parallel. For every other GL primitive this is perfectly possible and heavily exploited.
It’s f***ing complicated to do unless you have dedicated hardware for stippling and for stippling alone. Very few applications use it, and those typically use it for only a handful of lines per frame (for selection regions, aka “rubber bands”).

So, as an implementor, you have two choices:
1) build dedicated hardware for something that’s needed by 1 percent of all apps, and in those apps, for 1 percent of all primitives;
2) don’t build dedicated hardware for stippling and instead layer it on top of your texturing and alpha test hardware. Yes, this approach will most likely require software transform for stippled lines to get the pattern offset right.

Would anyone notice the difference? I don’t think so. AFAICS there’s not a single application out there that would suffer a noticeable performance loss.
Removing stipple from the core spec eases implementation for the choice #2 implementors.

The choice #1 implementors (if there are any!?) may want to expose it as an extension. That’s the preferred way to expose more outlandish features. Once we have PPPs, stippling will just be another redundant fixed function subset.

Originally posted by Korval:
I don’t care much for display lists, and I don’t think they are particularly useful in terms of performance coding (their only real purpose) compared to VBOs. So I have no problem seeing them go.
I agree regarding geometry data. I disagree regarding state blocks. Display lists are tremendously useful for that purpose.
E.g. if you compile an abstract effect description into actual target state, you may end up with many thousands of cycles of work. If you send this work to the current GL state, that work is lost once you compile another effect. If you instead submit it to a display list, you can quickly reuse it later.
Shader programming extensions recognize this and encapsulate the programs in objects.

Pure state display lists have another interesting property: the contained state can be guaranteed to be error-free and can be stored in the driver in the ‘real’ format (barring only one exception that I’m aware of). CallList (for lists containing only state) can, in the majority of cases, be implemented without error checking, as a simple series of memcpys and the setting of “state dirty” flags.
They are closely related to Push/PopAttrib in this respect.
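
The usage model being described is roughly the following (effectTexture is a placeholder, and the particular state chosen is just an example):

```c
/* Compile a block of state changes once... */
GLuint effectList = glGenLists(1);

glNewList(effectList, GL_COMPILE);   /* record only, don't execute */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, effectTexture);
glEndList();

/* ...then replay it with a single, cheap call whenever the effect is needed. */
glCallList(effectList);
```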

Maybe display lists should have been split between state and geometry from the start? So you’d have geometry lists and state lists.
If we pretend this had been the case all the time, I’d agree to tag geometry lists for removal, seeing how they’re now superseded by STATIC_DRAW VBOs (and VBOs are now a core feature).

If you still want to remove display lists altogether, you’re removing functionality that’s similar in infrastructure to Push/PopAttrib. If one goes, so should the other. In any case, I’d prefer that both stay. And finally, I think GL_COMPILE_AND_EXECUTE can be safely removed. Implementors generally advise against using it, so why would we want to keep it?


if you compile an abstract effect description into actual target state, you may end up with many thousands of cycles of work. If you send this work to the current GL state, that work is lost once you compile another effect. If you instead submit it to a display list, you can quickly reuse it later.

What if the material and effect used for any particular mesh region is constantly changing? The light positions are always in flux. The number of lights changes, so this will require, at the very least, swapping programs. Many of your state parameters are non-constant, so you’re going to have to update them every frame. And even the constant ones will change as the shaders change.

Also, display lists are not known for being particularly fast when dealing with state. Some state works in hardware, and some doesn’t. You don’t know which is which, and setting the wrong ones could lead to a massive performance loss.

In addition, glslang has this somewhat annoying feature of having the program object itself store state information. So, effectively, there’s your display list.
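
As a concrete illustration of that last point, uniform values in glslang are stored per program object, so they survive switching programs (a sketch using the ARB_shader_objects entry points of the time; marbleProgram, woodProgram and "lightPos" are placeholders):

```c
glUseProgramObjectARB(marbleProgram);
glUniform3fARB(glGetUniformLocationARB(marbleProgram, "lightPos"),
               10.0f, 20.0f, 5.0f);     /* stored in the program object */

glUseProgramObjectARB(woodProgram);     /* switch to another effect... */
/* ...draw something else... */

glUseProgramObjectARB(marbleProgram);   /* "lightPos" is still (10, 20, 5) */
```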