bogus example on opengl.org?

I think that the example code at http://www.opengl.org/resources/faq/technical/color.htm is wrong. Consider the case where the loop tries to walk all colors, i.e. when numPrims is 2^(redBits+greenBits+blueBits). In this case it never submits the largest representable value (i.e. 0xffffffff); it only submits the largest color component shifted up to the top of the integer (e.g. 0xff000000). So it never reaches full intensity because, as http://linux.die.net/man/3/glcolor3ui says, “Unsigned integer color components, when specified, are linearly mapped to floating-point values such that the largest representable value maps to 1.0 (full intensity), and 0 maps to 0.0 (zero intensity).”

Each of the shifts is enough to get the color component up to the top of the word: the bits are masked off from the color number (indx) in place and then shifted up as far as they need to go to abut the top of the word. For 24-bit color (8 bits per component) you end up with:

0xRR000000
0xGG000000
0xBB000000
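
To make that concrete, here is a quick stand-alone Java sketch (my own illustration, not the FAQ’s actual code; the class name is just for the example) of what the shift-to-the-top approach produces for the very last color in the walk:

public class FaqShiftSketch {
	public static void main(String[] args) {
		int redBits = 8, greenBits = 8, blueBits = 8;           // assuming 24-bit color
		int indx = (1 << (redBits + greenBits + blueBits)) - 1; // the last color in the walk

		// mask each component out of indx, then shift it to the top of a 32-bit word
		int r = ((indx >>> (greenBits + blueBits)) & 0xff) << (32 - redBits);
		int g = ((indx >>> blueBits) & 0xff) << (32 - greenBits);
		int b = (indx & 0xff) << (32 - blueBits);

		// prints "ff000000 ff000000 ff000000" - never ffffffff, so glColor3ui()
		// never receives the largest representable value
		System.out.printf("%08x %08x %08x%n", r, g, b);
	}
}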

All good so far. It is just that, if the Linux manpage I cite is right (and my experimentation indicates that it sure is), then that isn’t enough. What you need for calling glColor3ui (or any of its similarly named friends) is:

0xRRRRRRRR
0xGGGGGGGG
0xBBBBBBBB
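
If that reading is right, one way to build such values (again my own sketch, not anything from the FAQ; multiplying by 0x01010101 simply replicates a byte into all four byte positions of the word) would be:

public class ReplicateByteSketch {
	public static void main(String[] args) {
		int rr = 0xab;                     // an arbitrary 8-bit red component
		int full = rr * 0x01010101;        // 0xabababab - the byte copied across the whole word
		System.out.printf("%08x%n", full); // prints "abababab"
		// 0xff * 0x01010101 == 0xffffffff, the largest representable value,
		// which is what full intensity needs
	}
}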

In summary, I think that http://www.opengl.org/resources/faq/technical/color.htm and http://linux.die.net/man/3/glcolor3ui cannot both be right.

Do you agree? If so, which one is wrong?

0xRRRRRRRR
0xGGGGGGGG
0xBBBBBBBB
Only the top byte of each value is needed, as each channel is only 8 bits.
So both are true :slight_smile:
The code snippet would be clearer if it used only unsigned bytes rather than unsigned integers.
It could also use the alpha channel, giving 256 times more selectable objects, provided you don’t trip over blending modes.

I understand that you are saying that the call only picks up as many significant bits as it needs from the most-significant-bit downward. So 0xff000000 is really the same as 0xffffffff.

I have not found this to be true in my experiments. The OpenGL implementation I use (http://www.lwjgl.org/) only has glColor3ub for non-float versions of the call. Bytes should be clearer anyway, as you say.

If I switch down to a 16-bit High Color display (16 = 5 red + 6 green + 5 blue) I’ve found that 0xf8 does not give full-intensity red or full-intensity blue, as I first imagined it would from http://www.opengl.org/resources/faq/technical/color.htm. The color intensities are found in roughly equal steps between 0x00 and 0xff (not between 0x00 and 0xf8).
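
Here is the back-of-envelope arithmetic behind that claim (my own sketch; it assumes the usual round-to-nearest conversion when the intensity is written to a 5-bit framebuffer channel):

public class FiveBitQuantise {
	public static void main(String[] args) {
		int maxFiveBit = 31;                                  // largest value a 5-bit channel can hold
		double shifted = 0xf8 / 255.0;                        // the intensity glColor3ub(0xf8) asks for
		double scaled = 0xff / 255.0;                         // the intensity glColor3ub(0xff) asks for
		System.out.println(Math.round(shifted * maxFiveBit)); // 30 - one step short of full intensity
		System.out.println(Math.round(scaled * maxFiveBit));  // 31 - full intensity
	}
}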

I can provide demonstration code in Java that displays a box of all available High Color colors (passing values of 0x00-0xff) and, to the right of it, a box that looks similar but is actually full of duplicate colors (passing 0x00-0xf8), to support my argument. http://linux.die.net/man/3/glcolor3ui talks about the “largest representable value”, by which I take it to mean “the largest representable value of that particular type, e.g. 0xff in this case”, which seems to further support my thoughts.

I think either my OpenGL implementation (http://www.lwjgl.org/) is wrong or http://www.opengl.org/resources/faq/technical/color.htm is wrong or I’ve messed up somewhere :wink:

Here is the code …


import org.lwjgl.LWJGLException;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import org.lwjgl.opengl.GL11;

/**
 * Shows two boxes of colors according to two theories of how glColor3ui()
 * might work. See the two inner classes RenderColourBoxAsPerLinuxManPage and
 * RenderColourBoxAsPerOpenGlExample for more info.
 * 
 * <p>
 * RenderColourBoxAsPerLinuxManPage is the only one that comes out correct and
 * is shown on the left.
 */
public class ColorBox {
	private static final int SCREEN_HEIGHT = 600;
	private static final int SCREEN_WIDTH = 800;

	private void initLwjglDisplay() {
		try {
			Display.setDisplayMode(new DisplayMode(SCREEN_WIDTH, SCREEN_HEIGHT));
			Display.create();
			Display.setVSyncEnabled(true);
		}
		catch (LWJGLException e) {
			throw new RuntimeException(e);
		}
	}

	private void initOpenGL() {
		GL11.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);

		GL11.glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
		GL11.glMatrixMode(GL11.GL_PROJECTION);
		GL11.glLoadIdentity();
		GL11.glOrtho(0, SCREEN_WIDTH, SCREEN_HEIGHT, 0, 1, -1);
		GL11.glMatrixMode(GL11.GL_MODELVIEW);

		GL11.glDisable(GL11.GL_BLEND);
		GL11.glDisable(GL11.GL_DITHER);
		GL11.glDisable(GL11.GL_FOG);
		GL11.glDisable(GL11.GL_LIGHTING);
		GL11.glDisable(GL11.GL_TEXTURE_1D);
		GL11.glDisable(GL11.GL_TEXTURE_2D);
		GL11.glShadeModel(GL11.GL_FLAT);
	}

	/** Returns an int with the lowest {@code x} bits set, e.g. makeMask(5) == 0x1f. */
	private static int makeMask(int x) {
		int a = -1;
		a <<= x;
		return ~a;
	}

	/**
	 * <b>Scales</b> it up within the byte because this <a
	 * href="http://linux.die.net/man/3/glcolor3ui">man page</a> talks about the
	 * "largest representable value", probably meaning
	 * "the largest representable value of that particular type", i.e. 255 here.
	 * 
	 * <p>
	 * Does work. Produces 2^16 unique colors.
	 */
	class RenderColourBoxAsPerLinuxManPage extends Renderer {
		@Override
		public int extract(int value, int bitPosition, int bitWidth) {
			int mask = makeMask(bitWidth);
			value >>>= bitPosition;
			value &= mask;

			int answer = value * 255;
			answer /= mask;
			return answer;
		}
	}

	/**
	 * <b>Shifts</b> it up against the top of the byte, just as this <a
	 * href="http://www.opengl.org/resources/faq/technical/color.htm">opengl
	 * example</a> does.
	 * 
	 * <p>
	 * Does not work. Although a unique color value is sent for each of the 2^16
	 * pixels produced, many are treated as duplicate colors by OpenGL (4993
	 * duplicates, as it turns out).
	 */
	class RenderColourBoxAsPerOpenGlExample extends Renderer {
		@Override
		public int extract(int value, int bitPosition, int bitWidth) {
			int mask = makeMask(bitWidth);
			value >>>= bitPosition;
			value &= mask;

			int answer = value << (8 - bitWidth);
			return answer;
		}
	}

	abstract class Renderer {
		abstract int extract(int value, int bitPosition, int bitWidth);

		void render(int xDisplacement) {
			int redBits = GL11.glGetInteger(GL11.GL_RED_BITS);
			int greenBits = GL11.glGetInteger(GL11.GL_GREEN_BITS);
			int blueBits = GL11.glGetInteger(GL11.GL_BLUE_BITS);

			int colorBits = redBits + greenBits + blueBits;
			if (colorBits != 16) {
				throw new RuntimeException(
						"This demo is only for HighColor/16bit color displays ... change to 16bit color and run again?");
			}
			int numPrims = 1 << colorBits;
			int lengthOfBoxEdge = 1 << (colorBits / 2);
			for (int indx = 0; indx < numPrims; indx++) {
				int r = extract(indx, greenBits + blueBits, redBits);
				int g = extract(indx, blueBits, greenBits);
				int b = extract(indx, 0, blueBits);
				GL11.glColor3ub((byte) r, (byte) g, (byte) b);
				int x = indx % lengthOfBoxEdge + xDisplacement;
				int y = indx / lengthOfBoxEdge + 50;
				GL11.glRectf(x, y, x + 1, y + 1);
			}
		}
	}

	private void render() {
		GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);
		new RenderColourBoxAsPerLinuxManPage().render(50);
		new RenderColourBoxAsPerOpenGlExample().render(SCREEN_WIDTH / 2);
	}

	private void startTheExample() {
		initLwjglDisplay();
		initOpenGL();
		while (true) {
			render();
			Display.update();
			Display.sync(10);
			if (Display.isCloseRequested()) {
				Display.destroy();
				break;
			}
		}
	}

	public static void main(String[] argv) {
		ColorBox cb = new ColorBox();
		cb.startTheExample();
	}
}


Indeed you are right: 0xffffffff as an unsigned int corresponds to something nearer 0x100 than 0xff in unsigned-byte terms, so there is almost an off-by-one error near the max value. So unsigned byte is mandatory.

You probably know this, but Java’s signedness is largely a matter of interpretation, which is why, for example, it has three bit-shift operators.
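
A tiny illustration of that point (plain Java, nothing GL-specific; the class name is just for the example):

public class ShiftSketch {
	public static void main(String[] args) {
		int colour = 0xff000000;                    // negative when read as a signed int
		System.out.printf("%08x%n", colour >> 24);  // ffffffff - arithmetic shift drags the sign bit down
		System.out.printf("%08x%n", colour >>> 24); // 000000ff - logical shift gives the byte we want
	}
}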

I don’t follow. Where is the off-by-one error? What is it that needs to be changed to correct it? Thanks!!

Sorry, I responded before seeing your code. I only meant that the code sample on opengl.org is indeed slightly wrong: when color picking among a lot of objects, two objects can end up with the same id, even in 888 mode.
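
If it helps, here is a rough back-of-envelope simulation of that point for a single 8-bit channel (my own sketch, not anything from the FAQ or the spec; it assumes round-to-nearest when the submitted intensity is converted down to the 8-bit framebuffer value):

public class DuplicateCount888 {
	public static void main(String[] args) {
		// simulate the FAQ's shift-to-the-top approach: the submitted intensity is
		// roughly rr/256, and the framebuffer keeps only 8 bits of it
		java.util.Set<Long> seen = new java.util.HashSet<Long>();
		for (int rr = 0; rr <= 0xff; rr++) {
			double fraction = (double) ((long) rr << 24) / 0xffffffffL;
			seen.add(Math.round(fraction * 255)); // assumed round-to-nearest conversion
		}
		System.out.println(256 - seen.size()); // prints 1 - two values collide
	}
}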

From a cursory glance, your code looks good.

I was floundering myself until I wrote the above code to put my mind to rest.

Scaling to the “largest representable value”
(as http://linux.die.net/man/3/glcolor3ui suggests)
is right.

Shifting up to the top of the value
(as http://www.opengl.org/resources/faq/technical/color.htm does)
is wrong.

Wrong in a way that may pass for right at high bit depths, which hides the problem. Maybe it even counts as an OpenGL gotcha? It is counter-intuitive that you cannot just shift up and have it work correctly in all circumstances. It certainly got me, and it got the writers of http://www.opengl.org/resources/faq/technical/color.htm too.

The docs on that page are wrong too: “Again, use glColor3ui() (or glColor3us(), etc.) to specify the color with your values stored in the most significant bits of each parameter.” At least they are consistent, being wrong in the same way as the example code.

Can something now be done to right this error? I’ve free time and I’m happy to help.