Color Lookup Table questions

Hi Everyone,

I am new to fragment shaders. I am trying to write a shader to apply a large LUT to an image. In fact, I want to apply 2 LUTs to the image: I apply the first LUT, do some operations, then apply the second LUT.

Problem 1:
What is the best way to apply a 65536-entry (16-bit) 1D LUT (stored in a texture) to an image? The complication is that the maximum texture width is often smaller than 65536 (i.e., a texture 8192 texels wide has to be 8 rows high to hold a 65536-entry 1D lookup table). I have worked out a way to use cascading “if” statements to offset the pixel value into the texture and perform the lookup. This doesn’t seem to be very efficient. Is there a better way?

Problem 2:
In my shader, I need to perform 2 different LUT operations at different points in my color processing chain. To be efficient, I created a single GL_TEXTURE_RECTANGLE_EXT texture, 8192 x 16 in size. Both LUTs are 8192 x 8, so I use glTexSubImage2D to stack them one above the other. I then use the method described above to do the lookups.

Using an ATI graphics card, my 2 lookups work perfectly. On an nVidia card, the first lookup, which uses coords (0,0) to (8192,8), works perfectly, but the second lookup fails and returns garbage. Is there some limitation on nVidia that I should be aware of? It almost seems like the glTexSubImage2D is failing. But the weird part is that if I comment out the first lookup, the second one works.

I am stumped. (I wish there was a GLSL debugger)

bob.

This doesn’t seem to be very efficient. Is there a better way?

Use modulus arithmetic.

If “lValue” is a floating point value representing the index that you need to look up:


vec2 texCoord = vec2(mod(lValue, 8192.0), floor(lValue / 8192.0));

That will be faster than if statements.
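
For what it’s worth, here is a rough host-side sketch (untested, names are just placeholders) of how the table can be packed so that the mod/floor mapping above falls straight out of the upload: entry i of the 1D table simply becomes texel (i % 8192, i / 8192) of an 8192 x 8 rectangle texture.


#include <OpenGL/gl.h>
#include <OpenGL/glext.h>   /* Mac OpenGL headers assumed */

#define LUT_SIZE   65536
#define LUT_WIDTH  8192
#define LUT_HEIGHT (LUT_SIZE / LUT_WIDTH)   /* 8 rows */

/* lutData points at the 65536 16-bit entries of the 1D table. */
void uploadLut(GLuint lutTexture, const GLushort *lutData)
{
	glBindTexture(GL_TEXTURE_RECTANGLE_EXT, lutTexture);
	glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

	/* One upload is enough: each row of the texture is just the next
	   8192-entry slice of the 1D table, so no repacking is needed.
	   A sized internal format (GL_LUMINANCE16 here) keeps the full
	   16-bit precision; the shader still reads the value from .r. */
	glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_LUMINANCE16,
	             LUT_WIDTH, LUT_HEIGHT, 0,
	             GL_LUMINANCE, GL_UNSIGNED_SHORT, lutData);

	glBindTexture(GL_TEXTURE_RECTANGLE_EXT, 0);
}

With the data laid out that way, the shader lookup is just the mod/floor line above; no branching at all.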

As for the second one, we would need to see code to know what’s happening.

Here is the Fragment Shader…


#extension GL_ARB_texture_rectangle : enable

uniform sampler2DRect ImageTexture;
uniform sampler2DRect LUT;

vec4 applyLUT(in vec4 inPixel, in float lutOffset)
{
	vec4 color;
	// Scale the 0..1 input up to the 0..65535 index range, then compute the
	// column (mod) and row (floor) per channel, as above.
	// The LUT texture is 8192 texels wide, 8 rows per LUT.
	vec3 p = inPixel.rgb * 65535.0;
	color.r = texture2DRect(LUT, vec2(mod(p.r, 8192.0), floor(p.r / 8192.0) + lutOffset)).r;
	color.g = texture2DRect(LUT, vec2(mod(p.g, 8192.0), floor(p.g / 8192.0) + lutOffset)).r;
	color.b = texture2DRect(LUT, vec2(mod(p.b, 8192.0), floor(p.b / 8192.0) + lutOffset)).r;
	color.a = inPixel.a;
	return color;
}

void main()
{
	vec4 inputColor = texture2DRect(ImageTexture, gl_TexCoord[0].st);

	// Apply first LUT (rows 0-7)
	inputColor = applyLUT(inputColor, 0.0);

	// Do Something Here..
	inputColor = inputColor * 0.9;

	// Apply second LUT (rows 8-15)
	inputColor = applyLUT(inputColor, 8.0);

	gl_FragColor = inputColor;
}

I am using the following code to create and populate the texture…


glEnable(GL_TEXTURE_RECTANGLE_EXT);
glGenTextures(1, &_lutID[0]);
				
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, _lutID[0]);
				
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
				
GLsizei combinedHeight = lutHeight * 2;  // Two LUTs
glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_RED, lutWidth, combinedHeight, 0, GL_RED, GL_UNSIGNED_SHORT, NULL);			
glBindTexture(GL_TEXTURE_RECTANGLE_EXT,  0);


glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, _lutID[0]);
glTexSubImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, 0, lutHeight, lutWidth, lutHeight, GL_RED, GL_UNSIGNED_SHORT,  lut->data);
glTexSubImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, 0, 0, lutWidth, lutHeight, GL_RED, GL_UNSIGNED_SHORT, lut2->data);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT,  0);

This code works just fine with my ATI Radeon 5870. The NVidia GT 330M on my laptop doesn’t work.
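
Since it almost seems like glTexSubImage2D itself is failing on the NVidia, my next step is to wrap each upload in a glGetError() check. Just a rough sketch (checkGLError is only a local helper of mine, not part of GL):


#include <stdio.h>
#include <OpenGL/gl.h>
#include <OpenGL/glext.h>

static void checkGLError(const char *where)
{
	GLenum err = glGetError();
	if (err != GL_NO_ERROR)
		fprintf(stderr, "GL error 0x%04X after %s\n", err, where);
}

static void uploadBothLuts(GLuint lutTex, GLsizei lutWidth, GLsizei lutHeight,
                           const GLushort *lutData, const GLushort *lut2Data)
{
	glActiveTexture(GL_TEXTURE1);
	glBindTexture(GL_TEXTURE_RECTANGLE_EXT, lutTex);

	glTexSubImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, 0, lutHeight,
	                lutWidth, lutHeight, GL_RED, GL_UNSIGNED_SHORT, lutData);
	checkGLError("first glTexSubImage2D");

	glTexSubImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, 0, 0,
	                lutWidth, lutHeight, GL_RED, GL_UNSIGNED_SHORT, lut2Data);
	checkGLError("second glTexSubImage2D");

	glBindTexture(GL_TEXTURE_RECTANGLE_EXT, 0);
}

If either call reports an error, the problem is the upload; if both come back clean, the data is there and the problem is on the sampling side.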

A quick note…
The applyLUT function using the “if” statements is the one that had the garbage problem; I don’t have the modulus version working yet. As mentioned, it works on my ATI, but not on the NVidia.

bob.

Are your laptop drivers up-to-date?

Yup…

Well, I am officially stumped.
I tried several combinations of loading these textures. It appears that, for whatever reason, the NVidia sampler2DRect doesn’t like being referenced twice on the same texture unit.

If I do something like: (In my Fragment shader)


	vec4 inputColor = texture2DRect(ImageTexture, gl_TexCoord[0].st);
	vec4 inPixel = inputColor * 65535.0;
	float lutOffset = 0.0;

	float index = floor(inPixel.r / 4096.0) + lutOffset;
	float offset = mod(inPixel.r, 4096.0);
	inPixel.r = texture2DRect(LUT, vec2(offset, index)).r;

	lutOffset = 16.0;
	index = floor(inPixel.r / 4096.0) + lutOffset;
	offset = mod(inPixel.r, 4096.0);
	inPixel.r = texture2DRect(LUT, vec2(offset, index)).r;

This fails on the nVidia. The second texture2DRect call comes back with garbage, even though I know for a fact that there is LUT data there. If I reverse the order of the two lookups, the “offset 16.0” LUT is found, but the “offset 0.0” one comes back as garbage.

Strangely, this works perfectly with the ATI drivers on the Macintosh (10.6.8).
Do the NVidia Mac drivers not allow you to sample twice??

bob.

It could just be a driver bug.
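
If you want to narrow it down (or work around it), it might be worth splitting the two LUTs into two separate rectangle textures on two different texture units, so that each sampler2DRect is only sampled once. A rough sketch of the setup; the second sampler name (“LUT2”) and the unit numbers are my assumptions, not from your code:


#include <OpenGL/gl.h>
#include <OpenGL/glext.h>

/* Bind each LUT to its own texture unit and point a separate sampler
   uniform at each one.  "LUT2" is an assumed second uniform name. */
static void bindSeparateLuts(GLuint program, GLuint lutTex, GLuint lut2Tex)
{
	glActiveTexture(GL_TEXTURE1);
	glBindTexture(GL_TEXTURE_RECTANGLE_EXT, lutTex);   /* first LUT  */

	glActiveTexture(GL_TEXTURE2);
	glBindTexture(GL_TEXTURE_RECTANGLE_EXT, lut2Tex);  /* second LUT */

	glUseProgram(program);
	glUniform1i(glGetUniformLocation(program, "LUT"), 1);
	glUniform1i(glGetUniformLocation(program, "LUT2"), 2);
}

In the shader, the second lookup would then read from LUT2 with a row offset of 0.0. If that version behaves on the NVidia, it points even more strongly at a driver issue with repeated lookups from a single sampler.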
