3D textures on iOS

Hi all,

So this is my problem:

I’m writing an app for iOS 5 using OpenGL ES 2.0. The problem is that I need to display a 3D image whose voxels are represented as luminance values. I already have the buffer with this information in it, but I don’t know how exactly to display it.

Since 3D textures are not supported by OpenGL ES 2.0, I’ve tried to display the image using shaders and a kind of volume rendering (using multiple 2D textures to send the 3D image to the fragment shader), without any success (probably because I’m too noobish for the things I’m trying to do :().

So please, if you can think of a possible way to represent it, I’ll be eternally grateful.

Thanks.

I think you have the right idea of using multiple 2D textures and I guess you are converting the luminance values to RGB in the fragment shader? That is commonly done as YCrCb to RGB color space conversion for video frames.

However, the only reason I can see to process more than one 2D texture per pass of the fragment shader would be if you want to interpolate luminance values between neighboring layers in the third dimension. Of course, you must also handle the top-most and bottom-most layers as special cases. Also, you should ask the OpenGL ES driver how many texture units the GPU has, as this varies from platform to platform:

glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &iValue);
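The layer interpolation mentioned above can be sketched on the CPU side as a simple blend between two adjacent slices (a minimal sketch; the function name and buffer layout are my own, not code from the thread — on the GPU the same blend would be done in the fragment shader between two texture samples):

```c
#include <assert.h>
#include <string.h>

/* Blend the luminance at (x, y) between two adjacent 256x256 slices.
   fz in [0, 1] is the fractional position along the third axis:
   0 returns slice0's value, 1 returns slice1's value. */
static float sample_between_slices(const unsigned char *slice0,
                                   const unsigned char *slice1,
                                   int x, int y, float fz)
{
    float a = (float)slice0[y * 256 + x];
    float b = (float)slice1[y * 256 + x];
    return a + (b - a) * fz;   /* linear interpolation along z */
}
```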

Regards, Clay

Thanks for your response, Clay.

I guess you are converting the luminance values to RGB in the fragment shader?

Yep, that’s the idea… I’m trying to use a raycasting algorithm (I can’t paste the link here because this post gets denied otherwise -.-).

However, the only reason I can see to process more than one 2D texture per pass of the fragment shader would be if you want to interpolate luminance values between neighboring layers in the third dimension. Of course, you must also handle the top-most and bottom-most layers as special cases.

Sorry, I don’t see why the top and bottom layers are special cases :S
The reason I have to use multiple 2D textures is that the maximum texture size is 2048x2048, while the 3D image I’m trying to render is 256x256x128… so I need exactly 2 textures (64 slices of 256x256 each) to have the entire image available in the fragment shader.
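One way to pack the 128 slices of 256x256 into two 2048x2048 atlases (an 8x8 grid of slices per atlas) is the following 3D-to-2D coordinate mapping. This is my own sketch of the idea, not the poster’s actual function; the name and layout are illustrative:

```c
#include <assert.h>

/* Map a 3D texture coordinate to one of two 2048x2048 atlases that each
   hold an 8x8 grid of 256x256 slices (64 slices per atlas, 128 total). */
typedef struct { int texIndex; float s, t; } AtlasCoord;

static AtlasCoord map_voxel(float x, float y, float z) /* x, y, z in [0, 1) */
{
    int slice = (int)(z * 128.0f);      /* which of the 128 slices */
    if (slice > 127) slice = 127;
    int local = slice % 64;             /* slot within its atlas */
    AtlasCoord out;
    out.texIndex = slice / 64;          /* 0 or 1: which atlas texture */
    out.s = ((float)(local % 8) + x) / 8.0f;  /* column in the 8x8 grid */
    out.t = ((float)(local / 8) + y) / 8.0f;  /* row in the 8x8 grid */
    return out;
}
```

The same arithmetic, done in the fragment shader, picks the sampler and 2D coordinate for a given 3D position along the ray.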

glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &iValue);

Yep, I’ve done this before. I know I have 8 texture units available.

Recently I’ve noticed something weird happening… At the risk of looking like a noob (in fact, I am) and of bothering you too much, I’d like to ask if you know why this may happen:

I pass only one texture to the fragment shader (so only half of the 3D image) and try to draw the whole texture using just the following:


precision mediump float;
uniform sampler2D sampler0;
varying vec2 vTexCoord; // from the vertex shader; GLSL ES 2.0 has no gl_TexCoord built-in

void main()
{
	gl_FragColor = texture2D(sampler0, vTexCoord);
}

For some reason, when I pass the texture like this:


glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 256, 256, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, getSlice(0)); // getSlice(0) returns a pointer to the buffer holding the 3D image

The result is OK: I can see a slice of the image occupying the whole face of the cube I use as a base (I’d like to add an image here but again, I can’t or the post gets denied…).

However, when I pass the texture like this:


glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 512, 256, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, getSlice(0));

I expected the result to show 2 slices of the image in 3D, one next to the other. However, 4 slices are displayed (undistorted, each occupying 1/4 of the face of the cube).

This makes no sense to me :S How can this be possible? I mean, I only gave space for 2 slices, but 4 are shown :S

Do you have any idea what might be happening here? I’m almost sure it’s a very basic error, but I don’t have a clue…

Thanks for everything and sorry for the looong post.

This is really different from what I thought you were describing. I’m not sure this will work, but I think you would need to specify the texture dimensions as 2048x2048 to glTexImage2D() and then divide the x and y texture coordinates by 4 or 8 so it samples properly. I have not used GL_LUMINANCE, but you also need to be sure how many bytes of data it reads from the texture for each sample, 1 or 4, because that also affects how you need to scale the texture coordinates.

Regards, Clay
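For what it’s worth, the four-slice result is consistent with the buffer being stored as consecutive 256-wide slices: uploading it with a width of 512 makes each texture row consume 512 contiguous bytes, i.e. two consecutive rows of one slice, so the left half of the texture ends up with the even rows and the right half with the odd rows of each slice. The index arithmetic can be checked directly (this assumes the slice-major layout described earlier; it is my reading, not something stated in the thread):

```c
#include <assert.h>

/* Where does texel (x, r) of a 512x256 GL_LUMINANCE upload come from,
   if the source buffer is a sequence of 256x256 slices stored back to
   back? Byte offset r*512 + x decomposes into slice / row / column. */
typedef struct { int slice, row, col; } SourceTexel;

static SourceTexel source_of(int x, int r)
{
    int offset = r * 512 + x;
    SourceTexel s;
    s.slice = offset / (256 * 256);
    s.row   = (offset % (256 * 256)) / 256;
    s.col   = offset % 256;
    return s;
}
```

So texture row r shows data rows 2r (left half) and 2r+1 (right half) of a slice: each slice appears twice side by side at half vertical resolution, and with two slices’ worth of data that gives four quarter-size images instead of two side-by-side slices.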

Yes, I have a function to translate 3D texture coordinates to 2D and it seems to work fine…

I haven’t found a solution yet, but thank you for trying :)
