rendering larger images

Hi!

I have a question: I wrote an ImageViewer that contains a Qt QGLWidget, and in this QGLWidget I load an image. I found out that when rendering a scene in OpenGL, the resolution is limited to the workstation's screen size. But I need to show images larger than my screen in my ImageViewer. What can I do to handle this? I know it has something to do with tiling, but I don't really know how it works.

thanks for your suggestions!
mfg
liz

Originally posted by Liz:
I know, it has something to do with tiling but I don’t really know how it works.

Tiling a texture means that you split it up into smaller pieces and render them individually. Depending on how large the texture is and how much texture memory you have available, you may or may not need to tile the texture. To draw it, just set up an orthographic projection with the same size as the texture and draw a textured quad with that size. Use glTranslate to pan. If you tile the texture, beware of texture seams.
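A minimal sketch of that setup, assuming a finished texture object tex holding an imgW x imgH image and hypothetical panX/panY offsets driven by the UI (the ortho here is viewport-sized rather than texture-sized, so that glTranslate panning makes sense for an image larger than the screen):

// viewport-sized orthographic projection
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, viewW, 0.0, viewH, -1.0, 1.0);

// pan by shifting the modelview matrix
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(-panX, -panY, 0.0f);

// one image-sized textured quad
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f,        0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f((float)imgW, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f((float)imgW, (float)imgH);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f,        (float)imgH);
glEnd();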

thanks for your answer,
maybe you can tell me more precisely what I have to do.
If I want to tile my image, what are the usual steps? I don't really know which functions to use - can I do this with gluBuild2DMipmaps, or is that the wrong way? I'm quite new to OpenGL and I need some help.

thanks in advance
mfg
liz

This might be what you’re looking for…
http://www.mesa3d.org/brianp/TR.html

not really, are there any other suggestions?

mfg
liz

Originally posted by Liz:
not really, are there any other suggestions?

The library suggested by nutball is probably exactly what you need (I can sure use it myself. Thanks, nutball!). But I'll explain it anyway. Let's say you have a texture of size 2048x2048 and want to tile it into 128x128 tiles. That means 16x16 separate textures. What you would do is:

  • allocate 256 texture objects using glGenTextures
  • create each texture with glTexImage2D using data extracted from the originally loaded texture
  • draw each tile texture by applying it to a separate quad

Mipmaps shouldn’t be necessary unless you’re planning on viewing your loaded texture under heavy minification.
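A rough sketch of those three steps, assuming the whole 2048x2048 image is already in memory as tightly packed RGB bytes behind a hypothetical image pointer; the GL_UNPACK_* pixel-store settings let glTexImage2D read each 128x128 block straight out of the big image:

#define IMG_SIZE  2048
#define TILE_SIZE 128
#define TILES     (IMG_SIZE / TILE_SIZE)   // 16 per side, 256 total

GLuint tiles[TILES * TILES];
glGenTextures(TILES * TILES, tiles);       // step 1: 256 texture objects

glPixelStorei(GL_UNPACK_ROW_LENGTH, IMG_SIZE);
for (int ty = 0; ty < TILES; ++ty) {
    for (int tx = 0; tx < TILES; ++tx) {
        // step 2: create each texture from a sub-block of the image
        glPixelStorei(GL_UNPACK_SKIP_PIXELS, tx * TILE_SIZE);
        glPixelStorei(GL_UNPACK_SKIP_ROWS,   ty * TILE_SIZE);
        glBindTexture(GL_TEXTURE_2D, tiles[ty * TILES + tx]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, TILE_SIZE, TILE_SIZE,
                     0, GL_RGB, GL_UNSIGNED_BYTE, image);
    }
}
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);    // restore unpack defaults
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);

// step 3: draw each tile as a quad at its place in image space
for (int ty = 0; ty < TILES; ++ty) {
    for (int tx = 0; tx < TILES; ++tx) {
        float x = (float)(tx * TILE_SIZE), y = (float)(ty * TILE_SIZE);
        glBindTexture(GL_TEXTURE_2D, tiles[ty * TILES + tx]);
        glBegin(GL_QUADS);
            glTexCoord2f(0.0f, 0.0f); glVertex2f(x,             y);
            glTexCoord2f(1.0f, 0.0f); glVertex2f(x + TILE_SIZE, y);
            glTexCoord2f(1.0f, 1.0f); glVertex2f(x + TILE_SIZE, y + TILE_SIZE);
            glTexCoord2f(0.0f, 1.0f); glVertex2f(x,             y + TILE_SIZE);
        glEnd();
    }
}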

what does mfg mean?

I guess liz = German, so mfg probably means mit freundlichen Grüßen.


oops, sorry, it's just habit… but you are right, it means mit freundlichen Grüßen…

cu (better?)
liz

It's all Greek to me.

oh, I forgot: translated into English it means "yours sincerely".

But I have another question: what can I do if I have an image whose width and height are not powers of 2? Is there any way to create a texture from such an image?

cu
liz

You could use gluScaleImage() or some other image scaling function to generate an image with width and height being powers of two.
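A sketch of that route, assuming pixels holds a w x h RGB image and nextPow2() is a hypothetical helper (needs <GL/glu.h> and <stdlib.h>):

static int nextPow2(int v) { int p = 1; while (p < v) p <<= 1; return p; }

int w2 = nextPow2(w), h2 = nextPow2(h);
GLubyte *scaled = malloc(w2 * h2 * 3);           // room for RGB output
gluScaleImage(GL_RGB, w,  h,  GL_UNSIGNED_BYTE, pixels,
                      w2, h2, GL_UNSIGNED_BYTE, scaled);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w2, h2,
             0, GL_RGB, GL_UNSIGNED_BYTE, scaled);
free(scaled);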

Or use the NV_texture_rectangle extension ( http://oss.sgi.com/projects/ogl-sample/registry/NV/texture_rectangle.txt ).

In the near future you can also use the ARB_texture_non_power_of_two extension ( http://oss.sgi.com/projects/ogl-sample/registry/ARB/texture_non_power_of_two.txt ).
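For completeness, a sketch of the rectangle-texture route: with GL_TEXTURE_RECTANGLE_NV the width and height can be anything, and texture coordinates are given in pixels instead of 0..1. Check the GL_EXTENSIONS string for NV_texture_rectangle first; tex, w, h and pixels are hypothetical names:

glEnable(GL_TEXTURE_RECTANGLE_NV);
glBindTexture(GL_TEXTURE_RECTANGLE_NV, tex);
glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_RGB, w, h,   // any size
             0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f,     0.0f);     glVertex2f(0.0f,     0.0f);
    glTexCoord2f((float)w, 0.0f);     glVertex2f((float)w, 0.0f);
    glTexCoord2f((float)w, (float)h); glVertex2f((float)w, (float)h);
    glTexCoord2f(0.0f,     (float)h); glVertex2f(0.0f,     (float)h);
glEnd();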

Mvg

If you don't want to stretch/scale your image, you can create a texture that is the next power of 2 up and place your image in it. When applying it to a quad, just adjust your texcoords to reflect the actual size.

This can potentially waste a lot of texture space though.
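A sketch of that padding approach, reusing the hypothetical nextPow2() helper from the gluScaleImage sketch above:

int texW = nextPow2(w), texH = nextPow2(h);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texW, texH,       // padded size,
             0, GL_RGB, GL_UNSIGNED_BYTE, NULL);         // no data yet
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,            // real image in
                GL_RGB, GL_UNSIGNED_BYTE, pixels);       // the corner

float maxS = (float)w / (float)texW;   // fraction of the texture
float maxT = (float)h / (float)texH;   // the image actually covers

glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f,     0.0f);
    glTexCoord2f(maxS, 0.0f); glVertex2f((float)w, 0.0f);
    glTexCoord2f(maxS, maxT); glVertex2f((float)w, (float)h);
    glTexCoord2f(0.0f, maxT); glVertex2f(0.0f,     (float)h);
glEnd();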

My problem is that I have to tile the image, because there is not enough texture memory for it. But I fear that I cannot tile images whose sizes are not powers of 2. Am I right?

cu
liz

I don't get it. The two people above gave a solution, and you are still asking the same question.

Use texture compression if you have to http://oss.sgi.com/projects/ogl-sample/registry/ARB/texture_compression.txt
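If the extension is present, using it is just a matter of asking for a compressed internal format and letting the driver do the compressing; a minimal sketch:

// check GL_EXTENSIONS for ARB_texture_compression first
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_ARB, w, h,
             0, GL_RGB, GL_UNSIGNED_BYTE, pixels);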

Personally I use the texture matrix to scale the texture coordinates correctly for a non-power-of-2 texture sub-image. That way you just store a matrix as part of your texture object in your app - keeps things nice and tidy.
To do it with raw OpenGL calls (not recommended; you should use your own matrix class and only upload it if it's non-identity), you'd do something like this:
// at texture creation time…
// (cast to float, otherwise integer division truncates to 0)
float scaleX = (float)imageWidth  / (float)textureWidth;
float scaleY = (float)imageHeight / (float)textureHeight;

// at usage time…
glMatrixMode(GL_TEXTURE);
glPushMatrix();
glScalef(scaleX, scaleY, 1.0f);
// bind your texture and draw your geometry,
// with total disregard for whether the image
// in your texture is a power of 2 or not, so
// texcoord 1,1 will still be the top-right
// of your image, for example (although not
// the top-right of the actual texture area)
glPopMatrix();
glMatrixMode(GL_MODELVIEW);   // leave the matrix mode as we found it

Liz,

I'm guessing that you're still asking because the question you really mean is:

"Given a non-power-of-two-dimensioned image larger than the size of the screen (say, a 10000 x 10000 image on a 1600 x 1200 screen), how do you 'best' display the image under OpenGL?"

In which case the NV_texture_rectangle extension would technically work, but it would be incredibly slow trying to draw, in my example above, a roughly 286 MB texture (at 24-bit color).

What I've done in the past is divide the image into tiles of some convenient size, by which I mean of power-of-two dimensions for OpenGL purposes, as well as fairly small, to reduce memory use and speed up rendering. I usually default to 256 x 256 pixels because my app has to run on an old, crappy system.

So, when the user opens the image, your code has to run through it and divide it into 256 x 256 pixel blocks. One point to consider when you write that code: if the image is of an odd (as in non-even) size, don't create any 1 x 1 blocks. Instead, if you have, say, a 1555 x 1536 image, your array of tiles should look like:

[256][256][256][256][256][256][16][2][1]
[256][256][256][256][256][256][16][2][1]
[256][256][256][256][256][256][16][2][1]
[256][256][256][256][256][256][16][2][1]
[256][256][256][256][256][256][16][2][1]
[256][256][256][256][256][256][16][2][1]

where each [number] is the width in pixels, and all heights are 256.
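A sketch of how those column widths could be computed: emit full 256-wide columns, then break the leftover width into its power-of-two components (19 -> 16 + 2 + 1 for the 1555-pixel example; imageWidth is a hypothetical variable):

int widths[32], n = 0;
int rest = imageWidth;
while (rest >= 256) { widths[n++] = 256; rest -= 256; }  // full columns
for (int bit = 128; bit >= 1; bit >>= 1)                 // leftover width,
    if (rest & bit) widths[n++] = bit;                   // binary digits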

Anyway, when I say “make tiles”, I mean “create a texture”. In my example above, you end up with 54 textures in memory. Remember, once you’ve called glTexImage2D() to create a tile, you don’t have to store the image data for the tile yourself - OpenGL does that for you. All you need to store is the coordinates of each tile’s corners.

Incidentally, it would be much better to optimize that last column down to just 2 tiles (a [1 x 1024] tile and a [1 x 512] tile instead of six [1 x 256] tiles). It's probably good to optimize away any tile that ends up consuming less than 100 KB of memory…

Now, when you go to draw, you can decide (based on the user's position and zoom level, for example) how much of the image to display. This tells you which tiles are visible and need to be drawn (don't draw every tile every time!). Run through and draw a bunch of GL_QUADS across the screen, binding the appropriate texture for each quad.
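A sketch of that visibility test, assuming a hypothetical Tile struct holding each tile's texture id and its image-space corners (the coordinates you stored at creation time), with pan offsets given in image pixels:

typedef struct { GLuint tex; float x0, y0, x1, y1; } Tile;

void drawVisible(const Tile *tiles, int count,
                 float panX, float panY, float viewW, float viewH)
{
    for (int i = 0; i < count; ++i) {
        const Tile *t = &tiles[i];
        if (t->x1 < panX || t->x0 > panX + viewW ||
            t->y1 < panY || t->y0 > panY + viewH)
            continue;                          // off-screen: skip it
        glBindTexture(GL_TEXTURE_2D, t->tex);
        glBegin(GL_QUADS);
            glTexCoord2f(0.0f, 0.0f); glVertex2f(t->x0, t->y0);
            glTexCoord2f(1.0f, 0.0f); glVertex2f(t->x1, t->y0);
            glTexCoord2f(1.0f, 1.0f); glVertex2f(t->x1, t->y1);
            glTexCoord2f(0.0f, 1.0f); glVertex2f(t->x0, t->y1);
        glEnd();
    }
}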

This gives you a simple “Pan around the image” type application, but “Zooming” out to view the whole image will still be quite slow because you’re still moving all 286MB of image data.

The solution is to add a step to the tile-building process. Once you've created all the tiles, take the original 286 MB image and use your favorite high-quality image-shrinking algorithm to cut its dimensions in half, so you end up with a 777 x 768 pixel image. Now go create a whole new set of tiles for this new, lower-resolution version of the image:

[256][256][256][8][1]
[256][256][256][8][1]
[256][256][256][8][1]

Now, when the user zooms out, you draw this set of tiles instead of the first set, and you’re moving only 1.7MB if you draw every tile. You can also continue lowering the resolution of the image until it fits into a single tile if you want to…
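A sketch of picking which tile set to draw, assuming zoom = 1.0 means one image pixel per screen pixel and each level halves the resolution of the one before it (numLevels is hypothetical):

int pickLevel(float zoom, int numLevels)
{
    int level = 0;
    while (level < numLevels - 1 && zoom <= 0.5f) {
        zoom  *= 2.0f;     // half-res tiles look the same at half zoom
        level += 1;
    }
    return level;
}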

This is very similar to mipmapped textures (in fact, you're basically doing your own mipmapping). You could have just mipmapped the original set of tiles, but I find that doing so leads to little grid lines, because the minification filter doesn't know anything about the adjacent tiles.

Anyway, this probably seems pretty convoluted to most of you, but simple image-viewing programs have gotten annoyingly complicated lately, what with digital cameras spitting out 100 MB+ images. Ever try to use the standard Windows API to BitBlt such an image? Ouch. Anyway, I suspect all the "real" imaging applications are using a fancier version of this, because when you zoom or pan quickly, the image is drawn one little (probably power-of-two-sized) square tile at a time.

Well, if this doesn’t help Liz, I hope it at least helps someone somewhere a bit! I don’t get to read the forums often, but next time I’m here I’ll try to remember to see if you had any questions…

Regards,
-Chris Bond