Problem with "picking"

Hi there, I’m new to this forum and a newbie in OpenGL. Currently I’m developing an app in OpenGL ES on the Android platform.
Now I’ll explain my problem… I have a 3D scene that changes dynamically with the phone’s sensors. The sensors produce a modelview matrix, which I load into OpenGL with glLoadMatrix() every frame, so the scene moves with the sensors.

Now I need to detect which object in the 3D scene has been touched by the user. I know this is called “picking”. Someone suggested this approach to me:
“unproject from the (x,y) screen coordinates to a ray in model
space, and work out which object(s) intersect that ray”

First question: What does “model space” mean?

Second question:
I used unproject with screen coordinates and got world coordinates back, but I don’t understand what they represent, or rather what they mean. :expressionless: Are they just the result of the viewing transformation (before the model transformation)? Is that correct?
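(For reference, what unproject returns depends entirely on which matrix you hand it, which is the source of this confusion. Below is an illustrative NumPy sketch of the math gluUnProject performs, not the actual GLU source; the function name and column-vector convention are assumptions. If the matrix you pass is the full modelview, the result is in object space; if it is only the view matrix, the result is in world space.)

```python
import numpy as np

def unproject(win_x, win_y, win_z, modelview, projection, viewport):
    """Sketch of gluUnProject's math: map window coordinates back
    through the inverse of projection * modelview. The space of the
    result is whatever space `modelview` transforms FROM."""
    vx, vy, vw, vh = viewport
    # Window coordinates -> normalized device coordinates in [-1, 1]
    ndc = np.array([
        (win_x - vx) / vw * 2.0 - 1.0,
        (win_y - vy) / vh * 2.0 - 1.0,
        win_z * 2.0 - 1.0,
        1.0,
    ])
    inv = np.linalg.inv(projection @ modelview)
    h = inv @ ndc
    return h[:3] / h[3]  # perspective divide

# With identity matrices, the center of a 2x2 viewport at depth 0.5
# maps back to the origin.
p = unproject(1, 1, 0.5, np.eye(4), np.eye(4), (0, 0, 2, 2))
```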

Third question:
How should I use these coordinates to reach my goal? I don’t know the objects’ coordinates at any given moment, since the modelview is always changing. I only know the vertices in each object’s local coordinates, not the final positions after all the transformations that place them in the scene. I hope what I’m trying to say is clear :slight_smile:

I hope someone could help me.

Thanks in advance.

P.S. Sorry for my English, it’s not very good :slight_smile:

  1. With color picking, you don’t need to worry about coordinate transformations.

  2. Though using unproject intuitively seems like it should be the way to pick an object, it is generally one of the worst possible ways to go. Color picking is generally much better:
    http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Main=55041&Number=284748

  3. With color picking, you don’t need to deal with any of this. That’s one of the reasons color picking is generally much better. Color picking doesn’t return coordinates, it returns the object ID that the user has picked.

Thanks for your answer…
I know about color picking, but can I use it with textured objects and blending enabled? If so, I’ll try that solution too.

Anyway, I’d like to understand and implement the ray-picking approach.
I understand that the model scene is in world coordinates (only the modelling transformation, without the viewing transformation).

I’ve gotten to the point of having the ray, by applying gluUnProject twice, with winZ = 0 and then winZ = 1. Now I should intersect this ray with all my objects to detect which one has been tapped, right? I think I’m missing this last part. If I understand correctly, to do that I need to work in world coordinates, and so I need to get all the objects in world coordinates… How do I do that?
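(The missing last step, building a ray from the two unprojected points and testing it against each object, can be sketched like this; a NumPy sketch with made-up function names, assuming each object is approximated by a bounding sphere in the same coordinate space as the ray:)

```python
import numpy as np

def pick_ray(near_point, far_point):
    """Build an origin + unit direction from the two gluUnProject
    results (winZ = 0 gives the near point, winZ = 1 the far point)."""
    origin = np.asarray(near_point, dtype=float)
    direction = np.asarray(far_point, dtype=float) - origin
    return origin, direction / np.linalg.norm(direction)

def ray_hits_sphere(origin, direction, center, radius):
    """Standard quadratic test: solve |origin + t*direction - center|^2
    = radius^2 for t; a hit is any real, non-negative root."""
    oc = origin - np.asarray(center, dtype=float)
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c  # a == 1 because direction is normalized
    if disc < 0.0:
        return False
    sq = np.sqrt(disc)
    return (-b - sq) / 2.0 >= 0.0 or (-b + sq) / 2.0 >= 0.0
```

In a real picker you would run this test against every selectable object and keep the hit with the smallest positive t (the closest one to the viewer).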

You don’t understand the key to color picking: it is a two-pass method, so no, in the first pass you do not use any textures or blending. You color with the object ID, but you never display the first pass. In the second pass you can do whatever you want; that’s the pass you display.

If you insist on the ray-tracing approach, you can google it. In any case, the way you get all your geometry in world coordinates is by transforming it on the CPU. It’s slow (that’s what the GPU should be used for), and it’s redundant, because you still have to do it on the GPU anyway in order to display anything. In other words, you have to duplicate on the CPU what the GPU does.
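(That CPU-side duplication amounts to multiplying every local vertex by the object’s model matrix. A minimal NumPy sketch, assuming 4x4 matrices in the column-vector convention; the helper name is illustrative:)

```python
import numpy as np

def to_world(local_vertices, model_matrix):
    """Transform object-space vertices to world space on the CPU by
    applying the model matrix, exactly as the GPU would."""
    out = []
    for v in local_vertices:
        h = model_matrix @ np.array([v[0], v[1], v[2], 1.0])
        out.append(h[:3] / h[3])  # homogeneous divide (a no-op for affine matrices)
    return out

# Example: a model matrix that translates by (2, 0, 0)
T = np.eye(4)
T[0, 3] = 2.0
world = to_world([(1.0, 0.0, 0.0)], T)
```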

Sorry, but I’m a bit confused… If I understand correctly, color picking lets me “tag” an object with a solid color. Then I can apply textures, blending, and any transformation to it… When I tap an object on the screen, I call glReadPixels() and compare the result with the colors of all the objects to find which one has been tapped. What about the “ID” you mentioned?

Another point… To obtain world coordinates from object coordinates, I have to apply only the model transformation, without the viewing transformation, right? Because if I apply both, I’m applying the MODELVIEW matrix, and so I get eye coordinates instead of world coordinates, right?

Sorry if I don’t catch on quickly and if I’m asking obvious questions, but I’ve only been working with OpenGL for two weeks, and this is the first time I’ve used “picking” :slight_smile:

Again, for color picking, the two passes are separate:

  • one pass renders everything normally and swaps buffers, to be seen by humans
  • one pass renders without texturing, blending, or antialiasing: only a different solid color for each selectable object. This is never seen onscreen (no buffer swap; it stays in the back buffer), and the only result is a glReadPixels() call under the mouse pointer to read the color. The color is the ID of the object: pure red = big sphere, pink = small sphere, blue = cylinder, etc.
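(The “color is the ID” trick described above can be sketched in plain Python: pack an integer ID into the 8-bit RGB channels used for the picking pass, then decode the color that glReadPixels hands back. The function names are illustrative:)

```python
def id_to_color(obj_id):
    """Encode an object ID into an RGB triple (8 bits per channel),
    giving each selectable object a unique solid color for the
    picking pass. Supports up to 2^24 distinct objects."""
    return ((obj_id >> 16) & 0xFF, (obj_id >> 8) & 0xFF, obj_id & 0xFF)

def color_to_id(r, g, b):
    """Decode the RGB values read back under the touch point into
    the original object ID."""
    return (r << 16) | (g << 8) | b
```

Note that for this to work reliably, the picking pass must be drawn with flat, unlit colors and no dithering, so the exact bytes written are the exact bytes read back.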

Oh yes! I just found this solution by googling. I didn’t understand color picking before because I didn’t know it was possible to use an offscreen framebuffer; now it’s clear how it works. :smiley:

I will try this approach and let you know the result. Thank you both very much :smiley: