Question about Augmented Reality and OpenGL

Hi,

I'm making a project about AR (augmented reality).

Basically, in my project I capture images with a video camera and then I have to place a 3D model on top of that video so the model looks like part of the real scene.

I know a bit about the MODELVIEW and PROJECTION matrices. I guess the PROJECTION matrix has to hold information coming from the camera images. But how can I convert those camera images into a matrix?

I don't know if my question is well explained… if you want a better explanation I can give you one.

Bye. Javi.

The PROJECTION matrix holds information about projection properties such as the field of view.
Rotations and translations usually go in MODELVIEW (although you can put some transformations in PROJECTION if you think that makes sense).
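
As a rough illustration of that split (in old fixed-function OpenGL, with made-up values for the field of view and the transforms), the usual pattern looks something like this:

```cpp
// Minimal fixed-function sketch of the split: the virtual camera's "lens"
// (field of view, aspect, clip planes) goes into PROJECTION, the pose of the
// camera/model goes into MODELVIEW.  All numeric values are placeholders.
#include <GL/gl.h>
#include <GL/glu.h>

void setupMatrices(float fovY, float aspect)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(fovY, aspect, 0.1, 100.0);   // projection properties (FOV, near/far)

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -5.0f);            // move the model in front of the camera
    glRotatef(30.0f, 0.0f, 1.0f, 0.0f);         // orient it
}
```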

Now, the only thing I know about these issues is that you must process the image data to extract information from it.
Starting the system from a well-known place (near a precisely placed checkerboard, for example, or looking at a tiled floor) can help greatly.
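
For the checkerboard idea, one possible starting point is OpenCV's built-in calibration routines. This is only a sketch: the 9x6 board and the 25 mm squares are assumptions, and the frames are whatever your capture code delivers.

```cpp
// Rough sketch of checkerboard calibration with OpenCV (C++ API).
// Board dimensions and square size are placeholder values.
#include <opencv2/opencv.hpp>
#include <vector>

bool collectCorners(const cv::Mat& frame,
                    std::vector<std::vector<cv::Point2f>>& imagePoints)
{
    cv::Size boardSize(9, 6);                     // inner corners of the board
    std::vector<cv::Point2f> corners;
    bool found = cv::findChessboardCorners(frame, boardSize, corners);
    if (found)
        imagePoints.push_back(corners);           // keep corners from this view
    return found;
}

void calibrate(const std::vector<std::vector<cv::Point2f>>& imagePoints,
               cv::Size imageSize)
{
    // The 3D positions of the corners on the flat board (Z = 0), row by row.
    std::vector<cv::Point3f> board;
    for (int y = 0; y < 6; ++y)
        for (int x = 0; x < 9; ++x)
            board.push_back(cv::Point3f(x * 0.025f, y * 0.025f, 0.0f)); // 25 mm squares
    std::vector<std::vector<cv::Point3f>> objectPoints(imagePoints.size(), board);

    cv::Mat cameraMatrix, distCoeffs;             // intrinsic parameters come out here
    std::vector<cv::Mat> rvecs, tvecs;            // one pose per calibration image
    cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                        cameraMatrix, distCoeffs, rvecs, tvecs);
}
```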

Another commonly applied step is edge detection, in which vertical lines are found. For interiors, many things are cuboids, so you can analyze their parallax. By examining how they evolve over a number of frames, you get the camera movement…
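
As a sketch of that edge-detect step with OpenCV (not the only way to do it), you could build a Canny edge map, run a Hough transform, and keep only the roughly vertical segments; all thresholds below are arbitrary and would need tuning:

```cpp
// Sketch of edge detection plus line finding, keeping near-vertical segments.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

std::vector<cv::Vec4i> findVerticalLines(const cv::Mat& frameGray)
{
    cv::Mat edges;
    cv::Canny(frameGray, edges, 50, 150);         // edge map (thresholds are guesses)

    std::vector<cv::Vec4i> lines, vertical;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 30, 10);

    for (const cv::Vec4i& l : lines) {
        double dx = l[2] - l[0], dy = l[3] - l[1];
        double angle = std::atan2(std::abs(dy), std::abs(dx));
        if (angle > CV_PI / 3)                    // steeper than ~60 degrees: call it vertical
            vertical.push_back(l);
    }
    return vertical;
}
```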

Well, I'm sorry to tell you something you probably already know, but this is like asking how to build a whole car!

Ok, I understand :slight_smile: but I'm just starting…

I'll try to give an example; maybe someone can
help me get started:

I'm interested in knowing how to convert a 2D circle drawn on a table that I'm recording with a camera into a 4x4 matrix, as for a 3D model.

As the camera changes position (and so does the view of the circle), the values of that 4x4 matrix should change too.

Can I do this with OpenGL?

I know there was a project based on Quake 2 that aimed to do AR. What you're looking for is usually the trickiest part of AR. Generally, as Obli said, you'll have to find some interesting geometry in the image. The most important thing is to detect perpendicular lines, which will obviously give you useful information about what you see.

Well, I remember now that the Quake 2 project didn't actually use anything from a camera. Instead, it used a real location (like a house) and added unreal things inside it, like fake enemies, fake objects and so on. The idea was to use goggles that people wore so they could really move around a real location and play with virtual things.

Getting camera parameters from the image is the hardest part of AR. You generally need to calibrate your camera (retrieve its intrinsic parameters) and then do camera tracking to get its position and orientation in space.

I don't think OpenGL will help you with this part. It will help you display 3D objects over your video, and that's all…
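
To connect this back to the original question: once the camera is calibrated, one typical route (a sketch, not a definitive recipe) is to match known 3D points on your marker to their pixels in the frame, estimate the pose with OpenCV, and load the result as the MODELVIEW matrix. Everything below assumes you already have those correspondences and the calibration output.

```cpp
// Sketch: estimate the camera pose from known marker points (cv::solvePnP)
// and turn it into a 4x4 column-major MODELVIEW matrix for OpenGL.
// Assumes cameraMatrix/distCoeffs come from a prior calibration and that
// objectPoints/imagePoints are matched 3D marker points and their pixels.
#include <opencv2/opencv.hpp>
#include <vector>

void poseToModelview(const std::vector<cv::Point3f>& objectPoints,
                     const std::vector<cv::Point2f>& imagePoints,
                     const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
                     double modelview[16])
{
    cv::Mat rvec, tvec;
    cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

    cv::Mat R;
    cv::Rodrigues(rvec, R);                       // 3x3 rotation from the axis-angle vector

    // OpenCV camera: x right, y down, z forward.  OpenGL camera: x right, y up, z backward.
    // Negating the 2nd and 3rd rows converts between the two conventions.
    for (int col = 0; col < 3; ++col) {
        modelview[4 * col + 0] =  R.at<double>(0, col);
        modelview[4 * col + 1] = -R.at<double>(1, col);
        modelview[4 * col + 2] = -R.at<double>(2, col);
        modelview[4 * col + 3] =  0.0;
    }
    modelview[12] =  tvec.at<double>(0);
    modelview[13] = -tvec.at<double>(1);
    modelview[14] = -tvec.at<double>(2);
    modelview[15] =  1.0;
    // glMatrixMode(GL_MODELVIEW); glLoadMatrixd(modelview);  // then draw the model
}
```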

you can have a look at ARtoolkit:
http://www.hitl.washington.edu/research/shared_space/download/

and this:
http://www.se.rit.edu/~jrv/research/ar/

A few years ago I built a system to do AR using ARtoolkit, OpenCV and OpenGL to display the virtual objects over the video, but it's far back in my memory…
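
For the "display the virtual objects over the video" part, a minimal OpenGL-only sketch (assuming the frame has already been converted to RGB and the tracking code fills in the two matrices) looks roughly like this:

```cpp
// Sketch of compositing: draw the camera frame as a background, then draw
// the virtual object with the projection/modelview obtained from tracking.
// 'frameRGB' is assumed to be a width x height RGB image from the capture code.
#include <GL/gl.h>

void drawFrame(const unsigned char* frameRGB, int width, int height,
               const double projection[16], const double modelview[16])
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // 1) Video background, drawn without depth testing so the model always lands on top.
    //    Note: glDrawPixels expects bottom-up rows, so the frame may need a vertical flip.
    glDisable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION); glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);  glLoadIdentity();
    glRasterPos2f(-1.0f, -1.0f);                  // lower-left corner of the window
    glDrawPixels(width, height, GL_RGB, GL_UNSIGNED_BYTE, frameRGB);

    // 2) Virtual object, using the matrices recovered from calibration/tracking.
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION); glLoadMatrixd(projection);
    glMatrixMode(GL_MODELVIEW);  glLoadMatrixd(modelview);
    // ... draw the 3D model here ...
}
```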

Hope it helps…

Thanks gemelli_d!