Ray tracing: Z coordinate (depth) of bounding box seems much bigger than it should be

I am making a 3D scene using a ray tracing algorithm. Whenever I add a bounding box with sides of equal length, the scene I get looks like a tunnel, not like a box. The bounding box seems much deeper than it actually is.
My positioning settings are:

[ul]
[li]Camera position: (0,0,-10)
[/li][li]Camera direction: (0,0,1), so towards positive Z axis
[/li][li]Up vector: (0,1,0)
[/li][li]Distance from the camera to image plane: 10 units
[/li][li]One sphere at (0,0,2) with a radius of 2 units
[/li][li]Bounding box of 10 units along each axis.
[/li][/ul]

So the image plane should be at Z = 0, with its center at (0,0) in X and Y.

This is the image I am getting:

[ATTACH=CONFIG]1698[/ATTACH]

And as you can see, the little dot in the center is the sphere at (0,0,2), which shouldn't look that far away.

I only get a box-like bounding box with extreme settings, e.g. when the camera is 100+ units away from the image plane while the bounding box is only 10 units on each axis, but I should get the right image with a smaller distance to the plane.

Thanks.

The field of view angle is too large. Maybe you’re assuming degrees where radians are being used, or you’re using pixels for screen coordinates rather than normalising.
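For instance (a made-up illustration, not your code), the C math functions take radians, so passing degrees straight through quietly distorts the scale:

#include <cmath>
#include <cstdio>

int main() {
	const double PI = 3.14159265358979323846;
	double halfFov = 45.0; // half field-of-view, intended as degrees
	double wrong = std::tan(halfFov);              // tan(45 radians) ~ 1.62
	double right = std::tan(halfFov * PI / 180.0); // tan(45 degrees) = 1.0
	std::printf("wrong: %f  right: %f\n", wrong, right);
	return 0;
}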

First I convert width and height to [-1,1], and then I find the vectors from the camera to every pixel on the screen, so I don't define an angle of view; it is defined by the dimensions of the window and the distance from the camera to the screen.

This is how I find vector from camera to pixel:

Vector3 v(float x, float y) { // x, y in [-1,1]
	Vector3 Iy = uk;                  // up vector
	Vector3 Ix = uk.crossProduct(dk); // dk is the direction from camera to screen, so Ix and Iy are the axes of the screen
	// ek - position of the camera, t_dist - distance from camera to screen,
	// w, h - width and height of the window
	Vector3 ret = ek + dk*t_dist + (Ix*x*(w / 2.0) + Iy*y*(h / 2.0));
	return ret;
}

And now, for every pixel (with x converted from [0,width] to [-1,1], and likewise for y), I cast a ray starting at the position of the camera, in the direction of:

v(x,y) - ek
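
For completeness, here is a minimal sketch of my per-pixel loop (castRay, width and height are placeholder names, not my exact code):

// Sketch of the per-pixel loop described above; castRay, width and
// height are placeholder names.
for (int py = 0; py < height; ++py) {
	for (int px = 0; px < width; ++px) {
		float x = 2.0f * (px + 0.5f) / width  - 1.0f; // [0,width)  -> [-1,1]
		float y = 2.0f * (py + 0.5f) / height - 1.0f; // [0,height) -> [-1,1]
		Vector3 dir = v(x, y) - ek;            // direction from camera to pixel
		castRay(ek, dir.normalized(), px, py); // assuming a normalized() helper
	}
}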

[QUOTE=princip;1290390]First I convert width and height to [-1,1], and then I find the vectors from the camera to every pixel on the screen, so I don't define an angle of view; it is defined by the dimensions of the window and the distance from the camera to the screen.

This is how I find vector from camera to pixel:


	Vector3 ret = ek + dk*t_dist + (Ix*x*(w / 2.0) + Iy*y*(h / 2.0));
[/QUOTE]
If w and h are in pixels, your field-of-view angle is probably much too large. If Ix, Iy and dk are (1,0,0), (0,1,0) and (0,0,1) respectively, then v(1,1) = (w/2, h/2, t_dist), and the other corners will be similar (i.e. the same except for signs).

You said previously that

[QUOTE]Distance from the camera to image plane: 10 units[/QUOTE]

For a 90-degree (+/- 45-degree) field of view, you'd need w=h=20 (i.e. the distance from the centre of the window to the edge of the window needs to be equal to t_dist).
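
To put numbers on it: the horizontal field of view your mapping produces is 2*atan((w/2)/t_dist). A quick sketch, assuming a hypothetical 800-pixel-wide window:

#include <cmath>
#include <cstdio>

int main() {
	double t_dist = 10.0; // distance from camera to image plane (from your settings)
	double w = 800.0;     // hypothetical window width in pixels
	double fovRadians = 2.0 * std::atan((w / 2.0) / t_dist);
	double fovDegrees = fovRadians * 180.0 / 3.14159265358979323846;
	std::printf("horizontal FOV: %.1f degrees\n", fovDegrees); // ~177.1 degrees
	return 0;
}

An FOV that close to 180 degrees is exactly what makes the scene look like a tunnel.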

For reference, I’d suggest looking at how gluPerspective calculates the scale factors.
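
gluPerspective scales by 1/tan(fovy/2); the equivalent for your setup is to pick the FOV first and derive the half-extents of the image plane from it. A rough sketch (fovyDegrees, width and height are names I'm making up):

#include <cmath>

// gluPerspective-style sizing: pick the vertical FOV, then size the image
// plane to match it, instead of using the window size in pixels.
void computeHalfExtents(double t_dist, int width, int height,
                        double fovyDegrees, double& halfW, double& halfH) {
	const double PI = 3.14159265358979323846;
	halfH = t_dist * std::tan(fovyDegrees * 0.5 * PI / 180.0);
	halfW = halfH * (double)width / (double)height; // keep the aspect ratio
	// then in v(x,y): ret = ek + dk*t_dist + Ix*x*halfW + Iy*y*halfH;
}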

Oh, so the dimensions of the screen in camera space are the width and height of the window. I made the bounding box dimensions the same as the window size in pixels and now the scene looks much better. Thanks for the help.