# Thread: depth coordinate

1. ## depth coordinate

I'm having trouble understanding this passage from my book:

> In perspective projection, the transformed depth coordinate (like the x and y coordinates) is subject to perspective division by the w coordinate. As the transformed depth coordinate moves farther away from the near clipping plane, its location becomes increasingly less precise. (See Figure 3-18.)
>
> Figure 3-18: Perspective Projection and Transformed Depth Coordinates
>
> Therefore, perspective division affects the accuracy of operations which rely upon the transformed depth coordinate, especially depth-buffering, which is used for hidden surface removal.

First, I'm wondering: is the depth coordinate the same as the z coordinate you originally specify with glVertex? Second, why does the accuracy decrease? What's causing that to happen?

2. ## Re: depth coordinate

The values stored in the depth buffer are based on the reciprocal of the eye-space z, with the result that depth-buffer resolution is best near the near plane but falls off quickly as you move out. I suggest you read this: http://www.sjbaker.org/steve/omniv/l..._z_buffer.html

3. ## Re: depth coordinate

"is the depth coordinate the same as the z-coordinate you originally specify in GLvertex"

No, it isn't. You specify a point in WORLD-space, so z there is only your third dimension. The "depth" value (the z coordinate in SCREEN-space) is derived from the distance from your CURRENT camera position to the rasterized pixel. It is not the actual distance (in world units), though, but a value between 0 and 1, where 0 is exactly on your near plane and 1 is exactly on your far plane.

Additionally, those depth values are not distributed linearly. A value of 0.5 is not exactly half-way between the near and far planes; it is much closer to the near plane. That is exactly what your figure visualizes. The "density" (or distribution) of the depth values is much higher close to the near plane. Therefore, close to the near plane many different distances can be represented (good precision), but close to the far plane only a few different values are representable (bad precision). That is why in games you sometimes see objects flicker (for example, a sliding door inside a wall): there is not enough precision to differentiate between the depth of the door and the depth of the wall, so sometimes the door is rendered even though it should be hidden inside the wall. When you come closer, the problem disappears.

Hope my explanation helps a bit. Keep on reading about it, you will understand it soon.

Jan.

4. ## Re: depth coordinate

My thanks to Jan, for your explanation, and to you, Zengar, for that link. That article does seem to discuss precisely what I wanted to know, but I am puzzled by this equation:
z_buffer_value = (1<<N) * ( a + b / z )

Where:

N = number of bits of Z precision
a = zFar / ( zFar - zNear )
b = zFar * zNear / ( zNear - zFar )
z = distance from the eye to the object

...and z_buffer_value is an integer.
I have no idea what the notation (1<<N) stands for! O_o

5. ## Re: depth coordinate

1<<N is bit-shifting; it equals pow(2.0, N).
So, if N = 16, (1<<N) = 65536.

6. ## Re: depth coordinate

AH! Thank you, Ilian!

7. ## Re: depth coordinate

Okay... this confounds me.

So we store the reciprocal of the depth coordinate in the z_buffer, according to the following formula:

z_buffer = 2^(z_buffer_size)*far*(1 - near/z)/(far - near)

note i applied some factoring, and i'm using far and near instead of z_far and z_near.

Let's assume for simplicity that the buffer size is 4 bits, so that 2^(z_buffer_size) is 16, and let near = 3 and far = 10. Our equation becomes:
z_buffer = 16*10*(1 - 3/z)/7

Now, since this equation is supposed to yield the inverse of the depth coordinate, z values close to zNear should output a very large number, since the depth coordinate is supposed to be close to zero (1/large number). Yet plugging in values close to 3 (the near clipping plane) yields values close to 0 (thanks to the (1 - 3/z) factor), and the depth coordinate would therefore be large for items close to zNear!

Likewise, z values close to zFar, around z = 10, should give you roughly the reciprocal of 1, i.e. values close to 1. But plugging in 10 yields approximately 16, whose inverse is a depth coordinate close to zero for a point near the far plane.

This seems backwards! Is my math incorrect? Is the formula posted in that link correct?

(edit) Incidentally, my current understanding is that the calculated z_buffer value is inverted to obtain the depth coordinate for that pixel; at least that's how I interpreted what Zengar said. Is this correct? My analysis seems to suggest that the z_buffer is scaled by a constant factor to range from 0 to 1, and this scaled-down value is what we call the depth coordinate. But that's just scaling, not taking the reciprocal!

8. ## Re: depth coordinate

I'm not so good at this, but I assume:

far*(1 - near/z)/(far - near)

Only this part calculates a value between 0 and 1 (and it is not distributed linearly, but reciprocally).

Now, z_buffer_size is the number of bits your buffer has, such that

2^(z_buffer_size) * depth

is actually the depth value (0 to 1) stored in an integer buffer (well, as bits somehow). And with your 4-bit example this range is between 0 and 16. Of course, when you use that value later on, you need to interpret (int) 16 as (float) 1.0, but that's done by the hardware.

So, no: it is all absolutely correct. It is the final value; there is no additional scaling to be done.

Jan.

9. ## Re: depth coordinate

> So, no it is all absolutely correct. It is the final value, no additional scaling to be done.

Umm...

> Of course, when you use that value later on, you need to interpret (int) 16 as (float) 1.0, but that's done by the hardware.

Wouldn't reducing [0,16] to [0,1] constitute scaling? :P That's exactly what I meant when I asked if the result was scaled.

I THINK I'm getting it now, though. So on a 16-bit buffer, we'd store the z_buffer value as an integer between 0 and 65536 (2^16), and then the hardware divides this by 65536 (which is what I meant by scaling) to produce a depth coordinate between 0 and 1? Is that how it works?

(edit) Ugh... I'm losing track here. From the book:

> The depth (z) coordinate is encoded during the viewport transformation (and later stored in the depth buffer). You can scale z values to lie within a desired range with the glDepthRange() command. (Chapter 10 discusses the depth buffer and the corresponding uses for the depth coordinate.) Unlike x and y window coordinates, z window coordinates are treated by OpenGL as though they always range from 0.0 to 1.0.

Depth coordinate, z coordinate, z values, depth buffer, z buffer, z window coordinates... I'm lost in the terminology here; I can't tell which one's which.

10. ## Re: depth coordinate

Is this maybe a topic I should have posted in the 'advanced' forum?
