Using near and far clipping to display layers of an object

Hello everyone

I am programming a non-game application using OpenTK on Windows, and I need to solve this specific problem.

I wish to render 3D objects in “layers”, with the eye always positioned at the top, looking down.

For example, if I have a sphere 10 units in diameter and a layer thickness of 1, I will be able to scroll through 10 layers. When I draw a layer, I need to include only the fragments contained within that layer.

So, still considering I’m rendering from the top, and assuming culling is turned off, I’ll be able to see 2 circles (for the top and bottom layers) and 8 rings (for the rest of the layers, with extremely thin rings towards the middle).

Of course, the ideal solution would be to place my camera at the top of my object, facing down, and set the near and far clip planes to the layer’s top and bottom offsets, respectively. However, the result does not look at all like what I’d expect. The smaller the layer thickness, the larger the error.
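To illustrate, here is the layer-to-clip-plane mapping I have in mind (a Python sketch of the math only, not my actual OpenTK code; the helper name is made up):

```python
# Sketch: per-layer near/far clip distances for a top-down camera.
# Assumes the eye sits `eye_height_above_top` units above the object's
# topmost point, looking straight down; layer 0 is the topmost layer.

def layer_clip_planes(eye_height_above_top, layer_thickness, layer_index):
    """Return (near, far) distances from the eye for the given layer."""
    near = eye_height_above_top + layer_index * layer_thickness
    far = near + layer_thickness
    return near, far

# Example: eye 5 units above a 2-unit sphere, 0.1-unit layers.
print(layer_clip_planes(5.0, 0.1, 0))  # top layer
print(layer_clip_planes(5.0, 0.1, 2))  # third layer from the top
```

These (near, far) pairs are what I feed into the projection setup for each layer.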

Here is how it looks when I try to render a sphere, very close to the top. Layer thickness is 0.1, sphere diameter is 2 units. These are the top three layers:
[ATTACH=CONFIG]930[/ATTACH]

You can see how the layers take on a square shape as they get close to the top, whereas they should be circular. The sphere model has 32 segments, by the way.

The obvious explanation is that it’s a floating-point precision issue, but this is severely affecting the correctness of my application.

My requirements would be the following:

  • I need the highest precision possible. The solution should preferably behave the same across all devices.
  • I do NOT care about performance issues a possible solution may have. This is not a game, and the image is not generated multiple times per second (usually it’s a one-off generation).
  • Preferably, each layer should complement the others - if I have to choose between overlapping or having “gaps” (i.e. fragments not belonging to any layer), I’d rather go with overlapping.
  • I’d rather not have to resort to generating geometry (i.e. generating the rings dynamically, then rendering them).

I would rather not disclose why I have to do this, but I have very practical reasons behind it.

Thank you in advance.

A single-precision float has a 23-bit mantissa, which corresponds to a resolution of about 0.000244 pixels across a 2K screen, so it is unlikely that you are seeing floating-point precision issues.
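The arithmetic behind that number (assuming a 2048-pixel-wide “2K” screen):

```python
# Relative resolution of a 23-bit mantissa is 2**-23; spread across
# 2048 pixels, the smallest representable step is:
resolution = 2048 * 2.0 ** -23
print(resolution)  # 0.000244140625 pixels
```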

The more likely issue is that you don’t render a sphere, because OpenGL simply can’t do that. It can only render a triangle mesh that approximates a sphere, and when you cut a slice through this “sphere” you will always get a polygon, not a circle.

To improve your results, your only option with this approach is to increase the number of triangles representing the sphere; this yields a better approximation, so the polygon slice will also better approximate a circle.
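You can quantify how fast the approximation improves. For a regular N-gon inscribed in a circle of radius r, the edge midpoints dip to a distance of r·cos(π/N) from the center, so the worst-case radial error is r·(1 − cos(π/N)). A quick sketch:

```python
import math

# Worst-case radial deviation of an inscribed regular N-gon from the
# true circle of radius r: the edge midpoint lies at r*cos(pi/N).
def max_radial_error(r, n_segments):
    return r * (1.0 - math.cos(math.pi / n_segments))

# Error shrinks roughly with 1/N^2 as you add segments:
for n in (32, 64, 128, 256):
    print(n, max_radial_error(1.0, n))
```

For a unit-radius sphere with 32 segments, that is an error of roughly 0.005 units, which is easily visible in thin slices near the poles.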

There is another option: use a fragment shader to ray-trace a sphere.
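Per fragment, such a shader just solves the analytic ray/sphere intersection, so the silhouette is exactly circular at any resolution. Here is a CPU sketch of that computation in Python (illustrative only, not shader code):

```python
import math

# Analytic ray/sphere intersection: solve |o + t*d - c|^2 = r^2 for t.
# This is what a ray-tracing fragment shader would evaluate per pixel.
def ray_sphere(origin, direction, center, radius):
    """Return the nearest hit distance t, or None. `direction` must be normalized."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0.0 else None

# Ray straight down the z axis toward a unit sphere at the origin:
print(ray_sphere((0, 0, 5), (0, 0, -1), (0, 0, 0), 1.0))  # 4.0
```

In the shader you would then discard fragments whose hit point falls outside the current layer’s depth range, which also removes the dependence on near/far clip precision.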

With the existing approach, the OP will get better results if the sphere’s axis is aligned with the view direction: a 32-segment sphere will then always produce a 32-segment polygon when sliced. The fact that it degenerates to a square suggests the axis is perpendicular to the view direction, so an extreme slice consists of four quads surrounding a vertex on the “equator”.