Cohen-Sutherland clipping algorithm and screen XY

Hello everybody, I'm now trying to implement the Cohen-Sutherland clipping algorithm in 3D space, working with the viewing volume. Generally I understand how this algorithm works, but there is still one small thing which I don't know how to explain. Let's suppose I apply the ModelView matrix and the projection matrix to some point (a, b, c). After that I get a point (x, y, z). To check whether this point is in the viewing volume I use this simplified Cohen-Sutherland test: x > -1 && x < 1 && y > -1 && y < 1 && z > -1 && z < 1 (I can use this comparison freely, because my projection matrix converts the coordinates of point (a, b, c) so that they fit into the -1…1 range in all three Cartesian directions, except for points which are outside the viewing volume). If that expression is true, then I use the Viewport transformation to find the coordinates of (x, y, z) on screen. Suppose the final screen coordinates of the point are (A * x + C, B * y + D) = point S.

Here we can see exactly that the Cohen-Sutherland algorithm has nothing to do with the Viewport transformation. So the question is: how can I be sure that point S will not fall outside the boundaries of the monitor after the Viewport transformation has been applied to it? I can change any constant A, B, C or D so that point S will surely fall outside the boundaries of the screen, and at that step I get an Out_Of_Boundaries error. (Please do not attach OpenGL to this topic.) So what is the main point of the Cohen-Sutherland clipping algorithm? It does not take the Viewport transformation into account, so it does not guarantee that the point will not jump off my screen. It is obvious that I can calculate the constants A, B, C, D so that the point fits onto the screen. But I should be free to assign them any value and at the same time be sure that a point which is inside the viewing volume will not jump off the screen. How do I do that? Have I actually misunderstood something?
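To make this concrete, here is a minimal sketch of the test and the viewport step I describe above (the function names and the way A, B, C, D are passed around are just placeholders I made up):

// Minimal sketch of the simplified test and the viewport step described above.
// (x, y, z) is the point after projection; A, B, C, D are the viewport constants.
bool InsideViewVolume(float x, float y, float z)
{
    return x > -1 && x < 1 && y > -1 && y < 1 && z > -1 && z < 1;
}

// Maps a surviving point to screen coordinates: S = (A*x + C, B*y + D).
void ToScreen(float x, float y,
              float A, float B, float C, float D,
              float& screenX, float& screenY)
{
    screenX = A * x + C;
    screenY = B * y + D;
}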

You realise your post is impossible to read unless one is paid for it? And it has nothing to do with OpenGL. Maybe it belongs in the Math & Algos section, but for sure few people will be able to help, so do not be surprised if you get few answers.

Plus Cohen-Sutherland clipping is ages old and less efficient than other algos.

I agree with ZbuffeR’s comment on your post being very hard to understand. And I don’t think I really do.

Taking a random stab, it sounds like you’re trying to clip after you’ve done the perspective divide and the viewport transformation. You don’t want to do that. What happens when z_eye = 0? Ugly infinite blow-ups and sign reversals and such. You want to clip before the perspective divide.
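To illustrate, a minimal sketch of the kind of test you would do before the divide, on raw clip coordinates (x, y, z, w) straight out of the projection matrix (the names here are mine, not from any particular codebase):

// "Trivial accept" test done BEFORE the perspective divide, on clip
// coordinates (x, y, z, w). Inside means -w <= x <= w, -w <= y <= w,
// -w <= z <= w, which is the same as the -1..1 test on x/w, y/w, z/w,
// but it never divides, so w == 0 (i.e. z_eye == 0) cannot blow up.
bool InsideClipVolume(float x, float y, float z, float w)
{
    return w > 0 &&              // rejects points at or behind the eye plane
           -w <= x && x <= w &&
           -w <= y && y <= w &&
           -w <= z && z <= w;
}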

Dark Photon
I'm clipping just before the perspective divide, so it should work fine.

ZbuffeR
OK, yes, it is hard to read. Sorry for that; I thought there would not be any problems, and I should have written a proper post on the first go! I know the post is somewhat unreadable. I am used to describing my problem in detail, but I buried those details in a great mess… Still, once I have described my problem properly, you can understand how I am thinking about it and point out where I am wrong. So please give my post one more try with this more readable version below, and I am sure you will understand my problem now! I really want to get this question answered. (I would be glad if a moderator replaced the first post with this one.)


Hello everybody, I'm now trying to implement the Cohen-Sutherland clipping algorithm in 3D space, working with the viewing volume. Generally I understand how this algorithm works, but there is still one small thing which I don't know how to explain. So I will go step by step through all the operations a point goes through in order to be displayed on screen, to check whether my assumptions are right, and then ask a question at the end of the post.

If you have good knowledge of this algorithm, you can maybe answer the final question right away: how can I guarantee that a point A which survives the Cohen-Sutherland clipping algorithm will not be drawn outside the screen when the Viewport transformation is applied to it?

If you don't know the answer, in the rest of the post I describe in detail what I do to prepare a point for the clipping algorithm:

Let's suppose I have a point P in homogeneous coordinates: P = (a, b, c, 1).

Imagine we apply the ModelView matrix transformation MV to point P, so that we get a new point Q: Q = MV * P = (q_x, q_y, q_z, 1).

Let's now imagine I have a projection transformation matrix PR, which transforms the coordinates of point Q into a new point C whose coordinates lie in the Canonical Viewing Volume. N means Near, F means Far, top is the y coordinate which will be considered the top of the screen, bottom is the y coordinate which will be considered the bottom of the screen. Right and left play the same role for the x coordinate.

My function to set these values in PR is as follows (similar to what OpenGL does when we call gluPerspective):


void Perspective(int viewAngle, float aspect, float N, float F)
{
    // TanTable holds tabulated values of tan() for 0..360 degrees, step 1 degree.
    float top = N * TanTable[viewAngle >> 1];
    float bot = -top;
    float right = top * aspect;
    float left = -right;
    PR[0][0] = 2 * N / (right - left);
    PR[1][1] = 2 * N / (top - bot);
    PR[0][2] = (right + left) / (right - left);
    PR[1][2] = (top + bot) / (top - bot);
    PR[2][2] = -(F + N) / (F - N);
    PR[2][3] = -2 * F * N / (F - N);
    PR[3][2] = -1; // needed so that the clip-space w becomes -z_eye for the divide
    PR[3][3] = 0;  // (assuming the remaining elements of PR are zero)
}

Let's now project the coordinates of point Q to get point C: C = PR * Q = (c_x, c_y, c_z, c_w).
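In code this projection is just a 4x4 matrix times a homogeneous vector; a rough sketch, assuming PR is stored row-major as in my Perspective function above (the Vec4 type is only for illustration):

// Multiply the projection matrix PR (row-major float[4][4]) by the
// homogeneous point Q = (q.x, q.y, q.z, 1) to get clip coordinates C.
// With the matrix built in Perspective(), C.w comes out as -q.z (eye-space z).
struct Vec4 { float x, y, z, w; };

Vec4 Project(const float PR[4][4], const Vec4& q)
{
    Vec4 c;
    c.x = PR[0][0]*q.x + PR[0][1]*q.y + PR[0][2]*q.z + PR[0][3]*q.w;
    c.y = PR[1][0]*q.x + PR[1][1]*q.y + PR[1][2]*q.z + PR[1][3]*q.w;
    c.z = PR[2][0]*q.x + PR[2][1]*q.y + PR[2][2]*q.z + PR[2][3]*q.w;
    c.w = PR[3][0]*q.x + PR[3][1]*q.y + PR[3][2]*q.z + PR[3][3]*q.w;
    return c;
}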

Finally, after dividing by c_w, we get the point CF = (x, y, z) = (c_x / c_w, c_y / c_w, c_z / c_w), which we can check to see whether it lies inside the Canonical Viewing Volume: x > -1 && x < 1 && y > -1 && y < 1 && z > -1 && z < 1 (this is a somewhat simplified Cohen-Sutherland test, but for our purposes it is sufficient).
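For completeness, the non-simplified form of the test assigns each endpoint a 6-bit outcode, one bit per face of the canonical volume, which is what makes trivial accept/reject of whole segments possible. A rough sketch of just the outcode part, using the same -1…1 volume (the constant names are only illustrative):

// 6-bit Cohen-Sutherland outcode for a point in the canonical volume;
// one bit per clip plane, 0 means the point is completely inside.
enum {
    OUT_LEFT   = 1,  OUT_RIGHT = 2,
    OUT_BOTTOM = 4,  OUT_TOP   = 8,
    OUT_NEAR   = 16, OUT_FAR   = 32
};

int Outcode(float x, float y, float z)
{
    int code = 0;
    if (x < -1) code |= OUT_LEFT;   else if (x > 1) code |= OUT_RIGHT;
    if (y < -1) code |= OUT_BOTTOM; else if (y > 1) code |= OUT_TOP;
    if (z < -1) code |= OUT_NEAR;   else if (z > 1) code |= OUT_FAR;
    return code;
}

// For a segment P0-P1:
//   (Outcode(P0) | Outcode(P1)) == 0  -> trivially accept the whole segment
//   (Outcode(P0) & Outcode(P1)) != 0  -> trivially reject the whole segment
//   otherwise intersect the segment with one of the offending planes and repeat.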

If CF survives this clipping, it is displayed on screen at the coordinates dictated by the Viewport transformation as follows: S = (A * x + C, B * y + D). Here we can see exactly that the Viewport transformation has nothing to do with the clipping algorithm. I can freely change any of these four constants (A, B, C, D) so that the point will surely jump off the screen, even if it is in the center of the viewing volume, and at this stage we get an imaginary COORDS_OUT_OF_SCREEN error.

So the main question of this thread, which I have been leading up to: how can I guarantee that a point which survives the Cohen-Sutherland clipping algorithm will not be drawn outside the screen when the Viewport transformation is applied to it? (Surely that last sentence could replace all the text I have written so far, but I had to describe how I do my projection and clipping so that you can point out where the problem is. Glad if you read this far!)

This is actually done by the pixel ownership test, and it happens after rasterization, i.e. pixels not owned by your window are always discarded. You can set up a viewport that projects only part of the clip volume to the window, but you won't see anything outside of the window.

In practice you should always make sure that the viewport matches the window, so that as many vertices as possible can be rejected in the clipping stage.

Hmm, on one hand this sounds right, but on the other hand I would say that the clipping algorithm should do all the work regarding clipping, for example clipping a line to the window. If it does not, can I really call the Cohen-Sutherland algorithm a clipper? Probably not.

Nevertheless, it seems that you are right. I have to assign my Viewport transformation values which scale every point to the screen size. Knowing that coordinates in the Canonical Viewing Volume always lie between -1.0 and 1.0, it is very easy to calculate all of the constants (A, B, C, D). For example, for the x axis: A = Width / 2 and C = Width / 2, so that x in [-1, 1] maps to [0, Width]. B and D are calculated in the same manner from the window height.
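In code, picking the constants from the window size could look roughly like this (the sign of B is my assumption and only matters if screen y grows downwards):

// Viewport constants chosen so that the canonical volume [-1, 1] maps
// exactly onto the window [0, Width] x [0, Height]; with these values a
// point that survived clipping can never land outside the window.
void ViewportConstants(float Width, float Height,
                       float& A, float& B, float& C, float& D)
{
    A = Width / 2;    C = Width / 2;   // screenX = A * x + C covers [0, Width]
    B = -Height / 2;  D = Height / 2;  // screenY = B * y + D covers [0, Height];
                                       // the minus sign flips y for screens where
                                       // y grows downwards (drop it otherwise)
}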

And by the way I have good news: I have implemented the simplified Cohen-Sutherland algorithm with (A, B, C, D) calculated according to the window size, and everything seems to work fine! I feel that I'm very near to a fully working clipper! Here is a screenshot of what my very simple program does (please don't judge the picture too harshly, I'm only learning and I will add new features soon, after I finish this clipper):

To clarify / point out some interesting edge cases, consider this sequence of operations:

  1. the clipper clips to the view frustum
  2. the viewport transforms clip coords to window coords
  3. then the rasterizer draws points, lines, triangles

Note that the clipper does NOT clip to the viewport. And neither does the pixel ownership test.

So you can set the viewport to a subrect of the window (i.e. 4-up CAD style rendering) and then draw a 3D scene. Regular triangles will be effectively clipped to the viewport. But that is not necessarily true for other primitives. glBitmap, glDrawPixels, large points, and wide lines can all produce fragments during rasterization that would fall outside of the viewport.

That’s why OpenGL also has the SCISSOR test. It is independent of the pixel ownership test.

Another way to say this:
Clipping clips vertices.
Scissor clips pixels.
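
Conceptually the scissor test is nothing more than a rectangle check in window coordinates, applied to every fragment the rasterizer produces before it is written. A rough sketch (the names are illustrative, not any particular API):

// Conceptual per-fragment scissor test: a fragment at window coordinates
// (px, py) survives only if it falls inside the scissor rectangle
// [sx, sx + sw) x [sy, sy + sh); everything else is discarded before it
// can touch the framebuffer.
bool PassesScissor(int px, int py, int sx, int sy, int sw, int sh)
{
    return px >= sx && px < sx + sw &&
           py >= sy && py < sy + sh;
}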

Thanks for the info! If you also know the algorithmic details of how the scissor test works in a real rasterizer, can you please show me how to do this test properly?

Some more questions about this:

Do I need some specific data structures to store pixel data in order to perform the scissor test efficiently?

Do I need some specific order of data processing?

Does the scissor test actually test every pixel that is drawn to the screen? Or is there an efficient method to quickly find the pixels which lie outside the screen?

Also, ZbuffeR mentioned clipping algorithms which are faster than Cohen-Sutherland; which ones are they? On Wikipedia I found the Liang–Barsky algorithm; should I try that one?