Select only visible vertices

Hi,

Currently I’m using O3D framework with WebGL, love it.
I am working on a 3D annotation service where people can highlight and tag segments of 3D objects. However, I'm stuck on selecting only the polygons that are visible on the screen.

So, how can I retrieve the visible vertices from the z-buffer (depth buffer) in WebGL or the O3D framework, and determine whether a polygon is visible in the viewport?

Here is the link; quite a few features are not working yet since it is still in an alpha stage of development, and it only works in the latest Chrome (Firefox works but is dodgy):
http://maenad-yu.cloud.itee.uq.edu.au/3 … rd&res=low

Currently, I am using a plane to block the ray as a temporary solution; however, it breaks down when polygons are located in front of the plane. Ray tracing directly against the object to determine front faces can kill performance, so I'm avoiding that approach.

If anybody can help me it is highly appreciated.

Thanks
Dave

One old trick is to render to an off-screen buffer, turning off texture, antialiasing, blending, lighting, etc. Set the color of each object to a different value. After rendering, read back the color buffer and examine the resulting pixels. Since you know which objects were drawn in which colors, it’s easy to identify which objects were visible.

This is expensive for detecting “what’s on screen?” because you have to read back the image onto the CPU (SLOW!) and search to find all of the unique colors (VERY SLOW!). But for “what’s under the mouse pointer?”, it’s really neat - you render into a 1x1 pixel viewport right where the mouse is, read back one texel - and you’ll know what was clicked upon.

If you only do this when the mouse is clicked, the extra rendering time is rarely noticeable.
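As a sketch of the "different color per object" rule, you can simply pack an integer id into the RGB channels and unpack it on readback. The helper names here are my own, not part of O3D or WebGL:

```javascript
// Pack a segment/object id into an RGB triple and back again.
// With 8 bits per channel this supports up to 2^24 - 1 ids;
// id 0 (black) is reserved for the cleared background.
function idToColor(id) {
  if (id <= 0 || id > 0xFFFFFF) throw new Error("id out of range");
  return [(id >> 16) & 0xFF, (id >> 8) & 0xFF, id & 0xFF];
}

function colorToId(r, g, b) {
  return (r << 16) | (g << 8) | b;
}
```

Because the mapping is a bijection, there is no search involved when decoding: each readback pixel translates straight back to an id.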

– Steve

Hi Steve,

Does that mean I have to render the vertices as pixels whose colour encodes their indices or positions, and use readPixels() to retrieve the colours from the viewport? If not, how should it be done?

Note that the functionality is not merely picking with mouse clicks, but highlighting a user-defined segment on the 3D object. This method does sound slow in this case, especially since the object I'm dealing with has 195,000 vertices (65,000 polygons), but it seems to be the only way to do it.

Dave

It depends on what precision of picking you need. You’d need to render each polygon in a different color if you wanted to figure out which polygons were visible…but yes - render each independently selectable “thing” in a different color - do a readPixels and figure out which colors are in the resulting buffer.

Note that the functionality is not merely picking with mouse clicks, but highlighting a user-defined segment on the 3D object.

I don’t understand what you mean by that. The user clicks on a “segment” (a polygon? a group of related polygons?) and when you know which one that was, you “highlight” it by drawing it to the screen in (for example) a brighter color than usual.

This method does sound slow in this case, especially since the object I'm dealing with has 195,000 vertices (65,000 polygons), but it seems to be the only way to do it.

Well, it’s no slower than rendering your 3D scene normally is…possibly quite a bit faster because there is no texture, no lighting, etc.

If you only care what the user clicked on with the mouse, you can save time (as I said before) by only rendering a small viewport close to the mouse coordinates.
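One detail worth sketching here: readPixels uses a bottom-left origin while DOM mouse events use a top-left origin, so the mouse coordinate needs a y-flip before you position that small viewport or read the pixel. A minimal helper, assuming the canvas is not CSS-scaled so its client height equals the drawing-buffer height:

```javascript
// Map a mouse position (DOM coordinates, y pointing down) to the
// bottom-left-origin pixel coordinates that gl.readPixels() and
// gl.viewport() expect (y pointing up).
function mouseToGLPixel(offsetX, offsetY, canvasHeight) {
  return { x: Math.floor(offsetX),
           y: canvasHeight - 1 - Math.floor(offsetY) };
}
```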

But testing 65,000 polygons in software…especially in JavaScript(!!)…is going to be a hell of a lot slower - and vastly more software effort to get it right. Let the GPU do the work.

– Steve

Hi Steve,

Sorry for not stating it clearly: what I mean by a segment is a group of polygons. The user defines a 2D region, and the system selects the polygons that fall within it.

So that means not just one pixel is going to be picked, but a lot of them. That's why I assume it is going to be slow. Also, I'm not sure how to write the code to make the GPU do this task.

While trying this method, are there any WebGL functions that can retrieve the vertices that are visible and will be used for rendering (getting values from the z-buffer, maybe)?

Thanks
Dave

OK - let’s suppose your “pick region” is a rectangle…a “pick box”. Then you set your viewport/scissor to the size of the pick-box. You create an off-screen rendering context with those dimensions and you render each group of polygons that correspond to a “segment” in a different color. I presume that your segments can be numbered - and that you can translate that number into a unique color using some rule or other.

You’ll need to reserve a special color (let’s say it’s BLACK) which you clear the screen to before you render everything…make sure that your segment numbers never translate to black.

Then you read back those pixels and (in the CPU) you loop through them all translating the colors that you find back into “segment numbers”.

Now you have a list of all of the segments that were visible inside the pick box.

If your “2D region” is not rectangular - then you’ll have to render to an area equal to the bounding box of your 2D region.

Suppose you wanted to test a circular region. You’d have a couple of choices:

  1. When you loop through the read-back pixels, ignore the ones that lie outside of the circle.
  2. Before you render your polygons, render a black rectangle with a circular hole in it to the off-screen buffer - place it at a Z coordinate that’s right next to the near clip plane…or at least closer to the camera than all of your segment polygons.

I recommend (2) because it’s probably faster to traverse the pixels in the returned data if you don’t have to be too careful about whether you’re inside the picked region or not.
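The CPU-side readback loop described above can be sketched as follows. It assumes ids were packed into the RGB bytes of each pixel (any bijective rule works); black is the reserved clear/mask color and is skipped:

```javascript
// Walk the RGBA buffer returned by gl.readPixels over the pick box
// and collect the distinct segment ids that appear in it.
function visibleSegments(pixels /* Uint8Array, RGBA order */) {
  var seen = {};
  var ids = [];
  for (var i = 0; i < pixels.length; i += 4) {
    // Inverse of the id -> color packing rule (RGB bytes).
    var id = (pixels[i] << 16) | (pixels[i + 1] << 8) | pixels[i + 2];
    if (id !== 0 && !seen[id]) {  // skip the reserved black background
      seen[id] = true;
      ids.push(id);
    }
  }
  return ids;
}
```

With the black mask trick (choice 2), pixels outside the picked region decode to id 0 and fall out of the loop for free.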

So that means not just one pixel is going to be picked, but a lot of them. That's why I assume it is going to be slow. Also, I'm not sure how to write the code to make the GPU do this task.

Yeah - I understand. This technique works for a region that you need to check…it’s just faster if you only have to pick a single pixel.

The code in the GPU is extremely simple - it’s just rendering each of your segments in a single, unchanging color. Send the color of each “segment” to the GPU and have it write that color directly to the frame buffer without lighting, texturing or anything else. Your vertex shader can probably be identical to the one you usually use to render your polygons - the fragment shader will have just one line of code!

If you do a separate draw call for each “segment” then you can pass the color in a “uniform” variable - if you draw multiple segments with a single call then you’ll need to create a per-vertex data “attribute” to hold the color.
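For the uniform-per-draw-call route, a minimal sketch of the picking shaders and the per-segment color upload might look like this. The names (pickColor, worldViewProjection) are assumptions, not O3D/WebGL built-ins; adapt them to your own pipeline:

```javascript
// Vertex shader: the usual transform, nothing else.
var pickVertexShader = [
  "attribute vec3 position;",
  "uniform mat4 worldViewProjection;",
  "void main() {",
  "  gl_Position = worldViewProjection * vec4(position, 1.0);",
  "}"].join("\n");

// Fragment shader: really just one line - write the flat color.
var pickFragmentShader = [
  "precision mediump float;",
  "uniform vec4 pickColor;",
  "void main() { gl_FragColor = pickColor; }"].join("\n");

// Convert a segment id to the normalized RGBA value to upload with
// gl.uniform4fv(pickColorLocation, segmentColor(segment.id))
// before each segment's draw call.
function segmentColor(id) {
  return [((id >> 16) & 0xFF) / 255,
          ((id >>  8) & 0xFF) / 255,
          ( id        & 0xFF) / 255,
          1.0];
}
```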

While trying this method, are there any WebGL functions that can retrieve the vertices that are visible and will be used for rendering (getting values from the z-buffer, maybe)?

No. There are no WebGL features to retrieve vertices that are visible.

In mainstream OpenGL, there are two ways to do what you want (using “feedback” and using “occlusion testing”) - but neither of those techniques are available in OpenGL-ES (which is what WebGL is based upon). That’s why I’m recommending this technique.

Tried it, with only halfway success. The thing is that the polygons in my 3D object are so dense that sometimes the colour detection identifies the wrong element.

However, it seems colour detection is the only viable solution for shape segmentation.

Thanks

Strange: Chrome picks up a lot of unwanted polygons, while Firefox 4 works fine. The only thing I notice is that Chrome has anti-aliasing enabled by default and Firefox has it disabled. I never knew anti-aliasing could make such a big difference here.

How do I disable anti-alias in Chrome?

Yes - as I explained above, it’s very important to disable antialiasing because that blends colors from adjacent pixels - and the result is disastrous for this algorithm.

You disable AA when you create the rendering context - you’ll want something like this:


   gl = canvas.getContext("experimental-webgl", {
     alpha:              false,
     antialias:          false,   // <=== DO THIS!!!
     depth:              true,
     stencil:            false,
     premultipliedAlpha: false
   });

I actually did try that method, but it does not work on a second call to getContext; it still uses the attributes from the first getContext call.

That means if I set anti-alias to true on the first getContext call, anti-alias is true. But if I try to change anti-alias to false later on, it remains true.

I've read through some mailing lists, and I'm not sure whether my understanding is correct, but it seems that getContext's attributes are only processed on the first call. When there are multiple getContext calls, WebGL uses only the first call's attributes and ignores the rest. That seems to fit my situation.

So does that mean I must keep anti-alias set to false all the time to make the segmentation method work? Or is there any way around it so I can switch the anti-alias value whenever I want?

Sadly, the WebGL specification (section 2.1):

https://www.khronos.org/registry/webgl/specs/1.0/#2.1

…says:

On subsequent calls to getContext() with the ‘webgl’ string, the passed WebGLContextAttributes object, if any, shall be ignored.

…which seems to say that if you need to turn antialiasing off for picking (which you certainly do), there is no way to have it turned on for rendering.

I've read through the mailing list; I think they made it this way to support interoperability between WebGL libraries. Nevertheless, I still think the flexibility to change attributes is essential for most 3D applications and should be part of WebGL.

But hey, maybe the working group has a bigger picture that I haven't seen.

But really, thanks for your help Steve. My shape segmentation feature is now complete and the performance is acceptable (I've also been too lazy to read through all the pixels on the screen; it's a research project after all).

I wonder if anyone else working with WebGL or browser-based VRML and X3D has done this before - letting the user define their own segments on a 3D object. It seems I'm the only one working on this.

Thanks for sharing your ideas. :smiley: Newbie here