OK - let’s suppose your “pick region” is a rectangle…a “pick box”. Then you set your viewport/scissor to the size of the pick box. You create an off-screen render target (a framebuffer) with those dimensions and you render each group of polygons that corresponds to a “segment” in a different color. I presume that your segments can be numbered - and that you can translate that number into a unique color using some rule or other.
You’ll need to reserve a special color (let’s say it’s BLACK) which you clear the screen to before you render everything…make sure that your segment numbers never translate to black.
Then you read back those pixels and (in the CPU) you loop through them all translating the colors that you find back into “segment numbers”.
Now you have a list of all of the segments that were visible inside the pick box.
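As a sketch of what that color-mapping rule and read-back loop might look like (assuming an RGBA8 framebuffer and at most 2^24 segments; the `+1` offset is what keeps black reserved for the cleared background):

```javascript
// Pack a segment number into a unique RGB color.
// We add 1 so that segment 0 doesn't map to black - black is
// reserved for the "nothing here" background the screen was cleared to.
function segmentToColor(segment) {
  const id = segment + 1;
  return [(id >> 16) & 0xff, (id >> 8) & 0xff, id & 0xff];
}

// Decode one read-back pixel into a segment number, or null for background.
function colorToSegment(r, g, b) {
  const id = (r << 16) | (g << 8) | b;
  return id === 0 ? null : id - 1;
}

// After gl.readPixels(), walk the RGBA buffer and collect every
// segment that contributed at least one visible pixel.
function visibleSegments(pixels /* Uint8Array of RGBA bytes */) {
  const found = new Set();
  for (let i = 0; i < pixels.length; i += 4) {
    const seg = colorToSegment(pixels[i], pixels[i + 1], pixels[i + 2]);
    if (seg !== null) found.add(seg);
  }
  return found;
}
```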
If your “2D region” is not rectangular - then you’ll have to render to an area equal to the bounding box of your 2D region.
Suppose you wanted to test a circular region. You’d have a couple of choices:
1. When you loop through the read-back pixels, ignore the ones that lie outside of the circle.
2. Before you render your polygons, render a black rectangle with a circular hole in it to the off-screen buffer - place it at a Z coordinate that’s right next to the near clip plane…or at least closer to the camera than all of your segment polygons.
I recommend (2) because it’s probably faster to traverse the pixels in the returned data if you don’t have to be too careful about whether you’re inside the picked region or not.
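For reference, option (1) might look something like this - a sketch only, where `pixels` is the RGBA data read back from the circle’s bounding box and the circle is assumed to be inscribed in that box:

```javascript
// Option 1: read back the whole bounding box of the circular region,
// then skip any pixel whose center lies outside the circle.
// width/height are the pick-box dimensions; color 0 (black) is reserved
// for the background, so visible segment ids come back as (color - 1).
function segmentsInCircle(pixels, width, height) {
  const cx = width / 2, cy = height / 2;
  const r2 = Math.min(cx, cy) ** 2;           // squared radius
  const found = new Set();
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const dx = x + 0.5 - cx, dy = y + 0.5 - cy;
      if (dx * dx + dy * dy > r2) continue;   // outside the circle - ignore
      const i = (y * width + x) * 4;
      const id = (pixels[i] << 16) | (pixels[i + 1] << 8) | pixels[i + 2];
      if (id !== 0) found.add(id - 1);
    }
  }
  return found;
}
```

The per-pixel distance test here is exactly the “be careful about whether you’re inside the picked region” cost that option (2) avoids by masking in the GPU instead.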
So that means not just one pixel is going to be picked - it’s a lot of them. That’s why I assume it is going to be slow. Also, I’m not sure how to write the code that makes the GPU do this task.
Yeah - I understand. This technique works for a region that you need to check…it’s just faster if you only have to pick a single pixel.
The code in the GPU is extremely simple - it’s just rendering each of your segments in a single, unchanging color. Send the color of each “segment” to the GPU and have it write that color directly to the frame buffer without lighting, texturing or anything else. Your vertex shader can probably be identical to the one you usually use to render your polygons - the fragment shader will have just one line of code!
If you do a separate draw call for each “segment” then you can pass the color in a “uniform” variable - if you draw multiple segments with a single call then you’ll need to create a per-vertex data “attribute” to hold the color.
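Assuming WebGL 1 (GLSL ES 1.00) and one draw call per segment, the shaders might look like this. The vertex shader is a bare-bones stand-in for whatever you normally use, and `pickColor` is a hypothetical uniform name:

```javascript
// GLSL ES 1.00 shaders for the picking pass (WebGL 1, one draw per segment).
// The vertex shader is a minimal stand-in for your usual one.
const pickVertexShader = `
  attribute vec4 position;
  uniform mat4 modelViewProjection;
  void main() { gl_Position = modelViewProjection * position; }
`;

// The fragment shader really is just one line: write the segment's
// color straight out - no lighting, no texturing.
const pickFragmentShader = `
  precision mediump float;
  uniform vec4 pickColor;
  void main() { gl_FragColor = pickColor; }
`;

// Convert a 0-255 byte color (from your segment-number-to-color rule)
// into the 0.0-1.0 floats that gl.uniform4f() expects, e.g.:
//   gl.uniform4f(pickColorLoc, ...colorToUniform([0, 0, 1]));
function colorToUniform([r, g, b]) {
  return [r / 255, g / 255, b / 255, 1.0];
}
```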
While trying this method, are there any WebGL functions that can actually retrieve the vertices that are visible and should be used for rendering (getting values from the Z buffer, maybe)?
No. There are no WebGL features to retrieve vertices that are visible.
In mainstream OpenGL, there are two ways to do what you want (using “feedback” and using “occlusion testing”) - but neither of those techniques are available in OpenGL-ES (which is what WebGL is based upon). That’s why I’m recommending this technique.