Compute the extent of a scene
GlowScript (glowscript.org) is an environment that makes it easy, even
for novices, to write programs that generate 3D animations using WebGL.
For example, the shortest GlowScript program is the single statement
"box()", which generates a 3D cube, and you can rotate the camera view
and zoom using the mouse.
One feature is "autoscaling": by default, the camera is positioned
far enough back to display the entire scene (you can override this).
Currently the extent of the scene is computed on the CPU, but it
would be better to determine the extent of the scene on the GPU.
I came up with what seems to be a good way to use the GPU to
compute the extent of a scene. Assuming for simplicity that the
center of the scene is (0,0,0), here's the idea:
Change the display elements from gl.TRIANGLES to gl.POINTS in order
to process only the vertices, not all the rasterized fragments
interior to the triangles.
Specify gl.depthFunc(gl.GEQUAL), as we're interested in the largest
"depth" (extent) rather than the smallest.
In the vertex shader, calculate extent as the maximum of the absolute
values of the components of the world-space position, and pack this
value into the color in such a way that the value can be recovered
when the pixel is read back.
Set gl_Position = vec4(-1.0, -1.0, extent, 1.0), so that we're
continually writing into pixel (0, 0), with the GEQUAL depth
criterion ensuring that the largest value of extent will be the final
color of pixel (0, 0). I also tried writing to (-1.0+0.5/width,
-1.0+0.5/height), on the thought that maybe I had to give a coordinate
at the center of pixel (0, 0), but that made no difference.
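Put together, a vertex shader for this extent pass might look like the following sketch (GLSL ES 1.00). All the names, the packing scheme, and the normalization uniform are illustrative assumptions, not GlowScript's actual code:

```glsl
// Illustrative extent-pass vertex shader; the fragment shader would
// simply write gl_FragColor = vColor.
attribute vec3 position;   // model-space vertex position
uniform mat4 model;        // model-to-world transform
uniform float maxExtent;   // assumed bound that normalizes extent into [0,1)
varying vec4 vColor;       // packed extent, passed to the fragment shader

// Pack a float in [0,1) into an RGBA color, one base-255 digit per channel.
vec4 packFloat(float v) {
  vec4 enc = fract(v * vec4(1.0, 255.0, 65025.0, 16581375.0));
  enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
  return enc;
}

void main(void) {
  vec3 world = (model * vec4(position, 1.0)).xyz;
  vec3 a = abs(world);
  float extent = max(a.x, max(a.y, a.z));
  vColor = packFloat(extent / maxExtent);
  // Every vertex lands on pixel (0,0); the z component carries the
  // normalized extent, so with gl.GEQUAL the largest extent wins.
  gl_Position = vec4(-1.0, -1.0, extent / maxExtent, 1.0);
}
```

One caveat worth noting: for gl.GEQUAL to accept any fragment at all, the depth buffer has to be cleared to 0.0 (gl.clearDepth(0.0)) rather than the default 1.0, since every incoming depth value is less than 1.0.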
I've done various tests that piggyback on my GPU code for picking an
object, and I believe my calculations are correct. (My pick code
assigns false colors to objects, equal to their internal ID numbers,
so that the final color at the mouse pixel location is the ID
number of the frontmost object lying under the mouse.)
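For concreteness, the false-color picking scheme amounts to encoding a small integer ID in the RGB bytes and decoding it on readback. A minimal sketch (the little-endian byte order is an assumption, not necessarily what GlowScript uses):

```javascript
// Sketch of false-color picking: encode an object ID into the RGB bytes
// a pick shader would emit, and decode the bytes that readPixels returns
// at the mouse location.

// Encode an ID in [0, 2^24) as three base-256 bytes, lowest byte in red.
function idToColor(id) {
  return [id & 0xff, (id >> 8) & 0xff, (id >> 16) & 0xff];
}

// Recover the ID from the RGB bytes of the pixel under the mouse.
function colorToId(rgb) {
  return rgb[0] + 256 * rgb[1] + 65536 * rgb[2];
}

console.log(colorToId(idToColor(123456)));  // 123456
```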
However, there seems to be something fundamental I don't understand
about repeatedly storing into pixel location (0, 0), because I always
get zeros for the color when I read that pixel back with
readPixels(0, 0, 1, 1, ...).
I'm writing to a renderbuffer. Would it make a difference if I were
to write to a texture?
I would much appreciate any advice you can give. I assume that I'm
making some fundamental mistake in how the shaders work.