Some questions

I just acquired a project to do visualization in WebGL. My experience is a few years of C++, OpenGL and GLSL. I did some reading and have a basic idea about what's possible and what isn't.

So here is what the WebGL widget should be able to do. That picture is from an MRI viewer I wrote a couple of years ago.

So there's nothing too fancy. Just some geometry, colored fiber tracts as lines or tubes, and textured slices; interaction will be decided once I know what's possible and what isn't.

So here are my questions.

  • I already know there are no 3D textures. That would mean about 160+200+160 2D textures for the three directional slice stacks. How would WebGL handle such a large number of textures?
  • Is there some sort of native transparency? I guess vertex sorting or multipass rendering is probably not an option?
  • How is picking handled? Do I have to keep my own scene graph and calculate intersections myself?
  • Even when downsampled, I guess the raw data for a scene like the one in the picture is somewhere between 20 and 50 MB (being a bit liberal with the numbers here). That's quite a big chunk that has to be transferred to the browser. OK, that's not a question :slight_smile:

That's it for now. I'm sure more questions will come up the more I learn. Thanks in advance for any answers.

WebGL is just a wrapper for the underlying OpenGL, OpenGL-ES or Direct3D implementation - with the restriction that only features that are available in OpenGL-ES 2.0 are provided.

The amount of available texture map memory is the same as for a desktop application running on the same machine - with the proviso that the browser itself uses OpenGL/Direct3D for compositing, so it's going to take a small amount of texture RAM. If your application ran within memory limits on some particular PC under Direct3D or OpenGL, then it should run just as well under WebGL.

Obviously if you expect this to run on a cellphone - then you’re going to have to be very careful about texture consumption.

I’m not sure what you mean by “native transparency” - the features you have are identical to OpenGL/GLSL…because this IS OpenGL/GLSL…with a few restrictions due to OpenGL-ES. If you need to sort vertices - you’ll be doing it in JavaScript, which is obviously going to be a lot slower than C++, but maybe not too terrible if you can use a standard library function to do the actual sorting because JavaScript standard library routines are probably written in C++.

Multipass techniques are certainly possible in WebGL - I’ve done multipass shadow rendering and also light blooms and such like. There is one notable restriction: There is no ability to render the depth buffer to texture - so if you need to use depth in a texture (as I did when rendering shadows), you have to do the trick of rendering your geometry one extra time with a shader that chops the per-pixel Z into RGB data.
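
The packing itself is only a few lines of GLSL. Roughly (a sketch of the general technique, not my actual code - the helper names are arbitrary):

```js
// Sketch: fragment shader (built as a JS string) that packs gl_FragCoord.z
// into the RGBA channels of an ordinary colour texture.
var packDepthFS = [
  "precision highp float;",
  "vec4 packDepth(float depth) {",
  "  const vec4 bitShift = vec4(256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0);",
  "  const vec4 bitMask  = vec4(0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0);",
  "  vec4 comp = fract(depth * bitShift);",
  "  comp -= comp.xxyz * bitMask;",
  "  return comp;",
  "}",
  "void main() { gl_FragColor = packDepth(gl_FragCoord.z); }"
].join("\n");

// In the shader that later samples that texture, the matching unpack is:
//   float unpackDepth(vec4 rgba) {
//     const vec4 bitShift = vec4(1.0 / (256.0 * 256.0 * 256.0),
//                                1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);
//     return dot(rgba, bitShift);
//   }
```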

Picking is not implemented in the API…you’ll have to do that in software somehow.
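
The cheapest trick is usually colour picking: render the scene one extra time with every pickable object in a flat, unique colour, then read back the pixel under the mouse. A rough sketch (the framebuffer setup and the drawSceneWithPickingColours() call are placeholders, not real API):

```js
// Sketch of colour-based picking. Each pickable object gets a 24-bit ID encoded
// as a flat RGB colour; the scene is drawn once into an offscreen framebuffer
// with those colours, and gl.readPixels() tells you which object is under the mouse.
function idToColor(id) {
  return [(id >> 16) & 0xff, (id >> 8) & 0xff, id & 0xff];  // fed to the picking shader
}

function pick(gl, pickFramebuffer, mouseX, mouseY, canvasHeight, drawSceneWithPickingColours) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, pickFramebuffer);
  drawSceneWithPickingColours();                      // same geometry, flat ID colours

  var pixel = new Uint8Array(4);
  // readPixels has a bottom-left origin, mouse coordinates a top-left one.
  gl.readPixels(mouseX, canvasHeight - mouseY, 1, 1,
                gl.RGBA, gl.UNSIGNED_BYTE, pixel);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);

  var id = (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
  return id === 0 ? null : id;                        // 0 reserved for the background
}
```

The alternative is keeping your own scene graph (or bounding volumes) in JavaScript and intersecting a ray from the camera through the mouse position yourself - more work, but it doesn't cost an extra render pass.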

As for your data, the “20 to 50MB” would obviously have to be downloaded - but with a modern (say) 100 Mbit/s broadband connection (that's what I mostly get at home from Time Warner cable - and that's achievable on a “true” 4G network on a cellphone), that could take between 1.6 and 4.0 seconds…not likely to be a huge deal. But if your users are on dialup or a 3G cellphone, it'll be a lot longer (on a 19.2 kbit/s modem, you'd be looking at something like 6 hours for 50MB - on a 3G cell, it'll take around 18 minutes).
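
(To spell out the arithmetic: 50 MB is roughly 400 Mbit, and 400 Mbit ÷ 100 Mbit/s = 4 s, with 20 MB giving 1.6 s; over a 19.2 kbit/s modem the same 400,000 kbit works out to about 21,000 s, which is where the “something like 6 hours” comes from.)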

Thanks Steve, that clears things up a bit.

That was badly worded and kind of a stupid thing to ask. The question should've been more like how to do it. My JavaScript experience is a few years old, so I'm not sure what's feasible to do in JS. It seemed like a bad idea to even attempt vertex sorting in JS.

That’s interesting. Is that code or an example/tutorial available?

Writing your own code to mindlessly sort polygons in JavaScript would probably be pretty slow. The “Array” class has a “sort()” function. You can pass it a comparison function and use it to sort arrays of any kind of class - so you could certainly use it to sort polygons by range. The comparison function would still be in slow JavaScript - but at least the data copying and the overall mechanics of the thing would probably be implemented in C++ under the hood…so it might not be that terrible.
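
Something along these lines, say (just a sketch - the per-triangle "centroid" field and the camera position are assumptions about your data layout, not anything from a library):

```js
// Sketch: back-to-front sort of translucent triangles using the built-in
// Array.prototype.sort() with a range comparator.
function squaredDistance(p, q) {
  var dx = p[0] - q[0], dy = p[1] - q[1], dz = p[2] - q[2];
  return dx * dx + dy * dy + dz * dz;
}

function sortBackToFront(triangles, cameraPos) {
  triangles.forEach(function (tri) {
    tri.range = squaredDistance(tri.centroid, cameraPos);   // cache once per frame
  });
  triangles.sort(function (a, b) {
    return b.range - a.range;                               // furthest first
  });
}
```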

The problem is that there is no indication of what kind of sort algorithm it uses. I'd bet it's using the C/C++ “qsort” function under the hood - that's a “Quick-sort”…but it isn't guaranteed by any kind of JavaScript specification.

Sadly, quick sort isn't the best algorithm for sorting translucent polygons by range. You have last frame's sorted list available to you, and for “normal” eyepoint motion the order of the polygons doesn't change much from one frame to the next - so ideally you want an algorithm with O(N) performance on already-sorted data. You'd also want a stable algorithm that doesn't keep swapping the order of two equal items, because that'll cause flickering in your image.

That means you'd have to code the entire sort algorithm in JavaScript, which is pretty slow.
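
If you did end up writing it yourself, an insertion sort is about the simplest algorithm that has both properties - it's stable and it degenerates to roughly O(N) when the list is already nearly in order. A sketch:

```js
// Sketch: insertion sort over last frame's ordering, descending by range.
// Equal-range polygons are never swapped (stability -> no flickering), and a
// nearly-sorted array costs little more than one pass.
function insertionSortByRange(polys) {
  for (var i = 1; i < polys.length; i++) {
    var item = polys[i];
    var j = i - 1;
    while (j >= 0 && polys[j].range < item.range) {
      polys[j + 1] = polys[j];   // shift closer polygons up one slot
      j--;
    }
    polys[j + 1] = item;
  }
}
```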

But do you need to sort at all? Don’t you have a-priori knowledge of how these things should be rendered given the camera position? Very often with this kind of situation, you only need the data to be sorted in one of four or eight orderings - which you could do offline or just once on startup. Then at runtime, you’d maybe only need to decide which octant the camera is in compared to the center of the skull and regurgitate the polygons in one of the 8 pre-sorted orders. If that’s a valid approach, it would be lightning fast.
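
At runtime the whole thing then boils down to picking one of eight index buffers - something like this (a sketch; the pre-built buffers are assumed to exist already):

```js
// Sketch: choose one of eight pre-sorted index orderings based on which octant
// the camera occupies relative to the data's centre, then draw with it.
function octantOf(cameraPos, centre) {
  return (cameraPos[0] > centre[0] ? 1 : 0) |
         (cameraPos[1] > centre[1] ? 2 : 0) |
         (cameraPos[2] > centre[2] ? 4 : 0);   // 0..7
}

function drawTranslucent(gl, cameraPos, centre, indexBuffers, indexCounts) {
  var octant = octantOf(cameraPos, centre);
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffers[octant]);
  gl.drawElements(gl.TRIANGLES, indexCounts[octant], gl.UNSIGNED_SHORT, 0);
}
```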

As for the post-effects stuff…sorry, but I can’t give away any code.

That's an approach I'm certainly going to try out. But as you can see in my picture, the inner surface is folded irregularly, so there are different overlaps from different angles even when staying within the same quadrant or octant.

But as I said, I'm still on my fact-finding mission, and if transparency is too much of a hassle I might be able to skip it.

If polygons are that densely packed and at random angles - you won’t succeed in depth-sorting them correctly anyway.

Check out my (ancient!) OpenGL FAQ page about alpha sorting issues:

 [http://www.sjbaker.org/steve/omniv/alpha_sorting.html](http://www.sjbaker.org/steve/omniv/alpha_sorting.html)

…the awful ASCII-art images towards the bottom of the page explain why depth sorting doesn't work when polygons are in close proximity. It boils down to the fact that polygons are not points - whether you sort by nearest vertex, furthest vertex, centroid, or ANY other criterion, you'll still end up with polygons rendered in the wrong order.

Why are you doing that anyway? Wouldn’t a dense set of parallel planes stacked around all three axes be the simplest representation? Render whichever set is closest to being at right angles to the line of sight.
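
Choosing the stack is then just a question of which component of the view direction dominates - roughly (a sketch; the three stack objects are placeholders):

```js
// Sketch: given the normalised view direction, pick whichever axis-aligned
// slice stack is most nearly perpendicular to it. The slices of that stack are
// then drawn from the far side towards the camera (flip the iteration order
// when the view direction is negative along that axis).
function chooseStack(viewDir, stacks) {
  var ax = Math.abs(viewDir[0]);
  var ay = Math.abs(viewDir[1]);
  var az = Math.abs(viewDir[2]);
  if (ax >= ay && ax >= az) return stacks.x;   // slices perpendicular to X
  if (ay >= az)             return stacks.y;   // slices perpendicular to Y
  return stacks.z;                             // slices perpendicular to Z
}
```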

If the polygons aren’t moving relative to each other, you could write an offline BSP tree generator. Rendering dense sets of polygons from a BSP tree is a lot faster than sorting - and it avoids the problems I described in my FAQ:

 [http://en.wikipedia.org/wiki/BSP_tree](http://en.wikipedia.org/wiki/BSP_tree)

BSP trees are a disaster when moving objects are present in the scene though…so they’ve somewhat fallen from favor. If all of your scan data is pre-set though - it might be an elegant solution.
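
Once the tree has been built offline, the runtime part is just a small back-to-front traversal - roughly (a sketch; the node layout with a plane normal "n", offset "d" and front/back children is an assumption):

```js
// Sketch: back-to-front traversal of a pre-built BSP tree. At each node, draw
// the subtree on the far side of the splitting plane first, then the polygons
// lying on the plane, then the near subtree.
function drawBSP(node, cameraPos, drawPolygons) {
  if (!node) return;
  var side = node.n[0] * cameraPos[0] +
             node.n[1] * cameraPos[1] +
             node.n[2] * cameraPos[2] - node.d;
  if (side >= 0) {                     // camera is on the 'front' side
    drawBSP(node.back, cameraPos, drawPolygons);
    drawPolygons(node.polygons);
    drawBSP(node.front, cameraPos, drawPolygons);
  } else {                             // camera is on the 'back' side
    drawBSP(node.front, cameraPos, drawPolygons);
    drawPolygons(node.polygons);
    drawBSP(node.back, cameraPos, drawPolygons);
  }
}
```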