The WebGL lessons are really great but as far as I can tell all the lighting examples have one flaw:

They don't implement any code to account for lighting being obstructed by other objects. For example, consider the Lesson 12 simple example or the Lesson 13 example. If another crate were placed beyond the first one, with respect to the light source, we would expect the second crate to be in the dark because it is obstructed by the first crate. However, since the lighting is calculated purely from the angle between the normal and the light source vector, the second crate is illuminated just like the first one.
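To make the flaw concrete, here is a minimal JavaScript sketch of the per-fragment diffuse term such lessons compute (the function names and coordinates are mine, chosen for illustration): brightness depends only on the angle between the normal and the direction to the light, so occlusion never enters the calculation.

```javascript
// Dot product and normalization for 3-component vectors.
function dot(a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

function normalize(v) {
  const len = Math.sqrt(dot(v, v));
  return [v[0] / len, v[1] / len, v[2] / len];
}

// Lambertian diffuse weighting: max(dot(N, L), 0).
function diffuseWeight(normal, fragPos, lightPos) {
  const toLight = normalize([
    lightPos[0] - fragPos[0],
    lightPos[1] - fragPos[1],
    lightPos[2] - fragPos[2],
  ]);
  return Math.max(dot(normalize(normal), toLight), 0.0);
}

// Two crate faces, one directly behind the other relative to the light.
const light = [0, 0, 10];
const frontFace = { normal: [0, 0, 1], pos: [0, 0, 1] };
const backFace  = { normal: [0, 0, 1], pos: [0, 0, -5] }; // occluded by the first crate

const wFront = diffuseWeight(frontFace.normal, frontFace.pos, light);
const wBack  = diffuseWeight(backFace.normal, backFace.pos, light);
// Both faces receive the full diffuse weight of 1.0, even though
// the back face should be in shadow.
```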

The problem is somewhat less visible if you implement light fall-off over distance, but there are still plenty of cases where this produces the wrong lighting effect.

One possible way to correct this, which I thought of, would be to:

1. Render the scene (to a non-visible buffer) from each light source, using a different color for each fragment.
2. Scan the results to see which fragments get contributions from that light source (i.e. if the light source cannot see a fragment, then that light source should not be used when rendering the fragment to the visible screen).
3. Somehow record, per fragment, the result of this evaluation for each light source.
4. Render the scene using only the light sources that contribute to each particular fragment.

This means a potentially drastic reduction in frame rate, because each scene must be rendered n+1 times (where n is the number of point light sources). Nevertheless, I would like to try coding this, since my application does not necessarily require a fast frame rate.

This also means you are limited in how many fragments you can have in a scene, since there are only approx 256*256*256 = 16,777,216 colors to work with. The exact limit depends on the detail of the objects being used, but if an object has 50,000 fragments then this allows for approx 335 objects. If you are rendering the entire world each time, this may not be enough. However, in complex worlds the user will probably limit the rendered objects to a sub-set of the world objects, so this may be sufficient (e.g. no need to render any objects behind the current view).
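The ID-to-color encoding from step 1 can be sketched in plain JavaScript: pack an integer ID into the 24 bits of RGB when drawing the off-screen pass, and decode it from the pixel bytes you read back (the helper names here are my own):

```javascript
// Encode an integer ID (0 .. 16777215) as an [r, g, b] byte triple.
function idToColor(id) {
  return [
    (id >> 16) & 0xff, // red byte: bits 16-23
    (id >> 8) & 0xff,  // green byte: bits 8-15
    id & 0xff,         // blue byte: bits 0-7
  ];
}

// Decode the bytes of a read-back pixel to recover the original ID.
function colorToId(r, g, b) {
  return (r << 16) | (g << 8) | b;
}

// The encoding round-trips for any ID below 256*256*256 = 16777216.
const color = idToColor(1234567);
const id = colorToId(color[0], color[1], color[2]);
```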

The issue I have is determining the best way to get the light-source contribution data back to the shader, so that the shader can determine which light sources to use for the final rendering. I can use a slightly modified version of Lesson 13 to do the shading, expanding it for multiple light sources, but I need a way to get the per-fragment light-source data to the shader.

Assuming that I limit myself to 8 point light sources, I could represent this with an additional byte per fragment (i.e. one bit per point light source, indicating whether that light source contributes to that fragment).
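A sketch of that bitmask, with hypothetical helper names:

```javascript
// Bit i of the mask byte is set when light i can see the fragment.
function setLightVisible(mask, lightIndex) {
  return mask | (1 << lightIndex);
}

function lightContributes(mask, lightIndex) {
  return (mask & (1 << lightIndex)) !== 0;
}

let mask = 0;                    // start with no contributing lights
mask = setLightVisible(mask, 0); // light 0 sees this fragment
mask = setLightVisible(mask, 5); // light 5 sees it too
// mask is now 0b00100001 = 33; the shader would skip lights 1-4, 6 and 7
// when accumulating this fragment's lighting.
```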

Would it be best to send this data with one of the existing arrays being passed to WebGL or create a new one?

Since vertices, texture coordinates and normals are all floats, these arrays are not ideal for tagging on a byte of light-source data. The vertex index array is a byte array, so I could add the light data to it.

As far as I can see, adding it to an existing array would eliminate the overhead of creating yet another array that needs to be pushed to WebGL each frame. However, the vertex index data is not likely to change each frame, so it is essentially static, whereas the light contributions could change frequently; updating them would mean stepping over all the static vertex index data each time (i.e. because every 4th piece of data would be light data).
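The alternative of keeping the masks in their own array can be sketched like this: the masks live in a small Uint8Array that is re-uploaded each frame with the DYNAMIC_DRAW hint, while the index buffer stays STATIC_DRAW and is never touched (vertexCount and the WebGL calls in the comments are assumptions about the surrounding setup, not code from the lessons):

```javascript
// One mask byte per vertex, separate from the static index data.
const vertexCount = 4;
const lightMasks = new Uint8Array(vertexCount); // all zero: no lights yet

// Each frame, recompute only the masks that changed...
lightMasks[2] = 0b00100001; // lights 0 and 5 now reach vertex 2

// ...then push just this small array to the GPU, e.g.:
//   gl.bindBuffer(gl.ARRAY_BUFFER, lightMaskBuffer);
//   gl.bufferData(gl.ARRAY_BUFFER, lightMasks, gl.DYNAMIC_DRAW);
// The index buffer, uploaded once with gl.STATIC_DRAW, is untouched,
// so no static data is stepped over when the light data changes.
```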

Any thoughts?