Lighting - Light Obstruction And Shadows

The WebGL examples at http://learningwebgl.com/blog/?p=1523 are really great but as far as I can tell all the lighting examples have one flaw: :cry:

They don’t implement any code that will provide lighting obstruction by other objects. For example, consider the Lesson 12 simple example or the Lesson 13 example. If another crate was placed beyond the first one, with respect to the light source, we would expect the second crate to be in the dark because it is obstructed by the first crate. However, since the lighting is just calculated based on the angle between the normal and the light source vector, the second crate will be illuminated just like the first one.

The problem is somewhat less visible if you implement light drop-off over distance, but there are still plenty of cases where this will produce the wrong lighting effect.

One possible way that I thought of to correct this would be to:

  1. Render (to a non-visible buffer) the scene from each light source, using a different color for each fragment.
  2. Scan the results to see which fragments get contributions from that light source (i.e. if the light source cannot see that fragment, then that light source should not be used when rendering the fragment to the visible screen)
  3. Somehow record the results on a per-fragment basis for each light source
  4. Render the scene using only the light sources that contribute to that particular fragment

This means a potentially drastic reduction in frame rate because each scene must be rendered n+1 times (where n is the number of point light sources). Nevertheless, I would like to try coding this since my application does not necessarily require a fast frame rate.
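For reference, here is roughly what I have in mind for step 1: rendering into an off-screen framebuffer from the light’s position, with the depth test doing the occlusion work. This is just an untested sketch; gl is assumed to be an existing WebGL context, and lights / drawSceneWithColorIds() are placeholders for my own scene code and for a shader that outputs each fragment’s ID color instead of lighting.

```javascript
var SIZE = 512;                               // back-buffer resolution (placeholder)

var fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);

var colorTex = gl.createTexture();            // color attachment holds the ID colors
gl.bindTexture(gl.TEXTURE_2D, colorTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, SIZE, SIZE, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, colorTex, 0);

var depthBuf = gl.createRenderbuffer();       // depth buffer so nearer patches win
gl.bindRenderbuffer(gl.RENDERBUFFER, depthBuf);
gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, SIZE, SIZE);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                           gl.RENDERBUFFER, depthBuf);

// one extra pass per point light source (hence the n+1 renders)
for (var i = 0; i < lights.length; i++) {
    gl.viewport(0, 0, SIZE, SIZE);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    drawSceneWithColorIds(lights[i]);         // camera placed at lights[i]
    // step 2 (reading the pixels back to see which IDs are lit) goes here
}

gl.bindFramebuffer(gl.FRAMEBUFFER, null);     // back to the visible canvas
```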

This also means you are limited in how many fragments you can have in a scene, since there are only approx 256 × 256 × 256 = 16,777,216 colors to work with. It depends on the detail of the objects being used, but if each object has 50,000 fragments then this would allow approx 335 objects. If you are rendering the entire world each time this may not be enough. However, in complex worlds the user will probably limit the rendered objects to a sub-set of the world objects, so this may be sufficient (e.g. no need to render any objects behind the current view).
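For what it’s worth, the packing I am picturing just treats the RGB bytes as one 24-bit ID number. A quick untested sketch:

```javascript
// Turn a patch/fragment ID into a unique RGB color and back again.
// This gives the 256 x 256 x 256 = 16,777,216 IDs mentioned above
// (ID 0 is best reserved for the background/clear color).
function idToColor(id) {
    return [ (id        & 0xFF) / 255.0,      // red   = low byte
             ((id >> 8)  & 0xFF) / 255.0,     // green = middle byte
             ((id >> 16) & 0xFF) / 255.0 ];   // blue  = high byte
}

function colorToId(r, g, b) {                 // r, g, b as 0..255 bytes
    return r + g * 256 + b * 65536;
}
```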

The issue I have is determining the best way to get the light source contribution data back to the shader so that the shader can decide which light sources to use for the final rendering. I can use a slightly modified version of Lesson 13 to do the shading, expanding it for multiple light sources, but I need a way to get the per-fragment light source data to the shader.

Assuming that I limit myself to 8 point light sources then I could represent this with an additional byte per fragment (i.e. one bit for each point light source to indicate if that light source contributes to that fragment).
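In code the packing could look something like this (untested; the helper names are made up, and the GLSL side has to use floor/mod arithmetic because WebGL’s GLSL ES 1.0 has no bitwise operators):

```javascript
// JavaScript side: set bit i of the mask if light i reaches this fragment.
function packLightMask(visibleFromLight) {      // array of up to 8 booleans
    var mask = 0;
    for (var i = 0; i < 8; i++) {
        if (visibleFromLight[i]) {
            mask |= (1 << i);
        }
    }
    return mask;                                // 0..255, fits in one byte
}

// Shader side: extract bit i of a mask that arrives as a float in 0..255.
var maskSnippet =
    "float lightEnabled(float mask, float lightIndex) {\n" +
    "    return mod(floor(mask / pow(2.0, lightIndex)), 2.0);\n" +  // 0.0 or 1.0
    "}\n";
```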

Would it be best to send this data with one of the existing arrays being passed to WebGL or create a new one?

Since vertices, texture coordinates and normals are all floats, these arrays are not ideal for tagging on a byte of light source data. The Vertex Index is a byte array, so I could add it to that array.

As far as I can see, adding it to an existing array would eliminate the overhead of creating yet another array that needs to be pushed to WebGL each frame. However, the Vertex Index data is not likely to change each frame, so it is essentially static, whereas the light contributions could change frequently. Updating them would mean stepping over all the static Vertex Index data each time (i.e. every 4th piece of data would be light data).
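For the “create a new one” option, I think it boils down to the sketch below (untested; lightMaskData and shaderProgram.lightMaskAttribute are placeholders in the Lesson 13 naming style). One thing I noticed while writing it: as far as I can tell WebGL will not let the same buffer serve as both the element index buffer and a vertex attribute source, so a separate little buffer may be unavoidable anyway.

```javascript
// One unsigned byte per vertex, flagged DYNAMIC_DRAW because the masks
// may change every frame.  lightMaskData is a Uint8Array, one entry per vertex.
var lightMaskBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, lightMaskBuffer);
gl.bufferData(gl.ARRAY_BUFFER, lightMaskData, gl.DYNAMIC_DRAW);

// each frame, after recomputing the masks:
gl.bindBuffer(gl.ARRAY_BUFFER, lightMaskBuffer);
gl.bufferSubData(gl.ARRAY_BUFFER, 0, lightMaskData);

// when drawing, wire it up to an aLightMask attribute in the shader
// (normalized = false, so the shader sees the raw 0..255 value as a float):
gl.bindBuffer(gl.ARRAY_BUFFER, lightMaskBuffer);
gl.vertexAttribPointer(shaderProgram.lightMaskAttribute, 1,
                       gl.UNSIGNED_BYTE, false, 0, 0);
gl.enableVertexAttribArray(shaderProgram.lightMaskAttribute);
```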

Any thoughts? :?:

Gosh. That’s quite an idea!.. It may be worth studying the code in this demo: http://asalga.wordpress.com/2011/12/12/ … gl-part-1/

I unfortunately did not understand the code as I’m a bit of a newbie. If you find this approach to be a good technique, would you care to give a little advice on how you went about implementing it?

My understanding of how the shader files could do a lot of the work was that the vertex shader has 3 vertices which it can interpolate and pass into the fragment shader. With those points, you could project each one onto the plane behind it (not sure how) and multiply a 50% opaque black onto the point colour behind it?

We also know that the dot product of the light vector and the normal vector gives us a direction in which to project our plane/points/whatever (because I think there is more than one way of doing it).

Thoughts?

EDIT: I found this and it seems to help a lot. I’ll read it tonight and get back to you on any further thoughts: http://archive.gamedev.net/archive/refe … page2.html

Thanks for the posts, but the issue isn’t how to draw shadows… the issue is which surface to draw them on.

You will note that both of the posts you suggested talk about where the shadow would be on a plane. But my problem is that I need to figure out what that plane is. If you have one object on a ground-like surface (i.e. a plane) then it is easy. You can use a number of methods to figure out the shadow. But as soon as you have multiple objects it becomes complicated, because the shadow from one object could be cast on the ground-like surface (i.e. the plane), or on another object, or a combination of both.

This is what I am trying to work out.

Instead of implementing shadows, I am trying a real-life approach where I am implementing lighting and the shadows will result from a lack of lighting (as in the real world). This is the technique used in the WebGL sample lessons converted from NeHe’s OpenGL lessons. The trick is knowing if objects are hit by any given light source, because the path between the light source and the object (or object fragment) may be blocked by other objects.

I’m now here to ask the same question. But I’ll have a go at answering something I don’t understand anyway :slight_smile:

The light part (which you already know):
I guess that you have an array of light objects with their direction vectors. Then you iterate through all the objects currently being drawn, and for each one you check whether that plane’s normal is < 90° to your light vector.
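Something like this is what I mean by the < 90 test (plain JS, written from memory and not tested, all names made up):

```javascript
// Positive dot product between the normal and the direction to the light
// means the angle is under 90 degrees, i.e. the surface faces the light.
function facesLight(normal, surfacePoint, lightPos) {
    var toLight = [lightPos[0] - surfacePoint[0],
                   lightPos[1] - surfacePoint[1],
                   lightPos[2] - surfacePoint[2]];
    var dot = normal[0] * toLight[0] +
              normal[1] * toLight[1] +
              normal[2] * toLight[2];
    return dot > 0.0;
}
```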

Is anything behind it… if so do some funkeh shiz
If you negate the normal of the object that you have just cast light onto… you can cast light from that normal and iterate through all the objects once again to see if that normal’s dot product is < 90° to your object’s casting vector. If it is… cast a black shadow on it using the shader.

What you reckon?
That would allow you to work your way onto the plane that is behind it, and so on until there are no more normals that are < 90°. I’ll give it a try and get back to you. @ work atm.

I think the real question for me is how on earth you do that in the shader code, because all those calculations would drain your processing power!

EDIT:

I am not sure about the projection part yet, i.e. projecting a shadow that is distorted by how close or far away a light is… Anyone else care to hazard a guess? I think that negating the normals answers the question above though.

EDIT:

Oh yeah, of course. If you are casting more vectors from the light source, they become the vectors you compare against your objects’ normals… the closer to 90 degrees you get, the lighter the darkening applied to the object becomes.

As I said, I am trying to avoid having to draw shadows at all. If one can figure out which fragments get light from which sources, the absence of light on fragments will create shadows…as in the real world.

Of course this will have its own set of problems when dealing with flat surfaces which are not made up of many fragments. For example, a cube that is made up of 6 quads (or 12 triangles) will have the light source hitting either the entire surface or none of the surface. As such it may be necessary to subdivide flat planes into smaller fragments, but on more complex objects (such as a person) it should work fine.

As far as I can tell all shadow solutions (both by drawing shadows or drawing shadows by light omission) basically require calculation of a light vector and determining which fragment the light vector will hit first. Then lighting that fragment and not lighting (or drawing shadows on) any fragments behind it.

This basically means that to render an n-object scene, you need to perform n times n comparisons, assuming that you don’t have some z-ordering method (which can be applied from any direction). The process then needs to be repeated for each light source.

In my proposed solution, I perform a back buffer render of the area that will be viewed by the eye, but from the point of view of the light source - not the eye. Assuming that each fragment is color coded with an individual color, the result identifies all the fragments that light source contributes to. This information would then be stored with each fragment (as a boolean bit) ready for the final render. Assuming up to 8 light sources, this can be stored in one additional byte of fragment data.

This has obvious difficulties but they can be overcome. One is that each render is limited by the number of unique colors that can be rendered at once. This can be overcome by performing multiple renders but that quickly eats up processing time. Another difficulty will be to determine exactly what area to render from the point of view of the light source. The process needs to ensure that all objects in the eye’s view are rendered (unless of course they are obscured by other objects).

In the final render, the properties of the 8 light sources are passed to the shader along with the fragment information, which now includes the extra byte carrying the light data. The color of the fragment is obtained by adding in the light contributions from all the lights which are applicable for that fragment (as indicated in the fragment’s light data byte). This means that each fragment can have different contributions from different light sources.
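Roughly what I picture the fragment shader doing in that final render (an untested sketch in the Lesson 13 style; the uniform/varying names are placeholders, texturing is left out for brevity, and the mask test uses floor/mod since GLSL ES 1.0 has no bitwise operators):

```javascript
var fragmentShaderSource =
    "precision mediump float;\n" +
    "const int NUM_LIGHTS = 8;\n" +
    "uniform vec3 uPointLightLocation[NUM_LIGHTS];\n" +
    "uniform vec3 uPointLightColor[NUM_LIGHTS];\n" +
    "uniform vec3 uAmbientColor;\n" +
    "varying vec3 vPosition;\n" +       // fragment position in world/eye space
    "varying vec3 vNormal;\n" +
    "varying float vLightMask;\n" +     // the extra byte, passed through as a float
    "void main(void) {\n" +
    "    vec3 lightWeighting = uAmbientColor;\n" +
    "    for (int i = 0; i < NUM_LIGHTS; i++) {\n" +
    // 1.0 if bit i of the mask is set (this light reaches the fragment), else 0.0
    "        float enabled = mod(floor(vLightMask / pow(2.0, float(i))), 2.0);\n" +
    "        vec3 lightDir = normalize(uPointLightLocation[i] - vPosition);\n" +
    "        float diffuse = max(dot(normalize(vNormal), lightDir), 0.0);\n" +
    "        lightWeighting += enabled * diffuse * uPointLightColor[i];\n" +
    "    }\n" +
    "    gl_FragColor = vec4(lightWeighting, 1.0);\n" +
    "}\n";
```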

Assuming that the objects in the scene are made up of small fragments, this process should create object shadows without actually having to draw shadows (i.e. it creates shadows by the lack of light). As stated above, this means that large flat areas which are made up of few fragments need to be sub-divided.

I guess it comes down to a trade off: Either additional rendering for each light source to determine which fragments get light from the various light sources or comparison of each fragment to see which fragments are on top of which other fragments.

Besides my proposed back buffer rendering technique, I have not yet seen a method which would allow storage of z-order information that can be read from any angle. Thus, even if z-order information were stored, it would need to be re-generated each time the angle of the eye changed with respect to the scene.

Hmmm… I wonder, however, if I will spend too much time processing the light source’s back buffer view to determine which fragments are in it. Even if the back buffer is low resolution, it basically means checking each pixel for its color and repeating this process for each light source.
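The read-back I am worried about would be something along these lines (untested; SIZE and fb are from the framebuffer sketch earlier, and the decoding matches the colorToId helper above):

```javascript
var pixels = new Uint8Array(SIZE * SIZE * 4);
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.readPixels(0, 0, SIZE, SIZE, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

var litByThisLight = {};                     // set of patch IDs this light reaches
for (var p = 0; p < pixels.length; p += 4) {
    var id = pixels[p] + pixels[p + 1] * 256 + pixels[p + 2] * 65536;
    if (id !== 0) {                          // 0 is reserved for the clear color
        litByThisLight[id] = true;
    }
}
```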

It may turn out that an n times n check is actually faster.

I think I read about that colouring technique. I’ve got to be honest, I find it hard to understand the solutions without seeing them first… So this time I’ll stfu :smiley:. I’ll keep coming back to hopefully steal the outcome of this.

Really hope you get/work out your solution. I’m still stuck on coding collisions hahaha. Pro at work <–

It’s really frustrating that not many people are on these forums atm.

The coloring technique is the same concept as is used for Picking.

Picking refers to allowing the user to click on the 3D scene and figuring out, from the 2D mouse coordinates, which object the user has clicked (“picked”).

Typically this would involve a lot of checking, because the same 2D spot that the user clicked on may be occupied by many objects at different distances. However, if all you need to know is what the topmost object is (the typical situation when “picking”), you can use a color-rendering technique which will identify the topmost object for you. The idea is that when the user clicks the mouse, you render the scene (in an invisible back buffer) with each object having a different color. Then you read the pixel color under the mouse and that tells you which object the user clicked on. Just like in the visible scene, the objects closer to the viewer will be rendered (automatically) over objects that are further away, and thus the rendering takes care of figuring out which object is closer.
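The read itself is just a one-pixel readPixels once the color-coded scene is sitting in the picking buffer. Something like this (untested; mouseX/mouseY are canvas coordinates, with the usual Y flip because readPixels counts from the bottom):

```javascript
var pixel = new Uint8Array(4);
gl.readPixels(mouseX, gl.drawingBufferHeight - mouseY, 1, 1,
              gl.RGBA, gl.UNSIGNED_BYTE, pixel);
var pickedObjectId = pixel[0] + pixel[1] * 256 + pixel[2] * 65536;   // 0 = background
```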

That makes far too much sense. God forbid anything in 3D is explained in an easy-to-understand way. Cheers. I’m gonna do that!