Shadow Mapping - shadow too small

Hello!
I’ve implemented basic shadow mapping for a directional light, but the result is a bit different from what I imagined.
Here is an image of the scene ([ATTACH=CONFIG]1461[/ATTACH]) taken from the light’s position, except that I moved a bit to the left so the shadow is visible. I expected the shadow to be the same width and height as the cube that “casts” it onto the ground. Is it normal for a shadow to be smaller?
Here are the matrices I am using:


glm::vec3 worldSpace_lightPos  = glm::vec3( 4.0f, 7.0f, 4.0f );	// Light's position
glm::vec3 worldSpace_planePos  = glm::vec3( 8.0f, 0.0f, 5.0f );	// Plane mesh's position
glm::vec3 worldSpace_cubePos   = glm::vec3( 5.0f, 1.5f, 5.0f );	// Cube mesh's position
glm::vec3 worldSpace_invLightDir  = worldSpace_lightPos - ( worldSpace_cubePos + glm::vec3(0.0f, +0.5f, 0.0f) );	// from the cube towards the light's position
glm::vec3 lightSpace_cameraDir = worldSpace_cubePos + glm::vec3(0.0f, +0.5f, 0.0f);	// world-space point the light camera looks at (center of the cube)

// Eye camera (second pass)
glm::mat4 V = glm::lookAt( glm::vec3(-8.0f, 7.0f, 0.0f), glm::vec3(0,0,0), glm::vec3(0,1,0) );
glm::mat4 P = glm::perspective(45.0f, 4.0f/3.0f, 0.01f, 100.0f);

// Light camera (first / depth pass)
glm::mat4 depthV = glm::lookAt(worldSpace_lightPos, lightSpace_cameraDir, glm::vec3(0,1,0));
glm::mat4 depthP = glm::ortho<float>(-10,10,-20,20,-20,20);
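
For reference, the usual way these matrices are combined into the transform behind lightSpace_coordinates is roughly the following (just a sketch; biasMatrix and depthMVP are illustrative names, not necessarily my exact code):


// Sketch only: standard way to combine the light's matrices so the fragment
// shader can sample the shadow map in [0,1] texture space.
glm::mat4 biasMatrix(	// maps NDC [-1,1] to [0,1]
	0.5f, 0.0f, 0.0f, 0.0f,
	0.0f, 0.5f, 0.0f, 0.0f,
	0.0f, 0.0f, 0.5f, 0.0f,
	0.5f, 0.5f, 0.5f, 1.0f );	// column-major: the translation sits in the last column

glm::mat4 M            = glm::mat4( 1.0f );	// model matrix of the object being drawn
glm::mat4 depthMVP     = depthP * depthV * M;	// light's clip space
glm::mat4 depthBiasMVP = biasMatrix * depthMVP;	// what lightSpace_coordinates would be built from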

And texture params:


	glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);	// 1024x1024 depth texture
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
	glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, color);	// 'color' is the border color array, defined elsewhere in my code
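
The depth texture gets attached to a framebuffer for the first pass; the setup is roughly the standard depth-only FBO (again just a sketch; depthFBO and depthTexture are illustrative names, with depthTexture being the texture configured above):


GLuint depthFBO;
glGenFramebuffers( 1, &depthFBO );
glBindFramebuffer( GL_FRAMEBUFFER, depthFBO );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTexture, 0 );
glDrawBuffer( GL_NONE );	// depth only, no color attachment
glReadBuffer( GL_NONE );
GLenum status = glCheckFramebufferStatus( GL_FRAMEBUFFER );	// should be GL_FRAMEBUFFER_COMPLETE
glBindFramebuffer( GL_FRAMEBUFFER, 0 );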

And my 2nd pass fragment shader:


#version 330 core

in vec3 eyeSpace_coordinates;
in vec3 eyeSpace_normals;
in vec4 lightSpace_coordinates;

uniform sampler2D ShadowMap;

uniform vec3 lightDir;
uniform mat4 view;
uniform mat4 projection;

out vec4 fragColor;	// gl_FragColor is not available in a 330 core fragment shader


float calculateShadow()
{
	float bias = 0.0005f;
	float isShadowed = 1.0f;

	vec3 nuevo = lightSpace_coordinates.xyz;
//	vec3 nuevo = lightSpace_coordinates.xyz / lightSpace_coordinates.z;
//	nuevo = nuevo * 0.5 + 0.5;

	float closestDepth = texture(ShadowMap, nuevo.xy).r;
	float currentDepth = nuevo.z - bias;

	if( currentDepth > closestDepth )
		isShadowed = 0.3f;

	return isShadowed;
}

vec3 CalcPointLight(vec3 normal, vec3 fragPos)	// despite the name, this shades the directional light
{
	vec3 eyeSpace_lightDir = ( view * vec4( lightDir, 0.0f ) ).xyz;	// rotate the light direction into eye space (w = 0, so no translation)

	vec3 l = normalize(eyeSpace_lightDir);

	float cosTheta = max( dot(normal, l), 0.0 );

	vec3 diffuse = vec3( 0.8, 0.8, 0.8 ) * cosTheta * vec3(1.0f, 0.0f, 0.0f);

	return diffuse;
}

void main()
{
	vec4 ls_Pos = lightSpace_coordinates/ lightSpace_coordinates.w;

	vec3 n = normalize(eyeSpace_normals);

	float shadow = calculateShadow();

	vec3 light = CalcPointLight(n, eyeSpace_coordinates);

	fragColor = vec4( vec3(shadow), 1.0f ) * vec4(light, 1.0f);	// diffuse lighting modulated by the shadow factor
}

Sorry for the bad image, but I’m not allowed to post URLs… hopefully you get the idea :slight_smile:

[QUOTE=bwdun123;1286668]I’ve implemented basic shadow mapping for a directional light, but the result is a bit different from what I imagined.

Here is an image of the scene taken from the light’s position, except that I moved a bit to the left so the shadow is visible.

I expected the shadow to be the same width and height as the cube that “casts” it onto the ground. Is it normal for the shadow to be smaller?[/QUOTE]

If this is, as you said, an image from the light’s position, and it is a visual scene rendered with the same ortho camera your shadow map is captured with, then you shouldn’t see any shadows, because they’re all behind the objects from your (the light’s) perspective.

You might describe what we’re looking at in more detail. What do the colors in this image represent? Is this a rendering of your depth map, or is this a “visual” view? Is this image taken with the same orthographic camera that your depth map is captured with?

[QUOTE=Dark Photon;1286673]If this is, as you said, an image from the light’s position, and it is a visual scene rendered with the same ortho camera your shadow map is captured with, then you shouldn’t see any shadows, because they’re all behind the objects from your (the light’s) perspective.

You might describe what we’re looking at in more detail. What do the colors in this image represent? Is this a rendering of your depth map, or is this a “visual” view? Is this image taken with the same orthographic camera that your depth map is captured with?[/QUOTE]

I’m sorry for the bad explanation! I render the scene in two passes. The first is the “shadow map generation” pass, in which I render the scene to the depth buffer using my light_camera’s orthographic projection ( depthV, depthP ). In the second pass I render the scene to the default framebuffer using my eye_camera’s perspective projection ( V, P ); note that I use a function to move the camera around with the keyboard and mouse, so the V and P matrices I provided do change a lot.
To take the photo I provided, I had moved my eye_camera, using those “moving” functions, to the light_camera’s position (worldSpace_lightPos) and aimed it at the center of my cube. Then I moved a bit to the left to show part of the shadow the code produces and snapped the photo.
In the photo, the smaller (yellow) dots mark the cube’s vertices and the bigger (white) dots mark the two visible shadow vertices.
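
To make the two passes concrete, a frame roughly looks like this (a sketch; drawScene(), depthFBO, depthTexture and the shader names are illustrative, not my exact code):


// Pass 1: render the scene's depth from the light with the orthographic matrices.
glBindFramebuffer( GL_FRAMEBUFFER, depthFBO );
glViewport( 0, 0, 1024, 1024 );	// shadow map resolution
glClear( GL_DEPTH_BUFFER_BIT );
drawScene( depthShader, depthV, depthP );

// Pass 2: render normally from the movable eye camera and sample ShadowMap.
glBindFramebuffer( GL_FRAMEBUFFER, 0 );
glViewport( 0, 0, windowWidth, windowHeight );
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
glBindTexture( GL_TEXTURE_2D, depthTexture );	// the depth map from pass 1
drawScene( mainShader, V, P );	// V and P change as the camera moves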

I posted the same question to stackoverflow ( here ), with better pictures, so please ignore the picture I provided on this forum and look at the ones on stackoverflow instead. Thanks for your reply :slight_smile:

Just a quick update!
I’ve played around with the code and realized a few things, which I’ll explain using three images. (I can’t post URLs, so the link to the images is in my profile’s interest tag.)

The first image shows the depth map of my scene rendered on a quad. According to the depth map, we can expect the darker cube to cast a shadow onto the lighter one, covering about half of the lighter cube’s top face and half of its front face. The 2nd image shows that this is pretty much what happens, which is fine.
I snapped the 3rd image when my eye_camera was right at the light’s position (worldSpace_lightPos), and the issue is that I expected the 1st image to look the same as the 3rd one. I expected them to match because I used the same position variable ( worldSpace_lightPos ) for both the eye_camera’s and the light_camera’s view matrix, so I assumed that everything the eye sees in the 3rd image is what my light_cam sees while rendering to the depth buffer, and that the whole cube further away from us would be shadowed.

So if we compare images 1 and 3, we can see that the light_cam’s (image 1) and eye_cam’s (image 3) frustums have different positions and directions, even though I use the same position (worldSpace_lightPos) in both view matrices.
I’m not sure if this is meant to happen, but I hope not, because if I were to implement a spotlight capable of shadow mapping, I wouldn’t know what value to assign to worldSpace_lightPos to get the spotlight where I wanted it.

Maybe this difference in position is caused by using an orthographic projection for my first (depth map creation) render pass and a perspective projection for the second pass?

Never mind guys, I had managed to completely misunderstand how directional light and orthographic projection work!

To anyone wondering: the reason I got different results in images 1 and 3 is not the position (pos) parameter of glm::lookAt( pos, center, up ), but the different center parameter, which is a target point to look at, not a direction!
Once I used the exact same view matrix in both rendering passes, everything worked out as I had first hoped.
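
To spell it out with the variables from my first post (just a sketch of what fixed it for me):


// glm::lookAt( eye, center, up ) takes a target *point*, not a direction, so
// the same eye position with a different target gives a differently oriented camera.

// Light camera: positioned at the light, aimed at the center of the cube.
glm::mat4 depthV = glm::lookAt( worldSpace_lightPos,
                                worldSpace_cubePos + glm::vec3( 0.0f, 0.5f, 0.0f ),
                                glm::vec3( 0, 1, 0 ) );

// To see the same view from the eye camera, it has to share BOTH the position
// and the target point, not just the position:
glm::mat4 V = glm::lookAt( worldSpace_lightPos,
                           worldSpace_cubePos + glm::vec3( 0.0f, 0.5f, 0.0f ),
                           glm::vec3( 0, 1, 0 ) );
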
God bless, guys! I feel so relieved.