Corrupted data when writing to a mapped buffer! (Uniform Buffer Objects)

I didn’t know whether to post this under beginners or not, but something goes wrong in my application when I map a buffer and write to it. It’s a really strange and specific bug. I have a struct that has 16 floats in total:

struct Light
{
	vec3  position;
	float exponent;
	vec3  direction;
	float cutoff;
	vec3  colour;
	float padding;
	vec3  attenuation;
	float spotlight;
};
layout( std140, binding=2 ) uniform Lights
{
	Light light[100];
};

I map the uniform buffer “Lights” and write 3 lights into it (I know I’m not using all 100 allocated lights yet, but I will in the future). Every value in the struct is correct except for “spotlight” and “padding”, which end up with values that seem to be cloned from other attributes (in the screenshot, “padding” has the same value as “exponent”).

As you will see in the screenshot below, I assign 0.0f to “padding”, but the assignment is completely ignored and padding ends up with a value of “50.0”. The same goes for “spotlight”, which has been set to it->first->m_rep.spotlight, which is “0.0”, as seen highlighted in the “Locals” tab below.

I don’t even know where to start looking from here on, because this bug seems really bizarre. Any ideas? Let me know if you need more information to help me solve this.

Thanks in advance

EDIT: I just saw my first mistake, that if statement after I’ve incremented “i”… sorry for that,[STRIKE] but that’s not the cause of this bug.[/STRIKE]

EDIT2: Actually, I removed the whole if statement and the “spotlight” value is now correct for all lights, even though I didn’t change anything else. Apparently the if statement was reading from the buffer and somehow that corrupted the values. I can’t even explain… I would still like an explanation as to why this happened, if anyone has any idea!

I’m not sure this is the reason, but you map the buffer for writing and then try to read from it. On some drivers you may not be able to read such a buffer, even though I have never seen that myself.
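To illustrate: if the mapping looks roughly like this (buffer handle and size are assumed here, not taken from your code), then reading back through the returned pointer is undefined, because the driver is free to hand you write-combined memory when you only ask for write access:

```c
#include <GL/glew.h>  /* or whichever loader you use */
#include <stddef.h>

/* Sketch: map an existing UBO for writing only. With just
   GL_MAP_WRITE_BIT, reads through the returned pointer are
   undefined behaviour per the spec. */
void *map_lights_for_write(GLuint ubo, size_t bytes)
{
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    return glMapBufferRange(GL_UNIFORM_BUFFER, 0, (GLsizeiptr)bytes,
                            GL_MAP_WRITE_BIT |
                            GL_MAP_INVALIDATE_BUFFER_BIT);
}
```

If you genuinely need to read back, request GL_MAP_READ_BIT as well (and drop the invalidate flag).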

I thought so as well, but the bug was happening before I added that if statement! I was testing “spotlight” inside the shader: if it was “true” I would paint everything white, so every now and then things were being painted white instead of getting normal lighting shading. After I figured out it was the “spotlight” value that was wrong, I tried to test the CPU side, where I write the values into the buffer. My test was to read from the buffer right after writing, to see the value, and I found that the value changed right after I wrote it, which left me clueless. I removed the if statement after posting this thread and nothing blinks white any more, so it appears to be fixed!

Start by reading the “Standard Uniform Block Layout” section in the spec.

I have a struct that has 16 floats in total

After reading the spec, you should see that your struct isn’t really 16 floats. You can introspect it to confirm, using glGetActiveUniformsiv with UNIFORM_OFFSET etc.
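A minimal introspection sketch (assuming a linked program object `program`; since the block has no instance name, the members are queried by their bare names):

```c
#include <GL/glew.h>
#include <stdio.h>

/* Ask the driver where it actually placed two of the Light members. */
void print_light_offsets(GLuint program)
{
    const char *names[2] = { "light[0].exponent", "light[0].spotlight" };
    GLuint indices[2];
    GLint  offsets[2];

    glGetUniformIndices(program, 2, names, indices);
    glGetActiveUniformsiv(program, 2, indices, GL_UNIFORM_OFFSET, offsets);
    printf("exponent at byte %d, spotlight at byte %d\n",
           offsets[0], offsets[1]);
}
```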

[QUOTE=arekkusu;1260112]Start by reading the “Standard Uniform Block Layout” section in the spec.

After reading the spec, you should see that your struct isn’t really 16 floats. You can introspect it to confirm, using glGetActiveUniformsiv with UNIFORM_OFFSET etc.

[/QUOTE]

Each vec3 counts as 3N, each float as 1N, and booleans count as 1N as well. My layout is 3N, 1N, 3N, 1N and so on, and the spec says that with that kind of layout the 3N is packed together with the following 1N, making 4N, 4N, 4N: 3N+1N, 3N+1N, etc. Am I wrong? What am I missing here? Is it because the bool counts as 1N but is actually 1 byte instead of 4? So it was reading the bool and then 3 more bytes from the vec3 that came after? But then wouldn’t that read 3 bytes of attenuation? That would mess up attenuation, which isn’t the case, because attenuation had the correct values…

EDIT: Actually, sorry, I didn’t have a bool any more when I posted this, so my layout is exactly 3 floats followed by 1 float, which makes a perfect 4N… why is it not really 16 floats? When analysing with Nsight, my 100 light structs occupy 6400 bytes, which is 100 times 16 floats.