Incorrect offset when accessing array with a variable index

I have a shader storage block and struct like so:


struct B {
	vec3 a;
	float b;
	int c;
};

layout(std430, binding=1) buffer BB     {
	B b[];
};

The buffer has the following values:


buff.putFloat(0.1f); // vec3 a
buff.putFloat(0.2f);
buff.putFloat(0.3f);
buff.putFloat(0.4f); // float b
buff.putInt(1); // int c
buff.putFloat(1.1f); // padding
buff.putFloat(1.2f); // padding
buff.putFloat(1.3f); // padding
buff.putFloat(0.5f); // vec3 a
buff.putFloat(0.6f); 
buff.putFloat(0.7f);
buff.putFloat(0.8f); // float b
buff.putInt(2); // int c
buff.putFloat(1.4f); // padding
buff.putFloat(1.5f); // padding
buff.putFloat(1.6f); // padding
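
For what it's worth, here is the std430 layout I am assuming when filling the buffer, written out as explicit byte offsets. The putB helper is hypothetical, just to make the addressing concrete:

import java.nio.ByteBuffer;

class Std430Layout {
	// std430 layout of struct B: vec3 aligns to 16 bytes, so the struct pads to 32.
	static final int OFFSET_B = 12; // float b follows the 12 bytes of vec3 a
	static final int OFFSET_C = 16; // int c
	static final int STRIDE   = 32; // bytes 20..31 of each element are padding

	// Hypothetical helper: write element i of b[] using absolute positions.
	static void putB(ByteBuffer buff, int i, float ax, float ay, float az, float b, int c) {
		int base = i * STRIDE;
		buff.putFloat(base, ax);     // vec3 a.x -> bytes 0..3 of the element
		buff.putFloat(base + 4, ay); // a.y
		buff.putFloat(base + 8, az); // a.z
		buff.putFloat(base + OFFSET_B, b);
		buff.putInt(base + OFFSET_C, c);
	}
}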

My problem is shown in the following two examples. This returns true:



if (b[1].a.x == 0.5 && b.length() == 1) { // true

while this code returns false:


int n = 1;
if (b[n].a.x == 0.5 && b.length() == 1) { // false

The actual value b[n].a.x returns in the second example is 1.2, which is the value stored two 4-byte words earlier in the buffer than where it should be reading.
In summary, just referencing the array with a variable rather than a literal index seems to affect the offset. The first element of the array is not affected. If this were due to accessing an unsized array, I would expect a compile-time error. Besides, an unsized array at the end of a shader storage block is sized at runtime by the bound buffer.
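
To put numbers on it, here is the byte arithmetic I would expect, assuming the 32-byte std430 stride worked out above:

int stride = 32;             // size of B padded to the vec3 alignment of 16
int expected = 1 * stride;   // byte 32 -> holds 0.5f, the correct a.x of b[1]
int observed = expected - 8; // byte 24 -> holds 1.2f, two 4-byte words earlier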
Strangely, the following code also returns false:


int n = 1;
float m = b[n].a.x;

if (b[1].a.x == 0.5 && b.length() == 1) { // false

So using a variable rather than a literal for the index seems to mess up the offset for subsequent accesses to the array. Could this be a driver bug?

Sizing the array seems to fix it:


layout(std430, binding=1) buffer BB     {
	B b[2];
};

Interesting. You should report this bug to your graphics card vendor: Nvidia, AMD, or Intel.

If b.length() == 1, then the only valid element of b is b[0]. Accessing b[1] by any means results in undefined behaviour.
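
For context on that last point: the length of an unsized array at the end of a storage block is derived at runtime from the size of the buffer range actually bound, roughly like this (a sketch in the poster's Java, assuming the 32-byte std430 stride of B):

long boundBytes = 64; // e.g. the 16 values of 4 bytes each written above
int stride = 32;      // std430 array stride of struct B
int length = (int) (boundBytes / stride); // 64 / 32 = 2 elements
// If b.length() reports 1, the shader can only see 32..63 bytes of the buffer,
// so b[1] reads past the end of the array: undefined behaviour.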