What's wrong with my interleaved data?

If my vertex attribute struct looks like this:


struct AttribData {
	float position[3];	// 3 floats * 4 bytes = 12 bytes
	float normal[3];	// 3 floats * 4 bytes = 12 bytes
}; // So a total of 24 bytes!

And my VBOs look like this:


	GLuint indexBuffer;
	glCreateBuffers ( 1, &indexBuffer );
	glBindBuffer ( GL_ELEMENT_ARRAY_BUFFER, indexBuffer );
	glBufferData ( GL_ELEMENT_ARRAY_BUFFER, shapes[0].mesh.indices.size() * sizeof(GLuint), shapes[0].mesh.indices.data (), GL_STATIC_DRAW );

	// Model buffers
	GLuint buffer;
	glCreateBuffers ( 1, &buffer ); // Create a new VBO and use the variable to store the VBO id
	glBindBuffer ( GL_ARRAY_BUFFER, buffer ); // Make the new VBO active
	glBufferData ( GL_ARRAY_BUFFER, objectData.size() * sizeof(AttribData), &objectData[0], GL_STATIC_DRAW ); // Upload the vertex data to the video device

And my vertex VAO looks like this:


	// Create a new VAO and use the variable to store the VAO id
	GLuint vertexArray;
	glCreateVertexArrays ( 1, &vertexArray );

	// Setup the formats
	glVertexArrayAttribFormat ( vertexArray, positionLocation, 3, GL_FLOAT, GL_FALSE, 0 ); 
	glVertexArrayAttribFormat ( vertexArray, normalLocation, 3, GL_FLOAT, GL_FALSE, 12 );

	// Setup the buffer sources
	glVertexArrayElementBuffer ( vertexArray, indexBuffer );
	glVertexArrayVertexBuffer ( vertexArray, positionLocation, buffer, 0, 24 ); // Start at byte 0 in the buffer, next set of positions are 24 bytes later
	glVertexArrayVertexBuffer ( vertexArray, normalLocation, buffer, 12, 24 ); // Start at byte 12 in the buffer, next set of normals are 24 bytes later
	
	// Link
	glVertexArrayAttribBinding ( vertexArray, positionLocation, 0 );
	glVertexArrayAttribBinding ( vertexArray, normalLocation, 1 );

	// Enable
	glEnableVertexArrayAttrib ( vertexArray, positionLocation );
	glEnableVertexArrayAttrib ( vertexArray, normalLocation );	

Then why does my monkey head look like it fell out of the ugly tree and hit every branch on the way down? Pic: http://screencast.com/t/hRHsAtwr

I suspect that both offsets are being added, so the normals are actually offset by 24 bytes from the start of the buffer.

Your usage of vertex attrib binding is fairly mixed up here, which is leading to this problem and will only cause you further issues later on. At a guess I’d say that you’re porting from older glVertexAttribPointer-based code.

Specifically: you’re using attrib locations for the bindingIndex parameter in many of your calls, whereas the API is designed so that the two concepts are decoupled and have no inherent relationship.

In your case a single binding index is all that is needed, which means a single call to glVertexArrayVertexBuffer. Your glVertexArrayAttribBinding calls then use this same single binding index as their last parameter. One vertex buffer, one binding index, two attribs, associate each attrib with the binding. The resulting code looks like this:

GLuint bindingIndex = 0; // ASSUMPTION: this is the binding index you want

// Create a new VAO and use the variable to store the VAO id
GLuint vertexArray;
glCreateVertexArrays (1, &vertexArray);

// Setup the formats (THESE CALLS USE ATTRIB LOCATION)
glVertexArrayAttribFormat (vertexArray, positionLocation, 3, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribFormat (vertexArray, normalLocation, 3, GL_FLOAT, GL_FALSE, 12);

// Setup the buffer sources
glVertexArrayElementBuffer (vertexArray, indexBuffer);

// THIS ONE SPECIFIES WHICH BUFFER TO USE FOR THE CHOSEN BINDING INDEX, AN OFFSET INTO THE BUFFER, AND THE STRIDE OF THE VERTEX FORMAT
glVertexArrayVertexBuffer (vertexArray, bindingIndex, buffer, 0, 24);

// Link
// THIS ONE CONNECTS ATTRIB LOCATIONS TO THE BINDING INDEX
glVertexArrayAttribBinding (vertexArray, positionLocation, bindingIndex);
glVertexArrayAttribBinding (vertexArray, normalLocation, bindingIndex);

// Enable
glEnableVertexArrayAttrib (vertexArray, positionLocation);
glEnableVertexArrayAttrib (vertexArray, normalLocation);

My additions to the comments, explaining the changes, are in ALL CAPS.

That was it! Yes, I’ve been porting to DSA, which is proving to be exceptionally difficult since there are very few good DSA examples online, especially ones that try to use a single VAO and VBO. I’ve read the man pages for the DSA API and am often confused about how the parameters of each call come together.

Could you explain the bindingIndex some more? I’m already setting the offset and stride, so what’s the binding index supposed to be?