Transitioning from OpenGL 2.0 to modern OpenGL?

I wrote my game in OpenGL 2.0 because I found it much easier to understand, and now that I’ve gotten past the proof-of-concept stage I’m trying to switch it to OpenGL 4.5. So I’ve been looking at tutorials for a few days and I’m just not getting it.

So far I’ve written a class that creates glPrograms, handles their errors, and loads shaders from files, plus my own matrix/matrix stack class. Then I got lost on what goes where, and realized the book I was using was from 2009, so I’m not sure it’s actually good for anything.

The first source of confusion is my onResize() function:


glViewport(0, 0, screenSize.x, screenSize.y);

double width = screenSize.x * devicePixelRatio.x;
double height = screenSize.y * devicePixelRatio.y;

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-width/2, width/2, -height/2, height/2, .999999, 8.0);
glTranslatef(0, 0, -1);
glScalef(1, -1, 8.0/4096);

glMatrixMode(GL_MODELVIEW);

I’ve read that GL_PROJECTION, GL_MODELVIEW and the matrix functions are deprecated, so I’ve added the frustum/transform/scale etc. functions to my own matrix class, but I’m very unclear on what to do with it, or how the matrix modes really relate to each other. Are the GL_PROJECTION matrices essentially at the bottom of the matrix stack or something?

The second is my sprite rendering function, which accounts for around 90% of my rendering:


const GLfloat * getTexCoordinates(int _mirror)
{
	static const GLfloat texCoord[]		= { 0, 1, 1, 1, 1, 0, 0, 0 };
	static const GLfloat xMirrorTexCoord[]	= { 1, 1, 0, 1, 0, 0, 1, 0 };
	static const GLfloat yMirrorTexCoord[]	= { 0, 0, 1, 0, 1, 1, 0, 1 };
	static const GLfloat MirrorTexCoord[]	= { 1, 0, 0, 0, 0, 1, 1, 1 };

	switch(_mirror & 0x03)
	{
	default:	return texCoord;
	case 0x01:	return xMirrorTexCoord;
	case 0x02:	return yMirrorTexCoord;
	case 0x03:	return MirrorTexCoord;
	}
}


void SpriteComponent::render(position_type camera)
{
	if(!bind())
	{
		return;
	}
	
	glPushMatrix();
	glTranslatef(pos.x - camera.x, pos.y - camera.y, 0);
	
	if(drawOrder < 0)
	{
		glTranslatef(0, 0, drawOrder);
		double scale = 1 + (drawOrder << 3)/-4096.0;
		glScaled(scale, scale, 1);
	}

	Point2F s = size();
	Point2F p = self()->pivotPoint();
	
	glRotatef(-self()->rot(), 0, 0, 1);
	glTranslatef(-pivot().x, pivot().y, 0);
	glScalef(size().x, size().y, 1);

	static const GLfloat vertices[] = { 0, 1, 1, 1, 1, 0, 0, 0 };
	static const GLubyte indices[] = { 0, 1, 2, 0, 2, 3 };

	glEnable(GL_BLEND);
	glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

	glEnable(GL_TEXTURE_2D);
	glEnableClientState(GL_VERTEX_ARRAY);
	glVertexPointer(2, GL_FLOAT, 0, vertices);
	
	glEnableClientState(GL_TEXTURE_COORD_ARRAY);
	glTexCoordPointer(2, GL_FLOAT, 0, getTexCoordinates(attr._mirror));

	glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, indices);
	glDisable(GL_TEXTURE_2D);
	glPopMatrix();
}

I’m pretty unclear on what to do to convert it other than make the vertices and indices into a VBO. I’ve read that I can do… something, to render objects in batches and reduce calls to the graphics card, but I can’t discern what that means exactly. Does it mean batches that share a single texture? Are the textures and texture coordinates also included in the batches? Are the texture coordinates also a VBO?

Correct. You’ll notice that the matrix-mode and matrix-stack functions aren’t listed in the OpenGL 4 reference pages, so they don’t exist in the OpenGL 3+ core profile.

There are separate matrix stacks for the model-view and projection matrices (as well as the texture matrices for each texture unit and the colour matrix).

The fixed function pipeline automatically transforms vertices by the model-view matrix (to obtain eye-space coordinates) then by the projection matrix (to obtain clip-space coordinates). The reason for keeping them separate is that lighting calculations are performed in eye space; the results will be incorrect if the coordinates have been transformed by a perspective projection.

When using shaders, you’d typically pass the matrices to the vertex shader as uniform variables. The vertex shader is responsible for transforming the vertices.
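
To make that concrete, here is a minimal sketch of such a vertex shader together with the application-side upload. The names (u_Projection, u_ModelView, a_Position, a_TexCoord) and the .data() accessor on your matrix class are my own assumptions, not anything from your code:

// Vertex shader source kept as a C++ raw string; compile/link it with your
// existing glProgram wrapper class.
static const char *spriteVS = R"(
    #version 330 core
    uniform mat4 u_Projection;   // replaces the GL_PROJECTION stack
    uniform mat4 u_ModelView;    // replaces the GL_MODELVIEW stack

    layout(location = 0) in vec2 a_Position;
    layout(location = 1) in vec2 a_TexCoord;

    out vec2 v_TexCoord;

    void main()
    {
        // Model-view first (eye space), then projection (clip space),
        // the same order the fixed-function pipeline used.
        gl_Position = u_Projection * u_ModelView * vec4(a_Position, 0.0, 1.0);
        v_TexCoord  = a_TexCoord;
    }
)";

// Application side, after glUseProgram(program): upload a matrix from your
// own matrix class as 16 column-major floats.
glUniformMatrix4fv(glGetUniformLocation(program, "u_Projection"),
                   1, GL_FALSE, projectionMatrix.data());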

[QUOTE=GeatMaster;1285865]
I’m pretty unclear on what to do to convert it other than make the vertices and indices into a VBO. I’ve read that I can do… something, to render objects in batches and reduce calls to the graphics card, but I can’t discern what that means exactly. Does it mean batches that share a single texture? Are the textures and texture coordinates also included in the batches? Are the texture coordinates also a VBO?[/QUOTE]

The first change is to replace glVertexPointer() and glTexCoordPointer() with glVertexAttribPointer(), and the glEnableClientState() calls with glEnableVertexAttribArray(). The next change is to store the vertex and texture coordinates in one or more buffer objects, then bind the appropriate buffer before each glVertexAttribPointer() call; each vertex attribute is sourced from the buffer which was bound to GL_ARRAY_BUFFER at the time of the corresponding glVertexAttribPointer() call for that attribute.
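
A rough sketch of what that might look like for the sprite quad follows; the attribute locations 0 and 1 are assumptions and have to match the shader (or a glBindAttribLocation() call), and the per-sprite mirroring of the texture coordinates is ignored here:

// One-time setup: copy the quad's positions and texture coordinates into
// buffer objects instead of keeping them as client-side arrays.
// (In a core profile you would also need a vertex array object bound,
// via glGenVertexArrays / glBindVertexArray.)
static const GLfloat positions[] = { 0, 1, 1, 1, 1, 0, 0, 0 };
static const GLfloat texCoords[] = { 0, 1, 1, 1, 1, 0, 0, 0 };
static const GLubyte indices[]   = { 0, 1, 2, 0, 2, 3 };

GLuint vbo[2], ibo;
glGenBuffers(2, vbo);
glGenBuffers(1, &ibo);

glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(positions), positions, GL_STATIC_DRAW);

glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(texCoords), texCoords, GL_STATIC_DRAW);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

// Per draw call: each attribute reads from whichever buffer was bound to
// GL_ARRAY_BUFFER when glVertexAttribPointer() was called for it.
glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
glEnableVertexAttribArray(0);                                  // position, location 0 assumed
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void *)0);

glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glEnableVertexAttribArray(1);                                  // texcoord, location 1 assumed
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void *)0);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, (void *)0);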

Batching just refers to using as few glDrawElements() (or other glDraw*) calls as possible, drawing as many primitives as possible in each call. In this case, that would mean rendering many sprites with a single call, which means that you can’t use a uniform variable for the transformation; you can either use instanced rendering or store the transformations in a uniform array or a texture, indexed using either an integer vertex attribute or an expression derived from the GLSL variable gl_VertexID. But you should probably leave this until you’ve got the code working with shaders and you have a better understanding of modern OpenGL.
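
Just to give the instanced-rendering option a rough shape, here is one possible sketch. The uniform array, its size, and the layout of transformData are all assumptions for illustration; for large sprite counts you would move the transforms into a buffer or texture instead:

// Vertex shader: one transform per sprite, selected by gl_InstanceID.
static const char *instancedVS = R"(
    #version 330 core
    uniform mat4 u_Projection;
    uniform mat4 u_SpriteTransform[256];   // arbitrary size, for illustration only

    layout(location = 0) in vec2 a_Position;
    layout(location = 1) in vec2 a_TexCoord;
    out vec2 v_TexCoord;

    void main()
    {
        gl_Position = u_Projection * u_SpriteTransform[gl_InstanceID]
                    * vec4(a_Position, 0.0, 1.0);
        v_TexCoord  = a_TexCoord;
    }
)";

// Application side: upload spriteCount transforms (16 floats each), then draw
// the same quad spriteCount times with a single call.
glUniformMatrix4fv(glGetUniformLocation(program, "u_SpriteTransform"),
                   spriteCount, GL_FALSE, transformData);
glDrawElementsInstanced(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, (void *)0, spriteCount);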

It’s also important to note that you can make these changes with OpenGL 2.0, because all of this functionality exists in OpenGL 2.0 (as do shaders).

Also, most of these changes are orthogonal. You don’t need to make all of them unless you need the code to work with the core profile. With the compatibility profile, you can use client-side arrays or VBOs, you can use the legacy vertex attributes (position, texture coordinates, normal, colour) with shaders, you can use the legacy matrix functions with shaders.

So you could start by writing shaders which work with your existing code, so the only change would be to compile, link and use a shader program. You can then replace the legacy attributes with user-defined attributes, replace the matrix functions with GLM and user-defined uniform variables, and replace the client-side arrays with VBOs.
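
As an illustration of that first step, a shader pair along these lines should work with the existing sprite code unchanged, because it reads the legacy attributes and matrix state (a compatibility-profile sketch; the u_Texture name is my own):

// GLSL 1.20 shaders that reproduce the fixed-function behaviour, so the
// existing glVertexPointer/glTexCoordPointer/glMatrixMode calls keep working.
static const char *legacyVS = R"(
    #version 120
    void main()
    {
        gl_Position    = gl_ModelViewProjectionMatrix * gl_Vertex;
        gl_TexCoord[0] = gl_MultiTexCoord0;
    }
)";

static const char *legacyFS = R"(
    #version 120
    uniform sampler2D u_Texture;   // set to the sprite's texture unit with glUniform1i
    void main()
    {
        gl_FragColor = texture2D(u_Texture, gl_TexCoord[0].st);
    }
)";

Once the game still renders correctly with this program bound, you can swap each legacy piece out one at a time and check the output after each step.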

[QUOTE=GClements;1285929]Also, most of these changes are orthogonal. You don’t need to make all of them unless you need the code to work with the core profile. With the compatibility profile, you can use client-side arrays or VBOs, you can use the legacy vertex attributes (position, texture coordinates, normal, colour) with shaders, you can use the legacy matrix functions with shaders.

So you could start by writing shaders which work with your existing code, so the only change would be to compile, link and use a shader program. You can then replace the legacy attributes with user-defined attributes, replace the matrix functions with GLM and user-defined uniform variables, and replace the client-side arrays with VBOs.[/QUOTE]

Yup; all of this is a case where OpenGL’s holding on to the old API is a clear win, because it gives you an easier migration path to the new one.