Hey All,

I'll start by outlining the issue and then provide some code samples to give context to my problem. However, the meat of my question is this: at what point in the graphics pipeline does homogenization (the perspective divide) occur, and is the developer expected to do it manually?

Basically I've got some 2-D squares that I can draw by creating a VAO and using a simple vertex/fragment shader. As of now each square has its own MV (model-view) matrix, and there is one shared projection matrix. The projection matrix is orthographic, and this seems to work fine for getting things onto the screen.

However, if I want my objects to move back and forth in the z direction I've got to use a different projection, so I've used GLM's frustum projection to give myself a volume for my squares to exist in (http://glm.g-truc.net/0.9.2/api/a002...b877366143116a). Using this matrix makes my objects fail to appear on screen.

By printing out both my orthographic and frustum projection matrices I can see that they look reasonable, given the information I've found on the subject (http://www.songho.ca/opengl/gl_projectionmatrix.html).

However, I notice the following: the orthographic matrix is constructed in such a way that any operation on a homogeneous point (i.e. point.w = 1) preserves that property after the operation. This is not the case with the frustum matrix (nor with a perspective matrix, I think), which leads me to believe I need to take care of it myself (i.e. divide the w component out of my final result).

Adding that simple division to my vertex shader code did not fix my issue, but I feel as though I'm on the right path. It's difficult for me to be sure, however; it could be that I'm forgetting to enable something like GL_DEPTH_TEST or to set a depth function.

Here is the code I use to create the VAO. (Note that Drawable is a class of my own design that contains the VAO, the MV matrix, and a few other pertinents; JShader is the shader program I am using. The "get...Handle" methods return handles to the position and texture coordinate variables in the shader.)

Code:
Drawable initQuad(JShader& shader){
    const int nVert = 4, dim = 3, nIndices = 4;
    const int vStride = nVert * dim * sizeof(GLint);
    const int tStride = nVert * 2 * sizeof(GLfloat);
    const int iStride = nIndices * sizeof(GLuint);

    Drawable dr;

    const GLint vertices[nVert][dim] = {
        {0, 0, 0}, {40, 0, 0}, {0, 40, 0}, {40, 40, 0}
    };
    // Two components per texture coordinate (the original [nVert][3]
    // declaration left a stray third component that tStride didn't cover).
    const GLfloat texCoords[nVert][2] = {
        {0.f, 0.f}, {1.f, 0.f}, {0.f, 1.f}, {1.f, 1.f}
    };
    const GLuint indices[nIndices] = {0, 1, 2, 3};

    GLuint tmpVAO;
    glGenVertexArrays(1, &tmpVAO);
    glBindVertexArray(tmpVAO);

    GLuint buffers[3];
    glGenBuffers(3, buffers);

    // vertices
    glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
    glBufferData(GL_ARRAY_BUFFER, vStride, vertices, GL_STATIC_DRAW);
    glEnableVertexAttribArray(shader.getPosHandle());
    glVertexAttribPointer(shader.getPosHandle(), dim, GL_INT, GL_FALSE, 0, 0);

    // tex coords (2 components, not dim)
    glBindBuffer(GL_ARRAY_BUFFER, buffers[1]);
    glBufferData(GL_ARRAY_BUFFER, tStride, texCoords, GL_STATIC_DRAW);
    glEnableVertexAttribArray(shader.getTexCoordHandle());
    glVertexAttribPointer(shader.getTexCoordHandle(), 2, GL_FLOAT, GL_FALSE, 0, 0);

    // indices
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[2]);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, iStride, indices, GL_STATIC_DRAW);

    glBindVertexArray(0);
    dr.setVAO(tmpVAO);
    return dr;
}

I initialize my projection matrices like so:

Code:
glm::mat4 proj   = glm::ortho<GLfloat>(-200.f, 200.f, -200.f, 200.f, -200.f, 200.f);
glm::mat4 proj_F = glm::frustum<GLfloat>(-200.f, 200.f, -200.f, 200.f, -200.f, 200.f);

And when printed they look like this:

Ortho:

[  0.005,  0,      0,      0 ]
[  0,      0.005,  0,      0 ]
[  0,      0,     -0.005,  0 ]
[ -0,     -0,     -0,      1 ]

Frustum:

[ -1,  0,  0,    0 ]
[  0, -1,  0,    0 ]
[  0,  0, -0,   -1 ]
[  0,  0,  200,  0 ]

Note that, aside from sign differences (which could be the issue), the factor of 200 should ensure similar mappings in x and y into screen space (although with the frustum matrix everything should also get divided by the resulting w). After declaring this matrix I immediately send it to the shader.

I draw things with glDrawElements after sending the MV matrix to the shader. My shader code, initially, looked like this:

Vertex:

Code:
#version 130

uniform mat4 projMat;
uniform mat4 MVMat;

attribute vec4 vPosition;
attribute vec2 a_TexCoord;
varying vec2 v_TexCoord;

void main(){
    v_TexCoord = a_TexCoord;
    gl_Position = projMat * MVMat * vPosition;
}

Fragment:

Code:
#version 130
precision mediump float;

varying vec2 v_TexCoord;

uniform sampler2D u_Texture;
uniform vec4 fColor;

void main(){
    gl_FragColor = fColor * texture2D(u_Texture, v_TexCoord);
}

After trying to manually homogenize, my vertex shader looks like

Code:
#version 130

uniform mat4 projMat;
uniform mat4 MVMat;

attribute vec4 vPosition;
attribute vec2 a_TexCoord;
varying vec2 v_TexCoord;

void main(){
    v_TexCoord = a_TexCoord;
    gl_Position = projMat * MVMat * vPosition;
    gl_Position /= gl_Position.w;
}

To no avail. I had a hypothesis that my vPosition variable had a w value of 0 by default, but I tried manually setting it to 1 and got nowhere.

If you actually read all that, I appreciate it. This is a pretty simple issue: basically I'm trying to move into 3-D after getting what I needed from basic 2-D examples, but the projection matrices are tripping me up. It could be that things aren't getting homogenized, but it could also be me not enabling some OpenGL functionality during initialization (or a million other things).

I've consulted the following links, but the information I found there seemed a bit contradictory.

http://stackoverflow.com/questions/2...et-homogenized

http://gamedev.stackexchange.com/que...alized-in-glsl

http://www.opengl.org/discussion_boa..._Vertex-a-vec4

Any advice helps. Also, if you have any code pointers/GLSL pointers (I know people use in and out these days...), I'd like to hear those as well.

Thanks,

John