Help Converting a GLEW Grid to a GLAD Grid

I am trying to create a grid using GLAD instead of GLEW. The following is part of my conversion attempt; it doesn’t draw anything on the screen yet. May I have some help, please? It’s pretty standard stuff.

int indices[30] = {};   // filled in by make_plane()

glGenVertexArrays(1, &VA01);
glBindVertexArray(VA01);

//START WORLD
// This will identify our vertex buffer
GLuint vertexbufferLAND;
// Generate 1 buffer, put the resulting identifier in vertexbufferLAND
glGenBuffers(1, &vertexbufferLAND);
// The following commands will talk about our 'vertexbufferLAND' buffer
glBindBuffer(GL_ARRAY_BUFFER, vertexbufferLAND);

// 4 x 4 grid, 3 floats per vertex
GLfloat polygonVertices[48] = {};
make_plane(4, 4, polygonVertices, indices);

// Give our vertices to OpenGL.
glBufferData(GL_ARRAY_BUFFER, sizeof(polygonVertices), polygonVertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(
    0,        // attribute 0; must match the layout location in the vertex shader
    3,        // size (x, y, z)
    GL_FLOAT, // type
    GL_FALSE, // normalized?
    0,        // stride
    (void*)0  // array buffer offset
);

 glfwSetKeyCallback(window, key_callback);
 

while (!glfwWindowShouldClose(window))
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glm::mat4 modelMatrix = glm::mat4(1.0f);
    glm::mat4 view = glm::mat4(1.0f);
    glm::mat4 projection = glm::mat4(1.0f);

    ourShader.use();
    GLuint MatrixID = glGetUniformLocation(ourShader.ID, "modelMatrix");
    glUniformMatrix4fv(MatrixID, 1, GL_FALSE, glm::value_ptr(modelMatrix));
    unsigned int transformLoc2 = glGetUniformLocation(ourShader.ID, "view");
    glUniformMatrix4fv(transformLoc2, 1, GL_FALSE, glm::value_ptr(view));
    //projection = glm::perspective(glm::radians(45.0f), (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 10000.0f);
    ourShader.setMat4("projection", projection);

    glEnable(GL_DEPTH_TEST);
    glBindBuffer(GL_ARRAY_BUFFER, vertexbufferLAND);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glDrawElements(GL_TRIANGLE_STRIP, 30, GL_UNSIGNED_INT, indices);

    // Swap front and back buffers
    glfwSwapBuffers(window);
    // Poll for and process events
    glfwPollEvents();
}
glfwTerminate();
return 0;
}


void make_plane(int rows, int columns, GLfloat *vertices, int *indices) {
    // Set up vertices
    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < columns; ++c) {
            int index = r * columns + c;
            vertices[3 * index + 0] = (float)c * 20;
            vertices[3 * index + 1] = (float)r * 20;
            vertices[3 * index + 2] = 0.0f;
        }
    }

    // Set up indices
    int i = 0;
    for (int r = 0; r < rows - 1; ++r) {
        indices[i++] = r * columns;
        for (int c = 0; c < columns; ++c) {
            indices[i++] = r * columns + c;
            indices[i++] = (r + 1) * columns + c;
        }
        indices[i++] = (r + 1) * columns + (columns - 1);
    }
}

Thank you,

Joshua

GLAD and GLEW are OpenGL loading libraries. Outside of what files you #include and what initialization functions to call, they shouldn’t behave any differently (unless you’re using a C++-based interface for GLAD). Are you able to run a basic test program using GLAD?
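To put the two side by side, the loader-specific parts are just the header and a one-time init call after the context is made current. A sketch, assuming GLFW and the plain C loader generated by the GLAD web service (exact function names depend on the generated files):

```cpp
// GLEW:
//   #include <GL/glew.h>
//   glewExperimental = GL_TRUE;  // needed for core-profile contexts
//   if (glewInit() != GLEW_OK) { /* report the error and bail out */ }

// GLAD (C loader):
//   #include <glad/glad.h>
//   if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress)) {
//       /* report the error and bail out */
//   }
```

Everything after that point is plain OpenGL and should behave identically with either library.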

Yes I do have a test program working with GLAD. However, I am having problems with a different rendering program that I am trying to rewrite in GLAD (was GLEW). I am indeed programming with C++. The following are some examples of the code that is written using GLEW and no longer works:

        glEnableClientState( GL_VERTEX_ARRAY );

        glVertexPointer( 3, GL_FLOAT, 0, polygonVertices );

        //this line does work
        glDrawArrays( GL_TRIANGLE_STRIP, 0, 5 );

        glDisableClientState( GL_VERTEX_ARRAY );

Thank you for your time,
Josheir

Here is some more GLEW code that doesn’t work with GLAD, using C++:


    //this line does work
    glViewport( 0.0f, 0.0f, SCREEN_WIDTH, SCREEN_HEIGHT ); 
    glMatrixMode( GL_PROJECTION ); 
    glLoadIdentity( ); 
    glOrtho( 0, SCREEN_WIDTH, 0, SCREEN_HEIGHT, 0, 1 ); 
    glMatrixMode( GL_MODELVIEW );
    glLoadIdentity( );

Thanks again,

Josheir

P.S. In the first post, I was hoping for fixes from someone who reads this kind of code quite easily.

Ironically, your last two examples use only functions from the OpenGL 1.1 API, which doesn’t need either GLEW or GLAD.

Even your first example doesn’t include any code relevant to GLEW/GLAD, although it does use functions which require such a library on Windows.

In short: wherever the problem lies, it isn’t in the code which you’ve shown.

Thank you for a workable solution, GClements.

I can’t quite put my faith in it yet. Would a Senior Member or OpenGL Guru always tell me if the answer involved deprecation, or advice against using something? Can you help me any more with that, and is the older code still OK for me to use?

I really like the code’s understandability and simplicity, and how cleanly it reads in use.

Thankfully,

Josheir

Actually, that advice is really useful. If you follow it, then next time you won’t have to bother with GLEW or GLAD. You only need those libs to load the GL function pointers and #define the GL constants, nothing more.

how about this:

struct Vertex
{ 
	glm::vec3 Position; 
};

std::vector<Vertex> vertices;

// xz-grid
for (int i = -10; i <= +10; i++)
{
	
	// line parallel to x
	vertices.push_back({ { -10, 0, i } });
	vertices.push_back({ { +10, 0, i } });
	
	// line parallel to z
	vertices.push_back({ { i, 0, -10 } });
	vertices.push_back({ { i, 0, +10 } });
}

// ...

glDrawArrays(GL_LINES, 0, (GLsizei)vertices.size());

[QUOTE=Josheir;1293665]is the older code still ok for me to use?

I really like the code’s understandability and simplicity, and it’s well done usage.[/QUOTE]

It’s fine, but look at it as “training wheels”. It makes it easier to get something going without understanding as much of what’s going on. But in many cases, it limits your performance and/or your capability.

So just realize that you’re using the training wheels, and when you know more about what’s going on, get rid of them.

Also keep in mind that some vendors don’t support this older, deprecated code path as well (if at all). So if you use it, you might not be able to run your program on as many GPUs.

Hey, John Connor, cool code too! Dark Photon, where is the deprecation list for OpenGL? Is there any very dependable listing?

Thanks,

Josheir

“With communication we need understanding.” - JE

Yes. It’s in the spec. OpenGL 3.0 deprecated a bunch of the old stuff. OpenGL 3.1 split the spec into two versions (and GL context types): core and compatibility. Core removes the old deprecated stuff. Compatibility retains it.

Just check out the specs for any version of OpenGL >= 3.1 on the OpenGL Registry, and you’ll see a Core specification and a Compatibility Specification. For instance, for OpenGL 4.6:

[ul]
[li]OpenGL 4.6 - Core Spec [/li][li] OpenGL 4.6 - Compatibility Spec [/li][/ul]
In the Compatibility spec, functionality which has been removed in the Core spec is shown in red (at least it looks red to me; it says it’s “typeset in orange”, but it’s definitely a reddish orange; true orange has almost no blue :)). This is a pretty handy reference (and the authoritative one).

Here are a few pages in the wiki that outline, in bullet points, some of the deprecated functionality:

[ul]
[li]Category: Deprecated [/li][li]General_OpenGL#Deprecated_Functionality [/li][/ul]
You can create a GL context in the core profile so that the old deprecated stuff isn’t available, or create a GL context in the compatibility profile so that it’s available to you. Also a handy tool.
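Since this thread already uses GLFW, requesting one profile or the other is just a matter of window hints set before glfwCreateWindow. A sketch (the version numbers are an example; pick whatever your driver supports):

```cpp
// Core profile: the old deprecated functionality is unavailable.
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

// Compatibility profile: the old fixed-function calls keep working
// (driver support permitting).
// glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);
```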

Thanks for the way cool information, Dark Photon.

Josheir