Is learning OpenGL version 2.1 useful?

Hi,

I started learning OpenGL this week, using the Go bindings for GLFW on Linux.

I am using OpenGL version 2.1, because that is what’s available on my machine. But I have read that it is obsolete, and that newer versions are quite different.
I might get a new computer in a year, with a fresh installation of Linux and probably a version of OpenGL newer than 2.1.

Is it useful for me to learn OpenGL now, using version 2.1, and then move to a newer version later? Or do I need to forget everything I learned when I start using a newer version?
Is it better to wait to learn OpenGL until I have a new computer?

If I were starting now, I would try to start with the newest OpenGL 4.x available. Lots of very cheap modern cards and chips are available. But if you are forced to work with 2.1, it is a better starting point than anything earlier.

I was in the same position you are in when I started learning. 2.1 is the lowest version you would want to use, but it has enough of the modern concepts that it is far from useless. There are tutorials you can find by searching for “Durian software modern opengl”. They were written with someone in your position in mind: they try to do things in a modern way, but they avoid the newer functions that were added in OpenGL 3.x and 4.x.

OpenGL 2.1 supports GLSL shader programs and buffer objects, the two features that most fundamentally distinguish “modern” OpenGL from “legacy” OpenGL.
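For concreteness, here is a minimal sketch of both in C against the 2.1 API (assuming a current GL context and that the 2.x entry points have been loaded, e.g. with GLEW; the calls map directly onto the Go bindings):

[CODE]
/* Sketch only: assumes a current OpenGL 2.1 context and that the 2.x
 * entry points have been loaded (e.g. via GLEW). Error handling is minimal. */
#include <GL/glew.h>
#include <stdio.h>

/* Buffer objects: upload vertex data into GL-managed storage once. */
GLuint make_vbo(const GLfloat *data, GLsizeiptr size)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);
    return vbo;
}

/* GLSL shader programs: compile a vertex and a fragment shader, link them. */
GLuint make_program(const char *vs_src, const char *fs_src)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    GLuint prog;
    GLint ok;

    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);
    glGetShaderiv(vs, GL_COMPILE_STATUS, &ok);
    if (!ok) fprintf(stderr, "vertex shader failed to compile\n");

    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);
    glGetShaderiv(fs, GL_COMPILE_STATUS, &ok);
    if (!ok) fprintf(stderr, "fragment shader failed to compile\n");

    prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    glGetProgramiv(prog, GL_LINK_STATUS, &ok);
    if (!ok) fprintf(stderr, "program failed to link\n");
    return prog;
}
[/CODE]

Everything you learn about those two mechanisms carries over unchanged to OpenGL 3.x and 4.x.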

The main OpenGL 3+ features which are missing from OpenGL 2.1 are framebuffer objects, vertex array objects and uniform blocks. Other missing features include instanced rendering, conditional render, transform feedback, sync objects, sampler objects, multisample anti-aliasing, and geometry shaders.

You can certainly learn the fundamentals of modern OpenGL programming without any of those; the main issue is that some tutorials may assume their availability. E.g. if a tutorial uses uniform blocks and VAOs, you aren’t going to be able to run the code on a system with OpenGL 2.1, even though you could achieve the same result without them with relatively minor changes to the code. The other missing features tend to be more specialised and less likely to be used until you reach an advanced stage.
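As a rough illustration (the program, attribute, and uniform names here are made up), where a 3.x tutorial would bind a VAO and update a uniform block, the 2.1 version just re-specifies the attribute layout and sets plain uniforms before each draw:

[CODE]
/* Sketch of the 2.1-friendly replacements for VAOs and uniform blocks.
 * Assumes a current GL 2.1 context, "prog" is a linked GLSL program and
 * "vbo" holds vec3 positions; the names are purely for illustration. */
void draw_without_vao_or_ubo(GLuint prog, GLuint vbo, const GLfloat mvp[16])
{
    glUseProgram(prog);

    /* Instead of a uniform block: set each uniform individually. */
    GLint mvp_loc = glGetUniformLocation(prog, "mvp");
    glUniformMatrix4fv(mvp_loc, 1, GL_FALSE, mvp);

    /* Instead of binding a VAO: re-describe the vertex layout each draw. */
    GLint pos_loc = glGetAttribLocation(prog, "position");
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(pos_loc);
    glVertexAttribPointer(pos_loc, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);

    glDrawArrays(GL_TRIANGLES, 0, 3);

    glDisableVertexAttribArray(pos_loc);
}
[/CODE]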

Also, Mesa’s software renderer supports OpenGL 3.0, so you can use that if you just need to experiment with OpenGL 3.0 features.

Could I just update my version of Mesa, without updating my hardware or X server software?

Where can I get Mesa? The official website is no longer available.

Mesa (or specific parts of it) is usually installed along with the X server. It provides the swrast_dri module, as well as DRI modules for some hardware. The version of Mesa is likely to be tied (to an extent) to the version of the X server. I believe that you need at least Mesa 10.2 for OpenGL 3.0 support in the software driver.

[QUOTE=peterkl;1272200]Is it useful for me to learn OpenGL now, using version 2.1, and then move to a newer version later?
Or do I need to forget everything I learned when I start using a newer version?
Is it better to wait to learn OpenGL until I have a new computer?[/QUOTE]

Is your ultimate goal to be proficient in desktop OpenGL or embedded OpenGL (i.e. OpenGL ES)?

I’ll give you my two cents. If you’re already a 3D GPU rendering guru, you know what you need to know to pick up OpenGL quickly regardless. However, if (as I suspect) you’re just starting to pick up GPU rendering, then there’s value in starting your learning now.

I would go ahead and start working with OpenGL 2.1, working through tutorials and asking questions. You don’t need to “forget all you learn” when you move up to later versions of OpenGL. It’s an evolution.

I would recommend that you avoid using what’s called “immediate mode” drawing (glBegin()…glEnd()) and focus on using vertex arrays to store vertex attribute data. You can start with storing these vertex arrays in app-side buffers (aka client arrays) if you want, but as soon as you see how that works, start moving toward use of VBOs. That’ll get your code storing vertex attribute data the way the latest desktop OpenGL versions (and OpenGL ES) prefer that you store it.
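In API terms the step from client arrays to VBOs is small: the only real difference is where the last argument of glVertexAttribPointer points. A rough sketch (attribute index 0 is assumed to be your position attribute, and a current GL 2.1 context with the 2.x entry points loaded is assumed):

[CODE]
/* Client arrays: the attribute pointer refers to your own application memory. */
static const GLfloat verts[] = {
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f,
};

void draw_from_client_array(void)
{
    glBindBuffer(GL_ARRAY_BUFFER, 0);   /* make sure no buffer is bound */
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, verts);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}

/* VBO: the data lives in a buffer object, and the pointer becomes a byte
 * offset into whatever is bound to GL_ARRAY_BUFFER. */
void draw_from_vbo(GLuint vbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}
[/CODE]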

On the shading side, feel free to use the “fixed function pipeline” briefly as training wheels to get some cool scene rendering going and keep you motivated. But I wouldn’t spend too long with it. Move on to playing with shaders pretty soon.
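If it helps to see how small that first step is, here is roughly the simplest GLSL 1.20 pair (shown as C string literals) that reproduces what the fixed-function pipeline would otherwise do for you, using the legacy built-ins that OpenGL 2.1 still provides:

[CODE]
/* Minimal GLSL 1.20 shaders: transform with the fixed-function matrices and
 * pass the per-vertex colour through. Feed these to your usual
 * glShaderSource/glCompileShader/glLinkProgram setup. */
static const char *vertex_src =
    "#version 120\n"
    "void main() {\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "    gl_FrontColor = gl_Color;\n"
    "}\n";

static const char *fragment_src =
    "#version 120\n"
    "void main() {\n"
    "    gl_FragColor = gl_Color;\n"
    "}\n";
[/CODE]

From there you can gradually replace the built-ins (gl_Vertex, gl_ModelViewProjectionMatrix, and so on) with your own attributes and uniforms, which is exactly the form the newer versions require.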

You can do all of the above in an OpenGL 4.5 compatibility context if/when you get a newer GPU. The compatibility profile doesn’t tell you “you can’t do X” anymore (only core profiles do that). You just stop doing X because it might not yield the best performance.

So in summary, there’s value in getting started now with the GPU and GL drivers you have. When you move forward, you can take what you learn with you and build on it.

I have an old AMD Radeon 54xx; it is slow and fanless. You can have it for free; I doubt it is worth much anyway. But it supports OpenGL 4.4, if Wikipedia is to be believed. It is just taking up room. I can send it to you for free using the cheapest mail service if you are in the USA or somewhere else with cheap postage.

You can email me a mailing address if you want it, codepilot at gmail dot com

I would like to respond to this with links to sources, but apparently I am not allowed to do that.

Are there other forums for help on OpenGL?

To mitigate spam, users below a certain post count aren’t allowed to post links. What links were you going to respond with? You could just remove the http prefix of the link.

Second attempt, now without links.

I found the tutorial and read the first chapters. I compiled and ran the example code from GitHub at jckarter/hello-gl (“Hello World” in OpenGL 2.0).
It runs fine on my computer.

I tried to convert the C code into Go, but my version doesn’t work. It compiles and runs, but all I get is an empty white window.
That code is on GitHub at pebbe/gl, in gl/hello.
I don’t know what I am doing wrong. Would someone please take a look?

I wrote another program in Go that does work. It uses “immediate mode”. It is also on GitHub at pebbe/gl, in gl/gl2.1.

Edit:

It’s fixed now, with the help of someone on golang-nuts.