Gonna abandon GLUT.

I’ve got a Windows/GLUT/OpenGL program that we’re currently running in ‘software rendering’ mode. We have a Matrox card running with Windows XP, and we’re having trouble running with hardware acceleration, and yes the drivers are installed. We’re considering using a NVidia Quadro4 instead.

We’re also considering living with ‘software rendering’ but improving the performance of our program. glutMainLoopEvent() seems to be the bottleneck. When we call it only when there’s something to draw, we get a drastic improvement in performance. But since we’ve got 4 windows and 2 subwindows, it would be nice if each window had its own glutMainLoopEvent(). Is there a way for a non-multitasking program to have multiple glutMainLoopEvent() loops, one for each window?

If not, is there a way to do that with pure Windows code?

I’m not much of a Windows programmer, so stifle your need to snicker and guffaw!

ps: glutMainLoopEvent() is part of the OpenGLUT library. It exits instead of remaining in the loop.
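To make the question concrete, here’s a rough sketch of the kind of loop I mean (made-up names, untested; OpenGLUT/freeglut headers assumed):

```c
#include <windows.h>        /* Sleep() */
#include <GL/openglut.h>    /* OpenGLUT; with freeglut this would be <GL/freeglut.h> */

#define NUM_WINDOWS 4
static int win[NUM_WINDOWS];          /* ids returned by glutCreateWindow() */
static int needs_redraw[NUM_WINDOWS]; /* set elsewhere when a window's data changes */

void run(void)
{
    for (;;)
    {
        int i, anything_dirty = 0;

        for (i = 0; i < NUM_WINDOWS; ++i)
        {
            if (needs_redraw[i])
            {
                glutSetWindow(win[i]);  /* make that window current ...       */
                glutPostRedisplay();    /* ... and queue its display callback */
                needs_redraw[i] = 0;
                anything_dirty = 1;
            }
        }

        /* GLUT keeps one event queue for all windows, so there is only ever
         * one glutMainLoopEvent(); calling it once services every window. */
        if (anything_dirty)
            glutMainLoopEvent();
        else
            Sleep(1);                   /* yield the CPU when idle */
    }
}
```

The catch with only pumping events when something is dirty is that mouse/keyboard events sit in the queue until the next redraw.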

I have repeated this in the forum many times: GLUT is a utility library that helps you learn OpenGL, but it should not, in my opinion, be used for anything too serious. Invest the time to write your own handling functions/routines and algorithms. Besides the great benefit of having complete control over what happens, it will benefit the programmer in terms of learned skills.

Not using GLUT may not make it easier; the same thing can still bite you with direct system calls.

What are your requested framebuffer attributes?

What do you request in the glutInitDisplayMode call? If you ask for something not supported in hardware (like stencil, multisample or accumulation buffers, for example; it depends heavily on the hardware), or if GLUT asks for too much precision internally, then you may end up with a software fallback.

Yep, as said, GLUT isn’t intended for serious apps, although it’s not particularly deficient for basic stuff. You can get away with using it, but if you tell people you’re ditching GLUT, nobody will be jumping up to object.

I am requesting GLUT_DOUBLE and GLUT_RGB in glutInitDisplayMode().

I tried requesting -nothing- in glutInitDisplayMode() and the software Renderer/Vendor still came up as GDI / Microsoft Corp.
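For anyone curious, a minimal check along these lines (a trimmed-down, hypothetical sketch, not our actual code) is enough to see which renderer backs a GLUT window:

```c
#include <stdio.h>
#include <GL/glut.h>

/* Print which OpenGL implementation actually backs the current window.
 * "GDI Generic" / "Microsoft Corporation" means the software renderer. */
static void report_renderer(const char *label)
{
    printf("%s: vendor=%s renderer=%s version=%s\n",
           label,
           (const char *) glGetString(GL_VENDOR),
           (const char *) glGetString(GL_RENDERER),
           (const char *) glGetString(GL_VERSION));
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);   /* same request as above */
    glutCreateWindow("fallback check");            /* creates and binds a GL context */
    report_renderer("window 1");
    return 0;
}
```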

Was looking at what it would take to do it in Windows/OpenGL and came to the conclusion that I would have to do the same types of things in Windows that GLUT does for me. So I’m now considering staying with GLUT. With our worst case 4 displays and software rendering, our application uses a maximum of 33% of the CPU processing. This is at the upper limit of being acceptable to us. And if we get the right hardware to allow hardware acceleration, I assume we’ll be in even better shape. For a few hundred extra bucks we should be ok. I’ve got the NVidia Quadro4 on my PC and it runs the same s/w (with glut) at about 10% CPU usage. Of course, it’s only 1 display running 4 windows.

We could be wasting a lot of time getting rid of GLUT, with not much performance gain.

We also considered putting the rendering in a separate task but so far that hasn’t been necessary.

What if I’m using too much precision in a call to an OpenGL function? Would that cause reversion to software rendering mode? How the heck do you find something like that? Put a GetErrorString after suspicious code?

I’m never gonna know about such errors unless I go searching for them. I guess I could throw up a Windows message box from time to time.
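Something like this macro is what I had in mind (an untested sketch; and note a software fallback isn’t a GL error, so it wouldn’t show up here):

```c
#include <stdio.h>
#include <GL/glu.h>     /* gluErrorString() */

/* Drain the GL error queue and report where the check was placed. */
#define CHECK_GL_ERROR()                                             \
    do {                                                             \
        GLenum err;                                                  \
        while ((err = glGetError()) != GL_NO_ERROR)                  \
            fprintf(stderr, "GL error at %s:%d: %s\n",               \
                    __FILE__, __LINE__, gluErrorString(err));        \
    } while (0)

/* usage:
 *     glDrawPixels(...);      // suspicious call
 *     CHECK_GL_ERROR();
 */
```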

I think GLUT is very good! I don’t see any advantage to writing your own functions!

That sucks; without looking at the GLUT code I don’t know what’s causing this. This is not about precision requested later in the application (although technically stuff like this could cause a software fallback, in general it doesn’t). It’s a very common issue caused by initial display-attribute allocation problems with Windows pixel format descriptors.

The correct way would be to enumerate all pixel formats and check them.

This has nothing to do with errors per se.
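A rough sketch of what that enumeration looks like with plain Win32 calls (the hdc, function name and formatting are all made up; error handling omitted):

```c
#include <windows.h>
#include <stdio.h>

/* List every pixel format the DC offers and say whether it is the GDI
 * software renderer or a driver-backed (accelerated) format. */
void list_pixel_formats(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd;
    int i, count, software;

    /* With a NULL descriptor, DescribePixelFormat just returns how many
     * pixel formats this DC supports. */
    count = DescribePixelFormat(hdc, 1, sizeof(pfd), NULL);

    for (i = 1; i <= count; ++i)
    {
        DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);

        /* PFD_GENERIC_FORMAT without PFD_GENERIC_ACCELERATED is Microsoft's
         * GDI software implementation. */
        software = (pfd.dwFlags & PFD_GENERIC_FORMAT) &&
                  !(pfd.dwFlags & PFD_GENERIC_ACCELERATED);

        printf("format %2d: color %2d depth %2d stencil %2d  %s\n",
               i, pfd.cColorBits, pfd.cDepthBits, pfd.cStencilBits,
               software ? "software (GDI)" : "driver");
    }
}
```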

It doesn’t seem like you’re being too onerous, although you should probably try adding a depth buffer to that GLUT request; that may fix the problem, which would indicate a poor Windows GLUT implementation (again, I haven’t looked at the GLUT code, so I don’t know whether this will fix your problem, but that’s almost certainly where you should be focusing your efforts: the display attributes).
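As a one-liner, the tweak I mean is just:

```c
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);  /* also ask for a depth buffer */
```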

I have seen weird software-path fallback issues on a low-end NVIDIA card (a low-end corporate GF4 Quadro) even when I knew the chosen PFD was hardware accelerated and worked with other apps. I was intercepting the OpenGL code, so I never really tracked it down; the display calls were made via SDL in that case. I reported it to the source of the software and moved on to more productive stuff.

Was finally able to get out of software rendering mode. The problem on my system is caused by glutInitWindowPosition(). When I comment out glutInitWindowPosition() for each of our 4 displays, the GL_RENDERER comes up as ‘Intel Brookdale-G’ and the 4 displays all crowd onto one monitor. The code that used to run acceptably well with software rendering now looks like crap when crowded all onto one screen.

Permit me to explain our configuration. We’ve got 5 flat-panel monitors.

  1. One of them is connected to the Intel Brookdale Graphics Controller - which is on the motherboard.

  2. 2nd display is connected to a Matrox G400/G450
    Was using glutInitWindowPosition( 1024, 0) here.

  3. 3rd display is connected to a Matrox G400/G450
    Was using glutInitWindowPosition( 2048, 8) here.

  4. 4th display is connected to a Matrox G400/G450
    Was using glutInitWindowPosition( 3072, 8) here.

  5. 5th display is connected to a Matrox G400/G450
    Was using glutInitWindowPosition( 4096, 8) here.

When I use glutInitWindowPosition to display to the other 4 monitors (1024, 2048, 3072, 4096), it reverts to software rendering mode.

Should I use something other than glutInitWindowPosition to set the window position of the 4 displays? Something more specific?
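In case it matters, one variation I could try is creating each window first and then moving it with glutPositionWindow(). A hypothetical sketch (whether acceleration survives the move across cards is presumably driver-dependent; printing GL_RENDERER shows what we actually got):

```c
#include <stdio.h>
#include <GL/glut.h>

/* Create the window first (pixel format negotiated as usual), then move it
 * onto the target monitor; x offsets 1024/2048/3072/4096 pick the monitor
 * in our virtual-desktop layout. */
int make_window(const char *title, int x, int y)
{
    int id = glutCreateWindow(title);
    glutPositionWindow(x, y);
    printf("%s renderer: %s\n", title, (const char *) glGetString(GL_RENDERER));
    return id;
}
```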

You can’t have hardware acceleration across multiple very different cards, GLUT or not.

You are lucky to even have hardware acceleration when all the displays are on the same monitor; my personal tests showed that as soon as a multiple-card display setup was used with Windows, it went down to software rendering…

The cases I have heard of working in hardware were with multiple cards from the same vendor, not necessarily the same products. I am pretty sure it works with NVidia.

Maybe other contributors have more experience/knowledge of such cases?

ZBuffer -

Do you live in Grenoble or just on a ski vacation?

Your info regarding Windows and s/w rendering was a huge help, because it verifies what I have been seeing. We plan on trying a NVidia Quadro4 in our system, but it sounds like it won’t help, since it will reside on the same system as the Intel Brookdale-G.

Thanks for your assistance! What do you recommend we do, hardware-wise?

Matrox is not the best solution in terms of GL support. I’m sure you knew that.
If you make the G400 your main display, I’m sure it will run in hardware, but the drivers are likely very buggy.

For NVidia, apparently you can have a multi-card, one-monitor-per-card config. You just need to create the window on the right monitor first, then init GL.
I don’t have direct experience.
You might get more responses in the advanced forum.
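I don’t have code for this, but the idea would be something like the following pure Win32/WGL sketch (untested; the class name and buffer sizes are arbitrary, and the message loop and error handling are omitted):

```c
#include <windows.h>
#include <GL/gl.h>
#include <stdio.h>

/* Create a window at (x,y) on the virtual desktop BEFORE touching OpenGL,
 * so the pixel format is negotiated against the card driving that monitor,
 * then build a context there and report which renderer we ended up with. */
HGLRC create_gl_window(HINSTANCE hInst, int x, int y, int w, int h)
{
    WNDCLASSA wc = {0};
    PIXELFORMATDESCRIPTOR pfd = {0};
    HWND hwnd;
    HDC hdc;
    HGLRC rc;
    int pf;

    wc.style         = CS_OWNDC;
    wc.lpfnWndProc   = DefWindowProcA;
    wc.hInstance     = hInst;
    wc.lpszClassName = "GLWin";
    RegisterClassA(&wc);

    hwnd = CreateWindowA("GLWin", "GL", WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                         x, y, w, h, NULL, NULL, hInst, NULL);
    hdc = GetDC(hwnd);

    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 24;
    pfd.cDepthBits = 24;

    pf = ChoosePixelFormat(hdc, &pfd);
    SetPixelFormat(hdc, pf, &pfd);

    rc = wglCreateContext(hdc);
    wglMakeCurrent(hdc, rc);

    printf("window at (%d,%d): renderer = %s\n",
           x, y, (const char *) glGetString(GL_RENDERER));
    return rc;
}
```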

Our next experiment was to disable the Intel Brookdale Graphics Controller on our motherboard and see if we can run with the Matrox in hardware rendering mode. But we’re a little uneasy about going to System->Settings->Display Adapters->Intel Brookdale-G … then hitting ‘Disable’. What’s that going to do? It should simply disable the Intel Brookdale-G, but then does it switch to the Matrox?

We’re a bunch of big chickens.

To be honest, none of us knew about the bugginess of Matrox. So we’re actually a bunch of big dumb chickens.

mangoYellow, I do live in Grenoble, and I am not much of a skier, more of a snowshoer :smiley:

Normally, in the display control panel, you should be able to enable/disable desktop extension and choose the main graphics card without problem.

I don’t use Windows but Linux, where it works with one NVidia card for 4 displays. Some Quadro cards have two dual-DVI connectors. AFAIK under Windows it then becomes one big display, which means you have one buffer that stretches virtually across the screens: four 1024x768 displays become a single 2048x1536 one. But I’m not an expert in this; a friend only told me it works that way. Maybe the Matrox card has 4-display support too, but the GL implementation from Matrox is AFAIK really bad.