Why does drawing a GL_QUADS take 80 ms to execute?

Environment: Pentium 2.0 GHz, 256 MB RAM, Visual Studio 2003, OS: Windows 2003.

I open a 1024*512 window and attach an OpenGL rendering context to it, then set the glOrtho range to (-100, 100, -50, 50, -100, 100).
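Roughly, the setup looks like this (a minimal sketch of what I described, not my exact code):

/* Projection setup as described above:
   left/right = -100/100, bottom/top = -50/50, near/far = -100/100.
   Assumes a current OpenGL rendering context. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-100.0, 100.0, -50.0, 50.0, -100.0, 100.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();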

But when I try to draw a GL_QUADS, it takes 80 ms to execute. Is that normal?


#define Quad_Width 20.0f

glBegin(GL_QUADS);
    glVertex2f(-Quad_Width,  Quad_Width);
    glVertex2f( Quad_Width,  Quad_Width);
    glVertex2f( Quad_Width, -Quad_Width);
    glVertex2f(-Quad_Width, -Quad_Width);
glEnd();
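For reference, here is roughly how such a call can be timed on Windows (a minimal sketch, not my exact code; TimeQuadDraw is just an illustrative name, and the glFinish matters because OpenGL queues commands, so without it the timer only measures how fast the calls are submitted):

#include <windows.h>
#include <stdio.h>
#include <GL/gl.h>

#define Quad_Width 20.0f

/* Times the quad draw with QueryPerformanceCounter. glFinish blocks
   until the driver has actually completed the work. Assumes a current
   OpenGL rendering context. */
void TimeQuadDraw(void)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);

    glBegin(GL_QUADS);
        glVertex2f(-Quad_Width,  Quad_Width);
        glVertex2f( Quad_Width,  Quad_Width);
        glVertex2f( Quad_Width, -Quad_Width);
        glVertex2f(-Quad_Width, -Quad_Width);
    glEnd();
    glFinish();  /* wait for the draw to complete before stopping the timer */

    QueryPerformanceCounter(&t1);
    printf("draw took %.3f ms\n",
           1000.0 * (t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart);
}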

I hope some experts can help me.

Hmmm. Are you sure you have the OpenGL driver for the card installed?

Because it sounds like you’re getting software emulation here.
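One quick way to check from inside the program is to query the implementation strings (a minimal sketch; with a current context, Microsoft's software fallback identifies itself as "GDI Generic", while a hardware driver reports the vendor's own strings):

#include <stdio.h>
#include <GL/gl.h>

/* Query the active OpenGL implementation. The Microsoft software
   fallback reports vendor "Microsoft Corporation" and renderer
   "GDI Generic"; a hardware driver reports its own strings. */
void PrintGLInfo(void)
{
    printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
}

Call it after the rendering context is current; if GL_RENDERER says "GDI Generic", you are on the software path.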

Or perhaps you’re not telling us everything you’re doing: is it a textured quad? If so, how large is the texture, and are you using mipmaps? Are you using a fragment shader?

I can tell you that it’s not the result of vsync: 80 ms per frame would mean 12.5 Hz (1/0.080 s), and no display refreshes that slowly.
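If you want to rule vsync out entirely anyway, it can be disabled through the WGL_EXT_swap_control extension where the driver exposes it (a sketch; the extension is not guaranteed to be available, especially on software fallbacks):

#include <windows.h>
#include <GL/gl.h>

/* Disable vsync via WGL_EXT_swap_control, if the driver exposes it.
   An interval of 0 means buffer swaps do not wait for the retrace. */
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void DisableVSync(void)
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(0);
}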

Thanks, as always, for your help.

I have not implemented texture mapping; I just draw a GL_QUADS of about 512*512 pixels.

I don’t know whether an OpenGL driver has been installed for the display card. How can I check? My card is a board-integrated Intel 82845.

“My card is a board-integrated Intel 82845.”

Oh, dear.

I don’t know the capabilities of that chipset, but usually board-integrated means that the CPU does a lot of the work.

Anyway, if you have the CD that came with the motherboard, you could check whether there’s a driver on it, or you could go to Intel’s site and see if you can download the latest driver there (that’s probably the safest approach).
Then install it, and you’ll be sure you have a proper driver. It might not make a difference, though, if the chipset simply isn’t hardware accelerated.

Why are you using Windows 2003 Server with only 256 MB of RAM as an OpenGL production machine?
I suspect you don’t have optimized drivers. Try running your executable on Windows XP and see the difference…