Speed Anyone???

hi

well I made a CGL class to encapsulate all the OpenGL code… at first I made everything public (HDC, HGLRC, etc.) and accessed them as wnd.m_hDC etc., and when I ran it with a maximized window it ran fine. Then I made all the variables private and used get and set functions to access them… now when I run my program in a maximized window it's REALLY SLOW (all it does is display a rotating triangle), but it runs fine in a 300 by 300 window.

could this "slowness" be caused by the function call overhead… I mean… I only access the GetDC and GetHWnd functions… is it possible that those two functions make such a big difference…

thx

No!

Surely they are only called at the start, so they would have no effect on FPS later.

First, are you sure it works grand at 300x300? If your fullscreen window is 1280x1024 then it's going to be a lot slower than a 300x300 window.

Otherwise you’re going wrong somewhere else, post up some more info.

Yes, my fullscreen window is 1280x1024. But how can it be so slow (TNT2, 32 MB)? My test program remains unchanged; the only change is using the get functions to access the private members. So I suppose I am going wrong somewhere, because the code from nehe.gamedev.net works just fine, and my code is very similar to his. All the OpenGL code is the same. Should I post the whole thing… or should I forget about the class thing?

Well, I would recommend forgetting about the class stuff. There's no need for OOP for such a simple aspect of a program that only executes once.

Just don't forget that 1280x1024 is about 14.5 times more pixels than 300x300 (1,310,720 vs. 90,000), therefore it will be slower.

Write a simple FPS counter and post the results. They may be quite respectable.
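For reference, here is a minimal FPS-counter sketch (assuming a Win32 render loop; UpdateFPS is just an illustrative name, call it once per frame right after SwapBuffers):

#include <windows.h>
#include <stdio.h>

// Call once per frame, right after SwapBuffers. Prints the frame rate once
// per second; GetTickCount is coarse, but plenty for a rough FPS number.
void UpdateFPS ( void )
{
    static DWORD lastTime   = 0;
    static int   frameCount = 0;

    frameCount++;
    DWORD now = GetTickCount();

    if ( lastTime == 0 )            // first call: start the one-second timer
        lastTime = now;

    if ( now - lastTime >= 1000 )   // one second elapsed
    {
        char buf[64];
        sprintf( buf, "FPS: %d\n", frameCount );
        OutputDebugStringA( buf );  // or SetWindowText on your window
        frameCount = 0;
        lastTime   = now;
    }
}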

Is your scene a simple one, i.e. fewer than 10,000 polygons? If so, I'd expect fill rate to be playing a big role, thus 1280x1024 will be about 14.5x slower than 300x300.

I think I will forget about the class thing, because all I am displaying is one rotating triangle and it's slow. And when I run NeHe's code at 1280x1024 with a rotating triangle… it's at the same speed as 300 by 300, so I am going wrong somewhere… which I can't find, because my OpenGL code is the same as NeHe's. Anyway, thx.

Probably your context is using the software implementation.

>>And when I run NeHe's code at 1280x1024 with a rotating triangle… it's at the same speed as 300 by 300, so I am going wrong somewhere<<

No card yet invented will display a rotating triangle at the same speed at 1280x1024 as at 300x300.
I guess what's happening is that the display is getting synced to the monitor's refresh rate, e.g. 75 or 85 Hz. You should be able to disable this in the display properties; look for 'disable vsync'.
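If the driver exposes the WGL_EXT_swap_control extension, you can also turn vsync off from code; a sketch (the entry point may simply not exist on some drivers, hence the NULL check):

#include <windows.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)( int interval );

// Call after the GL context has been created and made current.
void DisableVSyncIfPossible ( void )
{
    // wglGetProcAddress returns NULL if the driver doesn't export the
    // entry point, so checking the pointer is enough for this sketch.
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress( "wglSwapIntervalEXT" );

    if ( wglSwapIntervalEXT )
        wglSwapIntervalEXT( 0 );    // 0 = swap immediately, don't wait for refresh
}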

Originally posted by Qwylli:
Probably your context is using the software implementation.

Is there a way to set the context to use the hardware implementation?

I disabled vsync and it still does the same thing.

There is no way to explicitly tell Windows to use hardware acceleration; you just get it if the hardware CAN accelerate what you ask for.

If you want hardware acceleration, you must make sure you only ask for stuff the hardware can handle. For example, if you ask for an accumulation buffer, almost all consumer-level hardware will fall back to software, since they can't handle an accumulation buffer. On NVIDIA hardware (GeForce/TNT), for example, you must not combine a 32-bit depth buffer with a 16-bit framebuffer, and ask for a stencil buffer only if you are using a 32-bit framebuffer.
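As a sketch of what such a "safe" request might look like (the exact limits depend on the driver, and FillSafePixelFormat is just an illustrative helper name):

#include <windows.h>
#include <string.h>

// Fill a pixel format request that most consumer cards of that era can
// accelerate: 16-bit color + 16-bit depth, or 32-bit color + 24-bit depth
// + 8-bit stencil, and no accumulation buffer.
void FillSafePixelFormat ( PIXELFORMATDESCRIPTOR *pfd, BYTE desktopColorBits )
{
    memset( pfd, 0, sizeof(*pfd) );
    pfd->nSize        = sizeof(PIXELFORMATDESCRIPTOR);
    pfd->nVersion     = 1;
    pfd->dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd->iPixelType   = PFD_TYPE_RGBA;
    pfd->cColorBits   = desktopColorBits;                     // match the desktop depth
    pfd->cDepthBits   = (desktopColorBits == 32) ? 24 : 16;   // no 32-bit depth with a 16-bit framebuffer
    pfd->cStencilBits = (desktopColorBits == 32) ? 8  : 0;    // stencil only with a 32-bit framebuffer
    pfd->cAccumBits   = 0;                                    // accumulation buffer forces software
}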

To check if you are using a software renderer, call glGetString(GL_RENDERER) and see what it returns. If it returns something with “Microsoft”, you are running in software.
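A quick check sketch (only valid once a context has been created and made current):

#include <windows.h>
#include <stdio.h>
#include <GL/gl.h>

// Prints the driver strings; the Microsoft software fallback reports
// vendor "Microsoft Corporation" and renderer "GDI Generic".
void PrintGLInfo ( void )
{
    printf( "GL_VENDOR:   %s\n", glGetString( GL_VENDOR ) );
    printf( "GL_RENDERER: %s\n", glGetString( GL_RENDERER ) );
    printf( "GL_VERSION:  %s\n", glGetString( GL_VERSION ) );
}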

glGetString(GL_RENDERER) returns "GDI Generic"; what is that supposed to be??

Here is my OpenGL code:

bool Init ( void )
{
    glShadeModel( GL_SMOOTH );                              // smooth (Gouraud) shading
    glClearColor( 0.0f, 0.0f, 0.0f, 0.5f );                 // black background
    glClearDepth( 1.0f );                                   // depth buffer clear value

    glEnable( GL_DEPTH_TEST );                              // enable depth testing
    glDepthFunc( GL_LEQUAL );                               // type of depth test

    glHint( GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST );    // nicest perspective correction

    return true;
}

bool Resize ( GLsizei width, GLsizei height )
{
    if ( height == 0 )          // prevent a divide-by-zero below
        height = 1;

    glViewport( 0, 0, width, height );

    glMatrixMode( GL_PROJECTION );
    glLoadIdentity();

    gluPerspective( 45.0f, (GLfloat)width / (GLfloat)height, 0.1f, 100.0f );

    glMatrixMode( GL_MODELVIEW );
    glLoadIdentity();

    return true;
}


This OOP thing caused lots of trouble. Maybe I will switch back to C, or still use C++ but forget about classes.

[This message has been edited by Akash (edited 07-31-2001).]

GDI Generic would be the default software implementation. glGetString(GL_VENDOR) will probably get you something like “Microsoft …”

Are you using GLUT, Win32, MFC, or something else for your window creation? Post the code you use to initialize your window, because that is probably what is causing it to fall back to software rendering.
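For comparison, a bare-bones context setup goes roughly like this (a sketch assuming hWnd is an already-created window; error handling trimmed). The important detail is that SetPixelFormat, wglCreateContext and wglMakeCurrent all use the same HDC, and that you keep that HDC around instead of calling GetDC again every frame:

#include <windows.h>

// Sketch: create a hardware-friendly GL context on an existing window.
bool SetupGL ( HWND hWnd, HDC *outDC, HGLRC *outRC )
{
    PIXELFORMATDESCRIPTOR pfd = { 0 };
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 16;    // keep it modest so the card can accelerate it
    pfd.cDepthBits = 16;

    HDC hDC    = GetDC( hWnd );
    int format = ChoosePixelFormat( hDC, &pfd );
    if ( format == 0 || !SetPixelFormat( hDC, format, &pfd ) )
        return false;       // no usable pixel format on this DC

    HGLRC hRC = wglCreateContext( hDC );
    if ( !hRC || !wglMakeCurrent( hDC, hRC ) )
        return false;

    *outDC = hDC;           // store these; reuse them for rendering
    *outRC = hRC;
    return true;
}

(Registering the window class with CS_OWNDC helps too, so the DC you set the pixel format on stays the one the window keeps using.)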

If you use an ATI card: some of them have 24-bit color modes, but there is no hardware context available for 24-bit color, so you always get software mode (as long as you are in windowed mode). Fullscreen should be hardware then.

I am using Win32, and glGetString(GL_VENDOR) still returns GDI Generic… but I rewrote the CreateScene function and now it works fine… I still don't know why it got so slow, because I wrote the same code as before… the only difference is… it works now!!! thx a lot