Well, I made a CGL class to encapsulate all the OpenGL code… at first I made everything public (HDC, HGLRC, etc.) and accessed them as wnd.m_hDC etc. When I ran it with a maximized window it ran fine. Then I made all the variables private and used get and set functions to access them… now when I run my program in a maximized window it's REALLY SLOW (all it does is display a rotating triangle), but it runs fine in a 300 by 300 window.
Could this "slowness" be caused by function-call overhead? I mean… I only access the GetDC and GetHWnd functions… is it possible that those two functions make such a big difference?
Yes, my fullscreen window is 1280x1024. But how can it be so slow (TNT2, 32 MB)? My test program remains unchanged; the only change is using the get functions to access the private members. So I suppose I am going wrong somewhere, because the code from nehe.gamedev.net works just fine, and my code is very similar to his. All the OpenGL code is the same. Should I post the whole thing, or should I forget about the class thing?
Is your scene a simple one, i.e. fewer than 10,000 polygons? If so, I'd expect fill rate to be playing a big role, so 1280x1024 will be about 14.5x slower than 300x300.
I think I will forget about the class thing, because all I am displaying is one rotating triangle and it's slow. And when I run NeHe's code at 1280x1024 with a rotating triangle, it's at the same speed as 300 by 300, so I am going wrong somewhere… which I can't find, because my OpenGL code is the same as NeHe's. Anyway, thanks.
>>and when i run nehe's code at 1280x1024 with a rotating triangle… its at the same speed as 300 by 300 so i am going wrong somewhere<<

No card yet invented will display a rotating triangle at the same speed at 300x300 as at 1280x1024.
I guess what's happening is that the display is getting synced to the monitor's refresh rate, e.g. 75 or 85 Hz. You should be able to disable this in the display properties; look for "disable vsync".
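Besides the display-properties toggle, vsync can usually be turned off from code on Windows through the WGL_EXT_swap_control extension. A minimal sketch, assuming the driver exposes the extension (it must be called while an OpenGL context is current, and older drivers may not have it at all):

```cpp
#include <windows.h>
#include <GL/gl.h>

// Function pointer type for wglSwapIntervalEXT (WGL_EXT_swap_control).
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

// Attempt to disable vsync; silently does nothing if the extension is absent.
void disableVSync() {
    // wglGetProcAddress returns NULL when the driver lacks the extension.
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(0); // 0 = swap immediately, 1 = wait for vblank
}
```

With vsync on, a trivial scene is capped at the refresh rate at every resolution, which would make 300x300 and 1280x1024 look "the same speed".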
There is no way to explicitly tell Windows to use hardware acceleration; you just get it if the hardware CAN accelerate what you ask for.
If you want hardware acceleration, you must make sure you only ask for stuff the hardware can handle. For example, if you ask for an accumulation buffer, almost all consumer-level hardware will fall back to software, since they can't handle an accumulation buffer. On NVIDIA hardware (GeForce/TNT), for example, you must not combine a 32-bit depth buffer with a 16-bit framebuffer, and you should ask for a stencil buffer only if you are using a 32-bit framebuffer.
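Those NVIDIA constraints can be written down as a tiny predicate. This encodes only the two rules stated above, so treat it as TNT/GeForce-era folklore rather than a general law; `likelyAccelerated` is a hypothetical helper name:

```cpp
// Heuristic from the post above: on TNT/GeForce-class hardware, some
// pixel-format combinations force a software fallback.
bool likelyAccelerated(int colorBits, int depthBits, int stencilBits) {
    // Rule 1: don't combine a 32-bit depth buffer with a 16-bit framebuffer.
    if (colorBits == 16 && depthBits == 32) return false;
    // Rule 2: ask for a stencil buffer only with a 32-bit framebuffer.
    if (stencilBits > 0 && colorBits != 32) return false;
    return true;
}
```

So, for example, a 32-bit color / 24-bit depth / 8-bit stencil format passes, while 16-bit color with a 32-bit depth buffer does not.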
To check if you are using a software renderer, call glGetString(GL_RENDERER) and see what it returns. If it returns something with “Microsoft”, you are running in software.
GDI Generic would be the default software implementation. glGetString(GL_VENDOR) will probably get you something like “Microsoft …”
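A small helper along the lines of the two posts above — the string matching is the only part specific to this thread, and `isSoftwareGL` is a hypothetical name. In a real program you would feed it the result of glGetString(GL_RENDERER) or glGetString(GL_VENDOR) after the context is made current:

```cpp
#include <cstring>

// Returns true if the vendor/renderer string looks like Microsoft's
// unaccelerated software implementation ("Microsoft ..." / "GDI Generic").
bool isSoftwareGL(const char *s) {
    if (!s) return true; // no current context: certainly nothing accelerated
    return std::strstr(s, "Microsoft")   != nullptr ||
           std::strstr(s, "GDI Generic") != nullptr;
}
```

A hardware driver returns its own strings instead (for a TNT2, the renderer string would mention the chip, not Microsoft).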
Are you using GLUT, Win32, MFC, or something else for your window creation? Post the code you use to initialize your window, because that is probably what is causing it to fall back to software rendering.
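For reference, this is the shape of the Win32 pixel-format setup that usually decides hardware vs. software — a sketch of the standard ChoosePixelFormat/SetPixelFormat dance, not the original poster's code. The bit depths chosen here are assumptions; per the NVIDIA notes earlier in the thread, mismatched depths are a classic way to end up on GDI Generic:

```cpp
#include <windows.h>

// Pick and set a pixel format on the window's DC before wglCreateContext.
// Returns false if no matching format could be set.
bool setupPixelFormat(HDC hdc) {
    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;  // ideally match the desktop color depth
    pfd.cDepthBits = 24;  // 24-bit depth pairs safely with 32-bit color

    int fmt = ChoosePixelFormat(hdc, &pfd);
    return fmt != 0 && SetPixelFormat(hdc, fmt, &pfd) != FALSE;
}
```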
If you use an ATI card: some of them have 24-bit modes for the color, but there is no hardware context available for 24-bit color, so you always get software mode (as long as you are in windowed mode). Fullscreen should be hardware then.
I am using Win32, and glGetString(GL_VENDOR) still returns "GDI Generic"… but I rewrote the CreateScene function and now it works fine. I still don't know why it got so slow, because I wrote the same code as before… the only difference is, it works now!! Thanks a lot.