mysterious delay in output

I just came back to doing 3D stuff after a hiatus of six months or so, because of an idea I had one day.

So I spent about a day putting that idea together, using stuff I’d previously coded as a guideline, because all that works fine.

Now here’s the problem: there is a significant and noticeable delay between when the code is executed and when it is rendered – almost a full second. I don’t have very good frame rates (20-25 fps), but that is not the problem. The problem is, if I (for example) move the mouse, hit a key, etc., the event is logged IMMEDIATELY on the console (via printf), but in the OGL window it does not show up until after a noticeable delay – so noticeable that I can move the mouse quickly for, say, half a second and the display doesn’t start moving until my hand is off it. In parallel, if I put in a condition to flash an object red every few hundred frames and count ticks in the console, the object actually flashes about 15-20 frames AFTER the code has been executed, going by the console output (nb – please, don’t bother me w/ remarks about this methodology).

Now, here’s the clincher: my old code, which I used as the template for my new code, does not have this issue at all. Console output and the display are perfectly synchronized, I can move the mouse and there is no delay, etc. The only difference between the two is that the old code uses lighting and textures, whereas I have not added those to the new code yet.

Anyone have any idea why this could happen – that is, how would I end up with the display lagging 15-20 frames behind the execution?

Perhaps you are using a non-hardware-accelerated driver?

@+
Yannoo

No, that can’t be the case, because your framerate wouldn’t have to be reduced that drastically…

What driver are you using?
Is it the same with a software emulation driver?
(OK, the framerate won’t be the same, but it might give an indication of what is causing the delay.)

@+
Yannoo

Well, there’s no GPU – it’s an onboard VIA Chrome.

Also, it’s linux, so I’m using software emulation courtesy of X windows.

However, I don’t think that is the issue: as I said, most of the stuff I have written does not have this problem. But that stuff is all “complete”, with lighting and textures. I don’t recall having noticed it before that, but I wasn’t paying as much attention when I was first learning. Beyond the lack of textures, there is no difference in the implementation AFAICT – I use a timer, I use gluLookAt for the perspective, etc. I have no idea why this should happen or how, considering it is not a flat-out limitation of the system.

I suppose I could just add textures and everything else until it really is identical, and see – but my plan was to work out most of the details with simple colors first, so I’d prefer not to if I can find some explanation.

Is it exactly the same code, but without textures?

If so, why not create an unused texture and test with and without it?

@+
Yannoo

Hey, I updated my last post while you were replying (re: why I don’t want to add textures yet).

They are not totally identical – the first one is just a scene where I can navigate the camera around. In the new one the scene is different (much simpler so far; I don’t want to proceed until I get this straightened out) and the nature of the camera movement is different: I want it to move in x and y but not z, and stay aimed straight along the z axis. It does all that – it’s just that the whole thing has a one-second delay.
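In gluLookAt() terms it’s basically this (camX/camY/camZ are just stand-ins for my globals):


        /* the eye moves in x and y only; the view stays aimed straight along the z axis */
        gluLookAt(camX, camY, camZ,           /* eye position                            */
                  camX, camY, camZ - 1.0,     /* look-at point directly ahead along -z   */
                  0.0, 1.0, 0.0);             /* up vector                               */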

I guess I could take my other project and strip the texturing out to see if that matters. I already tried turning the lighting off:


        if (LIT) {
                glEnable(GL_LIGHTING);
                glEnable(GL_NORMALIZE);
                glColorMaterial(GL_FRONT,GL_AMBIENT_AND_DIFFUSE);
                glEnable(GL_COLOR_MATERIAL);
                glLightfv(GL_LIGHT0,GL_SPECULAR,Specular);      
                glEnable(GL_LIGHT0);
                arrangelamp();
        }

All the calculation of the normals is left in anyway. It still works fine.

And the mechanism is the same. I’m using glut, so the frames are turned over with glutTimerFunc(). In the new one, I initially wrote it without a timer, using glutPostRedisplay(), since I figured that would maximize the frame rate (it doesn’t – they come out the same), then switched to this method hoping it would help.
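That first, timer-less version was roughly this shape (a sketch, not the exact code):


void drawScene (void) {
        /* ... draw the frame ... */
        glutPostRedisplay();    /* mark the window for redisplay; the main loop calls drawScene again when idle */
}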

The camera movement is slightly different (the camera in the first one rotates around y, moves up and down, and its distance from the origin is adjustable, so its potential path is like a dynamically resizable cylinder), but again, the method is just to modify some global values based on input device events.

Maybe worth noting: initially I tried SDL for the second one, and had the exact same problem, so I thought it was SDL and switched back to glut, which I like better anyway. But the delay is still there.

Have you tested with glDisable(GL_LIGHTING) when LIT is false?

Have you disabled the texturing?

@+
Yannoo

Perhaps you could print the time before and after glutPostRedisplay() to see?

@+
Yannoo

Yes, I added an else clause for that – although since in that case the lighting is never enabled, it shouldn’t matter.

Have you disabled the texturing?

No, not in the first one – the scene is fairly complex, with a lot of texture. But I just added a texture to the second one, still the same problem.

I am not using glutPostRedisplay() – the scene is redrawn using glutTimerFunc():


void nextick (int num) {
        drawScene();                        /* render one frame                        */
        glutTimerFunc(1, nextick, num+1);   /* re-arm the timer to fire again in ~1 ms */
}

I have also coded it without this, using glutPostRedisplay() at the end of drawScene(); the outcome is exactly the same. Evidently, glutTimerFunc() does not fork the function it calls, i.e., drawScene() does not really get called 1000 times per second. It gets called the same number of times it would if I used glutPostRedisplay() and depended on the main loop. It amounts to the same thing, except that I can use nextick() as a set-up for drawScene(). In any case, my choice of method there makes no difference to the problem.
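For completeness, the wiring in main() is roughly this (the window-creation details are just placeholders):


#include <GL/glut.h>

void drawScene (void);                /* the display callback                     */
void nextick (int num);               /* the timer callback shown above           */

int main (int argc, char **argv) {
        glutInit(&argc, argv);
        /* ... glutInitDisplayMode / glutCreateWindow / input callbacks ... */
        glutDisplayFunc(drawScene);   /* GLUT needs a display callback either way */
        glutTimerFunc(1, nextick, 0); /* kick off the timer chain                 */
        glutMainLoop();               /* timers, input events and redraws are all serviced here, one after another */
        return 0;
}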

It was printing debugging info (such as timing and fps calculations) in drawScene() that made me realize the delay is between execution and rendering. It is taking place somewhere in the GL call stack, after my code has executed.

I have a simple block in drawScene() to demonstrate this; here’s a pseudo-code version:


set drawing color to black
every 500 frames:
     printf("NOW\n");
     change drawing color to red
every frame:
     printf("*");
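In the actual C inside drawScene() it’s along these lines (“frame” is just what I call my counter):


        static unsigned long frame = 0;       /* persists across calls to drawScene() */

        glColor3f(0.0f, 0.0f, 0.0f);          /* normal drawing colour: black         */
        if (frame % 500 == 0) {
                printf("NOW\n");
                glColor3f(1.0f, 0.0f, 0.0f);  /* draw the object red on this frame    */
        }
        printf("*");
        frame++;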

So, NOW should appear in the console simultaneously with the object flashing red. That doesn’t happen. If I wait and interrupt the execution when I see the red flash, I get:

NOW
***********************************^C

If the display were simultaneous, there would only be <6 asterisks there, since my reaction time is not perfect – but it is certainly not that slow! And as I’ve said, this delay is very, very apparent. With my earlier project the display responds immediately to the mouse; with my current one, the delay is so bad that it makes using the application very awkward.

Have you tested with a printf("*\n") on each frame? (stdio output can be buffered/“cached”, for example.)

@+
Yannoo

And/or tried adding a glutPostRedisplay() in your nextick function?

@+
Yannoo

It looks like your GL implementation stores too many GL commands before actual rendering takes place.

Some checks:

  • do you use GLUT_DOUBLE buffering, like this?
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
  • does the lag disappear with a more realistic value for the timer?
    glutTimerFunc(15,nextick,num+1);
  • try doing glFlush(); or, more extreme, glFinish(); at the end of your display function, right before glutSwapBuffers(); – see the sketch below
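i.e. something along these lines at the end of your display callback:


void drawScene (void) {
        /* ... all drawing commands ... */
        glFinish();          /* blocks until everything queued has actually executed (glFlush() only asks the driver to get going) */
        glutSwapBuffers();
}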

Thank you very much ZbuffeR. glFlush() did not work, but glFinish() – a command I was unaware of – did the trick. Also, the frame rate went up to a smooth 85, so it was a coincidence that it matched the other, more intense scene, which has less than half that frame rate.

Out of curiosity, why do you think the first scene is fine without either glFlush() or glFinish()? I have two hypotheses:

  • something involved in setting the normals also flushes the output
  • the heavier scene, involving many more commands and being slowed down by them, manages to “catch up to itself”

The difference is quite dramatic. As I develop the new one I will occasionally try pulling glFinish() out to see if and when it starts working properly without it.

I think that the heavier scene fills the rendering buffer quickly so the frame lag is much smaller, down to reasonable values.

But on a correct implementation, glFinish() should not be necessary. It sounds like glFlush() is not handled correctly, since it should try to render all currently buffered commands as soon as possible. And SwapBuffers triggers a glFlush() too. One or maybe two frames of latency would be acceptable, but more than that is very surprising.

Can you be more specific about your graphics hardware?
And the driver? Is it software Mesa3D, or a driver taking advantage of the hardware?

The hardware is an onboard VIA Chrome chip – i.e., I don’t have a card at all (but I imagine I will need one in the near future).

The system is Fedora 10 (64-bit), so there is software emulation provided by X windows. I posted a few days ago about SDL segfaulting the libglx module – according to people on the SDL forum, this is not a common problem on Linux, so it may have to do with that. (I presume libglx is the same for all non-GL cards, but there is also a card-specific driver for X itself, which I presume libglx must hand off to – and no doubt most people working with GL do not use the Chrome driver, as there is no hardware support and it is not a 3D chip.)

For posterity: this was the driver. I got a new ATI card last week; no more need for glFinish(). Also my frame rate went from 20 to 300 :smiley: