Simulation speed depends on load

Hello guys, :slight_smile:

I’ve created a small OpenGL simulation that involves 3-dimensional axes and some vectors moving around. The problem is the following:

The axes also include a sphere centered at the origin (whose creation involves many edge-splitting and normalization steps starting from a tetrahedron, i.e. a high load). During the simulation I have the option to hide the axes (and thus the sphere too). When I do this, the simulation runs faster, which doesn’t make sense. The simulation’s speed also depends on the computer I run it on: on my laptop it’s slow, but on my desktop (which has 2x GTX 295) it’s so fast that I have to slow it down manually to see anything meaningful.

At first I thought this was due to vertical sync, but disabling VSync in the graphics card’s settings didn’t change anything.

Any ideas on how I can make the simulation’s speed independent of the load and of the computer it runs on?

Any efforts are highly appreciated. Thank you! :slight_smile:

You could use a timer to control when the rendering takes place.
Similarly, you could read the system clock (or use a high-resolution timer) and only render the scene when a certain time interval has passed.
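For example, something along these lines, written as a rough sketch with Qt’s QElapsedTimer as the high-resolution clock; renderScene() is just a placeholder for whatever your drawing code is:

```cpp
// Only render once a certain interval has passed since the last frame.
#include <QElapsedTimer>

void renderScene();                       // placeholder: your existing drawing code

QElapsedTimer frameClock;
const qint64 frameIntervalMs = 16;        // roughly 60 frames per second

void maybeRender()
{
    if (!frameClock.isValid())
        frameClock.start();               // first call: start the clock

    if (frameClock.elapsed() >= frameIntervalMs) {
        frameClock.restart();             // reset for the next interval
        renderScene();
    }
}
```

Call maybeRender() as often as you like (e.g. from an idle loop); it only actually draws when the interval has elapsed.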

Thanks for the answer, buddy!

That’s actually what I’m doing. I’m using a Qt timer (QTimer) with a 1 ms interval, and on each timeout a function advanceTimestep() is called!! But for some reason it isn’t actually firing every 1 ms, and the speed still depends on the load!!
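To be concrete, the relevant bits look roughly like this (sketched from memory and simplified for the post):

```cpp
#include <QObject>
#include <QTimer>

class Simulation : public QObject
{
    Q_OBJECT
public:
    Simulation()
    {
        QTimer *timer = new QTimer(this);
        connect(timer, SIGNAL(timeout()), this, SLOT(advanceTimestep()));
        timer->start(1);   // ask for a 1 ms interval
    }

public slots:
    void advanceTimestep()
    {
        // advance the simulation by one fixed step, then trigger a redraw
    }
};
```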

First, there’s no need to use even one exclamation point here, let alone two.

Second, after some light Googling, I ran across this page. It states quite clearly that nothing guarantees the timer’s accuracy. All that is guaranteed is that the timer will fire sometime after the interval you give it. That could be exactly 1 ms, or it could be 20 ms.
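If you want to see this for yourself, ask QTimer for 1 ms and measure what actually arrives. A quick test along these lines will do (Qt 5-style connect; the exact numbers depend on your OS and on how busy the event loop is):

```cpp
#include <QCoreApplication>
#include <QDebug>
#include <QElapsedTimer>
#include <QTimer>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QElapsedTimer clock;
    clock.start();

    QTimer timer;
    QObject::connect(&timer, &QTimer::timeout, [&clock]() {
        // restart() returns the milliseconds since the previous timeout
        qDebug() << "interval:" << clock.restart() << "ms";
    });
    timer.start(1);        // requested: 1 ms

    return app.exec();
}
```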

Timers are not threads.

Your simulation is based on time. So how do you know how much time has passed from one execution of the simulation to the next? If it’s a fixed value, then you’re either doing it wrong or your rendering of the simulation needs to be decoupled from the simulation itself.

I think I got the picture.

Because both are on the same thread, the timer can’t fire until the rendering is done. So I guess I have to change the algorithm so that it measures how long the rendering takes, and then feeds that time back into the simulation to update the values accordingly.

Thank you guys. I think I got it :slight_smile:

A timer that fires an event on an interval is generally not a good idea for animations. I’m not familiar with Qt, but from a brief look through the website I came across the QTime class, which looks a lot more like what you need.

The kind of code structure you want would be:

Program start:

  • QTime::start
  • lasttime = 0

Each frame:

  • thistime = QTime::elapsed ()
  • frametime = thistime - lasttime
  • lasttime = thistime
  • Animate (frametime)

Then pass frametime as a parameter to your animation function; e.g. assuming you have a function called Animate () that you want to move something at 100 units per second, it should look something like this (assuming a millisecond timescale):

Animate (frametime)

  • units_to_move = frametime * 0.1
  • Move (units_to_move)

And then move by units_to_move units, instead of by a fixed amount.
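Put into (rough) C++ that might look like the following; treat it as a sketch rather than drop-in code. It uses QTime as above (QElapsedTimer is the more modern equivalent), Move() is assumed to be whatever actually applies the motion in your scene, and setupTiming()/onFrame() are just hypothetical hooks for your own startup and per-frame code:

```cpp
#include <QTime>

void Move(double units);            // assumed to exist elsewhere in your code

QTime frameClock;
int lasttime = 0;

// Program start
void setupTiming()
{
    frameClock.start();             // QTime::start
    lasttime = 0;
}

// 100 units per second == 0.1 units per millisecond
void Animate(int frametime)
{
    double units_to_move = frametime * 0.1;
    Move(units_to_move);
}

// Each frame
void onFrame()
{
    int thistime  = frameClock.elapsed();   // ms since frameClock.start()
    int frametime = thistime - lasttime;    // ms spent since the last frame
    lasttime = thistime;
    Animate(frametime);
}
```

This way a slow frame simply produces a larger frametime, so the animation covers the same distance per second regardless of load or hardware.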