
Thread: Simulation speed depends on load

  1. #1
    Junior Member Newbie
    Join Date
    May 2011
    Posts
    4

    Simulation speed depends on load

    Hello guys,

    I've created a small OpenGL simulation that draws 3-dimensional axes and some moving vectors. The problem is the following:

    The axes also include a sphere at the origin, built by repeatedly splitting and normalizing the edges of a tetrahedron, so it is expensive to render. During the simulation I have the option to hide the axes (and with them the sphere). When I do, the simulation runs faster, which doesn't make sense to me. The simulation's speed also depends on which computer I run it on: on my laptop it's slow, but on my desktop (which has 2x GTX 295) it's so fast that I have to slow it down manually to see anything meaningful.

    At first I thought this was due to vertical sync, but disabling VSync in the graphics card's settings didn't change anything.

    Any ideas how I can make the simulation's speed independent of the load and of the computer it runs on?

    Any help is highly appreciated. Thank you!

  2. #2
    Senior Member OpenGL Pro BionicBytes's Avatar
    Join Date
    Mar 2009
    Location
    UK, London
    Posts
    1,161

    Re: Simulation speed depends on load

    You could use a timer to control when the rendering takes place.
    Alternatively, you could read the system clock (or use a high-resolution timer) and only render the scene once a certain time interval has passed.

  3. #3
    Junior Member Newbie
    Join Date
    May 2011
    Posts
    4

    Re: Simulation speed depends on load

    Thanks for the answer, buddy!

    That's what I'm doing actually. I'm using a Qt timer (QTimer) with interval 1 ms, and upon timeout a function advanceTimestep() is called!! But it's not doing a 1 ms for some reason, and it's still dependent on the load!!

  4. #4
    Senior Member OpenGL Lord
    Join Date
    Mar 2015
    Posts
    6,678

    Re: Simulation speed depends on load

    That's what I'm doing actually. I'm using a Qt timer (QTimer) with interval 1 ms, and upon timeout a function advanceTimestep() is called!! But it's not doing a 1 ms for some reason, and it's still dependent on the load!!
    First, there's no need to use even one exclamation point here, let alone two.

    Second, after some light Googling, I ran across this page. It states very clearly that nothing guarantees the timer's accuracy. All that is guaranteed is that the timer will fire some time after the interval you give it, which could be exactly 1 ms, or it could be 20 ms.

    Timers are not threads.

    Your simulation is based on time. So, how do you know how much time has passed from one execution of the simulation to the next? If it's a fixed value, then you're either doing it wrong or your rendering of the simulation needs to be de-coupled from the simulation itself.

  5. #5
    Junior Member Newbie
    Join Date
    May 2011
    Posts
    4

    Re: Simulation speed depends on load

    I think I got the picture.

    Because both run on the same thread, the timer can't fire until the rendering is done. So I guess I have to change the algorithm to measure how long the rendering actually takes, and then feed that time back into the simulation so the values update accordingly.

    Thank you guys. I think I got it.

  6. #6
    Newbie OpenGL Pro
    Join Date
    Jan 2007
    Posts
    1,789

    Re: Simulation speed depends on load

    A timer that fires an event on an interval is generally not a good idea for animations. I'm not familiar with Qt, but from a brief look through the website I came across the QTime class, which looks a lot more like what you need.

    The kind of code structure you want would be:

    Program start:
    - QTime::start
    - lasttime = 0

    Each frame:
    - thistime = QTime::elapsed ()
    - frametime = thistime - lasttime
    - lasttime = thistime
    - Animate (frametime)

    Then pass frametime as a parameter to your animation function; e.g. if you have a function called Animate () which should move something at 100 units per second, it would look something like this (assuming a millisecond timescale):

    Animate (frametime)
    - units_to_move = frametime * 0.1
    - Move (units_to_move)

    And then move by units_to_move units, instead of by a fixed amount.

