Question

What is the best way to exit a loop as close to 30ms as possible in C++? Polling boost::microsec_clock? Polling QTime? Something else?

Something like:

A = now;
for (blah; blah; blah) {
    Blah();
    if (now - A > 30000)
         break;
}

It should work on Linux, OS X, and Windows.

The calculations in the loop are for updating a simulation. Every 30ms, I'd like to update the viewport.


Solution

The code snippet at this link does pretty much what you want:

http://www.cplusplus.com/reference/clibrary/ctime/clock/

Adapted from their example:

#include <ctime>   /* clock(), clock_t, CLOCKS_PER_SEC */

void runwait ( int seconds )
{
   clock_t endwait;
   endwait = clock () + seconds * CLOCKS_PER_SEC ;
   while (clock() < endwait)
   {
      /* Do stuff while waiting */
   }
}
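
One caveat: on POSIX systems clock() measures CPU time rather than wall time. For a 30ms wall-clock cutoff, here is a minimal sketch using C++11 std::chrono (not from the original answer; Blah() stands in for one simulation step, as in the question):

#include <chrono>

void Blah();   // one simulation step, as in the question

void runSimulationFor30ms()   // illustrative name
{
    using clock = std::chrono::steady_clock;
    const clock::time_point deadline =
        clock::now() + std::chrono::milliseconds( 30 );
    while ( clock::now() < deadline )
    {
        Blah();   // keep stepping until ~30ms of wall time has passed
    }
}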

OTHER TIPS

The calculations in the loop are for updating a simulation. Every 30ms, I'd like to update the viewport.

Have you considered using threads? What you describe seems the perfect example of why you should use threads instead of timers.

The main process thread keeps taking care of the UI and has a QTimer set to 30ms to update it. The timer handler locks a QMutex to access the data, performs the update, and releases the mutex.

The second thread (see QThread) does the simulation. For each cycle, it locks the QMutex, does the calculations and releases the mutex when the data is in a stable state (suitable for the UI update).

With the increasing trend toward multi-core processors, you should think more and more about using threads rather than timers. Your application automatically benefits from the increased power (multiple cores) of new processors.
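
A minimal sketch of that arrangement (the class and function names below are illustrative, not from the original answer):

#include <QThread>
#include <QMutex>

class SimulationThread : public QThread   // illustrative name
{
public:
    QMutex mutex;              // guards the shared simulation state
    volatile bool running;     // simplified stop flag for the sketch

    SimulationThread() : running( true ) {}

protected:
    void run()
    {
        while ( running )
        {
            mutex.lock();
            stepOnce();        // do the calculations
            mutex.unlock();    // data is stable here; the UI may read it
        }
    }

private:
    void stepOnce() { /* simulation calculations */ }
};

// In the main thread, a QTimer set to 30ms fires a slot that locks
// SimulationThread::mutex, updates the viewport from the stable data,
// and releases the mutex.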

While this does not answer the question directly, it might offer another angle on the solution. What about placing the simulation code and the user interface in different threads? If you use Qt, a periodic update can be realized using a timer or even QThread::msleep(). You can adapt the threaded Mandelbrot example to suit your needs.

If you need to do work until a certain time has elapsed, then docflabby's answer is spot-on. However, if you just need to wait, doing nothing, until a specified time has elapsed, then you should use usleep().
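
For example (note that usleep() is POSIX-only; on Windows the closest equivalent is Sleep() from <windows.h>):

#include <unistd.h>   /* usleep(); POSIX-only */

int main()
{
    /* Sleep for 30ms without burning CPU. The scheduler decides exactly
       when the process wakes up, so expect some jitter. */
    usleep( 30 * 1000 );
    return 0;
}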

The short answer is: you can't in general, but you can if you're running on the right OS or the right hardware.

You can get CLOSE to 30ms on all of those OSes using an assembly call on Intel systems, and something else on other architectures. I'll dig up the reference and edit the answer to include the code when I find it.

The problem is the time-slicing algorithm and how close to the end of your time slice you are on a multi-tasking OS.

On some real-time OS's, there's a system call in a system library you can make, but I'm not sure what that call would be.

edit: LOL! Someone already posted a similar snippet on SO: Timer function to provide time in nano seconds using C++

VonC has got the comment with the CPU timer assembly code in it.
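
For reference, the "assembly call on Intel systems" mentioned above is presumably the RDTSC instruction. A minimal sketch using GCC/Clang inline assembly (MSVC exposes the same thing as the __rdtsc() intrinsic in <intrin.h>):

#include <stdint.h>

/* Read the CPU time-stamp counter (x86). Converting ticks to wall time
   requires knowing the TSC frequency, and the counter may drift across
   cores or with frequency scaling, so treat it as a rough source only. */
static inline uint64_t read_tsc()
{
    uint32_t lo, hi;
    __asm__ __volatile__ ( "rdtsc" : "=a"(lo), "=d"(hi) );
    return ( (uint64_t)hi << 32 ) | lo;
}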

According to your question, every 30ms you'd like to update the viewport. I once wrote an app that probed hardware every 500ms for similar purposes. While this doesn't directly answer your question, I have the following follow-ups:

  • Are you sure that Blah(), for updating the viewport, can execute in less than 30ms in every instance?
  • Seems more like running Blah() would be better handled by a timer callback.
  • It's very hard to find a library timer object that will fire on a 30ms interval to do updates in a graphical framework. On Windows XP I found that the standard Win32 API timer, which posts window messages when the interval expires, couldn't do updates any faster than a 300ms interval even on a 2GHz P4, no matter how low I set the timer's interval. While there were high-performance timers available in the Win32 API, they have many restrictions, namely that you can't do any IPC (like updating UI widgets) in a loop like the one you cited above.
  • Basically, the upshot is you have to plan very carefully how you want to have updates occur. You may need to use threads, and look at how you want to update the viewport.

Just some things to think about. They caught me by surprise when I worked on my project. If you've thought these things through already, please disregard my answer :0).

You might consider just updating the viewport every N simulation steps rather than every K milliseconds. If this is (say) a serious commercial app, then you're probably going to want to go the multi-thread route suggested elsewhere, but if (say) it's for personal or limited-audience use and what you're really interested in is the details of whatever it is you're simulating, then every-N-steps is simple, portable and may well be good enough to be getting on with.
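
A minimal sketch of the every-N-steps idea (N, stepSimulation(), and updateViewport() are placeholder names, not from the original post):

void stepSimulation();   // placeholder: one unit of simulation work
void updateViewport();   // placeholder: redraw the viewport

void mainLoop()
{
    const int N = 100;   // tune by hand until redraws feel frequent enough
    for ( long step = 0; ; ++step )
    {
        stepSimulation();
        if ( step % N == 0 )
            updateViewport();   // periodic redraw, every N steps
    }
}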

See QueryPerformanceCounter and QueryPerformanceFrequency (Windows-only)
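
A minimal sketch of the 30ms cutoff using them (Blah() stands in for the simulation step from the question):

#include <windows.h>

void Blah();   // one simulation step, as in the question

void runFor30ms()
{
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency( &freq );   // counter ticks per second
    QueryPerformanceCounter( &start );
    for ( ;; )
    {
        Blah();
        QueryPerformanceCounter( &now );
        double elapsedMs =
            1000.0 * ( now.QuadPart - start.QuadPart ) / freq.QuadPart;
        if ( elapsedMs > 30.0 )
            break;
    }
}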

If you are using Qt, here is a simple way to do this:

QTimer* t = new QTimer( parent ) ;
t->setInterval( 30 ) ;      // in msec
t->setSingleShot( false ) ; // repeat until stopped
connect( t, SIGNAL( timeout() ), viewPort, SLOT( redraw() ) ) ;

You'll need to provide viewPort yourself and declare redraw() as a slot. Then start the timer with t->start().

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow