Why does Sleep(500) cost more than 500ms?

Tags: C++, WinAPI, Sleep

C++ Problem Overview


I used Sleep(500) in my code and measured the elapsed time with GetTickCount(). I found that it took about 515 ms, more than 500. Does somebody know why that is?

C++ Solutions


Solution 1 - C++

Because the Win32 API's Sleep isn't a high-precision sleep: its granularity is limited by the system timer resolution.

The best way to get a precise sleep is to sleep a bit less than the target (~50 ms less) and busy-wait the remainder. To find the exact amount of time you need to busy-wait, get the resolution of the system clock using timeGetDevCaps and multiply by 1.5 or 2 to be safe.

Solution 2 - C++

Sleep(500) guarantees a sleep of at least 500ms.

But it might sleep for longer than that: the upper limit is not defined.

In your case, there will also be the extra overhead in calling getTickCount().

Your non-standard Sleep function may well behave in a different manner; but I doubt that exactness is guaranteed. To do that, you need special hardware.

Solution 3 - C++

As you can read in the documentation, the WinAPI function GetTickCount()

> is limited to the resolution of the system timer, which is typically in the range of 10 milliseconds to 16 milliseconds.

To get a more accurate time measurement, use the function GetSystemTimePreciseAsFileTime.

Also, you cannot rely on Sleep(500) to sleep exactly 500 milliseconds. It will suspend the thread for at least 500 milliseconds, and the operating system will resume the thread as soon as a time slot is available. When many other tasks are running on the system, there may be an additional delay.

Solution 4 - C++

In general sleeping means that your thread goes to a waiting state and after 500ms it will be in a "runnable" state. Then the OS scheduler chooses to run something according to the priority and number of runnable processes at that time. So if you do have high precision sleep and high precision clock then it is still a sleep for at least 500ms, not exactly 500ms.

Solution 5 - C++

Like the other answers have noted, Sleep() has limited accuracy. Actually, no implementation of a Sleep()-like function can be perfectly accurate, for several reasons:

  • It takes some time to actually call Sleep(). While an implementation aiming for maximal accuracy could attempt to measure and compensate for this overhead, few bother. (And, in any case, the overhead can vary due to many causes, including CPU and memory use.)

  • Even if the underlying timer used by Sleep() fires at exactly the desired time, there's no guarantee that your process will actually be rescheduled immediately after waking up. Your process might have been swapped out while it was sleeping, or other processes might be hogging the CPU.

  • It's possible that the OS cannot wake your process up at the requested time, e.g. because the computer is in suspend mode. In such a case, it's quite possible that your 500ms Sleep() call will actually end up taking several hours or days.

Also, even if Sleep() was perfectly accurate, the code you want to run after sleeping will inevitably consume some extra time. Thus, to perform some action (e.g. redrawing the screen, or updating game logic) at regular intervals, the standard solution is to use a compensated Sleep() loop. That is, you maintain a regularly incrementing time counter indicating when the next action should occur, and compare this target time with the current system time to dynamically adjust your sleep time.

Some extra care needs to be taken to deal with unexpected large time jumps, e.g. if the computer was temporarily suspended or if the tick counter wrapped around, as well as the situation where processing the action ends up taking more time than is available before the next action, causing the loop to lag behind.

Here's a quick example implementation (in pseudocode) that should handle both of these issues:

const int interval = 500, giveUpThreshold = 10 * interval;
DWORD nextTarget = GetTickCount();

bool active = doAction();
while (active) {
    nextTarget += interval;
    // unsigned subtraction handles tick-counter wraparound correctly;
    // the cast to signed yields a negative value when we're behind schedule
    int delta = (int)(nextTarget - GetTickCount());
    if (delta > giveUpThreshold || delta < -giveUpThreshold) {
        // either we're hopelessly behind schedule, or something
        // weird happened; either way, give up and reset the target
        nextTarget = GetTickCount();
    } else if (delta > 0) {
        Sleep(delta);
    }
    active = doAction();
}

This will ensure that doAction() will be called on average once every interval milliseconds, at least as long as it doesn't consistently consume more time than that, and as long as no large time jumps occur. The exact time between successive calls may vary, but any such variation will be compensated for on the next iteration.

Solution 6 - C++

The default timer resolution is low; you can increase it if necessary, as in this example from MSDN:

#include <windows.h>
#include <timeapi.h>   // timeGetDevCaps, timeBeginPeriod; link with winmm.lib

#define TARGET_RESOLUTION 1         // 1-millisecond target resolution

TIMECAPS tc;
UINT     wTimerRes;

if (timeGetDevCaps(&tc, sizeof(TIMECAPS)) != TIMERR_NOERROR)
{
    // Error; application can't continue.
}

wTimerRes = min(max(tc.wPeriodMin, TARGET_RESOLUTION), tc.wPeriodMax);
timeBeginPeriod(wTimerRes);
// ... timed work ...
timeEndPeriod(wTimerRes);   // restore the previous resolution when done

Solution 7 - C++

There are two general reasons why code might want a function like "sleep":

  1. It has some task which can be performed at any time that is at least some distance in the future.

  2. It has some task which should be performed as near as possible to some moment in time some distance in the future.

In a good system, there should be separate ways of issuing those kinds of requests; Windows makes the first easier than the second.

Suppose there is one CPU and three threads in the system, all doing useful work until, one second before midnight, one of the threads says it won't have anything useful to do for at least a second. At that point, the system will devote execution to the remaining two threads. If, 1ms before midnight, one of those threads decides it won't have anything useful to do for at least a second, the system will switch control to the last remaining thread.

When midnight rolls around, the original first thread will become available to run, but since the presently-executing thread will have only had the CPU for a millisecond at that point, there's no particular reason the original first thread should be considered more "worthy" of CPU time than the other thread which just got control. Since switching threads isn't free, the OS may very well decide that the thread that presently has the CPU should keep it until it blocks on something or has used up a whole time slice.

It might be nice if there were a version of "sleep" which were easier to use than multimedia timers but would request that the system give the thread a temporary priority boost when it becomes eligible to run again; or, better yet, a variation of "sleep" which would specify a minimum time and a "priority-boost" time, for tasks which need to be performed within a certain time window. I don't know of any systems that can be easily made to work that way, though.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Question: kookoo121
Solution 1 - C++: orlp
Solution 2 - C++: Bathsheba
Solution 3 - C++: Philipp
Solution 4 - C++: Notinlist
Solution 5 - C++: Ilmari Karonen
Solution 6 - C++: ISanych
Solution 7 - C++: supercat