Why does Sleep(500) cost more than 500ms?

I used Sleep(500) in my code and measured the elapsed time with GetTickCount(). I found that it costs about 515 ms, more than 500. Does anybody know why that is?
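A minimal sketch of the kind of measurement being described (the exact code isn't shown in the question; this assumes a plain Win32 console program):

```cpp
#include <windows.h>
#include <cstdio>

int main()
{
    DWORD start = GetTickCount();   // millisecond tick count since boot
    Sleep(500);                     // request a 500 ms sleep
    DWORD elapsed = GetTickCount() - start;
    std::printf("Sleep(500) actually took %lu ms\n",
                static_cast<unsigned long>(elapsed));
    return 0;
}
```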


Because the Win32 Sleep isn't a high-precision sleep; it is limited by the granularity of the system timer (typically around 10-16 ms by default), so the actual wake-up is rounded to a timer tick.

The best way to get a precise sleep is to sleep for slightly less than the requested time (say ~50 ms less) and busy-wait for the remainder. To find out how large that margin needs to be, get the resolution of the system timer using timeGetDevCaps and multiply by 1.5 or 2 to be safe. A sketch of this approach is below.
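A rough sketch of that under-sleep-then-busy-wait idea. The helper name `PreciseSleepMs` is made up, and the `timeBeginPeriod`/`timeEndPeriod` calls are an added assumption so the Sleep granularity actually matches the value `timeGetDevCaps` reports; these functions live in winmm, so link against winmm.lib:

```cpp
#include <windows.h>
#include <mmsystem.h>                 // TIMECAPS, timeGetDevCaps, timeBeginPeriod
#pragma comment(lib, "winmm.lib")

void PreciseSleepMs(DWORD ms)
{
    // Ask the multimedia timer for its finest supported period (usually 1 ms).
    TIMECAPS tc;
    timeGetDevCaps(&tc, sizeof(tc));
    timeBeginPeriod(tc.wPeriodMin);   // raise timer resolution for the duration

    // Safety margin: twice the timer granularity, as suggested above.
    DWORD margin = tc.wPeriodMin * 2;

    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);

    if (ms > margin)
        Sleep(ms - margin);           // coarse sleep covers most of the interval

    // Busy-wait the last couple of milliseconds on the high-resolution counter.
    LONGLONG target = start.QuadPart + (freq.QuadPart * ms) / 1000;
    do {
        QueryPerformanceCounter(&now);
    } while (now.QuadPart < target);

    timeEndPeriod(tc.wPeriodMin);     // restore the previous timer resolution
}
```

The busy-wait burns CPU for the length of the margin, which is the price paid for hitting the target more closely than Sleep alone can.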


Sleep(500) guarantees a sleep of at least 500 ms.

But it might sleep for longer than that: the upper limit is not defined.

In your case there is also measurement error: GetTickCount() itself only updates at the timer-tick rate (typically every 10-16 ms), on top of the small overhead of the call itself, so the value you read is quantized. See the sketch below.
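For comparison, here is a sketch of the same measurement against QueryPerformanceCounter, which ticks far more finely than GetTickCount()'s typical 10-16 ms update interval (structure is illustrative, not taken from the question):

```cpp
#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);   // counter ticks per second

    QueryPerformanceCounter(&start);
    Sleep(500);
    QueryPerformanceCounter(&end);

    double ms = 1000.0 * (end.QuadPart - start.QuadPart) / freq.QuadPart;
    std::printf("Sleep(500) measured at %.3f ms\n", ms);
    return 0;
}
```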

Your non-standard Sleep function may well behave in a different manner, but I doubt that exact timing is guaranteed. To get that, you need special hardware.