Python's time.clock() vs. time.time() accuracy?
Which is better to use for timing in Python, time.clock() or time.time()? Which one provides more accuracy?
For example:

import time

start = time.clock()
# ... do something
elapsed = time.clock() - start

vs.

start = time.time()
# ... do something
elapsed = time.time() - start
Solution 1:
As of Python 3.3, time.clock() is deprecated (and it was removed entirely in 3.8); it's suggested to use time.process_time() or time.perf_counter() instead: perf_counter() for wall-clock timing, process_time() for CPU time.
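A minimal sketch of the 3.3+ replacements; the sleep call is just a stand-in for the code you'd actually time:

import time

start_wall = time.perf_counter()  # wall-clock timer, highest available resolution
start_cpu = time.process_time()   # CPU time of this process only

time.sleep(1)                     # placeholder for real work

print(f"wall-clock: {time.perf_counter() - start_wall:.3f}s")  # ~1.0s, includes the sleep
print(f"CPU time:   {time.process_time() - start_cpu:.3f}s")   # ~0.0s, sleeping uses no CPU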
Previously in 2.7, according to the time module docs:
time.clock()
On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of “processor time”, depends on that of the C function of the same name, but in any case, this is the function to use for benchmarking Python or timing algorithms.
On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number, based on the Win32 function QueryPerformanceCounter(). The resolution is typically better than one microsecond.
Additionally, there is the timeit module for benchmarking code snippets.
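For instance, a quick sketch with timeit (the statement being timed here is just illustrative):

import timeit

# Run the snippet 1,000,000 times and return the total elapsed seconds.
total = timeit.timeit("sum(range(100))", number=1_000_000)
print(total)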
Solution 2:
The short answer is: most of the time, time.clock() will be better. However, if you're timing something that runs outside the CPU (for example, an algorithm you run on the GPU), time.clock() measures only processor time and will not count that waiting, so time.time() is the only solution left.

Note: whatever method you use, the timing will depend on factors you cannot control (when the process switches, how often, ...). This is worse with time.time(), but it exists with time.clock() as well, so you should never run only one timing test; always run a series of tests and look at the mean/variance of the times.
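A sketch of that advice using timeit.repeat and the statistics module (the timed snippet is again just a placeholder):

import statistics
import timeit

# Run 10 independent trials of 100,000 executions each.
times = timeit.repeat("sum(range(100))", repeat=10, number=100_000)

print("mean:    ", statistics.mean(times))
print("variance:", statistics.variance(times))
print("best:    ", min(times))  # often the most stable single figure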
Solution 3:
Others have answered re: time.time() vs. time.clock().

However, if you're timing the execution of a block of code for benchmarking/profiling purposes, you should take a look at the timeit module.
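timeit can also be invoked from the command line, where it picks a sensible repetition count automatically (the snippet is illustrative):

python -m timeit "sum(range(100))"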