linux time command microseconds or better accuracy

Your question is meaningless: you will not get repeatable measurements even at the millisecond precision time already reports.

Adding more digits will just add noise. You might as well pull the extra digits from /dev/random.


Use gettimeofday -- it gives microsecond resolution.


I do agree with Employed Russian's answer. It does not make much sense to want microsecond accuracy for such measurements, so any additional digits you get are meaningless (and essentially random).

If you have the source code of the application to measure, you might use the clock or clock_gettime functions, but don't hope for better than a dozen microseconds of accuracy. There is also the RDTSC machine instruction.

Read the Linux Clock HOWTO.

And don't forget that execution timing is, from an application's point of view, non-deterministic and non-reproducible (think of context switches, cache misses, interrupts, etc. happening at random times).

If you want to measure the performance of a whole program, make it run for at least several seconds, measure the time several (e.g. 8) times, and take the average (perhaps dropping the best and worst timings).
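The repeat-and-average advice can be sketched as a small shell loop; this assumes GNU date (for the %N nanoseconds field) and uses sleep as a placeholder for the program under test:

```shell
# Average the wall-clock time of 8 runs.
# Assumes GNU coreutils date (%N); BSD date lacks nanoseconds.
total=0
for i in 1 2 3 4 5 6 7 8; do
    start=$(date +%s%N)
    sleep 0.05                    # placeholder for the program under test
    end=$(date +%s%N)
    total=$(( total + (end - start) ))
done
avg_ms=$(( total / 8 / 1000000 ))
echo "average: ${avg_ms} ms"
```

To drop the best and worst runs as suggested, collect the individual timings, sort them, and average the middle six instead.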

If you want to measure the timing of particular functions, learn how to profile your application (gprof, oprofile, etc.). See also this question.

Don't forget to read time(7).

Be aware that on current (laptop, desktop, server) out-of-order pipelined superscalar processors, with complex CPU caches, TLBs, and branch predictors, the execution time of some tiny loop or sequence of machine instructions is not reproducible (the nanosecond count will vary from one run to the next). The OS also adds randomness (scheduling, context switches, interrupts, page cache, copy-on-write, demand paging...), so it does not make any sense to measure the execution of some command to better than one millisecond (or perhaps 100µs if you are lucky) of precision. You should benchmark your command several times.

To get significant measures, you should change the benchmarked application to run for more than a few seconds (perhaps by adding some loop in main, or running with a bigger data set...), and repeat the benchmarked command a dozen times. Then take the mean (or the worst, or the best, depending on what you are after) of the measures.

If the system time(1) is not enough, you might build your own measurement facility; see also getrusage(2). I'm skeptical about you getting more accurate measures.

BTW, on my recent i3770K GNU/Linux desktop (4.2 kernel, Debian/Sid/x86-64), "system" calls like time(2) or clock_gettime(2) run in about 3 or 4 nanoseconds (thanks to vdso(7), which avoids the overhead of a real syscall), so you could use them inside your program quite often.


See whether current_kernel_time() is helpful for your requirement. I have used it and found it useful, as it gives nanosecond-level granularity (note that it is a kernel-space API, usable from kernel code, not from an ordinary application). More details are here.