Why does CPU Time for the System Idle Process increase *faster* than the wall clock?

Assuming the task scheduler spreads work evenly across all cores (or at least in a roughly balanced manner), you can estimate a process's accumulated CPU time as:

CPU Time = Application Time * Number of Cores * Average CPU Utilization

The System Idle Process is just a placeholder the task scheduler runs when there is no work for the CPU to do, so when nothing else is running on the system, it usually shows very high utilization. Assuming you have a 4-core system and the idle process takes up 95% of the CPU, every second you would expect the idle process's CPU time to increase by:

CPU Time = (1 second) * (4 cores) * (0.95) = 3.8 seconds
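The estimate above can be sketched as a small helper; the function name and the 95% figure here just mirror the example, not any real API:

```python
def cpu_time(wall_seconds, cores, avg_utilization):
    """Estimate CPU time accumulated by a process, assuming the
    scheduler spreads its work evenly across all cores."""
    return wall_seconds * cores * avg_utilization

# Idle process on a 4-core system at 95% average utilization:
print(cpu_time(1, 4, 0.95))  # roughly 3.8 seconds of CPU time per wall-clock second
```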

Note that as processors get faster and operating systems become more optimized, this would theoretically max out at 100% (i.e. at idle, the CPU has literally NO work compared to its capabilities), in which case you would expect the CPU time for the idle process to increase at real time multiplied by the number of cores.


Note that this formula applies even to single-threaded applications: if a single-threaded application runs constantly on a 4-core machine, it can occupy at most one core, so the overall processor utilization it produces is only 25%. Its CPU time should therefore closely match real time:

CPU Time = (1 second) * (4 cores) * (0.25) = 1 second
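Plugging the single-threaded case into the same formula, as a quick sanity check (the variable names are just for illustration):

```python
# A single-threaded program pinned at 100% of one core on a 4-core box
# shows up as 25% overall CPU utilization, so its CPU time tracks
# wall-clock time one-to-one.
wall_seconds = 1
cores = 4
utilization = 1 / cores  # one fully busy core out of four = 25% overall

cpu_seconds = wall_seconds * cores * utilization
print(cpu_seconds)  # -> 1.0
```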

Task Manager shows CPU time, not real time.

CPU time is the time that was allocated to the given process on any available CPU. So if you have a quad-core system and your processes keep only one core busy, the idle process accumulates the remaining time on the other 3 cores.

Thus you will see a CPU time of 1 hour 30 minutes for the idle process during a 30-minute uptime.
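The arithmetic behind that figure, written out (the one-busy-core split is the scenario assumed above, not a measurement):

```python
# Sanity check: 30 minutes of uptime on a quad-core machine where one
# core is kept busy by other processes leaves 3 cores' worth of time
# for the idle process to accumulate.
uptime_minutes = 30
total_cores = 4
busy_cores = 1  # assumption: other processes occupy one core

idle_cpu_minutes = uptime_minutes * (total_cores - busy_cores)
print(idle_cpu_minutes)  # -> 90, i.e. 1 hour 30 minutes
```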