How is the load average calculated on OS X? It seems too high - and how do I analyze it?
An advanced question: I think my load averages are too high compared to a Linux system. I see around 0.40 for the 1-minute average with basically no CPU use (0-1%), and even if this is spread out over 4 cores it still equals roughly 0.10 = 10% CPU use, which doesn't add up. I've now learned that load average takes not only CPU use into account but also I/O to disk and network. I've therefore tried finding the I/O wait value, but for some reason this doesn't seem to be available on the Mac? I have us, sy and id of course in the iostat tool, but no sign of an I/O wait % (called wa, if I don't misremember).
Everything is working fine, and I have the same load averages on my other Macs; what I'm after here is understanding WHY the averages are calculated this way (this high) and how I can analyze it further.
I've googled a good 2 hours on the topic, but there is little or nothing written about it. Any ideas?
Solution 1:
The load is the average number of runnable processes. man 3 getloadavg says:
The getloadavg() function returns the number of processes in the system run queue averaged over various periods of time. Up to nelem samples are retrieved and assigned to successive elements of loadavg[].
The system imposes a maximum of 3 samples, representing averages over the last 1, 5, and 15 minutes, respectively.
You can also obtain the same information by running sysctl vm.loadavg.
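If you want to read these values from your own code, getloadavg is available in libc; here is a minimal example (just the documented libc call, nothing OS X specific beyond that):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double loads[3];

    /* Fills up to 3 samples: the 1-, 5- and 15-minute averages. */
    if (getloadavg(loads, 3) == -1) {
        perror("getloadavg");
        return 1;
    }
    printf("load averages: %.2f %.2f %.2f\n", loads[0], loads[1], loads[2]);
    return 0;
}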
Assuming Mac OS X 10.7.2, the getloadavg function calls this code here (search for the second occurrence of sysctl_loadavg), which, in essence, returns the current value of averunnable.
This, in turn, is defined here:
struct loadavg averunnable =
{ {0, 0, 0}, FSCALE }; /* load average, of runnable procs */
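Note that ldavg[] holds fixed-point values and fscale is the divisor. As a rough sketch, you can read the raw value from user space and do the conversion yourself; I'm assuming here that struct loadavg is exposed by <sys/sysctl.h> the same way it is in the kernel sources:
#include <stdio.h>
#include <sys/types.h>
#include <sys/sysctl.h>

int main(void)
{
    struct loadavg la;          /* { fixpt_t ldavg[3]; long fscale; } */
    size_t len = sizeof(la);

    if (sysctlbyname("vm.loadavg", &la, &len, NULL, 0) == -1) {
        perror("sysctlbyname");
        return 1;
    }

    /* The kernel keeps the averages in fixed point; divide by fscale
       (i.e. FSCALE) to get the familiar floating-point numbers. */
    printf("load averages: %.2f %.2f %.2f\n",
           (double)la.ldavg[0] / la.fscale,
           (double)la.ldavg[1] / la.fscale,
           (double)la.ldavg[2] / la.fscale);
    return 0;
}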
The same file also defines compute_averunnable, which computes the new weighted value of averunnable.
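I didn't check the exact decay constants XNU uses, but conceptually this is the classic exponentially decaying average: every sampling interval the old value is scaled down and the current runnable count is mixed in. A floating-point sketch of the idea (the real code works in fixed point with FSCALE, and the coefficients below are my assumption, not XNU's):
#include <math.h>
#include <stdio.h>

#define INTERVAL 5.0                                       /* sampling period, seconds */

static const double periods[3] = { 60.0, 300.0, 900.0 };   /* 1, 5 and 15 minutes */
static double loadavg[3];                                   /* the running averages */

/* Fold the current runnable-thread count into the three averages. */
static void compute_averunnable_sketch(unsigned nrun)
{
    for (int i = 0; i < 3; i++) {
        double decay = exp(-INTERVAL / periods[i]);
        loadavg[i] = loadavg[i] * decay + nrun * (1.0 - decay);
    }
}

int main(void)
{
    /* Pretend 2 threads are runnable for a minute, then idle for a minute. */
    for (int t = 0; t < 12; t++) compute_averunnable_sketch(2);
    for (int t = 0; t < 12; t++) compute_averunnable_sketch(0);
    printf("%.2f %.2f %.2f\n", loadavg[0], loadavg[1], loadavg[2]);
    return 0;
}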
The scheduler header file sched.h declares compute_averunnable as extern, and all scheduler implementations in xnu-1699.24.8/osfmk/kern/sched_*.c periodically call it via compute_averages in sched_average.c.
The argument to compute_averunnable is sched_nrun in sched_average.c, which gets its value from sched_run_count in sched.h.
This number is modified by the macros sched_run_incr and sched_run_decr, which are used exclusively in sched_prim.c, the file containing the scheduling primitives responsible for unblocking, dispatching, etc. of threads.
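So the kernel is simply maintaining a counter of runnable threads that the unblock/block paths adjust, and the 5-second averaging timer samples it. A hypothetical illustration of that bookkeeping (the _sketch names are mine, not XNU's):
#include <stdatomic.h>
#include <stdio.h>

/* Stand-in for sched_run_count: the number of currently runnable threads. */
static atomic_uint run_count;

/* Called when a thread becomes runnable (cf. sched_run_incr). */
static void thread_unblock_sketch(void) { atomic_fetch_add(&run_count, 1); }

/* Called when a thread blocks or exits (cf. sched_run_decr). */
static void thread_block_sketch(void)   { atomic_fetch_sub(&run_count, 1); }

int main(void)
{
    /* Two threads wake up, one goes back to sleep... */
    thread_unblock_sketch();
    thread_unblock_sketch();
    thread_block_sketch();

    /* ...and this is the value the periodic averaging code would sample. */
    printf("nrun = %u\n", atomic_load(&run_count));
    return 0;
}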
So, to recap:
OS X simply uses the number of runnable threads to compute the load averages, at 5-second intervals.
While the systems are totally different, I find it hard to believe that Linux always has lower loads than OS X. In fact, it appears that Linux simply shows a different value.
Quoting Wikipedia:
On modern UNIX systems, the treatment of threading with respect to load averages varies. Some systems treat threads as processes for the purposes of load average calculation: each thread waiting to run will add 1 to the load. However, other systems, especially systems implementing so-called N:M threading, use different strategies, such as counting the process exactly once for the purpose of load (regardless of the number of threads), or counting only threads currently exposed by the user-thread scheduler to the kernel, which may depend on the level of concurrency set on the process.
Judging from this article, Linux really counts runnable processes, as opposed to XNU, which counts runnable threads.
Since every runnable process has at least one runnable thread, and assuming an otherwise equivalent load average calculation (which I didn't bother to check), the load average values on OS X will always be at least as big, simply because the items being counted are different.
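To make that concrete: a single process with four runnable threads contributes 4 to XNU's runnable count but only 1 to a per-process count like Linux's, so for the same workload the Mac's figure can only be equal or higher.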