Inspecting kernel_task

Solution 1:

I would run sudo sysdiagnose to generate a snapshot on both systems. It logs far more data than you need, but it captures the memory maps that show how the kernel's allocations got to where they are.

Then set up a clean test account on each Mac, with nothing in it, make it the auto-log-in user, and reboot both machines.

Grab a third and fourth sysdiagnose from the cleanly rebooted systems. You should find a stable, reproducible kernel memory allocation when freshly booted with no apps started. Not that both systems will have the same initial kernel allocations - just that each should be stable across reboots.

Inside the tar.gz files sysdiagnose generates, start by comparing the footprint-all.txt files to see which processes are responsible for the largest kernel memory allocations. The last few lines of that file are also helpful for seeing the balance between kernel and user memory allocations/classes.
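The comparison can be scripted once you have pulled footprint-all.txt out of each archive. A minimal sketch - the two sample files below are a simplified stand-in for illustration, not the real footprint-all.txt layout:

```shell
# Toy stand-ins for the footprint-all.txt pulled from each Mac's archive;
# the real files live inside the sysdiagnose tar.gz and are far more detailed.
cat > /tmp/footprint-a.txt <<'EOF'
kernel_task   phys_footprint   512MB
WindowServer  phys_footprint   310MB
EOF
cat > /tmp/footprint-b.txt <<'EOF'
kernel_task   phys_footprint   2048MB
WindowServer  phys_footprint   305MB
EOF

# diff shows which allocations diverge between the two machines; the
# kernel_task line is the one to watch. (|| true: diff exits 1 on differences.)
diff /tmp/footprint-a.txt /tmp/footprint-b.txt || true
```

The same diff against the "freshly booted" snapshots tells you whether the growth is boot-time or accumulated during use.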

You could probably get by with just the footprint tool, but I'm lazy and let sysdiagnose collect all the logs, then pick and choose the parts I need for a specific task. I also use heap and vmmap often on daemons and user processes, but they might not be as useful on kernel_task.

Solution 2:

Use zprint -t (Lion) or sudo zprint -t (Mountain Lion and later) - see man zprint - to show information about kernel zones.

Example (I've skipped about 160 lines):

zprint -t
                   elem      cur         max       cur       max       cur alloc alloc          
zone name          size     size        size     #elts     #elts     inuse  size count        Total Allocs
-----------------------------------------------------------------------------------------------------------
zones               544      93K        102K       176       192       167    8K    15                  88K
vm.objects          224   62134K      66430K    284043    303680    283180   16K    73   C         2702161K
vm.object.hash.en$   40    9941K      13122K    254490    335923    238354    4K   102   C          130659K
maps                232      43K         40K       192       176       180    8K    35               27576K
....
....
VM.map.entries       80    6100K       7776K     78080     99532     76888   20K   256   C         1141452K
kernel_stacks     16384    2448K       3200K       153       200       139   16K     1   C           52144K
page_tables        4096  138280K 4293389860K     34570    654217     34570    4K     1   C          595852K
kalloc.large      72308   31705K      32310K       449       457       449   70K     1            13005729K
TOTAL SIZE   = 920734440
TOTAL USED   = 895302860
TOTAL ALLOCS = 310758214530
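To zero in on the biggest consumers, the zprint table sorts easily with awk. A sketch using a few lines from the run above as sample input; on a live system you would pipe sudo zprint -t in instead of the here-document:

```shell
# Rank zones by current size (column 3, e.g. "138280K"): strip the K suffix,
# sort numerically descending, keep the top three.
awk '$3 ~ /K$/ { size = $3; sub(/K$/, "", size); print size, $1 }' <<'EOF' | sort -rn | head -3 | tee /tmp/zones-top.txt
zones               544      93K        102K       176       192       167    8K    15                  88K
vm.objects          224   62134K      66430K    284043    303680    283180   16K    73   C         2702161K
page_tables        4096  138280K 4293389860K     34570    654217     34570    4K     1   C          595852K
kalloc.large      72308   31705K      32310K       449       457       449   70K     1            13005729K
EOF
```

On this sample, page_tables and vm.objects come out on top, which matches eyeballing the cur size column in the full output.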