Since upgrading to Solaris 11, my ARC target size has been stuck at 119MB, despite the machine having 30GB of RAM. What? Why?

I ran a NAS/SAN box on Solaris 11 Express before Solaris 11 was released. The box is an HP X1600 with an attached D2700. In all, 12x 1TB 7200rpm SATA disks and 12x 300GB 10k SAS disks, in separate zpools. Total RAM is 30GB. Services provided are CIFS, NFS and iSCSI.

All was well, and I had a ZFS memory usage graph looking like this:

A fairly healthy ARC size of around 23GB - making use of the available memory for caching.

However, I then upgraded to Solaris 11 when that came out. Now, my graph looks like this:

Partial output of arc_summary.pl is:

System Memory:
     Physical RAM:  30701 MB
     Free Memory :  26719 MB
     LotsFree:      479 MB

ZFS Tunables (/etc/system):

ARC Size:
     Current Size:             915 MB (arcsize)
     Target Size (Adaptive):   119 MB (c)
     Min Size (Hard Limit):    64 MB (zfs_arc_min)
     Max Size (Hard Limit):    29677 MB (zfs_arc_max)

It's targeting 119MB while sitting at 915MB. It's got 30GB to play with. Why? Did they change something?

Edit

To clarify, arc_summary.pl is Ben Rockwood's, and the relevant lines generating the above stats are:

my $mru_size = ${Kstat}->{zfs}->{0}->{arcstats}->{p};
my $target_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c};
my $arc_min_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c_min};
my $arc_max_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c_max};
my $arc_size = ${Kstat}->{zfs}->{0}->{arcstats}->{size};

The Kstat entries are there; I'm just getting odd values out of them.
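
For what it's worth, the raw counters can also be pulled straight out of kstat(1M) without going through the script at all (the values are in bytes, so divide by 1048576 to get the MB figures above):

    kstat -p zfs:0:arcstats:c
    kstat -p zfs:0:arcstats:size
    kstat -p zfs:0:arcstats:c_min
    kstat -p zfs:0:arcstats:c_max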

Edit 2

I've just re-measured the ARC size with arc_summary.pl, and I've verified these numbers with kstat:

System Memory:
     Physical RAM:  30701 MB
     Free Memory :  26697 MB
     LotsFree:      479 MB

ZFS Tunables (/etc/system):

ARC Size:
     Current Size:             744 MB (arcsize)
     Target Size (Adaptive):   119 MB (c)
     Min Size (Hard Limit):    64 MB (zfs_arc_min)
     Max Size (Hard Limit):    29677 MB (zfs_arc_max)

The thing that strikes me is that the Target Size is 119MB. Looking at the graph, it has targeted exactly the same value (124.91M according to Cacti, 119M according to arc_summary.pl - I think the difference is just 1024/1000 rounding) ever since Solaris 11 was installed. It looks like the kernel is making zero effort to shift the target size to anything different. The current size fluctuates as the (large) needs of the system fight with the target size, and the equilibrium appears to sit between 700 and 1000MB.

So the question is now a little more pointed: why is Solaris 11 hard-setting my ARC target size to 119MB, and how do I change it? Should I raise the min size to see what happens?
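
If I do try raising the minimum, I assume it would be something along these lines in /etc/system (the value is in bytes - 0x100000000 is 4GB, picked arbitrarily for the experiment - and it only takes effect after a reboot):

    set zfs:zfs_arc_min = 0x100000000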

I've stuck the output of kstat -n arcstats over at http://pastebin.com/WHPimhfg

Edit 3

OK, weirdness now. I know flibflob mentioned that there was a patch to fix this. I haven't applied that patch yet (still sorting out internal support issues), and I've not applied any other software updates.

Last Thursday, the box crashed. As in, it completely stopped responding to everything. When I rebooted it, it came back up fine, but here's what my graph now looks like.

The crash and reboot seem to have fixed the problem.

This is proper la la land stuff now. I've literally no idea what's going on. :(


Solution 1:

Unfortunately I cannot solve your problem, but here's some background information:

  • The ARC target size does not seem to be a fixed value. I experience the same problem on a Solaris 11 machine, and after each reboot the target size seems to lock in, at some point, at a value between ~100 and ~500MB.

  • At least 3 other people are facing the same issue, as discussed in http://mail.opensolaris.org/pipermail/zfs-discuss/2012-January/050655.html

  • There is also an open bug report (7111576) on "My Oracle Support" (https://support.oracle.com). If your server is under a valid support contract, you should file a service request and refer to that bug. As of now, any bugfix still seems to be a work in progress...

Other than that, there's not much you can do. If you've yet to upgrade your zpool/zfs versions, you might try booting into your old Solaris 11 Express boot environment and running that until Oracle finally decides to release an SRU that fixes the issue.
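
If the old Solaris 11 Express boot environment still exists, falling back is just a matter of activating it and rebooting - something like the following, where the BE name is only a placeholder (check beadm list for the real one):

    beadm list
    beadm activate <old-solaris-11-express-be>
    init 6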

Edit: Since the question of performance degradation came up above: it all depends on what you're doing. I've seen horrible latencies on my Solaris 11 NFS share ever since upgrading to Solaris 11 11/11. Compared to your system, however, I have relatively few spindles and rely heavily on ARC and L2ARC caching working as expected (please be aware that the problem also causes the L2ARC not to grow to any reasonable size). This is certainly not an issue of misinterpreted statistics.

Even if you don't rely too heavily on ARC/L2ARC, you will probably be able to reproduce the problem by reading a large file (one that would normally fit into your RAM) multiple times with dd. You will probably notice that the first read of the file is actually faster than any consecutive reads of the same file (due to the ridiculous ARC size and countless cache evictions).
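
A rough sketch of that test, assuming a suitably large file at a hypothetical path such as /tank/bigfile:

    # first pass - with a healthy ARC this would warm the cache
    time dd if=/tank/bigfile of=/dev/null bs=1024k
    # second pass - normally served from ARC and much faster, but with the
    # tiny target size it may well come out slower than the first read
    time dd if=/tank/bigfile of=/dev/null bs=1024k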

Edit: I have now managed to obtain an IDR patch from Oracle that resolves this issue. If your system is under support, you should ask for the IDR patch for CR 7111576. The patch applies to Solaris 11 11/11 with SRU3.
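
If you're not sure which SRU level you're on, checking the version of the "entire" incorporation should tell you (the SRU is encoded in the package version string):

    pkg info entire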

Solution 2:

They changed the kstats.

Oracle Solaris 11 has removed the following statistics from zfs:0:arcstats:

  • evict_l2_cached
  • evict_l2_eligible
  • evict_l2_ineligible
  • evict_skip
  • hdr_size
  • l2_free_on_write
  • l2_size
  • recycle_miss

and added the following to zfs:0:arcstats:

  • buf_size
  • meta_limit
  • meta_max
  • meta_used

So this could basically just be a problem with your script.
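
If you want to check what your particular install exposes, something like the following lists the statistic names actually present in zfs:0:arcstats, which you can then compare against the entries arc_summary.pl expects:

    kstat -p zfs:0:arcstats | awk '{print $1}' | cut -d: -f4 | sort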