Solaris ZFS volumes: workload not hitting L2ARC
You need more RAM in the system. Every block cached in the L2ARC needs a pointer kept in RAM (in the ARC), so I think you'd want around 4-6GB of RAM to make good use of the ~60GB of L2ARC you have available.
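To put a rough number on that, here's a back-of-the-envelope sketch (Python, just for the arithmetic). The ~200-byte ARC header per L2ARC-resident block is an assumption on my part -- the exact per-entry cost varies by ZFS release -- so treat the output as order-of-magnitude only:

    # Rough estimate of the RAM (ARC) consumed purely by the headers that
    # point at blocks cached in the L2ARC. The per-entry header size is an
    # assumption, not a figure taken from any particular ZFS version.
    HDR_BYTES = 200          # assumed ARC header cost per L2ARC entry
    GIB = 1024 ** 3

    def l2arc_header_overhead(l2arc_bytes, block_bytes, hdr_bytes=HDR_BYTES):
        """RAM used by ARC headers when the L2ARC is full of blocks this size."""
        return (l2arc_bytes // block_bytes) * hdr_bytes

    for blk_kib in (128, 16, 8, 4):
        used = l2arc_header_overhead(60 * GIB, blk_kib * 1024)
        print(f"{blk_kib:>3} KiB blocks -> {used / GIB:.2f} GiB of ARC headers")

With small blocks the headers alone approach a couple of GB, on top of whatever you want the ARC itself to be caching -- which is where the 4-6GB figure comes from.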
This is from a recent thread on the ZFS list:
http://opensolaris.org/jive/thread.jspa?threadID=131296
L2ARC is "secondary" ARC. ZFS attempts to cache all reads in the ARC (Adaptive Replacement Cache); should it find that it doesn't have enough space in the ARC (which is RAM-resident), it will evict some data over to the L2ARC (which in turn simply dumps the least-recently-used data when it runs out of space). Remember, however, that every time something gets written to the L2ARC, a little bit of space is taken up in the ARC itself (a pointer to the L2ARC entry needs to be kept in the ARC). So it's not possible to have a giant L2ARC and a tiny ARC.

As a rule of thumb, I try not to let my L2ARC exceed my main RAM by more than 10-15x (with really big-memory machines I'm a bit looser and allow 20-25x or so, but still...). So, if you are thinking of getting a 160GB SSD, it would be wise to go for at minimum 8GB of RAM. Once again, the amount of ARC space reserved for an L2ARC entry is fixed, and independent of the actual block size stored in L2ARC. The gist of this is that tiny files eat up a disproportionate amount of system resources for their size (smaller size = larger % overhead compared with large files).