ZFS - zpool ARC cache plus L2ARC benchmarking
I have been doing a lot of I/O testing on a ZFS system I will eventually use to serve virtual machines. I thought I would try adding SSDs for use as cache to see how much faster that makes reads. The machine also has 24 GB of RAM acting as ARC. vol0 is 6.4 TB and the cache disks are 60 GB SSDs. The pool is laid out as follows:
  pool: vol0
 state: ONLINE
 scrub: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        vol0                       ONLINE       0     0     0
          c1t8d0                   ONLINE       0     0     0
        cache
          c3t5001517958D80533d0    ONLINE       0     0     0
          c3t5001517959092566d0    ONLINE       0     0     0
The issue is that I am not seeing any difference with the SSDs installed. I have tried bonnie++ benchmarks and some simple dd commands to write a file and then read it back, running the benchmarks both before and after adding the SSDs.
I have made sure the file sizes are at least double my RAM, so there is no way the whole file can be cached in memory.
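For reference, the dd test looked roughly like this (the file path and the 48 GB size are only illustrative, chosen to be about double the 24 GB of RAM):

    # write a file roughly twice the size of RAM, then read it back and time it
    dd if=/dev/zero of=/vol0/testfile bs=1M count=49152
    dd if=/vol0/testfile of=/dev/null bs=1M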
Am I missing something here? When am I going to see the benefit of having all that cache? Am I simply not going to see it under these circumstances? Are the benchmark programs a poor test of the cache's effect because of the way (and what) they write and read?
Solution 1:
It seems your tests are purely sequential, like writing a large file with dd and then reading it back. The ZFS L2ARC cache is designed to boost random-read workloads, not streaming-like patterns. Also, to get optimal performance, you may need to wait longer for the cache to warm up. Another point is to make sure your working set fits into the SSDs. Observing I/O statistics during the tests would help you figure out which devices are being used and how they perform.
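For example, assuming the pool is named vol0 as above, watching per-device statistics during a run shows whether the cache SSDs are being read from at all, and a tool like fio (if available) can generate the random-read pattern the L2ARC is meant to help with; the file name and sizes below are illustrative:

    # per-device I/O statistics, refreshed every 5 seconds
    zpool iostat -v vol0 5

    # illustrative random-read workload against a file on the pool
    fio --name=randread --filename=/vol0/fio.test --size=48g \
        --rw=randread --bs=8k --numjobs=4 --runtime=300 \
        --time_based --group_reporting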
Solution 2:
Did you consider the size of the ARC relative to your test? When testing the I/O benefit of SSDs used as L2ARC (pool read cache) and/or ZIL (pool synchronous write log), you need to consider the size of your ARC in contrast to your test's working set. If a read can be served from the ARC, it will be, without pulling from L2ARC. Likewise, if write caching is enabled, writes will be coalesced regardless of the ZIL unless flushes and explicitly synchronous behavior are enforced (i.e. the initiator's write cache is disabled too, etc.).
If you want to see the value of SSDs for smaller working sets, consider that a 16-disk RAID 10 will deliver roughly 1,200+ IOPS (SAS/SATA?) for writes and about twice that for reads. Reducing the disk set to two (for testing) and shrinking the ARC to its minimum (about 1/8th of main memory) will then let you contrast spindles against SSDs. Otherwise you would need to get more threads banging on your pool (multiple LUNs) to see the benefit. Oh yes, and get more interfaces working too, so you are not bandwidth-bound by a single 1 Gbps interface...
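As a rough sketch of capping the ARC for such a test (the 1 GB value is illustrative): on Solaris-derived systems the cap goes in /etc/system and takes effect after a reboot, while on ZFS on Linux it is a module parameter:

    # Solaris/OpenSolaris: add to /etc/system, then reboot
    set zfs:zfs_arc_max = 1073741824

    # ZFS on Linux: cap the ARC at runtime
    echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_max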
Solution 3:
Anyone attempting to benchmark the L2ARC will want to see how "warm" the L2ARC is, and also to assess whether their requests are actually hitting the L2ARC. There is a nice tool and article for doing just that: arcstat.pl updated for L2ARC statistics.
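For example, the L2ARC-aware version of arcstat.pl can be run with the L2 columns selected (exact field names may vary between versions):

    # sample ARC and L2ARC hit statistics every 5 seconds
    arcstat.pl -f time,read,hit%,l2read,l2hits,l2hit%,l2size 5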
Solution 4:
Given the state of the answers here, I will provide one.
Instead of answering with a question, or with an answer irrelevant to the question, I will try to give an answer that is relevant.
Sadly I do not know the factual answer as to what should be going on, but I can answer from my own experience.
In my experience, a zvol bigger than the ARC (or L2ARC) will not be cached, beyond what is needed to avoid read amplification.
You can run arc_summary on Linux to get access to the ARC statistics.
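On ZFS on Linux the raw counters behind arc_summary are also exposed as kstats, so something like this shows the L2ARC hit/miss/size numbers directly:

    # overall ARC/L2ARC report
    arc_summary
    # raw L2ARC counters (hits, misses, size)
    grep ^l2_ /proc/spl/kstat/zfs/arcstats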
I tested by accessing the same file over and over inside a virtual machine whose drive is hosted on a zvol, which means the same parts of the zvol should have been accessed repeatedly, yet the I/O was not registering in the ARC at all, as if it were being bypassed.
On the other hand, I have another virtual machine hosted on a raw file on a ZFS dataset, and that one is caching just fine.
To confirm whether the ARC is enabled for a zvol (or dataset), check the primarycache property; for the L2ARC, check the secondarycache property.
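For example (vol0/myzvol is just a placeholder name):

    # check whether ARC/L2ARC caching is enabled for a given zvol or dataset
    zfs get primarycache,secondarycache vol0/myzvol

    # valid values are all, metadata, or none
    zfs set primarycache=all vol0/myzvol
    zfs set secondarycache=all vol0/myzvol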