Squid or Other HTTP Caches with SSD Cache Store?

Solution 1:

We've been using Varnish on SSD drives for the last nine months, and it has worked extremely well for us. We previously used a memory-only Squid cache behind a CARP layer. It worked, but memory fragmentation was a real problem and required frequent restarts. Squid 2.x will also only use one core, which makes it rather inefficient on current hardware.

For our site, which is very cache friendly, we see about 10% CPU usage on an 8-core machine serving 100 Mbit/s of traffic. In our tests we run out of bandwidth before we hit CPU limits with two 1 Gb ports.

I do have some advice for running Varnish with an SSD cache.

  • Random write performance really matters. We tried SSDs from several vendors before settling on the Intel X25-M. We've seen some drives post as little as 0.1 MB/s for 4k random writes; we get 24 MB/s of 4k random writes with the X25-M (there's a sample fio run after this list).

  • RAID 0. The cache in 2.0 is not persistent, so there's no need to worry about redundancy. This does make restarts hurt, but they are rare. You can do things like load a new config and purge objects without a restart. (The setup sketch after this list shows this along with the scheduler, readahead, and storage settings.)

  • mmap mode. The Varnish cache can be mmap'd to a file or use swap space. Using swap has not worked well for us; it tends to use more I/O bandwidth to serve the same amount of traffic. There is a 4-sector readahead in the Linux swap-in code; we wrote a patch to remove it but have not tried it in production.

  • Deadline scheduler. With 2.6.28+ this is SSD-aware and performs well. We tried noop but found that deadline was fairer as I/O bandwidth became limited.

  • Disable readahead. Since there is no rotational delay, there's no point in reading extra data just because you might need it. I/O bandwidth is precious on these drives.

  • Run 2.6.28+. mmap'ing a lot of address space on Linux gives the memory manager a good workout, but the split-LRU patches help a lot; kswapd CPU usage dropped considerably when we updated.
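
For reference, here's the sort of fio run that shows whether a drive can actually sustain 4k random writes (a minimal sketch; /dev/sdX is a placeholder, and this writes directly to the device, so only point it at a drive you can wipe):

    # Destructive 4k random-write test; run against a scratch device only.
    fio --name=randwrite-test --filename=/dev/sdX \
        --rw=randwrite --bs=4k --direct=1 \
        --ioengine=libaio --iodepth=32 \
        --runtime=60 --time_based --group_reporting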
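
And here's a rough sketch of the setup the points above describe, assuming Linux software RAID and XFS; the device names, mount point, and cache size are placeholders, not our exact production values:

    # RAID 0 across two SSDs; the cache isn't persistent, so no redundancy needed.
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.xfs /dev/md0 && mount /dev/md0 /mnt/ssd

    # Deadline scheduler on the SSDs, and no readahead on the array.
    echo deadline > /sys/block/sdb/queue/scheduler
    echo deadline > /sys/block/sdc/queue/scheduler
    blockdev --setra 0 /dev/md0

    # File-backed (mmap) storage for varnish rather than swap.
    varnishd -a :80 -f /etc/varnish/default.vcl \
        -s file,/mnt/ssd/varnish_cache.bin,150G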

We've posted our VCL file, along with several tools we use with Varnish, at link text. The VCL also includes a neat hack implementing a very fast GeoIP lookup server based on the MaxMind database.

Solution 2:

I'm not using SSDs as HTTP caches, but I can make these observations:

Not all SSDs are equal, so you have to be very careful about picking decent ones. FusionIO makes PCIe-attached SSDs which are really high-end performers (with relatively low capacity), but costly. Intel's X25-E SLC SSDs perform really well and are more affordable, but still low capacity. Do your research! I can definitely recommend the X25-E SLC variants, as I'm using them in production systems.

There are other SSDs out there which may give you great sequential read/write speed, but the important thing for something like a cache is random I/O, and a lot of SSDs give approximately the same random performance as spinning disks. Due to write amplification effects on SSDs, spinning disks will often perform better. Many SSDs have poor-quality controllers (e.g., older JMicron controllers), which can suffer significantly degraded performance in some situations. AnandTech and other sites do good comparisons with tools like Iometer; check there.

And, of course, SSDs are small. The Intel X25-E, which I would say is the best SATA SSD I've seen, only comes in 32 and 64 GB variants.

For RAID levels, standard RAID performance notes still apply. A small write to a RAID 5 basically involves reading the data block you're going to modify, reading the parity block, computing the updated parity, writing the data block, and writing the parity (four physical I/Os per logical write), so it is still going to give worse performance than other RAID levels, even with SSDs. However, with drives like the X25-E having such high random I/O performance, this probably matters less, as it's still going to outperform random I/O on a similarly sized array of spinning disks.
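
To put back-of-the-envelope numbers on that penalty (the per-drive figure here is an assumed X25-E-class random-write IOPS rating, not a measurement):

    # RAID 5 small writes cost ~4 physical I/Os each; RAID 10 costs ~2
    # (one write per mirror side).
    DRIVES=4
    DRIVE_IOPS=3300   # assumed 4k random-write IOPS for one X25-E-class SSD
    echo "RAID 5  ~ $(( DRIVES * DRIVE_IOPS / 4 )) logical write IOPS"
    echo "RAID 10 ~ $(( DRIVES * DRIVE_IOPS / 2 )) logical write IOPS"

Even with the RAID 5 penalty, that's still an order of magnitude or two beyond the random-write IOPS of a similarly sized array of spinning disks.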

From what I've seen, RAID controller bandwidth saturates too soon to get the most benefit out of a 7-disk RAID set, at least as far as sequential performance is concerned. You can't get more than about 800 MB/s out of current models of SATA controllers (3ware, Areca, etc.). Having more, smaller arrays across multiple controllers (e.g., several RAID 1s rather than a single RAID 10) will improve this, although the individual performance of each array will suffer.
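
If you want to check whether an array is controller-limited rather than drive-limited, a direct sequential read is a quick sanity test (a sketch; /dev/md0 stands in for whatever your array device is):

    # Sequential read straight off the block device, bypassing the page cache.
    dd if=/dev/md0 of=/dev/null bs=1M count=8192 iflag=direct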

Regarding an HTTP cache, I think you'd be better served by a decent array of spinning disks and plenty of RAM. Frequently accessed objects will stay in a memory cache, either Squid's internal cache or your OS's filesystem cache, so simply giving a machine more RAM can significantly reduce disk load. If you're running a large Squid cache you'll probably want lots of disk space, and the high-performing SSDs still only come in relatively low capacities.
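
As a sketch of that approach, these are the relevant squid.conf knobs (the sizes are illustrative placeholders to tune against your RAM and working set, not recommendations):

    # Memory cache for hot objects; leave room for the OS filesystem cache too.
    cache_mem 4096 MB

    # Large on-disk cache on spinning disks: size in MB, then L1/L2 directories.
    # aufs needs Squid built with async I/O support; plain ufs works too.
    cache_dir aufs /var/spool/squid 500000 16 256

    # Only keep small objects in the memory cache.
    maximum_object_size_in_memory 512 KB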