NetGear's ReadyNAS 2100 has 4 disk slots and costs $2000 with no disks. That seems a bit too expensive for just 4 disk slots.

Dell has good network storage solutions too. PowerVault NX3000 has 6 disk slots, so that's an improvement. However, it costs $3500; the NX3100 doubles the number of disks at double the price. Just in case I'm looking at the wrong hardware for lots of storage, the trusty PowerVault MD3000i SAN has a good 15 drives, but it starts at $7000.

You can argue that serious support from Dell, Netgear, HP or any other big vendor is worth paying for, but it's still pretty damn expensive to get those drives RAIDed together in a box and served via iSCSI. There's a much cheaper option: build it yourself. Backblaze has built its own box, housing 45 (that's forty-five) SATA drives, for a little under $8000 including the drives themselves. That's at least 10 times cheaper than current offers from Dell, Sun, HP, etc.
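To put some arithmetic behind that: using Backblaze's published 45 x 1.5TB layout (the 1.5TB drive size comes from their blog post, not from the prices above), the pod works out to roughly $0.12 per raw GB. A hedged back-of-the-envelope sketch of what that implies for the Dell box:

```python
# Back-of-envelope: what the Backblaze pod's $/GB implies for the vendor boxes.
pod_price_usd = 8_000          # "a little under $8000", drives included
pod_raw_gb    = 45 * 1_500     # Backblaze's published 45 x 1.5TB layout (assumption)
pod_per_gb    = pod_price_usd / pod_raw_gb
print(f"Backblaze pod: ~${pod_per_gb:.2f}/GB raw")

# To merely match that, a hypothetical fully populated 15 x 1TB MD3000i
# would have to cost this much *including* its drives:
md3000i_raw_gb = 15 * 1_000
print(f"Matching price for a populated MD3000i: ${pod_per_gb * md3000i_raw_gb:,.0f}")
# ...versus a $7000 starting price before any disks.
```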

Why is NAS (or SAN - still storage attached to a network) so expensive? After all, its main function is to house a number of HDDs, create a RAID array and serve the result over a protocol like iSCSI; nearly everything else is just colored bubbles (AKA marketing terms).
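That core job really is a handful of commands on a plain Linux box. A minimal sketch, purely illustrative: it assumes mdadm and tgtadm (scsi-target-utils) are installed, and the device names and IQN are made up.

```python
#!/usr/bin/env python
"""Sketch of the core job: RAID the disks, serve them over iSCSI.
Assumes mdadm and tgtadm (scsi-target-utils); device names and the IQN
are hypothetical. Illustration only, not a production recipe."""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

# 1. Gather the disks into a RAID-6 array (hypothetical /dev/sdb..sde).
run(["mdadm", "--create", "/dev/md0", "--level=6", "--raid-devices=4",
     "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"])

# 2. Define an iSCSI target and attach the array as LUN 1.
run(["tgtadm", "--lld", "iscsi", "--op", "new", "--mode", "target",
     "--tid", "1", "--targetname", "iqn.2009-09.local.homebrew:md0"])
run(["tgtadm", "--lld", "iscsi", "--op", "new", "--mode", "logicalunit",
     "--tid", "1", "--lun", "1", "--backing-store", "/dev/md0"])

# 3. Allow initiators to connect (wide open here; restrict it for real use).
run(["tgtadm", "--lld", "iscsi", "--op", "bind", "--mode", "target",
     "--tid", "1", "--initiator-address", "ALL"])
```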


This really depends on your point of view.

If I'm an ISV who needs to launch on the tiniest possible budget but needs a crapload of storage, then yes, a brand-name box will be too expensive, and the risk/reward of a home-made FreeNAS box would most likely be acceptable.

However, if I'm a mega-multi-national corporation with 10,000 users, running a datacentre that supports a billion-dollar-a-year company, where going offline costs on the order of $100,000 a minute, then you can bet your arse I'm going to buy a top-shelf brand-name NAS with a 2-hour no-questions-asked replacement SLA. Yes, it's going to cost me 100x more than a DIY box, but the day the entire array fails and 10TB of critical storage goes offline, that $100,000 investment pays for itself in about 2 hours flat.
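To make that arithmetic explicit (same illustrative figures as in the paragraph above, not real quotes):

```python
# Break-even on the brand-name premium, using the illustrative figures above.
downtime_cost_per_min = 100_000   # $/minute when the datacentre is offline
nas_price             = 100_000   # the top-shelf NAS with the 2-hour SLA

print(f"{nas_price / downtime_cost_per_min:.0f} minute(s) of avoided downtime "
      "pays for the whole box")
```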

For someone like Backblaze, where storage volume is king, it makes sense to roll their own - but that's their core competency: providing storage. Dell, EMC, etc. aim their products at those for whom storage is not the primary focus.

Of course, it's all totally pointless if you don't have backups, but that's another story for another day.


In our case it comes down to tiers of storage service. This has come about in large part because different needs have different storage requirements. Our ESX environment has Exchange running inside of it, so we need fast, reliable storage. Our desktop-support function just needs lots of it (disk images), with no requirement for speed. The second type doesn't need the stuff that's $9/GB.

Tier 3: Homebrew NAS

This is an HP DL360 with four attached MSA60s and to-be-determined storage software. The drives are all 7.2K RPM MDL SAS drives, giving about 30TB of storage. The software will be picked soon, but it is likely to be a combination of OpenFiler for iSCSI services with a Windows server attached (via iSCSI) providing file-level serving. Total cost per GB is in the neighborhood of $2.

Tier 2: EVA4400 - FATA

This is an EVA4400 with 0.5TB fibre-attached ATA (FATA) drives, 7.2K RPM spindles in a highly reliable array. It's accessible only via Fibre Channel, though iSCSI is an option. This is used for highly available file-sharing (by way of a cluster), mass storage of other kinds, and backup-to-disk. Total cost per GB is in the neighborhood of $9.

Tier 1: EVA4400 - FC Disks

This is another set of shelves on the EVA, running 450GB 15K RPM FC drives. This is used for storage that NEEDS low latency and high throughput, and has to handle highly random I/O efficiently. The tenants here are the ESX datastores, our MSSQL database volumes, and certain heavily accessed file-serving volumes. Total cost per GB here is hard to pin down, but it falls between $12 and $17.
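To see how those per-GB figures spread out in practice, here's a trivial sketch pricing an arbitrary 10TB allocation at each tier. The $/GB values are the ones quoted above; the 10TB figure is just an example size:

```python
# Pricing an arbitrary 10TB allocation at each tier's quoted $/GB.
tiers = {
    "Tier 3  homebrew DL360 + MSA60s": (2, 2),
    "Tier 2  EVA4400 FATA":            (9, 9),
    "Tier 1  EVA4400 15K FC":          (12, 17),
}
capacity_gb = 10_000  # example size only

for name, (low, high) in tiers.items():
    cost = (f"${low * capacity_gb:,}" if low == high
            else f"${low * capacity_gb:,}-${high * capacity_gb:,}")
    print(f"{name:34s} {cost}")
```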

The first of those tiers (the homebrew NAS) is the newest, and it was a hard add. The whole point of it was to provide a centrally managed, cheap-ass storage option so individual departments wouldn't have to buy their own servers to get the storage they wanted. The hardware is all covered by warranty, but the software that drives it? Supported in only one use-case, and that use-case is not the one I recommended to management. We could have slapped a server running OpenFiler onto the EVA4400-FATA tier and served things up that way, but that still wouldn't have been cheap enough; we had to build ours from disk-array parts.

We have tiers of service for a variety of reasons, one of which is cost; the others are performance and expected I/O load. The MSA60-based solution should saturate, I/O-wise, a lot sooner than either of the EVA options, simply because it has fewer spindles to spread the I/O around (vs FATA) and slower disks (vs FC). My testing on the MSA60-based solution shows that for some workloads (sequential) I'm hitting the SAS transfer limit, which is slower than our FC-capable arrays can pitch data.
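For context on that transfer limit, a rough link-bandwidth comparison. This assumes the MSA60 chain sits behind a single 3Gb/s x4 SAS wide port and the EVA4400 presents four 4Gb FC host ports; that topology is my assumption, not something stated above.

```python
# Back-of-envelope link ceilings; 8b/10b encoding costs 20% of the line rate.
def payload_mb_s(line_rate_gbps):
    return line_rate_gbps * 1000 * 0.8 / 8   # Gb/s line rate -> MB/s of payload

sas_x4_link = 4 * payload_mb_s(3.0)   # one x4 wide port of 3Gb/s SAS (assumed topology)
fc_hosts    = 4 * payload_mb_s(4.0)   # four 4Gb FC host ports on the EVA4400 (assumed)

print(f"x4 3Gb SAS link to the MSA60 chain : ~{sas_x4_link:.0f} MB/s")
print(f"EVA4400 4 x 4Gb FC host ports      : ~{fc_hosts:.0f} MB/s aggregate")
```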