RAID10 without write-back cache = horrible write performance?
I have just provisioned a dedicated server on singlehop.
I'm running it through some tests to see what to expect performance-wise. On the I/O side (with 4 × 1TB disks in RAID 10) I get:
write-cache disabled
200 MB/s read throughput
30 MB/s write throughput
I thought that was really low compared to my desktop HD, which gets around 150/150 MB/s. So I had a chat with them and they suggested enabling the write cache. New results:
write-cache enabled
280 MB/s read
260 MB/s write
which is great and all, but it means I'd have to add a BBU for an additional monthly cost.
Is it normal for write throughput on RAID 10 to be a quarter of a single drive's if you don't have a write cache? It almost feels like it's intentionally bad to force you to pony up for the BBU. I'd be happy with normal non-RAID performance of 150/150 MB/s.
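For reference, the kind of test I mean is a big sequential write. This is only a rough Python sketch of that sort of measurement (the path and sizes are placeholders, not the exact tool I ran):

```python
# Rough sequential-write throughput test. TEST_FILE, CHUNK and TOTAL_MB are
# placeholders -- point the path at the RAID volume and size it well past RAM
# if you want to defeat the page cache.
import os
import time

TEST_FILE = "/mnt/raid/throughput_test.bin"  # placeholder path
CHUNK = 1024 * 1024                          # 1 MiB per write
TOTAL_MB = 4096                              # 4 GiB total

buf = os.urandom(CHUNK)                      # incompressible data

start = time.monotonic()
fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
try:
    for _ in range(TOTAL_MB):
        os.write(fd, buf)
    os.fsync(fd)                             # make sure it actually hit the array
finally:
    os.close(fd)
elapsed = time.monotonic() - start

print(f"sequential write: {TOTAL_MB / elapsed:.1f} MB/s")
os.remove(TEST_FILE)
```

Dedicated tools (fio, or dd with conv=fdatasync) do the same job more rigorously.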
UPDATE: They're having a look at it now to see whether something is wrong. I'm going to give ahamat the accepted answer, as he broke down when this 8× drop-off would affect the server's workload and when it wouldn't. Will update again if I get any more data. +1 for the other answers. Thanks.
UPDATE 2: It seems there was something wrong with the hardware. Moved to a new machine with identical specs and I'm getting 80 MB/s writes without the write-back cache and 250 MB/s with it on, so a roughly 3× drop-off and reasonable throughput without it.
Performance in real-world applications will vary depending on the nature of the application.
Asynchronous writes go to RAM as long as any is available for write buffering. Obviously writing to RAM is significantly faster than writing to disk. This is the default for most (all?) modern operating systems. If you have enough RAM to buffer writes until they are flushed to disk, then all writes will appear extremely fast. However, there is a window during which a loss of power results in data loss. Battery-backed on-disk write buffering reduces (but doesn't completely eliminate) that window.
Synchronous writes must be committed to disk before the write can return, and they are significantly slower. This is the default mode for NFS and some other applications. For synchronous writes, battery-backed write buffering significantly increases apparent write performance and eliminates the risk of data loss due to power failures. Note, though, that the writes are still going to volatile memory; it has just moved from main memory to the memory on the disk/controller PCB. ZFS solves this a different way with an SSD-based ZIL, which commits the write to the SSD and returns, then later moves the data to the spinning disks.
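To make the distinction concrete, here's a rough sketch that contrasts buffered writes with writes forced to stable storage via fsync (standing in for O_SYNC / NFS-style sync semantics). The paths, chunk size, and write count are arbitrary placeholders, so point it at the array you actually care about:

```python
# Sketch contrasting asynchronous (buffered) writes with synchronous writes
# (emulated by calling fsync after every write). Paths, chunk size and count
# are placeholders; run it on the array in question, not on tmpfs.
import os
import time

CHUNK = 4096            # 4 KiB, a typical small transactional write
COUNT = 2000
buf = os.urandom(CHUNK)

def timed_writes(path, sync_each):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    start = time.monotonic()
    try:
        for _ in range(COUNT):
            os.write(fd, buf)
            if sync_each:
                os.fsync(fd)  # each write must reach stable storage before the next
    finally:
        os.close(fd)
        os.remove(path)
    return time.monotonic() - start

# Buffered writes land in the page cache and return almost immediately.
async_secs = timed_writes("/mnt/raid/async_test.bin", sync_each=False)
# Synchronous writes are bounded by the latency of the disks (or of the
# controller's battery-backed cache, if one is present and enabled).
sync_secs = timed_writes("/mnt/raid/sync_test.bin", sync_each=True)

print(f"buffered writes: {COUNT * CHUNK / async_secs / 1e6:.1f} MB/s")
print(f"fsync'd writes:  {COUNT * CHUNK / sync_secs / 1e6:.1f} MB/s")
```

With a battery-backed write cache enabled, the fsync'd figure climbs toward the buffered one, because the controller can acknowledge the write as soon as it's sitting in its own protected memory.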
So you really need to look at your application. Are the majority of your writes synchronous or asynchronous? For asynchronous writes you can get away with just having gobs of RAM. For synchronous writes you'll need the battery-backed write cache (though ZFS may be able to provide a cheaper solution).
In any case, you need write caching.
Yeah, you want a battery-backed cache unit. Write speed is typically poor without it. If your application requires that type of performance, you'll need to pay...