What are the pros & cons of 4xSSD(512Gig) in a RAID10 vs. 2xSSD(1Tb) no RAID?
With SSDs the only generic recommendation is to buy the right drive for your workload.
See this answer for the rationale.
The warranty for the Samsung SSD 850 Pro may be ten years, but that covers hardware defects and does not cover you once you exceed the still "somewhat limited" rated total write capacity.
Tied to the failure rates and warranties is the physical limit on the finite number of write cycles that NAND cells can support. A common metric for this is the total write capacity, usually expressed in TB. In addition to other performance requirements, that is potentially one big limiter.
To allow a more convenient comparison between different makes and differently sized drives, the write endurance is often converted to a daily write capacity expressed as a fraction of the disk capacity (drive writes per day).
Assuming that a drive is rated to live as long as it's under warranty:
The Samsung 1000 GB SSD has a 10 year warranty and a total write capacity of 300 TB:

             300 TB
    ----------------------- = 0.08 drive writes per day
    10 * 365 days * 1000 GB
If you only expect to run the drive for 5 years, that figure doubles to 0.16.
The higher that number, the more suited the disk is for write intensive IO.
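To repeat that arithmetic for another drive, a quick shell sketch works; the 300 TB / 10 year / 1000 GB figures below are just the example values from above, so substitute your own drive's datasheet numbers:

    # drive writes per day = rated endurance / (service years * 365 * capacity)
    awk 'BEGIN {
        endurance_gb = 300 * 1000   # rated total write capacity, in GB
        years        = 10           # warranty period or expected service life
        capacity_gb  = 1000         # drive capacity, in GB
        printf "%.2f drive writes per day\n", endurance_gb / (years * 365 * capacity_gb)
    }'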
At the moment (early 2015) value server-line SSDs sit at 0.3-0.8 drive writes per day, mid-range drives run from roughly 1 to 5, and the high-end seems to sky-rocket with write endurance levels of up to 25 times the drive capacity per day for 3-5 years.
Note: Some real world endurance tests show that sometimes the vendor claims can be massively exceeded, but driving equipment way past the vendor limits isn't always an enterprise consideration...
4xSSD(512Gig) in a RAID10
- you have redundancy (+)
- you have only 1TB of usable space (-)
- you have roughly 2x write speed and up to 4x read speed (+)
2xSSD(1TB), no RAID
- no redundancy (-)
- you have 2TB of usable space (+)
- you have single-drive speed (N)
2xSSD(1TB) RAID0
- no redundancy (-)
- you have 2TB of usable space (+)
- you have x2 speed (+)
As for their lifetime, that model is pretty good, but beyond 100 TB of writes it becomes a risky business, even if drives have been shown to last 20 times that in some cases.
Your choice should depend on the total amount of data you expect to write to them over the desired period of time. If that is not a lot, go for the plain configuration, or even RAID0 for increased performance. If the amount of data is very large, go for the RAID10 option, as the risk of a failure increases over time.
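As a rough sanity check, and ignoring write amplification, you can estimate the per-drive wear each layout causes; the daily write volume below is a placeholder, and the per-drive share is simply host writes times the number of copies divided by the number of drives:

    # per-drive share of host writes = host writes * copies / number of drives
    awk 'BEGIN {
        host_gb_day = 40    # expected host writes per day (placeholder, adjust)

        # 4x 512 GB in RAID10: every block stored twice, spread over 4 drives
        printf "RAID10, 4x512GB: %.3f drive writes per day per drive\n", host_gb_day * 2 / 4 / 512
        # 2x 1 TB in RAID0 (or two independent drives): one copy, spread over 2 drives
        printf "RAID0,  2x1TB:   %.3f drive writes per day per drive\n", host_gb_day * 1 / 2 / 1000
    }'

Compare those figures against the rated drive writes per day worked out above (0.08 in that example); note that the 512 GB and 1 TB models will not necessarily carry the same endurance rating, so check each datasheet.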
Linux software RAID (md) supports passing discard ops down to its components. When those components are SATA devices, they turn into ATA TRIM commands.
See for example: Implementing Linux fstrim on SSD with software md-raid
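A minimal way to check and exercise this on a given box, assuming the array is /dev/md0 and mounted at /mnt/data (both just example names):

    # do the array and its member devices advertise discard support?
    lsblk --discard /dev/md0

    # one-off TRIM of the mounted filesystem; -v reports how much was discarded
    fstrim -v /mnt/data

    # on distributions that ship it, a periodic timer handles this for you
    systemctl enable --now fstrim.timer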
Depending on your access pattern, 2x SSDs concatenated could be as fast as RAID0, e.g. random IO scattered across the entire disk, fine-grained enough that both disks are kept busy. If availability and some safety against losing writes since the last backup is worth the cost of an extra SSD or two, RAID1 would work well. Or use Linux software RAID10 with the f2 (far) layout, so you get RAID0 sequential read speeds. RAID1 can read in parallel from the redundant copies, so you can get 2x the read performance at high queue depths, and a 4-disk Linux RAID10,f2 does sequential reads as fast as a RAID0 of 4 disks.

You could also partition your disks so one partition is RAID10 (database) and another partition is RAID5 or RAID0 (bulk storage for images). Also note that Linux RAID10,f2 works on as few as 2 disks; see http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10. It's different from RAID1 because of how it lays out your data (unless you use the n2 layout, in which case it is the same). The f2 and o2 layouts give speed-ups for single-threaded reads, while all variations should help speed up parallel reads. For SSDs, don't bother with o2: the increased locality won't speed up writes the way it does for magnetic disks.
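For reference, a sketch of creating such a 2-disk far-2 array with mdadm; the device and array names are placeholders:

    # two-device RAID10 with the "far 2" layout
    mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sda1 /dev/sdb1

    # verify the layout that was applied
    mdadm --detail /dev/md0 | grep -i layout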