What are the potential drawbacks of using non-OEM hard drives in a server?

I'm currently speccing a Hyper-V host for my company and would like to use solid-state drives for local storage. The problem is, most OEM drives carry a hefty premium compared to retail server-grade drives. I'm currently leaning towards 4x Samsung 845DC EVOs in RAID 10 with one hot spare.

Are there any downsides aside from the drives not being included in the server's warranty?

Edit: The server is a Dell T320 and will host a few Linux and Windows VMs. The most performance-intensive tasks are all disk-related: WSUS, redirected folders, and a file share with large SolidWorks assemblies, among other things.


Solution 1:

This all depends.

If HP or IBM, I'd say use their respective drives. (just because)

If Dell, probably use their drives... If you can't afford the Dell-spec'd disks, look harder. Buy refurbished Dell disks if you have to in order to save money and retain support.

But also know that Dell PERC RAID controllers are manufactured by LSI, and LSI controllers have a very wide compatibility list. Given that, it's acceptable to use whatever disks you want (within reason) on LSI-based RAID controllers. Just know the drawbacks of self-supporting your system. It's a cost-benefit analysis: less expensive disks, but you need to keep a spare or two, versus more expensive disks with 4-hour or next-business-day support...

Solution 2:

In addition to the other valid remarks:

That particular drive, the Samsung 845DC EVO, is, in the words of the manufacturer, "designed for read intensive, <10% write content", with a write lifetime of 600 TB which, depending on the IO profile of your VMs, may result in an early death not covered by the 5-year warranty.

Server SSDs are typically specified for a particular IO workload due to the finite number of write cycles NAND cells can support. A common metric is the total write capacity, usually in TB.
To allow a more convenient comparison between different makes and differently sized drives, the total write capacity is often converted to a daily write capacity expressed as a fraction of the disk capacity.

Assuming that a drive is rated to live as long as it's under warranty, a 100 GB SSD may have a 3-year warranty and a write capacity of 50 TB:

        50 TB
---------------------  = 0.46 drive writes per day.
3 * 365 days * 100 GB

The higher that number, the more suited the disk is for write-intensive IO. At the moment, value server-line SSDs come in at 0.3-0.8 drive writes/day, the mid-range increases steadily from 1-5, and the high end seems to skyrocket, with write endurance levels of up to 25x the drive capacity per day for 3-5 years.

Those 960 GB Samsung drives come in at a daily write capacity of 600 / (5 * 365 * 0.960) ≈ 0.34.
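As a quick sketch, the conversion above can be expressed in a few lines of Python. The function name `dwpd` is just illustrative; the figures are the two examples from this answer.

```python
def dwpd(endurance_tb, warranty_years, capacity_gb):
    """Daily write capacity as a fraction of drive capacity
    (drive writes per day), assuming the drive is rated to
    last exactly as long as its warranty."""
    endurance_gb = endurance_tb * 1000
    days = warranty_years * 365
    return endurance_gb / (days * capacity_gb)

# Hypothetical 100 GB SSD: 50 TB endurance, 3-year warranty.
print(round(dwpd(50, 3, 100), 2))    # 0.46

# Samsung 845DC EVO 960 GB: 600 TB endurance, 5-year warranty.
print(round(dwpd(600, 5, 960), 2))   # 0.34
```

Matching a drive's DWPD number against your expected daily write volume is a quick sanity check before committing to a read-optimised drive for a write-heavy VM host.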

Solution 3:

The manufacturers have spent time validating OEM drives, and possibly creating custom firmware, to deal with compatibility and optimisation issues specific to their RAID controllers. There is some value in that, but it is very intangible.

Some products simply won't accept non-proprietary drives.

Also, until very recently, server-grade SSD products simply were not available directly to the consumer, or at least not at a lower price. Notably, consumer-grade SSDs don't have capacitors to allow all buffered data to be written out in case of power loss.

You will also have to source caddies; they are not officially available separately.

If you are prepared for the possibility of issues, and for having to fall back to proprietary drives, then there are certainly savings to be had by trying this kind of drive. If you need to go by the book and have someone else to blame when there is an issue, then you may be more comfortable sticking to officially supported drives.