Best way to test new HDDs for a cheap storage server

I want to build a storage server and bought 10 x 2TB WD Reds. The HDDs just arrived.

Is there any tool you guys use to check for bad drives, or to best defend against infant mortality, before copying real data onto your disks?

Is it better to check each HDD individually, or to test the whole array (ZFS RAID-Z2) by copying a lot of data onto it?


I had the same question 2 months ago. After sending in a failed disk, the replacement disk failed in my NAS after 3 days. So I decided I would now test new replacements before putting them in production. I do not test every new disk I buy, only 'refurbished' disks, which I do not completely trust.

If you decide you want to test these disks, I would recommend running a badblocks scan and an extended SMART test on each brand-new disk.

On a 2TB disk this takes up to 48 hours. The badblocks command writes the whole disk full with a pattern, then reads every block back to verify the pattern is actually there, and repeats this with 4 different patterns.

This will probably not actually turn up any bad blocks on a new disk, since modern disks transparently reallocate bad sectors.

So before and after this I run a SMART test and check the reallocated and current pending sector counts. If either of these has gone up, your disk already has some bad sectors and might prove untrustworthy.
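If you only care about those two counters, you can pull them straight out of the attribute table (a small convenience; /dev/sdX is a placeholder for your device):

sudo smartctl -A /dev/sdX | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector'
# both raw values should be 0 on a healthy new disk, before and after the badblocks run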

After this I run an extended SMART test again.

You might want to install smartmontools first (smartctl is part of that package).
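On most distributions that's a single package; badblocks ships with e2fsprogs and is usually already installed. Install commands assuming Debian/Ubuntu or Fedora/RHEL:

sudo apt install smartmontools    # Debian/Ubuntu
sudo dnf install smartmontools    # Fedora/RHEL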

Warning: the badblocks -w flag will overwrite all data on the disk. If you just want a read-only check without overwriting anything, use badblocks -vs /dev/sdX instead.

sudo smartctl -a /dev/sdX
# record these numbers
sudo badblocks -wvs /dev/sdX
# -w destructive write test, -v verbose, -s progress indicator; let it run for up to 48 hours
sudo smartctl -a /dev/sdX
# compare the numbers with the first run
sudo smartctl -t long /dev/sdX
# this might take another hour or two; check the results periodically with
sudo smartctl -a /dev/sdX
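With 10 disks you don't have to do this one at a time; badblocks runs fine in parallel, one process per disk. A rough sketch (device names are just examples; triple-check the list, since -w destroys everything on those disks):

for d in sdb sdc sdd sde; do
    sudo badblocks -wvs -o "badblocks-$d.log" "/dev/$d" &
done
wait
# the dedicated self-test log is also easier to read than the full -a dump:
sudo smartctl -l selftest /dev/sdb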

If after all this your SMART values still look OK, I would trust the disk.

To learn what each SMART attribute means, you can start here:

http://en.wikipedia.org/wiki/Self-Monitoring,_Analysis,_and_Reporting_Technology


These are new disks. Either they're going to fail or they won't. You're already a huge step ahead by using ZFS, which will give you great insight into the health of your RAID and your filesystem.

I wouldn't do anything beyond just building the array; that's the point of the redundancy. You're not going to be able to induce a drive failure with the other methods listed here.
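For what it's worth, a minimal sketch of building and then exercising that pool (the pool name and the by-id serials are made up; use ls -l /dev/disk/by-id/ to find yours, which beats sdX names because it survives device renumbering):

sudo zpool create tank raidz2 \
    /dev/disk/by-id/ata-WDC_WD20EFRX-serial01 \
    /dev/disk/by-id/ata-WDC_WD20EFRX-serial02 \
    /dev/disk/by-id/ata-WDC_WD20EFRX-serial03   # ...and so on through all 10 disks
sudo zpool scrub tank      # reads every block and verifies it against its checksum
sudo zpool status -v tank  # shows pool health and any read/write/checksum errors

A periodic scrub is the ZFS-native way to get the "copy a lot of data and see" confidence the question asks about.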


You can use Bonnie++ for testing. It does a good job of emulating typical file-server access patterns.

For example:

# bonnie++ -u nobody -d /home/tmp -n 100:150000:200:100 -x 300

The test will run as user 'nobody' and will create/rewrite/delete 100*1024 files, ranging from 200 to 150000 bytes each, spread across 100 auto-created directories below /home/tmp, and the whole run repeats 300 times. You can play around with the file count/size and the number of repeats.
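If you want something easier to read than 300 runs of terminal output, bonnie++ prints a machine-readable CSV line per run, and the package ships a converter for it. A sketch using -q for quiet/CSV-only output (result.html is an example filename):

# bonnie++ -q -u nobody -d /home/tmp -n 100:150000:200:100 -x 300 | bon_csv2html > result.html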