What is the lifetime of a typical hard disk? [closed]

What lifetime can be expected of the typical hard disk? Or are there big differences between different types? And does it make a difference if it is used heavily rather than almost never being connected to a system (for example, serving as a backup medium)?


Solution 1:

What lifetime can be expected of the typical hard disk?

The correct answer to your question of "What lifetime can be expected of the typical hard disk?" is "Not long enough for you to not have a backup of your data from day 1."

Seriously, most techies since time immemorial have felt the sudden urge to run out and buy a replacement hard disk within 3 years of the original purchase. There was a really good Google white paper on the lifetime of consumer-level SATA drives, and it was scary reading, to say the least.

Are there big differences between types?

We have had SCSI, SAS, IDE, SATA, etc. We also now have Enterprise models, 24/7-rated drives, and so on. Usually, enterprise drives (SCSI, SAS, Enterprise models) should have a longer lifespan; however, there are still some bad eggs that slip through the gates and hurtle towards the abyss of failure.

Does it make a difference if it is used heavily?

In theory, a hard drive that is rarely used should last longer than one in constant use - however, don't take that as gospel truth.

So what are you trying to say here, you wishy-washy guy?

What I am trying to say is that, when it comes to data and data storage, it is never too extravagant to assume your drive will fail tomorrow - and to plan accordingly.

Solution 2:

Here is a great paper from Google about HDD lifetime:

Failure Trends in a Large Disk Drive Population
Eduardo Pinheiro, Wolf-Dietrich Weber and Luiz André Barroso

Abstract: http://research.google.com/archive/disk_failures.html

Solution 3:

What we have is only statistical evidence over a relatively short time period (3 to 5 years at most). We can't necessarily infer the life expectancy of current drives from old ones, or of one particular drive from another. Some anecdotes:

  • I have some 20-year-old hard drives (40 to 400 MB) that still work perfectly fine today.
  • one of my customers has a RAID array of four 320 MB drives that has been running 24 hours a day since 1993 without any failure so far.
  • on the other hand, 80% of 1996-vintage Micropolis 9 GB drives failed in the first year.

However:

  • drive technology has changed very significantly in the past 15 years. I wouldn't bet that current drives come near older (and simpler) drives from a durability standpoint, though they may fare better on average.
  • in a large sample, current drive failure rates are about 0.6 to 1% per year for the 5 years that drive makers are interested in; after those five years, there is very little actual data (see the short calculation after this list).
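
To put those annual rates in perspective, a constant annual failure rate (AFR) can be turned into a cumulative risk over a drive's nominal service life. A minimal sketch in Python, assuming the failure rate is constant - which real drives only roughly follow, since they show infant mortality and wear-out:

    # Rough conversion of an annual failure rate (AFR) into a cumulative
    # failure probability over several years, assuming the rate is constant.
    # Real drives do not fail at a constant rate, so treat this as a
    # back-of-the-envelope estimate only.
    def cumulative_failure_probability(afr: float, years: int) -> float:
        """Probability that a drive fails at least once within `years` years."""
        return 1.0 - (1.0 - afr) ** years

    for afr in (0.006, 0.01):  # the 0.6% and 1% per-year figures above
        p = cumulative_failure_probability(afr, 5)
        print(f"AFR {afr:.1%}: ~{p:.1%} chance of failure within 5 years")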

About disk usage:

  • Most of our storage servers fit in the 0.6% range of annual drive failures (data collected from about 3000 disks).
  • but one particular heavily used cluster (300 disks in total) sits in the 3 to 5% annual disk failure range (5 to 10 times worse) - see the rough fleet arithmetic sketched below.
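
In absolute terms, those rates translate into a fairly predictable number of dead drives per year once the fleet is large enough. A small sketch of that arithmetic, using the fleet sizes and rates quoted above (a plain expectation that ignores correlated failures such as bad batches):

    # Expected number of drive failures per year in a fleet, given a
    # per-drive annual failure rate (AFR). Ignores correlated failures
    # (bad batches, shared power/cooling problems, and so on).
    def expected_failures_per_year(fleet_size: int, afr: float) -> float:
        return fleet_size * afr

    print(expected_failures_per_year(3000, 0.006))  # ~18 drives/year across the storage servers
    print(expected_failures_per_year(300, 0.04))    # ~12 drives/year in the heavily used cluster (~4% AFR)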

What to do?

  • Use RAID. Do backups. Keep some backups on some other technology (tape, optical). Do more backups. Then some more. Verify them occasionally (a small sketch of that follows). Only the paranoid will survive.
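
As for "verify them": a minimal sketch of a backup check that compares checksums of a source tree against a backup copy. The paths are hypothetical placeholders, and a real backup tool would handle metadata, deletions and partial reads far more carefully:

    # Minimal backup-verification sketch: compare SHA-256 checksums of files
    # in a source directory against the same relative paths in a backup copy.
    # The paths below are hypothetical; adjust them to your own layout.
    import hashlib
    from pathlib import Path

    def sha256(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify(source: Path, backup: Path) -> None:
        for src_file in source.rglob("*"):
            if not src_file.is_file():
                continue
            rel = src_file.relative_to(source)
            dst_file = backup / rel
            if not dst_file.exists():
                print(f"MISSING in backup: {rel}")
            elif sha256(src_file) != sha256(dst_file):
                print(f"MISMATCH: {rel}")

    verify(Path("/data/projects"), Path("/mnt/backup/projects"))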

Solution 4:

does it make a difference if it is used heavily rather than almost never being connected to a system?

This point is the only one not covered by other answers so far.

A drive in use is going to see more wear and tear on the physical mechanisms (i.e. the head moving apparatus and the spindle motor) and is exposed to environmental conditions (changes in operating temperature inside a machine for instance, and increased chance of physical knocks if it is an external drive).

Inactive media may still degrade over time though. Changes in environment (mainly temperature for hard drives; humidity too for tapes) can cause the magnetic storage to slowly degrade, as can exposure to other factors in storage (local magnetic fields, temporary or otherwise, contaminants in the air, ...).

You may also find that a drive that has been powered off for a long period of time will fail to spin up once reconnected, due to mechanical parts having "seized up" - there are techniques that can sometimes rescue a drive from this long enough to get the data off onto another drive, but they are not reliable. I've only ever had one drive fail in this way, and I managed to get it going with the risky "quick spin" technique, but it does happen. So if you are storing data on drives for a long time, store the data on at least two drives and test them occasionally - see the read-test sketch below (though that applies to any other medium too, not just drives - don't just store and forget if the data is important).
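
In the spirit of "test them occasionally", here is a minimal sketch of a read test for an archive drive: it simply reads every file end to end so that unreadable sectors surface as I/O errors now rather than when the data is actually needed. The mount point is a hypothetical placeholder, and this does not replace SMART checks or proper checksumming:

    # Minimal "scrub" sketch: read every file under a mount point end to end,
    # so that unreadable sectors show up as I/O errors during the test.
    # The mount point below is a hypothetical placeholder.
    from pathlib import Path

    def read_test(mount_point: Path) -> None:
        errors = 0
        for path in mount_point.rglob("*"):
            if not path.is_file():
                continue
            try:
                with path.open("rb") as f:
                    while f.read(1024 * 1024):  # read in 1 MiB chunks
                        pass
            except OSError as exc:
                errors += 1
                print(f"READ ERROR: {path}: {exc}")
        print(f"Done, {errors} unreadable file(s).")

    read_test(Path("/mnt/archive"))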

Solution 5:

The biggest killer is temperature. Keep your hard disks below 30 °C. The next biggest killer is shock, either from physically dropping the disk or through what is known as a 'head crash', where the read/write head scrapes against the magnetic coating of the platter due to a power or mechanical failure.
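
If you want to keep an eye on drive temperature, most drives report it via SMART. A minimal sketch that shells out to smartmontools' smartctl; it assumes smartctl is installed and run with sufficient privileges, the device path is a placeholder, and not every drive exposes the attribute under the name "Temperature_Celsius":

    # Read the drive temperature from SMART attributes via smartctl.
    # Assumes smartmontools is installed and this runs with enough privileges;
    # /dev/sda is a placeholder, and the attribute name varies between vendors.
    import subprocess

    def drive_temperature(device: str = "/dev/sda") -> int | None:
        out = subprocess.run(
            ["smartctl", "-A", device], capture_output=True, text=True, check=True
        ).stdout
        for line in out.splitlines():
            if "Temperature_Celsius" in line:
                return int(line.split()[9])  # RAW_VALUE is the tenth column
        return None

    temp = drive_temperature()
    print(f"Drive temperature: {temp} °C" if temp is not None else "Temperature not reported")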

The MTBF (mean time between failures) is a rough indication of how long a drive will last on average, irrespective of load, and is usually supplied by the manufacturer - although do take it with a pinch of salt.
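
For what it's worth, a quoted MTBF can be converted into an approximate annualized failure rate (AFR), which is usually a more intuitive number. A minimal sketch, assuming a constant failure rate over the service life - exactly the assumption that makes MTBF figures look so optimistic:

    # Convert a manufacturer MTBF figure into an approximate annualized
    # failure rate (AFR), assuming a constant failure rate. For MTBF much
    # larger than one year this is close to simply (hours per year / MTBF).
    import math

    HOURS_PER_YEAR = 24 * 365

    def afr_from_mtbf(mtbf_hours: float) -> float:
        return 1.0 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

    for mtbf in (300_000, 1_000_000, 1_500_000):  # hypothetical quoted figures
        print(f"MTBF {mtbf:,} h -> AFR ~{afr_from_mtbf(mtbf):.2%}")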