What's the top normal S.M.A.R.T. temperature for HGST Helium Ultrastar 8TB 7200 RPM SAS 12Gb/s enterprise drives?

I've just received a new Dell R730xd 2U server with 12 × 3.5" front drive bays plus a 4 × 3.5" mid-body tray located above the RAM modules and CPUs. I've plugged in 16 HGST Helium 8TB 7200 RPM SAS 12Gb/s drives and started background initialization of two 8 × 8TB RAID6 volumes.

I query drive temperatures with smartctl. While the front drives are, as expected, cool at 33C to 37C, the mid-body drives #14..17 are at 45C, 46C, 51C and 54C - the last one concerns me the most, as it looks like it's overheating. The init had been running for just a few hours.
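
For anyone who wants to repeat the check, here is a minimal polling sketch - it assumes smartctl is installed, that the drives show up as plain /dev/sdX block devices (behind a PERC you may need smartctl's `-d megaraid,N` device type instead), and the /dev/sda../dev/sdp names are only placeholders:

```python
#!/usr/bin/env python3
"""Sketch: poll SAS drive temperatures via smartctl."""
import re
import subprocess

# Hypothetical device names - adjust for your controller; behind a PERC you
# may need e.g. ["smartctl", "-A", "-d", "megaraid,N", "/dev/sda"] instead.
DEVICES = [f"/dev/sd{c}" for c in "abcdefghijklmnop"]

def drive_temp(dev):
    """Return the temperature in C reported by a SAS drive, or None."""
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    m = re.search(r"Current Drive Temperature:\s+(\d+)\s*C", out)
    return int(m.group(1)) if m else None

if __name__ == "__main__":
    for dev in DEVICES:
        print(f"{dev}: {drive_temp(dev)} C")
```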

iDRAC reports inlet air at 22C and exhaust at 44C. The fans rotate at ~4.3k RPM; they spin up to ~15k RPM if the lid is off.
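
The same readings can also be pulled from the OS side; a sketch assuming ipmitool is installed and `ipmitool sensor` works against the local BMC interface (sensor names like "Inlet Temp", "Exhaust Temp" and "Fan1".."Fan6" are typical for this platform, but they may vary by firmware):

```python
#!/usr/bin/env python3
"""Sketch: read chassis temperatures and fan speeds through ipmitool."""
import subprocess

def sensor_lines(keywords):
    """Yield `ipmitool sensor` output lines whose sensor name matches a keyword."""
    out = subprocess.run(["ipmitool", "sensor"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        name = line.split("|")[0].strip()
        if any(k.lower() in name.lower() for k in keywords):
            yield line

if __name__ == "__main__":
    # Adjust the keywords if your sensor names differ.
    for line in sensor_lines(["Inlet", "Exhaust", "Fan"]):
        print(line)
```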

Thermal imaging shows #17 is the hottest, with a case temperature of 47C.

I'm not yet sure whether it's something about that particular drive or about the drive location - I'll verify by deleting the VD and swapping the two drives' positions, and will update this post with observations.

The manufacturer's specs say the normal operating ambient temperature is up to 60C (link).

In my view, elevated temperature affects drive longevity.

However, the two rear flex-bay drives in my older R720xd are 15k RPM, have always run at around 55C, and are still alive after 3+ years.

In addition, I've asked HGST support for their position.

Another topic on Server Fault points to Google's research, which found that temperature becomes a factor after a few years. (link)

UPD1 (20151102): The manufacturer replied quickly: "This drive can operate in temperatures between 5 - 60 C. The drive should normally operate below 50C. If it is operating at a stable temp of 55C then it is running a bit hotter than normal, but still in a safe range."

UPD2: I swapped the #14 and #17 drives - the overheating is location-specific: the right side (looking front to back) is warmer than the left. The former #14 in the #17 seat peaked at 56C, while the former #17 in the #14 seat stayed cool at 40-45C. Adjusting iDRAC -> Hardware -> Fans -> Setup -> Fan Speed Offset to "Low Fan Speed Offset (+23%)" (6.8k RPM vs the default 4.4k RPM, with the RAID init running) brought the top temperatures of #14 and #17 down from 49C and 54C to 40C and 47C. Setting the fans to 15k RPM (via the default thermal response to 3rd-party PCIe cards - I have one installed) brings the temperatures down to 34C and 39C, at the cost of an extra +120W of power draw (340W vs 230W) - a rough annual-cost estimate is sketched below.
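
A back-of-the-envelope estimate of what that +120W costs over a year (the electricity price is an assumption - plug in your own rate):

```python
# Rough yearly cost of the extra fan power measured above (340W vs 230W).
EXTRA_WATTS = 120
PRICE_PER_KWH = 0.12   # assumed electricity price in $/kWh - adjust to your rate

extra_kwh_per_year = EXTRA_WATTS / 1000 * 24 * 365    # ~1051 kWh
extra_cost_per_year = extra_kwh_per_year * PRICE_PER_KWH

print(f"~{extra_kwh_per_year:.0f} kWh/year, ~${extra_cost_per_year:.0f}/year")
# -> ~1051 kWh/year, ~$126/year at the assumed rate
```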

Of course, I'm not using Dell-approved disks. Dell doesn't offer any 8TB drives for this server yet, and their 6TB SAS drives are $830 apiece. I got the 8TB Helium SAS drives for $498 each, bringing the pre-RAID cost per TB from ~$138 down to ~$62 (see the calculation below). I later realized that drives with Dell firmware (and supported by the Lifecycle Controller) may cooperate better with the chassis cooling and also receive firmware updates via the LC.
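
The per-TB figures are simple arithmetic; a small sketch that also factors in the RAID6 overhead of the 2 × 8-drive layout:

```python
# Cost per raw TB for the two options quoted above.
dell_6tb_per_tb = 830 / 6    # ~$138/TB (Dell-offered 6TB SAS)
hgst_8tb_per_tb = 498 / 8    # ~$62/TB  (HGST Helium 8TB SAS)

# Usable space after RAID6: each 8-drive volume loses 2 drives to parity,
# so 2 x (8 - 2) x 8TB = 96TB usable out of 128TB raw.
usable_tb = 2 * (8 - 2) * 8
cost_per_usable_tb = 16 * 498 / usable_tb    # ~$83 per usable TB

print(f"${dell_6tb_per_tb:.0f} vs ${hgst_8tb_per_tb:.0f} per raw TB, "
      f"~${cost_per_usable_tb:.0f} per usable TB after RAID6")
```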

Another pleasant surprise: swapping #14 and #17 didn't trigger a RAID rebuild - the controller simply picked the disks up in their new locations without a word in the logs.

UPD 20160426: Having now deployed multiple 12+4 R730xd servers equipped with HGST 8TB 12G SAS or Seagate 8TB 12G SAS drives, I observe that in all of them #14 is ~10C cooler than #17, and a partial remedy that brings #17 into the 40-47C range is raising the fan speed offset in iDRAC to +30%.


Anything under 55-60C should be OK. What is really dangerous for a mechanical drive is repeated thermal excursions, where the drive gets hot and then rapidly cools. Equally dangerous are repeated spin-up/spin-down cycles.

As EEAA stated, if it's a supported setup from Dell, you have nothing to fear.


I'm adding this as a pointer to some more relevant research, which is newer than the Google work and seems to have some rigour to its methodology.

Backblaze, the storage pod people, have done an analysis of failure rate vs. temperature by drive model and find no correlation in most cases. For three models (two Seagate Barracudas and a Hitachi Deskstar), the correlation is statistically significant (they don't say what their significance threshold is, but from the numbers I'd guess the standard 95% confidence level), and in one of those cases it's quite strong.
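
As a side note, here is a toy sketch of what such a significance test looks like - the numbers are invented purely for illustration and are not Backblaze's data:

```python
"""Toy significance test on made-up data (NOT Backblaze's). Requires scipy."""
from scipy.stats import pearsonr

# Hypothetical per-drive average temperature (C) and failure flag (1 = failed).
temps  = [28, 30, 31, 33, 34, 35, 36, 38, 40, 42]
failed = [ 0,  0,  0,  0,  1,  0,  0,  1,  1,  1]

r, p = pearsonr(temps, failed)
print(f"Pearson r = {r:.2f}, p-value = {p:.3f}")
# A correlation is typically called "significant" at the 95% level when p < 0.05.
```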

Their conclusion, which I reproduce in full, is that

Overall, there is not a correlation between operating temperature and failure rates. The one exception is the Seagate Barracuda 1.5TB drives, which fail slightly more when they run warmer.

As long as you run drives well within their allowed range of operating temperatures, keeping them cooler doesn’t matter.

So in your case, I'd say you didn't have any real problem. (Disclaimer: I have no connection with Backblaze.)


Server manufacturers put a lot of money into designing their systems to be reliable and to perform within spec for any third-party components that may be included. Dell would not warranty these drives if they were expected to have a short life.

If Dell says that this is a supported configuration, then don't worry about it. Modern gear is a lot more tolerant of high temperatures than gear from even 10-15 years ago. You have RAID6, which protects you even against dual-drive failures. IMHO you should spend your time on something other than second-guessing this server's thermal management.