Why is datacenter water cooling not widespread?

From what I read and hear about datacenters, not many server rooms use water cooling, and none of the largest datacenters use it (correct me if I'm wrong). Also, it's relatively easy to buy ordinary PC components with water cooling, while water-cooled rack servers are nearly nonexistent.

On the other hand, using water cooling could (IMO):

  1. Reduce the power consumption of large datacenters, especially if it is possible to build direct-cooled facilities (i.e. facilities located near a river or the sea).

  2. Reduce noise, making it less painful for humans to work in datacenters.

  3. Reduce space needed for the servers:

    • At the server level, I imagine that in both rack and blade servers it's easier to route water cooling tubes than to waste internal space letting air pass through;
    • At the datacenter level, even if aisles between racks are still needed for maintenance access, the empty space under the floor and at ceiling level used to move air could be eliminated.

So why are water cooling systems not widespread, either at the datacenter level or at the rack/blade server level?

Is it because:

  • Water cooling is hard to make redundant at the server level?

  • The upfront cost of a water-cooled facility is too high compared to an ordinary datacenter?

  • Such a system is difficult to maintain (regularly cleaning a water cooling system fed from a river is of course much more complicated and expensive than just vacuuming the dust off fans)?


Solution 1:

Water + Electricity = Disaster

Water cooling allows for greater power density than air cooling; so figure out the cost savings of the extra density (likely none unless you're very space constrained). Then calculate the cost of the risk of a water disaster (say 1% * the cost of your facility). Then do a simple risk-reward comparison and see if it makes sense for your environment.
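
To make that comparison concrete, here is a minimal back-of-the-envelope sketch in Python; every figure in it (the savings, the facility value, the 1% leak probability) is a made-up placeholder, not real data:

    # Hypothetical risk-reward comparison; all figures are placeholders.
    density_savings_per_year = 50_000    # $/year saved by the extra power density
    facility_value = 10_000_000          # $ value of the facility at risk
    leak_probability_per_year = 0.01     # the 1% figure suggested above

    expected_leak_cost = leak_probability_per_year * facility_value
    net_benefit = density_savings_per_year - expected_leak_cost

    print(f"Expected annual leak cost: ${expected_leak_cost:,.0f}")
    print(f"Net benefit of water cooling: ${net_benefit:,.0f}")
    # A negative net benefit means the extra density does not pay for the risk.

With these placeholder numbers the expected leak cost (100,000 $/year) outweighs the density savings, which is exactly the point of the risk-reward comparison.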

Solution 2:

So I will break my answer into several parts:

  • Physical properties of water versus air and mineral oil
  • Risks of water use and historical bad experiences
  • Total cost of cooling a datacenter
  • Weaknesses of classic liquid cooling systems

Physical properties of water compared to air and mineral oil

First a few simple rules:

  • Liquids can transport more heat than gases
  • Evaporating a liquid extracts even more heat (this is how refrigerators work)
  • Water has the best cooling properties of all common liquids
  • A moving fluid extracts heat far better than a stationary one
  • Turbulent flow requires more energy to pump but extracts heat far better than laminar flow

If you compare water and mineral oil to air (for the same volume; see the rough calculation after this list):

  • mineral oil is around 1500 times better than air
  • water is around 3500 times better than air

  • oil is a poor electrical conductor under all conditions and is used to cool high-power transformers
  • oil, depending on its exact type, is a solvent and can dissolve plastics
  • water is a good conductor of electricity if it is not pure (i.e. contains dissolved minerals...), otherwise it is not
  • water is a good electrolyte, so metals in contact with it can corrode under certain conditions
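
Those ratios roughly follow from the volumetric heat capacity (density × specific heat) of each fluid. Here is a minimal sketch using approximate textbook values at room temperature; the exact numbers vary with the particular oil and with temperature:

    # Rough volumetric heat capacity comparison (approximate values near 25 degC).
    # density [kg/m^3] * specific heat [J/(kg*K)] = J/(m^3*K)
    air   = 1.2  * 1005   # ~1.2e3 J/(m^3*K)
    oil   = 850  * 1900   # ~1.6e6 J/(m^3*K), a typical mineral oil
    water = 1000 * 4180   # ~4.2e6 J/(m^3*K)

    print(f"mineral oil vs air: ~{oil / air:.0f}x")   # ~1340x with these values; heavier oils get closer to ~1500x
    print(f"water vs air:       ~{water / air:.0f}x") # ~3470x, close to the ~3500x above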

Now some comments about what I said above. The comparisons are made at atmospheric pressure; under these conditions water boils at 100°C, which is above the maximum operating temperature of processors, so when cooling with water the water stays liquid.

Cooling with organic compounds like mineral oil or freon (what refrigerators use) is a classical cooling method for some applications (power plants, military vehicles...), but long-term use of oil in direct contact with plastics has never been done in the IT sector, so its influence on the reliability of server parts is unknown (Green Revolution doesn't say a word about it).

Making your liquid move is important: relying on natural convection inside a still liquid to remove heat is inefficient, and directing a liquid correctly without pipes is difficult. For these reasons, immersion cooling is far from being the perfect solution to cooling issues.

Technical issues

Making air move is easy, and leaks are not a threat to safety (only to efficiency). But it requires a lot of space and consumes energy (around 15% of a desktop's power consumption goes to its fans).

Making a liquid move is troublesome: you need pipes, cooling blocks (cold plates) attached to every component you want to cool, a tank, a pump and maybe a filter. Moreover, servicing such a system is difficult since you need to drain the liquid. But it requires less space and less energy.

Another important point is that a lot of research and standardization has been done on how to design motherboards, desktops and servers around air-based cooling with fans, and the resulting designs are not well suited to liquid-based systems. More info at formfactors.org.

Risks

  • Water cooling systems can leak if they are poorly designed. Heat pipes are a good example of a liquid-based system that does not leak (look here for more info)
  • Common water cooling systems cool only the hottest components and thus still require airflow for the others, so you end up with two cooling systems instead of one and you degrade the performance of your air cooling system.
  • With standard designs, a water leak carries a huge risk of causing a lot of damage if it comes into contact with metal parts.

Remarks

  • Pure water is a poor conductor of electricity
  • Nearly every part of an electronic component is covered with a non-conductive coating; only the solder pads are not. So a few drops of water can be harmless
  • Water risks can be mitigated by existing technical solutions

Cooling air reduces its capacity to hold water vapor (humidity), so there is a risk of condensation (bad for electronics). When you cool air, you therefore need to remove water, which requires energy. A normal humidity level for humans is around 70% relative humidity, so after cooling you may need to add water back into the air for the people.
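
To see why condensation becomes a concern, here is a minimal dew point sketch using the Magnus approximation; the 25 °C and 70% figures are just example inputs:

    import math

    def dew_point_c(temp_c, rel_humidity_pct):
        """Approximate dew point via the Magnus formula (roughly valid 0-60 degC)."""
        a, b = 17.62, 243.12
        gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
        return (b * gamma) / (a - gamma)

    # Air at 25 degC and 70% relative humidity: any surface colder than
    # about 19 degC will start collecting condensation.
    print(f"Dew point: {dew_point_c(25.0, 70.0):.1f} degC")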

Total cost of a datacenter

When you consider cooling in a datacenter, you have to take every part of it into account:

  • Conditioning the air (filtering, removing excess humidity, moving it around...)
  • Cold and hot air should never mix, otherwise you lower your efficiency and risk hot spots (points that are not cooled enough)
  • You need a system to extract the excess heat, or you have to limit the heat production density (fewer servers per rack)
  • You may already have pipes to remove the heat from the room (to transport it up to the roof)

The cost of a datacenter is driven by its density (number of servers per square meter) and its power consumption (other factors also come into play, but not in this discussion). Total datacenter floor space is divided between the servers themselves, the cooling system, the utilities (electricity...) and service rooms. If you put more servers per rack, you need more cooling and therefore more space for cooling, which limits the effective density of your datacenter.
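
As a rough illustration of that trade-off, here is a minimal sketch; every number in it is a made-up placeholder:

    # Hypothetical effective-density estimate; all values are placeholders.
    servers_per_rack = 40
    rack_footprint_m2 = 2.5          # rack plus its share of aisle space
    power_per_server_kw = 0.5
    cooling_m2_per_kw = 0.1          # floor space the cooling plant needs per kW

    rack_power_kw = servers_per_rack * power_per_server_kw
    total_m2 = rack_footprint_m2 + rack_power_kw * cooling_m2_per_kw

    print(f"Effective density: {servers_per_rack / total_m2:.1f} servers/m^2")
    # Adding servers raises rack_power_kw, which grows the cooling share of
    # the floor space and caps the gain in overall density.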

Habits

A datacenter is a highly complex system that requires a lot of reliability. Statistics on the causes of datacenter downtime say that 80% of downtime is caused by human error.

To achieve the best level of reliability, you need a lot of procedures and safety measures. Historically, datacenter procedures have all been written around air cooling systems, and water is restricted to its safest uses if not banned from the datacenter entirely; basically, it must be impossible for water to ever come into contact with the servers.

Up to now, no company has come up with a water cooling solution good enough to change that state of affairs.

Summary

  • Technically water is better
  • Server and datacenter designs are not adapted to water cooling
  • Current maintenance and safety procedures forbid the use of water cooling inside servers
  • No commercial product is good enough to be used in datacenters

Solution 3:

While we do have a few water-cooled racks (HP ones, actually; I don't know if they still make them), direct water cooling is a little old-school these days. Most new large data centres are being built with suction tunnels that you push your rack into; these pull ambient air through the equipment and then expel, or capture for reuse, the heat collected along the way. This means no chilling at all and saves huge amounts of energy, complexity and maintenance, though it does limit systems to very specific racks/sizes and requires spare rack space to be 'blanked' at the front.

Solution 4:

Water is a universal solvent. Given enough time, it will eat through EVERYTHING.

Water cooling would also add a considerable (and costly) level of complexity to a data center, as you allude to in your post.

Fire suppression systems in most data centers do not use water, for a few very specific reasons: water damage can be greater than fire damage in a lot of cases, and because data centers are tasked with uptime (with backup generators for power, etc.), it's pretty hard to cut power to something in the event of a fire just to squirt water on it.

So can you imagine having some kind of complex water cooling system in your data center that gives up the ghost in the event of a fire? Yikes.

Solution 5:

Water should NOT be used for datacenter cooling; instead use a mineral oil, which coexists safely with electricity because it does not conduct. See http://www.datacenterknowledge.com/archives/2011/04/12/green-revolutions-immersion-cooling-in-action/

Even though the solution is new, the technology is quite old. However, making this kind of change in existing datacenters is very difficult, as you need to replace the existing racks with a new type of rack ...