Why do we still use power supplies on datacenter servers?

What'cha talking 'bout Willis? You can get 48V PSUs for most servers today.

Running 12V DC over medium/long distances suffers from voltage drop, whereas 120V AC doesn't have this problem¹. Big losses there. Run high-voltage AC to the rack, convert it there.

The problem with 12V over long distances is that you need higher current to transmit the same amount of power, and higher current is less efficient and requires larger conductors.
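A quick back-of-the-envelope sketch of that trade-off (the 1 kW load and 0.01 Ω cable resistance are illustrative assumptions, not numbers from any real installation):

```python
# Why low-voltage distribution loses more power over the same cable.
# Assumed, illustrative numbers: 1 kW load, cable run with 0.01 ohm
# total round-trip conductor resistance.

P_LOAD = 1000.0   # watts delivered to the load (assumed)
R_CABLE = 0.01    # ohms, round-trip cable resistance (assumed)

for volts in (12.0, 48.0, 120.0):
    current = P_LOAD / volts          # I = P / V
    loss = current ** 2 * R_CABLE     # I^2 * R resistive loss
    print(f"{volts:5.0f} V: {current:6.1f} A, {loss:6.2f} W lost in cable")
```

At ten times the voltage you draw a tenth of the current, so the cable loss drops by a factor of a hundred.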

The Open Compute Open Rack design uses 12V rails inside a rack to distribute power to components.

Also large UPSes don't turn 12V DC into 120V AC - they typically use 10 or 20 batteries hooked in series (and then parallel banks of those) to provide 120V or 240V DC and then invert that into AC.
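The series-string arithmetic above is trivial but worth seeing (12 V nominal battery blocks are an assumption; real UPSes use various block voltages):

```python
# Sketch: how a UPS battery string reaches its DC bus voltage.
# Assumption: 12 V nominal battery blocks, per the comment above.

BLOCK_V = 12.0                     # nominal volts per battery block (assumed)
for blocks in (10, 20):
    string_v = blocks * BLOCK_V    # voltages add in series
    print(f"{blocks} blocks in series -> {string_v:.0f} V DC bus")
```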

So yes, we're there already for custom installations, but there's a fair bit of overhead to get going, and commodity hardware generally doesn't support it.

Non sequitur: measuring is difficult.

1: I lie, it does, but far less, since at the same power the current is lower.


It's not necessarily more efficient, because you increase the I^2R losses. Reduce the voltage and you have to increase the current in proportion, but the resistive loss (not to mention the voltage drop) in power cables grows with the square of the current. Thus you need massive, thick cables too, using more copper.
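The copper cost follows directly from that square law: to hold the cable loss constant while the current goes up, the resistance must come down by the square of the current ratio, and resistance scales inversely with cross-sectional area. A sketch with illustrative numbers (not a cable-sizing guide):

```python
# Relative copper cross-section needed to keep cable loss constant
# as the distribution voltage drops. Loss = I^2 * R, and R ~ 1/area,
# so area must grow with the square of the current.

BASE_V = 120.0      # reference voltage (assumed)
BASE_AREA = 1.0     # relative cross-section at the reference voltage

for volts in (120.0, 48.0, 12.0):
    ratio = BASE_V / volts          # current scales up by this factor
    area = BASE_AREA * ratio ** 2   # area needed for the same I^2 R loss
    print(f"{volts:5.0f} V needs ~{area:6.2f}x the copper cross-section")
```

Dropping from 120V to 12V means a hundred times the copper for the same loss, which is the "massive, thick cables" point.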

Telcos typically use -48V, so they still need power supplies in servers - inverters - to do the DC level conversion, which is a conversion to AC and then back again. The cables are much thicker.

So it's not necessarily a great idea to run everything on DC for efficiency.