Using Y power cables in a datacenter

Solution 1:

I am no electrician either, but I think you at least lose the ability to keep your server up and running if that single power source fails. If instead you connect each PSU to a different power source, your server will still have an available power source (hopefully).

Solution 2:

I've never used them because they're a single point of failure, at best.

Every server I deploy into a real datacenter has each PSU plugged into a different PDU in the rack, each of which is attached to a different independent UPS, on different circuits, ideally even fed from different power feeds.

If the UPS, PDU or circuit your Y-cable's attached to goes down, the redundant PSUs are going to be useless, so it seems like a waste at best, and a false sense of redundancy at worst.

EDIT:

I'll just mention that I'm talking about lower-capacity 1U or 2U UPSes mounted inside the server rack, rather than the much larger, much more expensive UPS units that take up a rack all to themselves. Those are definitely built to be highly redundant in and of themselves, without the need for a secondary unit.

Solution 3:

The primary reason such items are frowned upon is redundancy, or rather the lack thereof. Using such a cable means all of your server's power inputs are being fed by the same circuit, so when that circuit dies (or the PDU it's connected to; I've had that happen), so does the server. Colos strongly recommend primary and secondary circuits for just this reason and want to see multi-PSU servers plugged into two circuits.

Way back in the day I had a group of machines that shipped with a single 3-way Y cable and 3 normal power cables for a large (7U if I remember right) 3 PSU system. The data-center I was working in at the time (this was about 1999) didn't have enough power-outlets for that kind of thing, so we ended up using the Y cable; 2 legs of the Y on one UPS, and a straight up power-cable for the 3rd PSU to the second UPS. 3-PSU systems are thankfully much less common now.

PSU Load-balancing, or is it switching?

There are differences in how power supplies handle loading. As various power-supply benchmarks have shown, peak efficiencies are reached once you get over 50% loading. There are gains to be had from running everything on one PSU; it's more likely to be efficient. It is for this reason that some server manufacturers draw all of a server's current through a single PSU and switch to the other one when a failure happens or a whim strikes; a 230 Watt system will get the best efficiency from its dual 400 Watt PSUs by running all the load through only one of them.

Such switching systems only draw from one PSU at a time, and therefore from only one circuit if the power circuits are fully separated.
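
To make the efficiency point concrete, here's a rough sketch using the 230 W / dual 400 W figures above. The efficiency curve is a made-up illustration (loosely shaped like a typical 80 PLUS curve), not vendor data:

    # Illustrative only: why a "switching" dual-PSU design can beat an even split.
    def psu_efficiency(load_fraction):
        """Crude efficiency model: poor at light load, peaking past ~50% load."""
        if load_fraction < 0.20:
            return 0.75
        elif load_fraction < 0.50:
            return 0.85
        return 0.92

    PSU_RATING_W = 400     # each PSU, as in the example above
    SYSTEM_DRAW_W = 230    # the server's load

    # Load-balanced: each PSU carries half the load (115 W, ~29% of rating)
    balanced_eff = psu_efficiency((SYSTEM_DRAW_W / 2) / PSU_RATING_W)
    # Switching: one PSU carries everything (230 W, ~58% of rating)
    switched_eff = psu_efficiency(SYSTEM_DRAW_W / PSU_RATING_W)

    print(f"balanced: {balanced_eff:.0%} efficient, {SYSTEM_DRAW_W / balanced_eff:.0f} W at the wall")
    print(f"switched: {switched_eff:.0%} efficient, {SYSTEM_DRAW_W / switched_eff:.0f} W at the wall")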

The downside to switching systems is that load can move unpredictably among the community of PSUs connected to a certain circuit. If enough of them throw their weight onto a single circuit, it can overload it. This is bad power design, since you want things designed so that you can lose a full circuit and stay up, but it's still something that trips up systems engineers.

Load-balancing servers draw equal amounts of current from both PSUs. This gives predictability in circuit loading, though it can still cause blown breakers: if the systems engineers load their circuits over 50% and one circuit dies, the PSUs are forced to draw 100% from the surviving circuit, which now exceeds its rating. Again, bad power design, but it's a common mistake.
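
A quick sketch of that failover math. The circuit rating and rack draw below are hypothetical examples; the 80% figure is the common practice of planning to only part of the breaker rating:

    # Why "keep each circuit under 50%" matters with load-balancing PSUs.
    # All numbers are hypothetical examples.
    CIRCUIT_RATING_W = 3680                       # e.g. 16 A at 230 V; use your actual feed
    PLANNED_LIMIT_W = CIRCUIT_RATING_W * 0.80     # common practice: plan to 80% of the breaker
    RACK_TOTAL_DRAW_W = 3200                      # combined draw spread across two circuits

    normal_per_circuit = RACK_TOTAL_DRAW_W / 2    # balanced across both circuits
    after_failure = RACK_TOTAL_DRAW_W             # everything lands on the survivor

    print(f"normal: {normal_per_circuit:.0f} W per circuit ({normal_per_circuit / PLANNED_LIMIT_W:.0%} of limit)")
    print(f"one circuit lost: {after_failure:.0f} W ({after_failure / PLANNED_LIMIT_W:.0%} of limit)")
    if after_failure > PLANNED_LIMIT_W:
        print("Overloaded: losing one circuit trips the surviving breaker.")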

Startup loads

There are two kinds of startup-loading:

  1. Everything runs flat out until BIOS (or OS Boot, or app-load) catches up and things calm down.
  2. Inrush current loading right as things get turned on.

The first is something we're all familiar with. That 120-disk SAS array may draw only 4000 Watts when running normally, but if all the disk shelves restart at the same time it may draw 6500 Watts.

The same holds true for servers. Fans run at full speed, yes. CPUs run at full speed for a bit, yes. RAM runs at full voltage during POST, yes. The server is likely to draw as much as it can during those first stages of POST, but the draw rapidly drops off as the BIOS hands things off to the OS and power-management regimes take over. A server that draws 110 Watts during normal usage may temporarily draw as much as 200W during those first few phases.

It's this temporary loading that most people think of when they say things like, "it runs the power supplies at full on startup". Those 400 Watt power supplies plugged into a server that draws 230W on a busy day aren't going to draw 400 Watts each; they'll draw 230W... combined.
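
As a back-of-the-envelope check using the figures above (110 W steady, ~200 W during POST); the server count and circuit limit are hypothetical:

    # Boot-surge headroom check. Per-server figures come from the example above;
    # the server count and circuit limit are hypothetical.
    STEADY_W = 110
    BOOT_PEAK_W = 200
    SERVERS = 12
    CIRCUIT_LIMIT_W = 2944    # e.g. 16 A at 230 V, planned to 80%

    print(f"steady state:      {STEADY_W * SERVERS} W of {CIRCUIT_LIMIT_W} W")
    print(f"simultaneous boot: {BOOT_PEAK_W * SERVERS} W of {CIRCUIT_LIMIT_W} W")
    # 1320 W steady is comfortable, but a simultaneous restart (2400 W) eats most
    # of the headroom -- which is why staged startup matters after an outage.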

The second isn't well known, but when people run across it they get worried. This is inrush current, and it lasts a few milliseconds, during which the draw can be quite a lot higher than normal. The inrush current for IT devices with AC-to-DC converters in them (which is all of them) almost always happens twice:

  • One time when the cable is plugged in, as the pre-power stage gets power. It's this stage that allows the power-button on the front to power on the device.
  • A second time when the main distribution stage powers up and starts the device.

Because of the timings, this only becomes a factor when restoring power to a dead circuit. All those devices powering on at exactly the same time can do weird things to the power on that circuit, and that can cause damage all by itself. Doing a staged startup alleviates this.
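
A trivial sketch of what a staged startup can look like. The PDU-control call below is a hypothetical placeholder, not a real API; substitute whatever your PDU's management interface actually offers (SNMP, CLI, web API):

    import time

    def switch_outlet_on(outlet):
        # Hypothetical placeholder -- in reality this would talk to your PDU.
        print(f"powering on outlet {outlet}")

    def staged_power_on(outlets, delay_seconds=2.0):
        """Bring outlets up one at a time so the inrush spikes don't all hit
        the freshly restored circuit at the same instant."""
        for outlet in outlets:
            switch_outlet_on(outlet)
            time.sleep(delay_seconds)   # let the inrush settle before the next one

    staged_power_on(range(1, 9))        # e.g. eight outlets, two seconds apart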

This is the other area people think of when they say things like "power-supplies run full-tilt on startup", since each PSU has its own inrush current. But as I said, this lasts for a few milliseconds and comes in two stages.

Solution 4:

Our local Colo did not like them either. We had a shared cabinet and were only allowed to utilize a single PDU port for our dual PSU server. The colo didn't like it for redundancy reasons, but for a non-critical machine it was perfectly acceptable to us. There aren't any major power issues from an electrical perspective.

This is why our colo didn't like it:

  1. If the PDU dies the server is dead
  2. If the cable gets fried your server is dead

Here is why I like it:

  1. Non-mission critical box
  2. Supplied power to both PSUs giving me redundancy on the box
  3. I wasn't all that concerned with potential down-time based on the failure of the PDU, nor was I all that concerned about the cable going batshit crazy. (see #1).

The colo didn't have any issues with them electrically speaking. The draw was identical; the box only used what it needed, regardless of the number of PSUs drawing power. However, shortly after I bought this cable for my Dell box, Dell stopped publicly offering the cables for sale.

Solution 5:

When a server boots up, both PSUs start up at full power to do a system check, which draws double the normal operating power. Now that power is more of a concern, the circuits coming into your cabinet are sized very specifically. They don't want you to trip a breaker should you have to reboot your server.

Also, some office buildings won't let you do that because of fire code: you are doubling up the power that could be drawn from one outlet.