UPS Requirements - Max load or actual load?

One of our UPS units failed this weekend (Happy new year!), so I'm on the lookout for a replacement.

I've done the calculations for our power requirements and it comes out at just under 8000W. This is based on the manufacturers' data sheets for the hardware we're running rather than on actual measured usage.
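For what it's worth, here's a minimal sketch of the kind of nameplate tally I did (every device and wattage below is an illustrative placeholder, not our actual kit):

```python
# Nameplate tally from manufacturer data sheets.
# All devices and wattages here are placeholders for illustration only.
nameplate_watts = {
    "servers (4 x 750 W PSU rating)": 4 * 750,
    "PoE switches (2 x 740 W budget)": 2 * 740,
    "storage array": 1200,
}

total_w = sum(nameplate_watts.values())
print(f"Nameplate total: {total_w} W")   # worst-case figure, not measured draw
```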

We currently have two UPS units, the larger rated at 4000W and the other at 1500W, so on paper we're already under capacity (although neither unit ever runs above 60% load).

Should I be buying a unit that can handle the maximum load of all the hardware (8000W), or one sized to the peak measured load with some headroom? (Not considering growth here.)

Also, is it best practice to have a single larger unit, or two smaller units, given that most of the hardware has redundant PSUs?

Update

Spoke with APC this morning; the unit is beyond repair and it's not under warranty, so it will need to be replaced.

I've put a power meter on the feed to see what the actual usage is and it comes in at 1.8kW.

The 8kW figure I mentioned above was the maximum potential load, including every switch port running PoE.

We're never going to reach the 8kW mark; in fact it isn't physically possible, since we only have two 16A single-phase feeds to the cabinets, which caps us at 3.68kW per feed.
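For reference, the feed maths (assuming 230V single-phase, which is what gives the 3.68kW figure):

```python
# Maximum draw per feed, assuming 230 V single-phase supply
volts = 230
amps = 16
feeds = 2

per_feed_w = volts * amps        # 3680 W, i.e. 3.68 kW per feed
total_w = per_feed_w * feeds     # 7360 W across both feeds, still below the 8 kW nameplate sum
print(per_feed_w, total_w)
```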

So what I'm going to propose is that we buy two 3kW UPS units for redundancy. They'll cover our average load plus about 1kW of headroom (is that generous enough?), and we'll add two additional battery packs for the 3kW units to give us the required runtime. I'll then split each redundant PSU across the two UPSes, so each UPS (in theory) carries half of the 1.8kW load.
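A quick sanity check on that split, using the measured 1.8kW figure (the failed-UPS case is just how I'd expect the redundancy to behave):

```python
# Sanity check on the two-UPS proposal using the figures from the post.
# The "one UPS failed" case assumes the surviving unit must carry everything.
measured_load_w = 1800
ups_capacity_w = 3000

normal_per_ups_w = measured_load_w / 2   # both UPSes healthy, load split across them
worst_case_w = measured_load_w           # one UPS down, the survivor carries the full load

print(f"Normal: {normal_per_ups_w:.0f} W per UPS "
      f"({normal_per_ups_w / ups_capacity_w:.0%} of capacity)")
print(f"One UPS failed: {worst_case_w} W "
      f"({worst_case_w / ups_capacity_w:.0%} of capacity)")
# -> 900 W (30%) normally, 1800 W (60%) with one UPS down, leaving 1.2 kW of headroom
```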

It's worth noting that we enforce a staggered power-on process, so we're unlikely to see a surge in load when power is restored.

Does this sound sane?


Solution 1:

System power ratings are misleadingly "inflated" to account for the maximum potential system load, i.e. what you'd draw if you filled every supported bay, bank, slot and PSU position and then ran everything at full load.

In practice, actual load is lower. Usually it is much lower.

So measure the actual power draw of your running gear, then add a margin for the spike that occurs when the hardware initializes at power-up (if you haven't had a chance to measure that in reality). When I can't measure the whole room/rack/facility booting at once, I semi-arbitrarily go with 30% over the steady-state running consumption. Then round up to the next available UPS size.
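As a rough sketch of that rule of thumb, using the 1.8kW figure from the question (the list of UPS sizes is just an example):

```python
# Rule-of-thumb sizing: measured running load + ~30% start-up margin,
# rounded up to the next available UPS size.
# The 1.8 kW figure comes from the question; the size list is an example only.
measured_w = 1800
margin = 0.30

required_w = measured_w * (1 + margin)   # 2340 W
available_sizes_w = [1500, 2200, 3000, 5000]
chosen_w = min(s for s in available_sizes_w if s >= required_w)

print(f"Required: {required_w:.0f} W -> choose a {chosen_w} W unit")   # -> 3000 W
```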

Et voilà.

Solution 2:

Measure the real utilization of your infrastructure as it stands today, and build in room for expansion, spikes in utilization, and normal growth.

There's nothing more to it than that.

As for the UPS design, it depends. Dual power supplies are good to have, but what do you want to protect against? How exactly did your existing UPS fail?