Key Things to Look for in a Data Center

I'm trying to build a simple checklist to determine the quality of a data center. Where and what should I look for, and how can I determine whether what the owners say (e.g. "our UPSes keep the data center up for 100 days without power") is true or not? What are typical signs of good or bad data centers?


Here is a list of questions I made for myself last time I went datacenter shopping:

  • Explain what it would take for sprinklers to go off on our equipment.
  • What will remote hands be willing to do? For example, install hard drives, rotate tapes…
  • Are your remote hands available 24/7/365? What is the average wait time for them to get to the cage after a ticket is filed? (How are tickets entered?)
  • Are you on multiple grids?
  • Do you have raised floor cooling?
  • How many datacenters do you operate besides this one?
  • How long can the datacenter run on backup power?
  • Can we have equipment delivered directly to the datacenter?
  • Is there a delivery dock and free, close, and available parking?
  • If we have a vendor come to the datacenter, do we need to accompany them?
  • What ambient temperature and humidity is maintained?
  • How many ISP choices are there?
  • Have any of your customers ever lost power for any amount of time in the history of the datacenter?
  • How long has this datacenter been in operation?
  • What access controls are in place for both the floor and the equipment?

If you visit several and ask these questions, then between the price, your impressions from the visit, and their answers, it will probably be clear which one you want. Always visit in person, and visit a good number of them.


Kyle covered it pretty well, but here are a couple of points:

Physical Security is huge. It should take nearly an act of Congress (Parliament, insert slow-moving bureaucratic institution here) to get inside.

It should have Halon fire suppression, not sprinklers; servers should not be damp. (Local fire-suppression regulations may override...)

Find out what their preferred server vendors are. Unless it's for a very specific reason (like running a Google-like datacenter), it should be name-brand servers (Dell, HP, IBM, Sun, Apple, etc.). If they say "white-box" or a brand that you don't recognize, run. Note that there are some reputable lower-tier server vendors (System76, for example), but "custom-built" means that they're putting things together themselves. Great for your home, but bad for your datacenter. (This doesn't include buying an HP ProLiant DL580 and installing things like the memory option kits or drive cages.)

What ownership options are available? Buy through them? Buy direct and drop-ship there? Leasing? VMs?


Excellent as always, Kyle. A couple of things I've learned from experience:

  • Ask if there are generators to back up the UPSes. If so, have the generators been tested, and how often?

  • What physical locks and checks do they have in place to prevent electricians from killing the power?

  • What liability/insurance coverage do they have?

  • How do they deal with situations when they don't meet their SLA?

  • How often have they not met an SLA?

  • How much power do they provide to each rack/cage/etc.? (Will you be power constrained and need another rack/cage just for the extra power?)

  • Ask for references; ones in your industry would be good.

Funny stories that weren't funny at the time:

  1. There was a fire in Vancouver in an underground electrical compartment, four blocks from my DC; the fire took out the power for a ten-block radius. The UPSes kept the lights on until the gen-set came online. The gen-set stayed online for about an hour before overheating. The UPSes were able to keep the lights on for another 30 minutes after the gen-set did a safety shutdown. The gen-set belonged to the building; IIRC the DC was able to blame them and wash their hands of it.

  2. An electrician killed the power to a couple of rows of racks at the DC because the panel somehow fell and knocked all the breakers open. I've also heard about an electrician at another DC going to work on a UPS without putting it into bypass mode, taking down the whole DC.


  • I would say that probably a third of a data center is the technical line items (Do you have VESDA, refueling contracts, chillers covered by UPSes, multiple power grids, diverse fiber entrances?).

  • Another third is how they deal with it when things don't go right. Do they swallow their pride, examine the failure and figure out what to do to make sure it doesn't happen again? Or do they just keep doing what didn't work before?

  • And the other third is the personnel. Are they smart and easy to work with, and do they stick around rather than turning over every month?

But even more importantly: do they have the space you need? One place we almost moved into: within two years, we were using more space than they had available.


I work in a small data center in Silicon Valley. I'm the sysadmin on the managed-server side of the business.

Bad signs:

  • Lack of redundant monitoring and alerting for power, temperature, and humidity (a minimal alerting sketch follows this list)
  • Lack of monitoring for network devices, colos, servers, and other equipment
  • Clutter, and no cable ties or other cable management keeping the racks clean and organized
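
On the monitoring point: here's a minimal sketch of the kind of environmental alerting the facility should have (and that you can also run inside your own cage). The read_sensor() function and the thresholds below are hypothetical placeholders; a real setup would pull readings over SNMP, Modbus, or a vendor API:

    # Assumed acceptable ranges: (low, high). Illustrative, not ASHRAE limits.
    THRESHOLDS = {
        "temperature_c": (18.0, 27.0),
        "humidity_pct": (40.0, 60.0),
    }

    def read_sensor(metric):
        """Hypothetical stand-in: replace with a real SNMP/Modbus/API call."""
        sample = {"temperature_c": 29.5, "humidity_pct": 45.0}
        return sample[metric]

    def check_environment():
        """Return out-of-range alerts; an empty list means all is well."""
        alerts = []
        for metric, (low, high) in THRESHOLDS.items():
            value = read_sensor(metric)
            if not low <= value <= high:
                alerts.append(f"{metric}={value} outside [{low}, {high}]")
        return alerts

    for alert in check_environment():
        print("ALERT:", alert)
    # ALERT: temperature_c=29.5 outside [18.0, 27.0]

The real version of this runs from at least two independent machines on independent power, which is what "redundant" means in that bullet.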

Good signs:

  • Onsite diesel generator with automatic failover
  • Backup chillers and air handlers with automatic failover
  • Plenty of bandwidth on major carrier backbones (AT&T, XO Comm)
  • Redundant network providers
  • Redundant core routers, firewalls, load balancers, and switches
  • Running memory checks and hardware diagnostics before deploying servers

Name-brand servers are fine, but if they're old and have been around the block a bunch of times, you'd better make sure they're passing hardware diagnostics before using them.
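
One way to spot-check a hand-me-down box yourself is to run smartctl's drive health test against each disk (smartctl is part of smartmontools and needs root). The device names here are just examples; a sketch:

    # Spot-check drive health with smartctl before trusting an old server.
    import subprocess

    def drive_healthy(device):
        """True if smartctl's overall health assessment reports PASSED."""
        result = subprocess.run(["smartctl", "-H", device],
                                capture_output=True, text=True)
        return "PASSED" in result.stdout

    for dev in ("/dev/sda", "/dev/sdb"):  # example device names
        print(dev, "OK" if drive_healthy(dev) else "CHECK ME")

That's just the disks, of course; a full burn-in would also cover memory (memtest) and CPU/thermal load.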

A good data center should provide its customers with a website where they can monitor their bandwidth consumption and uptime. They should also be willing to answer any questions. Ask them the make and model of their UPS. Ask to see the current load on the UPS. With this information you can verify how long it can actually run without power.
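
As a sanity check, those two numbers turn into a rough runtime estimate with simple arithmetic. Here's a minimal sketch in Python; the battery capacity, inverter efficiency, and usable-fraction figures are illustrative assumptions, not specs for any particular UPS, so pull the real numbers from the battery datasheet:

    # Rough UPS runtime estimate from battery energy and measured load.
    def ups_runtime_minutes(battery_kwh, load_kw,
                            inverter_efficiency=0.9,  # assumed ~90% DC-to-AC
                            usable_fraction=0.8):     # don't drain to 0%
        """Estimate minutes of runtime for a given load."""
        usable_kwh = battery_kwh * usable_fraction * inverter_efficiency
        return usable_kwh / load_kw * 60

    # Example: a hypothetical 40 kWh battery string under a 30 kW load
    print(round(ups_runtime_minutes(40, 30)))  # ~58 minutes

If a vendor's claim (like the "100 days on UPS" above) is wildly out of line with that kind of estimate, you have your answer.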

But honestly, the UPS should not be your main concern. A UPS only provides a brief runtime (30 minutes or so). A much better question is whether the DC has a backup generator. It's also worthwhile to ask which grid the DC is on. In terms of brownouts and blackouts, different priorities are assigned to different grids. Guess what? Hospitals and fire stations are high priority (their power is the last to be cut). If the data center is on the same grid, it benefits from the same reliable power.

Ask them how much power is available per rack. Where I work, we provide each rack with 3x 25-amp circuits. A typical 1U server consumes 1-3 amps.
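
From those figures you can work out whether you'll hit the power limit before the space limit. A quick sketch, assuming the common practice of running circuits at 80% of rated capacity for continuous loads (confirm the actual derating rule with your facility):

    # Back-of-the-envelope rack power budget: 3x 25 A circuits per rack,
    # 1-3 A per 1U server (figures from the paragraph above).
    circuits = 3
    amps_per_circuit = 25
    derating = 0.8  # assumed continuous-load derating; verify with the DC
    usable_amps = circuits * amps_per_circuit * derating  # 60 A per rack

    for amps_per_server in (1, 2, 3):
        print(amps_per_server, "A/server ->",
              int(usable_amps // amps_per_server), "servers per rack")
    # 1 A/server -> 60 servers per rack
    # 2 A/server -> 30 servers per rack
    # 3 A/server -> 20 servers per rack

At 2-3 amps per server, the power budget runs out well before a 42U rack fills up, which is exactly the "extra rack just for the power" situation mentioned earlier.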