What are the cons of using a PC/cheap PCs as servers vs. hardware designed to be used as servers?

What is it about server hardware that actually makes it more suited for server hosting vs. say a collection of cheap PCs or a good PC?


""What is it with server hardware that actually makes it more suited for server hosting vs. say a collection of cheap PCs or a good PC?""

When hosting a lot of servers, holding them up, getting power to them, and being able to get at them all make a difference. The typical way to do that is in a rack ( http://images.google.com/images?q=server%20rack ), and servers are shaped so they fit standard rack sizes and come with rails and cable tidies which all fit the right spacings, Lego style. If you build your own machines, you will have to deal with putting them on the floor or standing them on shelves, running cables to them neatly, finding a way to keep them behind a locked door or to lock their fronts to prevent fiddling, and so on.

On the electronics side, servers are built with more expensive memory which can correct errors (ECC - Google's study of their own kit showed rates of >3,500 errors per memory stick per year). Not much good having a database spread over cheap machines if its content gets quietly corrupted and it was important.
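If you want to see whether error-correcting memory is actually doing anything, the Linux kernel's EDAC subsystem exposes per-memory-controller counters under /sys. Here's a minimal sketch that reads them - it assumes a Linux box with ECC RAM and an EDAC driver loaded, and the exact sysfs layout can vary by kernel:

```python
# Minimal sketch: read ECC error counters exposed by the Linux EDAC
# subsystem (assumes ECC memory and a loaded EDAC driver; paths may vary).
from pathlib import Path

def ecc_error_counts(edac_root="/sys/devices/system/edac/mc"):
    """Return {memory controller: (corrected, uncorrected)} error counts."""
    counts = {}
    for mc in sorted(Path(edac_root).glob("mc*")):
        ce, ue = mc / "ce_count", mc / "ue_count"
        if ce.is_file() and ue.is_file():
            counts[mc.name] = (int(ce.read_text()), int(ue.read_text()))
    return counts

if __name__ == "__main__":
    counts = ecc_error_counts()
    if not counts:
        print("No EDAC counters found - probably no ECC memory or no EDAC driver loaded.")
    for controller, (corrected, uncorrected) in counts.items():
        print(f"{controller}: corrected={corrected} uncorrected={uncorrected}")
```

On a typical desktop without ECC those files simply won't exist, which is rather the point.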

They are built with higher quality disks which can run 24x7 and stay in warranty, spin at faster speeds (15,000 RPM compared to a standard 7,200 RPM), and connect to faster interfaces (Serial Attached SCSI). They often have high end disk controllers which spread load over several disks (RAID) in a way that keeps working even if a disk fails, and which have on-board memory for buffering disk access to make it faster.
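To make "keeps working even if a disk fails" concrete, here's a toy sketch of what a mirroring (RAID-1 style) controller does: every block is written to two disks, and a read is satisfied from whichever copy is still alive. Real controllers do this in hardware with battery-backed cache; this is only an illustration:

```python
# Toy illustration of RAID-1 (mirroring): every block is written to two
# "disks", so a read still succeeds if one of them dies.
class MirroredPair:
    """Two 'disks' (dicts of block -> data); every write goes to both."""

    def __init__(self):
        self.disks = [{}, {}]
        self.alive = [True, True]

    def write(self, block, data):
        for i, disk in enumerate(self.disks):
            if self.alive[i]:
                disk[block] = data            # same block lands on both disks

    def read(self, block):
        for i, disk in enumerate(self.disks):
            if self.alive[i] and block in disk:
                return disk[block]            # either surviving copy will do
        raise IOError("block lost - no surviving copy")

    def fail_disk(self, i):
        self.alive[i] = False                 # simulate a dead drive


pair = MirroredPair()
pair.write(0, "important row")
pair.fail_disk(0)                             # one disk dies...
print(pair.read(0))                           # ...the data is still readable
```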

Electrically, they are more carefully designed so they can have disks removed and reconnected while switched on - and in bigger servers, have processors, power supplies and expansion cards added and removed while running.

They usually use more expensive processors such as Intel Xeons, which have more cache memory built into the chip, a faster processor-to-memory interface, more cores and beefier cooling.

Often they simply have more of everything - a dozen memory sockets, half a dozen disk slots, multiple processors, multiple power supplies, four network connections - all for handling more work at the same time, or for surviving a single failure.
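A rough worked example of why "two of everything" matters: with two independent power supplies, both have to fail at the same time to take the box down. The 5% annual failure figure below is invented purely for illustration, and it generously assumes the failures are independent:

```python
# Back-of-envelope: if a single PSU has a 5% chance of failing in a given
# year (an invented figure), two independent PSUs only take the server
# down if both fail at once.
p_single_fails = 0.05

p_one_psu_outage  = p_single_fails                 # no redundancy
p_dual_psu_outage = p_single_fails ** 2            # both must fail together

print(f"single PSU: {p_one_psu_outage:.2%} chance of an outage")   # 5.00%
print(f"dual PSU:   {p_dual_psu_outage:.2%} chance of an outage")  # 0.25%
```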

All their electronic components, such as capacitors and resistors, will be made of higher quality materials so they can stand being run at full load in a hot environment for a long time without failing. They aren't competing on being cheap; they are competing on being trusted to work for a long time.

Airflow matters - inside, they have carefully laid out components and cabling, and many fans. It's not uncommon to see two rows of fans, one blowing through the other, to shift more air and to tolerate a fan failure.

More sensors - how many home computers warn you if the case was opened recently?
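Server boards ship with a management controller (BMC) covered in exactly those sensors, and you can query it with standard tools. A hedged sketch using ipmitool - it assumes ipmitool is installed and a local BMC driver is present, and sensor names (chassis intrusion included) vary by vendor:

```python
# Sketch: list hardware sensors via the server's BMC using ipmitool
# (assumes ipmitool is installed and a local BMC/driver is present;
# sensor names such as chassis intrusion vary by vendor).
import subprocess

def list_sensors():
    out = subprocess.run(["ipmitool", "sensor"],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        print(line)          # temperatures, fan speeds, voltages, intrusion...

if __name__ == "__main__":
    list_sensors()
```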

All the changes are designed to answer the questions:

  • What happens if we need 50 or 500 of these things - how do we manage them?
  • How much more work can it do at once than a normal computer?
  • How can we have servers without needing a huge team of employees to run them?
  • How can we avoid a failure taking out everything?
  • How can we avoid causing more problems while we fix it?

OK, your clustered MySQL server - what happens when the $50 motherboard fails? Do you get a support guy on site within 4 hours with a guaranteed correct replacement, or do you have to order a replacement from NewEgg and hope it's still being made? Will it take out any other components as it fails? Can a $50 motherboard hold enough memory to make a good MySQL server? Can it shift enough information from disk->RAM->CPU to make a good database server?
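On that last question, a bit of back-of-envelope arithmetic helps. All the figures below are invented round numbers purely for illustration - the point is the ratio, not the exact values:

```python
# Back-of-envelope: how long to pull a database working set from disk into
# RAM? All figures are invented round numbers purely for illustration.
working_set_gb    = 200    # hot portion of the database
cheap_sata_mb_s   = 150    # one consumer 7,200 RPM SATA drive
server_array_mb_s = 1500   # several 15k SAS drives behind a RAID controller

def warmup_minutes(size_gb, throughput_mb_s):
    return size_gb * 1024 / throughput_mb_s / 60

print(f"cheap box: {warmup_minutes(working_set_gb, cheap_sata_mb_s):.0f} min")
print(f"server:    {warmup_minutes(working_set_gb, server_array_mb_s):.0f} min")
```

And if the cheap board tops out at a handful of memory slots, the working set may never fit in RAM at all, so you pay that disk penalty constantly.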

How much effort are you spending on clustering, when instead you could buy one big machine and not cluster?

What if it's not MySQL, but instead a system which doesn't support clustering, like a random company's document store, and you need it to serve 500 users on one server?

"There's nothing magical about servers" is quite right - they are more expensive, heavy duty computers. Bulldozers, cranes and Formula 1 cars instead of a fleet of Mondeos. Are you a business which needs a really fast car and can afford a team of mechanics? Or are you working on your own, where you can't afford a fast car but you can afford to spend all weekend fiddling with things to make them work?

You are asking at a time when distributed servers are all the rage - if you have a redundant storage system and you can point thirty cheap Apache servers to it and have a site simple enough that you can load balance between them without needing an expensive load balancer, then you're onto a good thing. Cheap machines will do fine.
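That "thirty cheap Apache servers" idea is nothing more exotic than round-robin. A sketch of the selection logic - the same job DNS round-robin or a small software balancer does; the hostnames are hypothetical:

```python
# Sketch of the "thirty cheap Apache boxes" idea: a trivial round-robin
# rotation over the pool. Hostnames are hypothetical.
from itertools import cycle

web_servers = [f"web{n:02d}.example.com" for n in range(1, 31)]
rotation = cycle(web_servers)

# Each request simply goes to the next box in the rotation; if the site
# keeps no local state, any of the thirty can answer any request.
for request_id in range(5):
    print(f"request {request_id} -> {next(rotation)}")
```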


Servers generally have vendor support; that is, you pay to have replacement parts at your door within hours. As such, they are usually higher quality, and tested out the wazoo.

These are also tested configurations, so certain hardware is "known good" with stable drivers.

Servers are also generally able to accommodate things like multiple processors, multiple power supplies, etc., and are often built to fit a proper standard rack. And there's always calling your vendor and knowing they can tell you whether XYZ component will work with ABC system.

In general there's nothing magical about servers. If you don't mind scrounging parts, you can assemble a system just as good as what comes from Dell. But when something breaks, you can't get the exact same or a certified part to replace it, and it isn't someone else's concern - it's yours. For most businesses/admins that added cost buys peace of mind, knowing that your vendor is the one who has to get things replaced instead of it adding to the sysadmin's problems.

Even Google used commodity systems for its servers; supposedly they still do.


Reliability, Availability and Serviceability.

This means:

  • better components designed to run 24/365 for years
  • oceans of alarmed sensors to spot pre-failure
  • lights-out control and remote ISO booting
  • higher performance chipsets designed to remain responsive under extreme loads
  • 'hot' swapping of disks, CPUs, memory and adapters
  • redundant PSUs
  • better air management
  • memory failure protection
  • power management
  • plus things like >2 CPU support, huge local storage support, much larger memory capacities, and rack mountability/physical size
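On the lights-out point: the management controller answers on the network even when the operating system is wedged or powered off, so you can check and cycle power from home. A hedged sketch over IPMI - the host and credentials are hypothetical, and it assumes ipmitool and a reachable BMC:

```python
# Sketch of "lights-out" control: ask the BMC for the chassis power state
# over the network, which works even when the OS is down.
# Host and credentials are hypothetical; assumes ipmitool and a reachable BMC.
import subprocess

BMC = ["ipmitool", "-I", "lanplus",
       "-H", "ilo-db01.example.com", "-U", "admin", "-P", "secret"]

state = subprocess.run(BMC + ["chassis", "power", "status"],
                       capture_output=True, text=True, check=True).stdout
print(state.strip())     # e.g. "Chassis Power is on"
```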

Those are the ones I can think of off the top of my head, anyway. Ultimately it depends on how important the service provided by your server is to your organisation.