How do services with large ingest rates install enough hard disks fast enough?
Solution 1:
I don't know if any of them are actually installing hardware one server at a time. Back in 2008, Microsoft started building its data centers from servers delivered in sealed, pre-wired shipping containers that only had to be unloaded from a truck and hooked up to power and network connections. The 2008 build was a mix of containers and traditional racks; for their most recent datacenters they've since moved to a custom prefab design that is weatherproof and doesn't need to be housed inside a separate building.
Both HP and IBM sell similar packages: prebuilt containers full of servers that need only power and data connections to deploy.
Solution 2:
Google developed several technologies internally to store these huge masses of data. Using them, they can add truckloads of hard disks to a cluster without any downtime, though yes, they still need people to physically install the hardware.
As far as I know from the Google blog, the first of the two main pieces is the Google File System (GFS), a distributed file system designed to scale to very large clusters: Google File System
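To give a rough feel for the design: GFS splits files into large fixed-size chunks, and a single master tracks which storage nodes (chunkservers) hold replicas of each chunk; new machines can be registered while the system keeps running. The toy sketch below is my own illustration of that idea, not Google's actual API; the 64 MB chunk size matches the GFS paper, everything else is made up.

    # Toy sketch of the GFS layout: a single master maps chunk ids to the
    # chunkservers that hold replicas, and capacity is added by registering
    # more chunkservers at runtime. Names are assumptions, not Google's API.
    import random

    CHUNK_SIZE = 64 * 1024 * 1024   # the GFS paper uses 64 MB chunks
    REPLICAS = 3                    # copies kept of every chunk

    class Chunkserver:
        def __init__(self, name):
            self.name = name
            self.chunks = {}        # chunk id -> bytes

        def store(self, chunk_id, data):
            self.chunks[chunk_id] = data

    class Master:
        def __init__(self):
            self.chunkservers = []  # every registered storage node
            self.locations = {}     # chunk id -> servers holding a replica

        def add_chunkserver(self, server):
            # New servers (a truckload of disks) join while the cluster runs.
            self.chunkservers.append(server)

        def allocate(self, chunk_id):
            # Pick distinct servers to hold the replicas of this chunk.
            servers = random.sample(self.chunkservers, REPLICAS)
            self.locations[chunk_id] = servers
            return servers

    def write_file(master, name, data):
        # Split the file into fixed-size chunks and push each replica.
        for offset in range(0, len(data), CHUNK_SIZE):
            chunk_id = f"{name}#{offset // CHUNK_SIZE}"
            for server in master.allocate(chunk_id):
                server.store(chunk_id, data[offset:offset + CHUNK_SIZE])

    master = Master()
    for i in range(4):
        master.add_chunkserver(Chunkserver(f"cs{i}"))
    write_file(master, "crawl/batch-0", b"some crawled pages...")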
On top of the Google File System they have Bigtable, a distributed, sorted key-value store that also scales to huge sizes: Big Table
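Conceptually, Bigtable is a big sorted map keyed by row and column, persisted as files in GFS and split into tablets across many servers. Here is a toy illustration of just that sorted-map data model (my own code, not Bigtable's real interface):

    # Toy illustration of the Bigtable data model: a sorted map keyed by
    # (row, column), supporting point lookups and row-range scans. The real
    # system shards rows into tablets stored as files in GFS; this mimics
    # only the sorted-map idea.
    from bisect import insort, bisect_left

    class TinyTable:
        def __init__(self):
            self._keys = []     # sorted list of (row, column) keys
            self._data = {}     # (row, column) -> value

        def put(self, row, column, value):
            key = (row, column)
            if key not in self._data:
                insort(self._keys, key)
            self._data[key] = value

        def get(self, row, column):
            return self._data.get((row, column))

        def scan(self, start_row, end_row):
            # Yield every cell whose row key falls in [start_row, end_row).
            i = bisect_left(self._keys, (start_row, ""))
            while i < len(self._keys) and self._keys[i][0] < end_row:
                row, column = self._keys[i]
                yield row, column, self._data[(row, column)]
                i += 1

    t = TinyTable()
    t.put("com.example/page1", "contents", "<html>...</html>")
    t.put("com.example/page2", "anchor:home", "link text")
    print(list(t.scan("com.example/", "com.example/z")))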
To guarantee high availability, everything is stored redundantly, usually as three or more copies.
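A common way to get that kind of redundancy is to spread the copies across failure domains, for example placing each of the three replicas in a different rack so that losing one rack or switch never loses every copy. The sketch below is my own illustration of such a placement policy, not a description of Google's actual rules.

    # Sketch of rack-aware replica placement: choose three copies so that no
    # two land in the same rack, so a single rack or switch failure cannot
    # take out every replica. The policy and data here are illustrative.
    import random

    def place_replicas(servers_by_rack, copies=3):
        """servers_by_rack maps a rack name to the list of servers in it."""
        racks = [r for r, servers in servers_by_rack.items() if servers]
        if len(racks) < copies:
            raise ValueError("not enough racks for the requested redundancy")
        chosen_racks = random.sample(racks, copies)
        # One replica per chosen rack keeps data alive if any one rack dies.
        return [random.choice(servers_by_rack[r]) for r in chosen_racks]

    cluster = {
        "rack-a": ["a1", "a2"],
        "rack-b": ["b1", "b2"],
        "rack-c": ["c1"],
    }
    print(place_replicas(cluster))   # e.g. ['a2', 'b1', 'c1']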
Solution 3:
That's precisely correct. I remember that at one point Facebook's datacenters were taking delivery of about three tractor-trailers full of hard drives and rack-mount servers on an average day. Of course, these companies have sophisticated schemes to make storage scalable and redundant; Google, for example, has GFS. Facebook has three data centers just for its own equipment, each larger than two Wal-Marts, and a new one planned at four times the size of its existing centers.