How do you back up 40+ CentOS 5.5 servers?
We are embarrassed to ask this question. Apologies for our lack of UNIX expertise. We have inherited 40+ CentOS 5.5 servers and don't know how to back them up. We need low-level, clone-type images so that we could restore the servers from scratch if we had to replace the HDs etc.
We have used the "dd" command, but we assume this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached.
All 40 servers have a pair of mirrored disks (we don't know if it's hardware or software RAID). Most only have 100MB used. Servers are running Apache, Zend, Tomcat, MySQL etc. Ideally we don't want to have to shut them down to back up (but could).
We assume that standard UNIX commands like tar, cpio, rsync, scp etc. are of no use, as they only copy files, not partitions, attributes, groups etc., i.e. they do not produce a result which can simply be re-imaged onto a new HD to bring the server back from the dead.
We have a large SAN, a spare Windows box and spare UNIX boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no software or documentation for it. We have a copy of Symantec Backup Exec, but we have no budget for UNIX client licenses. (The company has negative amounts of money.)
We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore).
Googling returns some applications to do this: e.g. Clonezilla, which looks difficult to install and invasive; Mondo, which only seems to support backup if you are local to the machine; Amanda might be an option, but looks like days or weeks of work to learn and set up.
Is there anything built into CentOS, or do we have to go the route of installing, learning and configuring a set of backup software?
Any ideas? This must be a pretty standard problem, but googling doesn't give an obvious answer.
Solution 1:
Considering that this is an "emergency" situation, you should do the following only for so long as it takes you to get competent staff in place and a more reliable long-term backup strategy. Do not do this forever.
You can use dd to make images of the hard drives in these systems (or of the individual partitions, if you want). dd can be used to read or write data to a plain file as well, and you'll take advantage of this. Since the hard drives are likely much larger than the actual space used, I recommend compressing the images as well.
So the general idea would be something like this:
- Plug the USB hard drive into a USB port.
- Mount the USB hard drive, e.g. at /media/backup.
- Copy the hard drive images. For example:
  dd if=/dev/sda of=/media/backup/$(hostname)-sda.img
  Better yet, compress the image while you take it:
  dd if=/dev/sda | gzip -c > /media/backup/$(hostname)-sda.img.gz
- Unmount the USB hard drive, and move to the next machine.
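Since you said you can only reach these machines remotely, the same imaging idea also works over SSH instead of a local USB drive. A minimal sketch, assuming root SSH access on each server and a reachable backup host with enough space (backuphost and the /backups path here are placeholder names):

# Run as root on each CentOS server; streams a compressed image of
# the whole disk to the backup host. The double quotes make
# $(hostname) expand locally, so each image is named after its own server.
dd if=/dev/sda bs=64k | gzip -c | ssh backup@backuphost "cat > /backups/$(hostname)-sda.img.gz"

Bear in mind that imaging a mounted, running system can produce an inconsistent snapshot; stopping MySQL and the other services first (or dropping to single-user mode) is safer if you can afford the downtime.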
You can use a tool such as kpartx to work with the backup images as if they were actual hard drives (see its man page), or just restore them directly by reversing if= and of= in the dd command.
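As a rough sketch of both directions, using the image names from the example above (server01 is a hypothetical hostname; the exact /dev/mapper names vary, and are printed by kpartx -v):

# Inspect an uncompressed image without writing it to a disk;
# kpartx attaches the file to a loop device and maps each partition:
kpartx -a -v /media/backup/server01-sda.img
mount /dev/mapper/loop0p1 /mnt
# ... browse /mnt, then clean up:
umount /mnt
kpartx -d /media/backup/server01-sda.img

# Restore a compressed image onto a replacement disk
# (this overwrites everything on /dev/sda):
gunzip -c /media/backup/server01-sda.img.gz | dd of=/dev/sda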
Solution 2:
Everything reliable will take a lot of time to learn, implement and test in such an environment; there is no easy way around this.
You can use standard tools like rsync etc. to get complete backups; you just need to make sure that all the information you need for recovery is backed up as well (e.g. the partition tables and boot records).
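For example, the non-file pieces can be captured with standard tools alongside an rsync run. A minimal sketch, assuming the disk is /dev/sda and that backupserver and the /backup paths are placeholders; also check that the rsync shipped with your CentOS 5.5 systems is new enough (3.0+) for the ACL/xattr flags:

# Save the partition table in a format sfdisk can replay later:
sfdisk -d /dev/sda > /backup/$(hostname)-sda.sfdisk

# Save the MBR (partition table plus boot loader, the first 512 bytes):
dd if=/dev/sda of=/backup/$(hostname)-sda.mbr bs=512 count=1

# File-level copy preserving permissions, ownership, hard links,
# ACLs and extended attributes; pseudo-filesystems are excluded:
rsync -aHAX --numeric-ids --exclude=/proc --exclude=/sys --exclude=/dev \
    / backup@backupserver:/backups/$(hostname)/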
The most important point: you have a very complex environment which you obviously don't fully understand, and you lack the necessary knowledge and experience. As things stand, you will never get a disaster-proof backup. Your best (I would even say only) option is to hire a consultant to create a viable backup solution for you. Or better yet, for an environment of 40+ servers, hire a competent sysadmin to manage the systems.