How to provide proper backups for multiple Linux based servers? [closed]
There have been several questions about backups already, but most of them were either too specific, aimed at home use, or Windows-based.
What I'd like to hear:
- How do you ensure that all of your Linux servers are properly backed up?
- How often do you back up your servers?
- What do you back up, apart from the obvious /home directories?
- How do you ensure that backups are incremental, but still easy to restore?
The last question in particular has been troubling me. Compressing backups into tarballs works well enough, but makes restores painfully slow when I suddenly need one. On the other hand, leaving the backups uncompressed and syncing them to another server carries a real risk of ending up with inconsistent file ownership and permissions.
What tools help you to make this as easy as possible and what is your preferred approach to the matter?
Solution 1:
How often do you back up your servers?
The standard backup runs nightly; a few backups run more frequently.
How do you ensure that all of your Linux servers are properly backed up?
The backup job sends a report with a meaningful subject line on success or failure. I check that I have received the status messages at least a couple of times a week.
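For illustration, a minimal cron-style wrapper along these lines can produce such a report; the backup command, recipient address, and subject format are placeholder assumptions rather than the exact setup described here:

    #!/bin/sh
    # Hypothetical nightly wrapper: run the backup and mail a report whose
    # subject makes success or failure obvious at a glance.
    LOG=$(mktemp)
    if /usr/local/sbin/run-backup >"$LOG" 2>&1; then
        SUBJECT="backup OK: $(hostname) $(date +%F)"
    else
        SUBJECT="backup FAILED: $(hostname) $(date +%F)"
    fi
    mail -s "$SUBJECT" admin@example.com <"$LOG"
    rm -f "$LOG"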
What do you back up, apart from the obvious /home directories?
Depends on the server, but I almost always want /etc, /root, and /var/lib/dpkg. I then add any data directories.
How do you ensure that backups are incremental, but still easy to restore?
I use Dirvish,
Dirvish is a fast, disk based, rotating network backup system.
With dirvish you can maintain a set of complete images of your filesystems with unattended creation and expiration. A dirvish backup vault is like a time machine for your data.
Dirvish uses rsync with the --link-dest option to hard-link files that are identical between successive backups. Backups are fast, they don't waste space, and restoring is as simple as copying files.
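To illustrate the underlying technique (not dirvish's own configuration), a hand-rolled equivalent might look roughly like this; the host, paths, and retention handling are placeholders:

    #!/bin/sh
    # Each run produces what looks like a full copy, but files unchanged since
    # the previous image are hard links into it, so they take no extra space.
    TODAY=$(date +%Y-%m-%d)
    rsync -a --delete \
          --link-dest=/backups/web01/latest \
          root@web01:/etc /backups/web01/$TODAY/
    # Point "latest" at the newest image so the next run links against it.
    ln -sfn /backups/web01/$TODAY /backups/web01/latest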
Solution 2:
* How do you ensure that all of your Linux servers are properly backed up?
I run duplicity on them, backing up to Amazon S3.
* How often do you back up your servers?
I do a full backup quarterly and nightly incrementals. This is mostly to cut down on storage costs at S3.
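A rough sketch of what such a nightly run can look like; the bucket name, the GPG key id, and the exact S3 URL scheme (which varies between duplicity versions) are assumptions, and AWS credentials are expected in the environment:

    #!/bin/sh
    # Nightly duplicity run: --full-if-older-than starts a new full chain
    # roughly quarterly, every run in between is an incremental.
    # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY must be exported beforehand.
    duplicity --full-if-older-than 3M \
              --encrypt-key ABCD1234 \
              --include /etc --include /home --include /var \
              --exclude '**' \
              / s3://my-backup-bucket/$(hostname)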
* What do you back up, apart from the obvious /home directories?
/home, /etc, and /var. I run a Debian box, so I also do a nightly 'dpkg --get-selections >/etc/debian.pkgs' so I can track what's installed.
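For completeness, the counterpart of that selections dump on a rebuilt machine is the usual set-selections/dselect-upgrade pair, sketched here (depending on the Debian release you may need to refresh the available-packages list first):

    # Save the package selection nightly, as described above:
    dpkg --get-selections > /etc/debian.pkgs

    # On a rebuilt box, replay it and let apt install everything marked "install":
    dpkg --set-selections < /etc/debian.pkgs
    apt-get dselect-upgrade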
* How do you ensure that backups are incremental, but still easy to restore?
duplicity does that well. I test it occasionally by doing a restore from some point in history.
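Such a spot check can be as simple as pulling one file from a few days back into a scratch directory; the file path and the S3 URL below are placeholders matching the hypothetical setup sketched above:

    # Restore /etc/ssh/sshd_config as it was three days ago, without touching
    # the live system.
    mkdir -p /tmp/restore-test
    duplicity restore --time 3D \
              --file-to-restore etc/ssh/sshd_config \
              s3://my-backup-bucket/$(hostname) /tmp/restore-test/sshd_config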
Solution 3:
- They get backed up completely automatically, and all backups are monitored to make sure they completed successfully (as reported by the backup software). This monitoring is crucial, as it "closes the loop" and confirms the backups really are running.
- Usually daily, although some systems (database servers) do occasionally get more interesting strategies.
- Package lists, /etc, /var, and /home. Everything else can be recreated from there.
- I use dirvish at home, which just creates hard-linked trees of files with rsync. It does a great job, but it requires rsync to run as root (to ensure permissions are preserved) and is hard on the filesystem from an inode-consumption perspective. At work we use rdiff-backup; it has its own set of idiosyncrasies, but it stores permissions in metadata files, so you don't need root permissions to write the backup (although you do need to run an explicit restore to get the permissions back). A sketch of both directions follows below.
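A minimal rdiff-backup sketch for reference; the host names, paths, and the two-week restore point are placeholder assumptions:

    # Back up /etc into a remote repository. Ownership and permissions are
    # recorded in rdiff-backup's metadata files, so the repository itself
    # does not have to be written as root.
    rdiff-backup /etc backupuser@backuphost::/srv/backups/web01/etc

    # Explicit restore of the state from two weeks ago; this step reapplies
    # the stored ownership and permissions.
    rdiff-backup --restore-as-of 2W \
        backupuser@backuphost::/srv/backups/web01/etc /tmp/etc-restore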