Solution 1:

Try to use rsync version 3 if you have to sync many files! Version 3 builds its file list incrementally, so it starts transferring sooner and uses much less memory than version 2.

Depending on your platform this can make quite a difference. On OS X, version 2.6.3 would take more than an hour, or crash outright, trying to build an index of 5 million files, while the version 3.0.2 I compiled started copying right away.
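You can check which version you have before a big run; rsync prints its version and protocol number with --version:

    rsync --version | head -n 1
    # e.g. "rsync  version 3.0.2  protocol version 30"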

Solution 2:

Use --link-dest to create space-efficient, snapshot-based backups: you appear to have multiple complete copies of the backed-up data (one per backup run), but files that don't change between runs are hard-linked to the previous snapshot instead of being copied again, which saves a lot of space. A sketch follows below.
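A minimal sketch of such a run; the paths, the date-based naming and the helper variables are my own illustration, not a prescribed layout:

    #!/bin/sh
    # Source tree and backup root are illustrative; adjust to your setup.
    SRC=/home/user/
    DEST=/backups
    NEW="$DEST/$(date +%Y-%m-%d)"

    # Most recent snapshot, if any (date-named dirs sort lexically).
    PREV=$(ls -1d "$DEST"/*/ 2>/dev/null | tail -n 1)

    # Files unchanged since $PREV are hard-linked; only changed files are copied.
    rsync -a --delete ${PREV:+--link-dest="$PREV"} "$SRC" "$NEW"

On the first run $PREV is empty, so the expansion drops --link-dest entirely and you get a plain full copy.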

(Actually, I still use the rsync-followed-by-cp -al method, which achieves the same thing; see http://www.mikerubel.org/computers/rsync_snapshots/ for an oldish but still very good rundown of both techniques and related issues.)
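For comparison, the cp -al variant looks roughly like this (GNU cp; it assumes a "current" tree already exists from an earlier run, and the names are again just illustrative):

    # Hard-link the whole current tree into a new snapshot directory...
    cp -al /backups/current /backups/snapshot-$(date +%Y-%m-%d)
    # ...then update "current" in place; by default rsync writes changed files
    # to a temp name and renames them, breaking the hard link, so the snapshot
    # keeps the old versions.
    rsync -a --delete /home/user/ /backups/current/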

The one major disadvantage of this technique is that if a file is corrupted by a disk error, it is just as corrupt in every snapshot that hard-links to it; I have offline backups too, which protect against this to a decent extent. The other thing to look out for is that your filesystem has enough inodes, or you'll run out of them before you actually run out of disk space (though I've never had a problem with the ext2/3 defaults).
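On Linux, df -i shows inode usage, so you can keep an eye on it:

    df -i /backups   # the IUse% column shows how close you are to the limit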

Also, never forget the very useful --dry-run option for a little healthy paranoia, especially when you are using any of the --delete* options.
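For example, to preview exactly what a destructive run would transfer or delete without touching anything (destination path is illustrative):

    rsync -avn --delete /home/user/ /backups/current/   # -n is short for --dry-run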