What is the most effective way to set up a Linux web server for manual failover?

Solution 1:

For file-based services (a web server, etc.), rsync can effectively keep the second server up to date - users, configs, and so on. Databases are where things get more complex. I've used MySQL with a slave server for this and it was very effective; I've also used PostgreSQL in a few HA/standby configurations, but it was considerably clumsier.

This, combined with a bit of IP theft (a quick script that assigns the failed machine's IP to the backup machine's interface), makes for a relatively straightforward setup that still allows quick recovery.
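Such a takeover script can be very short. The sketch below only prints the commands by default (set RUN to empty and run as root on the standby to execute them for real); the interface name and service IP are assumptions for your own network:

```shell
#!/bin/sh
# Hypothetical IP-takeover sketch: claim the failed primary's service address.
# IFACE and VIP are placeholders; adjust for your network.
IFACE=eth0
VIP=192.0.2.10/24

RUN=${RUN:-echo}   # default: print commands only; RUN= executes for real

# 1. Add the service IP to the standby's interface.
$RUN ip addr add "$VIP" dev "$IFACE"

# 2. Send gratuitous ARP so switches and neighbours learn the new MAC
#    quickly instead of waiting for their ARP caches to expire.
$RUN arping -U -c 3 -I "$IFACE" "${VIP%/*}"
```

The gratuitous-ARP step matters: without it, clients and the upstream router may keep sending traffic to the dead machine's MAC address for minutes.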

Just one thing to consider - beware of failback. Moving services to a backup machine is one thing; moving them back once you've corrected the failure can get hairy. Pay close attention to the databases, since any writes that landed on the backup have to make it back to the repaired primary.

Linux-HA is a (somewhat heavyweight) approach to this if you decide to make it a bit more automated:

http://www.linux-ha.org/

Solution 2:

You can use rsync or DRBD to keep your backup server in sync, or mount your data via NFS from a third server. If you want to keep it simple, back up to the same directory locations on the backup server as on the source server - that way configs and scripts work unchanged after failover.
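For the DRBD route, the resource definition is a small config file. A hedged sketch of what one might look like - the device, backing disk, hostnames, and addresses are all placeholders for your own machines:

```
# Hypothetical /etc/drbd.d/www.res sketch; every value here is a placeholder.
resource www {
  device    /dev/drbd0;     # block device the filesystem sits on
  disk      /dev/sdb1;      # local backing partition on each node
  meta-disk internal;

  on primary.example.com {
    address 192.0.2.1:7788;
  }
  on standby.example.com {
    address 192.0.2.2:7788;
  }
}
```

Unlike rsync, DRBD replicates at the block level continuously, so the standby's copy is current to the last write rather than to the last cron run.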

heartbeat2 is a good solution for managing the IP addresses and provides tools for both automatic and manual failover - it also takes care of ARP cache flushing and other details I wouldn't have thought of.
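For a sense of how little configuration the simple case needs, here is a v1-style resource definition (heartbeat2 can also use a richer XML-based CRM configuration); the node name, virtual IP, and service name are illustrative placeholders:

```
# /etc/ha.d/haresources (v1-style) - placeholders throughout.
# Format: preferred-node  virtual-IP  service(s)
primary.example.com 192.0.2.10 apache2
```

Heartbeat then handles claiming the IP, starting the service, and the ARP announcements on whichever node is active.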