Can you set a backup IP for your server in DNS?
Solution 1:
Yes... sort of.
There are two things you can do here. If you put multiple A records in your DNS server for a given name, then they'll all be served to clients, and each client will pick one from the set to connect to, meaning that traffic will be "fairly" evenly distributed amongst all sites simultaneously. This isn't really what you seem to be describing, but it's a common setup (although I don't trust it, for a variety of reasons).
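That first option is nothing more than repeating the A record under the same name. A hypothetical zone fragment (the name and the 192.0.2.x TEST-NET addresses are placeholders) might look like:

```
www.example.com.  300  IN  A  192.0.2.10
www.example.com.  300  IN  A  192.0.2.20
```

Resolvers will hand back both addresses (typically rotating the order), and each client picks one.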
The other option is that you only put one A record in your DNS server, and the DNS server (or something ancillary to it, like a monitoring script) keeps an eye on your site's main address; if it fails, the DNS server's A record gets changed to your other site. This means that only one site will be getting traffic at a time.
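The monitoring-script half of that can be sketched in a few lines. This is a minimal sketch, not a production failover system: the addresses are invented TEST-NET placeholders, and a real script would push the chosen address into the zone with something like nsupdate rather than just computing it.

```python
import socket

PRIMARY = "192.0.2.10"   # hypothetical primary address (TEST-NET, illustration only)
BACKUP = "192.0.2.20"    # hypothetical standby address

def primary_is_up(host, port=80, timeout=2):
    """True if a TCP connection to the primary's web port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def address_to_publish():
    """The address the zone's A record should carry right now.
    A real script would apply this change via a dynamic DNS update."""
    return PRIMARY if primary_is_up(PRIMARY) else BACKUP
```

A cron job running this every minute or so, plus a dynamic-update mechanism, is the whole "ancillary monitoring script" in miniature.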
The downside to this second strategy is DNS caching. Anyone who got the old site address will be SOL until their DNS cache entries containing the old address get purged. This means that you have to keep your TTLs low (increasing the load on your DNS infrastructure, although that's rarely a practical problem), but there's still the problem of "rogue" DNS caches which don't honour TTLs. These are a massive pain for anyone who ever has to change DNS entries, but they're a million times worse for anyone who needs to change DNS entries "often" (hopefully your site isn't going down several times a day, but still...). Basically, anyone behind one of these misbehaving DNS caches will see your site as being "down" for an extended period of time, and just try explaining to them that it's their DNS cache that's at fault... Eugh.
In short, I wouldn't do it for a site, because there are better ways to mitigate whatever risk you're thinking of, but you'll need to describe that risk if you want suggestions on how to mitigate it.
Solution 2:
Everyone seems to think that you are talking about WWW servers, even though you explicitly wrote
like a backup name-server or mail server
The oft-overlooked truth is that HTTP service is the exception, not the norm, when it comes to this. In the normal case, yes, there is a mechanism for publishing information to clients via the DNS so that they properly fall back from primary servers to backup servers. That mechanism is SRV resource records, as used by service clients for many other protocols apart from HTTP. See RFC 2782.
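For concreteness, an SRV record carries a priority, a weight, a port, and a target host. A hypothetical zone fragment for an LDAP service (all names invented for illustration) might read:

```
_ldap._tcp.example.com.  IN SRV  0 60 389 dc1.example.com.
_ldap._tcp.example.com.  IN SRV  0 40 389 dc2.example.com.
_ldap._tcp.example.com.  IN SRV 10  0 389 backup.example.com.
```

Here dc1 and dc2 share the traffic roughly 60/40, and backup is only tried when both priority-0 servers are unreachable.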
With SRV resource records, clients are told a list of servers, with priorities and weights, and are required to try servers in order of priority, picking amongst servers with equal priorities according to weight, choosing higher-weighted servers more often than lower-weighted ones. So with SRV resource records, server administrators can tell clients what the fallback servers are, and how to distribute their load across a set of equal-priority servers.
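That client-side selection rule can be captured in a short sketch. This is an illustrative implementation of the RFC 2782 ordering idea, not a faithful copy of the RFC's exact zero-weight bookkeeping:

```python
import random

def order_srv(records):
    """Given (priority, weight, target) tuples, return the targets in the
    order an SRV-aware client should try them: ascending priority, and
    within one priority a weighted-random shuffle so higher-weighted
    targets tend to be tried first."""
    groups = {}
    for priority, weight, target in records:
        groups.setdefault(priority, []).append((weight, target))
    ordered = []
    for priority in sorted(groups):
        group = list(groups[priority])
        while group:
            weights = [w for w, _ in group]
            if sum(weights) == 0:           # all-zero weights: pick uniformly
                weights = [1] * len(group)
            i = random.choices(range(len(group)), weights=weights)[0]
            ordered.append(group.pop(i)[1])
    return ordered
```

Given the records `[(5, 60, "a"), (5, 40, "b"), (10, 0, "backup")]`, the first two targets are always some ordering of `a` and `b`, and `backup` always comes last: fallback and load distribution in one record type.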
Now, content DNS servers are located by a special resource record type of their own, NS, which has no priority or weight information. Equally, SMTP relay servers are located by their own special resource record type, MX, which has priority information but no weighting information. So for content DNS servers there's no provision for publishing fallback and load distribution information; and if one is using MX resource records, then for SMTP relay servers there's no provision for publishing load distribution information.
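The MX half of that contrast is visible in any mail zone. In a hypothetical fragment like this, the lower preference value wins, and there is nowhere to say how traffic should be shared between equally-preferred relays:

```
example.com.  IN MX 10 mail1.example.com.
example.com.  IN MX 20 mail-backup.example.com.
```

Senders try mail1 first and fall back to mail-backup, but that single number is all the record can express.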
However, SRV-capable MTSes now exist. (The first was exim, which has been SRV-capable since 2005.) And for other service protocols, unencumbered with the baggage of MX and NS resource records, SRV adoption is far more thorough and widespread. If you have a Microsoft Windows domain, for example, then a whole raft of services are located through SRV lookups in the DNS. That's been the case for more than a decade, at this point.
The problem is that everyone thinks of HTTP, when HTTP is, nowadays in 2011, by far the exception and not the rule here.
Solution 3:
If you're serving dynamic content and it's not practical to simply have two servers serving content simultaneously, then your other option is to have multiple A records in your DNS anyway and configure the backup server to throw ICMP port unreachable at clients that try to connect to it. If at any point the main server goes down, you simply remove the port 80 block on the backup and traffic will start coming in.
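With iptables, the port 80 block described above could be a single REJECT rule on the standby box (a sketch, assuming a default-accept INPUT chain; run as root):

```shell
# Standby refuses web connections until it's needed:
iptables -A INPUT -p tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable
# On failover, delete the same rule to start accepting traffic:
iptables -D INPUT -p tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable
```

Using REJECT rather than DROP matters here: clients get an immediate refusal and can retry the other A record instead of hanging on a timeout.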
The only other (budget) way you're going to be able to do it is to set up a separate machine (or two) to perform NAT on requests; that way, if a webserver dies, you can simply remove the NAT rule for it.
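The NAT approach amounts to one DNAT rule per backend on the front machine. A hypothetical example (addresses are TEST-NET placeholders; IP forwarding must be enabled, and return traffic needs a matching SNAT/masquerade):

```shell
# Forward incoming web traffic to the live backend:
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.0.2.10:80
# If that backend dies, delete the rule and point at the survivor:
iptables -t nat -D PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.0.2.10:80
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.0.2.20:80
```

The trade-off is that the NAT box itself becomes the single point of failure, which is why two of them (with some shared-address scheme) is mentioned.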