How to host a single website on multiple geographically-diverse servers

I currently have two servers, both running cPanel/WHM. The first is a VPS hosted in London (we'll call it "international") and the second is a dedicated server located in my country (we'll call it "local").

"local" will have unlimited local bandwidth, however it will only have 1Mbps of international bandwidth.

I need to host a single website (or maybe multiple websites) on both servers and serve each visitor based on their country of origin: when the visitor is from my own country the data should be served from "local", and when the visitor is from any other country it should be served from "international".

Both types of visitor can perform read/write operations, so I need to keep files and databases in sync between the two servers, as both will be receiving updates to files and the database.

So, how can this be done, in terms of DNS and synchronisation? What's feasible, and what's the easiest approach? Can anyone guide me through the steps I need to perform?


The first, simplest, most straightforward, and above all most robust solution is to give up on your plans for two servers, and just run one machine out of a suitably central location. While I understand the rationale for not hosting everything off your local server, given its constrained international bandwidth, I don't see anything in your question that requires a local server presence.

If you want a local server purely for performance reasons, I'd seriously recommend looking at a local static asset server, with all of the dynamic stuff going to London. While geoDNS isn't trivial, it's an awful lot easier than robust real-time synchronisation of your dynamic assets and database. This mechanism is used by many sites (this one included) to improve overall perceived page speed, and it works rather well.
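As a sketch of how that plays out (hostnames and paths here are invented): keep the canonical copy of the static assets in London, have your pages reference them via a hostname like static.example.com that your geoDNS points at the local box for local visitors, and push the files one way on a cron. One-way mirroring is trivially safe, unlike the two-way synchronisation discussed below.

    # On the London box: push the static asset tree to the local mirror.
    # One-way, so --delete is safe here -- the local copy is a pure
    # replica and never the source of anything.
    rsync -az --delete /var/www/example.com/static/ \
        local.example.com:/var/www/example.com/static/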

Assuming that isn't the case here, and you really do need two servers, I see a massive flaw in your plan -- the 1 Mbps of international bandwidth is going to be substantially eaten by your synchronisation traffic alone. You'd better hope your site doesn't get too popular, or you'll be in a whole world of pain.

You're in a fairly favourable position vis-à-vis DNS, because you've got a clearly defined subset of addresses that you want to serve particular records to. Presumably you can get a list of netblocks from your provider delineating what counts as "local, bandwidth unlimited" traffic and what counts as "international, 1 Mbps capped" traffic. If your provider can't do that, I'd be asking them how the hell they're actually doing the rate limiting, because there's got to be a list in there somewhere. Worst case, if they're just doing it based on "anything we see announced over this BGP link is local", you should still be able to get a list of the prefixes on that link.
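If you do end up scraping the prefixes yourself, a routing registry query is one way to get a starting point. This sketch assumes your provider's "local" traffic corresponds to routes originated by a particular AS -- AS64500 below is a made-up example, and the registry's coverage of your region may be patchy, so treat the output as a first draft:

    # Ask the RADB routing registry for all route objects originated by
    # the (hypothetical) local AS, and keep just the prefixes:
    whois -h whois.radb.net -- '-i origin AS64500' \
        | awk '/^route:/ { print $2 }' \
        | sort -u > local-prefixes.txt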

So, the DNS stuff comes down to "for A record requests to www.example.com, serve localip if the source address is in the list of local prefixes, and internationalip otherwise". How you script that for your given DNS servers is up to you; I'd go with tinydns, because I use it for everything I can and it's pretty awesome at this particular task.
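For the record, here's roughly what that looks like in a tinydns data file, using its client-location feature. Every address below is a documentation IP, so substitute your own; note also that tinydns matches locations on whole dotted-decimal octets, so netblocks that aren't /8, /16, or /24 aligned have to be expanded out.

    # Append a client-location setup to the tinydns data file, then
    # recompile it (standard daemontools layout assumed):
    cat >> /service/tinydns/root/data <<'EOF'
    # Clients whose addresses start with these octets are location "lo":
    %lo:198.51.100
    %lo:203.0.113
    # Empty prefix: everybody else is location "in":
    %in:
    # Local clients get the local server's A record...
    +www.example.com:198.51.100.80:300::lo
    # ...and the rest of the world gets the London VPS:
    +www.example.com:192.0.2.80:300::in
    EOF
    cd /service/tinydns/root && make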

But that's about 1% of the total problem. You've got a much, much bigger issue on the dynamic assets side of town.

The database is actually the (relatively) easy bit. Both MySQL (natively, as master-master circular replication) and PostgreSQL (via add-on tools such as Bucardo) support multi-master replication, whereby writes to either database get replicated to the other (more or less) automatically. It's not exactly trivial to set up, and you need to monitor the bejesus out of it to detect when it breaks and fix it, but it is possible in a fairly standardised way.
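To give you a taste of the MySQL variant (server IDs, hostnames, and credentials below are all placeholders): each box binlogs its own writes and replicates from the other, and the auto-increment settings are staggered so both sides can insert rows without colliding on generated keys.

    # --- on "local"; use server-id = 2 and offset = 2 on "international" ---
    cat >> /etc/my.cnf <<'EOF'
    [mysqld]
    server-id                = 1
    log-bin                  = mysql-bin
    # Stagger generated keys: this box makes odd IDs, the other even.
    auto_increment_increment = 2
    auto_increment_offset    = 1
    EOF
    service mysql restart

    # Point this box at the other one's binlog and start replicating
    # (the coordinates come from SHOW MASTER STATUS on the other side):
    mysql -e "CHANGE MASTER TO
                MASTER_HOST='international.example.com',
                MASTER_USER='repl', MASTER_PASSWORD='secret',
                MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107;
              START SLAVE;"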

Your files, on the other hand, require a lot more local intelligence. You'll need to design your file storage from the outset with replication in mind, and it gets even more entertaining because you say you need to support deletion.

Really, periodic rsync is your best friend for this. Ignoring the modification and deletion aspects of things for a second: if you make sure that your filenames can't collide between the two sides (using UUIDs or database PKs as the basis for all your filenames works nicely), you can just do periodic rsyncs of each side to the other, and all new files created on either side will magically appear on the other. How often you run the rsync depends on how long you can tolerate the two sides being out of sync -- that's a call you have to make. Your application also needs to handle intelligently the cases where (for example) the DB records have synchronised but the files haven't.
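A minimal version of that cron job might look like the following on each server (hostname and path are invented, and passwordless SSH keys between the boxes are assumed). Note the deliberate absence of --delete; --ignore-existing is safe here because UUID-based filenames never collide and files are never rewritten in place.

    # Crontab entry on "local" (a mirror-image entry goes on "international"):
    # pull the other side's new uploads every five minutes.
    */5 * * * * rsync -az --ignore-existing international.example.com:/var/www/uploads/ /var/www/uploads/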

Deletion makes things a lot harder, because you can't just run a blind rsync -a --delete: anything the sender doesn't have will be deleted from the receiver -- a great way to lose lots of data. I'd prefer to keep a deletion log, and run through it every now and then, deleting things from the other side. If that doesn't appeal, you can get fancier with two separate filesystems at each end (one for "local data", the other for "replica of the other end"), and either access both of them from your application, or use a union filesystem layer to make them look like one filesystem to the webserver.
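Here's a sketch of the deletion-log idea, assuming your application appends one bare UUID filename per line to deleted.log whenever it removes a file (every name below is made up). Since rm -f is idempotent, replaying the whole log repeatedly is harmless.

    #!/bin/sh
    # Ship our deletion log to the other side, then replay it there.
    rsync -az /var/www/deleted.log international.example.com:/var/www/deleted.remote
    ssh international.example.com '
        while IFS= read -r f; do
            # Filenames are bare UUIDs, so no path-traversal worries here.
            rm -f -- "/var/www/uploads/$f"
        done < /var/www/deleted.remote
    '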

Modification is just a complete nightmare -- your risk is simultaneous modification on both servers, at which point you're just screwed. In the sort of "eventual consistency" model you're working with here (which, for a geographically-distributed, high-latency replication system like yours, is the only option), you simply cannot handle this at the infrastructure level -- you have to make some sort of compromise in your application to decide how to deal with these conflicts. You can help the situation by treating your filesystem as an append-only store (if you want to modify a file, you write a new version and update your database to point to the new record), but since your database, too, is only eventually consistent, you can't solve the problem completely. At least if your database is the single source of truth, though, you'll have guaranteed consistency, if not guaranteed correctness, and that's half the battle.
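Concretely, the append-only trick looks something like this (the table, column, and paths are all hypothetical): a "modification" writes a brand-new file under a fresh UUID and then flips the database pointer, so a replicated row always names a file that, once synced, is complete and will never change underneath you.

    # Store the edited content under a brand-new name...
    NEW=$(uuidgen)
    cp /tmp/edited-copy "/var/www/uploads/$NEW"
    # ...then repoint the record. The old file can be garbage-collected
    # later (or kept, if you want history); nothing is rewritten in place.
    mysql example_db -e "UPDATE documents SET file_uuid='$NEW' WHERE id=42;"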

And I think that just about covers everything. To reiterate, though, life is a lot simpler if you don't have to go with geographically-distributed servers. If you're implementing this because it "sounds cool", step away from the keyboard. If you want to do cool stuff, do it on your own time, or as a science experiment. You're paid to do what's most effective for your employer, not what gives you a geek priapism.