Load balancer or proxy to route traffic to different servers based on their URL

Solution 1:

I guess you are referring to this Shopify DNS setup:

Point the A record to the Shopify IP address 23.227.38.65. Point the CNAME record with the name www to shops.myshopify.com

root domain (@)

Well, as you are aware, you cannot have a CNAME on the root (@) domain, so the root domain needs to be handled with A and AAAA records pointing at fixed IP addresses.
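
Written as zone-file records, the Shopify setup quoted above is just:

    @    A      23.227.38.65
    www  CNAME  shops.myshopify.com.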

The way big companies scale this globally is with "anycast", where the same IP address is announced over BGP from multiple datacenters.

You can think of anycast as load balancing handled by routers, where multiple servers in the same or different data centers can receive and handle traffic for a single IP address.

If you aren't already an AS with your own IP space, then anycast is definitely out of scope for you.

The simple way to start here is to not run any hosting on the root domain, but just do redirects to www. Then a single nginx redirector box (or several behind a layer 4/TCP load balancer) can handle a huge number of domain redirects.

If you need a lot of boxes due to a huge number of requests, use TCP/layer 4 load balancing to your redirector servers so you can do application (HTTP) and SSL termination on the boxes behind the load balancer for more scalability (a single load balancer can handle more traffic).

Use permanent redirects (301), which clients can cache indefinitely, reducing repeat traffic from the same clients.

Best practices here: use letsencrypt/certbot to auto-generate/renew certificates for the redirector domains once the DNS is set up. Always redirect to https on the same domain (e.g. http://example.com --> https://example.com) before redirecting to another domain (https://www.example.com).
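
A minimal sketch of such a redirector in nginx, assuming certbot has already written a certificate for example.com to its default paths (these server blocks live in the usual http context, e.g. a file in conf.d/):

    # port 80: first hop, redirect to https on the same domain
    server {
        listen 80;
        server_name example.com;
        return 301 https://example.com$request_uri;
    }

    # port 443: second hop, now redirect to the www domain
    server {
        listen 443 ssl;
        server_name example.com;

        # default certbot paths (adjust to your setup)
        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

        return 301 https://www.example.com$request_uri;
    }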

www

Looking at Shopify's shops.myshopify.com (where the www CNAME should point), you can see it has a single A record, which is currently 23.227.38.74.

Using a global ping tool you can see that this IP has a few milliseconds of latency from many places around the world. This means it certainly isn't a single server in a single location (transatlantic latency usually runs 60 ms in the best case... so when you see a ping of 4 ms from both the US and the EU for the same IP, you know those pings aren't going to the same server). You can also verify this by running a traceroute to the same IP from servers in different geos.

At each of their endpoints responding to that IP, they probably have a load balancer routing the requests to different hardware.

So behind the Shopify CNAME, it is single-IP anycast. The advantage of giving your clients a CNAME is that you have the freedom to change the IP(s) behind that name without your clients needing to update DNS on their end. On that note: when you give your clients an A record for their root (@) domain redirector, you want to make sure it is an IP address you can control and re-assign to different hardware if you have problems with a server/load balancer (e.g. an AWS Elastic IP type thing, or your own IP space if you are an AS).

As far as I know, there isn't any "hint" carried in the DNS request (or preserved through resolver caching) when a resolver follows the CNAME chain, so the final DNS server never learns which original domain you actually requested. If there were, you could imagine a DNS server having conditional rules to return different IP addresses for the same name depending on which name in the chain the client started from.

So if you aren't going to do it the Shopify way (BGP/anycast), the most straightforward thing would be to give your customers unique CNAMEs. This way you can do the load balancing at the DNS level (returning different IPs for each unique customer CNAME).

You could follow some convention like customerdomain.tld.example.com and have the DNS automatically provisioned based on your database of customer assets.

And for the root domain (@) you could still use a single redirector IP (one or more boxes behind a single IP/load balancer) managing the redirection for all domains to www.customerdomain.tld, which CNAMEs to customerdomain.tld.example.com.
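
For one hypothetical customer (names and IPs made up), the records look roughly like this: the first two live in the customer's zone, the last one in yours:

    ; customer's zone (customerdomain.tld)
    @     A      203.0.113.10                     ; your redirector IP
    www   CNAME  customerdomain.tld.example.com.

    ; your zone (example.com)
    customerdomain.tld   A   198.51.100.20        ; server/LB currently hosting this customer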

Update... maybe I missed the point of your question.

This would help massively when I need to migrate a lot of websites onto a new infrastructure

As I mentioned above, at least for the root/@ case, you NEED to control that IP and be able to assign it to other infrastructure...otherwise all your customers will have to update their DNS when that IP changes due to a migration.

For the www/CNAME case, this is not an issue because you can just update the IP behind the CNAME on your own DNS.

So I'll focus on options for the root domain (@) case only, since that is the most problematic one (it requires customer action to update DNS when the IP address of their service changes). Options...

option 0 - don't support the root/@ domain for customers

Whatever you are hosting, host it on a sub-domain (www or other). If customers want a redirect they can manage that on their end with their IT people.

This completely removes the problem of customer DNS pointing at a fixed IP address. You can update the IP addresses behind the CNAME(s) on your end, and any infrastructure move or IP change becomes simple.

option 1 - assignable IP addresses

You can use things like assignable IP addresses (the AWS Elastic IP type thing; most serious VPS providers offer something similar).

This allows you to deploy a new server (at that provider) and then switch the IP over to the new server.

The problem is that you have vendor/provider lock-in because the IP addresses belong to the vendor. So if you wanted to move from AWS to Google Cloud or your own hardware, you couldn't take those IPs with you... which means a DNS update for your customers. Also, the IPs may be region-locked, so you can't easily assign the IP to another server at the same provider in a different data center.

option 2 - become an AS and get your own IP space

If you are doing serious hosting, it is only a matter of time before you need to become an AS (Autonomous System) via ARIN or RIPE (or a different regional registry if your company is outside North America and Europe).

You then need to acquire (or lease) your own block of IP addresses. You can usually get IPv6 space for free. IPv4 has run out, but RIPE at least lets you get on a waiting list for a /24 (256 addresses) as blocks are recovered over time. Otherwise you have to buy the address space from someone (there are marketplaces you can join).

And of course then you need to work with providers that allow you to bring your own IP addresses.

Here are a couple of practical links that walk through an anycast setup. But for starters, ignore the anycast bits and focus on getting set up as an AS, getting IP space, and finding the right kind of infra partners. (Because running BGP/anycast is not trivial.)

  • https://labs.ripe.net/author/samir_jafferali/build-your-own-anycast-network-in-nine-steps/
  • https://ripe69.ripe.net/wp-content/uploads/presentations/36-Anycast-on-a-shoe-string-RIPE69.pdf

Downsides:

  • time investment to set things up and learn (e.g. BGP if your upstream provider isn't handling that for you).
  • financial investment (membership/IP fees for RIPE/ARIN and potentially large costs to acquire/lease IPv4 blocks).
  • limited to working with VPS providers that allow you to bring your own IPs
    • Or you have to lease rack space and deal with peering/routing/switching/BGP/etc, hardware failures, SNMP hardware monitoring, etc.
  • new distractions like needing to ticket and address abuse complaints related to your IP space

Definitely makes sense at a certain scale or if you already have the skills and tools in place to manage it.

option 3 - nonstandard DNS

Some managed DNS providers have added CNAME-like support for the naked/root domain (often marketed as ALIAS, ANAME, or CNAME flattening).

If you use one of these providers (or implement this yourself if you run your own DNS), then that can address the problem.

See this answer

If you rely on this, then you are vendor-locked to DNS providers that support this non-standard feature, or you need to run your own DNS and implement it yourself.
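
The exact syntax is provider-specific, but as a rough sketch (hypothetical zone, and assuming your provider calls the record type ALIAS), the apex ends up with something like this instead of a fixed A record:

    ; provider-specific pseudo-record, not standard DNS
    @   ALIAS   customerdomain.tld.example.com.
    ; the provider resolves the target itself and answers A/AAAA queries at the apex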

option 4 - CDN

Depending on your app, you can put another service in front of it, i.e. a CDN-like service (StackPath, Cloudflare, AWS CloudFront, etc.). They will deal with DNS/anycast and related topics, and you can have your customers point at the CDN service and run your services behind the CDN.

Changing back-end services then becomes a configuration change at the CDN (or similar), telling it the names/IPs of the origin endpoints the content should be requested from.

Downsides:

  • additional cost if you don't need it.
  • need to make sure cached vs. non-cached (e.g. app) endpoints are configured on the CDN to match how your application(s) work.
  • additional layer that needs to be debugged if your app isn't working (did the CDN break the request or did your app?).
  • usually this means your customer's CNAME record will point at the CDN's domain... not yours. Your domain is only in the CDN app's config as an upstream/origin server. So you have vendor lock-in: if you want to switch CDNs, all your customers will need to update their DNS CNAMEs to point at the new one. You could mitigate this with 2 layers of CNAMEs (customer -> you -> CDN, sketched below), but that isn't great from a performance perspective.
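
For reference, that two-layer CNAME mitigation would look roughly like this (hypothetical names; the CDN target is made up):

    ; customer's zone (customerdomain.tld)
    www   CNAME   customerdomain.tld.example.com.

    ; your zone (example.com): the only record that changes if you ever switch CDNs
    customerdomain.tld   CNAME   something.cdn-provider.example.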

what I'd do

Without more details about your customer-base size, your skills (e.g. BGP), and whether you are running your own hardware or renting cheap VPSes...

I like simple; you can always make it more complex later. The question I ask is: what is the simplest thing that keeps my costs down, doesn't take a lot of time, gives my customers a good user experience, and ultimately lets me get back to revenue-generating activities (rather than spending time on backend tech that has a time/financial cost in the hope of reducing overall costs at scale)? I'm not Google, so I'd rather grow my top line than micro-optimize my bottom line... especially if there isn't a technical need (yet).

I'd do the following...

  • no support for customers' naked/root domain. Customers wanting a redirect can have their IT people set it up on their end. One less major headache.
    • Or, if you want to support this, set up a single redirector IP that you know you won't lose (e.g. an AWS Elastic IP) and have customers set up A and AAAA records pointing at it. The rest of your services don't have to be hosted at the same place (i.e. the redirector can be on AWS with an ELB if you need to scale redirection, and the customer boxes can be on cheap VPSes).
  • each customer gets a predictable (unique to them) CNAME based on their hosted domain or customer ID (CUST1234.example.com makes more sense if you give customers the ability to easily change the domain they host under).
  • My DNS is auto-updated based on my customer database (customer domains -> customer-specific CNAME -> IP address hosting customer's app).
  • I can easily monitor that customers' DNS and my DNS are all pointing to the right place.
  • I can monitor DNS traffic/abuse on a per-customer basis (because each customer has a unique name), separately from their endpoint traffic.

Customers set their DNS once and never need to change it unless they want to change their hosted domain.

Infra migrations are relatively easy if you have good backup/restore/replication mechanisms in place that work in tandem with some form of service discovery on your server/vps/app provisioning layer.

  • Set a low TTL on your DNS records (i.e. the name the customer's CNAME points at, e.g. CUST1234.example.com A 10.0.0.1) some time before the migration; see the sketch after this list.
  • Spin up the new infra, including replication of data from the old infra (databases, user-uploaded content, etc.).
  • Switch your customer DNS records (A, AAAA) to point at the new IPs.
  • Take down the old infra after the DNS TTL (plus some margin) has elapsed.
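
A sketch with made-up values for the record behind one customer's CNAME, before and after the cut-over:

    ; well before the migration: drop the TTL (e.g. from 3600s to 60s)
    CUST1234.example.com.   60    IN  A   10.0.0.1    ; old server

    ; cut-over: same name, new IP; cached answers expire within ~60s
    CUST1234.example.com.   60    IN  A   10.0.0.2    ; new server (hypothetical IP)

    ; once the old infra is gone, raise the TTL back up
    CUST1234.example.com.   3600  IN  A   10.0.0.2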

If your data backend can't handle writes from 2 live customer endpoints at the same time, then you may need to have an outage for the migration...because there will be some overlap as the old DNS cache expires.

The advantage of this type of setup is that I can run on almost any reputable VPS provider I want (my auto-provisioning isn't picky). I don't need to make the investment to become an AS and deal with the additional overhead of managing my own IP space (it definitely makes sense to do this at a certain scale... but I don't know your company's situation).

I can do things like DNS-based geo load balancing (returning a different IP for the same name depending on the region of the requesting resolver). This allows you to provide a customer with multiple servers under the same name in different regions (so they don't have to deal with trans-USA or trans-Atlantic latency when loading an app). You can offer this on a per-customer basis as a value-add/upsell.

Note on load balancing...

I mentioned TCP/layer 4 load balancing several times without elaborating. Generally there are two common types of load balancing: layer 4/transport/TCP and layer 7/application/HTTP(S).

A layer 7/application/HTTP load balancer "speaks" HTTP and terminates the SSL connection before proxying the request (usually as unencrypted HTTP) to one of multiple servers behind the balancer/proxy. This is simple, but it can lead to issues where the servers behind the load balancer don't know they are supposed to pretend they are speaking https when writing headers, secure cookies, redirects, etc. It also means your load balancer has to do more work per request (parsing HTTP and dealing with SSL overhead). This extra work limits the scalability of the load balancer, which is usually a single node/machine/VPS.

A layer 4/TCP load balancer doesn't need to parse the HTTP request or carry the overhead of SSL termination; it doesn't know anything about HTTP. The request is routed to one of multiple servers, which handle the SSL termination and the HTTP request themselves.

If you are concerned about TLS session reuse (or the lack of it) affecting performance, it is common to use redis or memcached as a shared TLS session cache between multiple webservers, so you don't have to worry about the load balancer keeping users "sticky" to a specific box behind it. nginx doesn't appear to support an off-box TLS session cache (its cache is only shared between nginx workers on the same box). haproxy seems to have something, but I haven't tried it and don't know how it would work with haproxy doing layer 4 in front of an nginx pool where the SSL termination happens.

You can use nginx or haproxy as a layer 4/TCP load balancer; both are fairly straightforward to set up. For more advanced use cases (and probably better performance) you can also look at Linux Virtual Server (LVS).
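
A minimal sketch of nginx doing layer 4/TCP balancing in front of two backends that terminate TLS themselves (IPs made up, and it assumes your nginx build includes the stream module):

    # load balancer box: minimal nginx.conf using the stream module
    # (no HTTP parsing and no TLS at this layer)
    events { }

    stream {
        upstream https_backends {
            server 10.0.0.11:443;   # backend 1 (terminates TLS itself)
            server 10.0.0.12:443;   # backend 2
        }

        server {
            listen 443;
            proxy_pass https_backends;
        }
    }

    # each backend box, in its own config (inside the usual http context, e.g. conf.d/):
    # a normal TLS-terminating server, with the session cache that is only shared
    # between workers on that one box
    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        ssl_session_cache   shared:SSL:10m;
        root /var/www/example.com;
    }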

Another way of distributing load is to have multiple A or AAAA records returned for a single name. Ideally the DNS client chooses randomly from the addresses returned, so you get some distribution of load across multiple IP addresses. If you start hitting scaling issues with your load balancer tier, this is a low-tech way to add some more scale (just add another load balancer in front of the same or a different pool of application servers). However, nothing forces clients to round-robin your IP addresses... so this doesn't give you much control over which IPs get the load... but it is better than nothing.
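
In zone-file terms it is just the same name with several A records (made-up IPs); resolvers hand the set back in varying order:

    CUST1234.example.com.   300   IN   A   198.51.100.20   ; load balancer 1
    CUST1234.example.com.   300   IN   A   198.51.100.21   ; load balancer 2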

Solution 2:

The answer that @mattpr gave is really detailed and very thoughtful. It has a lot of useful, accurate information, and I don't want to take away from it.

My approach would be much simpler. I would set up a proxy host in [insert cloud vendor of choice] and use something like Nginx Proxy Manager (https://nginxproxymanager.com/). It can proxy and redirect to other URLs, it's easy to set up in a Docker container, it supports caching and websockets, and it handles Let's Encrypt.

Have your clients point their @ to your host's IP, then create an A record for them to point their www CNAME at (e.g. hosting.myawesomecompany.com). Then set up a proxy config for that customer to route the requests to the correct server/infrastructure.

Make sure Nginx Proxy Manager is kept updated, and lock the web UI down behind an IP ACL or something similar.
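
For reference, the Docker setup is roughly the quick-start compose file from their docs (check their site for the current version; the ports and volume paths below are the documented defaults):

    # docker-compose.yml for Nginx Proxy Manager (quick-start style)
    services:
      app:
        image: 'jc21/nginx-proxy-manager:latest'
        restart: unless-stopped
        ports:
          - '80:80'     # proxied HTTP traffic
          - '443:443'   # proxied HTTPS traffic
          - '81:81'     # admin web UI: restrict access to this one
        volumes:
          - ./data:/data
          - ./letsencrypt:/etc/letsencrypt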