Why Place Load-Balancer Behind Firewall?

I'm considering purchasing an F5 load-balancing device which will proxy inbound HTTP connections to one of five web servers on my internal network. My assumption was that the F5's external interface would face the Internet and its internal interface would face the internal network where the web servers live. Yet several of the illustrations I'm seeing online place the F5 device behind the firewall. This arrangement would cause extra traffic to pass through the firewall and also make the firewall a single point of failure, correct?

What's the rationale behind this configuration?


Solution 1:

I think the classical:

Firewall <-> Load Balancer <-> Web Servers <-> ...

is mostly left over from the era of expensive hardware-based firewalls. I've implemented such schemes, so they do work, but they make the whole setup more complicated. To eliminate single points of failure (and, e.g., allow upgrades of the firewall) you need to mesh traffic between two firewalls and two load balancers, either using layer 2 meshes or proper layer 3 routing.

On public clouds one tends to implement something like:

Load Balancer <-> [ (firewall + web) ] <-layer 2 domain or ipsec/ssl-> [ (firewall + app/db) ]

which is frankly good enough.

  1. If you're using the load balancer to terminate the SSL connection, a firewall placed in front of the load balancer can only do very basic layer 3 filtering, since all it sees is encrypted traffic.
  2. Your F5 already comes with a firewall, which is only as good as the filtering rules you put in place.
  3. The defense-in-depth argument is, IMHO, weak when it comes to layer 3. The attack vectors for web applications are SQL injections, not tripping up the firewall to gain root access.
  4. The cores of even puny web servers are usually enough to handle filtering from TCP and up (see the sketch below).
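To make point 4 concrete, here's a minimal, purely illustrative Python sketch of "filtering from TCP and up" on the web server itself: the host simply refuses connections that don't come from the load balancer. In practice you'd express this in the host firewall (iptables/nftables) or a cloud security group rather than in application code, and the 10.0.0.10 address and port 8080 are made-up placeholders.

    import socket

    # Assumed values for illustration only: the load balancer's internal-side
    # address and the backend port this web server listens on.
    ALLOWED_SOURCES = {"10.0.0.10"}
    LISTEN_ADDR = ("0.0.0.0", 8080)

    def serve():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(LISTEN_ADDR)
        srv.listen(128)
        while True:
            conn, (peer_ip, _peer_port) = srv.accept()
            # "Filtering from TCP and up": drop anything that isn't the load balancer.
            if peer_ip not in ALLOWED_SOURCES:
                conn.close()
                continue
            handle(conn)

    def handle(conn):
        # Stand-in for the real web server: read the request, answer, hang up.
        conn.recv(4096)
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        conn.close()

    if __name__ == "__main__":
        serve()

The source check costs next to nothing compared with actually serving the request, which is the point.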

Happy to see some discussion on the topic.

Solution 2:

I'd have thought this would be self-evident: The same reason you put anything behind the firewall.

Solution 3:

I wouldn't say there's any "extra" traffic travelling through that firewall.

If you have 5,000 requests inbound and you send an even 1,000 requests to each server, that's no more requests being serviced by the firewall than if you sent all 5,000 requests to the one server, or than if you put the firewall behind the F5 (all 5,000 requests still need to pass through that firewall at some point, otherwise they're not on a "private" network at all).
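As a toy illustration of that arithmetic, assuming the even split above:

    # The firewall handles every public request exactly once, regardless of
    # whether it sits in front of the load balancer or behind it.
    inbound_requests = 5000
    servers = 5
    per_server = inbound_requests // servers     # 1,000 requests per web server

    firewall_in_front = inbound_requests         # firewall first, then the F5 fans out
    firewall_behind = per_server * servers       # F5 first, firewall sees the fanned-out total
    assert firewall_in_front == firewall_behind == 5000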

It is true that the firewall is a single point of failure, but if you're only dipping into the budget far enough to fork out for a single F5, well, then that F5 becomes a single point of failure as well.

If you're out to configure a fully redundant system, you need two F5s in an active/passive HA cluster, and then you would have two firewalls, also in an active/passive HA cluster.

They may be depicted by a single graphic in the F5's documentation, but that's because it's just showing the logical appearance of the firewall (there's one device serving all the requests), not the physical setup (two devices, one of them in HA standby).

Another reason to put your load balancer behind your edge firewall is that the load balancer may not be hardened for Internet exposure by default (perhaps it has vulnerabilities in its management interfaces, maybe it ships with a default permit-all policy, who knows). By putting it behind the firewall and only poking holes for your publicly required ports, you run a vastly lower risk of exposing a vulnerable load balancer to the Internet.
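If you go that route, a quick sanity check is to probe the public address from outside the firewall and confirm that only the intended ports answer. The sketch below is a minimal Python version of that check; the 203.0.113.10 address and the port list are placeholders, and in practice you'd probably just reach for nmap.

    import socket

    # Placeholder public VIP for the firewall/load balancer; substitute your own.
    PUBLIC_VIP = "203.0.113.10"
    # A mix of ports you intend to expose (80, 443) and ones you definitely don't.
    PORTS_TO_CHECK = [22, 80, 443, 8443, 3306]

    def is_open(host: str, port: int, timeout: float = 2.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        for port in PORTS_TO_CHECK:
            state = "open" if is_open(PUBLIC_VIP, port) else "closed/filtered"
            print(f"{PUBLIC_VIP}:{port} -> {state}")

Run it from a host on the Internet side of the firewall; anything other than 80 and 443 reporting "open" means a hole you didn't mean to poke.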