Should I dual home our webservers (DMZ/Internal network) or just do 1-to-1 NAT?

I'm setting up a rack of servers which will have 2 webservers and 10 internal servers that provide back-end application support (migrating from an AWS environment). We'll have virtual machine instances running on the boxes.

In most enterprise network configurations I've worked with, they dual-home the webservers so that one NIC sits on the DMZ network and the other sits on an internal (non-internet-routable) network.

Are the security benefits really sufficient to warrant two networks and dual homed hosts in the DMZ for a smaller 12-server configuration? The application is a mission critical web app.

One last benefit to consider: with separate networks, if the internal network were to saturate its bandwidth, the DMZ network would be left unaffected (1 Gbit interfaces, and we can get 2 Gbit into the firewall, which supports 1.5 Gbit of stateful throughput).


So ultimately this is what we're talking about, I think:

[network diagram]


Solution 1:

Are the security benefits really sufficient to warrant two networks and dual homed hosts in the DMZ for a smaller 12-server configuration? The application is a mission critical web app.

Absolutely. Publicly facing services should be isolated in a DMZ. Period. Anything that the big bad Internet can reach should be separated from your internal network and its services. This drastically limits the scope and damage a security breach will cause. This is a good application of the principle of least privilege and functional separation.

In most enterprise network configurations I've worked with they dual home the webservers so one NIC sits on the DMZ network and one sits on an Internal (non-internet routable) network.

I would take this one step further. The way you describe it, your servers would have the potential to act as a bridge between the Internet and the rest of your network, completely bypassing the DMZ in the event of a security breach. Your servers should remain on the DMZ and be accessible only through a point of control such as a firewall or VPN. They should not have a direct connection to anything on your internal network. That would invalidate the whole point of a DMZ.
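
To make that "point of control" idea concrete, here is a minimal sketch of the default-deny, allow-list policy such a choke point would enforce, written in Python only for illustration; the zone subnets, port numbers, and permitted flows are all hypothetical, and in practice these rules live on the firewall or VPN gateway itself, not in application code:

```python
from ipaddress import ip_address, ip_network

# Hypothetical zones; substitute your real DMZ and internal subnets.
ZONES = {
    "dmz": ip_network("192.0.2.0/24"),
    "internal": ip_network("10.10.0.0/16"),
}

# Only the flows you explicitly need, e.g. web tier -> back-end API on one
# port, and admin SSH from the inside out to the DMZ. Everything else is dropped.
ALLOWED_FLOWS = {
    ("dmz", "internal", 8443),   # hypothetical app/API port
    ("internal", "dmz", 22),     # admin SSH toward the DMZ hosts
}

def zone_of(addr: str):
    """Return the zone name an address belongs to, or None if unknown."""
    ip = ip_address(addr)
    for name, net in ZONES.items():
        if ip in net:
            return name
    return None

def permit(src: str, dst: str, dst_port: int) -> bool:
    """Default-deny: allow a flow only if it matches the allow-list."""
    return (zone_of(src), zone_of(dst), dst_port) in ALLOWED_FLOWS

print(permit("192.0.2.10", "10.10.5.20", 8443))  # True  - the one flow you chose to allow
print(permit("192.0.2.10", "10.10.5.20", 3389))  # False - everything else is denied
```

The point of the sketch is the shape of the policy: nothing crosses between zones unless a rule explicitly names it, which is exactly what a dedicated firewall interface between the DMZ and the internal network gives you.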


Here's a diagram showing the logical design. The implementation of the logical design is really up to you. The main point I'm trying to make is that you want any access between your internal servers and your DMZ servers to go through some sort of choke point where you can control, monitor and log those connections. Here's an example that will hopefully clarify what I'm talking about:

[diagram: DMZ design]

Let's say your provider has given you 203.0.113.0/28 for address space. You decide to chop it up into two separate subnets: 203.0.113.0/29 for DMZ machines and 203.0.113.8/29 for internal machines. Your firewall sits between your entire setup and the Internet and has three interfaces: one for your provider's upstream connection, one for 203.0.113.0/29, and one for 203.0.113.8/29. Any communication between these networks will thus pass through the firewall, where you can do two important things: A) selectively pass only the traffic you need between hosts, and B) monitor and log that traffic. The real goal is that there should be no direct communication between any of these networks; that is the ideal you should strive for.
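
As a quick sanity check of that layout, here is a minimal sketch using Python's standard ipaddress module; the 203.0.113.0/28 block and the two /29s come from the example above, and everything else is purely illustrative:

```python
import ipaddress

# The provider-assigned block from the example above.
block = ipaddress.ip_network("203.0.113.0/28")

# Split it into the two /29s described: one for the DMZ, one for the internal side.
dmz, internal = block.subnets(new_prefix=29)

print(dmz, internal)           # 203.0.113.0/29 203.0.113.8/29
print(list(dmz.hosts()))       # 203.0.113.1 .. 203.0.113.6 (6 usable addresses)
print(list(internal.hosts()))  # 203.0.113.9 .. 203.0.113.14

# The firewall holds an interface in each subnet (plus the upstream link),
# so any DMZ <-> internal traffic has to cross it and can be filtered and logged.
assert not dmz.overlaps(internal)
```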