How to Configure Secure Connectivity between Multiple Subnets

I have the following setup

2 x Linode VPS
1 x lab (physical) server running 4 VPSes

My goal is to make all nodes act as if they are on the same LAN. This would let me write iptables rules that allow only local traffic, instead of having to add a new iptables entry for EVERY server that needs access to a port on the target node.
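To illustrate the difference (the client addresses, the 10.0.0.0/18 virtual LAN, and the service port below are hypothetical examples):

```shell
# Without a shared LAN: one pinhole per client, per port,
# repeated on every server that exposes a service
iptables -A INPUT -p tcp -s 203.0.113.10 --dport 5432 -j ACCEPT
iptables -A INPUT -p tcp -s 203.0.113.11 --dport 5432 -j ACCEPT

# With all nodes on one virtual LAN: a single rule covers every node
iptables -A INPUT -s 10.0.0.0/18 -j ACCEPT
```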

I have done some preliminary research and testing and can't quite seem to figure out the best solution for what I am trying to accomplish. I have been practicing with two of my lab VPS, which reside on separate subnets, before I start configuring the actual production VPS.

The lab machine has two physical NICs, eth0 and eth1. eth1 is set up as a bridge (br0) to provide virtual NICs to the VPSes.

Setup is as follows

service-a-1 (physical node):
    eth0: 192.168.0.1
    eth1: br0
    br0:  192.168.0.2

service-a-2 (vps):
    eth0: 192.168.0.3
    eth0:0 10.0.0.1, 255.255.192.0
    eth0:1 10.0.1.1, 255.255.192.0, gw 10.0.0.1

service-a-3 (vps):
    eth0: 192.168.0.4
    eth0:0 10.0.64.1, 255.255.192.0
    eth0:1 10.0.65.1, 255.255.192.0, gw 10.0.64.1

I use the 192.168.0.x addresses to connect to the VPSes, and the 10.0.x.x addresses to practice connecting subnets. My goal with the above design is to establish a secure tunnel between service-a-2 and service-a-3 by way of their gateway IPs, 10.0.0.1 and 10.0.64.1, respectively. All other nodes in each subnet would then use the gateway for which a tunnel is already established, so I don't have to keep creating a new tunnel for every node on either subnet.
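The routing side of that design can be sketched as follows (this assumes a tunnel between the two gateways is already up; the interface name tun0 is an assumption):

```shell
# On service-a-2 (gateway for 10.0.0.0/18):
# send traffic for the remote subnet into the tunnel
ip route add 10.0.64.0/18 dev tun0

# On service-a-3 (gateway for 10.0.64.0/18): the mirror-image route
ip route add 10.0.0.0/18 dev tun0

# On every other node in subnet 1: just point at the local gateway,
# which already has the tunnel
ip route add 10.0.64.0/18 via 10.0.0.1
```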

To test connectivity I have been using: ping -I 10.0.1.1 10.0.65.1, which should emulate communication between node1 on subnet1 and node1 on subnet2.

I tried to follow the instructions outlined in this tutorial, as it seemed pretty straightforward, but after reading other posts I'm not sure the traffic is actually encrypted, since the mode is set to 'gre'. And after reading some information on using OpenSSH, it seems a new connection would be required for every node on the subnet, versus establishing a single connection between the two gateways.
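For reference, the GRE approach from such a tutorial looks roughly like this. Note that GRE only encapsulates, it does not encrypt, so the tunnel payload crosses the wire in cleartext unless it is paired with something like IPsec (the 172.16.0.0/30 point-to-point addresses here are hypothetical):

```shell
# On service-a-2: GRE tunnel to service-a-3 (encapsulation only, no encryption)
ip tunnel add gre1 mode gre local 192.168.0.3 remote 192.168.0.4 ttl 255
ip link set gre1 up
ip addr add 172.16.0.1/30 dev gre1

# On service-a-3: the mirror image
ip tunnel add gre1 mode gre local 192.168.0.4 remote 192.168.0.3 ttl 255
ip link set gre1 up
ip addr add 172.16.0.2/30 dev gre1
```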

After more searching around, I came across an article provided by Linode which looked promising, but in the first few paragraphs it mentioned that OpenSSH is the preferred method (over OpenVPN) to accomplish what I am seeking to do.

So my question is a two-parter:

  1. Is my logic valid for trying to connect subnets with one another? (Establish a tunnel between the gateways, then assign that gateway to each node on the subnet.)

  2. What is the preferred method of establishing a tunnel between two gateways to be shared by X number of nodes within their respective subnets? Using linux route, OpenSSH, OpenVPN, or something else?

-- Update --

After some toying around, it seems I need to establish an OpenSSH tunnel (for encryption) between the disparate routers. The tunnel would connect the external IPs of both routers, which, if set up correctly, I assume will allow me to access nodes behind the router on the other end.
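A minimal sketch of that OpenSSH layer-3 tunnel (this assumes root on both ends, "PermitTunnel yes" in the remote sshd_config, and arbitrary choices for the tun numbers and the 172.16.0.0/30 point-to-point addresses):

```shell
# Bring up tun0 on both ends over a single SSH connection
ssh -f -N -w 0:0 root@remote-router

# Number both ends of the tunnel
ip addr add 172.16.0.1/30 dev tun0
ip link set tun0 up
ssh root@remote-router 'ip addr add 172.16.0.2/30 dev tun0 && ip link set tun0 up'

# Route the remote subnet through the far end of the tunnel
ip route add 10.0.64.0/18 via 172.16.0.2
```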

Something else dawned on me, say I have the following setup:

subnet-1: Office #1, San Diego, CA

subnet-2: Colo #1, Dallas, TX

subnet-3: Colo #1, Tokyo, Japan

subnet-4: Colo #1, Sydney, Australia

Would it make sense to establish tunnels between each subnet, so they all act as one virtual LAN? As I mentioned in the original question, I am doing this so iptables can allow any traffic coming from 10.0.0.0/18, versus having to pinhole iptables for every server that needs access from another server.

Taking a further step back, does it even make sense to run iptables on EVERY server if it is behind a firewall? Maybe it would be easier to just stop iptables on all servers behind a firewall. I take security seriously, and it seems common sense to run iptables on every node, even behind a firewall. But if someone gains access to one node, they could theoretically reach the other nodes as if they were not running iptables at all, because of the 10.0.0.0/18 rule pinholed on every server.
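If the flat network stays, one middle ground is to keep iptables running everywhere but scope the rules to the services each node actually offers, rather than accepting everything from the virtual LAN (the port here is a hypothetical example):

```shell
# Keep loopback and stateful replies working
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Only the service this node actually runs, and only from the virtual LAN
iptables -A INPUT -p tcp -s 10.0.0.0/18 --dport 5432 -j ACCEPT

# Everything else is dropped, which limits lateral movement
# from a compromised node
iptables -P INPUT DROP
```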

-- Update #2 --

So I have n2n configured in the following manner:

service-a-1 (behind router, but pinholed 55554 udp):

  IP config: 
    ifcfg-eth0:  inet addr:10.0.0.1  Bcast:10.0.63.255  Mask:255.255.192.0 HWaddr 00:1B:78:BB:91:5A

  n2n (edge) startup:
    edge -d n2n0 -c comm1 -k eme -u 99 -g 99 -m 00:1B:78:BB:91:5C -a 10.0.0.1 -l supernode1.example.com:55555 -p 55554 -s 255.255.192.0

service-a-3 (linode vps):

  IP config:
    ifcfg-eth0: inet addr:4.2.2.2  Bcast:4.2.127.255  Mask:255.255.255.0 HWaddr F2:3C:91:DF:D4:08

    ifcfg-eth0:0: inet addr:10.0.64.1  Bcast:10.0.127.255  Mask:255.255.192.0 HWaddr F2:3C:91:DF:D4:08

    n2n (server) startup:
     supernode -l 55555 -v

    n2n (edge) startup:
      edge -d n2n0 -c comm1 -k eme -u 99 -g 99 -m F2:3C:91:DF:D4:08 -a 10.0.64.1 -l supernode1.example.com:55555 -p 55554 -s 255.255.192.0

With this setup, I was fully expecting to be able to ping service-a-3 (10.0.64.1) from service-a-1 (10.0.0.1), but I keep getting "destination net unreachable". iptables is turned off on both servers. service-a-1 is behind a firewall, but the firewall is configured to allow ALL outbound traffic. Any idea why I can't ping between the two subnets as if it were a flat network?
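One thing worth checking first: with the 255.255.192.0 (/18) mask passed via -s on both edges, 10.0.0.1 and 10.0.64.1 do not fall in the same subnet, so neither edge has an on-link route to the other. The arithmetic (the /18 boundary falls in the third octet, which gets ANDed with 192):

```shell
# /18 = 255.255.192.0, so the network is decided by (third octet AND 192)
echo $(( 0 & 192 ))    # 10.0.0.1  -> third octet 0  -> network 10.0.0.0/18
echo $(( 64 & 192 ))   # 10.0.64.1 -> third octet 64 -> network 10.0.64.0/18
```

Since n2n presents a flat layer-2 segment, either number both edges out of a single subnet that covers both ranges (for instance a /17), or add explicit routes for the far /18 via the n2n0 interface.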


Solution 1:

You can simplify the solution...

If you're looking for a way to link all of these servers (not routers or gateways devices) as though they were on one flat network, I'd suggest looking at the n2n peer-to-peer offering from ntop.

This tool allows you to traverse intermediate devices; helpful if you don't have access to firewalls or have complex routing issues. In my case, I use n2n for monitoring client systems from a central location. It's cleaner than site-to-site VPNs, and I can work around overlapping subnets/IP addresses. Think about it...

Edit:

I recommend using the n2n_v2 fork and hand-compiling.

An example configuration of n2n would look like the following:

On your supernode, you need to pick a UDP port that will be allowed through the firewall in front of the supernode system. Let's say UDP port 7655, on a host named edge.mdmarra.net:

# supernode -l 7655 -f -v 
# edge -d tun0 -m CE:84:4A:A7:A3:40 -c mdmarra -k key -a 10.254.10.1 -l edge.mdmarra.net:7655

On the client systems, you have plenty of options. You should choose a tunnel device name, a MAC address (maybe), a community name, a key/secret, an IP address, and the address:port of the supernode. I tend to use a more complete command string:

# edge -d tun0 -m CE:84:4A:A7:A3:52 -c mdmarra -k key -a 10.254.10.10 -l edge.mdmarra.net:7655

These can be run in the foreground for testing, but all of the functionality is in the edge command. I will typically wrap this in the Monit framework to make sure the processes stay up.
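For example, a minimal Monit stanza for the edge process might look like the following (the binary path, stop command, and matching pattern are assumptions; adjust for your install):

```
check process n2n-edge matching "edge"
  start program = "/usr/sbin/edge -d tun0 -m CE:84:4A:A7:A3:52 -c mdmarra -k key -a 10.254.10.10 -l edge.mdmarra.net:7655"
  stop program  = "/usr/bin/pkill -x edge"
```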