Is there a way to optimize encrypted packet traffic over multiple Internet connections?
I want to accomplish this:
- Combine multiple Internet connections.
- Encrypt the connection from the local computer to the remote proxy/VPN server.
- Ideally, switch between "Reliable" and "Speed" modes (described below). If this is not possible, then I'd prefer at least to be able to do the Channel Bonding (Speed) mode.
- Have the server software hosted on my own infrastructure (like a dedicated server or VPS).
- Ideally, the solution should be able to run on a Windows client connecting to a Linux server (e.g., Windows 10 on the client, Ubuntu 14.04 Server on the server). If a Linux client is required, please state that in your answer; that's still acceptable, since I can always run it in a VM if I have to.
How can I do this?
More detail
Reliable vs. Speed Modes
Reliable mode is like RAID-1, where the same packets are sent on two or more uplinks, and the first copy to reach the server gets taken while the other gets thrown away. In reliable mode, your total throughput is theoretically that of the faster of your two uplinks, but in practice a bit less if packets from the slower uplink sometimes reach the endpoint first.
Speed mode is like RAID-0, where different packets of data are sent across both links, so that your total throughput, e.g. on a download, is the sum of the throughput of both uplinks.
Speedify uses this mechanism (more info here). I could simply use that, except Speedify's own service doesn't encrypt the data, and one of my two uplinks is an unsecured WiFi hotspot; I need strong encryption to protect application protocols that are not themselves encrypted (like regular HTTP, which superuser.com is accessed over).
My Situation
I have two connections to the public Internet:
- A ~12 Mbps LTE connection as an Ethernet over USB adapter (it's actually an iPhone, but it exposes itself to the OS as regular Ethernet)
- A ~5 Mbps LTE connection from a WiFi hotspot, delivered to a USB 802.11ac dongle
I want to combine them as follows:
Connection A --> Internet --> Server --> Internet Activity
Connection B --> Internet --> Server --> Internet Activity
The Internet Activity end is where it gets dicey. I want to have two separate connections established between my local computer and Server, but one unified connection established between Server and the broader public Internet.
So for example, let's say I submit an HTTP Request that takes, say, 10 IP packets. Perhaps 6 of those packets would be sent over Connection A and 4 over Connection B. That breakdown would be based on the saturation level of each uplink. This is what Speedify does, but Speedify doesn't encrypt anything.
Those packets would be sent in the correct order from Server to whatever endpoint on the public Internet I am trying to reach. Then, when the HTTP Response comes back from the Web, it would come back as 10 packets which are then passed through Server and distributed back to Connection A and Connection B in a way that tries to avoid network congestion (so if one of the uplinks has a lot of packet loss or saturation, it would focus on using the other uplink; but if packets are making it through on both links, it would distribute them across both links depending on the link speed).
That's the gist of what would occur behind the scenes. I considered possibly using something like OpenVPN. However, I'm not familiar with whether or how it could be configured to do this.
My Question
I'm not looking for a list of software suggestions that might be useful. Rather, my question is what are the details of how to accomplish this?
A while after posting this question, I switched some terms in my google searches and found the following gem of a blog post: http://simonmott.co.uk/vpn-bonding
The article is long and provides all the information needed to get this working. However, there's a significant flaw in the approach taken by the author. By tunneling over SSH, he's making the tunnel transport TCP. Ack. That means if you tunnel TCP through the tunnel, you've got TCP on top of TCP. With any significant latency or packet loss at all, the TCP stacks will get confused and start to thrash as both TCP stacks attempt to deal with congestion control algorithms, retransmissions, etc. This severely limits your throughput unless you only use something like UDP inside the tunnel (which means you can't access the Web).
The article does mention that it'll work equivalently with something other than ssh as the tunnel, and he's right. I decided to use OpenVPN's point to point feature for this. It's not extremely secure since it uses a static key, but the security was good enough for my purposes (pretty much only Advanced Persistent Threats will be able to break the crypto).
OpenVPN can transport over either TCP, or... UDP! We want to make our tunnel's transport layer UDP, because if the packets are lost, the "inner" TCP layer will deal with congestion control. And if you run UDP inside UDP, the application code is responsible for dealing with packet loss or latency, and will tend to handle it just fine.
I ran into a major issue in the form of a kernel regression that hit sometime during a point release of the 3.13 series, and it hasn't been addressed even in torvalds' git master at this point. This is not mentioned in the article because the regression didn't exist at the time the post was authored. On both your client and server, you will either need to recompile your kernel with this patch (simple enough to apply manually if patch refuses to work), or use a kernel version from 3.13.0 or earlier.
For my purposes, I used Debian Wheezy (currently Debian's oldstable branch) with the 3.2 kernel for the server, because I didn't want to recompile my kernel on an Amazon EC2 t2.nano VPS. On the client side (a Linux Mint desktop), I went with the kernel recompile. So, both methods work.
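If you go the recompile route on a Debian or Ubuntu client, here is a rough sketch of the process; the patch filename and kernel source path are placeholders for whatever you actually download, so adjust them to your system:
# minimal sketch, assuming the fix has been saved as bonding-fix.patch
cd /usr/src/linux-3.13.x                  # your unpacked kernel source tree (adjust the path)
patch -p1 < ../bonding-fix.patch          # or apply the change by hand if patch refuses to work
cp /boot/config-"$(uname -r)" .config     # start from the running kernel's config
make olddefconfig                         # accept defaults for any new options
make -j"$(nproc)" deb-pkg                 # build installable .deb packages
sudo dpkg -i ../linux-image-*.deb         # install the new kernel, then reboot into it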
Here are the instructions for setting this up once you have a suitable kernel (recompiled with the patch, or old enough to predate the regression):
You're going to have four openvpn processes: two on the client, and two on the server. Use openvpn version 2.1 or later; otherwise this won't work. Put the files in the /etc/openvpn directory (unless you have a custom sysconfdir and a custom-compiled openvpn).
In my case, I have two separate NICs, eth0 and eth1, on the server, which provide two separate public IPs, abbreviated below as SERVER_IP1 and SERVER_IP2. On the client, I have eth1 and wlan0 connected to my Internet uplinks, and their gateways (found using ifconfig and route -n) are abbreviated as GW1 and GW2.
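If you're not sure which addresses to use for GW1 and GW2, you can read each uplink's gateway straight out of the routing table before you change anything; for example:
route -n | grep '^0.0.0.0'
# or, with iproute2:
ip route show default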
To create static.key, read the OpenVPN man page.
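In short, you generate it once with openvpn itself and use the same file on both ends (in /etc/openvpn on each machine); something like this, with the copy step adjusted to however you normally reach your server:
openvpn --genkey --secret /etc/openvpn/static.key
scp /etc/openvpn/static.key root@SERVER_IP1:/etc/openvpn/static.key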
Server tun0.conf:
dev tun0
local SERVER_IP1
proto udp
topology p2p
push "topology p2p"
secret static.key
keepalive 30 90
Server tun1.conf:
dev tun1
local SERVER_IP2
proto udp
topology p2p
push "topology p2p"
secret static.key
keepalive 30 90
Client tun0.conf:
dev tun0
nobind
remote SERVER_IP1
proto udp
topology p2p
secret static.key
Client tun1.conf:
dev tun1
nobind
remote SERVER_IP2
proto udp
topology p2p
secret static.key
Now you want to start the OpenVPN instances on the server first, then the client.
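One way to do that on the server is simply to launch both configs in daemon mode (the server script below uses the init script, which does effectively the same thing); on the client I launch them inside screen, as shown in the client script further down:
openvpn --config /etc/openvpn/tun0.conf --daemon
openvpn --config /etc/openvpn/tun1.conf --daemon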
Once you have tun0 and tun1 both connected in POINTOPOINT mode (it should say that in the description of the interface when running ifconfig), you're ready to set up the bonding link, bond0.
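A quick sanity check, assuming the interface names above:
ifconfig tun0 | grep POINTOPOINT
ifconfig tun1 | grep POINTOPOINT
# each command should print a flags line containing POINTOPOINT once the tunnel is up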
I'm going to assume that you are using Debian, Ubuntu or a fork thereof for the config files. You can do an equivalent configuration on CentOS/RHEL-based systems in /etc/sysconfig/network-scripts/ifcfg-bond0, if I recall correctly. You will have to adjust the config syntax for that flavor of OS. And things may change significantly in the near future with the introduction of systemd and its network daemon.
Anyway, add this to /etc/network/interfaces on the server:
iface bond0 inet static
address 172.26.0.1
netmask 255.255.255.252
bond-slaves tun0 tun1
bond_mode balance-rr
And on the client:
iface bond0 inet static
address 172.26.0.2
netmask 255.255.255.252
bond-slaves tun0 tun1
bond_mode balance-rr
Make sure ifenslave is a valid command on the command line before proceeding. If not, install it from your package manager, with something like sudo apt-get install ifenslave.
Also make sure to uncomment the line that says #net.ipv4.ip_forward=1 in /etc/sysctl.conf, and you may have to echo 1 > /proc/sys/net/ipv4/ip_forward if you don't want to reboot after making the change to /etc/sysctl.conf.
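In other words, on the server you want something like this (the sysctl -w line is just an alternative to the echo above for applying the change immediately):
# /etc/sysctl.conf should contain this line, with the leading # removed:
net.ipv4.ip_forward=1
# apply it right away without rebooting:
sudo sysctl -w net.ipv4.ip_forward=1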
Here's my start script for the client; you'll have to replace several placeholder values (SERVER_IP1, SERVER_IP2, GW1, GW2, eth1 and wlan0, etc.) to get it to work for you. Do not replace 172.26.0.1 / 172.26.0.2 with anything; those are arbitrarily chosen private IPs that correspond to the server's bond0 link and the client's bond0 link, respectively.
#!/bin/bash
modprobe bonding
modprobe tun
iptables -F
#Force connecting to each of the two server IPs through the separate Internet uplinks you have
ip route add SERVER_IP1 via GW1 dev eth1
ip route add SERVER_IP2 via GW2 dev wlan0
#Connect to OpenVPN - this establishes the tunnel interfaces
sleep 1
screen -mdS tun0 openvpn --config /etc/openvpn/tun0.conf
sleep 1
screen -mdS tun1 openvpn --config /etc/openvpn/tun1.conf
sleep 5
#The next line should be all you need, but I find it doesn't work on Linux Mint; it just hangs after partially configuring the interface. It works fine on Debian Wheezy, though.
ifup bond0 >& /dev/null &
sleep 5
killall ifup >& /dev/null
ifconfig bond0 up >& /dev/null
#If the ifup script doesn't do its job (it fails on certain Debian OSes depending on the version of your ifenslave program), we have to manually set it up - the next few lines take care of that
ifconfig bond0 172.26.0.2 netmask 255.255.255.252
sleep 2
echo '+tun0' > /sys/class/net/bond0/bonding/slaves
echo '+tun1' > /sys/class/net/bond0/bonding/slaves
#Clear the default gateway and set it to the bond interface
#Required regardless of whether you had to manually configure bond0 above or not
ip route del 0.0.0.0/0
ip route add 0.0.0.0/0 via 172.26.0.1 dev bond0
#Use fair queue controlled delay active queue management for managing multiple TCP flows simultaneously - prevents webpages from loading horribly slowly while you have a download going - requires a kernel with fq_codel (mainline 3.5 or later)
tc qdisc add dev bond0 root fq_codel
#DEBUGGING
#On client and server:
#ifconfig bond0, make sure IPs are assigned
#iptables -F on client (don't need any rules)
#cat /sys/class/net/bond0/bonding/slaves - make sure tun0 and tun1 are there
#ifdown bond0; modprobe bonding; ifup bond0 then re-set-up the slaves and IPs
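One more check that's handy on either end once bond0 exists: the bonding driver exposes its state under /proc, so you can confirm the mode and the enslaved tunnels with:
cat /proc/net/bonding/bond0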
And here's the server script. It should look quite similar to the client script in general, except that you have to do some iptables packet forwarding to get packets to and from your Internet uplink and the bond0 interface.
Happily, there are no placeholders in the server script...! Just copy, paste, and run. (Err, unless your two interfaces that the client connects to happen not to be eth0 and eth1.)
#!/bin/bash
#The next line should be executed before you start doing anything on the client; or you can set openvpn to automatically start on system boot if you prefer.
/etc/init.d/openvpn start
sleep 1
ifup bond0
sleep 1
#Not necessary if your ifenslave script is working properly, but I had to add them manually
echo '+tun0' > /sys/class/net/bond0/bonding/slaves
echo '+tun1' > /sys/class/net/bond0/bonding/slaves
#I honestly have no idea what this line does, but it's in the original blog post and it seems to be required :/
ip route add 10.0.0.0/8 via 172.26.0.2 dev bond0
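#NAT outgoing traffic on both public interfaces, and allow established/related traffic to be forwarded back over bond0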
iptables -A POSTROUTING -t nat -o eth0 -j MASQUERADE
iptables -A POSTROUTING -t nat -o eth1 -j MASQUERADE
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
#Optional, but again, the best active queue management around - see client script for details
tc qdisc add dev bond0 root fq_codel
#This gets inserted automatically at some point along the way; 169.254.0.0/16 is link-local and not routable, so you definitely don't want that hanging around in there. Could be an artifact of using Amazon EC2, so this may error out for you with a "No such process" error if you don't get this route.
ip route del 169.254.0.0/16
...And that's about it.
Is it fast? Well... kinda. At this point I'm not blown away by the performance, but it's definitely giving me better speeds than the slower of the two links, and I used the handy tool iptraf to determine that both wlan0 and eth1 are sending and receiving UDP packets when I put load on the default gateway (e.g. by visiting websites). I'm looking into possible tuning in the way of MTU, MSS, recv buffer, etc. to improve the performance and optimize for throughput.
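If you want to experiment in that direction, the obvious knobs are the bond/tunnel MTU and OpenVPN's own fragment/mssfix directives; the values below are just starting points to test, not recommendations:
ip link set dev bond0 mtu 1400
# or, added to each OpenVPN config on both ends (UDP transport only):
# fragment 1300
# mssfix 1300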