socat TUN device very low throughput

I was tinkering with socat and tried to use it to create a TUN device for tunneling between two Debian Stretch servers. However, throughput seemed very low: compared with iperf against a plain TCP/TCP-LISTEN relay on localhost, the TUN setup has almost four orders of magnitude less throughput.

Here is a "minimal working example" to show how throughput is affected.

socat with TUN device

Server side:

# socat
socat TUN:10.10.0.2/16,iff-up TCP4-LISTEN:54321,bind=192.168.1.2,fork
# iperf service
iperf -s -p 15001 -B 10.10.0.2

Client side:

# socat
socat TUN:10.10.0.1/16,iff-up TCP4:192.168.1.2:54321
# iperf
iperf -c 10.10.0.2 -p 15001 -t 30

socat with TCP/TCP-LISTEN

Server side:

# socat
socat TCP4-LISTEN:12345,bind=192.168.1.2,fork TCP4:127.0.0.1:15001
# iperf service
iperf -s -p 15001 -B 127.0.0.1

Client side:

# socat
socat TCP4-LISTEN:54321,bind=127.0.0.1,fork TCP4:192.168.1.2:12345
# iperf
iperf -c 127.0.0.1 -p 54321 -t 30

Results

TUN device:

[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-39.7 sec   640 KBytes   132 Kbits/sec

TCP/TCP-LISTEN:

[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-30.0 sec  3.30 GBytes   944 Mbits/sec

To reproduce the results using the lines above, you need to run the socat commands and the iperf server side in the background or daemonized; I just used screen sessions.
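For example, a minimal alternative to screen is plain shell job control; the server-side lines above would become the following (the trailing & to background each process is the only addition):

# run socat and the iperf server in the background
socat TUN:10.10.0.2/16,iff-up TCP4-LISTEN:54321,bind=192.168.1.2,fork &
iperf -s -p 15001 -B 10.10.0.2 &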

So, while I assumed that throughput would suffer to some degree, it seems strange that it degrades from the assumed gigabit (both servers are on the same switch) to a mere 100 Kbit/s. A quick glance at atop shows no significant bottlenecks, so it isn't simply CPU-capped or eating RAM.

Why is throughput that low? Did I make a logic error somewhere? Is it a problem in the kernel, a bad implementation in socat, or am I using iperf wrong?

Are there any parameters or settings (kernel, socat, anything) to improve this? Anything else I could check? And, most importantly, is there a way to use the TUN device that gives me useful throughput?


I was having a similar issue when using socat to create an IP connection between two computers over a Bluetooth serial connection (via rfcomm). The raw serial connection was reasonably fast (around 15 KiB/s), but when connecting to a web server via the TUN device created by socat, the speed was very slow and the browser eventually gave up after loading for a while.
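For reference, the tunnel end looked roughly like the following; the device node /dev/rfcomm0, the addresses, and the tty options here are illustrative assumptions rather than my exact commands:

# one tunnel endpoint, assuming the Bluetooth serial link shows up as /dev/rfcomm0
socat TUN:10.20.0.1/24,iff-up /dev/rfcomm0,raw,echo=0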

The socat man page does say the following.

Note that streaming eg. via TCP or SSL does not guarantee to retain packet boundaries and may thus cause packet loss.

This made me wonder whether packets were being dropped or truncated. Sure enough, tcpdump -i tun0 showed many packets that were missing bytes (some missing over 140 bytes). Looking at the MTU (maximum transmission unit, i.e. the packet size) of tun0 with ip link, I found it was 1500. However, the Bluetooth MTU was 1021 (?) bytes according to hciconfig --all (if I am reading its output correctly).
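Put together, the checks were along these lines (tun0 is the device socat created in my case, and the Bluetooth figure is presumably the ACL MTU field in the hciconfig output):

# watch the tunnel for truncated or dropped packets
tcpdump -i tun0
# show the MTU of the TUN device
ip link show tun0
# show the Bluetooth MTU (the ACL MTU field)
hciconfig --all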

Thus, for testing, I reduced the MTU on tun0 (on both sides of the tunnel, i.e. on both computers) to 100 bytes. This worked ... well, it was still not super fast, but I was connected over Bluetooth and it was much faster than before. At least I did not get any connection issues anymore. Note that the MTU can probably be set much larger; I was just testing whether it would fix the issue.

Below is the command to set the MTU of the tun0 device created by socat to 100 bytes (run it after starting socat). This needs to be run on both systems (and the values should probably match).

ip link set dev tun0 mtu 100

Thus, you would execute the following.

Server side:

# socat
socat TUN:10.10.0.2/16,iff-up TCP4-LISTEN:54321,bind=192.168.1.2,fork

# set MTU to 100
ip link set dev tun0 mtu 100

# iperf service
iperf -s -p 15001 -B 10.10.0.2

Client side:

# socat
socat TUN:10.10.0.1/16,iff-up TCP4:192.168.1.2:54321

# set MTU to 100
ip link set dev tun0 mtu 100

# iperf
iperf -c 10.10.0.2 -p 15001 -t 30

Note: you can use ip link to show all links; look for a tun device to find the right one if tun0 does not work.
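For instance, assuming the device name contains "tun", a quick filter does the job:

# list all links and pick out TUN devices
ip link | grep -i tun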