How can I use an LTE interface inside a docker container?

For my customer, I have to set up docker containers on separate machines that run various services over various physical links.

I cannot use "host" mode for my docker containers.

So far, I have happily used the macvlan driver to spawn new network interfaces in my container.

For example:

networks:
  good_net:
    driver: macvlan
    driver_opts:
      parent: eno0
  slow_net:
    driver: macvlan
    driver_opts:
      parent: eno0
  high_latency_net:
    driver: macvlan
    driver_opts:
      parent: eno0

I have a script at container startup that:

  • assigns a unique MAC address
  • brings the interface up
  • configures a DHCP or static address
  • applies tc rules to shape the traffic on each network
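A minimal sketch of that startup script, assuming the interface, MAC and addresses shown here (they are placeholders, not the real values, and your DHCP client may differ):

# runs inside the container at startup; eth1, the MAC and the IPs are made-up examples
IFACE=eth1
ip link set "$IFACE" down
ip link set "$IFACE" address 02:42:0a:64:0a:01   # unique, locally administered MAC
ip link set "$IFACE" up
ip addr add 10.100.10.1/24 dev "$IFACE"          # or run a DHCP client instead
tc qdisc add dev "$IFACE" root netem delay 100ms rate 10mbit   # e.g. for the high-latency network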

This works fine: I can run ping and iperf3 on the individual links to check that they behave as expected.

Problem: now the real interfaces are arriving, and they break my setup.

One of the links is now LTE.

lte_net:
  driver: macvlan
  driver_opts:
    parent: lte0

On PC1 (host, not container):

4: lte0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
   link/ether 7e:c4:d2:6a:e3:07 brd ff:ff:ff:ff:ff:ff
   inet 10.0.250.1/30 brd 10.0.250.3 scope global noprefixroute lte0
      valid_lft forever preferred_lft forever

On PC2 (host, not container):

4: lte0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
   link/ether b6:30:cb:12:31:16 brd ff:ff:ff:ff:ff:ff
   inet 10.0.250.5/30 brd 10.0.250.7 scope global noprefixroute lte0
      valid_lft forever preferred_lft forever

Hosts can happily ping and iperf3 each other over the LTE link.

BUT: in containers, using the macvlan approach, they can't:

  • I give the interface in each container an IP address in the same subnet: 10.100.10.1/24 and 10.100.10.2/24
  • I have brought down all the other interfaces so there is only one route left: 10.100.10.0/24 dev lte0 proto kernel scope link src 10.100.10.1
  • from the container on pc1, I try to ping the IP of the LTE interface in the container on pc2
  • with tcpdump on the host, I can see the ARP packets on the lte interface: 02:2d:6d:ad:63:62 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.100.10.2 tell 10.100.10.1, length 28

But I can't see them arriving on pc2, not even at host level.
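For reference, the capture that shows this was along the lines of the following (standard tcpdump flags, run on both hosts):

tcpdump -n -e -i lte0 arp   # -e prints the link-level (MAC) header, -n skips name resolution

On pc1 the ARP requests leave lte0; on pc2 the same capture stays silent.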

The reason is very likely that packets sourced from a MAC address the LTE modem does not recognize are discarded. This is similar to what happens with WLAN.

So I tried ipvlan in l2 mode instead, as is sometimes suggested:

lte_net:
  driver: ipvlan
  driver_opts:
    ipvlan_mode: l2
    parent: lte0

The lte0 interface in the container AND its parent on the host now have the same MAC address. But there is no change: still nothing crosses the LTE link...

ipvlan in l3 mode does not even come up, but I don't think that is what I need anyway.

So my question is: what am I doing wrong? How can I properly use my physical LTE link as a network interface for IP traffic in a docker container?

Sorry about the 3G tagging: I don't have enough reputation to create an LTE tag...

Thanks!


Solution 1:

I finally found a working solution... But boy did I suffer... And learned...

I now have a wrapper script around "docker-compose up -d" and a one-size-fits-all docker-compose.yml that creates 4 default (i.e. bridge-driver) docker networks (I don't need more).

It looks like this:

networks:
  test_net0:
  test_net1:
  test_net2:
  test_net3:

When brought up, this will create for each test_netX:

  • a veth pair: one end in the container's namespace, the other in the host's default namespace
  • a bridge that enslaves the host-side end of the veth
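You can check this from the host after the up (the network names come from the compose file above; the bridge names are generated by docker):

docker network inspect test_net0   # subnet, gateway and the network ID
ip link show type bridge           # the br-<id> bridges docker created
bridge link show                   # which veth is enslaved to which bridge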

For standard ethernet interfaces on the host there is no problem: just enslave them to the bridge of your choice. I could enslave a couple of USB-ethernet adapters the same way without a problem. This is done in bash (now I wish we had started in python, really) and the enslaving is done after the "up" by iproute2 commands using netns.
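The enslaving boils down to something like this (eth1 is an example interface; the bridge name is derived from the docker network ID, which is how the default bridge driver names its bridges):

# host side: find the bridge behind test_net0, then enslave the physical NIC
BR=br-$(docker network inspect -f '{{.Id}}' test_net0 | cut -c1-12)
ip link set eth1 master "$BR"
ip link set eth1 up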

For LTE, things get a lot more complex. First, the LTE interface won't let itself be enslaved properly. Second, although we could find no documentation about it, after lots of experiments we discovered that something between the LTE interfaces seen on the two hosts was discarding a lot of traffic:

  • any packet with a source OR destination IP that was not in the "LTE network"
  • (not confirmed, but suspected) any packet with a source OR destination MAC address different from our LTE interfaces' (by the way, these seem to be dynamically allocated by the driver, to make things more fun)

Basically, our LTE setup was transmitting only packets from and to LTE interfaces.

So we had a fantastic idea: let's just tunnel our traffic over LTE. We started with an ipip tunnel but quickly realized that we had invited an elephant to our picnic, because we needed multicast from container to container over LTE, which is probably doable if you have a hundred years to spare...

Then things finally clicked when we set up a GRETAP tunnel instead of the ipip tunnel over our LTE link. The big advantage is that a GRETAP interface can be enslaved to a bridge and behaves like a standard ethernet interface.
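A sketch of the setup on pc1, using the host LTE addresses from the question (tunnel name and TTL are arbitrary choices; pc2 runs the mirror image with local and remote swapped):

ip link add gretap1 type gretap local 10.0.250.1 remote 10.0.250.5 ttl 64
ip link set gretap1 up
ip link set gretap1 master "$BR"   # enslave it to the docker bridge like any ethernet interface

The trick is that the outer GRE packets are plain IP between 10.0.250.1 and 10.0.250.5, sourced from the LTE interfaces' own addresses and MACs, so the link has nothing left to discard.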

Once we fixed the too-low TTL on our multicast packets, we had all types of traffic running smoothly between our containers over the LTE link.
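For the record, multicast senders typically default to a TTL of 1, which is why ours died; applications raise it with the IP_MULTICAST_TTL socket option. A quick shell check with iputils ping (the group address and interface name are made-up examples):

ping -t 8 -I eth0 239.255.0.1   # -t sets the IP TTL of the probes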

Now it looks soooo easy and logical...

Linux, iproute2, docker, will you all marry me ?