QEMU / KVM - Dedicated 802.1q VLAN for each VM - Communication only via router
I have a Linux firewall router (dedicated machine) with multiple ethN interfaces (my "big firewall"). All forwarded traffic is filtered by a set of iptables
rules (default policy DROP).
There's another dedicated machine ("vmhost") that will host multiple virtual machines using KVM / QEMU / libvirt / virsh.
The firewall router (physical server) and the vm host (another physical server) are directly connected by a patch cable (eth2 of router <-> eth0 of the vmhost).
I don't want the VMs on the vmhost to be able to communicate
- to each other
- or to the VM host
except through the external firewall router.
Therefore, I've configured multiple 802.1q tagged VLANs on both sides (router and vmhost): eth0.10, eth0.11 etc. (eth2.10, eth2.11, ... on the other side), each with a different /30
subnet (one host IP = router, the other host IP = VM). So every VM gets its own tagged VLAN and its own subnet.
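For reference, the router side currently looks roughly like this in /etc/network/interfaces (Debian ifupdown with the vlan package; the addresses are placeholders, just to illustrate the /30-per-VLAN layout):
auto eth2.10
iface eth2.10 inet static
    address 10.0.10.1/30

auto eth2.11
iface eth2.11 inet static
    address 10.0.11.1/30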
I want to use this to subordinate the VM traffic to the iptables rules of the central firewall router. The VMs shall only be able to reach IP addresses and ports that are explicitly permitted.
How does one configure a VM to be tied to a dedicated VLAN interface (e.g. eth0.10)? Concerning net, netdev, nic, ...
I explicitly don't want to bridge between the VMs' networks, or between the VMs and the host.
// Later addition: Both servers use Debian 10 amd64.
There are several methods in general. Note that some of them won't let you avoid bridging; the configuration is still secure nonetheless.
The basics of VLAN-aware bridging on Linux are explained here, for example: https://developers.redhat.com/blog/2017/09/14/vlan-filter-support-on-bridge/
Multiple bridges
This is the old approach; it was available even in very old Linux versions, as soon as the kernel supported both bridges and VLANs. Its drawback is a rather messy configuration, but it's somewhat easier to understand and manage.
You configure a bridge on the host for each VLAN subinterface like this:
ip link add link eth0 name eth0.100 type vlan id 100
ip link set eth0.100 up
ip link add br100 type bridge
ip link set br100 up
ip link set eth0.100 master br100
And so on. After this, if you put a VM's virtual NIC into this br100, its untagged packets will leave eth0 tagged as VLAN 100, and vice versa, packets tagged with VLAN 100 will reach this VM. You specify this bridge in the VM's domain file, in the vNIC settings. To put a VM into several VLANs, you create a dedicated virtual NIC for each VLAN.
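For example, the vNIC part of the domain file (virsh edit yourvm, with yourvm being a placeholder domain name) would reference the bridge roughly like this; model virtio is just a sensible default, and libvirt generates the MAC address itself:
<!-- inside <devices> of the domain XML; br100 is the bridge created above -->
<interface type='bridge'>
  <source bridge='br100'/>
  <model type='virtio'/>
</interface>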
This approach is also used by Proxmox VE if you don't enable "bridge vlan_filtering". You didn't specify which distribution you run libvirt on, so I can't suggest how to reach this configuration in your particular case. You can see how it would look on Debian here.
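If it's Debian with classic ifupdown, a rough sketch of one such VLAN-plus-bridge pair in /etc/network/interfaces would be (assuming the vlan and bridge-utils packages are installed; names follow the eth0.10 example from the question):
# VLAN 10 subinterface on the link towards the router
auto eth0.10
iface eth0.10 inet manual

# bridge carrying only that VLAN; no IP address on the host side
auto br10
iface br10 inet manual
    bridge_ports eth0.10
    bridge_stp off
    bridge_fd 0
Repeat the pair for every VLAN / VM.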
In principle you can use the "untagged default" VLAN to manage the host. I don't recommend this; that way you won't be able to pass this VLAN through into any VMs (don't put the master eth0 into any bridges!). It's better to define a management VLAN, create a bridge for it, and assign an IP address to that management bridge on the host. Let's assume VLAN 100 is for management:
ip address add 192.168.100.100/24 dev br100
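On Debian ifupdown the persistent equivalent would look roughly like this (bridge-utils syntax, example addressing as above):
auto br100
iface br100 inet static
    bridge_ports eth0.100
    address 192.168.100.100/24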
Single bridge
This is "new" approach, made available once Linux bridge code supported vlan filtering. It's main advantage it makes Linux a true L2 switch, a configuration is less messy - you only have a single bridge. Proxmox VE supports it directly when there is "bridge vlan_filtering" enabled.
Unfortunately, this approach isn't directly supported by libvirt, according to their manual. I don't know why; they just seem lazy, because everything needed is already there in the kernel. I'll document it here in the hope they will eventually support it.
You create a single bridge:
ip link add br0 type bridge vlan_filtering 1
ip link set br0 up
ip link set enp0s1 master br0
Now add VLANs and define which ones are tagged and which are untagged. Remember, br0 is both the bridge's name and the name of that bridge's port towards the host. So you might want to set up an untagged management VLAN on the host side; on enp0s1 it can be either tagged or untagged, that is no problem for this kind of setup. Let's assume its ID is 100 and it is tagged on the wire (note the self keyword, which is needed when operating on the bridge device itself rather than on one of its ports):
bridge vlan add vid 100 dev br0 pvid untagged self
bridge vlan add vid 100 dev enp0s1
ip addr add 192.168.100.100/24 dev br0
Now the host's packets with this IP address will go out of enp0s1 tagged with VLAN 100.
All VM vNICs are to be attached to this bridge. But now you need to set up which VLANs go where, and again, libvirt doesn't know how to do this. You need to add the VLANs to the physical interface (tagged, I expect) and to each vNIC interface (tagged or untagged, as you wish). You must do this by hand, just like it was done for the management interface above; see the example below. You can pass several VLANs to the same vNIC, just make sure no more than one of them is untagged.
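For example, once a VM is running, find its tap device with virsh domiflist yourvm (they are usually named vnet0, vnet1, ...) and assign its VLAN by hand; VLAN 110 and vnet0 are just example names here:
bridge vlan add vid 110 dev enp0s1                # tagged towards the wire
bridge vlan add vid 110 dev vnet0 pvid untagged   # untagged towards the VM
bridge vlan del vid 1 dev vnet0                   # drop the default VLAN 1 membership
bridge vlan show                                  # verify per-port VLAN membership
The tap device is recreated every time the VM starts, so in practice you'll want to script this (for example in a libvirt qemu hook) rather than retype it after every boot.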
Other approaches
Other approaches include:
OpenVSwitch — libvirt supports VLANs for it directly, but I don't like it; I think it is overly complex.
VEPA — a good approach; it supports setting VLANs right in the libvirt domain file and it captures exactly what you want. I would have recommended it, but I never configured it with VLANs, so I can't. Maybe somebody else has tried it and will explain it in detail. In general, it offers the same level of convenience as the "single bridge" approach, except that communication between VMs, or between a VM and the host, will be impossible even when it is desired, even if they're in the same VLAN.
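For completeness, a minimal sketch of what such a vNIC could look like in the domain file, assuming macvtap "direct" attachment in vepa mode to the per-VM VLAN subinterface (eth0.10 is taken from the question; as said, I haven't tested this with VLANs):
<!-- inside <devices> of the domain XML; traffic leaves eth0 already tagged as VLAN 10 -->
<interface type='direct'>
  <source dev='eth0.10' mode='vepa'/>
  <model type='virtio'/>
</interface>
Since each VM sits alone in its /30 VLAN and only ever talks to the router, the usual VEPA requirement of a reflective-relay-capable adjacent switch shouldn't matter here.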