Being new to virtualization in general, and somewhat new to Linux (using Debian Squeeze and coming from BSD), I have a hard time understanding what would be the best network bridging option for my host machine. Much, if not all, of the information on the net seems somewhat outdated.

There is info on br0, tun, tap, vnet and the like. I'm pretty much lost on what they all mean and do, and would appreciate it if someone knowledgeable could dumb it down for me.

What I would like is the best-performing, most flexible setup for my Debian host, where the (*BSD) guests can manage their own firewall (PF). The Squeeze host machine has two hardware NICs behind a proper hardware router.

At this point I think it is wise to put the guests on their own dedicated hardware NIC (eth1) and use an internal IP range (10.0.0.x) while the host uses eth0, but I'm very open to suggestions from the experts :)


Most of the terms you rattled off are names of interfaces created in vendor-specific or software-specific scenarios: virtual networks, bridges, tunnels, etc.

The terminology needed to set up KVM or other console-mode virtual machine systems is far from consistent. However, the general idea is the same, and if you boot a Linux live CD in VirtualBox or some other GUI hypervisor you can experiment with the options safely.

Basically it sounds like you need to set up a bridged interface, where the virtual machine gets a piece of virtual network hardware that is transparently aliased (usually via a sub-interface on the host) to one of the network cards on the host. Using your second NIC for this might be nice, but it is certainly not a requirement. The virtual machine appears on the network as a new device and can get an address and talk to devices on the upstream network, whatever that is.
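For reference, here is a minimal sketch of what that looks like in /etc/network/interfaces on Debian Squeeze, assuming the bridge-utils package is installed and that eth1 is the NIC you dedicate to the guests (the names are illustrative):

    # Host management traffic stays on eth0.
    auto eth0
    iface eth0 inet dhcp

    # eth1 carries no IP itself; it is enslaved to the bridge,
    # and the guests' tap interfaces get attached to br0.
    auto br0
    iface br0 inet manual
        bridge_ports eth1     # attach the physical NIC to the bridge
        bridge_stp off        # only one bridge, so STP is unnecessary
        bridge_fd 0           # no forwarding delay when the link comes up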

The other common option is to give the virtual machines interfaces that sit behind a NAT router on one of the host's network cards. This is useful for keeping them OFF your main network if they are just test machines, or if you simply want them to communicate as any other software would, using your host computer's IP address.
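With KVM this is easy to try: if you manage guests through libvirt, the stock "default" network is exactly this kind of NAT (a 192.168.122.0/24 subnet behind the host), and plain QEMU/KVM has the same idea built in as user-mode networking. The disk image name below is just a placeholder:

    # libvirt's built-in NAT network
    virsh net-start default
    virsh net-autostart default    # bring it up on every boot

    # the user-mode equivalent with plain QEMU/KVM
    kvm -m 512 -hda guest.img -net nic -net user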

A less common but useful scenario, if you host virtual machines in multiple locations, is to tunnel their traffic so that it is actually managed by some other host on the network: essentially VPN'ing them over to somewhere else, but at the (virtual) hardware level. The interface in the guest appears normal, but it is actually attached somewhere else in the network topology.
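One way to sketch this on Linux, assuming a kernel with gretap support, is a layer-2 GRE tunnel attached to the guests' bridge (the addresses here are illustrative):

    # Create a layer-2 GRE tunnel to the remote host
    ip link add gretap0 type gretap local 192.0.2.10 remote 198.51.100.20
    ip link set gretap0 up

    # Attach it to the guests' bridge: guest frames now surface at the
    # remote end, as if the VMs were plugged into a switch over there.
    brctl addif br0 gretap0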


Build a bond between the physical NICs, then build a bridge on the bond.

If this were my Debian KVM server, I would build a bond from the two physical NICs, configured for failover / fault tolerance.

Once you have the bond, you can build a bridge (br0) on top of it and use that for each KVM guest.
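For example, in /etc/network/interfaces on Squeeze this could look like the following, assuming the ifenslave-2.6 and bridge-utils packages are installed (the host picks up its LAN address on br0 via DHCP here, but a static address works just as well):

    # Bond the two physical NICs for failover
    auto bond0
    iface bond0 inet manual
        bond-slaves eth0 eth1      # enslave both physical NICs
        bond-mode active-backup    # pure failover, needs no switch support
        bond-miimon 100            # check link state every 100 ms
        bond-primary eth0          # prefer eth0 while it is up

    # The bridge rides on top of the bond and carries the host's IP
    auto br0
    iface br0 inet dhcp
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0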

The bridge allows the guests to join the 'real' network and take their own IP addresses, so they can communicate with the rest of the network or even the internet directly.
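Pointing a guest at the bridge is then a single option, for example at install time with virt-install (the name, sizes and paths below are placeholders):

    virt-install --name freebsd-guest --ram 1024 \
        --disk path=/var/lib/libvirt/images/freebsd-guest.img,size=8 \
        --network bridge=br0 \
        --cdrom /path/to/install.iso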

At this point it would be highly recommended to keep using PF on the BSD guests: since they are full members of the network, they can and should filter their own traffic just as a physical host would.