Public IP Address for LXC container
OK, so I want to know how to do networking with LXC containers: not just the sort of vague information you get from other websites, but a true beginner's guide to making them work. Since most examples are basically set up for people to test with, I want to run a real service in one... like a web server, for example.
I am running Ubuntu 12.04 LTS, I have LXC installed, and I can create, start and stop a container. My server obviously has a public-facing IP, and I would like to know how to set up a container so that it too can have a public IP. Since there already seems to be a bridge in place for my containers, it would seem that I either need to give the containers a public DHCP range to work from or manually assign a static IP address to my container.
If I want to statically assign an IP to the container, how do I do that? Do I need to make any changes to my bridge config on the host? Is it actually better to do it with the MACVLAN option?
Any help would be appreciated.
Solution 1:
My approach assumes that your server has a single NIC, and you need to share that NIC between the host and the LXC guests. This involves using a bridge. The bridge owns and manages eth0. The host then configures its own networking on br0 instead of eth0, and the LXC guests are configured to connect to the bridge.
On the host, install the bridge utilities:

    sudo apt-get install bridge-utils
On the host, replace eth0 with a bridge. This is dangerous: get this wrong and you could be locked out of your server. Be sure to have a local login enabled and that local console access works, so that you can revert this change if you have any problems.

In /etc/network/interfaces:

Replace auto eth0 with auto br0.

Replace:

    iface eth0 inet dhcp

with:

    iface br0 inet dhcp
        bridge_ports eth0

If you had a static network configuration, then you'd replace:

    iface eth0 inet static
        address ...
        netmask ...
        gateway ...
        etc.

with:

    iface br0 inet static
        bridge_ports eth0
        address ...
        netmask ...
        gateway ...
        etc.

You're just changing eth0 to br0 and adding the bridge_ports eth0 line.

Reboot the host. If you were doing this locally, then running sudo ifdown eth0 before you started and sudo ifup br0 afterwards would also do. Note that the bridge can take a little time to come up, so give it five minutes after reboot before you assume that all is lost.
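To make the pattern concrete, here is a sketch of what a complete static br0 stanza might look like. The addresses below are placeholders (documentation-range examples), not values from your network; substitute your real address, netmask and gateway:

```
auto br0
iface br0 inet static
    bridge_ports eth0
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
```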
To move a given named LXC container over to a public IP:
- Stop the container.
- On the host, edit /var/lib/lxc/container_name/config and change lxc.network.link to br0.
- On the host, edit /var/lib/lxc/container_name/rootfs/etc/network/interfaces and configure your public IP as you would normally (DHCP or a static configuration, as needed). Note that the interface is still called eth0 from the point of view of the container.
- Restart the container.
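The config change in the second step can be sketched as a one-line sed. This is a hypothetical example: a real run would edit /var/lib/lxc/container_name/config in place, but here a temporary copy is used so the snippet is safe to try anywhere:

```shell
# Hypothetical sketch: retarget a container's bridge from lxcbr0 to br0.
# In real use CONFIG would be /var/lib/lxc/container_name/config; a
# temporary file stands in for it here.
CONFIG=/tmp/lxc-example-config
printf 'lxc.network.type = veth\nlxc.network.link = lxcbr0\n' > "$CONFIG"

# Rewrite the lxc.network.link line to point at the new bridge.
sed -i 's/^lxc.network.link.*/lxc.network.link = br0/' "$CONFIG"
cat "$CONFIG"
```

After the edit the container attaches to br0 on its next start.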
To change the default for new LXC containers, edit /etc/lxc/default.conf on the host and change lxc.network.link to br0.

If you don't need the LXC-provided NAT bridge at all (i.e. all your containers will use the new bridge instead), then on the host edit /etc/default/lxc and change USE_LXC_BRIDGE to "false", and then on the host run sudo service lxc restart.
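For reference, the relevant lines in /etc/lxc/default.conf might then read as follows. This is a sketch assuming the pre-2.0 lxc.network.* key names used on Ubuntu 12.04; check the keys actually present in your own file:

```
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
```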
Solution 2:
Robie, thank you so much for posting this answer. I have been tearing my hair out trying to get this going, and this has been the only method that worked!
I thought I should mention a few things I figured out in order to help clarify the instructions for other admins.
My host had multiple static IP aliases assigned to eth0, for example:
    iface eth0:1 inet static
        address 5.5.5.5
        netmask 255.255.255.0
        gateway 5.5.5.1
        etc.
Now we don't want to set up br0 the same way; we just want one IP, with no aliases, as Robie indicated above.

So let's say you want 5.5.5.5 to be assigned to the container debian8. Edit /var/lib/lxc/debian8/rootfs/etc/network/interfaces and add:
    iface eth0 inet static
        address 5.5.5.5
        netmask 255.255.255.0
        gateway 5.5.5.1
        etc.
Then issue this command (in my case the gateway IP is 5.5.5.1):

    route add default gw <gateway-ip>
After that, reboot the container and everything should finally work! :)
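A route added by hand with route add does not survive a reboot. One way to persist it, assuming the ifupdown setup described above, is a post-up line in the container's interfaces stanza (a sketch using the example addresses from this answer):

```
iface eth0 inet static
    address 5.5.5.5
    netmask 255.255.255.0
    gateway 5.5.5.1
    post-up route add default gw 5.5.5.1 || true
```

The || true keeps ifup from failing if the route already exists.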
Solution 3:
I had the same problem and came up with this solution (quick and dirty).
server: eth0 = 10.1.0.77/24
server: lxdbr0 (lxd bridge) = 10.255.255.77/24
container: eth0 = 10.255.255.100/24 (same network as lxdbr0)
container: eth0:0 = 194.99.99.99/28 (public ip address on eth0 alias)
On the server: route add -host 194.99.99.99 gw 10.255.255.100 dev lxdbr0
Also, if needed, add a route on the upstream routers.

Probably not the best solution, but it requires no great effort! Cheers.
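As with any manual route add, the host-side route above is lost on reboot. One way to persist it, assuming lxdbr0 is (or can be) declared in the host's /etc/network/interfaces rather than managed elsewhere, is a post-up hook on the bridge stanza (a sketch using the addresses from this answer):

```
iface lxdbr0 inet static
    address 10.255.255.77
    netmask 255.255.255.0
    post-up route add -host 194.99.99.99 gw 10.255.255.100 dev lxdbr0
```

If lxdbr0 is managed by LXD itself rather than ifupdown, put the equivalent route in whatever boot-time hook your setup provides instead.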