LXC Containers & Bridge Connection
I need a step-by-step configuration to make LXC containers in Ubuntu:
- the right way to configure LXC containers
- the right way to configure cgroup
- the right way to configure networking on the host and in the container
Note
I have configured containers more than 30 to 35 times. I have a problem with cgroup (mounting it in fstab): after restarting the PC, the computer halts after the GRUB screen; if I don't restart, it works fine.
- My network in the containers is not working; I have tried everything I can.
Solution 1:
The following answer repeats much of the information in the links below.
LXC
Containers are a lightweight virtualization technology. They are more akin to an enhanced chroot than to full virtualization like Qemu or VMware, both because they do not emulate hardware and because containers share the same operating system as the host. Therefore containers are better compared to Solaris zones or BSD jails. Linux-vserver and OpenVZ are two pre-existing, independently developed implementations of container-like functionality for Linux. In fact, containers came about as a result of the work to upstream the vserver and OpenVZ functionality. Some vserver and OpenVZ functionality is still missing in containers; however, containers can boot many Linux distributions and have the advantage that they can be used with an unmodified upstream kernel.
Making LXC easier
One of the main focuses for 12.04 LTS was to make LXC dead easy to use. To achieve this, we've been working on a few different fronts, fixing known bugs and improving LXC's default configuration.
Creating a basic container and starting it on Ubuntu 12.04 LTS is now down to:
sudo apt-get install lxc
sudo lxc-create -t ubuntu -n my-container
sudo lxc-start -n my-container
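With the stock Ubuntu packages, the files for the container created above land under /var/lib/lxc/ (paths shown for the example name my-container):

```
/var/lib/lxc/my-container/config   # the container's configuration
/var/lib/lxc/my-container/rootfs/  # the container's root filesystem
```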
This will default to using the same version and architecture as your machine; additional options are available (--help will list them). Login/password are ubuntu/ubuntu.
Another thing we worked on to make LXC easier to work with is reducing the number of hacks required to turn a regular system into a container down to zero. Starting with 12.04, we don’t do any modification to a standard Ubuntu system to get it running in a container. It’s now even possible to take a raw VM image and have it boot in a container!
The ubuntu-cloud template also lets you get one of our EC2/cloud images and have it start as a container instead of a cloud instance:
sudo apt-get install lxc cloud-utils
sudo lxc-create -t ubuntu-cloud -n my-cloud-container
sudo lxc-start -n my-cloud-container
And finally, if you want to test the new cool stuff, you can also use juju with LXC:
[ ! -f ~/.ssh/id_rsa.pub ] && ssh-keygen -t rsa
sudo apt-get install juju apt-cacher-ng zookeeper lxc libvirt-bin --no-install-recommends
sudo adduser $USER libvirtd
juju bootstrap
sed -i "s/ec2/local/" ~/.juju/environments.yaml
echo " data-dir: /tmp/juju" >> ~/.juju/environments.yaml
juju bootstrap
juju deploy mysql
juju deploy wordpress
juju add-relation wordpress mysql
juju expose wordpress
# To tail the logs
juju debug-log
# To get the IPs and status
juju status
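The sed/echo steps above just switch the provider from ec2 to local and add a data directory. A minimal sketch of that edit, run against a throwaway copy so nothing real is touched (the file contents here are illustrative, not a full juju config):

```shell
# Create a throwaway stand-in for ~/.juju/environments.yaml
cat > /tmp/sample-environments.yaml <<'EOF'
environments:
  sample:
    type: ec2
EOF

# Same edits as in the walkthrough: flip the provider, add a data-dir
sed -i "s/ec2/local/" /tmp/sample-environments.yaml
echo "    data-dir: /tmp/juju" >> /tmp/sample-environments.yaml

cat /tmp/sample-environments.yaml
```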
Making LXC safer
Another main focus for LXC in Ubuntu 12.04 was to make it safe. John Johansen did amazing work extending apparmor to let us implement per-container apparmor profiles and prevent most known dangerous behaviours from happening in a container.
NOTE: Until we have user namespaces implemented in the kernel and used by LXC, we will NOT say that LXC is root-safe; however, the default apparmor profile as shipped in Ubuntu 12.04 LTS blocks any harmful action that we are aware of.
This mostly means that write access to /proc and /sys is heavily restricted. Mounting filesystems is also restricted, only allowing known-safe filesystems to be mounted by default. Capabilities are also restricted in the default LXC profile to prevent a container from loading kernel modules or controlling apparmor.
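These restrictions are wired up through the container's config file. A sketch of the relevant lines (lxc-container-default is the profile shipped in 12.04; the lxc.cap.drop values are illustrative):

```
# /var/lib/lxc/my-container/config  (security-related lines, sketch)
lxc.aa_profile = lxc-container-default
# Additional capabilities can be dropped per container, e.g.:
lxc.cap.drop = sys_module mac_admin
```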
More details on this are available here:
http://www.stgraber.org/2012/05/04/lxc-in-ubuntu-12-04-lts/
https://help.ubuntu.com/12.04/serverguide/lxc.html
http://www.stgraber.org/2012/03/04/booting-an-ubuntu-12-04-virtual-machine-in-an-lxc-container/
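On the networking side of the question: with the stock 12.04 packages, installing lxc sets up an lxcbr0 bridge on the host (with dnsmasq handing out addresses on 10.0.3.0/24), and each container's config points at that bridge. A sketch of the relevant container config lines (the hwaddr value is an illustrative placeholder):

```
# /var/lib/lxc/my-container/config  (network section, sketch)
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:12:34:56
```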
Solution 2:
cgroup is already configured in Ubuntu 11.10 Server; you don't have to configure it. The following is a pretty good guide in general:
http://www.activestate.com/blog/2011/10/virtualization-ec2-cloud-using-lxc
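If you still want a manual entry anyway (the questioner's boot hang suggests a bad one), the commonly used fstab line mounts all cgroup controllers in one hierarchy; the mount point must already exist, or boot can fail. A sketch:

```
# /etc/fstab (sketch) -- /sys/fs/cgroup must exist before this mounts
none  /sys/fs/cgroup  cgroup  defaults  0  0
```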
Solution 3:
If you have not yet seen the LXC Web Panel, try it.
http://lxc-webpanel.github.io/
It's easy to install and gives you a nice browser interface for managing your containers.
The application is written in Python and allows you to do the vast majority of LXC management from the GUI:
create/start/stop/destroy, freeze/unfreeze, edit/change networking, edit many cgroup parameters, etc.
Very nice tool.