Docker: How to Dockerize and Deploy Multiple Instances of a LAMP Application

I need to deploy many instances of the same LAMP (or LEMP) application:

  • each instance will be accessible from a subdomain, behind a front load balancer/proxy
  • each instance must have its own database data and file data
  • each instance should be monitorable
  • memory/CPU limits should be settable per app instance (see the sketch after this list)
  • deployment of a new webapp instance should be easy to automate
  • the environment should be easily reproducible for testing and development
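
For the memory/CPU point, Docker exposes per-container limits at run time. A minimal sketch, where "app1" and "my-webapp-image" are hypothetical names:

# Cap the instance at 512 MB of RAM and give it a relative CPU weight
docker run -d --name app1 -m 512m --cpu-shares=512 my-webapp-image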

The application requires:

  • daemon processes (Nginx, MariaDB, PHP-FPM)
  • binaries (Composer, Bower, ...)
  • other system-specific libs & config

After reading the Docker documentation and many howtos, I see several different ways to dockerize this web application:


Solution 1: Use an all-in-one container

The whole stack lives in one container:

  • webapp source files, the daemon processes (Nginx, MariaDB, PHP-FPM), binaries, …
  • mounted volumes for MySQL and webapp data files
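
For illustration, a hedged sketch of running such an all-in-one container, publishing port 80 and keeping only the data outside (the image name and host paths are hypothetical):

# Everything runs inside one container; only the data lives on the host
docker run -d --name app1 -p 80:80 \
  -v /data/app1/mysql:/var/lib/mysql \
  -v /data/app1/www:/var/www \
  my-allinone-image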

Examples :

  • Tutum provides an all-in-one image for a WordPress application: https://github.com/tutumcloud/tutum-docker-wordpress
  • Phusion, which provides a base image optimized for Docker, states in its documentation (https://github.com/phusion/baseimage-docker#docker_single_process):

    Docker runs fine with multiple processes in a container. In fact, there is no technical reason why you should limit yourself to one process

Pros (IMHO):

  • Seems easy to automate deployment, monitoring, destruction…
  • Easy to use in prod, test and dev environments.

Cons (IMHO):

  • Monolithic
  • Hard to scale
  • Does not use all of Docker's strengths

Solution 2: Use a container stack per webapp instance

For each webapp instance, a stack of containers is deployed:

  • One container per process: Nginx, MySQL, PHP-FPM
  • Binary containers (Composer, Bower, …) can also be dockerized, or merged into the PHP-FPM container
  • Mounted volumes for MySQL and webapp data files
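
As an illustration, such a per-instance stack might be started like this, prefixing container names with the instance name (images, ports and paths are examples, not a recommendation):

# Per-instance stack for "app1": each process in its own container, wired with links
docker run -d --name app1_mysql -e MYSQL_ROOT_PASSWORD=secret \
  -v /data/app1/mysql:/var/lib/mysql mysql
docker run -d --name app1_phpfpm --link app1_mysql:mysql \
  -v /data/app1/www:/var/www php:fpm
# The Nginx vhost must point fastcgi_pass at the "phpfpm" link alias
docker run -d --name app1_nginx --link app1_phpfpm:phpfpm \
  -v /data/app1/www:/var/www -p 8001:80 dockerfile/nginx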

Examples :

  • the orchestration tool Gaudi provides an example of a LEMP architecture based on 3 “daemon” containers (Nginx, MySQL, PHP-FPM) and 2 app containers (Composer, Bower): http://marmelab.com/blog/2014/06/04/demo-symfony-with-docker-and-gaudi.html

Pros (IMHO):

  • Decoupled
  • Processes isolated per instance
  • One process per container, so no need for a daemon manager like Runit or Supervisord

Cons (IMHO):

  • Seems more complicated to get working
  • Hard to maintain, and to see a “big picture” of all container states, links, versions…

Solution 3: Mix the two previous solutions

  • One “app” container with: app source files, Nginx, PHP-FPM, Composer, Git…
  • One container for the MySQL database, which may or may not be shared between app instances
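
A sketch of that mix, assuming a hypothetical my-app-image that bundles Nginx, PHP-FPM and the sources, linked to a separate database container:

# Database in its own container, shared between instances or dedicated to one
docker run -d --name app1_db -e MYSQL_ROOT_PASSWORD=secret mysql
# One "app" container bundling Nginx + PHP-FPM + sources, linked to the db
docker run -d --name app1 --link app1_db:db -p 8001:80 my-app-image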

I'm more Dev than Ops, so all of this is confusing to me.

So, questions:

  1. What criteria and pros/cons should be considered when choosing between these solutions?
  2. If I choose Solution 2, how do I manage all the container stacks so as to keep a "big picture" of all container states, links, versions…?
  3. Should app source files (PHP) be built into the container or mounted as a volume, e.g. /var/www?
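
To make questions 2 and 3 concrete (every name below is made up): a strict per-instance naming convention at least lets plain docker ps give a rough overview, and question 3 boils down to an ADD in the Dockerfile versus a -v flag at run time:

# Question 2, crudely: a naming convention (app1_nginx, app1_mysql, ...) plus grep
docker ps | grep app1_

# Question 3: either bake the sources in at build time (e.g. ADD . /var/www
# in the Dockerfile), or mount them when the container starts
docker run -d -v /path/to/src:/var/www my-webapp-image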

I recently went through an analysis of Docker for this type of setup. I know some people view Docker as a sort of micro-VM, but my take is that the Docker philosophy leans more toward a single process per container. This tracks well with the Single Responsibility principle in programming: the more a Docker container does, the less reusable and the more difficult to manage it becomes. I posted all my thoughts here:

http://software.danielwatrous.com/a-review-of-docker/

I then went on to build a LEMP stack using Docker. I didn't find a lot of value in splitting the PHP and Nginx processes into separate Docker containers, but the Web and Database functions are in separate containers. I also show how to manage linking and volume sharing to avoid running SSH daemons in your containers. You can follow what I did here as a point of reference.

http://software.danielwatrous.com/use-docker-to-build-a-lemp-stack-buildfile/
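
As a rough sketch of that linking/volume-sharing pattern (the names and images below are illustrative, not taken from the article):

# Data-only container holding the web root; others mount it with --volumes-from
docker run --name web_data -v /var/www busybox true
# Database in its own container
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql
# Web container (Nginx + PHP together) shares the volume and links to the db; no sshd needed
docker run -d --name web --volumes-from web_data --link db:db -p 80:80 my-lemp-image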

To your point about increased complexity for the single function per container, you are correct. It will look and feel just like you had distinct, distributed tiers. Very large applications have done this for years and it does increase complexity when it comes to communication, security and management. Of course it brings a number of benefits as well.


Both solutions are possible. However, I would go with Solution 2 - one container per process - since it is more compatible with the Docker "philosophy".

The nice thing about Docker is that you can create an application stack (like yours) from independent building blocks (images of single applications). You can combine those building blocks and reuse them. If you take a look at the official Docker registry you will find most of your components as pre-built images. E.g. you will find Nginx at https://registry.hub.docker.com/u/dockerfile/nginx and a MySQL database at https://registry.hub.docker.com/_/mysql. So, setting up your stack becomes quite easy if you choose to use one container per process/app:

(Note, this is just an example, I am not familiar with PHP and stuff...)

Get your images and start the containers:

# Pull the pre-built images
docker pull mysql
docker pull dockerfile/nginx
docker pull tutum/apache-php

# Start the database in the background, setting the root password
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=mysecretpassword -d mysql

# Start Nginx on host port 80, mounting your site config and log directories
docker run -d -p 80:80 -v <sites-enabled-dir>:/etc/nginx/sites-enabled -v <log-dir>:/var/log/nginx dockerfile/nginx

# Publish the Apache/PHP container on a different host port, since Nginx already holds port 80
docker run -d -p 8080:80 tutum/apache-php

You can set up your stack very easily like this. And, if you want, you can swap out single components, e.g. replace the MySQL database with MariaDB without touching any other component.
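
For instance, assuming the official mariadb image (which accepts the same MYSQL_ROOT_PASSWORD variable), the swap is just:

docker pull mariadb
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=mysecretpassword -d mariadb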

The most complicated thing about this solution is configuring your stack. To link your containers, take a look at https://docs.docker.com/userguide/dockerlinks. You can use this approach to link e.g. your application container to your MySQL container.
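
Continuing the example above (replacing the earlier unlinked apache-php run with a linked one), a minimal link would look like this; inside the app container, the alias "mysql" becomes a resolvable hostname for the database:

docker run -d --name some-app --link some-mysql:mysql -p 8080:80 tutum/apache-php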