Auto-reloading of code changes with Django development in Docker with Gunicorn

I'm using a Docker container for Django development, and the setup runs Gunicorn behind Nginx. I'd like code changes to auto-reload, but the only way I can get them to load is by rebuilding with docker-compose (docker-compose build). The problem with "build" is that it re-runs all my pip installs.

I'm using the Gunicorn --reload flag, which is apparently supposed to do what I want. Here are my Docker config files:

## Dockerfile:
FROM python:3.4.3
RUN mkdir /code
WORKDIR /code
ADD . /code/
RUN pip install -r /code/requirements/docker.txt

## docker-compose.yml:
web:
  restart: always
  build: .
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
  env_file: .env
  command: /usr/local/bin/gunicorn myapp.wsgi:application -w 2 -b :8000 --reload

nginx:
  restart: always
  build: ./config/nginx
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web

postgres:
  restart: always
  image: postgres:latest
  volumes:
    - /var/lib/postgresql
  ports:
    - "5432:5432"

I've tried some of the other Docker commands (docker-compose restart, docker-compose up), but the code won't refresh.

What am I missing?


Thanks to kikicarbonell, I looked into using a volume for my code. After reviewing the Docker Compose recommended Django setup, I added - .:/code under volumes for my web container in docker-compose.yml, and now any code changes I make are applied automatically.

## docker-compose.yml:
web:
  restart: always
  build: .
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
    - .:/code
  env_file: .env
  command: /usr/local/bin/gunicorn myapp.wsgi:application -w 2 -b :8000 --reload
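With that volume in place, a quick way to confirm the reload loop works is to watch the web container's logs while editing a file on the host. This is only a sketch: the exact message Gunicorn prints on reload varies by version, myapp/views.py is just an example path, and depending on your Compose version you may need logs -f to follow the output.

# Start the stack in the background and watch the web container's logs
docker-compose up -d
docker-compose logs web

# In another terminal, edit any file on the host; the .:/code mount makes
# the change visible inside the container, and --reload restarts the workers
touch myapp/views.py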

Update: for a thorough example of using Gunicorn and Django with Docker, check out this example project from Rackspace, which also shows how to use docker-machine to launch the setup on remote servers like Rackspace Cloud.

Caveat: currently, this method does not work when your code is stored locally and the Docker host is remote (e.g., on a cloud provider like DigitalOcean or Rackspace). The same applies to virtual machines if your local file system is not mounted inside the VM. Note that there are separate volume drivers (e.g., Flocker), and there may be something out there to address this need. For now, the "fix" is to rsync/scp your files up to a directory on the remote Docker host; the --reload flag will then auto-reload Gunicorn after each scp/rsync.

Update: when pushing code to a remote Docker host, I find it far easier to just rebuild the container (e.g., docker-compose build web && docker-compose up -d). This can be slower than the rsync approach if your src folder is large, though.
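As a rough sketch of that rsync/scp step (the user, host, and paths here are placeholders for your own setup):

# Sync the local project into the code directory on the remote Docker host;
# the bind-mounted web container sees the new files and --reload restarts Gunicorn
rsync -avz --exclude '.git' ./ user@remote-docker-host:/path/to/project/

# Or copy a single changed file
scp myapp/views.py user@remote-docker-host:/path/to/project/myapp/views.py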


You have another problem: Docker caches each layer that it builds, so you shouldn't have to re-run pip install every time!

ADD . /code/
RUN pip install -r /code/requirements/docker.txt

This is your problem: Docker checks every ADD statement to see if any files have changed, and if they have, it invalidates the cache for that step and every later one. The correct way to do this is:

ADD ./requirements/docker.txt /code/requirements/
RUN pip install -r /code/requirements/docker.txt
ADD . /code/

This way, the pip install step is only invalidated when your requirements file changes!
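Put together with the Dockerfile from the question, the reordered version would look roughly like this (same base image and paths as above):

FROM python:3.4.3
RUN mkdir /code
WORKDIR /code
# Add only the requirements file first, so the pip install layer below
# stays cached as long as the requirements are unchanged
ADD ./requirements/docker.txt /code/requirements/
RUN pip install -r /code/requirements/docker.txt
# Add the rest of the code last; editing it no longer invalidates the pip cache
ADD . /code/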