How do I set up linkage between Docker containers so that restarting won't break it?
I have a few Docker containers running like:
- Nginx
- Web app 1
- Web app 2
- PostgreSQL
Since Nginx needs to connect to the web application servers inside web app 1 and 2, and the web apps need to talk to PostgreSQL, I have linkages like this:
- Nginx --- link ---> Web app 1
- Nginx --- link ---> Web app 2
- Web app 1 --- link ---> PostgreSQL
- Web app 2 --- link ---> PostgreSQL
This works pretty well at first. However, when I develop a new version of web app 1 and web app 2, I need to replace them: I remove the old web app containers, create new containers, and start them.
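Concretely, the replace cycle looks roughly like this (container names, link aliases, and image tags are just placeholders):
$ docker stop webapp1
$ docker rm webapp1
$ docker run -d --name webapp1 --link postgresql:postgresql mycompany/webapp1:2.0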
For the web app containers, their IP addresses at first would be something like:
- 172.17.0.2
- 172.17.0.3
And after I replace them, they will have new IP addresses:
- 172.17.0.5
- 172.17.0.6
Now the environment variables that the links exposed inside the Nginx container still point to the old IP addresses. Here is the problem: how do I replace a container without breaking the links between containers? The same issue affects PostgreSQL: if I want to upgrade the PostgreSQL image, I certainly need to remove the container and run a new one, but then I need to rebuild the whole container graph. This is not ideal for real-life server operation.
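To illustrate: with --link webapp1:webapp1, the Nginx container's environment contains variables such as these (following Docker's documented ALIAS_PORT_&lt;port&gt;_TCP_* scheme; port 8080 is just an example), and they keep pointing at the old address after the replacement:
WEBAPP1_PORT=tcp://172.17.0.2:8080
WEBAPP1_PORT_8080_TCP=tcp://172.17.0.2:8080
WEBAPP1_PORT_8080_TCP_ADDR=172.17.0.2
WEBAPP1_PORT_8080_TCP_PORT=8080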
Solution 1:
The effect of --link is static, so it will not work for your scenario (there is currently no re-linking, although you can remove links).
We have been using two different approaches at dockerize.it to solve this, without links or ambassadors (although you could add ambassadors too).
1) Use dynamic DNS
The general idea is that you specify a single name for your database (or any other service) and update a short-lived DNS server with the actual IP as you start and stop containers.
We started with SkyDock. It works with two Docker containers: the DNS server and a monitor that keeps it updated automatically. Later we moved to something more custom using Consul (also using a dockerized version: docker-consul).
An evolution of this (which we haven't tried) would be to set up etcd or similar and use its custom API to learn the IPs and ports. The software should support dynamic reconfiguration too.
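As a minimal sketch of the Consul variant (the service name and address are examples; 8500 and 8600 are Consul's default HTTP API and DNS ports): register the new container's address whenever you start it, and let clients resolve a stable name over DNS.
$ curl -X PUT -d '{"Name": "postgres", "Address": "172.17.0.5", "Port": 5432}' \
    http://localhost:8500/v1/agent/service/register
$ dig @127.0.0.1 -p 8600 postgres.service.consul +short
172.17.0.5
When you replace the container, you register the new IP under the same name, and clients keep resolving postgres.service.consul.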
2) Use the Docker bridge IP
When exposing the container ports you can just bind them to the docker0 bridge, which has (or can have) a well-known address.
When replacing a container with a new version, just make the new container publish the same port on the same IP.
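For example (assuming docker0 sits at the default 172.17.42.1; the image name is just an example):
$ docker run -d --name postgres -p 172.17.42.1:5432:5432 paintedfox/postgresql
The web apps then always connect to 172.17.42.1:5432, no matter which container currently backs that port.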
This is simpler but also more limited. You might have port conflicts if you run similar software (for instance, two containers cannot both listen on port 3306 on the docker0 bridge), etc., so our current favorite is option 1.
Solution 2:
Links are tied to a specific container, not to a container name. So the moment you remove a container, the link is broken, and a new container (even with the same name) will not automatically take its place.
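A quick demonstration of the breakage (names and images are placeholders):
$ docker run -d --name db sample/db
$ docker run -d --name app --link db:db sample/app
$ docker rm -f db
$ docker run -d --name db sample/db
The app container still carries the link data of the removed container; it is not re-pointed at the new db.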
The new networking feature allows you to connect to containers by their name, so if you create a new network, any container connected to that network can reach other containers by their name. Example:
1) Create new network
$ docker network create <network-name>
2) Connect containers to network
$ docker run --net=<network-name> ...
or
$ docker network connect <network-name> <container-name>
3) Ping container by name
$ docker exec -ti <container-name-A> ping <container-name-B>
64 bytes from c1 (172.18.0.4): icmp_seq=1 ttl=64 time=0.137 ms
64 bytes from c1 (172.18.0.4): icmp_seq=2 ttl=64 time=0.073 ms
64 bytes from c1 (172.18.0.4): icmp_seq=3 ttl=64 time=0.074 ms
64 bytes from c1 (172.18.0.4): icmp_seq=4 ttl=64 time=0.074 ms
See this section of the documentation.
Note: Unlike legacy links, the new networking will not create environment variables, nor share environment variables with other containers. This feature currently doesn't support aliases.
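Applied to the question, replacing a web app then becomes trivial, because resolution follows the container name rather than a fixed link (image names are placeholders):
$ docker network create my-net
$ docker run -d --name webapp1 --net=my-net mycompany/webapp1:1.0
$ docker run -d --name nginx --net=my-net -p 80:80 nginx
$ docker rm -f webapp1
$ docker run -d --name webapp1 --net=my-net mycompany/webapp1:2.0
Nginx can still reach the new container as webapp1, without any relinking.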
Solution 3:
You can use an ambassador container. But do not link your client to the ambassador container, since this creates the same problem as above. Instead, use the exposed port of the ambassador container on the Docker host (typically 172.17.42.1). Example:
postgres volume:
$ docker run --name PGDATA -v /data/pgdata/data:/data -v /data/pgdata/log:/var/log/postgresql phusion/baseimage:0.9.10 true
postgres-container:
$ docker run -d --name postgres --volumes-from PGDATA -e USER=postgres -e PASS='postgres' paintedfox/postgresql
ambassador-container for postgres:
$ docker run -d --name pg_ambassador --link postgres:postgres -p 5432:5432 ctlc/ambassador
Now you can start a PostgreSQL client container without linking to the ambassador container, and access PostgreSQL on the gateway host (typically 172.17.42.1):
$ docker run --rm -t -i paintedfox/postgresql /bin/bash
root@b94251eac8be:/# PGHOST=$(netstat -nr | grep '^0\.0\.0\.0 ' | awk '{print $2}')
root@b94251eac8be:/# echo $PGHOST
172.17.42.1
root@b94251eac8be:/#
root@b94251eac8be:/# psql -h $PGHOST --user postgres
Password for user postgres:
psql (9.3.4)
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
Type "help" for help.
postgres=#
postgres=# select 6*7 as answer;
answer
--------
42
(1 row)
postgres=#
Now you can restart the ambassador container without having to restart the client.
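For example, a PostgreSQL upgrade would then look roughly like this (only the ambassador is relinked; clients keep pointing at the host gateway; the image tag is a placeholder):
$ docker rm -f pg_ambassador postgres
$ docker run -d --name postgres --volumes-from PGDATA -e USER=postgres -e PASS='postgres' paintedfox/postgresql:newer
$ docker run -d --name pg_ambassador --link postgres:postgres -p 5432:5432 ctlc/ambassador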
Solution 4:
If anyone is still curious: use the host entries in the /etc/hosts file of each Docker container, and do not depend on the environment variables, as they are not updated automatically.
There will be a hosts file entry for each linked container under its link alias, while the environment variables follow the LINKEDCONTAINERNAME_PORT_PORTNUMBER_TCP naming scheme and keep their stale values.
The following is from the Docker docs:
Important notes on Docker environment variables
Unlike host entries in the /etc/hosts file, IP addresses stored in the environment variables are not automatically updated if the source container is restarted. We recommend using the host entries in /etc/hosts to resolve the IP address of linked containers.
These environment variables are only set for the first process in the container. Some daemons, such as sshd, will scrub them when spawning shells for connection.
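To see this in practice, inspect the hosts file inside a container that was started with --link postgres:postgres (names are examples); the alias line is rewritten with the new IP when the source container restarts, while the environment variables keep the old address:
$ docker exec webapp1 grep postgres /etc/hosts
172.17.0.5  postgres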