Local hostnames for Docker containers
OK, since there seems to be no native way to do this with Docker, I finally opted for this alternate solution from Ryan Armstrong, which consists of dynamically updating the /etc/hosts file.
This was convenient for me because it works as a script, and I already had a startup script, so I could simply append this function to it.
The following example creates a hosts entry named docker.local which will resolve to your docker-machine IP:
update-docker-host(){
    # clear existing docker.local entry from /etc/hosts
    sudo sed -i '' '/[[:space:]]docker\.local$/d' /etc/hosts

    # get ip of running machine
    export DOCKER_IP="$(echo ${DOCKER_HOST} | grep -oE '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')"

    # update /etc/hosts with docker machine ip
    [[ -n $DOCKER_IP ]] && sudo /bin/bash -c "echo \"${DOCKER_IP} docker.local\" >> /etc/hosts"
}

update-docker-host
This will automatically add or update the /etc/hosts entry on my host OS whenever I start the Docker machine through my startup script.
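For reference, this is roughly how the function could be hooked into a startup script. It's only a sketch: the docker-machine commands are standard, but the machine name default is an assumption, so adjust it to your own machine name.

docker-machine start default          # start the VM if it is not already running
eval "$(docker-machine env default)"  # export DOCKER_HOST into the current shell
update-docker-host                    # refresh the docker.local entry in /etc/hosts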
Anyway, as I found out during my research, apart from editing the hosts file you could also solve this problem by setting up a custom DNS server.
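As a rough, untested sketch of that approach: dnsmasq can map a wildcard domain to the Docker machine IP, so docker.local and every *.docker.local subdomain resolve there. The config path and the 192.168.99.100 address below are placeholders; use whatever docker-machine ip reports.

# dnsmasq.conf (placeholder path and IP)
# resolve docker.local and any *.docker.local subdomain to the Docker machine IP
address=/docker.local/192.168.99.100

You would then point your system resolver at the host running dnsmasq (e.g. 127.0.0.1) so those lookups go through it.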
I also found several projects on GitHub which apparently aim to solve this problem, although I didn't try them:
- https://github.com/jpetazzo/pipework
- https://github.com/bnfinet/docker-dns
- https://github.com/gliderlabs/resolvable
Building on @eduwass's own answer, here's what I did manually (without a script):
- As mentioned in the question, define domainname: myapp.dev and hostname: www in the docker-compose.yml file
- Bring up your Docker containers as normal
- Run docker-compose exec client cat /etc/hosts to get an output of the container's hosts file (where client is your service name). Example output: 172.18.0.6 www.myapp.dev
- Open your local (host machine) /etc/hosts file and add that line: 172.18.0.6 server.server.dev
If your Docker service container changes IPs or does anything fancy, you will want a more complex solution, but this is working for my simple needs at the moment.
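If you do end up needing that, a small script along these lines could refresh the entry automatically. This is only a sketch: the client service name and www.myapp.dev hostname are just the examples from above, and the sed -i '' form assumes macOS.

#!/usr/bin/env bash
# Hypothetical helper: refresh the container IP in /etc/hosts after (re)starting the stack.
# Assumes a compose service named "client" and the hostname www.myapp.dev from above.
CONTAINER_ID="$(docker-compose ps -q client)"
[[ -n "$CONTAINER_ID" ]] || { echo "client container is not running" >&2; exit 1; }

# Grab the container IP from whatever network compose attached it to
CONTAINER_IP="$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$CONTAINER_ID")"

# Drop any stale entry, then append the fresh one
sudo sed -i '' '/[[:space:]]www\.myapp\.dev$/d' /etc/hosts
echo "$CONTAINER_IP www.myapp.dev" | sudo tee -a /etc/hosts > /dev/null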
Another solution would be to use a browser with a proxy extension that sends requests through a proxy container which knows how to resolve the domains. If you are considering jwilder/nginx-proxy for production, then your issue can be easily solved with mitm-nginx-proxy-companion.
Here is an example based on your original stack:
version: '3.3'
services:
  server:
    build: ./server
    working_dir: /app
    volumes:
      - ./server:/app
  client:
    environment:
      - VIRTUAL_HOST=client.dev
    image: php:5.6-apache
    volumes:
      - ./client:/var/www/html
  database:
    image: postgres:9.4
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=root
      - POSTGRES_DB=dbdev
      - PG_TRUST_LOCALNET=true
    volumes:
      - ./database/scripts:/docker-entrypoint-initdb.d # init scripts
  nginx-proxy:
    image: jwilder/nginx-proxy
    labels:
      - "mitmproxy.proxyVirtualHosts=true"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  nginx-proxy-mitm:
    dns:
      - 127.0.0.1
    image: artemkloko/mitm-nginx-proxy-companion
    ports:
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
- Run docker-compose up
- Add a proxy extension to your browser, with the proxy address set to 127.0.0.1:8080
- Access http://client.dev
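You can also check the setup from the command line before touching the browser; curl can send the request through the same proxy (assuming the stack above is up and port 8080 is published as shown):

# route the request through the proxy container instead of the normal resolver
curl -x http://127.0.0.1:8080 http://client.dev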
The request will follow the route:
- Access a local development domain in a browser
- The proxy extension forwards that request to mitm-nginx-proxy-companion instead of the "real" internet
- mitm-nginx-proxy-companion tries to resolve the domain name through the DNS server in the same container
- If the domain is not a "local" one, it forwards the request to the "real" internet
- But if the domain is a "local" one, it forwards the request to the nginx-proxy
- The nginx-proxy in turn forwards the request to the appropriate container that includes the service we want to access
Side notes:
- links was removed as it's outdated and has been replaced by Docker networks
- you don't need to add domain names to the server and database containers. client will be able to reach them at the server and database hostnames because they are all in the same network (similar to what link was doing previously)
- you don't need to use ports on the server and database containers, because ports only forwards ports for access through 127.0.0.1. PHP in the client container will only make "back-end" requests to other containers, and since those containers are in the same network, you can already reach them at database:5432 and server:3000. The same goes for server <-> database connections. (A short sketch follows this list.)
- I am the author of mitm-nginx-proxy-companion
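To illustrate that same-network access, you can check name resolution from inside the client container. This is a quick sketch; it assumes getent is available, which it is in Debian-based images such as php:5.6-apache.

# confirm that the client container resolves the other services by name
docker-compose exec client getent hosts database
docker-compose exec client getent hosts server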