Access Docker container from host using container's name

I am developing a service and using Docker Compose to spin up services like Postgres, Redis, and Elasticsearch. I have a web application based on Ruby on Rails that writes to and reads from all of those services.

Here is my docker-compose.yml:

version: '2'

services:
  redis:
    image: redis:2.8
    networks:
      - frontapp

  elasticsearch:
    image: elasticsearch:2.2
    networks:
      - frontapp

  postgres:  
    image: postgres:9.5
    environment:
      POSTGRES_USER: elephant
      POSTGRES_PASSWORD: smarty_pants
      POSTGRES_DB: elephant
    volumes:
      - /var/lib/postgresql/data
    networks:
      - frontapp

networks:
  frontapp:
    driver: bridge

And I can ping containers within this network:

$ docker-compose run redis /bin/bash
root@777501e06c03:/data# ping postgres
PING postgres (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: icmp_seq=0 ttl=64 time=0.346 ms
64 bytes from 172.20.0.2: icmp_seq=1 ttl=64 time=0.047 ms
...

So far so good. Now I want to run the Ruby on Rails application on my host machine but still access the Postgres instance with a URL like postgresql://username:password@postgres/database. Currently that is not possible:

$ ping postgres
ping: unknown host postgres
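
For context, this is roughly how the Rails side would be configured (a sketch of config/database.yml using the credentials from the compose file above; host is the service name I want to resolve from the host machine):

# config/database.yml (sketch)
development:
  adapter: postgresql
  host: postgres
  username: elephant
  password: smarty_pants
  database: elephant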

I can see my network in Docker:

$ docker network ls
NETWORK ID          NAME                DRIVER
ac394b85ce09        bridge              bridge              
0189d7e86b33        elephant_default    bridge              
7e00c70bde3b        elephant_frontapp   bridge              
a648554a72fa        host                host                
4ad9f0f41b36        none                null 

And I can see an interface for it:

$ ifconfig 
br-0189d7e86b33 Link encap:Ethernet  HWaddr 02:42:76:72:bb:c2  
          inet addr:172.18.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:76ff:fe72:bbc2/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:36 errors:0 dropped:0 overruns:0 frame:0
          TX packets:60 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2000 (2.0 KB)  TX bytes:8792 (8.7 KB)

br-7e00c70bde3b Link encap:Ethernet  HWaddr 02:42:e7:d1:fe:29  
          inet addr:172.20.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:e7ff:fed1:fe29/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1584 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1597 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:407137 (407.1 KB)  TX bytes:292299 (292.2 KB)
...

But I am not sure what I should do next. I tried playing a bit with /etc/resolv.conf, mainly with the nameserver directive, but that had no effect.

I would appreciate any help or suggestions on how to configure this setup correctly.

UPDATE

After browsing through Internet resources I managed to assign static IP addresses to the containers. For now that is enough for me to continue development. Here is my current docker-compose.yml:

version: '2'

services:
  redis:
    image: redis:2.8
    networks:
      frontapp:
        ipv4_address: 172.25.0.11

  elasticsearch:
    image: elasticsearch:2.2
    networks:
      frontapp:
        ipv4_address: 172.25.0.12

  postgres:  
    image: postgres:9.5
    environment:
      POSTGRES_USER: elephant
      POSTGRES_PASSWORD: smarty_pants
      POSTGRES_DB: elephant
    volumes:
      - /var/lib/postgresql/data
    networks:
      frontapp:
        ipv4_address: 172.25.0.10

networks:
  frontapp:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.25.0.0/16
          gateway: 172.25.0.1
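
With static addresses in place, one simple way to make the postgres hostname work from the host is to pin the service names in /etc/hosts (a minimal sketch for a Linux host; the entries just mirror the ipv4_address values above):

# append host entries matching the compose file above (hypothetical)
printf '%s\n' '172.25.0.10 postgres' '172.25.0.11 redis' '172.25.0.12 elasticsearch' | sudo tee -a /etc/hosts

# the connection URL from the question now resolves from the host
psql postgresql://elephant:smarty_pants@postgres/elephant -c 'SELECT 1;'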

Solution 1:

There is an open-source application that solves this issue: it's called DNS Proxy Server. Here are some examples from the official repository.

It's a DNS server that resolves container hostnames; if it cannot find a matching hostname, it resolves it from the Internet as well.

Start DNS Server

$ docker run --hostname dns.mageddo --restart=unless-stopped -p 5380:5380 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /etc/resolv.conf:/etc/resolv.conf \
defreitas/dns-proxy-server

It will automatically be set as your default DNS server (and the original configuration is restored when it stops).
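
You can verify that it took over as the host's resolver (illustrative; the nameserver address matches the DNS server shown in the lookups below):

$ cat /etc/resolv.conf
nameserver 13.0.0.5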

Creating some containers for testing

Checking the docker-compose file:

$ cat docker-compose.yml
version: '3'
services:
  nginx-1:
    image: nginx
    hostname: nginx-1.docker
    network_mode: bridge 
  linux-1:
    image: alpine
    hostname: linux-1.docker
    command: sh -c 'apk add --update bind-tools && tail -f /dev/null'
    network_mode: bridge # so it can resolve other containers' names even from inside, e.g. nginx-1.docker

Starting the containers:

$ docker-compose up

Resolving container hostnames

From the host:

$ nslookup nginx-1.docker
Server:     13.0.0.5
Address:    13.0.0.5#53

Non-authoritative answer:
Name:   nginx-1.docker
Address: 13.0.0.6

From another container:

$ docker-compose exec linux-1 ping nginx-1.docker
PING nginx-1.docker (13.0.0.6): 56 data bytes
64 bytes from 13.0.0.6: seq=0 ttl=64 time=0.034 ms

It resolves Internet hostnames as well:

$ nslookup google.com
Server:     13.0.0.5
Address:    13.0.0.5#53

Non-authoritative answer:
Name:   google.com
Address: 216.58.202.78

Solution 2:

I'm using a bash script to update /etc/hosts. Why this solution?

  • Short script, easy to review (I didn't want to give some unreviewed application with lots of dependencies access to the Docker socket, which effectively means root access)
  • It uses docker events to run every time a container is started or stopped, as shown in the sample event line after this list (other solutions posted here run every second in a loop, which is way less efficient)
  • Updates /etc/hosts, no separate DNS server needed.
  • Only dependencies are bash, mktemp, grep, xargs, sed, jq and docker, all of which I had already installed.
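
For reference, the lines emitted by docker events look roughly like this (timestamp, ID and names are illustrative), which is why the script can match on the " container start " substring:

2016-05-01T12:00:00.000000000+02:00 container start 9af0b6a89fee (image=postgres:9.5, name=elephant_postgres_1)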

Just put the script somewhere, e.g. /usr/local/bin/docker-update-hosts:

#!/usr/bin/env bash
set -e -u -o pipefail

hosts_file=/etc/hosts
begin_block="# BEGIN DOCKER CONTAINERS"
end_block="# END DOCKER CONTAINERS"

# Create the managed block in /etc/hosts if it is not there yet.
if ! grep -Fxq "$begin_block" "$hosts_file"; then
    echo -e "\n${begin_block}\n${end_block}\n" >> "$hosts_file"
fi

# The leading echo fakes a start event so the block is populated immediately;
# after that, docker events keeps the loop alive.
(echo "| container start |" && docker events) | \
while read event; do
    if [[ "$event" == *" container start "* ]] || [[ "$event" == *" network disconnect "* ]]; then
        hosts_file_tmp="$(mktemp)"
        # Emit "IP name" for every running container (jq strips the leading "/"
        # and the "_1" suffix docker-compose appends), splice those lines in
        # between the markers with sed, then atomically replace the hosts file.
        docker container ls -q | xargs -r docker container inspect | \
        jq -r '.[]|"\(.NetworkSettings.Networks[].IPAddress|select(length > 0) // "# no ip address:") \(.Name|sub("^/"; "")|sub("_1$"; ""))"' | \
        sed -ne "/^${begin_block}$/ {p; r /dev/stdin" -e ":a; n; /^${end_block}$/ {p; b}; ba}; p" "$hosts_file" \
        > "$hosts_file_tmp"
        chmod 644 "$hosts_file_tmp"
        mv "$hosts_file_tmp" "$hosts_file"
    fi
done

Note: The script removes the _1 suffix added by docker-compose from container names. If you don't want that, just remove |sub("_1$"; "") from the script.
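
For reference, after the containers from the question start, the managed block would look something like this (IPs are illustrative; Compose names containers project_service_1, so the stripped names keep the project prefix):

# BEGIN DOCKER CONTAINERS
172.20.0.2 elephant_postgres
172.20.0.3 elephant_redis
172.20.0.4 elephant_elasticsearch
# END DOCKER CONTAINERS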

You can use a systemd service to run this synchronously with Docker: /etc/systemd/system/docker-update-hosts.service:

[Unit]
Description=Update Docker containers in /etc/hosts
Requires=docker.service
After=docker.service
PartOf=docker.service

[Service]
ExecStart=/usr/local/bin/docker-update-hosts

[Install]
WantedBy=docker.service

To activate, run:

systemctl daemon-reload
systemctl enable docker-update-hosts.service
systemctl start docker-update-hosts.service
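
Once the service is running you can check the generated entries from the host (illustrative output):

$ getent hosts elephant_postgres
172.20.0.2      elephant_postgres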

Solution 3:

If you're only using your docker-compose setup locally, you could map the ports from your containers to your host, like this:

elasticsearch:
  image: elasticsearch:2.2
  ports:
    - 9300:9300
    - 9200:9200

Then use localhost:9300 (or 9200 depending on protocol) from your web-app to access Elasticsearch.
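
The same approach covers the Postgres instance from the question (a sketch; 5432 is the default Postgres port):

postgres:
  image: postgres:9.5
  ports:
    - 5432:5432

The app can then connect with postgresql://elephant:smarty_pants@localhost/elephant.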

A more complex solution is to run your own DNS server that resolves container names. I think this solution is a lot closer to what you're asking for. I have previously used SkyDNS when running Kubernetes locally.

There are a few options out there. Have a look at https://github.com/gliderlabs/registrator and https://github.com/jderusse/docker-dns-gen. I didn't try it, but you could potentially map the DNS port to your host in the same way as with the Elasticsearch ports in the previous example, and then add localhost as a nameserver in your resolv.conf to be able to resolve your container names from your host.

Solution 4:

There are two solutions (besides editing /etc/hosts) described here and here

I wrote my own solution in Python and implemented it as a service to provide mapping from container hostnames to their IP addresses. Here it is: https://github.com/nicolai-budico/dockerhosts

It launches dnsmasq with the parameter --hostsdir=/var/run/docker-hosts and updates the file /var/run/docker-hosts/hosts each time the list of running containers changes. Once the file changes, dnsmasq automatically updates its mappings and the container becomes available by hostname within a second.

$ docker run -d --hostname=myapp.local.com --rm -it ubuntu:17.10
9af0b6a89feee747151007214b4e24b8ec7c9b2858badff6d584110bed45b740

$ nslookup myapp.local.com
Server:         127.0.0.53
Address:        127.0.0.53#53

Non-authoritative answer:
Name:   myapp.local.com
Address: 172.17.0.2

There are install and uninstall scripts. All you need to do is allow your system to interact with this dnsmasq instance. I registered it with systemd-resolved:

$ cat /etc/systemd/resolved.conf

[Resolve]
DNS=127.0.0.54
#FallbackDNS=
#Domains=
#LLMNR=yes
#MulticastDNS=yes
#DNSSEC=no
#Cache=yes
#DNSStubListener=udp
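
After editing the file, restart the stub resolver so the new DNS entry takes effect:

$ sudo systemctl restart systemd-resolved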