Forward host port to docker container

Is it possible to have a Docker container access ports opened by the host? Concretely, I have MongoDB and RabbitMQ running on the host, and I'd like to run a process in a Docker container that listens to the queue and (optionally) writes to the database.

I know I can forward a port from the container to the host (via the -p option) and have a connection to the outside world (i.e. the internet) from within the Docker container, but I'd prefer not to expose the RabbitMQ and MongoDB ports from the host to the outside world.

EDIT: some clarification:

Starting Nmap 5.21 ( http://nmap.org ) at 2013-07-22 22:39 CEST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00027s latency).
PORT     STATE SERVICE
6311/tcp open  unknown

joelkuiper@vps20528 ~ % docker run -i -t base /bin/bash
root@f043b4b235a7:/# apt-get install nmap
root@f043b4b235a7:/# nmap 172.16.42.1 -p 6311 # IP found via docker inspect -> gateway

Starting Nmap 6.00 ( http://nmap.org ) at 2013-07-22 20:43 UTC
Nmap scan report for 172.16.42.1
Host is up (0.000060s latency).
PORT     STATE    SERVICE
6311/tcp filtered unknown
MAC Address: E2:69:9C:11:42:65 (Unknown)

Nmap done: 1 IP address (1 host up) scanned in 13.31 seconds

I had to use this trick to get any internet connection within the container: My firewall is blocking network connections from the docker container to outside
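(For the record, that fix boils down to letting the host firewall accept and forward the container traffic; a rough sketch, where the interface names and subnet are illustrative and may differ on your setup:)

# allow forwarding between the docker bridge and the outbound interface
sudo iptables -A FORWARD -i docker0 -o eth0 -j ACCEPT
sudo iptables -A FORWARD -i eth0 -o docker0 -m state --state RELATED,ESTABLISHED -j ACCEPT
# masquerade container traffic leaving the host
sudo iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE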

EDIT: Eventually I went with creating a custom bridge using pipework and having the services listen on the bridge IPs. I chose this approach over having MongoDB and RabbitMQ listen on the docker bridge because it gives more flexibility.
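(Roughly, the pipework setup looks like the sketch below; the bridge name and addresses are illustrative, not the exact values I used:)

# on the host: create a bridge and give it an address
sudo ip link add br1 type bridge
sudo ip addr add 192.168.242.1/24 dev br1
sudo ip link set br1 up

# attach a running container to the bridge with its own IP
sudo pipework br1 $CONTAINER_ID 192.168.242.2/24

# finally, make MongoDB/RabbitMQ listen on 192.168.242.1 and connect
# to that address from inside the container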


A simple but relatively insecure way would be to use the --net=host option to docker run.

This option makes the container use the host's networking stack. You can then connect to services running on the host simply by using "localhost" as the hostname.

This is easier to configure because you won't have to set up the service to accept connections from your container's IP address, and you won't have to tell the container a specific IP address or hostname to connect to, only a port.

For example, you can test it out by running the following command, which assumes your image is called my_image, your image includes the telnet utility, and the service you want to connect to is on port 25:

docker run --rm -i -t --net=host my_image telnet localhost 25

If you consider doing it this way, please see the caution about security on this page:

https://docs.docker.com/articles/networking/

It says:

--net=host -- Tells Docker to skip placing the container inside of a separate network stack. In essence, this choice tells Docker to not containerize the container's networking! While container processes will still be confined to their own filesystem and process list and resource limits, a quick ip addr command will show you that, network-wise, they live “outside” in the main Docker host and have full access to its network interfaces. Note that this does not let the container reconfigure the host network stack — that would require --privileged=true — but it does let container processes open low-numbered ports like any other root process. It also allows the container to access local network services like D-bus. This can lead to processes in the container being able to do unexpected things like restart your computer. You should use this option with caution.
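You can see that difference for yourself (using the alpine image here, whose busybox provides the ip command):

docker run --rm alpine ip addr            # container's own stack: just lo and eth0
docker run --rm --net=host alpine ip addr # the host's interfaces, including docker0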


Your docker host exposes an adapter to all the containers. Assuming you are on a recent Ubuntu, you can run

ip addr

This will give you a list of network adapters, one of which will look something like

3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 22:23:6b:28:6b:e0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.42.1/16 scope global docker0
    inet6 fe80::a402:65ff:fe86:bba6/64 scope link
       valid_lft forever preferred_lft forever

You will need to tell RabbitMQ/MongoDB to bind to that IP (172.17.42.1). After that, you should be able to open connections to 172.17.42.1 from within your containers.
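For MongoDB, for example, that could look like this (the config file location and format vary by version and distro, and the address must match your own docker0 IP):

# /etc/mongodb.conf on the host
bind_ip = 127.0.0.1,172.17.42.1

After restarting the service, a quick check from a container (27017 is MongoDB's default port):

docker run --rm -it busybox telnet 172.17.42.1 27017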


You could also create an SSH tunnel.

docker-compose.yml:

---

version: '2'

services:
  kibana:
    image: "kibana:4.5.1"
    links:
      - elasticsearch
    volumes:
      - ./config/kibana:/opt/kibana/config:ro

  elasticsearch:
    build:
      context: .
      dockerfile: ./docker/Dockerfile.tunnel
    entrypoint: ssh
    command: "-N elasticsearch -L 0.0.0.0:9200:localhost:9200"

docker/Dockerfile.tunnel:

FROM buildpack-deps:jessie

RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive \
    apt-get -y install ssh && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

COPY ./config/ssh/id_rsa /root/.ssh/id_rsa
COPY ./config/ssh/config /root/.ssh/config
COPY ./config/ssh/known_hosts /root/.ssh/known_hosts
# $USER is not set during "docker build", so set the ownership explicitly
RUN chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/config && \
    chown -R root:root /root/.ssh

config/ssh/config:

# Elasticsearch Server
Host elasticsearch
    HostName jump.host.czerasz.com
    User czerasz
    ForwardAgent yes
    IdentityFile ~/.ssh/id_rsa

This way the elasticsearch container keeps an SSH tunnel open to the server running the actual service (Elasticsearch, MongoDB, PostgreSQL) and exposes that service on port 9200 to the linked containers.
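To verify the tunnel works, you could query the forwarded port from the linked container (assuming curl is available in the kibana image):

docker-compose up -d
docker-compose exec kibana curl http://elasticsearch:9200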


As stated in one of the comments, this works for Docker Desktop on Mac and Windows; on Linux it requires Docker 20.10+ together with --add-host=host.docker.internal:host-gateway on docker run:

I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST

The host has a changing IP address (or none if you have no network access). We recommend that you connect to the special DNS name host.docker.internal which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.

You can also reach the gateway using gateway.docker.internal.

Quoted from https://docs.docker.com/docker-for-mac/networking/

This worked for me without using --net=host.
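For example, to reach RabbitMQ on the host (5672 is its default port) from a throwaway container:

docker run --rm -it busybox telnet host.docker.internal 5672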