How to get the true client IP for a service running inside docker
My scenario: I'd like my docker-based services to know what IP is calling them from the web (a requirement of my application).
I'm running on a cloud VPS, debian jessie, with docker 1.12.2.
I use nc in verbose mode to open port 8080:
docker run --rm -p "8080:8080" centos bash -c "yum install -y nc && nc -vv -kl -p 8080"
Say the VPS has the domain example.com, and from my other machine, which let's say has the IP dead:::::beef, I call:
nc example.com 8080
The server says:
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Listening on :::8080
Ncat: Listening on 0.0.0.0:8080
Ncat: Connection from 172.17.0.1.
Ncat: Connection from 172.17.0.1:49799.
172.17.0.1 is on the server's local network and has nothing to do with my client, of course. In fact:
docker0 Link encap:Ethernet HWaddr ....
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
If I start the container in host networking mode
docker run --rm --net=host -p "8080:8080" centos bash -c "yum install -y nc && nc -vv -kl -p 8080"
and call my server again
nc example.com 8080
I get the expected result:
Ncat: Connection from dead:::::beef.
Ncat: Connection from dead:::::beef:55650.
My questions are:
- Why does docker "mask" IPs when not in host networking mode? I suspect the docker daemon process is the one opening the port; it receives the connection and then relays it to the container process over its own internal virtual network interface, so nc running in the container only sees the call coming from the docker daemon's IP.
- (How) can I have my docker service know about the outside IPs calling it, without putting everything into host mode?
Solution 1:
According to the documentation, the default driver (i.e. what you get when you don't specify --net=host in docker run) is the bridge network driver.
It's not about docker "masking" the IP addresses, but about how the bridge and host networking modes differ.
In terms of Docker, a bridge network uses a software bridge which allows containers connected to the same bridge network to communicate, while providing isolation from containers which are not connected to that bridge network. The Docker bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other.
Docker creates an isolated network so that the containers on the same bridge network can communicate with each other; in your case that bridge is docker0. So by default, the containers on your docker host communicate within this network.
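If you want to see this for yourself, you can inspect the default bridge network (a quick check; the exact JSON layout varies a bit between Docker versions):
# show the subnet and gateway of the default bridge network
docker network inspect bridge | grep -E '"Subnet"|"Gateway"'
# typically prints something like:
#   "Subnet": "172.17.0.0/16",
#   "Gateway": "172.17.0.1"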
As you might have already figured out by now: yes, 172.17.0.1 is the default gateway on the docker0 network, but it does not act as a transparent router that forwards packets to the destination with their original source address intact, hence you see it as the source in netcat's output.
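You can confirm the gateway from inside a container by printing its routing table (a minimal sketch; it assumes the image ships the ip tool, otherwise install iproute first):
# the container's default route points at the docker0 gateway
docker run --rm centos bash -c "ip route"
# expected: default via 172.17.0.1 dev eth0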
In fact, you can verify this by running ss -tulnp on your docker host. You should see that the process listening on port 8080 is docker.
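For example (a sketch; depending on the Docker version the owning process may be reported as docker, docker-proxy, or dockerd):
# run this on the host while the bridged container from above is up
sudo ss -tulnp | grep 8080
# the socket on port 8080 should be owned by docker, not by nc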
On the other hand, using the host networking driver means there is no isolation between the container and the host. You can verify this by running ss -tulnp on your docker host; you should see the container's own process listening on the socket instead of docker (in your case, you should see nc).
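Repeating the check while the --net=host container is running should show the difference:
# run this on the host while the host-networked container is up
sudo ss -tulnp | grep 8080
# this time the socket should be owned by nc itself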
Solution 2:
I had the exact same issue, and this solution from @TosoBoso works well for me with an Nginx reverse proxy in front of my web back-end. Echo's documentation also helped me better understand this problem.
Basically, you set the forwarding headers (at a minimum you need X-Real-IP) in your Nginx conf, and then you can read the client IP from that header later in your program.
location /api/ {
    # forward to the container's published port (7006 here)
    proxy_pass http://127.0.0.1:7006/api/;
    proxy_set_header Host $http_host;
    # the original client address, as seen by Nginx
    proxy_set_header X-Real-IP $remote_addr;
    # append the client to any existing X-Forwarded-For chain
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
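To check that the headers actually reach your back-end, you can reuse the netcat trick from the question and dump the raw request Nginx forwards (a sketch; it assumes Nginx runs on the host and proxies to 127.0.0.1:7006 as configured above):
# one-shot listener in the container that prints whatever Nginx forwards
docker run --rm -p "7006:7006" centos bash -c "yum install -y nc && nc -vv -l -p 7006"
# from another machine: curl http://example.com/api/
# (curl will hang since nc sends no HTTP response; stop it once the headers show)
# the dumped request should contain a line like: X-Real-IP: <your client's IP>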