Transfer files from a non-host machine to a Docker container and vice versa
I have a running Ubuntu Docker container on a Linux machine. I have great difficulty every time I try to copy a file to or from the Docker container from my local machine: I have to copy the file to the host machine first, and then run docker cp to transfer it.
Is there any way to access the Docker container directly from my Windows-based local machine?
Since the container only has a host-local IP, one option that comes to mind is to tell the host to forward all SSH connections arriving on a specific port to the container. Any other solution you might have is welcome as well.
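For reference, the current two-step workflow looks roughly like this (a sketch with made-up names and paths; "linux-host" and "mycontainer" are placeholders):
scp C:\data\file.txt user@linux-host:/tmp/file.txt
docker cp /tmp/file.txt mycontainer:/tmp/file.txt
The first command runs on the Windows machine (e.g. with the built-in OpenSSH client or WinSCP), the second on the Linux host.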
One of the design goals of containers is isolation, so the difficulty of accessing files inside a container is partly to be expected.
There are (at least) these three options. Edit: Thought of a fourth one :)
Connect to a daemon inside the container
It might seem tempting to run an sshd process inside the container in order to access it directly through any SSH client. This is, however, against the idea of running "one process"/"one service" per container (unless the whole point of the container is to be an SSH daemon). The same goes for any other file-sharing service running directly inside the container (ftpd etc.).
Whether this option is viable depends on whether the container can be changed to include the daemon for your purposes and whether that is wanted. I'd argue that in an environment where the isolation provided by containers is sought, running additional services inside the containers is counter-productive. If it is, however, more of a "development environment", then running an additional daemon inside the container may be beneficial. As containers provide very limited process supervision, consider employing a suitable service manager inside the container if you run multiple services (systemd is heavy but does it all; less complete alternatives exist :) ).
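For a development container based on Debian/Ubuntu, adding an SSH daemon could look roughly like this (a sketch with assumed names: the container is called devbox and must have had a port published at creation time, e.g. -p 2222:22, for the daemon to be reachable):
docker exec devbox apt-get update
docker exec devbox apt-get install -y openssh-server
docker exec devbox mkdir -p /run/sshd
docker exec -d devbox /usr/sbin/sshd -D
You would still need to create a user or set a password inside the container before logging in.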
Expose volume from the container
As long as you have a way to connect to the Linux host, Docker provides an integrated mechanism for mapping paths from the (Linux) host into the container. This is done by specifying the -v flag upon container creation, like this:
docker run --rm -it -v /home/linux-fan/wd:/media/wd debian:10 /bin/bash
Here, my local directory /home/linux-fan/wd is provided under /media/wd inside the container. Note that Docker does nothing special to make user IDs inside and outside the container match, which may cause permission-denied errors if the IDs are not aligned properly.
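One way to keep the IDs aligned (a sketch, assuming the files in the shared directory should be owned by the same user that starts the container on the host):
docker run --rm -it --user "$(id -u):$(id -g)" -v /home/linux-fan/wd:/media/wd debian:10 /bin/bash
The --user flag makes processes inside the container run with the host user's numeric UID/GID, so files created in /media/wd end up owned by that user on the host.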
Additionally (and optionally), it might be interesting to expose the path from the Linux host as a Windows share using Samba, such that SSH is no longer needed for file access at all.
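A minimal Samba share definition for this directory might look like the following (a sketch; the share name is made up, and the usual Samba setup such as creating the user with smbpasswd is omitted):
[wd]
    path = /home/linux-fan/wd
    valid users = linux-fan
    read only = no
The share would then be reachable from Windows as \\linux-host\wd (host name assumed).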
Other volume storage options
Although I do not have any experience with them, Docker not only allows providing directories as volumes, but also has an option for adding external storage (cf. https://docs.docker.com/engine/extend/plugins_volume/). Such storage might again be accessible from the Windows side.
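As an illustration of the plugin route (a sketch under the assumption that the vieux/sshfs plugin used as the example in the Docker documentation is acceptable; names are made up):
docker plugin install vieux/sshfs
docker volume create -d vieux/sshfs -o sshcmd=user@remote-host:/path/to/dir wd-volume
docker run --rm -it -v wd-volume:/media/wd debian:10 /bin/bash
Here the volume contents live on remote-host and are mounted into the container via sshfs.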
Access the remote Docker daemon directly
Docker is itself a service and can be exposed to the outside network. To do this, you need to set up the proper TLS certificates and enable network access to the Docker daemon on the Linux host. It should then be possible to run a Windows Docker client (which connects to the Linux Docker daemon through TLS) and issue suitable docker cp commands (as well as all other docker commands) from your local machine directly.
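Once the daemon is reachable, the Windows client only needs to be pointed at it (a sketch; host name, port and container name are assumptions, and the TLS certificates must already be in place under %USERPROFILE%\.docker or be passed explicitly):
docker -H tcp://linux-host:2376 --tlsverify cp C:\data\file.txt mycontainer:/tmp/file.txt
docker -H tcp://linux-host:2376 --tlsverify cp mycontainer:/var/log/app.log C:\data\
Alternatively, setting the DOCKER_HOST and DOCKER_TLS_VERIFY environment variables avoids repeating the flags on every command.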
Edit 2: Without re-creating containers
Being limited to an already running container certainly reduces the options a lot.
The sshd-in-container solution might be difficult if the container exposes no port for the daemon to listen on. If that is the case, working around this with an additional sshd container that is linked to the running one and does some forwarding can work, but it is hacky.
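One possible shape of such a workaround (a rough sketch with assumed names: the running container is called app, sits on the default bridge network, and already has an sshd listening on its internal port 22; alpine/socat is just one convenient forwarder image):
docker inspect -f '{{.NetworkSettings.IPAddress}}' app
docker run -d --name ssh-forward -p 2222:2222 alpine/socat tcp-listen:2222,fork,reuseaddr tcp-connect:<IP-from-above>:22
SSH connections to the host's port 2222 would then be forwarded into the existing container.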
Remote Docker access does not require the containers to be re-created, but it does require changes to the Docker configuration and thus a restart (without re-creation) of all running containers, because the Docker daemon itself needs to be restarted.
Finally, if none of this is possible, a pull-based solution running as a newly created process inside the existing container might be even more hacky, but it remains available as a sort of "last resort". I doubt, however, that this is more convenient than the existing ssh + docker cp workflow. In case you want to optimize the existing solution: WinSCP offers a function for automatically sending changed files to Linux (the part before calling docker cp can thus possibly be simplified).