Install node_modules inside Docker container and synchronize them with host
At first, I would like to thank David Maze and trust512 for posting their answers. Unfortunately, they didn't help me to solve my problem.
I would like to post my answer to this question.
My `docker-compose.yml`:
```yaml
---
# Define Docker Compose version.
version: "3"
# Define all the containers.
services:
  # Frontend Container.
  frontend:
    build: ./app/frontend
    volumes:
      - ./app/frontend:/usr/src/app
    ports:
      - 3000:3000
    environment:
      NODE_ENV: development
    command: /usr/src/app/entrypoint.sh
```
My `Dockerfile`:
```dockerfile
# Set the base image.
FROM node:10

# Create and define the node_modules's cache directory.
RUN mkdir /usr/src/cache
WORKDIR /usr/src/cache

# Install the application's dependencies into the node_modules's cache directory.
COPY package.json ./
COPY package-lock.json ./
RUN npm install

# Create and define the application's working directory.
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
```
And last but not least, `entrypoint.sh`:
```bash
#!/bin/bash
cp -r /usr/src/cache/node_modules/. /usr/src/app/node_modules/
exec npm start
```
The trickiest part here is to install the `node_modules` into the `node_modules` cache directory (`/usr/src/cache`), which is defined in our `Dockerfile`. After that, `entrypoint.sh` copies the `node_modules` from the cache directory (`/usr/src/cache`) to our application directory (`/usr/src/app`). Thanks to this, the entire `node_modules` directory will appear on our host machine.
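As a quick illustration (using hypothetical `/tmp` paths, not the real container paths), the copy performed by `entrypoint.sh` on every start looks like this:

```shell
# Simulate the image's cached install and the bind-mounted app directory.
mkdir -p /tmp/demo/cache/node_modules /tmp/demo/app/node_modules
echo "left-pad" > /tmp/demo/cache/node_modules/pkg.txt

# What entrypoint.sh does: copy the cached modules into the app directory.
cp -r /tmp/demo/cache/node_modules/. /tmp/demo/app/node_modules/

ls /tmp/demo/app/node_modules   # → pkg.txt
```

Because the app directory is the bind mount, everything copied into it becomes visible on the host immediately.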
Looking at my question above, I wanted:

- to install `node_modules` automatically instead of manually
- to install `node_modules` inside the Docker container instead of on the host
- to have `node_modules` synchronized with the host (if I install a new package inside the Docker container, it should be synchronized with the host automatically, without any manual actions)
The first thing is done: `node_modules` are installed automatically. The second thing is done too: `node_modules` are installed inside the Docker container (so there will be no cross-platform issues). And the third thing is done as well: the `node_modules` that were installed inside the Docker container will be visible on our host machine, and they will be synchronized! If we install a new package inside the Docker container, it will be synchronized with our host machine at once.
An important thing to note: strictly speaking, a new package installed inside the Docker container will appear in `/usr/src/app/node_modules`. As this directory is synchronized with our host machine, the new package will appear in our host machine's `node_modules` directory too. However, `/usr/src/cache/node_modules` will still contain the old build at this point (without the new package). Anyway, that is not a problem for us. During the next `docker-compose up --build` (`--build` is required), Docker will re-install the `node_modules` (because `package.json` was changed) and the `entrypoint.sh` file will copy them to `/usr/src/app/node_modules`.
You should take one more important thing into account. If you `git pull` the code from the remote repository or `git checkout your-teammate-branch` while Docker is running, there may be new packages added to the `package.json` file. In that case, you should stop Docker with `CTRL + C` and bring it up again with `docker-compose up --build` (`--build` is required). If your containers are running as a daemon, just execute `docker-compose stop` to stop the containers and bring them up again with `docker-compose up --build` (`--build` is required).
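In shell terms, the rebuild workflow after pulling changes to `package.json` boils down to the two Compose commands already mentioned, shown here only for convenience:

```shell
# After git pull / git checkout brings in new packages:
docker-compose stop        # or CTRL + C if running in the foreground
docker-compose up --build  # --build re-runs npm install in the image
```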
If you have any questions, please let me know in the comments.
Hope this helps.
Having run into this issue and finding the accepted answer pretty slow to copy all `node_modules` to the host on every container run, I managed to solve it by installing the dependencies in the container, mirroring the host volume, and skipping the install if a `node_modules` folder is already present:
`Dockerfile`:

```dockerfile
FROM node:12-alpine
WORKDIR /usr/src/app
CMD [ -d "node_modules" ] && npm run start || npm ci && npm run start
```
`docker-compose.yml`:

```yaml
version: '3.8'
services:
  service-1:
    build: ./
    volumes:
      - ./:/usr/src/app
```
When you need to reinstall the dependencies, just delete `node_modules`.
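The `CMD` above relies on shell short-circuiting: the `npm ci` branch runs only when the `node_modules` check fails (note that in an `A && B || C` chain, `C` would also run if `B` itself failed, not only if `A` did). A hypothetical stand-in using `echo` instead of `npm`, just to illustrate the conditional:

```shell
# Stand-in for the CMD: "start" if node_modules exists, else "install" then "start".
start_app() {
  [ -d "$1/node_modules" ] && echo start || { echo install; echo start; }
}

mkdir -p /tmp/app-a/node_modules   # dependencies already installed
mkdir -p /tmp/app-b                # fresh checkout, no node_modules

start_app /tmp/app-a   # → start
start_app /tmp/app-b   # → install, then start
```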
There are three things going on here:

- When you run `docker build` or `docker-compose build`, your Dockerfile builds a new image containing a `/usr/src/app/node_modules` directory and a Node installation, but nothing else. In particular, your application isn't in the built image.
- When you `docker-compose up`, the `volumes: ['./app/frontend:/usr/src/app']` directive hides whatever was in `/usr/src/app` and mounts host system content on top of it.
- Then the `volumes: ['frontend-node-modules:/usr/src/app/node_modules']` directive mounts the named volume on top of the `node_modules` tree, hiding the corresponding host system directory.
If you were to launch another container and attach the named volume to it, I expect you'd see the `node_modules` tree there. For what you're describing, you just don't want the named volume: delete the second line from the `volumes:` block and the `volumes:` section at the end of the `docker-compose.yml` file.
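Assuming the compose file looks like the one quoted in the question, the corrected version would keep only the bind mount (the named `frontend-node-modules` volume and its top-level `volumes:` declaration are simply removed):

```yaml
services:
  frontend:
    build: ./app/frontend
    volumes:
      - ./app/frontend:/usr/src/app   # bind mount only; no named node_modules volume
```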
A Simple, Complete Solution
You can install `node_modules` in the container using the external named volume trick and synchronize it with the host by configuring the volume's storage location to point to your host's `node_modules` directory. This can be done with a named volume using the `local` driver and a bind mount, as seen in the example below.

The volume's data is stored on your host anyway, in something like `/var/lib/docker/volumes/`, so we're just storing it inside your project instead.

To do this in Docker Compose, just add your `node_modules` volume to your front-end service, and then configure the volume in the named `volumes` section, where `device` is the relative path (from the location of `docker-compose.yml`) to your local (host) `node_modules` directory.
`docker-compose.yml`:

```yaml
version: '3.9'
services:
  ui:
    # Your service options...
    volumes:
      - node_modules:/path/to/node_modules
volumes:
  node_modules:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ./local/path/to/node_modules
```
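One caveat, in my experience with the `local` driver: with `o: bind`, Docker will not create the host directory for you, so the `device` path must exist before the volume is first used. For the placeholder path from the example above:

```shell
# The bind target must exist on the host before `docker compose up`,
# otherwise mounting the volume fails. (Path is the placeholder from the example.)
mkdir -p ./local/path/to/node_modules
```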
The key with this solution is to never make changes directly in your host `node_modules`, but always install, update, or remove Node packages in the container.
Documentation:
- https://docs.docker.com/storage/volumes/
- https://docs.docker.com/storage/bind-mounts/
- https://docs.docker.com/compose/compose-file/compose-file-v3/#driver_opts
No one has mentioned a solution that actually uses Docker's `ENTRYPOINT` feature.
Here is my working solution:
`Dockerfile` (multi-stage build, so it is ready for both production and local dev):
```dockerfile
FROM node:10.15.3 as production
WORKDIR /app

COPY package*.json ./
RUN npm install && npm install --only=dev

COPY . .
RUN npm run build

EXPOSE 3000
CMD ["npm", "start"]

FROM production as dev

COPY docker/dev-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["dev-entrypoint.sh"]
CMD ["npm", "run", "watch"]
```
`docker/dev-entrypoint.sh`:

```sh
#!/bin/sh
set -e

npm install && npm install --only=dev ## Note this line, rest is copy+paste from original entrypoint

if [ "${1#-}" != "${1}" ] || [ -z "$(command -v "${1}")" ]; then
  set -- node "$@"
fi

exec "$@"
```
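The `[ "${1#-}" != "${1}" ]` test in the entrypoint checks whether the first argument starts with a dash, i.e. is a flag rather than a command, so that flags get forwarded to `node`. A small stand-alone sketch of that pattern (the `classify` helper is hypothetical, for illustration only):

```shell
# "${1#-}" strips one leading dash; if the result differs from "$1",
# the argument started with "-" and is treated as a flag.
classify() {
  if [ "${1#-}" != "$1" ]; then
    echo flag
  else
    echo command
  fi
}

classify --inspect   # → flag
classify npm         # → command
```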
`docker-compose.yml`:

```yaml
version: "3.7"
services:
  web:
    build:
      target: dev
      context: .
    volumes:
      - .:/app:delegated
    ports:
      - "3000:3000"
    restart: always
    environment:
      NODE_ENV: dev
```
With this approach you achieve all three points required, and in my opinion it is a much cleaner way: no need to move files around.