What's the difference between Docker Compose vs. Dockerfile [closed]
Solution 1:
Dockerfile
A Dockerfile is a simple text file that contains the commands a user could call to assemble an image.
Example Dockerfile:

FROM ubuntu:22.04
LABEL maintainer="john doe"
RUN apt-get update && \
    apt-get install -y python3 python3-pip wget
RUN pip3 install Flask
COPY hello.py /home/hello.py
WORKDIR /home
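To turn that into a running container, you would build and run it by hand. A minimal sketch (the flask-hello tag is illustrative, and it assumes hello.py starts a Flask app listening on 0.0.0.0:5000; because the Dockerfile declares no CMD, the command to run is supplied at the end):

docker build -t flask-hello .
docker run -p 5000:5000 flask-hello python3 hello.py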
Docker Compose

Docker Compose:
- is a tool for defining and running multi-container Docker applications.
- lets you define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
- gets an app running in one command by just running docker-compose up.
Example docker-compose.yml:

version: "3"
services:
  web:
    build: .
    ports:
      - '5000:5000'
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}
Solution 2:
The answer is neither.
Docker Compose (herein referred to as compose) will use the Dockerfile if you add the build command to your project's docker-compose.yml.

Your Docker workflow should be to build a suitable Dockerfile for each image you wish to create, then use compose to assemble the images using the build command.

You can specify the path to an individual Dockerfile using build: /path/to/dockerfiles/blah, where /path/to/dockerfiles/blah is the directory in which blah's Dockerfile lives.
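A minimal sketch of that layout (the service name blah and the path are the placeholders from the text above):

version: "3"
services:
  blah:
    # compose runs the build in this directory, using the Dockerfile it finds there
    build: /path/to/dockerfiles/blah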
Solution 3:
docker-compose exists to keep you from having to write the ton of commands you would otherwise have to run with the docker CLI. docker-compose also makes it easy to start up multiple containers at the same time and automatically connect them together with some form of networking.

The purpose of docker-compose is to function as the docker CLI, but to issue multiple commands much more quickly.
To make use of docker-compose, you need to encode the commands you were running before into a docker-compose.yml file. You are not just going to copy-paste them into the YAML file; there is a special syntax. Once created, you feed the file to the docker-compose CLI, and it is up to the CLI to parse the file and create all the different containers with the correct configuration you specify.
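For a sense of what is being encoded, the two-container example below would look roughly like this as raw docker commands (the network and container names here are illustrative):

docker network create app-net
docker run -d --name redis-server --network app-net redis
docker build -t node-app .
docker run -d --name node-app --network app-net -p 8081:8081 node-app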
So you will have separate containers: let's say one is redis-server and the second one is node-app, and you want the latter created using the Dockerfile in your current directory. Additionally, after making that container, you would map some port from the container to the local machine to access everything running inside of it. So for your docker-compose.yml file, you would want to start the first line like so:
version: '3'
That tells Docker the version of the docker-compose file format you want to use. After that, you have to add:
version: '3'
services:
  redis-server:
    image: 'redis'
  node-app:
    build: .
Please notice the indentation; it is very important. Also notice that for one service I am grabbing an image, but for the other I am telling docker-compose to look inside the current directory to build the image that will be used for the second container. Then you want to specify all the different ports that you want open on this container:
version: '3'
services:
  redis-server:
    image: 'redis'
  node-app:
    build: .
    ports:
      -
Please notice the dash; a dash in a YAML file is how we specify an array entry. In this example, I am mapping port 8081 on my local machine to port 8081 on the container, like so:
version: '3'
services:
  redis-server:
    image: 'redis'
  node-app:
    build: .
    ports:
      - "8081:8081"
So the first port is the one on your local machine, and the other is the port on the container; you can also make the two numbers different, to avoid confusion, like so:
version: '3'
services:
  redis-server:
    image: 'redis'
  node-app:
    build: .
    ports:
      - "4001:8081"
By developing your docker-compose.yml file like this, you create these containers on essentially the same network, and they have free access to communicate with each other any way they please and exchange as much information as they want. When the two containers are created using docker-compose, no port declarations are needed for them to talk to each other; the ports section is only for reaching a container from your local machine.
Now, in my example, we need to do some code configuration in the Node.js app that looks something like this:
const express = require('express');
const redis = require('redis');

const app = express();
const client = redis.createClient({
  host: 'redis-server' // the compose service name doubles as the hostname
});
I use the example above to make you aware that there may be some configuration, specific to your project, that you have to do in addition to the docker-compose.yml file. Now, if you ever find yourself working with a Node.js app and Redis, you want to ensure you are aware of the default port Redis runs on, so I will add this:
const express = require('express');
const redis = require('redis');

const app = express();
const client = redis.createClient({
  host: 'redis-server',
  port: 6379 // Redis's default port
});
So Docker is going to see that the Node app is looking for a host called redis-server and resolve that connection to the running Redis container. The whole time, the Dockerfile only contains this:
FROM node:alpine
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
So, whereas before you would have to run a docker build and a docker run per image to get instances of all the containers or services, you can instead run docker-compose up, and you don't have to specify an image, because Docker will look in the current working directory for a docker-compose.yml file.
Before docker-compose.yml, we had to deal with the two separate commands docker build . and docker run myimage; but in the docker-compose world, if you want to rebuild your images, you write docker-compose up --build. That tells Docker to start up the containers again, but rebuild the images first to get the latest changes.
So docker-compose makes it easier to work with multiple containers. The next time you need to start this group of containers in the background, you can do docker-compose up -d; and to stop them, you can do docker-compose down.
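Put together, a typical compose workflow looks like this (illustrative; run from the directory containing docker-compose.yml):

docker-compose up            # build if needed, then start all services in the foreground
docker-compose up --build    # force a rebuild of the images, then start
docker-compose up -d         # start the group of containers in the background
docker-compose down          # stop and remove the containers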
Solution 4:
A Docker Compose file is a way for you to declaratively orchestrate the startup of multiple containers, rather than running each Dockerfile separately with a bash script, which would be much slower to write and harder to debug.