Docker Series - Part 3
Docker Compose for building and running containers from Docker configurations
Published: 02 April 2020
What is Docker Compose
It's a tool which can read a configuration file and spin up multiple containers at once, with whatever services we like in those containers.
Writing custom commands on the command line and setting up networking between different running containers becomes a huge burden when we get into more detailed parameters.
Consider what it takes to do volume mapping, port mapping, a custom Dockerfile and tagging on the command line (note that `-p` and `-v` belong to docker run, not docker build, so this needs two commands):
docker build ./docker-folder -f dev-Dockerfile -t my-custom-container
docker run -p 8000:8000 -v src/app:/usr/app my-custom-container
We can avoid writing such lengthy commands and any potential typos with Docker Compose.
How does it work
By default, Docker Compose looks for a docker-compose.yml file in the directory where the command is executed. This YAML file describes which services we want to spin up and which Dockerfile to use for each.
Let's look at an example:
# docker-compose.yml
version: "3"
services:
  node-app:
    build:
      context: .
      dockerfile: dev-Dockerfile
    ports:
      - "8000:8000"
    command: npm run dev
  test-suite:
    build:
      context: .
      dockerfile: test-Dockerfile
    command: npm run test
version - should be the same across all YAML files used by Docker Compose
services - lists the services we want to spin up (each in its own container)
node-app - the name of the first service (it could be named anything, as with the "test-suite" service)
build - gives information about the Dockerfile and its location
context - the directory in which Docker Compose looks for the Dockerfile
dockerfile - the name of the Dockerfile (by default Docker looks for a file named Dockerfile, unless specified otherwise)
ports - handles port mapping between host and container
command - overrides the default command of the Dockerfile
There are many other helpful properties which we will look at later.
To run this:
# used when running the very first time
docker-compose up --build
OR
# used during subsequent runs
docker-compose up
OR
# used sparingly, for one-off runs where the default command needs to be overridden (run takes a service name)
docker-compose run node-app echo hi
Volumes Mapping
If we run the above YAML file with Docker Compose, we will see a Node.js server running inside one container and a test suite running inside another.
However, if we make any changes to our code locally, they won't be reflected in either the node-app container or the test-suite container. Seeing changes immediately is very much needed during local development, and this is where volume mapping comes into play.
Basically, we tell Docker to map folders inside the container's project directory to folders in our local project copy. Docker creates a reference between them, so any changes we make to our local files are reflected inside the container.
Let's look at an example:
# Dockerfile
FROM node:alpine
# WORKDIR is needed so the paths below land in /app, matching the volume rules in docker-compose.yml
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "start"]
# docker-compose.yml
version: "3"
services:
  node-app:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - /app/node_modules
      - .:/app
The volumes property is an array and can take multiple entries, which are usually of 2 types:
The first type, a bind mount (source:destination), creates a reference from the container path back to a local source folder, whereas the second type, an anonymous volume (a single container path), tells Docker to keep that folder inside the container and use it as is.
~ A little Gotcha!
In the above example, the rule - .:/app mounts our local project directory over the container's project root. However, specifying only this rule would have created issues.
In a fresh project we don't have a node_modules folder locally, so this mount would hide the node_modules that npm install created inside the image during the build.
Docker would then have thrown an error while trying to spin up our Node.js server due to the missing dependencies. This is solved by using - /app/node_modules, which in a simple sense states that the node_modules folder inside the container should be kept as is.
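Putting the two rules together, the volumes section can be read like this (a sketch annotating the same entries as above):

```yaml
volumes:
  # Anonymous volume: keep the container's /app/node_modules
  # (populated by `npm install` during the image build) as is
  - /app/node_modules
  # Bind mount: map the local project directory onto /app,
  # so local edits show up inside the container
  - .:/app
```

The order matters conceptually: the bind mount covers /app, and the anonymous volume then preserves the node_modules subfolder underneath it.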
Using environment variables
As with any project, we may have different environments like dev, staging, testing, prod, etc., and we may want to run different processes in each. This can be achieved in Docker using the environment property inside the docker-compose.yml file.
# docker-compose.yml
version: "3"
services:
  node-app:
    build: .
    environment:
      - NODE_ENV=development
      - DB_USER=Roger_Federer
environment is an array and can take many key/value pairs to inject into the project at runtime, which in Node.js can be accessed as process.env.NODE_ENV or process.env.DB_USER.
Alternatively, we can load variables from a .env file in our project directory, and from multiple files if we need to. This is achieved with the env_file property.
# development YAML file
web:
  env_file:
    - web-variables.env
    - web-secrets.env
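Each referenced file is a plain list of KEY=VALUE lines. For example, web-variables.env might look like this (the values are purely illustrative):

```
NODE_ENV=development
API_URL=http://localhost:8000
```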
Different YAML files for different environments
As mentioned above, we may need different processes for different environments in our projects. To that end, in any real world app, we will end up with multiple docker-compose files and Dockerfiles.
For example, we won't need volume mapping in our production build, and we may want different environment variables, so we will need to separate these out into different files.
To handle multiple docker-compose files:
# base - docker-compose.yml
version: "3"
services:
  node-app:
    build: .
    restart: unless-stopped
    environment:
      - NODE_ENV=production
# override - docker-compose.dev.yml
version: "3"
services:
  node-app:
    environment:
      - NODE_ENV=development
      - DEV_VAR=random-value
    volumes:
      - /app/node_modules
      - .:/app
    command: npm run dev
docker-compose -f <base-compose-file> -f <override-compose-file> up
# example
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
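To see exactly how Compose merges the two files before starting anything, we can print the resolved configuration (assuming both files sit in the current directory):

```shell
# Print the merged configuration without starting any containers
docker-compose -f docker-compose.yml -f docker-compose.dev.yml config
```

This is a handy sanity check that the override values landed under the right service.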
NOTE: When creating an override YAML file, we can get away with providing only the values we want to override, as long as we put them under the correct service names. Also, make sure to always provide the version at the top of every YAML file and ensure it is the same in all of them.