Dockerising everything

Patrick O'Neill
Jul 15, 2020

In my previous blog I took my Flask backend, made it into a Docker image and deployed it onto an AWS EC2 instance.

This time I’m going to talk about how I used Docker to get my entire stack running on one server (in reality this probably isn’t ideal, but I wanted to see if I could do it and I only have one free EC2 instance). My application consists of a React frontend, a Flask backend with a Celery worker, a Redis server for messaging, and a MySQL server for the database.

I had already created the Docker image for the Flask backend, so that covers the Gunicorn web server and the Celery worker which handles emails. I don’t need to write my own Dockerfiles for the Redis messaging server or the MySQL database, as I can use the standard images available from Docker Hub and configure them to be set up the way I would like.

That leaves the frontend, which I will create my own Docker image for. I decided to use Nginx as the web server. The first thing I needed to do was go to the folder containing my project and run:

npm run build

This creates the HTML, CSS, JS and other static files needed to serve the website and puts them in the build folder.

FROM nginx:stable-alpine
ADD ./build /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]

The above Dockerfile grabs the stable Nginx image running on Alpine Linux (a small Linux distribution which helps keep image sizes down), then copies the files created by the build process into /usr/share/nginx/html, which is the folder Nginx serves content from. The CMD line runs Nginx in the foreground so the container keeps running.

Next I ran the three commands below, which build the image, tag it, and push it to a Docker Hub repository.

docker image build -t dncfrontend .
docker tag dncfrontend paddyjoneill/dncfrontend
docker push paddyjoneill/dncfrontend
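
As a side note, the first two steps can be combined by tagging the image with the repository name at build time, and the image can be sanity-checked locally before pushing (port 80 is what the Nginx base image listens on):

docker image build -t paddyjoneill/dncfrontend .
docker run --rm -p 80:80 paddyjoneill/dncfrontend
# then browse to http://localhost to check the site is being served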

Docker Networking

Now that I had all the Docker images I needed, it was time to deploy them and get everything networked so the containers could communicate. To do this I created a Docker bridge network.

In a Docker bridge network the containers are able to communicate with each other either by the IP address assigned by the Docker network or by container name, using the bridge network’s built-in DNS. In my case I will call the containers: frontend, backend, worker, db and redis.
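
One step that isn’t shown in the run commands below is creating the network itself. A user-defined bridge network is a one-liner (dnc-net is the name used throughout the rest of this post):

docker network create dnc-net   # create the user-defined bridge network
docker network ls               # check it shows up alongside the default networks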

I need to give each container a name in the docker run command, otherwise Docker randomly assigns one which I won’t know until the container is running. If I assign a name myself I know what it will be and can use that name from within the other containers. In the run command I also need to assign the container to a network. For example:

docker run -p 5000:5000 --network dnc-net --name backend paddyjoneill/dnc

This starts the container, gives it the name backend and places it on the dnc-net network. (The -p 5000:5000 maps port 5000 on the host to port 5000 inside the container, so I am able to hit my API with requests.)
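
To check the container really is up, named as expected and attached to the network, a couple of standard Docker commands are enough; this is just a sanity-check sketch:

docker ps                        # the backend container should be listed with port 5000 published
docker network inspect dnc-net   # backend should appear in the network's list of containers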

Next I will spin up the MySQL server:

docker run -p 3306:3306 -e MYSQL_DATABASE="db" -e MYSQL_USER="user" -e MYSQL_PASSWORD="password" -e MYSQL_ROOT_PASSWORD="password" -v /var/lib/mysql:/var/lib/mysql --network dnc-net --name db mysql

This is pretty similar to the previous run command, except that here I’m setting environment variables with the -e flag that will be used inside the container (the official MySQL image needs a root password and uses the other variables to create the database and user on first start). The -v flag mounts the /var/lib/mysql folder from the host to the same folder in the container, so that when the container saves data it is saved to the host and not inside the container. This way, if the container were deleted, I would not lose any data that would otherwise have been stored within it. (If the database is currently hosted elsewhere you can export the data from there and import it into this new database using MySQL Workbench.)
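
A quick way to confirm the database came up with the expected credentials (db, user and password are the values from the run command above) is to open a MySQL shell inside the container:

docker exec -it db mysql -u user -p   # enter "password" when prompted
# at the mysql> prompt, SHOW DATABASES; should list the db schema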

Just one more thing

There is just one more thing I need to do before this “should work”: in the backend I need to change the connection string pointing at the old database so it points at the new dockerised database at mysql://db:3306/, where db resolves to the database container through the bridge network’s DNS. So I made that change, rebuilt and pushed my updated Docker image, and ran it with the same command as before.

JSON results back from my API!

I was now able to hit my API at http://(ec2 ip address):5000/api (remember to open port 5000 in the AWS security group!) and get results, proving that my dockerised backend was connected to my dockerised database!
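
That check is just a request against the instance’s public IP; the placeholder below stands in for the real address, and /api is my own route, so substitute whatever your backend exposes:

curl http://<ec2-public-ip>:5000/api   # should return JSON from the Flask backend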

I then went through a similar process to get the Redis server up and running, and again needed to update my backend to point at the redis container by name.
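
For completeness, the Redis container follows the same pattern (redis:alpine is the same image I use later in the compose file; Redis only needs to be reachable by the other containers over the bridge network, so no host port mapping is needed):

docker run -d --network dnc-net --name redis redis:alpine   # -d runs it in the background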

The last piece needed to get the backend up and running was the worker, which uses the same image as the backend but is started with a different command so it runs the Celery worker rather than the Flask server.

docker run -it --name worker --network dnc-net paddyjoneill/dnc celery -A dnc.celery worker --uid=1

The “celery -A dnc.celery worker --uid=1” after the image name tells Docker to use the image but replace its default run command with this one.
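
One way to confirm the override took effect is to look at the worker container’s logs, which should show Celery booting up rather than Gunicorn:

docker logs -f worker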

I then tested the setup by creating a new user, which adds a user to the database and sends them an email saying they have signed up. I was able to check the database using MySQL Workbench (port 3306 needs to be open in the AWS security group), I added myself as a user, and sure enough I received an email in my inbox!

Docker Compose

As we can see, the docker run commands can get pretty verbose and a bit of a pain. Bringing up a full stack with them also takes a while and is prone to errors. This is where a docker-compose file comes in: it is a YAML file which contains all the information needed to get your Docker containers up and running and configured correctly. (Docker Compose is installed separately from Docker; instructions for installing it on Ubuntu are in the Docker documentation.)
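
Once it’s installed, a quick version check confirms docker-compose is available; everything below assumes it is run from the directory containing the docker-compose.yml file:

docker-compose --version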

version: "3.8"

services:
  redis:
    image: redis:alpine
    restart: always
    networks:
      - dnc-net
  backend:
    image: paddyjoneill/dnc
    restart: always
    ports:
      - "5000:5000"
    networks:
      - dnc-net
    depends_on:
      - redis
      - db
    command: gunicorn -b :5000 --access-logfile - dnc:app
  worker:
    image: paddyjoneill/dnc
    restart: always
    command: celery -A dnc.celery worker --uid=1
    networks:
      - dnc-net
    depends_on:
      - redis
      - db
  frontend:
    image: paddyjoneill/dncfrontend
    restart: always
    ports:
      - "80:80"
    networks:
      - dnc-net
    depends_on:
      - backend
  db:
    image: mysql:8.0
    restart: always
    environment:
      MYSQL_DATABASE: 'db'
      MYSQL_USER: 'user'
      MYSQL_PASSWORD: 'password'
      MYSQL_ROOT_PASSWORD: 'password'
    networks:
      - dnc-net
    ports:
      - '3306:3306'
    volumes:
      - /var/lib/mysql:/var/lib/mysql

networks:
  dnc-net:

As you can see, the docker-compose.yml file is a lot more readable than the separate run commands whilst still conveying the same information. The main differences here are the restart policy, which means that if a container fails and exits docker-compose will spin it back up, and depends_on, which lets you tailor the order in which your stack is brought up. In this case the MySQL and Redis servers come up first, then the backend and the worker which depend on those two services, and finally, once the backend is up, the frontend.

Then in the terminal, rather than the multitude of different run commands, we can now just run:

docker-compose up

You will then see all the containers spin up! If you want to stop all the containers, there are two options:

docker-compose stop
docker-compose down

Stop simply stops all the containers, whereas down stops them and then removes the containers and any networks that were created.
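
In practice it’s often nicer to run the stack in the background and only pull up the logs when needed; -d, logs and ps are standard Compose options, and backend here is just one of the service names from the file above:

docker-compose up -d             # start everything detached
docker-compose logs -f backend   # follow the logs of a single service
docker-compose ps                # list the running containers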

I was then able to load the frontend in my browser using the IP of the EC2 instance; it loaded up, hit my API and got results back. Happy Days!

Aaaayyy
