Dockerising and deploying a Flask app

Patrick O'Neill
4 min read · Jul 7, 2020

In my previous blog I used Celery to manage asynchronous tasks (in my case, sending emails).

The next avenue I’m looking to explore is containerisation, with the long-term goal of implementing continuous deployment. For my exploration of containers I will be using Docker, the most popular containerisation solution. First, though, a bit about containers from Docker: “A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.”

So rather than having a separate virtual machine for each application, containers can run alongside one another on a single operating system. This uses far fewer resources and makes more efficient use of processing power, letting you spend it on actual “work” rather than on running many virtual machines.

There are probably much better places to read about the benefits of containers so I will move on to what I did…

What I did

Created a Dockerfile

The first step of the process is to create a Dockerfile: a text file containing all of the commands you would otherwise run on the command line to build the image.

FROM python:3.8-buster
# Work inside /server in the image
WORKDIR /server
# COPY is preferred over ADD for plain local files
COPY . /server
# Install the Python dependencies into the image
RUN pip install -r requirements.txt
# Default command when a container is started without one
CMD ["python", "app.py"]

This Dockerfile starts from a base Python 3.8 image, sets /server as the working directory, and copies the project files into it. It then runs pip to install the required packages into the image. The final CMD line tells Docker what command to run if none is given when a container is started from the image.
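One common refinement — a sketch, assuming the project keeps its dependencies in requirements.txt as above — is to copy requirements.txt on its own before the rest of the code, so Docker can cache the pip install layer between builds:

```dockerfile
FROM python:3.8-buster
WORKDIR /server
# Copy only the requirements first: this layer (and the pip install below)
# stays cached until requirements.txt itself changes.
COPY requirements.txt .
RUN pip install -r requirements.txt
# Now copy the rest of the source; code edits no longer bust the pip cache.
COPY . /server
CMD ["python", "app.py"]
```

With this ordering, rebuilding after a code-only change skips the dependency install entirely.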

Built the image

The next stage was to use the Dockerfile to build the Docker image. The -t flag tags the image, in this case with daddynappychange. (To have access to the command-line tools you need Docker Desktop installed and running on macOS.)

docker image build -t daddynappychange .

Tested the image locally

Next up was to test the image locally by running it. The -p flag does some port forwarding: I chose 80 on the host so I could just type localhost into the browser without specifying a port, and mapped it to port 5000 inside the container, which is the port used by the Flask server.

docker run -p 80:5000 daddynappychange

I had some problems here, and the solution I found was to change the host the Flask app binds to, as below, so the server listens on all interfaces and is reachable from outside the container:

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')

Once the container had spun up I was able to hit my API with requests and get responses that proved it was all working. Tidy.
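That local check can also be scripted. Here is a minimal sketch using only the standard library; the URL (and whatever routes your app exposes) are assumptions about your setup:

```python
from urllib.request import urlopen


def smoke_check(url: str, timeout: float = 5.0) -> int:
    """Send a GET request and return the HTTP status code."""
    with urlopen(url, timeout=timeout) as resp:
        return resp.status


if __name__ == "__main__":
    # Port 80 on the host is forwarded to 5000 inside the container.
    print(smoke_check("http://localhost/"))
```

A status of 200 confirms the request made it through the port mapping to Flask and back.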

Tagged the image

Now that I had the image built and working locally, the next step was to upload it to a Docker Hub repository. This would make it easy for me to download the image wherever it ends up being hosted. The first step was to tag the daddynappychange image with user/image, in this case paddyjoneill/daddynappychange:

docker tag daddynappychange paddyjoneill/daddynappychange

Pushed image to Dockerhub

Then I pushed the image to the Docker Hub repository using the command below. You will first have to log in to Docker Hub from the terminal, using the docker login command and providing your credentials.

docker push paddyjoneill/daddynappychange

Deploying to an AWS EC2 instance

The first thing to do is to start an EC2 instance. I chose to create an Ubuntu instance as I have been using Ubuntu recently.

When you create the instance, towards the end of the process it will ask you about a key pair. Download and save the file; you will need it shortly.
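One gotcha worth knowing: ssh refuses to use a key file with loose permissions, so it pays to lock the file down after downloading it. Demonstrated here on a placeholder file — substitute the path to your real .pem:

```shell
# Create a placeholder to demonstrate; use your downloaded key instead.
touch demo-key.pem
# Make the key readable only by its owner, as ssh requires.
chmod 400 demo-key.pem
ls -l demo-key.pem
```

Without this step, ssh exits with an "UNPROTECTED PRIVATE KEY FILE" warning.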

Now you need to edit the security groups to allow SSH and HTTP access on ports 22 and 80 respectively.

Now you should be ready to SSH into your EC2 instance. You can get its public address from your AWS account, then use the command below to connect (replacing ./daddynappychange.pem with the path to your .pem file). If you chose a different EC2 image you will need to replace ubuntu with the relevant user for your image.

ssh -i ./daddynappychange.pem ubuntu@ec1-23-123-12-3.eu-west-2.compute.amazonaws.com

If it has all gone smoothly you should now be connected to your EC2 instance and your command line should read ubuntu@ip-123-12-123-12:~$

Installing Docker

To install Docker on your Ubuntu EC2 instance, follow the instructions linked here; they are very straightforward and I don’t think I can add anything. https://docs.docker.com/engine/install/ubuntu/

Pull the image from Dockerhub

Now we have Docker installed it is time to pull our image from Docker Hub. To do this, run the docker login command and follow the prompts for your username and password (this step is only necessary if your repository is set to private). Then pull your image using username/image.

docker login
docker pull paddyjoneill/daddynappychange

Run your Docker container

The last step is to run the container on the EC2 instance! This is the same as running it locally, except that this time I append extra arguments so the container runs the production gunicorn web server rather than the Flask development server. The -b :5000 flag tells gunicorn to bind to port 5000, and dnc:app will depend on the structure of your app.

docker run -p 80:5000 paddyjoneill/daddynappychange gunicorn -b :5000 dnc:app
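To unpack dnc:app: gunicorn imports the module named before the colon and serves the attribute named after it. A minimal sketch of what that target means — dnc.py is this project’s module, but the bare WSGI callable below is my illustration; in the real app, app would be the Flask instance:

```python
# dnc.py -- "dnc:app" tells gunicorn: import module `dnc`, serve object `app`.

def app(environ, start_response):
    """A minimal WSGI callable standing in for the Flask instance."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]
```

Flask’s app object implements this same WSGI interface, which is why gunicorn can serve it directly.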

Test it out

You should now be able to go to your EC2 instance in your browser (e.g. http://ec1-23-123-12-3.eu-west-2.compute.amazonaws.com) and see it working!

In the next blog I use Docker and docker-compose to containerise and orchestrate the whole application.
