Persistence…

Patrick O'Neill
4 min read · May 18, 2020

Previous Blog here.

So in creating my baby change finder app I now have a front-end that displays a map with location markers on it, and this front-end fetches its data from my back-end API server.

So that’s it then? All good? Well, if the application was only ever going to have these hardcoded places and it wasn’t necessary to add any more, then it might be. However, as discussed in the planning stage, I want users to be able to add new places, update them and even add information such as reviews. Starting with a hardcoded array I would be able to add more places to it, but if the server was turned off, or there was a problem which required it to restart, all this additional data would disappear into the ether and we would have to start again with the original hardcoded array of data.

Luckily for me, I am standing on the shoulders of those who have gone before me and I don’t need to invent a solution: the way forward is to use a database. Using a database I will be able to add new places and information, they will be permanently saved, and it will not matter if the server has to reboot.

Before there were databases

I decided to use a SQL-type database as this allows me to have relationships between different types of data and then use queries to return sets of data (i.e. give me all the reviews by user x). I have previously used PostgreSQL, so I thought this time I would give MySQL a go and see if there are any/many differences.
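To make that concrete, “give me all the reviews by user x” turns into a join between two related tables. Here is a quick sketch using Python’s built-in sqlite3 module standing in for MySQL, with a made-up users/reviews schema (the app’s real tables will differ):

```python
import sqlite3

# Two related tables: each review row points at a user row via user_id.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE reviews (id INTEGER PRIMARY KEY,
                          user_id INTEGER REFERENCES users(id),
                          body TEXT);
    INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO reviews VALUES (1, 1, 'Great facilities'),
                               (2, 2, 'Hard to find'),
                               (3, 1, 'Clean and spacious');
""")

# "All the reviews by user x" is a join filtered on the user's name.
rows = conn.execute("""
    SELECT reviews.body FROM reviews
    JOIN users ON users.id = reviews.user_id
    WHERE users.name = ?
    ORDER BY reviews.id
""", ("alice",)).fetchall()
# rows -> [('Great facilities',), ('Clean and spacious',)]
```

This is exactly the kind of query that relationships make cheap: the data lives once in each table and the join pulls the set you want.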

AWS RDS MySQL database hosting

The next thing was to set up the database and the decision of whether to have a local test database or have it hosted somewhere. As I’m planning to use AWS to host the whole back-end (and possibly front-end) I thought I would dip my toe in that ocean by using their RDS (relational database service) to host the database.

Setting up the database using the RDS service proved straightforward; I just needed to adjust the security settings to allow connections from outside AWS. Then I tested the connection using MySQL Workbench and could connect!

Connected!

So the next step was to get my Flask back-end connecting and talking to the database. There were two options I could take here: PyMySQL or SQLAlchemy. I decided to start off with PyMySQL as it was similar to what I had previously used with Ruby to talk to a PostgreSQL database. I would need to write the SQL queries myself, but as that is something I have done before I was happy with it.

Environment Variables

Before moving on to writing queries, I looked into how to use environment variables in Python. The reason for this is that I don’t want my database address, username and password in the source code that is being shared on GitHub.

To do this I needed to import the os module, which allows you to use operating system functionality. You can then set environment variables using export from the command line:

export DB_USER='admin'

Then they can be imported in the code:

from flask import Flask
import os

app = Flask(__name__)
db_user = os.environ['DB_USER']
...

Then when your code is uploaded to GitHub, no one is able to see what your username and password are. At the moment I need to export all the environment variables before running Flask, but I will be looking into python-dotenv, which means these can be saved in a .env file and loaded automatically (however, I need to make sure the .env file is in .gitignore).
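For a rough idea of what python-dotenv does under the hood, here is a minimal stdlib-only stand-in for its load_dotenv() (the real library handles many more edge cases, so treat this as a sketch):

```python
import os

def load_env_file(path='.env'):
    """Minimal stand-in for python-dotenv's load_dotenv(): read KEY=value
    lines from a file into os.environ, skipping blanks and # comments."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            key, _, value = line.partition('=')
            # Don't overwrite variables that are already exported
            os.environ.setdefault(key.strip(), value.strip().strip("'\""))
```

With a .env file containing DB_USER='admin' and friends, calling this before reading os.environ replaces the manual exports — just remember that .gitignore entry.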

Now the environment variables are saved, I can move on to adding in PyMySQL and writing the SQL statements. The first two statements I will write are to add a venue and to return all venues. Once this is complete I will have a very basic full-stack application where the front-end is talking to the back-end, which is providing data from a database!

import os
import pymysql

def get_all_venues():
    connection = pymysql.connect(host=os.environ['DB_URL'],
                                 user=os.environ['DB_USER'],
                                 password=os.environ['DB_PASSWORD'],
                                 db='innodb',
                                 charset='utf8mb4',
                                 cursorclass=pymysql.cursors.DictCursor)
    try:
        with connection.cursor() as cursor:
            # Read all records (a SELECT needs no commit)
            sql = "SELECT * FROM `venues`"
            cursor.execute(sql)
            return cursor.fetchall()
    finally:
        connection.close()
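The companion statement to add a venue follows the same connect/execute pattern, but since it writes data the commit actually matters. Sketched here against Python’s built-in sqlite3 so it runs anywhere (with PyMySQL the placeholders are %s rather than ?, and the connection comes from pymysql.connect as above); the venue columns are assumptions, not the app’s real schema:

```python
import sqlite3

def add_venue(connection, name, latitude, longitude):
    # Parameterised query: the driver escapes the values (no SQL injection)
    with connection:  # commits on success, rolls back on error
        connection.execute(
            "INSERT INTO venues (name, latitude, longitude) VALUES (?, ?, ?)",
            (name, latitude, longitude),
        )

# Quick demo with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE venues (name TEXT, latitude REAL, longitude REAL)")
add_venue(conn, "Town Hall", 51.5074, -0.1278)
rows = conn.execute("SELECT name FROM venues").fetchall()
# rows -> [('Town Hall',)]
```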

It all worked without too much bother, which was a relief; however, it seemed like quite a lot of work and code repetition, plus more code to turn the returned results into Python objects if needed. So I will be looking into SQLAlchemy, an ORM (Object Relational Mapper), which should make my life a lot simpler by taking care of the SQL statements and of turning objects into database records and vice versa.

Next blog here.
