A Serving Flask on Docker

Serving a Flask application with gunicorn and Nginx on Docker…

Packaging applications for reproducible results across environments has gotten a great boost with docker. Docker allows us to bundle the application with all its dependencies so that the resulting image can be run anywhere with a compatible docker runtime. The applications we are interested in require resources such as:

  1. a web application framework (e.g. Flask)
  2. a WSGI server (e.g. gunicorn)
  3. a web server (e.g. Nginx)
  4. a datastore (e.g. Elasticsearch)
  5. an orchestration engine (e.g. Kubernetes)

Due to length concerns, the focus of this post is on 1 through 3 and dockerizing them. We will use the work here as a springboard for a Kubernetes implementation in a subsequent post that covers the whole gamut. This post goes over the following.

  • Build an app container to run a Flask application behind gunicorn
  • Build a web container to serve static content while proxying requests back to the app container for dynamic data
  • Enable the Flask application to talk to an Elasticsearch instance on the host. You can replace this with any other data store of your choice.

The driver for this post is the earlier article A Flask Full of Whiskey (WSGI), which was implemented natively on my laptop. Pulling together the code, system configuration, and the commands to share with the post was a chore. Plus, some of that needed to be tweaked anyway by the end-user if they were on a different OS or had a differently configured system. A docker image with the moving parts bundled would have been easier to share, and easier to reuse! Let us get with it then.

There are some code snippets in this post for illustration. The complete code can be obtained from GitHub.

1. The scheme

Figure 1 below shows in a nutshell what we plan to implement.

Figure 1. The big picture. The web and app containers are brought up by docker-compose. Elasticsearch on the host is opened up for access from the docker network. The app service is accessed via the web container, which proxies requests to the app container.

We choose to leave Elasticsearch running on the host, as it serves other applications as well. Besides, we want to model a situation where container applications need to access applications on the host. The application itself is simple: given the id value in the API request, it queries the Elasticsearch index for a quote with that id and builds a snippet of HTML using a template.
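A minimal sketch of such a handler is below. The module, index, and template names here are illustrative assumptions, not taken from the repo.

    # quotes.py: a sketch of the Flask app behind the /quotes/byId API
    from elasticsearch import Elasticsearch
    from flask import Flask, render_template, request

    app = Flask(__name__)

    # host.docker.internal resolves to the host, where Elasticsearch runs
    # (Section 4 covers how this name is made to work on Linux)
    es = Elasticsearch([{'host': 'host.docker.internal', 'port': 9200}])

    @app.route('/quotes/byId')
    def quote_by_id():
        # Fetch the quote document with the given id from the index and
        # render it into an HTML snippet with a Jinja template
        doc = es.get(index='quotes', id=request.args.get('id'))
        return render_template('quote.html', quote=doc['_source'])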

So upon hitting a URL like http://localhost:8080/quotes/byId?id=797944 you would get back the rendered HTML snippet for the quote with that id.

2. Docker compose

From Figure 1 we know that we need to set up two services, one of which should be accessible from the host. Here is the directory/file layout, and some detail on what each file is for.
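The layout below is a reconstruction from the files discussed in this post; the project directory name follows from the quoteserver_default network seen in Section 6, and the details may differ slightly from the repo.

    quoteserver/
    ├── docker-compose.yml
    ├── web/
    │   ├── Dockerfile        # builds the Nginx web container (Section 3)
    │   ├── nginx.conf        # serves static files, proxies the rest to the app
    │   └── static/           # static content served directly by Nginx
    └── app/
        ├── Dockerfile        # builds the python-alpine app container (Section 4)
        ├── requirements.txt  # Flask, gunicorn, elasticsearch, ...
        ├── start.sh          # host.docker.internal fix + gunicorn startup
        └── quotes.py         # the Flask application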

The compose file is straightforward: two services, each built with its own Dockerfile sitting in its respective directory as seen above. The Nginx container is set to be accessible to the host at port 8080. Here is the full docker-compose.yml.
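A minimal version consistent with that description would be the following; the web service name is an assumption, while ‘app_service’ is the name Nginx uses in Section 3.

    version: '3'
    services:
      web_service:
        build: ./web        # Dockerfile in the web/ directory
        ports:
          - "8080:80"       # Nginx port 80 published on host port 8080
        depends_on:
          - app_service
      app_service:
        build: ./app        # Dockerfile in the app/ directory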

3. Web container

We build on the official Nginx image and add our config and static files in the Dockerfile below for the web container.
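A sketch of that Dockerfile, with the destination paths being assumptions:

    FROM nginx:latest
    # Replace the stock config with ours and bundle the static content
    COPY nginx.conf /etc/nginx/nginx.conf
    COPY static /usr/share/nginx/html/static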

We need to send requests to the Flask app running under gunicorn, listening on port 9999 in the app container. That service is named ‘app_service’ in the docker-compose file. Here is the relevant snippet from nginx.conf.
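It would be along these lines, inside the server block; the location path matches the API used in this post.

    location /quotes {
        # Hand dynamic requests to gunicorn in the app container;
        # 'app_service' is resolved by docker's embedded DNS
        proxy_pass http://app_service:9999;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }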

4. App container

The image for the application container starts with the lean python-alpine image and installs the required modules from requirements.txt.
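Something like the following; the working directory and the CMD are assumptions, with start.sh doing the actual launch as described next.

    FROM python:alpine
    WORKDIR /quoteserver
    # Install Flask, gunicorn, elasticsearch, and friends
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    # Bring in the application code, templates, and start.sh
    COPY . .
    CMD ["sh", "./start.sh"]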

The Flask app needs to be able to access Elasticsearch running on the host. But we do not want to hardcode a potentially variable IP address of the host. On Mac and Windows, docker resolves ‘host.docker.internal’ to the host IP address automatically, but unfortunately not on Linux. The post at https://dev.to/bufferings/access-host-from-a-docker-container-4099 shows a way around this issue. The fix is combined with the command to start gunicorn, and that is our start.sh below.
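In the spirit of that post, start.sh could look like the following; the ‘quotes:app’ gunicorn target is a placeholder for the actual module and app names.

    #!/bin/sh
    # If host.docker.internal does not already resolve (the Linux case),
    # map it in /etc/hosts to the docker network gateway, i.e. the host
    # (the gateway is the third field of the first, default, route)
    HOST_DOMAIN="host.docker.internal"
    if ! ping -q -c1 "$HOST_DOMAIN" > /dev/null 2>&1; then
        HOST_IP=$(ip route | awk 'NR==1 {print $3}')
        echo "$HOST_IP $HOST_DOMAIN" >> /etc/hosts
    fi
    # Run the Flask app under gunicorn on port 9999, the port Nginx proxies to
    exec gunicorn --bind 0.0.0.0:9999 quotes:app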

5. Docker network and firewall rules

One last thing I had to do on my laptop was to allow Elasticsearch to be accessed from the docker network. The default address range in docker is quite large: 172.17.0.0/16 through 172.31.0.0/16, for example, in the 172 set. Every time you run ‘docker-compose up’, you can get containers with addresses from anywhere in that range, so the whole range would have to be opened up for access. That is too large a range for my liking to open up to Elasticsearch on my home network. We change this by supplying an /etc/docker/daemon.json file below with a custom, smaller range.
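One way to pin the pool is the daemon's default-address-pools setting; here each created network gets a /24 carved out of 172.24.0.0/16. The post's actual file may differ.

    {
        "default-address-pools": [
            {
                "base": "172.24.0.0/16",
                "size": 24
            }
        ]
    }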

We restart the docker service and open up 172.24.0.0/16 alone for access to Elasticsearch.
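On a firewalld-based host, for example, that pair of steps could look like this, with 9200 being the default Elasticsearch port; adapt to your own firewall.

    sudo systemctl restart docker
    # Allow the docker address pool, and nothing wider, to reach Elasticsearch
    sudo firewall-cmd --permanent \
        --add-rich-rule='rule family="ipv4" source address="172.24.0.0/16" port port="9200" protocol="tcp" accept'
    sudo firewall-cmd --reload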

6. Test and verify

We bring up the containers with docker-compose.
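From the project directory:

    docker-compose up -d --build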

Inspecting the quoteserver_default network confirms that our daemon.json is in effect.
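That is:

    # The Subnet entry should now fall within 172.24.0.0/16
    docker network inspect quoteserver_default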

Listing the running containers shows that we have the web container exposing its port 80 at 8080 for the host.
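That listing comes from:

    # Look for 0.0.0.0:8080->80/tcp against the web container
    docker ps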

Examining the app container shows that /etc/hosts has been appended with the gateway address, so host.docker.internal now resolves to the host.
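For instance:

    # The last line should map the docker gateway IP to host.docker.internal
    docker-compose exec app_service cat /etc/hosts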

All that is left to do now is test the functionality.
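Hitting the API from the host with the example id from earlier:

    curl 'http://localhost:8080/quotes/byId?id=797944'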

With that, we conclude this rather short post. We will take up the full-fledged container orchestration with Kubernetes in a subsequent post.