Develop and Deploy a Python API with Kubernetes and Docker

Comprehensive Guide to Developing and Deploying a Python API with Docker and Kubernetes (Part I)


Introduction

In the evolving landscape of software development, containerization and orchestration have become pivotal. Docker and Kubernetes stand at the forefront of this transformation, offering scalable and efficient solutions for application deployment. This guide provides a detailed walkthrough on developing a Python API, containerizing it with Docker, and deploying it using Kubernetes, ensuring a robust and production-ready application.

Check out the MetricFire free trial to see how we can help monitor your Docker, Kubernetes, and Python setups.

 To quickly get started with monitoring Kubernetes clusters, check out our tutorial on using the Telegraf agent as a Daemonset to forward node/pod metrics to a data source and use that data to create custom dashboards and alerts. 

 

Key Takeaways

  1. Docker is a popular containerization technology that has gained significant attention from developers and operations engineers since its open-source release in March 2013.
  2. Docker has become mainstream, with more than 100,000 third-party projects using it, and there is an increasing demand for developers with containerization skills.
  3. This is the first part of a two-part series on using Docker; it focuses on containerizing a Python API built with Flask and running it in a development environment with Docker Compose.
  4. Setting up a development environment involves creating a Python virtual environment, installing Flask, and creating a simple API for displaying weather data.
  5. Docker Compose, an open-source tool for defining and running multi-container Docker applications, is introduced for use in development environments. It allows for auto-reloading containers when code changes occur, making development easier.

  

1. Setting Up a Development Environment

We will install some requirements before starting. Our application will be a mini Python API built with Flask, a Python micro-framework well suited to rapidly prototyping an API. If you are not accustomed to Python, you can see the steps to create this API below.

We will start by creating a Python virtual environment to isolate our dependencies from the rest of the system dependencies. Before this, we will need PIP, a popular Python package manager.

The installation is quite easy - you need to execute the following two commands:

   

curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py


You should have Python 3 installed. You can verify this by typing:

python --version

After installing PIP, use the following command to install the virtual environment package:

pip install virtualenv

 

 

You can follow the official guide to find other installation alternatives. Next, create a project folder, create a virtual environment inside it, and activate it. Then create a folder for the app code and a file called app.py:

  

mkdir app
cd app
python3 -m venv venv
. venv/bin/activate
mkdir code
cd code
touch app.py


We will build a simple API that shows the weather for a given city. For example, if we want to show the weather in London, we should request it using the route:

 

/london/uk


You need to install the Python dependencies called "flask" and "requests" using PIP. We are going to use them later on:

 

pip install flask requests

Don't forget to "freeze" your dependencies in a file called requirements.txt. This file will be used later to install our app dependencies in the container:

 

pip freeze > requirements.txt

This is what the requirements file looks like:

  

certifi==2019.9.11
chardet==3.0.4
Click==7.0
Flask==1.1.1
idna==2.8
itsdangerous==1.1.0
Jinja2==2.10.3
MarkupSafe==1.1.1
requests==2.22.0
urllib3==1.25.7
Werkzeug==0.16.0
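Each line pins a dependency to an exact version (name==version), which is what makes the container build reproducible later on. As a simplified sketch (real requirement lines can also carry extras, environment markers, and comments), the pins can be split back into name/version pairs with the standard library:

```python
# Parse pip-style "name==version" pins from a requirements file.
# Simplified sketch: assumes every line is a plain "==" pin.
requirements = """\
Flask==1.1.1
requests==2.22.0
Werkzeug==0.16.0
"""

pins = dict(line.split("==") for line in requirements.splitlines() if line)
print(pins["Flask"])  # 1.1.1
```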

   

This is the API initial code:

  

from flask import Flask
app = Flask(__name__)

@app.route('/')
def index():
    return 'App Works!'

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5000)


To test it, run python app.py and visit http://127.0.0.1:5000/. You should see "App Works!" on the web page. We will use data from openweathermap.org, so make sure to create an account on the same website and generate an API key.

 

Now, we need to add the code that makes the API return weather data for a given city:

  

@app.route('/<string:city>/<string:country>/')
def weather_by_city(country, city):
    url = 'https://samples.openweathermap.org/data/2.5/weather'
    params = dict(
        q=city + "," + country,
        appid=API_KEY,
    )
    response = requests.get(url=url, params=params)
    data = response.json()
    return data
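Under the hood, requests serializes the params dict into a URL query string. A minimal standard-library sketch of the URL that the call above ends up requesting for /london/uk (using the sample key from this article):

```python
from urllib.parse import urlencode

# Sketch of what requests does with the params dict: it URL-encodes the
# pairs and appends them to the URL as a query string.
API_KEY = "b6907d289e10d714a6e88b30761fae22"  # sample key used in this article
url = "https://samples.openweathermap.org/data/2.5/weather"
params = dict(q="london" + "," + "uk", appid=API_KEY)

full_url = url + "?" + urlencode(params)
print(full_url)
```

Note that the comma in "london,uk" is percent-encoded to %2C in the final URL.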


The overall code looks like this:

  

from flask import Flask
import requests

app = Flask(__name__)

API_KEY = "b6907d289e10d714a6e88b30761fae22"

@app.route('/')
def index():
    return 'App Works!'

@app.route('/<string:city>/<string:country>/')
def weather_by_city(country, city):
    url = 'https://samples.openweathermap.org/data/2.5/weather'
    params = dict(
        q=city + "," + country,
        appid=API_KEY,
    )
    response = requests.get(url=url, params=params)
    data = response.json()
    return data

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5000)

Now if you visit 127.0.0.1:5000/london/uk, you should be able to see a JSON similar to the following one:

  

{
  "base": "stations",
  "clouds": {
    "all": 90
  },
  "cod": 200,
  "coord": {
    "lat": 51.51,
    "lon": -0.13
  },
...
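Since response.json() turns the payload into a plain Python dict, individual fields can be read with ordinary key lookups. A small sketch on a trimmed version of the sample payload above:

```python
import json

# A trimmed sample of the JSON payload returned by the weather API.
payload = (
    '{"base": "stations", "clouds": {"all": 90}, '
    '"cod": 200, "coord": {"lat": 51.51, "lon": -0.13}}'
)

data = json.loads(payload)  # response.json() does this step for you
print(data["cod"])           # HTTP-style status code in the payload
print(data["coord"]["lat"])  # latitude of the requested city
```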


Our mini API is working. Let's containerize it using Docker.

 

2. Containerizing the Application with Docker

Let's create a container for our API. The first step is creating a Dockerfile: a text file containing the steps and instructions that the Docker daemon follows to build an image. After building the image, we will be able to run the container.

A Dockerfile always starts with the FROM instruction:

  

FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /app
WORKDIR /app
COPY requirements.txt /app
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . /app
EXPOSE 5000
CMD [ "python", "app.py" ]


In the above file, we did the following things:

  1. We are using a base image called "python:3".
  2. We set PYTHONUNBUFFERED to 1 so that log messages are written straight to the stream instead of being buffered.
  3. We created the folder /app and set it as the workdir.
  4. We copied the requirements file and then used it to install all the dependencies.
  5. We copied all of the files composing our application, namely the app.py file, to the workdir.
  6. We finally exposed port 5000, since our app will use this port, and launched the python command with app.py as an argument. This starts the API when the container starts.
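The PYTHONUNBUFFERED setting in step 2 is worth a quick illustration. Without it, Python buffers stdout when it is not attached to a terminal, so output shown by docker logs can lag behind the application. A small sketch that runs a child interpreter with the variable set:

```python
import os
import subprocess
import sys

# PYTHONUNBUFFERED=1 makes the interpreter write straight to the stream
# instead of buffering, so `docker logs` shows messages as they happen.
# The -u flag and print(..., flush=True) have the same effect.
env = {**os.environ, "PYTHONUNBUFFERED": "1"}
out = subprocess.run(
    [sys.executable, "-c", "print('starting API...')"],
    capture_output=True, text=True, env=env,
).stdout
print(out)
```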

 

 

After creating the Dockerfile, we must build an image from it, giving it a name and a tag. In our case, we will use "weather" as the name and "v1" as the tag:

 

docker build -t weather:v1 .


Ensure you are building from inside the folder containing the Dockerfile and the app.py file.

 

After building the image, you can run a container from it using:

 

docker run -dit --rm -p 5000:5000 --name weather weather:v1


The container will run in the background since we use the -d option. The container is called "weather" (--name weather). It's also reachable on port 5000 since we mapped the host port 5000 to the exposed container port 5000.

 

If we want to confirm the creation of the container, we can use:

 

docker ps


You should be able to see a very similar output to the following one:

  

CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS                    NAMES
0e659e41d475        weather:v1          "python app.py"     About a minute ago   Up About a minute   0.0.0.0:5000->5000/tcp   weather
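The PORTS column ("0.0.0.0:5000->5000/tcp") encodes the host-to-container mapping created by the -p 5000:5000 flag. As an illustrative sketch (plain string handling, not a Docker API), the entry can be decomposed like this:

```python
# Decompose a `docker ps` PORTS entry such as "0.0.0.0:5000->5000/tcp".
# Illustration only: this is simple string handling, not a Docker API.
entry = "0.0.0.0:5000->5000/tcp"

host_part, container_part = entry.split("->")
host_ip, host_port = host_part.rsplit(":", 1)
container_port, proto = container_part.split("/")

print(host_ip, host_port, container_port, proto)  # 0.0.0.0 5000 5000 tcp
```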


You should be able to query the API now. Let's test it using curl:

 

curl http://0.0.0.0:5000/london/uk/


The last command should return JSON similar to the following:

  

{
  "base": "stations",
  "clouds": {
    "all": 90
  },
  "cod": 200,
  "coord": {
    "lat": 51.51,
    "lon": -0.13
...
}

  

3. Managing Development with Docker Compose

Docker Compose is an open-source tool developed by Docker Inc. for defining and running multi-container Docker applications. It is also well suited to development environments, since it lets your code changes reach the running container without restarting containers manually or rebuilding your image after each change. Without Compose, developing using only Docker containers would be frustrating.

For the implementation part, we will use a "docker-compose.yaml" file.

 

This is the "docker-compose.yaml" file we are using with our API:

  

version: '3.6'
services:
  weather:
    image: weather:v1
    ports:
      - "5000:5000"
    volumes:
      - .:/app


In the above file, you can see that I configured the service "weather" to use the image "weather:v1." I mapped the host port 5000 to the container port 5000 and mounted the current folder to the "/app" folder inside the container.

It is also possible to use a Dockerfile instead of an image. This would be recommended in our case since we already have the Dockerfile.

  

version: '3.6'
services:
  weather:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/app


Now, simply run "docker-compose up" to start the service, or "docker-compose up --build" to build the image first and then run it. Note that for code changes to be picked up automatically, Flask's reloader must be enabled, for example by running the app with debug=True.

 

Monitoring and Observability

Implementing monitoring solutions ensures the health and performance of your application. Tools like Prometheus and Grafana can be integrated for metrics collection and visualization. Additionally, consider using MetricFire's Hosted Graphite for a managed monitoring experience.

Conclusion

By following this guide, you've developed a Python API, containerized it using Docker, and run it with Docker Compose, laying the groundwork for deploying it on a Kubernetes cluster. This approach not only streamlines development and deployment processes but also ensures scalability and maintainability. Integrating monitoring tools further enhances the robustness of your application, providing insights into its performance and reliability.

If you use another programming language, like Go or Rails, you will generally follow the same steps except for minor differences. Check out the MetricFire free trial to see if MetricFire can fit into your monitoring environment.

In the second part of this tutorial, we will discover more details about Docker and Docker Compose, but we will mainly see how to use Kubernetes and deploy our API to GKE (Google Kubernetes Engine).
