Develop and Deploy a Python API with Kubernetes and Docker - part II


Table of Contents

  1. Introduction
  2. Storing Data in Environment when Running Docker
  3. Creating a GKE Cluster
  4. Configuring and Using Google Container Registry
  5. Deploying our App to Kubernetes
  6. Conclusion

Introduction

In part I of this tutorial, we developed a Python API, then used Docker and Docker Compose to containerize the application and create a development environment.

In part II, we are going to discover some other details about Docker and Docker Compose as well as how to deploy the same app to a GKE cluster.

Storing Data in Environment when Running Docker

If you go back to our code, you will notice that the API key is hard-coded, which is bad practice in general and a security risk in particular.

One solution is to store this value as an environment variable:

  

import os
API_KEY = os.environ['API_KEY']
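Note that `os.environ['API_KEY']` raises a bare KeyError at import time if the variable is missing. A minimal sketch of a friendlier fail-fast helper (the `require_env` name is our own, not part of the original code):

```python
import os

def require_env(name):
    """Return the value of an environment variable, or fail with a clear message."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"Required environment variable {name} is not set")
    return value

# For local experiments only: provide a dummy value so the sketch runs standalone.
os.environ.setdefault("API_KEY", "dummy-key-for-local-testing")
API_KEY = require_env("API_KEY")
```

In production the dummy fallback would be removed, so a missing key fails loudly at startup instead of deep inside a request handler.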


Docker allows us to assign a value to API_KEY when we run a container:

  

docker run -dit --rm -p 5000:5000 --name weather -e API_KEY="4a6e07d289e10d710761fae2288b3b69" weather:v1


Let's test this. This is our final code:

  

from flask import Flask
import requests
import os
app = Flask(__name__)
API_KEY = os.environ['API_KEY']

@app.route('/')
def index():
    return 'App Works!'

@app.route('/<string:city>/<string:country>/')
def weather_by_city(country, city):
    url = 'https://samples.openweathermap.org/data/2.5/weather'
    params = dict(
        q=city + "," + country,
        appid=API_KEY,
    )
    response = requests.get(url=url, params=params)
    data = response.json()
    return data

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5000)
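As a side note, the `requests.get(url=url, params=params)` call above simply appends the params dict to the URL as a query string. A stdlib-only sketch of the resulting URL, with a placeholder standing in for the real API key:

```python
from urllib.parse import urlencode

# Rebuild the query string that requests generates from the params dict
# in the handler above; "PLACEHOLDER_KEY" stands in for the real API key.
params = {"q": "london,uk", "appid": "PLACEHOLDER_KEY"}
url = "https://samples.openweathermap.org/data/2.5/weather?" + urlencode(params)
# Note that urlencode percent-encodes the comma in "london,uk" as %2C.
```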


You should stop any containers using port 5000 before proceeding. Then rebuild the image and run it using:

  

docker stop weather
docker build -t weather:v1 .
docker run -dit --rm -p 5000:5000 --name weather -e API_KEY="4a6e07d289e10d710761fae2288b3b69" weather:v1


When using Docker Compose, you should adapt your "docker-compose.yaml" file to use environment variables:

  

version: '3.6'
services:
  weather:
    build: .
    ports:
      - "5000:5000"
    environment:
      - API_KEY=${API_KEY}
    volumes:
      - .:/app

  

Don't forget to export API_KEY as an environment variable before running Docker Compose:

  

export API_KEY="4a6e07d289e10d710761fae2288b3b69" 
docker-compose up -d

  

Another way of setting environment variables is by creating a file where we store all of the variables.

Create a file named ".env" and add the "API_KEY":

 

API_KEY="4a6e07d289e10d710761fae2288b3b69"

  

Now run the container using:

 

docker run -dit --rm -p 5000:5000 --name weather --env-file .env weather:v1

  

If you want to use Docker Compose, change your "docker-compose.yaml" file to use the ".env" file:

  

version: '3.6'
services:
  weather:
    build: .
    ports:
      - "5000:5000"
    env_file:
      - $PWD/.env
    volumes:
      - .:/app

  

Note that we can create other environment variables in the same way, for settings like debug mode, the listening port, or the bind IP.
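For instance, a minimal sketch of reading such settings in Python with typed defaults; the variable names DEBUG, PORT, and HOST are illustrative, not part of the original app:

```python
import os

# Read optional settings from the environment, with sensible defaults.
DEBUG = os.environ.get("DEBUG", "false").lower() in ("1", "true", "yes")
PORT = int(os.environ.get("PORT", "5000"))  # port must be an integer
HOST = os.environ.get("HOST", "0.0.0.0")    # bind address
```

The app could then call `app.run(host=HOST, port=PORT, debug=DEBUG)` instead of hard-coding these values.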

 

 

Creating a GKE Cluster

After diving into Docker and Docker Compose concepts, let's move to the Kubernetes part. We are going to use Google Kubernetes Engine, which is the managed Kubernetes service of Google Cloud.

Before creating any Kubernetes cluster, make sure you have a Google Cloud account, an organization, and a project. If you are trying GCP for the first time, you can benefit from the free tier.

After installing the CLI of Google Cloud, let's start by setting the default configurations:

 

gcloud init

  

To communicate with the Kubernetes API and manage the cluster, we will also need kubectl. If it's not installed yet, you can follow the official documentation to install it. We can create a cluster using the web console or the CLI:

 

gcloud container clusters create mycluster --num-nodes 3 --region europe-west1

  

Change the region and the cluster name according to your preferences. Once the cluster is created, you will be able to use kubectl to connect to it:

 

kubectl config get-contexts

  

If you see "mycluster" in the list, switch to its context using the following command:

 

kubectl config use-context <context_name>

  

Note that you can get the list of GCP zones using gcloud compute zones list.

You can also copy the gcloud command line that allows you to connect to the cluster from the web console.

  


  

Once connected to the cluster, you can query its API using kubectl. For example, check the running services:

 

kubectl get svc

 

You should see the default Kubernetes ClusterIP service:

  

NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.35.240.1   <none>        443/TCP   34m

  

ClusterIP is the default Kubernetes service type; it exposes the service on an internal cluster IP, which is only reachable from within the cluster.

 

Configuring and Using Google Container Registry

The next step in this tutorial consists of building the API image and pushing it to a registry. We are going to use Google Container Registry (GCR), the managed container registry of Google Cloud. We could also have chosen Docker Hub, GitLab Registry, or a self-hosted registry.

Start by downloading "docker-credential-gcr":

 

gcloud components install docker-credential-gcr

  

Configure Docker to use your Container Registry credentials:

 

docker-credential-gcr configure-docker

  

Now let's build and push the API image:

  

docker build -t gcr.io/myproject-259511/weather:v1 .
docker push gcr.io/myproject-259511/weather:v1

  

Make sure to change "myproject-259511" to your own project ID. You can also change "gcr.io" to "us.gcr.io", "eu.gcr.io", or "asia.gcr.io" according to your location.

 

Deploying our App to Kubernetes

To run a container in Kubernetes, you should initially create a pod. A pod is essentially a group of one or more containers with shared storage/network resources. Unlike Docker, in Kubernetes, the smallest deployment unit is the pod, not the container.

We can create Kubernetes objects, and particularly pods, using Kubernetes manifest files:

  

apiVersion: v1
kind: Pod
metadata:
  name: api
  labels:
    app: api
spec:
  containers:
    - name: api
      image: gcr.io/myproject-259511/weather:v1
      ports:
        - name: api-port
          containerPort: 5000
          protocol: TCP


You should save this to a YAML file and execute kubectl create -f <your_file> to deploy the pod. However, the best way to create pods is to create a deployment object that delegates pod creation to a replica set.

In other words, a deployment file is a declarative update for pods and replica sets. A replica set's purpose is to maintain a stable set of replicated pods, ensuring that a specified number of them is always available.

Let's move to the practical part and start by creating a deployment file. This is a basic deployment file:

  

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: api
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - image: gcr.io/myproject-259511/weather:v1
        name: api
        ports:
        - containerPort: 5000
          name: http
        env:
        - name: API_KEY
          value: "4a6e07d289e10d710761fae2288b3b69"

  

In the above YAML file, we start by declaring the kind of object we are creating: a Deployment. We also describe the containers we want to run inside the pod; in our example, we run a single container.

The container configuration includes its name, the image, the exposed port, and its protocol, as well as the environment variables.

Save the YAML code to a file named "api.yaml" and execute the following command to apply it:

 

kubectl apply -f api.yaml

  

We can check the running Pod using kubectl get pods:

  

NAME                   READY   STATUS    RESTARTS   AGE
api-7f94844c45-mcgsv   1/1     Running   0          4m45s

  

At this stage, the container is deployed. We can scale it either by increasing the "replicas" count in the YAML file or by using kubectl:

 

kubectl scale deployment api --replicas=2

  

The containers are now accessible inside the cluster, but there is still no way to reach them from outside. If we want to expose our API to Internet users, we should create a service.

This is the service YAML description:

  

apiVersion: v1
kind: Service
metadata:
  name: api
  labels:
    app: api
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 5000
  selector:
    app: api

  

On GKE, when a service is of type LoadBalancer, Google Cloud provisions an external load balancer and maps an external port to the pods. In the above example, port 80 of the load balancer is mapped to port 5000 of the API container.

This is the complete "api.yaml" file:

  

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: api
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - image: gcr.io/myproject-259511/weather:v1
        name: api
        ports:
        - containerPort: 5000
          name: http
        env:
        - name: API_KEY
          value: "4a6e07d289e10d710761fae2288b3b69"
---
apiVersion: v1
kind: Service
metadata:
  name: api
  labels:
    app: api
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 5000
  selector:
    app: api

  

You should be able to see the public IP address of the load balancer by using kubectl get svc:

  

NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
api          LoadBalancer   10.35.242.171   34.77.193.210   80:31629/TCP   4m32s
kubernetes   ClusterIP      10.35.240.1     <none>          443/TCP        79m

  

If you visit the external IP address of the load balancer, you will be able to query our API:

curl http://34.77.193.210/london/uk/

  

{
   "base":"stations",
   "clouds":{
      "all":90
   },
   "cod":200,
   "coord":{
      "lat":51.51,
      "lon":-0.13
   },
   "dt":1485789600,
   "id":2643743,
   "main":{
      "humidity":81,
      "pressure":1012,
      "temp":280.32,
      "temp_max":281.15,
      "temp_min":279.15
   },
   "name":"London",
   "sys":{
      "country":"GB",
      "id":5091,
      "message":0.0103,
      "sunrise":1485762037,
      "sunset":1485794875,
      "type":1
   },
   "visibility":10000,
   "weather":[
      {
         "description":"light intensity drizzle",
         "icon":"09d",
         "id":300,
         "main":"Drizzle"
      }
   ],
   "wind":{
      "deg":80,
      "speed":4.1
   }
}
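From a client's perspective, the payload is plain JSON. A minimal sketch of extracting a couple of fields with the standard library, using a truncated copy of the sample response above:

```python
import json

# A truncated copy of the sample response returned by the API.
sample = '{"name": "London", "main": {"temp": 280.32, "humidity": 81}}'

payload = json.loads(sample)
city = payload["name"]                            # "London"
temp_celsius = payload["main"]["temp"] - 273.15   # temperatures are in Kelvin
```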

  

You can shut down the cluster using gcloud container clusters delete <cluster_name> --region <your_region>.

 

Conclusion

In this tutorial, we used different technologies from the Docker ecosystem to build, run, and manage an API in different environments. At the end of the tutorial, we were able to scale and distribute traffic to our service using a load balancer.

There are many interesting concepts to discover about Docker and Kubernetes; networking is probably the most complex to understand if you are starting your journey. You should also check out some of our Docker and Kubernetes guides for further study.

If you're interested to see how MetricFire can fit into your monitoring environment, try our free trial and start deploying metrics within minutes. Also, feel free to book a demo and talk to us directly about monitoring solutions that work for you.
