Continuous Delivery Pipeline for Kubernetes Using Spinnaker

May 27, 2020

Kubernetes is now the de facto standard for container orchestration. With more and more organizations adopting Kubernetes, it is essential to get our fundamental ops infrastructure in place before any migration. In my previous posts we learned about monitoring our workloads, configuring logging stacks, and the fundamentals of Continuous Integration and Continuous Delivery.

A robust monitoring stack provides deep insight into the application stack and helps identify issues early on. MetricFire specializes in monitoring systems, and you can use our product with minimal configuration to gain in-depth insight into your environments. If you would like to learn more, please book a demo with us, or sign up for the free trial today.

In this post we will learn about leveraging Jenkins and Spinnaker to roll out new versions of your application across different Kubernetes clusters.

Pre-requisites

  1. A running Kubernetes cluster with at least 3 nodes (GKE is used for the purposes of this blog)
  2. A Spinnaker set-up with Jenkins CI enabled.
  3. GitHub webhooks enabled for Jenkins jobs.
  4. Basic knowledge of CI/CD and the various Kubernetes resources. You can go through a refresher here.

Strategy Overview

  1. GitHub + Jenkins: the Continuous Integration system, which
    a. Checks out code
    b. Builds the source code and runs tests
    c. Builds the Docker image and pushes it to the registry.
    d. Optionally scans Docker images for vulnerabilities before pushing them to Docker Hub or any other registry
  2. Docker Hub: the registry that stores Docker images.
    a. You can also use GCR or ECR. Make sure the respective pull secrets are configured.
  3. Spinnaker: the Continuous Delivery system that enables automatic deployments to the Staging environment and supervised deployments to Production.
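If you pull images from a private registry (Docker Hub, GCR or ECR), the pods need a pull secret to authenticate. A hypothetical example, with a placeholder secret name and image:

```yaml
# Create the secret once per namespace (values are placeholders):
#   kubectl create secret docker-registry regcred \
#     --docker-server=https://index.docker.io/v1/ \
#     --docker-username=<user> --docker-password=<password>
#
# Then reference it from the Deployment's pod template:
spec:
  template:
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: app
          image: example/nginx-demo:1.0.3   # hypothetical private image
```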

Continuous Delivery Pipeline for Kubernetes

Continuous Integration System

Although this post is about the CD system using Spinnaker, I want to briefly go over the CI pipeline so that the bigger picture is clear.

  1. Whenever there is a push to the master branch, a Jenkins job is triggered via a GitHub webhook. The commit message for the push should include the updated version of the application and whether it is a Kubernetes Deploy action or a Patch action.
  2. The Jenkins job checks out the repo, builds the code, builds the Docker image according to the Dockerfile, and pushes it to Docker Hub. This job should be configured to determine the tag for the Docker image.
    This tag can be the application version, which can either be relayed in the GitHub webhook or parsed from a configuration file in the source code. In this case we will parse the pom.xml file to figure out the tag for the Docker image.
  3. The Jenkins job then triggers a Spinnaker pipeline and sends the trigger.properties file as a build artifact. This properties file contains crucial info which is consumed by Spinnaker and will be explained later in this post. The artifacts file looks something like this:

<p>CODE: https://gist.github.com/denshirenji/0e391f5a5aaa5f9d5db6682ebe700e30.js </p>
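For reference, a minimal trigger.properties along these lines (the keys match what the pipeline consumes later; the values here are illustrative) would carry the deployment action and the image tag:

```properties
ACTION=DEPLOY
TAG=1.0.3
```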

Continuous Delivery System

This is the most important section of this blog. Spinnaker offers a ton of options for Kubernetes deployments. You can either consume manifests from a GCS or S3 bucket, or provide the manifest as text in the pipeline itself.

Consuming manifests from GCS or S3 buckets involves more moving parts, and since this is an introductory blog it is beyond our scope right now. That being said, I use that approach extensively, and it is best in scenarios where you need to deploy a large number of micro-services running in Kubernetes, because such pipelines are highly templatized and re-usable.

Today, we will deploy a sample Nginx service which reads the app version from a pom.xml file and renders it in the browser. The application code (index.html) and Dockerfile for the application are below:

Dockerfile

This is a very simple Dockerfile for our application. All it does is create a new Docker image for every version of the app, so that the application version is displayed when the service is accessed.

<p>CODE: https://gist.github.com/denshirenji/faa3b0b8ef3231be07ef7091a127d260.js </p>
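The actual Dockerfile lives in the gist above; a minimal equivalent for serving a static index.html with Nginx might look like this (the base image tag is illustrative):

```dockerfile
# Serve the versioned index.html with stock Nginx.
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
EXPOSE 80
```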


Index.html

<p>CODE: https://gist.github.com/denshirenji/1a65c31ee357233b401bbbbbb0017bbb.js </p>

pom.xml

<p> CODE: https://gist.github.com/denshirenji/9d2726a12439dd0535a8625ab597871e.js </p>


The part where index.html is updated can be seen in the gist below (it is basically what the Jenkins job does).


Jenkins Job Executor Shell

The Jenkins job does the following:

  1. Checks out the code and decides whether or not the pipeline should be triggered. It will only trigger the build for the master branch.
  2. Parses pom.xml to get the new application version.
  3. Updates index.html to reflect the new application version.
  4. Builds the Docker image with the new index.html, tags it appropriately, and pushes it to Docker Hub.
  5. Generates a properties file which will be consumed by Spinnaker. As soon as this build finishes successfully, a Spinnaker pipeline gets triggered.

<p> CODE: https://gist.github.com/denshirenji/d7821898b919b493821d17a6bdd160a2.js </p>
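The executor shell itself is in the gist above. As a rough, hypothetical sketch of those steps: the repo name, file contents and version string below are stand-ins (step 1's checkout is replaced by inline sample files), and the docker build/push is skipped when docker is not on the agent:

```shell
set -eu

# Stand-ins for files the Jenkins job would normally check out from the repo.
cat > pom.xml <<'EOF'
<project>
  <version>1.0.3</version>
</project>
EOF
echo '<h1>App version: APP_VERSION</h1>' > index.html

# 2. Parse the application version out of pom.xml (first <version> tag).
TAG=$(grep -m1 '<version>' pom.xml | sed 's:.*<version>\(.*\)</version>.*:\1:')

# 3. Update index.html so the page renders the new version.
sed "s/APP_VERSION/${TAG}/" index.html > index.html.tmp && mv index.html.tmp index.html

# 4. Build, tag and push the Docker image (only when docker is available).
if command -v docker >/dev/null 2>&1 && [ -f Dockerfile ]; then
  docker build -t "example/nginx-demo:${TAG}" .
  docker push "example/nginx-demo:${TAG}"
fi

# 5. Emit the properties file that Spinnaker consumes as a build artifact.
printf 'ACTION=DEPLOY\nTAG=%s\n' "${TAG}" > trigger.properties
cat trigger.properties
```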


Kubernetes Manifests

<p> CODE: https://gist.github.com/denshirenji/a0f6ec1337b4473cd84f7d19aa597ef9.js </p>

In the manifest above we create the following:

  1. Namespace to deploy our application 
  2. Deployment to which new releases will be pushed
  3. Service for accessing the application from outside the cluster
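The full manifest is in the gist above; structurally it is along these lines (names, replica counts and ports here are placeholders):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
  namespace: demo
spec:
  replicas: 2
  strategy:
    rollingUpdate:          # governs how new releases roll out
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - name: nginx-demo
          image: example/nginx-demo:1.0.3   # tag replaced on each release
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
  namespace: demo
spec:
  type: LoadBalancer        # exposes the app outside the cluster
  selector:
    app: nginx-demo
  ports:
    - port: 80
      targetPort: 80
```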

Steps to Set Up the Pipeline

  1. Create a new application under the Applications tab and add your name and email to it. All other fields can be left blank.
  2. Create a new project under Spinnaker and add your application to it. You can also add your staging and production Kubernetes clusters to it.

Demo-Project Configuration

We recommend grouping multiple applications into a single project. This grouping can be based on teams, and it varies from organization to organization. Clusters can of course be shared across different projects.

Spinnaker connects to different Kubernetes clusters using RBAC-based authentication, which is the recommended way to do it. We will go over the entire Spinnaker set-up and connect various Kubernetes clusters to it in an upcoming blog. Stay tuned :)
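As a preview, that RBAC-based access typically boils down to a ServiceAccount in the target cluster that Spinnaker's kubeconfig authenticates as. A hypothetical minimal version (names, namespace and the broad cluster-admin role are placeholders; production setups usually scope permissions down):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spinnaker-sa
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: spinnaker-sa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: spinnaker-sa
    namespace: kube-system
```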

  3. Now, under the application section, add your pipeline. Make sure the Trigger stage is set to Jenkins and you are consuming the artifacts appropriately. You can use this pipeline JSON. (Don’t forget to modify it according to your credentials and endpoints.)

 

<p> CODE: https://gist.github.com/denshirenji/5052a49833aa75dc01f349737549423b.js </p>

  4. Once you add it, the pipeline will look something like this. The pipeline has different paths for deploying a new release and for patching an existing revision with the image tag.

Continuous Delivery pipeline, triggered by Jenkins, deploying to Staging and Prod

If you look carefully, this single pipeline triggers deployments to both the Staging and Production environments, and each of those deploys can be a DEPLOY action or a PATCH action. In short, one pipeline offers 4 different options (Deploy Staging, Patch Staging, Deploy Prod and Patch Prod). This is very basic, and you can easily extend it to achieve more complex actions depending upon your use case.

Deep Diving Into the Pipeline

  1. Configuration: This is the stage where you specify the Jenkins endpoint, the job name and the expected artifact from the job; in our case, trigger.properties. Make sure that the Jenkins endpoint is accessible from Spinnaker without authentication. If your Jenkins is deployed in the same Kubernetes cluster as Spinnaker, you can also use the Jenkins ClusterIP service endpoint.
  2. Deploy (Manifest): The trigger.properties file has an ACTION variable, based on which we decide whether to trigger a new deployment for the new image tag or to patch an existing deployment. The properties file also tells us which version to deploy or patch with; it is set in the TAG variable.
    Please keep in mind that the trigger.properties file and pipeline expressions are not available in the Configuration stage of the job; they become available in the subsequent stages. The new deployment occurs in accordance with the maxSurge and maxUnavailable settings in the deployment manifest. We have already gone over this in detail in this blog.
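For example, the Deploy stage's "Conditional on Expression" field can check the variable from the properties file, and the manifest text can pick up the tag the same way (the ACTION/TAG key names and image name are from our hypothetical trigger.properties):

```
# Run the Deploy stage only when the Jenkins job asked for a deploy:
${trigger.properties['ACTION'] == 'DEPLOY'}

# Reference the image tag inside the manifest text:
image: example/nginx-demo:${trigger.properties['TAG']}
```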

Expression Validation for Deploy Stage

  3. Patch (Manifest): Similar to the Deploy stage, this stage checks the same variable, and if it evaluates to “PATCH”, the current deployment is patched. It should be noted that in both these stages the Kubernetes cluster being used is a staging cluster.
    Therefore, our deployments/patches for the staging environment are automatic. In order to make the development process faster we enable automatic deployments to Dev/Staging/QA environments; however, all Production deploys are strictly upon approval.

Deploy Manifest. Note: k8-staging-1 under Account Setting


Patch Manifest. Note: k8-staging-1 under Account Setting

  4. Manual Judgement: This is a very important stage. It is here that you decide whether or not to promote the build currently running in the staging cluster to the production cluster. This should be approved only when the staging build has been thoroughly tested by the various stakeholders. The stakeholders allowed to approve this stage can also be relayed using the trigger.properties file from Jenkins.
  5. Deploy (Manifest) and Patch (Manifest): The final stages in both paths are similar to their counterparts in the pre-approval stages, the only difference being that the cluster under Account is a production Kubernetes cluster.
  6. One important thing to remember is that we can configure notifications for the entire pipeline or for individual stages. It largely depends on how verbose you want to make the pipeline. However, it is important to configure notifications for the Manual Judgement stage so that stakeholders are notified when an approval is needed. These notifications can be email alerts or simple Slack/HipChat messages; Slack or HipChat is of course the recommended way.

Now you are ready to push out releases for your app. Once triggered, the pipeline will look like this:

Automated-Deployment to Staging Succeeded

This pipeline gets triggered when the Jenkins build completes. However, Spinnaker offers a manual execution option (top right in the image) with which you can trigger the pipeline from the Spinnaker dashboard. Each manual trigger refers to a Jenkins build which should have been previously completed, since it needs to consume the artifacts file from the build.

If you want to take a look at the actual YAML file which was deployed to the cluster, just click on the YAML button as shown in the screenshot above.

Manual Judgement Waiting for Approval


As soon as you click on Continue, the job proceeds to the next stage.



Approval Given to Manual Judgement Section


After the approval has been granted, we can see which stakeholder approved the build. This is very handy for audit purposes.


Deployment to Production Successful


The sections in grey have been skipped because the ACTION variable did not evaluate to “PATCH”. Once you deploy, you can view the current version and also the previous versions under the Infrastructure section.


Current and Previous Versions of App in the Staging and Production cluster


Under the Infrastructure view you can see all the services running in all your clusters. This is where you can:

  • Interactively edit the manifests
  • View logs from a particular pod
  • Select a previously deployed version and trigger a roll-back to it
  • Scale an existing deployment up or down

Some Important Points to Remember:

  1. You can emit as much data as you like to the properties file and later consume it in a Spinnaker pipeline. This can also include paths for any additional artifacts or notification URLs.
  2. You can also trigger other Jenkins jobs from Spinnaker and then consume their artifacts in subsequent stages. Spinnaker uses Jenkins as a sandbox to run script stages, and this provides a great deal of flexibility to accomplish all kinds of build tasks.
  3. Spinnaker is a very powerful tool, and you can perform all kinds of actions like roll-backs, scaling, etc. right from the console. You can also view pod logs and deployment manifests right from the dashboard. This makes Spinnaker a single pane of glass where you can manage all of your Kubernetes resources.
  4. Not only deployments but all kinds of Kubernetes resources can be managed using Spinnaker.
  5. Spinnaker provides excellent integration with Slack/Hipchat/Email for pipeline notifications.


Conclusion

In this blog we implemented a complete CI/CD pipeline using Jenkins and Spinnaker, which you can use to start deploying to your Kubernetes clusters. Installing Spinnaker is extremely easy, and adding new clusters is even easier. Let us know if you need help installing Spinnaker in your infrastructure.

Spinnaker provides excellent integrations with Helm, Prometheus, etc. It also allows you to do canary releases by pulling Prometheus metrics right into the dashboard. All kinds of complex scenarios can be accomplished using Spinnaker.

If you need help designing a custom CI/CD pipeline, feel free to reach out to me through LinkedIn. Additionally, MetricFire can help you monitor your applications across various environments and different stages of the CI/CD process. Monitoring is extremely essential for any application stack, and you can get started with your monitoring using MetricFire’s free trial.

Robust monitoring and a well-designed CI/CD system will not only help you meet the SLAs for your application but also ensure a sound sleep for the operations and development teams. If you would like to learn more, please book a demo with us.
