This is part 3 of our 3-part Kubernetes CI/CD series. In the first part, we learned at a high level about the overall CI/CD strategy. In the second part, we discussed the continuous integration workflow in detail. In this blog we will go into detail about the Continuous Delivery pipeline for deploying your applications to Kubernetes.
While developing your CI/CD strategy, it is important to consider how you will monitor your application stack. A robust monitoring stack provides deep insight into the application stack and helps identify issues early on. MetricFire specializes in monitoring systems, and you can use our product with minimal configuration to gain in-depth insight into your environments. If you would like to learn more about it, please book a demo with us, or sign up for the free trial today.
Continuous Delivery is the ability to get changes of all types—including new features, configuration changes, bug fixes and experiments—into production, or into the hands of users, safely and quickly in a sustainable way.
Our goal is to make deployments—whether of a large-scale distributed system, a complex production environment, an embedded system, or an app—predictable, routine affairs that can be performed on demand.
You’re doing continuous delivery when:

- Your software is deployable throughout its lifecycle.
- Your team prioritizes keeping the software deployable over working on new features.
- Anybody can get fast, automated feedback on the production readiness of their systems whenever somebody makes a change to them.
- You can perform push-button deployments of any version of the software to any environment on demand.
In the previous blog we established that the hand-off from Continuous Integration to Continuous Delivery takes place when the CI pipeline pushes the Docker image to the Docker registry and then pushes the updated Helm chart to a Helm repository or artifact store. Let’s go further today.
Some widely used CD tools are:

- Jenkins
- Spinnaker
- Harness
- Weave Flux
- ArgoCD
You may have noticed that Jenkins can be used as both a Continuous Integration and a Continuous Delivery tool, primarily because of its rich feature set and flexibility.
Spinnaker and Harness follow a more traditional continuous delivery approach: fetch the deployable artifact, bake it, and deploy it to the desired environment.
Weave Flux and ArgoCD use a GitOps-based approach, where both the application source code and Helm charts live in a Git repository and are continuously synced to a desired environment. We will cover the GitOps approach in more depth in a later post.
The most important artifact for the Continuous Delivery pipeline is the Helm chart. It is extremely important that the Helm charts are properly versioned and stored, and that our pipeline has access to the artifact store. The artifacts can be fetched over HTTP using appropriate authorization. Some options for artifact management are:

- ChartMuseum
- JFrog Artifactory
- Harbor
- Cloud object storage, such as Amazon S3 or Google Cloud Storage
As we previously learned, deployment strategies can be broadly classified as:

- Recreate
- Rolling update
- Blue/Green
- Canary
Kubernetes natively supports only two rollout strategies, RollingUpdate and Recreate:
With RollingUpdate, we decide the maximum number of replicas that may be unavailable (maxUnavailable) and the maximum number of extra replicas that may be created above the desired count (maxSurge) during a new version rollout. For example, imagine the currently running Deployment has 4 replicas, the maxUnavailable parameter is set to 1, and the maxSurge parameter is also set to 1. During the rollout, one replica of the currently running version will be terminated, and at the same time a new replica with the new version will be created, and so on. This is a zero-downtime deployment method. It is important to understand that this method should only be used when the new and old versions of the application are backward compatible. The configuration looks something like this:
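A minimal sketch of such a Deployment, matching the 4-replica example above (the name, labels, and image tag are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # illustrative name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1       # at most 1 replica below the desired count
      maxSurge: 1             # at most 1 extra replica above the desired count
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.2.3   # illustrative image tag
```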
In the case of Recreate, all the pods of the existing Deployment are terminated first, and only then are the pods of the new version created. We should use this strategy when:

- The new and old versions of the application cannot run side by side, for example because they are not backward compatible or they share a resource (such as a database schema or a volume) that only one version can use at a time.
- A brief downtime during the rollout is acceptable.
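The corresponding strategy block is much simpler; a sketch (only the strategy section shown):

```yaml
spec:
  strategy:
    type: Recreate   # terminate all old pods before creating any new ones
```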
A sample CD pipeline looks something like the following:
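The exact definition depends on your CD tool; as a sketch, here are the stages expressed as a GitLab-CI-style deploy job. The repository URL, chart name, values file, and variable names are all illustrative assumptions:

```yaml
deploy_production:
  stage: deploy
  script:
    # Fetch the versioned chart from the artifact store (illustrative URL)
    - helm repo add myrepo https://charts.example.com
    - helm pull myrepo/my-app --version "$CHART_VERSION"
    # Deploy, overriding values for the target environment
    - >
      helm upgrade --install my-app myrepo/my-app
      --version "$CHART_VERSION"
      --namespace production
      --values values-production.yaml
      --set image.tag="$IMAGE_TAG"
    # Wait for the rollout to finish before declaring success
    - kubectl rollout status deployment/my-app -n production --timeout=300s
  only:
    - master
```

The `--values` and `--set` flags are how environment-specific overrides are applied on top of the chart's defaults, and `kubectl rollout status` gates the pipeline on the rollout actually succeeding.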
The manifest that is ultimately applied to the cluster is the chart rendered with all of these values overridden for the target environment.
The payload contains crucial information which is consumed by our CD system to deploy to different environments. For example, "branch": "master" indicates that the master branch of the source code repository was built. The rule of thumb is that the master branch should always be deployable, so whenever master is built, the artifact is deployed to the production environment. Similarly, if the branch is staging, we deploy to the staging environment. If we trigger CI builds for tags, that information is also relayed by this payload and can be processed by the CD system.
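As a sketch, such a payload could look like the following. Only the "branch" field comes from the discussion above; the other field names and values are illustrative assumptions:

```json
{
  "branch": "master",
  "commit": "0a1b2c3d",
  "tag": null,
  "image": "my-registry/my-app:0a1b2c3d",
  "chart": "my-app-1.2.3.tgz"
}
```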
In this blog we went over the Continuous Delivery system in detail, along with the various stages involved. This concludes our 3-part Kubernetes CI/CD series. We tried to delve into as much detail as possible and provide production-ready configuration. However, it is important to understand that every environment is different, and so is every application stack; your CI/CD pipeline should be curated around yours.
If you need help designing a custom CI/CD pipeline, feel free to reach out to me on LinkedIn. Additionally, MetricFire can help you monitor your applications across various environments and different stages of the CI/CD process. Monitoring is essential for any application stack, and you can get started with MetricFire’s free trial. Robust monitoring and a well-designed CI/CD system will not only help you meet the SLAs for your application but also ensure a sound sleep for the operations and development teams. If you would like to learn more, please book a demo with us.