
Using K8S But Not Overhauling Your Devops Processes


Introduction

   

Kubernetes is now the industry standard for organizations that are born in the cloud. Slowly, many enterprises and mid-level companies are adopting it as the default platform for managing their applications. 

But as we all know, Kubernetes adoption comes with its own challenges and costs. 

How do we decide when and what to migrate to Kubernetes? Does migrating to Kubernetes mean overhauling all devops processes?

   

Adopting K8S should not lead to an overhaul of your devops process - it should complement it.

   

Kubernetes offers many benefits such as a self-healing infrastructure, improved go-to-market time, faster release cycles, and fewer outages. Some companies need these benefits, and paying the price of a migration is worth it. In other cases, this kind of overhaul won’t affect the performance of the business.

In this post, we will look at two examples of companies that successfully migrated specific parts of their infrastructure to Kubernetes. By doing so they achieved functionality that directly benefited their business. In both cases they did not entirely overhaul their devops processes.

Based on these stories, you can see what “Kubernetes migration” really means, and why people do it.

First, let’s take a look at the important factors at play when deciding to migrate or to not migrate to Kubernetes.

   

Factors to consider before embracing K8s

   

Application Stack

Kubernetes scales very well for stateless applications, and it offers first-class support for them. Therefore, if you are running many web servers and proxies, Kubernetes is an ideal choice.
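
As a rough illustration of that first-class support, here is a minimal sketch using the official Kubernetes Python client that declares a three-replica stateless web Deployment. The image, labels, and namespace are placeholder values for this example, not a prescription.

    # Sketch: declare a stateless web Deployment with 3 replicas.
    # Assumes the official `kubernetes` Python client and a working kubeconfig.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a cluster

    container = client.V1Container(
        name="web",
        image="nginx:1.25",  # placeholder image
        ports=[client.V1ContainerPort(container_port=80)],
    )

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # Kubernetes keeps 3 identical, stateless pods running
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

Because the pods are identical and hold no state, Kubernetes can freely reschedule or scale them, which is exactly why stateless web servers and proxies are such a natural fit.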

For applications that require tremendous scaling and expect huge spikes in traffic, K8s is definitely a good fit. For example, if you are running an e-commerce store and expect huge amounts of traffic during the holiday season, then a Kubernetes-based platform will not disappoint you. 

Recently, support for stateful applications has improved with capabilities like persistent volume snapshots and restore. However, setting up stateful workloads is still tricky business, and one should be very careful with it. Sometimes you need very high database throughput, and in those cases it might be best to run the databases on bare metal servers instead of on VMs or in containers.

  

DevOps Tool Kit

The devops tools you are using play a very important role whenever you adopt any new technology in your organization, and they matter even more when you change the underlying application hosting platform. It is therefore important to ensure that your existing devops tools are compatible with a Kubernetes-based platform. 

For example, a traditional monitoring stack consisting of Zabbix or Nagios might not be very useful when monitoring a Kubernetes infrastructure. This is because Kubernetes is focused on monitoring with Prometheus, and tools like Zabbix and Nagios were created with other monitoring targets in mind. 
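
Prometheus works by pulling metrics from an HTTP endpoint that each workload exposes, which is why it fits Kubernetes' dynamic pods so well. Below is a minimal sketch using the prometheus_client Python library to expose such an endpoint; the metric names and port are arbitrary choices for illustration.

    # Sketch: expose a /metrics endpoint that a Prometheus server can scrape.
    # Uses the `prometheus_client` library; metric names and port are illustrative.
    import random
    import time

    from prometheus_client import Counter, Gauge, start_http_server

    REQUESTS = Counter("app_requests_total", "Total requests handled", ["path"])
    IN_FLIGHT = Gauge("app_requests_in_flight", "Requests currently being handled")

    def handle_request(path: str) -> None:
        REQUESTS.labels(path=path).inc()
        with IN_FLIGHT.track_inprogress():
            time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

    if __name__ == "__main__":
        start_http_server(8000)  # serves metrics on port 8000
        while True:
            handle_request("/checkout")

A Prometheus server running in the cluster would then discover these pods (typically via annotations or a ServiceMonitor) and scrape the endpoint on a schedule.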

The choice of CI/CD tooling should also be top-of-mind. It is important to ensure that the CI/CD infrastructure works well with both containers and VMs. A pipeline built with Jenkins and Spinnaker (or sometimes Jenkins alone) might fit well in both cases. 

One thing to keep in mind before adopting any technology is that you always have to maintain an efficient devops practice throughout the migration. You need to maintain visibility and control even as tools change over. This includes monitoring, log management for both applications and infrastructure, and robust Continuous Integration (CI) / Continuous Delivery (CD) tooling.

During your Kubernetes migration, you can use MetricFire to maintain control over your devops processes. MetricFire offers Hosted Prometheus-based monitoring, which integrates seamlessly with Kubernetes and containerized software, alongside Hosted Graphite, which easily monitors traditional VM-based infrastructure. MetricFire also has a Heroku add-on, HG Heroku Monitoring, which makes it easy to monitor Heroku-based hybrid infrastructure. 

With MetricFire, you can get complete insight into what’s going on during your Kubernetes migration at both the application and infrastructure level.
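
For the VM side of a hybrid setup, Graphite accepts metrics over a simple plaintext protocol (metric path, value, timestamp). The sketch below sends one data point to a Hosted Graphite-style endpoint; the hostname, port, and API-key prefix are placeholders, so check MetricFire's documentation for the exact values for your account.

    # Sketch: push one metric to a Graphite-compatible endpoint over the
    # plaintext protocol ("<path> <value> <timestamp>\n").
    # HOST, PORT, and API_KEY are placeholders for this example.
    import socket
    import time

    HOST = "carbon.example.com"   # placeholder Graphite/Hosted Graphite endpoint
    PORT = 2003                   # conventional Carbon plaintext port
    API_KEY = "YOUR-API-KEY"      # Hosted Graphite prefixes metric paths with an API key

    def send_metric(path: str, value: float) -> None:
        line = f"{API_KEY}.{path} {value} {int(time.time())}\n"
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            sock.sendall(line.encode("ascii"))

    send_metric("vm01.disk.used_percent", 72.5)

The same dashboards can then show these VM metrics next to the Prometheus metrics scraped from Kubernetes.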

   

Projected Benefits of K8s

Once you start embracing Kubernetes in your ecosystem, you will start seeing some immediate benefits. 

  

Self Healing

Kubernetes automatically handles operational issues such as pod restarts or VM degradation. Such incidents should not create a flurry of alerts for your operations teams, since the Kubernetes control plane manages them and ensures that pods get rescheduled or that a new VM replaces a degraded one. 
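
Much of that self-healing is driven by health checks you declare on your containers. The sketch below (again using the Kubernetes Python client, with placeholder names, image, and paths) attaches an HTTP liveness probe so the kubelet restarts the container automatically when the health endpoint stops responding.

    # Sketch: a container with an HTTP liveness probe.
    # If /healthz stops returning 2xx, Kubernetes restarts the container on its own.
    from kubernetes import client

    probe = client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        initial_delay_seconds=10,   # give the app time to start
        period_seconds=15,          # check every 15 seconds
        failure_threshold=3,        # restart after 3 consecutive failures
    )

    container = client.V1Container(
        name="api",
        image="example/api:1.0",    # placeholder image
        ports=[client.V1ContainerPort(container_port=8080)],
        liveness_probe=probe,
    )

No alert needs to page a human for this class of failure; the restart happens inside the cluster, and monitoring only needs to flag it if restarts become frequent.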

    

Multi-cloud capability

Due in part to its portability, Kubernetes can host workloads running on a single cloud as well as workloads that are spread across multiple clouds. In addition, Kubernetes can easily scale its environment from one cloud to another.

These features mean that Kubernetes lends itself well to the multi-cloud strategies that many businesses are pursuing today. Other orchestrators may also work with multi-cloud infrastructures, but Kubernetes arguably goes above and beyond when it comes to multi-cloud flexibility.

  

Increased developer productivity

Kubernetes, with its declarative constructs and its ops friendly approach, has fundamentally changed deployment methodologies and it allows teams to use GitOps. Teams can scale and deploy faster than they ever could in the past. Instead of one deployment a month, teams can now deploy multiple times a day.

  

Portability and flexibility

Kubernetes works with virtually any type of container runtime. (A runtime is the software that runs containers; there are a few different options on the market today.) In addition, Kubernetes can work with virtually any type of underlying infrastructure -- whether a public cloud, a private cloud, or an on-premises server -- as long as the host operating system is a supported version of Linux or a recent version of Windows.

In these respects, Kubernetes is highly portable, because it can be used on a variety of different infrastructure and environment configurations. Most other orchestrators lack this portability; they are tied to particular runtimes or infrastructures.

However, there could be cases where Kubernetes might not be a good fit for your organization. Let's look at those cases now.

    

When Kubernetes might not be a good fit

While this section is shorter than the benefits section, the points discussed here are critical. There are key situations where Kubernetes doesn't help.

    

Your tech stack is sufficient 

Perhaps you are not an engineering-focused organization, and the stack you are using is sufficient for your needs. Say you are a small company selling coffee beans and other supplies through an e-commerce solution such as Magento. 

In such cases your entire business does not depend on the software that you are shipping / hosting. Your loads are consistent, and you have no need for autoscaling. Using Kubernetes in this situation might be overkill. 

   

You require high throughput from the underlying infrastructure

Sometimes it is beneficial to run applications on bare metal rather than on VMs or containers, usually to get maximum performance. In such cases, K8s might not be worth it. 

   

Developer Ramp Up

Kubernetes requires significant developer and devops expertise to run effectively. If you are able to meet your SLA and SLO requirements with your current systems, then adopting K8s might not be needed at all.

   

Your software is not containerized

In cases where you are running your software on VMs with no containerization, you are not ready for Kubernetes. Kubernetes is a container orchestrator, and will only be valuable if you have containers to orchestrate. If you try to move from VMs directly to Kubernetes all in one go, it will be extremely difficult.

   

To Kubernetes or to not Kubernetes, that is not the question

The real question is what you're trying to achieve, and why. Most companies do not use only Kubernetes, or only VMs. Most companies use Kubernetes in a specific area of their business to achieve results that Kubernetes is effective for.

Next, we will look at two case studies from Xplenty and DreamFactory illustrating how these two organizations adopted Kubernetes without overhauling their devops process. 

   

Xplenty’s move to Kubernetes 

Heroku for the web app, Kubernetes for the data engine

   

Xplenty provides a world-class data pipeline platform to a variety of organizations, ranging from PWC and Deloitte to Accenture. Their platform is extremely data-heavy and needs large computing capabilities to serve their customers. 

The Xplenty platform can be categorized into two parts: 

  1. The user-facing web application, which is also where their business logic is handled. This is hosted on Heroku.
  2. The data engine backend, which also integrates with a ton of APIs. This runs on Amazon Web Services' compute cloud (EC2).

Xplenty is very satisfied with Heroku's performance in hosting the web application. This is helped by the fact that Heroku offers a ton of add-ons to make devops processes easy. Xplenty uses the Sumo Logic add-on for logging and HG Heroku Monitoring for monitoring their workloads.

However, managing data engines over EC2 instances was a real pain point for them. One of the major drivers for Xplenty was the need for autoscaling the data engine. With traditional EC2 instances it took about 5-10 minutes for a new machine to come up and be ready to be used. 

Therefore, the best solution was to take a hybrid approach where the web application would still be hosted on Heroku, but the data processing engine would be migrated to Kubernetes. 

To start their journey, they first moved their backend workloads to containers, which helped the team build experience with containers. This made the migration process manageable and kept it from feeling like an overhaul. 

Then, they decided to use Kubernetes to orchestrate their workloads. 

As a result, the autoscaling event which previously took close to 10 minutes was now happening in 20 seconds. This was a huge win for them. 

An added advantage was the existing devops tools which Xplenty was using, namely Sumo Logic for logging and MetricFire for monitoring, which integrated very well with both the Kubernetes based platform and their Heroku app. 

Now, the Xplenty devops processes can be handled by a small team, giving them more time to care for their customers and go the extra mile on features.

In summary, choosing Kubernetes was the right move for Xplenty, but they did not overdo it by moving the entire platform on top of Kubernetes. They are still satisfied with the web app being hosted on Heroku, and they plan to leave it there. They got what they needed out of Kubernetes, and their migration is complete. 

This is a clear example of choosing Kubernetes only if it solves key pain points. Don’t adopt the technology blindly - make careful measured choices and use it for strategic areas of the stack. 

   


DreamFactory’s Kubernetes Adoption

For autoscaling cloud-based free trials

   

DreamFactory provides an enterprise API platform and is used by a wide array of organizations around the globe, including banks, financial institutions, automotive manufacturers, universities, and government at all levels. Unlike a traditional SaaS product, the majority of DreamFactory deployments have historically been on-premise. This model provides many advantages, including easier compliance with various regulatory requirements and the ability to deploy DreamFactory using a variety of solutions, Kubernetes included. 

DreamFactory's Kubernetes use expanded in 2019 with the launch of a hosted solution known as Genie. DreamFactory's hosted offering is built atop Kubernetes and allows each DreamFactory instance to be securely launched in complete isolation from other instances and scaled to suit specific customer requirements. Genie also serves as DreamFactory's trial environment, with new instances launched into the cluster via a self-serve trial registration process.

Thanks to Kubernetes' ability to launch, manage, and scale hundreds of DreamFactory environments, Genie's time to market was greatly reduced because the team wasn't required to refactor the platform to support the typical multi-tenant SaaS product requirements. 

Like Xplenty, monitoring plays a crucial role within DreamFactory's hosted environment. An array of Prometheus-backed Grafana dashboards is used to monitor platform health, and each customer is provided with their own dashboards for personal use. 

Monitoring played a prominent role throughout the recent US Presidential election due to DreamFactory's partnership with Decision Desk HQ. Decision Desk HQ is just one of seven election reporting agencies relied upon by major US media outlets, and throughout the election a great deal of reporting data passed through their DreamFactory hosted environment (see this blog post for more information). Extensive monitoring was in place over this period, with the team relying heavily upon it to monitor the cluster and respond to any events.

DreamFactory's example makes it clear that Kubernetes can help you scale up and down with ease. This makes it cost-effective, since you are not running at peak capacity all the time, and it also serves as a feature for customers who need seasonal scaling. 

       


     

In summary, DreamFactory's reasons for using Kubernetes:

  • The traditionally on-premise solution is not multi-tenant, yet they wanted to launch a hosted solution without refactoring the application code. Kubernetes made it easy to launch Dockerized DreamFactory applications into the cloud, each of which is completely isolated and scalable to suit specific customer needs (see the sketch after this list).
  • Kubernetes manages autoscaling (both up and down) to suit present hosted requirements. If trials are brisk one week, Kubernetes scales up to manage more DreamFactory instances. If trials are slow, around major holidays for instance, Kubernetes automatically scales the environment down, ensuring they're not overspending.
  • Using Kubernetes, Prometheus, and Grafana, DreamFactory is able to easily monitor the environment with little additional expense and overhead.
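
As an illustration of the first point above (and not DreamFactory's actual code), the pattern of launching each customer or trial into its own isolated, quota-limited slice of the cluster can be sketched with the Kubernetes Python client. The namespace name, labels, and quota values here are invented for the example.

    # Sketch: create an isolated, quota-limited namespace per trial instance.
    # Illustrative only; names, labels, and limits are placeholders.
    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    def provision_trial(trial_id: str) -> str:
        ns_name = f"trial-{trial_id}"
        core.create_namespace(
            body=client.V1Namespace(
                metadata=client.V1ObjectMeta(name=ns_name, labels={"tier": "trial"})
            )
        )
        # Cap what a single trial can consume so one tenant can't starve the rest.
        core.create_namespaced_resource_quota(
            namespace=ns_name,
            body=client.V1ResourceQuota(
                metadata=client.V1ObjectMeta(name="trial-quota"),
                spec=client.V1ResourceQuotaSpec(
                    hard={"pods": "5", "requests.cpu": "2", "requests.memory": "4Gi"}
                ),
            ),
        )
        return ns_name

    provision_trial("acme-1234")

The application's Deployment for that customer would then be created inside the new namespace, keeping each instance separate without multi-tenant changes to the application code.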

   

Major Drivers of Adopting K8s

There are some very strong indicators that an organization should adopt Kubernetes. If these indicators describe your organization, then migrating to Kubernetes won’t be overhauling your devops processes. Some of those key drivers are:

   

You are already using Docker

If your organization is already using Docker to package your applications, then adopting Kubernetes is the next logical step. Kubernetes will make managing those containers much easier and reduce a lot of operational overhead.

   

Need for Autoscaling 

If your application needs to scale to handle traffic spikes, then using Kubernetes might be a very good decision. It will allow your application to absorb sudden spikes in traffic while the underlying cluster scales to add capacity.
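
A minimal sketch of this, assuming the "web" Deployment from earlier and a cluster with the metrics server installed, is a HorizontalPodAutoscaler that grows and shrinks the replica count with CPU usage; the names and thresholds are placeholders.

    # Sketch: autoscale the "web" Deployment between 2 and 20 replicas
    # based on average CPU utilization. Names and thresholds are illustrative.
    from kubernetes import client, config

    config.load_kube_config()

    hpa = client.V1HorizontalPodAutoscaler(
        api_version="autoscaling/v1",
        kind="HorizontalPodAutoscaler",
        metadata=client.V1ObjectMeta(name="web-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="web"
            ),
            min_replicas=2,
            max_replicas=20,
            target_cpu_utilization_percentage=70,  # add pods above ~70% average CPU
        ),
    )

    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa
    )

A cluster autoscaler (or a managed node pool) then adds or removes the underlying machines as the pod count changes.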

   

Frequent releases to Production environment

If you follow an agile development cycle and you need to make smaller but more frequent releases to the production environment, then Kubernetes can certainly aid in achieving that. 

  

Existing ecosystem of flexible DevOps Tools

Kubernetes works very well with a lot of existing devops tools like Jenkins or Prometheus and if you are already using them in your organization then Kubernetes adoption will not feel like a heavy task. If you’re using hosted monitoring tools like Hosted Prometheus or Hosted Graphite by MetricFire, then when you transition to Kubernetes you don’t need to worry about how to maintain your monitoring stack.  

   

Better Return on Investment

Kubernetes does offer a better return on investment for the compute cost you pay for your applications. With the help of Kubernetes, you can pack more containers onto a single host and then scale on demand. This means you don't need to run at full capacity all the time. You can let Kubernetes do the heavy lifting of scheduling containers and scaling the underlying machines when the need arises. 
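
The bin packing that makes this possible comes from the resource requests and limits you declare per container: the scheduler uses the requests to pack many containers onto each node, while the limits stop any one of them from hogging the machine. A minimal sketch, with placeholder values:

    # Sketch: requests tell the scheduler how much to reserve (used for bin packing),
    # limits cap what the container may actually use. Values are placeholders.
    from kubernetes import client

    resources = client.V1ResourceRequirements(
        requests={"cpu": "100m", "memory": "128Mi"},  # reserved per replica
        limits={"cpu": "500m", "memory": "256Mi"},    # hard ceiling per replica
    )

    container = client.V1Container(
        name="web",
        image="nginx:1.25",  # placeholder image
        resources=resources,
    )

With 100m of CPU requested per replica, a 4-vCPU node can in principle hold dozens of these containers, which is where the cost savings come from.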

   

Major indicators that adopting Kubernetes will be a huge overhaul

Kubernetes is not for everyone, and it is important to understand this. There are scenarios where adopting Kubernetes will feel like a major overhaul. A few indicators of those scenarios are:

   

Insufficient Engineering Experience 

Kubernetes is a relatively new technology, and people are still learning it. If you introduce it into an organization where the engineering team has no prior experience, it might throw them off course. It could even lead to time sinks where the team spends all its time learning Kubernetes rather than building business-critical applications. This could be very costly for your business.

   

Large Footprint of Stateful Applications

Some people argue that Kubernetes is still not ready to host stateful applications, despite tremendous improvements being made in that area with each release. It is indeed true that managing stateful workloads in containers is rather tricky, and if your stack has a large footprint of such workloads, then migrating to Kubernetes might turn out to be a huge undertaking. 

    

Incompatible DevOps Tool Kit

If you have been using traditional devops tools like Cacti or Nagios, then adopting Kubernetes and making them work together will be a major overhaul for you. As you may have noticed in both the Xplenty and DreamFactory scenarios, adopting Kubernetes was easy because they were already using tools like MetricFire, Sumo Logic, and Prometheus. This is not limited to monitoring or logging tools; it also includes the entire Continuous Integration (CI) / Continuous Delivery (CD) tooling. A devops tool kit plays a crucial role in any kind of technology adoption. 

    

Security, Testing, and Compliance

Adopting Kubernetes means installing completely new technology on your infrastructure, and that can certainly be overwhelming. Also, Kubernetes provides a uniform view of the underlying infrastructure, so you could run into compliance issues when hosting data-sensitive workloads alongside other workloads. These issues can be addressed with careful network segmentation and the right network policies (one is sketched below), but that is an undertaking in itself.
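
As one example of such a policy (a hedged sketch, with a placeholder namespace name), the following restricts pods in a "sensitive" namespace so they only accept traffic from other pods in that same namespace:

    # Sketch: only pods within the "sensitive" namespace may talk to its pods.
    # Namespace name is a placeholder; a network plugin that enforces
    # NetworkPolicy (e.g. Calico or Cilium) must be installed in the cluster.
    from kubernetes import client, config

    config.load_kube_config()

    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="same-namespace-only", namespace="sensitive"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),   # applies to every pod in the namespace
            policy_types=["Ingress"],
            ingress=[
                client.V1NetworkPolicyIngressRule(
                    _from=[client.V1NetworkPolicyPeer(pod_selector=client.V1LabelSelector())]
                )
            ],
        ),
    )

    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace="sensitive", body=policy
    )

Compliance-sensitive workloads can then share a cluster with other workloads without sharing a network path.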

Using Kubernetes also means containerizing the application. Therefore, you not only need to change the infrastructure but also the entire test bench of the application, so that it factors in containers and Kubernetes-specific settings. 

   

Conclusion

We hope this blog has given you enough insight into the factors to consider before adopting Kubernetes. Designing the platform that hosts your applications is a very important decision and should be made carefully. If you feel like you need help on your journey of Kubernetes adoption, feel free to reach out to us!
