Monitoring is one of the most important aspects of infrastructure operations. An effective monitoring strategy helps you optimize infrastructure usage, plan capacity, and resolve incidents quickly.
While monitoring predates DevOps, DevOps has transformed the software development process to such an extent that monitoring has had to evolve with it. The overall pace of software development has increased: teams now automate integration and testing and deploy software to the cloud on short timelines with continuous delivery.
With DevOps there is more to monitor than ever: from integration and provisioning to deployment, teams need DevOps monitoring strategies that cover every aspect of the project.
MetricFire specializes in monitoring systems and you can use our product with minimal configuration to gain in-depth insight into your environments. If you would like to learn more about it please book a demo with us, or sign up for the free trial today.
Today we will learn about some important monitoring strategies and concepts to help you gain a deeper understanding of your infrastructure and applications.
Determine what you should monitor in your applications. Monitoring targets can be divided into several primary categories, and you will likely want to cover at least one aspect of each category.
These categories include infrastructure monitoring and application performance monitoring.
Infrastructure monitoring is a key component of any application monitoring strategy. It covers all the server metrics and, with containers in the picture, all the container-specific metrics as well. Important metrics include CPU, memory, disk, and network usage.
These metrics are fairly straightforward and already widely discussed. Let’s dive a little deeper into some Kubernetes metrics.
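For the classic server metrics, a few PromQL queries go a long way. As a rough sketch, assuming your nodes run the Prometheus node_exporter, queries along these lines cover CPU, memory, and disk:

```promql
# CPU utilization per host: fraction of time the CPUs were not idle over 5 minutes
1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))

# Memory utilization: fraction of memory currently in use
1 - (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)

# Disk utilization per filesystem (ignoring tmpfs/overlay mounts)
1 - (node_filesystem_avail_bytes{fstype!~"tmpfs|overlay"}
     / node_filesystem_size_bytes{fstype!~"tmpfs|overlay"})
```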
Containers are the building blocks of containerized applications. Container CPU usage refers to the amount of CPU resources consumed by containers in production. Memory usage is the measure of memory resources consumed. CPU resources are measured in CPU cores while memory is measured in bytes.
Pods are collections of containers and as such pod CPU usage is the sum of the CPU usage of all containers that belong to a pod. Similarly, pod memory usage is the total memory usage of all containers belonging to the pod.
Node CPU usage is the number of CPU cores being used on the node by all pods running on that node. Similarly, node memory usage is the total memory usage of all pods.
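As a rough sketch, the usage metrics above map onto PromQL over the cAdvisor metrics exposed by the kubelet. Exact label names (for example a node label on container metrics) depend on your scrape and relabel configuration, so treat these as starting points:

```promql
# Container CPU usage in cores, averaged over 5 minutes
sum by (namespace, pod, container) (rate(container_cpu_usage_seconds_total{container!=""}[5m]))

# Pod CPU usage: sum over the pod's containers
sum by (namespace, pod) (rate(container_cpu_usage_seconds_total{container!=""}[5m]))

# Pod memory usage in bytes
sum by (namespace, pod) (container_memory_working_set_bytes{container!=""})

# Node CPU usage: everything running on each node
# (assumes your scrape config adds a "node" label to cAdvisor metrics)
sum by (node) (rate(container_cpu_usage_seconds_total{container!=""}[5m]))
```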
You can think of Kubernetes namespaces as boxes. DevOps teams can create separate boxes to isolate the resources belonging to individual applications or teams. Namespace resource usage is the sum of CPU or memory usage of all pods that belong to that namespace.
The sum of CPU or memory usage of all pods running on nodes belonging to the cluster gives us the CPU or memory usage for the entire cluster.
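Rolling the same usage metrics up by namespace or across the whole cluster is just a matter of changing the aggregation, for example:

```promql
# Namespace CPU usage (cores) and memory usage (bytes)
sum by (namespace) (rate(container_cpu_usage_seconds_total{container!=""}[5m]))
sum by (namespace) (container_memory_working_set_bytes{container!=""})

# Cluster-wide CPU and memory usage
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m]))
sum(container_memory_working_set_bytes{container!=""})
```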
You can think of container resource requests as the amount of CPU or memory a container asks for up front. The Kubernetes scheduler uses requests to place pods, so they act as a soft reservation rather than a hard cap on what the container can consume in production.
Pod CPU or memory requests are the sum of the CPU or memory requests of all containers belonging to the pod.
Node CPU requests are a sum of the CPU requests for all pods running on that node. Similarly, node memory requests are a sum of memory requests of all pods.
Namespace CPU or memory requests are the sum of the CPU or memory requests of all pods belonging to that namespace.
Cluster CPU or memory requests are the sum of the requests for all pods running on nodes belonging to the cluster.
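Requests at every level come from kube-state-metrics. As a sketch, assuming kube-state-metrics v2.x metric names (older releases exposed names like kube_pod_container_resource_requests_cpu_cores instead):

```promql
# Pod CPU requests: sum of its containers' requests
sum by (namespace, pod) (kube_pod_container_resource_requests{resource="cpu"})

# Node memory requests: everything scheduled on each node
sum by (node) (kube_pod_container_resource_requests{resource="memory"})

# Namespace and cluster-wide CPU requests
sum by (namespace) (kube_pod_container_resource_requests{resource="cpu"})
sum(kube_pod_container_resource_requests{resource="cpu"})
```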
Container CPU limits are a hard limit on the amount of CPU a container can consume in production. Memory limits are the maximum amount of bytes that a container can consume in production.
Pod CPU or memory limits are the sum of the CPU or memory limits of all containers belonging to the pod.
Node CPU limits are the sum of CPU limits for all pods running on that specific node. Node memory limits, on the other hand, are a sum of the memory limits of all pods.
Namespace CPU or memory limits are the sum of the limits of all pods belonging to that namespace.
Cluster CPU or memory limits are the sum of the limits for all pods running on nodes belonging to the cluster.
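Limits follow exactly the same pattern, using kube_pod_container_resource_limits (again assuming kube-state-metrics v2.x names):

```promql
# Pod CPU limits
sum by (namespace, pod) (kube_pod_container_resource_limits{resource="cpu"})

# Node memory limits
sum by (node) (kube_pod_container_resource_limits{resource="memory"})

# Namespace and cluster-wide CPU limits
sum by (namespace) (kube_pod_container_resource_limits{resource="cpu"})
sum(kube_pod_container_resource_limits{resource="cpu"})
```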
In cloud environments, Kubernetes nodes usually refer to cloud provider instances. Node CPU capacity is the total number of CPU cores on the node. Node memory capacity is the total amount of memory, in bytes, available on the node. For example, an n1-standard-1 instance on Google Cloud has 1 vCPU and 3.75 GB of memory.
Cluster CPU capacity is the sum of the CPU capacities of all Kubernetes nodes in the cluster. Cluster memory capacity is the sum of the memory capacities of all nodes belonging to the cluster. For example, a cluster with 4 n1-standard-1 instances has 4 vCPUs and 15 GB of memory.
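Capacity also comes from kube-state-metrics. A sketch of node and cluster capacity queries (kube_node_status_allocatable is often the better choice, since it excludes resources reserved for the system):

```promql
# Node CPU capacity (cores) and memory capacity (bytes)
kube_node_status_capacity{resource="cpu"}
kube_node_status_capacity{resource="memory"}

# Cluster-wide capacity: sum across all nodes
sum(kube_node_status_capacity{resource="cpu"})
sum(kube_node_status_capacity{resource="memory"})
```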
CPU request commitment is the ratio of CPU requests for all pods running on a node to the total CPU available on that node. In the same way, memory commitment is the ratio of pod memory requests to the total memory capacity of that node. We can also calculate cluster level request commitment by comparing CPU/memory requests on a cluster level to total cluster capacity.
Request commitments give us an idea of how much of the node or cluster is committed in terms of soft resource usage limits.
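Combining the request and capacity queries above gives the commitment ratios. A rough sketch:

```promql
# Node CPU request commitment: requested cores / node CPU capacity
sum by (node) (kube_pod_container_resource_requests{resource="cpu"})
  / sum by (node) (kube_node_status_capacity{resource="cpu"})

# Cluster memory request commitment
sum(kube_pod_container_resource_requests{resource="memory"})
  / sum(kube_node_status_capacity{resource="memory"})
```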
CPU limit commitment is the ratio of CPU limits for all pods running on a node to the total CPU available on that node. Similarly, memory commitment is the ratio of pod memory limits to the total memory capacity of that node. Cluster level limit commitments can be calculated by comparing total CPU/memory limits to cluster capacity.
Limit commitments give us an idea of how much of the node or cluster is committed in terms of hard CPU and memory limits.
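Limit commitment is the same calculation with limits in the numerator. Note that it can exceed 100%, since Kubernetes allows limits to be overcommitted:

```promql
# Node CPU limit commitment
sum by (node) (kube_pod_container_resource_limits{resource="cpu"})
  / sum by (node) (kube_node_status_capacity{resource="cpu"})

# Cluster memory limit commitment
sum(kube_pod_container_resource_limits{resource="memory"})
  / sum(kube_node_status_capacity{resource="memory"})
```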
With the emphasis on pay-as-you-go billing models for cloud deployments, resource utilization is an important metric to monitor and has major implications for cost control.
CPU utilization is the ratio of CPU resources being currently consumed by all pods running on a node to the total CPU available on that node. Memory utilization is the ratio of memory usage by all pods to the total memory capacity of that node.
Cluster resource utilization will compare resource usage (both CPU and memory) for all pods with the total resource capacity of all nodes.
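Utilization swaps actual usage into the numerator. A sketch, with the same caveat about the node label on cAdvisor metrics depending on your scrape configuration:

```promql
# Node CPU utilization: cores in use by all pods / node CPU capacity
sum by (node) (rate(container_cpu_usage_seconds_total{container!=""}[5m]))
  / sum by (node) (kube_node_status_capacity{resource="cpu"})

# Cluster memory utilization
sum(container_memory_working_set_bytes{container!=""})
  / sum(kube_node_status_capacity{resource="memory"})
```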
Node CPU saturation is the measure of requests for CPU which cannot be fulfilled because of unavailability. Similarly, memory saturation on the node is a measure of the memory requests which cannot be met due to unavailability.
Saturation on the cluster level can be calculated based on saturation numbers across all nodes.
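Saturation is harder to measure directly. Common proxies built on node_exporter are load average relative to core count for CPU, and the major page fault rate for memory. A rough sketch, assuming one node_exporter target per node:

```promql
# CPU saturation proxy: 1-minute load average per core (> 1 means the CPUs are oversubscribed)
node_load1
  / on (instance) count by (instance) (node_cpu_seconds_total{mode="idle"})

# Memory saturation proxy: major page faults per second
rate(node_vmstat_pgmajfault[5m])
```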
These metrics might seem overwhelming, but it is extremely easy to scrape and monitor them with Prometheus. If you have a Prometheus instance running, all you need to do is:
<p> CODE: https://gist.github.com/denshirenji/c7d3515720d1ac571fc29c1c0d305a1f.js </p>
<p> CODE: https://gist.github.com/denshirenji/5ebd86bed77a0202972056892d5fefea.js </p>
Once you have deployed these, you can chart the metrics in Grafana and build dashboards.
Application performance monitoring is where application logs are collected, centralized, and searched, and where tracing and profiling of the application become available.
While this list is not exhaustive by any means, it should give you an idea of what your existing monitoring tools offer and where the gaps in your DevOps monitoring strategy are.
This blog should give you some insight into which metrics to monitor in your ecosystem. You can also expose custom metrics, and MetricFire can help you scrape them and plot relevant dashboards. All in a few clicks :)
If you need help setting up these metrics, feel free to reach out to me on LinkedIn. Additionally, MetricFire can help you monitor your applications across various environments. Monitoring is essential for any application stack, and you can get started with MetricFire’s free trial.
Robust monitoring will not only help you meet SLAs for your application but also ensure a sound sleep for the operations and development teams. If you would like to learn more about it please book a demo with us.