How to monitor your Kubernetes metrics server


Table of Contents

  1. Introduction
  2. What is Kubernetes metrics server?
  3. What is the Kubernetes metrics server used for?
  4. Kubernetes metrics server requirements
  5. Metrics to watch
    1. Cluster state metrics
    2. Resource metrics
    3. Control plane metrics
  6. How to deploy a metrics server?
  7. Querying metrics API
  8. Using Kubernetes dashboard for watching metrics
  9. Using Hosted Graphite by MetricFire to monitor Kubernetes
  10. Conclusion 

Introduction

In this article, we will look at what the Kubernetes metrics server is and what it is used for. We will also learn how to set up a metrics server and use it to monitor Kubernetes metrics. Finally, we will explore how to use Hosted Graphite by MetricFire for monitoring Kubernetes metrics.


To learn more about monitoring Kubernetes metrics using Hosted Graphite by MetricFire, book a demo with the MetricFire team or sign up for the free trial today.


What is Kubernetes metrics server?

The Kubernetes metrics server is a cluster add-on that collects resource metrics from the kubelets and exposes the aggregated results to the Kubernetes API server through the Metrics API, where autoscaling pipelines can consume them. The metrics server is intended for autoscaling purposes only.


The main advantages of Kubernetes metrics server:

  1. Efficient use of resources.
  2. Scalable support for up to 5000 cluster nodes.
  3. Single deployment for most clusters.
  4. Regular collection of metrics, every 15 seconds by default.


What is the Kubernetes metrics server used for?

Let’s take a look at the cases for which you can use a metrics server.

  1. Horizontal autoscaling based on CPU or memory. The Horizontal Pod Autoscaler (HPA) is implemented as a control loop whose period is set by a controller manager flag (--horizontal-pod-autoscaler-sync-period, 15 seconds by default). During each period, the controller manager queries resource usage against the metrics specified in each HPA definition, obtaining them from the Resource Metrics API or from the Custom Metrics API. Horizontal autoscaling does not apply to objects that cannot be scaled, such as DaemonSets.
  2. Automatically configuring or suggesting the resources needed by containers. The Vertical Pod Autoscaler (VPA) can automatically set resource requests based on observed usage, allowing the scheduler to place each pod on a node with the appropriate amount of resources available. It can also maintain the ratio between limits and requests specified in the initial container configuration. Depending on usage over time, it can either scale down pods that are over-requesting resources or scale up pods that are under-requesting them.
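
The HPA scaling decision described above boils down to a single documented formula: desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue). A minimal Python sketch of that calculation, using made-up example metric values:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    # Replica count chosen by the Horizontal Pod Autoscaler:
    # desired = ceil(current * currentMetricValue / desiredMetricValue)
    return math.ceil(current_replicas * current_metric / target_metric)

# Example: 4 replicas averaging 90% CPU against a 60% target scale out to 6.
print(desired_replicas(4, 90, 60))  # 6
```

The real controller additionally applies tolerances and readiness checks before acting on this number.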


You should use tools other than the metrics server in the following cases:

  1. Monitoring cluster metrics that are not Kubernetes-specific.
  2. Obtaining an accurate source of resource usage metrics.
  3. Horizontal autoscaling based on resources other than CPU or memory.


Kubernetes metrics server requirements

Before using Metrics Server, you need to check that your network and cluster have the following settings:

  • The metrics server must be reachable from the kube-apiserver by container IP address (or node IP if hostNetwork is enabled).
  • The aggregation layer must be enabled on the kube-apiserver.
  • The cluster nodes must have kubelet Webhook authentication and authorization enabled.
  • If kubelet certificate validation is enabled on the metrics server, the kubelet certificates must be signed by the cluster CA.
  • The container runtime must implement the container metrics RPCs or have cAdvisor support.


Metrics to watch

Let’s look at the main groups of Kubernetes metrics that can be monitored using the metrics server.


Cluster state metrics

These metrics show the health and availability of Kubernetes objects and are used to keep track of whether pods are running as expected. They give you high-level information about the cluster and its health and can help identify problems with nodes and pods. Cluster state metrics include node status and the counts of desired, current, available, and unavailable pods.
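
Many of these counts can be derived directly from the API's pod list. As an illustration, the following Python sketch counts pod phases from a trimmed, hypothetical payload shaped like the output of kubectl get pods -o json:

```python
import json
from collections import Counter

# Hypothetical, trimmed payload in the shape returned by `kubectl get pods -o json`.
sample = json.loads("""
{"items": [
  {"metadata": {"name": "web-1"}, "status": {"phase": "Running"}},
  {"metadata": {"name": "web-2"}, "status": {"phase": "Running"}},
  {"metadata": {"name": "job-1"}, "status": {"phase": "Pending"}}
]}
""")

# Tally pods by lifecycle phase (Running, Pending, Succeeded, Failed, Unknown).
phases = Counter(pod["status"]["phase"] for pod in sample["items"])
print(dict(phases))  # {'Running': 2, 'Pending': 1}
```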


Resource metrics

These metrics allow you to understand whether the cluster can handle its current workloads and whether it has headroom for new ones. Resource usage can be tracked at different levels of the cluster. This group includes the following metrics: memory requests, memory limits, allocatable memory, memory utilization, CPU requests, CPU limits, allocatable CPU, CPU utilization, and disk utilization.
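
Both CPU and memory values in these metrics are expressed as Kubernetes quantity strings (for example 250m millicores or 128Mi of memory). A minimal Python parser, covering only the common suffixes, shows how a utilization ratio can be computed from them:

```python
# Minimal parser for common Kubernetes quantity suffixes; the full quantity
# format supports more (n, u, k, M, G, Ti, ...).
MEM_SUFFIX = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}

def parse_cpu(quantity: str) -> float:
    # "250m" -> 0.25 cores; "2" -> 2.0 cores
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000
    return float(quantity)

def parse_memory(quantity: str) -> int:
    # "128Mi" -> bytes
    for suffix, factor in MEM_SUFFIX.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # plain bytes

# CPU utilization = usage / allocatable, with made-up example values.
print(round(parse_cpu("250m") / parse_cpu("2"), 3))  # 0.125
print(parse_memory("128Mi"))  # 134217728
```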


Control plane metrics

This group includes metrics that allow you to monitor the operation of the primary services and resources for managing the cluster, such as the API server, controller managers, schedulers, and data stores.


How to deploy a metrics server?

Some clusters include a metrics server deployment by default. To check whether the metrics server is running on your cluster, run the following command:

kubectl get pods --all-namespaces | grep metrics-server


If the metrics server is running, the output will list the running metrics-server pods. Otherwise, run the following command to install the latest version of the metrics server.

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml


Querying metrics API

After deploying the metrics server, you can get metrics for any node or pod using kubectl get --raw. Use the following commands to get metrics for all nodes and pods.

# Get the metrics for all nodes 
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes


# Get the metrics for all pods 
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods


You can also get metrics separately for one selected node or pod. To do this, you need to specify its name, as shown in the following commands.

# Get the metrics for node <node_name>
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes/<node_name> | jq '.'
# Get the metrics for pod <pod_name> in namespace <namespace>
kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/<namespace>/pods/<pod_name> | jq '.'


To list all nodes, run kubectl get nodes; to list the pods in a given namespace, run kubectl get pods --namespace <namespace>.

The Metrics API returns the result in JSON format. To display the JSON in a human-readable form in the terminal, pipe the output through the jq utility.
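
To process such a response programmatically rather than with jq, you can parse the JSON in any language. A Python sketch over a trimmed, hypothetical NodeMetricsList response (the field names follow the metrics.k8s.io/v1beta1 schema):

```python
import json

# Trimmed, hypothetical response in the shape of
# GET /apis/metrics.k8s.io/v1beta1/nodes (a NodeMetricsList).
raw = """
{"kind": "NodeMetricsList", "apiVersion": "metrics.k8s.io/v1beta1",
 "items": [
   {"metadata": {"name": "node-a"},
    "window": "15s",
    "usage": {"cpu": "250m", "memory": "1024000Ki"}}
 ]}
"""

# Print each node's name alongside its CPU and memory usage quantities.
for node in json.loads(raw)["items"]:
    usage = node["usage"]
    print(f'{node["metadata"]["name"]}: cpu={usage["cpu"]} memory={usage["memory"]}')
```

In a live cluster, the same JSON could be fetched with kubectl get --raw and piped into a script like this.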


Use the kubectl top command to get the current CPU and memory usage for all or individual nodes or pods. The following command returns resource usage by all pods.

kubectl top pod


You can read more about how to use the kubectl tool and all its commands here.


Using Kubernetes dashboard for watching metrics

Kubernetes dashboard is a graphical tool for monitoring and managing a cluster. It provides much of the same functionality as kubectl. The Kubernetes dashboard has a panel that provides a convenient breakdown of metrics for each node and pod. In addition, the dashboard has charts that allow you to track how the metrics have changed over a certain period of time.


To install the Kubernetes dashboard (version v1.10.1 is shown here; check the project’s releases page for the latest version), run the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml


To access the dashboard interface through the browser, run the following command:

kubectl proxy


Next, generate an authentication token and enter it on the dashboard login screen. The following command assumes an admin-user service account exists in the kubernetes-dashboard namespace:

kubectl --namespace kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')


After completing the authentication, the dashboard graphical interface will be available to you, and you can use it to monitor metrics and edit Kubernetes objects.


Using Hosted Graphite by MetricFire to monitor Kubernetes

Graphite is a metrics monitoring tool that collects, stores, and displays real-time time-series data, making it easy to detect and correct errors and to improve your systems. MetricFire offers Hosted Graphite, so you don’t have to worry about installing, configuring, and maintaining your own monitoring system; instead, you view your metrics on a web page. With Hosted Graphite, you can track all kinds of metrics, including CPU- and memory-based ones.


Benefits of using MetricFire:

  1. No vendor lock-in. MetricFire provides continuous, uninterrupted access to your data at any time.
  2. Easy budgeting. You can choose the pricing plan that suits you, depending on your needs.
  3. Transparency. MetricFire is transparent about all aspects of operating its SaaS monitoring system; you can see its internal system metrics on its public status page.
  4. Robust support. If you run into difficulties, MetricFire engineers will give a comprehensive answer to any question by phone or video conference.


If you want to know about monitoring Kubernetes metrics with MetricFire, book a demo with our engineers or sign up for the MetricFire free trial today.


Conclusion 

The Kubernetes metrics server is a powerful tool for CPU- and memory-based autoscaling, but it requires some configuration before it works. An alternative is Hosted Graphite by MetricFire, which makes it easy to monitor all of your Kubernetes metrics.


Book a demo with the MetricFire team or sign up for the MetricFire free trial to find out more options that MetricFire has to offer.
