First Contact with Prometheus Exporters


Introduction 

Creating Prometheus exporters can be complicated, but it doesn’t have to be. In this article, we will cover the basics of Prometheus and walk you through two step-by-step guides showing Python-based exporter implementations. The first guide is about third-party exporters that expose metrics in a standalone way, independently of the application they monitor. The second covers exporters that expose built-in application metrics. Let’s begin! 

 

Key Takeaways

  1. Unlike other monitoring tools that use a push approach, Prometheus employs a pull approach for data collection. The Prometheus server periodically retrieves metrics from client components, ensuring data is collected when the server is up and ready.
  2. Exporters play a crucial role in Prometheus monitoring. They are software components that produce metrics data and expose it via an HTTP server. Metrics are exposed in a format that Prometheus can scrape.
  3. After metrics are collected and stored, Prometheus offers various visualization options. While the Expression Browser is a basic option, more advanced users often turn to tools like Grafana for visualization. Custom-made visualization systems querying Prometheus's API are also possible.
  4. When an application natively exposes key metrics, it can act as a Prometheus client. However, integrating an exporter into an existing application should be done carefully to avoid regressions.
  5. The article provides step-by-step guides for implementing Prometheus exporters in Python. It covers two metric types: Counters (for metrics that increase over time, like system uptime) and Gauges (for metrics that fluctuate, like memory and CPU usage). Examples include a standalone exporter for system resource usage and a Flask web application exporter for request response time and uptime.

 

Quick Overview of Prometheus Concepts

Prometheus is a leading monitoring tool for time series metrics that has applied original concepts since its introduction in 2012. Specifically, Prometheus’s pull approach to data collection, along with its exporters and flexible visualization, helps it stand out against other popular monitoring tools like Graphite and InfluxDB.

 

Pull approach to data collection

The pull approach to data collection consists of having the server component (the Prometheus server) periodically retrieve metrics from client components. This pulling is commonly referred to as “scraping” in the Prometheus world. With this model, client components are only responsible for producing metrics and making them available for scraping.

Tools like Graphite, InfluxDB, and many others use a push approach, where the client component has to produce metrics and push them to the server component. The client therefore determines when to push the data, regardless of whether the server needs it or whether it is ready to collect it.

The Prometheus pull approach is innovative because, by requiring the server -- not the client -- to initiate the collection, it scrapes metrics only when the server is up and running and when the data is ready. This approach requires that each client component enable a specific capability called a Prometheus exporter.

 

Prometheus exporters

Exporters are essential pieces within a Prometheus monitoring environment. Each program acting as a Prometheus client holds an exporter at its core. An exporter consists of software that produces metrics data and an HTTP server that exposes the generated metrics via a given endpoint. Metrics are exposed according to a specific format that the Prometheus server can read and ingest when scraping. We will discuss how to produce metrics, their format, and how to make them available for scraping later in this article.

 

Flexible visualization

Once metrics have been scraped and stored by a Prometheus server, there are various ways to visualize them. The first and easiest approach is to use the Prometheus Expression Browser. However, due to its basic visualization capabilities, the Expression Browser is mainly helpful for debugging purposes (checking the availability or last values of certain metrics). For better and more advanced visualization, users often opt for other tools like Grafana. Furthermore, in some contexts, users may have custom-made visualization systems that directly query the Prometheus API to retrieve the metrics that need to be visualized.

Basic Architecture of a Prometheus environment with a server component, two client components, and an external visualization system.  

  


  

Implementing a Prometheus Exporter

From an application perspective, there are two kinds of situations where you can implement a Prometheus exporter: exporting built-in application metrics, and exporting metrics from a standalone or third-party tool.

 

Application built-in exporter

This is typically the case when a system or an application exposes its key metrics natively. The most interesting example is an application built from scratch, since all the requirements it needs to act as a Prometheus client can be studied and integrated into the design. Sometimes, we may need to integrate an exporter into an existing application. This requires updating the code -- and possibly the design -- to add the capabilities needed to act as a Prometheus client. Integrating into an existing application can be risky because, if not done carefully, those changes may introduce regressions in the application’s core functions (e.g. bugs or performance overhead due to changes in code and/or design). If you must do this, be sure to have sufficient tests in place.

 

Standalone/third-party exporter

Sometimes the desired metrics can be collected or computed externally. An example of this is when the application provides APIs or logs where metric data can be retrieved. This data can then be used as is, or it may need further processing to generate metrics (an example is this MySQL exporter). 

You may also require an external exporter if the metrics need to be computed through an aggregation process by a dedicated system. As an example, think of a Kubernetes cluster where you need metrics that show the CPU resources used by sets of pods grouped by labels. Such an exporter may rely on the Kubernetes API and work as follows (a rough sketch follows the steps below):

(i) retrieve the current CPU usage along with the labels of each individual pod,

(ii) sum up the usage based on pod labels, and

(iii) make the results available for scraping.
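
Below is a rough, illustrative sketch of such an aggregation exporter, using the official kubernetes Python client together with prometheus_client. It assumes a cluster where the metrics server is installed (so the metrics.k8s.io API is available) and that the returned PodMetrics objects carry an app label in their metadata; the metric and label names are placeholders rather than a reference implementation.

import time
from collections import defaultdict

from kubernetes import client, config
from prometheus_client import Gauge, start_http_server

# CPU usage aggregated per value of the (assumed) "app" pod label
POD_CPU = Gauge('pods_cpu_usage_nanocores', 'CPU usage summed per app label', ['app'])

def collect_once(api):
    # (i) retrieve the current CPU usage and labels of each pod via the metrics.k8s.io API
    metrics = api.list_cluster_custom_object('metrics.k8s.io', 'v1beta1', 'pods')
    usage = defaultdict(int)
    for pod in metrics.get('items', []):
        app = pod['metadata'].get('labels', {}).get('app', 'unlabeled')
        for container in pod.get('containers', []):
            cpu = container['usage']['cpu']        # e.g. '4379605n' (nanocores)
            if cpu.endswith('n'):
                usage[app] += int(cpu[:-1])        # (ii) sum up usage per label
    for app, value in usage.items():
        POD_CPU.labels(app).set(value)             # (iii) expose the results for scraping

if __name__ == '__main__':
    config.load_kube_config()                      # or load_incluster_config() inside a cluster
    start_http_server(9999)
    while True:
        collect_once(client.CustomObjectsApi())
        time.sleep(30)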

 

 

Examples of Exporter Implementation Using Python

In this section, we'll show step-by-step how to implement Prometheus exporters using Python. We’ll demonstrate two examples covering the following metric types: 

  • Counter: represents a metric whose value can only increase over time; the value is reset to zero on restart. Such a metric can be used to export a system’s uptime (the time elapsed since the last reboot of that system). 
  • Gauge: represents a metric whose value can arbitrarily go up and down over time. It can be used to expose memory and CPU usage over time.
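
As a minimal illustration of both types with the Python client library (the metric names below are placeholders):

from prometheus_client import Counter, Gauge

# Counter: the value only goes up, and restarts from zero when the process restarts
uptime_seconds = Counter('app_uptime_seconds_total', 'Seconds elapsed since the app started')
uptime_seconds.inc()      # increment by 1
uptime_seconds.inc(5)     # or by any positive amount

# Gauge: the value can go up and down arbitrarily
memory_usage = Gauge('app_memory_usage_percent', 'Current memory usage in percent')
memory_usage.set(42.5)    # set to the latest observed value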

 

We will go through two scenarios: In the first one, we consider a standalone exporter exposing CPU and memory usage in a system. The second scenario is a Flask web application that exposes its request response time and also its uptime.

 

Standalone/third-party exporter

This scenario demonstrates a dedicated Python exporter that periodically collects and exposes CPU and memory usage on a system. 

For this program, we’ll need to install the Prometheus client library for Python.

$ pip install prometheus_client

  

We’ll also need to install psutil, a powerful library for extracting system resource consumption. 

$ pip install psutil

  

Our final exporter code looks like this:  (see source gist)

  
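What follows is a minimal sketch rather than the gist itself; the gauge name system_usage, its resource_type label, and port 9999 match the walkthrough later in this section, and the layout keeps the line references cited there applicable.

from prometheus_client import start_http_server, Gauge
import psutil
import time

# One gauge, with a label distinguishing CPU from memory usage
UsageMetric = Gauge('system_usage', 'Current system resource usage', ['resource_type'])

if __name__ == '__main__':
    # Expose the metrics over HTTP on port 9999
    start_http_server(9999)
    while True:
        # Refresh the gauge values from psutil every second
        UsageMetric.labels('CPU').set(psutil.cpu_percent())
        UsageMetric.labels('Memory').set(psutil.virtual_memory().percent)
        time.sleep(1)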

  

The code can be downloaded and saved in a file: 

  

$ curl -o prometheus_exporter_cpu_memory_usage.py \

-s -L https://git.io/Jesvq

 

The following command then allows you to start the exporter:

 

$ python ./prometheus_exporter_cpu_memory_usage.py

  

We can check exposed metrics through a local browser:  http://127.0.0.1:9999. The following metrics, among other built-in metrics enabled by the Prometheus library, should be provided by our exporter (values may be different according to the load on your computer).

  

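For illustration, in the Prometheus exposition format the exporter’s output includes something like the following (placeholder values):

# HELP system_usage Current system resource usage
# TYPE system_usage gauge
system_usage{resource_type="CPU"} 11.6
system_usage{resource_type="Memory"} 64.3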

  

Simple, isn’t it? This is in part due to the magic of Prometheus client libraries, which are officially available for Golang, Java, Python, and Ruby. They hide the boilerplate and make it easy to implement an exporter. The fundamentals of our exporter can be summarized by the following points:

  • Import the Prometheus client Python library (line 1). 
  • Instantiate an HTTP server to expose metrics on port 9999 (line 10).
  • Declare a gauge metric and name it system_usage (line 6).
  • Set values for metrics (lines 13 and 14).
  • The metric is declared with a label (resource_type, line 6), leveraging the concept of a multi-dimensional data model. This lets you keep a single metric name and use labels to differentiate the CPU and memory values. You may also choose to declare two separate metrics instead of using a label; both options are sketched below. Either way, we highly recommend that you read the best practices on metric naming and labels.
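
For comparison, the two options look roughly like this (values are placeholders):

from prometheus_client import Gauge

# Option 1: a single metric differentiated by a label (as in the exporter above)
system_usage = Gauge('system_usage', 'Current system resource usage', ['resource_type'])
system_usage.labels('CPU').set(17.0)
system_usage.labels('Memory').set(62.0)

# Option 2: two separate metrics, no label
cpu_usage = Gauge('system_cpu_usage', 'Current CPU usage')
memory_usage = Gauge('system_memory_usage', 'Current memory usage')
cpu_usage.set(17.0)
memory_usage.set(62.0)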

 

Exporter for a Flask application

This scenario demonstrates a Prometheus exporter for a Flask web application. Unlike the standalone case, an exporter for a Flask web application relies on a WSGI dispatching application that works as a gateway, routing requests to both the Flask application and the Prometheus client. This is needed because the HTTP server started by Flask cannot consistently also serve as a Prometheus client, and the HTTP server started by the Prometheus client library would not serve Flask requests.

To enable this integration, the Prometheus client library provides a specific method (make_wsgi_app) that creates a WSGI application for serving metrics.

The following example (source gist) -- a Flask hello-world application slightly modified to process requests with random response times -- shows a Prometheus exporter working alongside a Flask application (see the hello method at line 18). The Flask application is accessible via the root context (the / endpoint), while the Prometheus exporter is exposed through the /metrics endpoint (see line 23, where the WSGI dispatching application is created). The Prometheus exporter exposes two metrics:

  • Last request response time: this is a gauge (line 10) for which, instead of calling the set method as in our former example, we use a Prometheus decorator function (line 17) that does the same job while keeping the business code clean. 
  • Service uptime: this is a counter (line 8) that exposes the time elapsed since the last startup of the application. The counter is updated every second by a dedicated thread (line 33). 

  

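What follows is a minimal sketch rather than the gist itself, assuming werkzeug’s DispatcherMiddleware for the WSGI routing; the metric names are illustrative, and the code is laid out so that the line references above still apply.

from flask import Flask
from prometheus_client import Counter, Gauge, make_wsgi_app
from werkzeug.middleware.dispatcher import DispatcherMiddleware
import random
import threading
import time

UPTIME = Counter('service_uptime_seconds', 'Time elapsed since the last startup')

LAST_REQUEST_TIME = Gauge('request_response_time_seconds',
                          'Response time of the last request')

app = Flask(__name__)

# The Prometheus decorator below records each request's duration in the gauge.
@app.route('/')
@LAST_REQUEST_TIME.time()             # sets the gauge to the duration of each call
def hello():
    time.sleep(random.uniform(0, 1))  # simulate a variable response time
    return 'Hello, World!'

# Route /metrics to the Prometheus exporter, everything else to the Flask app
app_dispatch = DispatcherMiddleware(app, {'/metrics': make_wsgi_app()})

def update_uptime():
    # Increase the uptime counter every second
    while True:
        time.sleep(1)
        UPTIME.inc()


# Dedicated daemon thread keeping the uptime counter ticking
threading.Thread(target=update_uptime, daemon=True).start()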

  

For the program to work, we need to install an additional dependency:

  

$ pip install uwsgi

  

Then, the program can be downloaded and saved in a file:

  

$ curl -o prometheus_exporter_flask.py \

-s -L https://git.io/Jesvh

  

Now we need to start the service as a WSGI application:

  

$ uwsgi --http 127.0.0.1:9999 \

--wsgi-file prometheus_exporter_flask.py \

--callable app_dispatch

  

Note that the --wsgi-file option must point to the Python program file, while the value of the --callable option must match the name of the WSGI application declared in our program (line 23).

Once again, you can check exposed metrics through a local browser:  http://127.0.0.1:9999/metrics. Among other built-in metrics exposed by the Prometheus library, we should find the following ones exposed by our exporter (values may be different according to the load on your computer):

  

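For illustration, using the metric names from the sketch above, the output includes something like the following (placeholder values):

# HELP service_uptime_seconds_total Time elapsed since the last startup
# TYPE service_uptime_seconds_total counter
service_uptime_seconds_total 42.0
# HELP request_response_time_seconds Response time of the last request
# TYPE request_response_time_seconds gauge
request_response_time_seconds 0.62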

  

Here we go! Our different exporters are now ready to be scraped by a Prometheus server. You can learn more about this here.

 

Conclusion

In this article, we first discussed the basic concepts of Prometheus exporters and then went through two documented examples of implementation using Python. Those examples leverage Prometheus’s best practices so that you can use them as a starting point to build your own exporters according to your specific application needs. We didn’t cover the integration with the Prometheus server nor the visualization that can be handled through tools like Grafana. If you are interested in these topics, stay tuned for our following posts. 

If you're interested in trying Prometheus, but you think the setup might take a while, give our Hosted Graphite free-trial a try. You can also book a demo and talk to us directly about Graphite monitoring solutions that work for you.
