
October 18, 2019

First Contact with Prometheus Exporters

Learn concepts and create your first exporters

Table of contents

1. Quick Overview on Prometheus Concepts

        1.1 Pull approach of data collection

        1.2 Prometheus exporters

        1.3 Flexible visualization

2. Implementing a Prometheus Exporter

        2.1 Application built-in exporter        

        2.2 Standalone/third-party exporter

3. Examples of Exporter Implementation Using Python

        3.1 Standalone/third-party exporter

        3.2 Exporter for a Flask application   

4. Conclusion

Creating Prometheus exporters can be complicated, but it doesn't have to be. In this article, we will learn the basics of Prometheus and walk through two step-by-step guides showing Python-based implementations of exporters. The first guide covers third-party exporters that expose metrics independently of the application they monitor. The second covers exporters that expose built-in application metrics. Let's begin!

1. Quick Overview on Prometheus Concepts

Prometheus is a leading monitoring tool for time series metrics that has applied original concepts since its introduction in 2012. Specifically, Prometheus's pull approach to data collection, along with its exporters and flexible visualization, helps it stand out against other popular monitoring tools like Graphite and InfluxDB.

1.1 Pull approach of data collection

The pull approach to data collection consists of having the server component (the Prometheus server) periodically retrieve metrics from client components. This pulling is commonly referred to as "scraping" in the Prometheus world. With this approach, client components are only responsible for producing metrics and making them available for scraping.


Tools like Graphite, InfluxDB, and many others use a push approach, where the client component has to produce metrics and push them to the server component. The client therefore determines when to push the data, regardless of whether the server needs it or whether it is ready to collect it.


The Prometheus pull approach is innovative because, by requiring the server -- not the client -- to initiate the scrape, metrics are collected only when the server is up and running and when the data is ready. This approach requires that each client component enable a specific capability called a Prometheus exporter.

1.2 Prometheus exporters

Exporters are essential pieces within a Prometheus monitoring environment. Each program acting as a Prometheus client holds an exporter at its core. An exporter is composed of software features that produce metrics data, and an HTTP server that makes the generated metrics available via a given endpoint. Metrics are exposed according to a specific format that the Prometheus server can read and ingest (scraping). We will discuss how to produce metrics, their format, and how to make them available for scraping later in this article.

1.3 Flexible visualization

Once metrics have been scraped and stored by a Prometheus server, there are various means to visualize them. The first and easiest approach is to use the Prometheus Expression Browser. However, due to its basic visualization capabilities, the Expression Browser is mainly helpful for debugging purposes (e.g. to check the availability or last values of certain metrics). For better and more advanced visualization, users often opt for other tools like Grafana. Furthermore, in some contexts users may have custom-made visualization systems that directly query the Prometheus API to retrieve the metrics that need to be visualized.

Figure: Basic architecture of a Prometheus environment with a server component, two client components, and an external visualization system.

2. Implementing a Prometheus Exporter

From an application perspective, there are two kinds of situations in which you can implement a Prometheus exporter: exporting built-in application metrics, and exporting metrics from a standalone or third-party tool.

2.1 Application built-in exporter

This is typically the case when a system or an application exposes its key metrics natively. The most interesting example is when an application is built from scratch, since all the requirements it needs to act as a Prometheus client can be studied and integrated into the design. Sometimes, we may need to integrate an exporter into an existing application. This requires updating the code -- and possibly the design -- to add the capabilities required to act as a Prometheus client. Integrating with an existing application can be risky because, if not done carefully, those changes may introduce regressions in the application's core functions. If you must do this, be sure to have sufficient tests in place so as not to introduce regressions into the application (e.g. bugs, or performance overhead due to changes in code and/or design).


2.2 Standalone/third-party exporter

Sometimes the desired metrics can be collected or computed externally. An example of this is when the application provides APIs or logs where metric data can be retrieved. This data can then be used as is, or it may need further processing to generate metrics (an example is this MySQL exporter). 


You may also require an external exporter if the metrics need to be computed through an aggregation process by a dedicated system. As an example, think of a Kubernetes cluster where you need metrics showing the CPU resources used by sets of pods grouped by labels. Such an exporter may rely on the Kubernetes API and work as follows:

(i) retrieve the current CPU usage along with the label of each individual pod 

(ii) sum up the usage based on pod labels

(iii) make the results available for scraping
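The aggregation in step (ii) can be sketched in plain Python. The sample data and label names below are hypothetical stand-ins for what a real exporter would retrieve from the Kubernetes API in step (i):

```python
from collections import defaultdict

# Hypothetical samples: (pod label, current CPU usage in millicores),
# as they might be retrieved from the Kubernetes metrics API (step i).
pod_usage = [
    ("app=frontend", 120),
    ("app=frontend", 80),
    ("app=backend", 300),
]

def aggregate_cpu_by_label(samples):
    """Sum CPU usage per pod label (step ii)."""
    totals = defaultdict(int)
    for label, usage in samples:
        totals[label] += usage
    return dict(totals)

# The totals would then be set on a labeled metric for scraping (step iii).
print(aggregate_cpu_by_label(pod_usage))
# → {'app=frontend': 200, 'app=backend': 300}
```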

3. Examples of Exporter Implementation Using Python

In this section we'll show step-by-step how to implement Prometheus exporters using Python. We’ll demonstrate two examples covering the following metric types: 

  • Counter: represents a metric whose value can only increase over time; the value is reset to zero on restart. Such a metric can be used to export a system's uptime (time elapsed since the last reboot of that system). 
  • Gauge: represents a metric whose value can arbitrarily go up and down over time. It can be used to expose memory and CPU usage over time.
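As a quick illustration of how these two metric types behave, here is a minimal sketch using the prometheus_client library (installed in the next section); the metric names are made up for the example:

```python
from prometheus_client import Counter, Gauge, generate_latest

# Counter: can only go up; exposed with a "_total" suffix.
requests_total = Counter('app_requests', 'Total requests handled')
requests_total.inc()      # +1; a counter has no dec()

# Gauge: can go up and down arbitrarily.
memory_usage = Gauge('app_memory_usage_percent', 'Current memory usage')
memory_usage.set(42.5)
memory_usage.dec(2.5)     # now 40.0

# generate_latest() renders all registered metrics in the exposition format.
print(generate_latest().decode())
```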

We will go through two scenarios: In the first one, we consider a standalone exporter exposing CPU and memory usage in a system. The second scenario is a Flask web application that exposes its request response time and also its uptime.



3.1 Standalone/third-party exporter

This scenario demonstrates a dedicated Python exporter that periodically collects and exposes CPU and memory usage on a system. 


For this program, we’ll need to install the Prometheus client library for Python.

$ pip install prometheus_client

We’ll also need to install psutil, a library for retrieving system resource usage.

$ pip install psutil

Our final exporter code looks like this (see the source gist):
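The gist itself is not embedded here, but based on the description that follows (the line numbers cited below refer to the original gist), it likely looks close to this sketch; treat it as an approximation of the original rather than an exact copy:

```python
from prometheus_client import start_http_server, Gauge
import psutil
import time

# A single gauge with a resource_type label holds both CPU and memory usage.
SYSTEM_USAGE = Gauge('system_usage',
                     'Hold current system resource usage',
                     ['resource_type'])

def run(port=9999, period=1):
    """Expose metrics on the given port and refresh them periodically."""
    start_http_server(port)
    while True:
        SYSTEM_USAGE.labels('CPU').set(psutil.cpu_percent())
        SYSTEM_USAGE.labels('Memory').set(psutil.virtual_memory().percent)
        time.sleep(period)

# To start the exporter: run()
```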



The code can be downloaded and saved in a file: 

$ curl -o prometheus_exporter_cpu_memory_usage.py \
    -s -L https://git.io/Jesvq


The following command then allows you to start the exporter:

$ python ./prometheus_exporter_cpu_memory_usage.py


We can check the exposed metrics through a local browser: http://127.0.0.1:9999. The following metrics, among other built-in metrics enabled by the Prometheus library, should be provided by our exporter (values may differ according to the load on your computer).
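For instance, the scrape output should contain lines similar to these (the values shown are illustrative):

```
# HELP system_usage Hold current system resource usage
# TYPE system_usage gauge
system_usage{resource_type="CPU"} 11.2
system_usage{resource_type="Memory"} 63.5
```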



Simple, isn’t it? This is in part due to the magic of Prometheus client libraries, which are officially available for Go, Java, Python, and Ruby. They hide boilerplate and make it easy to implement an exporter. The fundamentals of our exporter can be summarized by the following points:

  • Import the Prometheus client Python library (line 1). 
  • Instantiate an HTTP server to expose metrics on port 9999 (line 10).
  • Declare a gauge metric and name it system_usage (line 6).
  • Set values for metrics (lines 13 and 14).
  • The metric is declared with a label (resource_type, line 6), leveraging Prometheus's multi-dimensional data model. This lets you keep a single metric name and use labels to differentiate the CPU and memory values. You may also choose to declare two metrics instead of using a label. Either way, we highly recommend that you read the best practices on metric naming and labels.

3.2 Exporter for a Flask application

This scenario demonstrates a Prometheus exporter for a Flask web application. Unlike the standalone case, an exporter for a Flask application relies on a WSGI dispatching application that works as a gateway, routing requests to either the Flask application or the Prometheus client. This is because the HTTP server started by Flask cannot consistently double as a Prometheus client, and the HTTP server started by the Prometheus client library would not serve Flask requests.


To enable integration with a WSGI wrapper application, Prometheus provides a specific library method (make_wsgi_app) to create a WSGI application to serve metrics.
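A minimal sketch of such a dispatching setup might look like this; it assumes Werkzeug's DispatcherMiddleware (Werkzeug ships with Flask), and the variable names are illustrative:

```python
from flask import Flask
from prometheus_client import make_wsgi_app
from werkzeug.middleware.dispatcher import DispatcherMiddleware

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello World!'

# Route /metrics to the Prometheus WSGI app, everything else to Flask.
app_dispatch = DispatcherMiddleware(app, {'/metrics': make_wsgi_app()})
```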


The following example (source gist) -- a Flask hello-world application slightly modified to process requests with random response times -- shows a Prometheus exporter working alongside a Flask application (see the hello method at line 18). The Flask application is accessible via the root context (the / endpoint), while the Prometheus exporter is enabled through the /metrics endpoint (see line 23, where the WSGI dispatching application is created). The exporter exposes two metrics:

  • Last request response time: a gauge (line 10) for which, instead of using the set method as in our former example, we use a Prometheus decorator function (line 17) that does the same job while keeping the business code clean. 
  • Service uptime: a counter (line 8) that exposes the time elapsed since the last startup of the application. The counter is updated every second by a dedicated thread (line 33). 
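These two metrics could be implemented roughly as follows; the metric names are approximations of the gist's, and the decorator shown is prometheus_client's Gauge.time(), which records the duration of each call:

```python
import random
import threading
import time

from prometheus_client import Counter, Gauge

SERVICE_UPTIME = Counter('service_uptime', 'Seconds since application startup')
LAST_REQUEST_TIME = Gauge('request_response_time_last',
                          'Response time of the most recent request')

@LAST_REQUEST_TIME.time()  # sets the gauge to this call's duration
def hello():
    time.sleep(random.uniform(0.0, 0.2))  # simulated processing time
    return 'Hello World!'

def update_uptime():
    # Increment the counter by one every second from a background thread.
    while True:
        SERVICE_UPTIME.inc()
        time.sleep(1)

threading.Thread(target=update_uptime, daemon=True).start()
```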



For the program to work, we need to install an additional dependency: 

$ pip install uwsgi

First, we download the program:

$ curl -o prometheus_exporter_flask.py \
    -s -L https://git.io/Jesvh


Now we need to start the service as a WSGI application:

$ uwsgi --http 127.0.0.1:9999 \
    --wsgi-file prometheus_exporter_flask.py \
    --callable app_dispatch


Note that --wsgi-file must point to the Python program file, while the value of the --callable option must match the name of the WSGI application declared in our program (line 23).


Once again, you can check the exposed metrics through a local browser: http://127.0.0.1:9999/metrics. Among other built-in metrics exposed by the Prometheus library, we should find the ones exposed by our exporter (values may differ according to the load on your computer):
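The output should contain lines of the following shape; the metric names and values here are illustrative, since the actual names come from the gist:

```
# HELP service_uptime_total Seconds since application startup
# TYPE service_uptime_total counter
service_uptime_total 42.0
# HELP request_response_time_last Response time of the most recent request
# TYPE request_response_time_last gauge
request_response_time_last 0.137
```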


Here we go! Our different exporters are now ready to be scraped by a Prometheus server. You can learn more about this here.

4. Conclusion

In this article we first discussed the basic concepts of Prometheus exporters and then walked through two documented examples of implementation using Python. Those examples leverage Prometheus best practices so that you can use them as a starting point for building your own exporters according to your specific application needs. We didn't cover integration with the Prometheus server, nor the visualization that can be handled through tools like Grafana. If you are interested in these topics, stay tuned for our upcoming posts.

