How to monitor Nginx

Table of Contents

  1. Introduction
  2. Exposing Prometheus metrics from an Nginx server
    1. Nginx monitoring with Nginx Ingress Controller
    2. Nginx monitoring with standalone Nginx server
  3. Wrapping up! 

Introduction

How do you monitor Nginx? Let us first look at what Nginx monitoring is all about and how it can work together with MetricFire’s Hosted Prometheus. Nginx, pronounced “engine-ex”, is an open-source web server that, after its initial success as a web server, is now also used as a reverse proxy, HTTP cache, and load balancer.


In the case of web applications, it is not prudent to run just a single web server for your application: you will soon hit the ceiling of your budget and/or the hardware capabilities of a single VM. Therefore, we need a load balancer to help with horizontal scaling. Nginx not only acts as a high-performance web server, it also functions as a reverse proxy and a load balancer.


Similar to Nginx, there is another popular web server called Apache. In terms of raw numbers, Apache is the most popular web server in existence and, according to W3Techs, is used by 43.6% of all websites with a known web server (down from 47% in 2018). Nginx comes in a close second at 41.8% (as of 2020).


While Apache is the most popular overall option, Nginx is actually the most popular web server among high-traffic websites.


When you break down usage rates by traffic, Nginx powers:

  • 60.9% of the 100,000 most popular sites (up from 56.1% in 2018)
  • 67.1% of the 10,000 most popular sites (up from 63.2% in 2018)
  • 62.1% of the 1,000 most popular sites (up from 57% in 2018)


In fact, Nginx is used by some of the most resource-intensive sites in existence, including Netflix, NASA, and even WordPress.com.


When an application performs such a crucial role in the entire stack, it is extremely important to monitor it. We must know how many requests are being processed, the latency of each request, and the source of each request, among other things. Monitoring helps gauge the health of the web server and further helps us tune it for optimal performance.


Today, Prometheus is the gold standard for monitoring applications, and Nginx exposes a ton of Prometheus metrics when the correct modules are enabled. MetricFire can help you ensure that these metrics are monitored properly so that you have complete insight into your web server’s performance.


MetricFire specializes in monitoring systems, and you can use our product with minimal configuration to gain in-depth insight into your environments. If you would like to learn more, please book a demo with us, or sign up for the free trial today.


Exposing Prometheus metrics from an Nginx server


Nginx monitoring with Nginx Ingress Controller

When you are running Nginx Ingress Controller to control ingress traffic to your Kubernetes cluster (highly recommended), then you can easily get a bunch of Prometheus metrics from it. All you need to do is the following:


1. Add the following annotations to the ingress-controller deployment/daemonset:


annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "9113"


2. Start the ingress controller container with the -enable-prometheus-metrics argument

3. Add port 9113 to the container ports in the manifest


Your manifest would look something like this:


---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: {{ .Values.namespace }}
spec:
  replicas: {{ .Values.ingressController.replicas }}
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
         prometheus.io/scrape: "true"
         prometheus.io/port: "9113"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount  
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values: 
                    - ingress-nginx
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: nginx-ingress-controller
          image: {{ .Values.ingressController.image }}
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --enable-ssl-passthrough
            - --ingress-class=nginx
            - -enable-prometheus-metrics
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
          - name: prometheus
            containerPort: 9113
          resources:
            limits:
              cpu: "1"
              memory: 500Mi
            requests:
              cpu: 100m
              memory: 20Mi


And that’s it! Prometheus running inside the cluster should now autodiscover this target.
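The autodiscovery above relies on Prometheus being configured with the common annotation-based relabeling convention, which keeps only pods annotated with prometheus.io/scrape and rewrites the scrape address to the annotated port. A minimal sketch of such a scrape config (the job name is illustrative; your setup may already include an equivalent job):

```yaml
scrape_configs:
  - job_name: "kubernetes-pods"   # illustrative job name
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Only scrape pods annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Rewrite the scrape address to use the prometheus.io/port annotation (9113 here)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```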


You should now be able to see the metrics for Nginx monitoring in Prometheus. These include the metrics exposed by the NGINX Prometheus exporter, along with some ingress controller metrics; the complete list can be found in the exporter’s documentation.


Nginx monitoring with standalone Nginx server

In order to monitor a standalone Nginx server, we can use the nginx-module-vts (virtual host traffic status) module. Since it is not available by default, we will have to compile our own version of Nginx with the module included. Today we will illustrate this with the help of Docker containers.


Consider the following Dockerfile:



FROM alpine:3.5

ENV NGINX_VERSION 1.18.0
ENV VTS_VERSION 0.10.7

RUN GPG_KEYS=B0F4253373F8F6F510D42178520A9993A1C052F8 \
	&& CONFIG="\
		--prefix=/etc/nginx \
		--sbin-path=/usr/sbin/nginx \
		--modules-path=/usr/lib/nginx/modules \
		--conf-path=/etc/nginx/nginx.conf \
		--error-log-path=/var/log/nginx/error.log \
		--http-log-path=/var/log/nginx/access.log \
		--pid-path=/var/run/nginx.pid \
		--lock-path=/var/run/nginx.lock \
		--http-client-body-temp-path=/var/cache/nginx/client_temp \
		--http-proxy-temp-path=/var/cache/nginx/proxy_temp \
		--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \
		--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \
		--http-scgi-temp-path=/var/cache/nginx/scgi_temp \
		--user=nginx \
		--group=nginx \
		--with-http_ssl_module \
		--with-http_realip_module \
		--with-http_addition_module \
		--with-http_sub_module \
		--with-http_dav_module \
		--with-http_flv_module \
		--with-http_mp4_module \
		--with-http_gunzip_module \
		--with-http_gzip_static_module \
		--with-http_random_index_module \
		--with-http_secure_link_module \
		--with-http_stub_status_module \
		--with-http_auth_request_module \
		--with-http_xslt_module=dynamic \
		--with-http_image_filter_module=dynamic \
		--with-http_geoip_module=dynamic \
		--with-threads \
		--with-stream \
		--with-stream_ssl_module \
		--with-stream_ssl_preread_module \
		--with-stream_realip_module \
		--with-stream_geoip_module=dynamic \
		--with-http_slice_module \
		--with-mail \
		--with-mail_ssl_module \
		--with-compat \
		--with-file-aio \
		--with-http_v2_module \
        --add-module=/usr/src/nginx-module-vts-$VTS_VERSION \
	" \
	&& addgroup -S nginx \
	&& adduser -D -S -h /var/cache/nginx -s /sbin/nologin -G nginx nginx \
	&& apk add --no-cache --virtual .build-deps \
		gcc \
		libc-dev \
		make \
		openssl-dev \
		pcre-dev \
		zlib-dev \
		linux-headers \
		curl \
		gnupg \
		libxslt-dev \
		gd-dev \
		geoip-dev \
	&& curl -fSL http://nginx.org/download/nginx-$NGINX_VERSION.tar.gz -o nginx.tar.gz \
	&& curl -fSL http://nginx.org/download/nginx-$NGINX_VERSION.tar.gz.asc  -o nginx.tar.gz.asc \
    && curl -fSL https://github.com/vozlt/nginx-module-vts/archive/v$VTS_VERSION.tar.gz  -o nginx-modules-vts.tar.gz \
	&& export GNUPGHOME="$(mktemp -d)" \
	&& found=''; \
	for server in \
		ha.pool.sks-keyservers.net \
		hkp://keyserver.ubuntu.com:80 \
		hkp://p80.pool.sks-keyservers.net:80 \
		pgp.mit.edu \
	; do \
		echo "Fetching GPG key $GPG_KEYS from $server"; \
		gpg --keyserver "$server" --keyserver-options timeout=10 --recv-keys "$GPG_KEYS" && found=yes && break; \
	done; \
	test -z "$found" && echo >&2 "error: failed to fetch GPG key $GPG_KEYS" && exit 1; \
	gpg --batch --verify nginx.tar.gz.asc nginx.tar.gz \
	&& rm -r "$GNUPGHOME" nginx.tar.gz.asc \
	&& mkdir -p /usr/src \
	&& tar -zxC /usr/src -f nginx.tar.gz \
    && tar -zxC /usr/src -f nginx-modules-vts.tar.gz \
	&& rm nginx.tar.gz nginx-modules-vts.tar.gz \
	&& cd /usr/src/nginx-$NGINX_VERSION \
	&& ./configure $CONFIG --with-debug \
	&& make -j$(getconf _NPROCESSORS_ONLN) \
	&& mv objs/nginx objs/nginx-debug \
	&& mv objs/ngx_http_xslt_filter_module.so objs/ngx_http_xslt_filter_module-debug.so \
	&& mv objs/ngx_http_image_filter_module.so objs/ngx_http_image_filter_module-debug.so \
	&& mv objs/ngx_http_geoip_module.so objs/ngx_http_geoip_module-debug.so \
	&& mv objs/ngx_stream_geoip_module.so objs/ngx_stream_geoip_module-debug.so \
	&& ./configure $CONFIG \
	&& make -j$(getconf _NPROCESSORS_ONLN) \
	&& make install \
	&& rm -rf /etc/nginx/html/ \
	&& mkdir /etc/nginx/conf.d/ \
	&& mkdir -p /usr/share/nginx/html/ \
	&& install -m644 html/index.html /usr/share/nginx/html/ \
	&& install -m644 html/50x.html /usr/share/nginx/html/ \
	&& install -m755 objs/nginx-debug /usr/sbin/nginx-debug \
	&& install -m755 objs/ngx_http_xslt_filter_module-debug.so /usr/lib/nginx/modules/ngx_http_xslt_filter_module-debug.so \
	&& install -m755 objs/ngx_http_image_filter_module-debug.so /usr/lib/nginx/modules/ngx_http_image_filter_module-debug.so \
	&& install -m755 objs/ngx_http_geoip_module-debug.so /usr/lib/nginx/modules/ngx_http_geoip_module-debug.so \
	&& install -m755 objs/ngx_stream_geoip_module-debug.so /usr/lib/nginx/modules/ngx_stream_geoip_module-debug.so \
	&& ln -s ../../usr/lib/nginx/modules /etc/nginx/modules \
	&& strip /usr/sbin/nginx* \
	&& strip /usr/lib/nginx/modules/*.so \
	&& rm -rf /usr/src/nginx-$NGINX_VERSION \
	\
	# Bring in gettext so we can get `envsubst`, then throw
	# the rest away. To do this, we need to install `gettext`
	# then move `envsubst` out of the way so `gettext` can
	# be deleted completely, then move `envsubst` back.
	&& apk add --no-cache --virtual .gettext gettext \
	&& mv /usr/bin/envsubst /tmp/ \
	\
	&& runDeps="$( \
		scanelf --needed --nobanner --format '%n#p' /usr/sbin/nginx /usr/lib/nginx/modules/*.so /tmp/envsubst \
			| tr ',' '\n' \
			| sort -u \
			| awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' \
	)" \
	&& apk add --no-cache --virtual .nginx-rundeps $runDeps \
	&& apk del .build-deps \
	&& apk del .gettext \
	&& mv /tmp/envsubst /usr/local/bin/ \
	\
	# forward request and error logs to docker log collector
	&& ln -sf /dev/stdout /var/log/nginx/access.log \
	&& ln -sf /dev/stderr /var/log/nginx/error.log

COPY nginx.conf /etc/nginx/nginx.conf
COPY nginx.vh.default.conf /etc/nginx/conf.d/default.conf

EXPOSE 80

STOPSIGNAL SIGTERM

CMD ["nginx", "-g", "daemon off;"]


You can use this Dockerfile to compile a version of Nginx that has the VTS module enabled. Alternatively, if you just want to test it quickly, use the pre-built Docker image:


docker run -p 80:80 -it vtkub/nginx-vts:1.0  
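If you build the image yourself, the nginx.conf that the Dockerfile copies in must also wire up the VTS module: the module needs a shared memory zone declared in the http block and a location that serves the status pages. A minimal sketch of the relevant directives (the surrounding server block is illustrative; merge these into your own config):

```nginx
http {
    # Shared memory zone where the VTS module accumulates per-vhost counters
    vhost_traffic_status_zone;

    server {
        listen 80;

        location /status {
            # Serves the HTML dashboard as well as /status/format/prometheus
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}
```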


Now, navigate to http://localhost


You should see the “Welcome to nginx!” page.


To get the Prometheus metrics, navigate to http://localhost/status/format/prometheus


A ton of Prometheus metrics will now appear; a subset is shown below:


# HELP nginx_vts_info Nginx info
# TYPE nginx_vts_info gauge
nginx_vts_info{hostname="a01636de0d14",version="1.13.12"} 1
# HELP nginx_vts_start_time_seconds Nginx start time
# TYPE nginx_vts_start_time_seconds gauge
nginx_vts_start_time_seconds 1604896248.009
# HELP nginx_vts_main_connections Nginx connections
# TYPE nginx_vts_main_connections gauge
nginx_vts_main_connections{status="accepted"} 23
nginx_vts_main_connections{status="active"} 2
nginx_vts_main_connections{status="handled"} 23
nginx_vts_main_connections{status="reading"} 0
nginx_vts_main_connections{status="requests"} 1704
nginx_vts_main_connections{status="waiting"} 1
nginx_vts_main_connections{status="writing"} 1
# HELP nginx_vts_main_shm_usage_bytes Shared memory [ngx_http_vhost_traffic_status] info
# TYPE nginx_vts_main_shm_usage_bytes gauge
nginx_vts_main_shm_usage_bytes{shared="max_size"} 1048575
nginx_vts_main_shm_usage_bytes{shared="used_size"} 3510
nginx_vts_main_shm_usage_bytes{shared="used_node"} 1
# HELP nginx_vts_server_bytes_total The request/response bytes
# TYPE nginx_vts_server_bytes_total counter
# HELP nginx_vts_server_requests_total The requests counter
# TYPE nginx_vts_server_requests_total counter
# HELP nginx_vts_server_request_seconds_total The request processing time in seconds
# TYPE nginx_vts_server_request_seconds_total counter
# HELP nginx_vts_server_request_seconds The average of request processing times in seconds
# TYPE nginx_vts_server_request_seconds gauge
# HELP nginx_vts_server_request_duration_seconds The histogram of request processing time
# TYPE nginx_vts_server_request_duration_seconds histogram
# HELP nginx_vts_server_cache_total The requests cache counter
# TYPE nginx_vts_server_cache_total counter
nginx_vts_server_bytes_total{host="_",direction="in"} 681606
nginx_vts_server_bytes_total{host="_",direction="out"} 6301942
nginx_vts_server_requests_total{host="_",code="1xx"} 0
nginx_vts_server_requests_total{host="_",code="2xx"} 1702
nginx_vts_server_requests_total{host="_",code="3xx"} 0
nginx_vts_server_requests_total{host="_",code="4xx"} 1
nginx_vts_server_requests_total{host="_",code="5xx"} 0
nginx_vts_server_requests_total{host="_",code="total"} 1703
nginx_vts_server_request_seconds_total{host="_"} 0.000
nginx_vts_server_request_seconds{host="_"} 0.000
nginx_vts_server_cache_total{host="_",status="miss"} 0
nginx_vts_server_cache_total{host="_",status="bypass"} 0
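Before wiring these counters into dashboards, it can help to sanity-check them on the command line. The sketch below computes the share of 4xx/5xx responses from output like the sample above; the heredoc simply replays a few of the sample lines, and in practice you would pipe in `curl -s http://localhost/status/format/prometheus` instead:

```shell
# Sum the error-class request counters and divide by the total.
# Replace the heredoc with: curl -s http://localhost/status/format/prometheus | awk ...
awk '
  # accumulate 4xx and 5xx response counts (the value is field 2)
  /nginx_vts_server_requests_total/ && /code="4xx"|code="5xx"/ { errors += $2 }
  # grab the running total of requests
  /nginx_vts_server_requests_total/ && /code="total"/          { total  += $2 }
  END { printf "error ratio: %.4f\n", errors / total }
' <<'EOF'
nginx_vts_server_requests_total{host="_",code="2xx"} 1702
nginx_vts_server_requests_total{host="_",code="4xx"} 1
nginx_vts_server_requests_total{host="_",code="5xx"} 0
nginx_vts_server_requests_total{host="_",code="total"} 1703
EOF
```

For the sample counters above this prints `error ratio: 0.0006` (1 error out of 1703 requests).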


This module also exposes a very nice dashboard which can be seen by going to http://localhost/status/dashboard


Additionally, you can plot those Prometheus metrics using this Grafana dashboard.
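If you prefer to build your own Grafana panels instead, the counters above translate naturally into rate queries. For example (metric names are taken from the sample output above; the 5m window is illustrative):

```promql
# Requests per second, broken down by response class
sum by (code) (rate(nginx_vts_server_requests_total{code=~"2xx|4xx|5xx"}[5m]))

# Inbound/outbound bandwidth in bytes per second
sum by (direction) (rate(nginx_vts_server_bytes_total[5m]))
```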


These metrics should help you gain insight into all the traffic being served by your Nginx deployment, and they become extremely important when you are evaluating server performance and debugging issues.

Wrapping up! 

Nginx performs a very crucial role in any application stack, and it is important to have metrics for it so you can understand what is going on. There is no better way to monitor Nginx and your applications than Prometheus, since it is very flexible and can scrape metrics from all kinds of applications.


Therefore, if you need help setting up these metrics for Nginx monitoring, feel free to reach out to me through LinkedIn. Additionally, MetricFire can help you monitor your applications across various environments. Since monitoring is essential for any application stack, you can get started today with MetricFire’s free trial.


Robust monitoring will not only help you meet SLAs for your application but also ensure a sound sleep for the operations and development teams. If you would like to learn more about it from our experts, please book a demo with us today.
