Introduction
When a web service slows down or errors spike, metrics can tell you what changed (active connections rise, error rates increase), but the root cause often lives in your logs (which IPs are hammering POST endpoints, where 4XX/5XX responses are coming from). Put the two together and you get the full observability picture: time-series metric trends to spot incidents, and line-level details to fix them fast.
In this guide, we'll detail how to ship Nginx logs to a Hosted Loki endpoint using Promtail, and collect Nginx metrics with Telegraf via stub_status. In MetricFire's Hosted Grafana, you’ll be able to correlate a spike (metrics) with the details behind it (logs).
Start a chat with us today if you are interested in testing MetricFire's Logging Integration for FREE. We will help you every step of the way, from collection to visualization!
Step 1: Install and Configure Promtail to Collect Nginx Logs
We support log collection via OpenTelemetry Contrib and Promtail. In this example, we'll detail how to configure Promtail, since it is the official log-shipping agent for Grafana Loki. It runs as a lightweight binary that tails log files (like /var/log/*) and forwards them to our Hosted Loki endpoint over HTTP. (This article assumes that you are already running an instance of Nginx that is serving traffic on a designated port.)
Install/unpack Promtail (Ubuntu):
cd /usr/local/bin
sudo wget https://github.com/grafana/loki/releases/latest/download/promtail-linux-amd64.zip
sudo unzip promtail-linux-amd64.zip
sudo mv promtail-linux-amd64 promtail
sudo chmod +x promtail
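Since /usr/local/bin is typically already on your PATH, a quick version check is a simple sanity test before moving on:
promtail -version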
Create a configuration directory:
sudo mkdir -p /etc/promtail
Configure Promtail to Forward Logs
Promtail requires a YAML config file to define where to read logs from and where to send them, and Nginx writes its logs to /var/log/nginx/ by default. Create a new file at /etc/promtail/promtail.yaml with the following content:
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: https://<YOUR-API-KEY>@www.hostedgraphite.com/logs/sink

scrape_configs:
  - job_name: system-logs
    static_configs:
      - targets:
          - localhost
        labels:
          host: <HOST-NAME>
          job: varlogs
          __path__: /var/log/nginx/*.log
NOTE: Make sure to replace <YOUR-API-KEY> and <HOST-NAME> in the above config. Now you can start the Promtail service (an example unit file follows below), or run it manually with this command:
promtail -config.file=/etc/promtail/promtail.yaml
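If you'd rather run Promtail in the background and have it start on boot, a minimal systemd unit is one option. This is just a sketch, assuming the binary lives at /usr/local/bin/promtail and the config at /etc/promtail/promtail.yaml; save it as /etc/systemd/system/promtail.service:
[Unit]
Description=Promtail log shipper
After=network.target

[Service]
ExecStart=/usr/local/bin/promtail -config.file=/etc/promtail/promtail.yaml
Restart=on-failure

[Install]
WantedBy=multi-user.target
Then reload systemd and start the service with: sudo systemctl daemon-reload && sudo systemctl enable --now promtail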
Step 2: Create a Loki Data Source in our Hosted Grafana
REACH OUT TO US about trying our new Logging feature for FREE, and we will create a Loki Access Key in your Hosted Graphite account. If you don't already have a Hosted Graphite account, sign up for a free trial here to obtain a Hosted Graphite API key and Loki Access Key.
Now within the Hosted Graphite UI, you can navigate to Dashboards => Settings => Data sources => Add New Data source (Loki). You'll be able to add the URL for your HG Loki endpoint, which includes your new Loki Access Key: https://www.hostedgraphite.com/logs/<UID>/<LOKI-ACCESS-KEY>
Step 3: Visualize Nginx Logs in Dashboard Panels
Once system logs are forwarded to our Loki endpoint and the data source is connected in your Hosted Grafana, you can create a new dashboard panel, select Loki as your Data source, and format a query using 'code mode'.
- LogQL query that excludes notices: {job="varlogs", filename="/var/log/nginx/error.log"} !~ "\\[notice\\]"
- Query to display only 4XX/5XX lines: {job="varlogs", filename="/var/log/nginx/access.log"} |~ " (4\\d\\d|5\\d\\d) "
Now you can see each 4XX/5XX request, and exactly what time each of them occurred!
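If you'd rather chart how often these errors occur instead of reading individual lines, LogQL also supports metric queries over log streams. Here's a minimal sketch that reuses the same labels and regex as above (the 5-minute range window is an arbitrary choice):
sum(count_over_time({job="varlogs", filename="/var/log/nginx/access.log"} |~ " (4\\d\\d|5\\d\\d) " [5m]))
Graphed as a time series, this makes 4XX/5XX bursts easy to line up against the Nginx metrics collected in the next step.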
Step 4: Collect and Forward Nginx Performance Metrics
Enable Nginx stub_status
If you have Nginx running in a Linux environment, you'll need to modify your /etc/nginx/nginx.conf file by adding a status endpoint inside the server{} block:
location = /nginx_status {
    stub_status;
    allow 127.0.0.1;
    deny all;
}
NOTE: Nginx listens on port :80 by default, but you'll have to define a different port if :80 is already taken (see the example server block below). Find out which ports are bound to nginx with this command:
sudo ss -ltnp | grep nginx
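For example, if :80 is already in use, one option is a small dedicated server block for the status endpoint. This is only a sketch, and 8080 is an arbitrary free port, so substitute whatever works for your setup:
server {
    listen 8080;

    location = /nginx_status {
        stub_status;
        allow 127.0.0.1;
        deny all;
    }
}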
Now you can apply and verify the configuration updates with these commands:
sudo nginx -t && sudo systemctl reload nginx
curl -s http://127.0.0.1:<YOUR_APP_PORT>/nginx_status
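If everything is wired up correctly, the curl command returns the standard stub_status counters (your numbers will differ):
Active connections: 2
server accepts handled requests
 16 16 31
Reading: 0 Writing: 1 Waiting: 1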
Set Up the Telegraf Collector
If you don't already have an instance of Telegraf running on your server, install our handy HG-CLI tool to quickly install/configure Telegraf:
curl -s "https://www.hostedgraphite.com/scripts/hg-cli/installer/" | sudo sh
NOTE: You will need to input your Hosted Graphite API key, and follow the prompts to select which metric sets you want.
Once it's installed, open the Telegraf configuration file at /etc/telegraf/telegraf.conf and add the following section:
[[inputs.nginx]]
urls = ["http://127.0.0.1:<YOUR_APP_PORT>/nginx_status"]
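The nginx input also accepts an optional response_timeout setting, which can help if the status endpoint is occasionally slow to respond; the 5-second value below is just an example:
[[inputs.nginx]]
urls = ["http://127.0.0.1:<YOUR_APP_PORT>/nginx_status"]
response_timeout = "5s"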
Ship Nginx Metrics to Hosted Graphite
Save your updated conf file and restart the Telegraf service to forward the Nginx metrics to your HG account, or run it manually to inspect the output for potential syntax/permission errors:
telegraf --config /etc/telegraf/telegraf.conf
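To check that the nginx input can actually read the status page before leaving Telegraf running, a one-shot test run is also handy (--test and --input-filter are standard Telegraf CLI options):
telegraf --config /etc/telegraf/telegraf.conf --input-filter nginx --test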
Once these metrics hit your Hosted Graphite account, you can use them to easily create custom dashboards and alerts!
Example metric query: telegraf.*.*.127_0_0_1.nginx.*
Conclusion
Nginx is a crucial part of any stack, so pairing time-series metrics with corresponding log lines is the fastest path from symptom to root cause. With a quick Nginx update for stub_status, a minimal Telegraf input, and Promtail forwarding logs, your dashboards can show when something went wrong and link directly to what caused it. Instead of jumping between tools or manually inspecting your server logs, you can correlate a spike in metrics with the exact log lines that explain it, all in a single dashboard.
MetricFire's Hosted Loki logging integration is quick to set up but powerful in practice. Whether you're tracking security threats, service issues, failed jobs, or kernel anomalies, it gives you the visibility you need to stay ahead of problems and reduce the time it takes to resolve them. Reach out to the MetricFire team today and let's build something great together!