This page will guide you through the process of configuring your Prometheus server to use Hosted Prometheus for long-term storage.
This guide assumes you already have a Prometheus setup roughly like this:
It’s OK if you have more Prometheus instances, or if you’ve already configured remote read/write or federation, or if you’re using Grafana with them - none of that will be affected.
After you set up Hosted Prometheus, here’s what your setup will look like:
Step 1: Edit your prometheus.yml
In your prometheus.yml, add new remote_write and remote_read sections like this:
remote_write:
  - url: https://prod.promlts.metricfire.com/write
    bearer_token: Your-API-key-goes-here
remote_read:
  - url: https://prod.promlts.metricfire.com/read
    bearer_token: Your-API-key-goes-here
Don’t forget to fill in your Hosted Prometheus API key, which you can find on the Prometheus addon page when you’re logged in.
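Before restarting, it can help to sanity-check the edited file. A minimal sketch (the scratch file path below is only for illustration; in practice the sections go into your real prometheus.yml, and if promtool ships with your Prometheus install it can validate the whole config):

```shell
# Write the new sections to a scratch file to illustrate the expected
# YAML shape (list item indented under the section, bearer_token nested
# under its url):
cat > /tmp/remote-storage.yml <<'EOF'
remote_write:
  - url: https://prod.promlts.metricfire.com/write
    bearer_token: Your-API-key-goes-here
remote_read:
  - url: https://prod.promlts.metricfire.com/read
    bearer_token: Your-API-key-goes-here
EOF

# If promtool is installed alongside Prometheus, it can validate your
# full server config before you restart:
#   promtool check config /etc/prometheus/prometheus.yml

# Quick structural check: both sections carry a token.
grep -c 'bearer_token' /tmp/remote-storage.yml   # → 2
```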
Step 2: Restart Prometheus
Restart your Prometheus server to make these changes take effect. Now is a good time to watch the logs from your Prometheus process to make sure it’s still happy.
If you didn’t configure the bearer token or it isn’t correct for some reason, you will see errors like this in your Prometheus log:
level=warn ts=2018-11-01T00:00:00.00Z caller=queue_manager.go:531 component=remote queue=0:https://api.metricfire.com/write msg="Error sending samples to remote storage" count=59 err="server returned HTTP status 403 FORBIDDEN: Forbidden"
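The 403 above is the usual symptom of a missing or mistyped token. As an illustration only (the helper below is hypothetical, not part of Prometheus or Hosted Prometheus), you could scan a saved log for that failure mode:

```python
# Hypothetical helper: flag remote-storage auth failures in a Prometheus log.
def remote_write_auth_errors(log_lines):
    """Return log lines that look like rejected remote-write requests."""
    return [
        line for line in log_lines
        if "Error sending samples to remote storage" in line
        and "403" in line
    ]

logs = [
    'level=info ts=2018-11-01T00:00:00.00Z msg="Server is ready"',
    'level=warn ts=2018-11-01T00:00:00.00Z caller=queue_manager.go:531 '
    'component=remote msg="Error sending samples to remote storage" '
    'err="server returned HTTP status 403 FORBIDDEN: Forbidden"',
]
print(len(remote_write_auth_errors(logs)))  # → 1
```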
Step 3: Check your account
Once we start receiving traffic from your Prometheus instance, we’ll start processing and storing it. There’s a handy traffic indicator on the Prometheus addon page that tells you how recently we received any data. If that says we received data recently, then it’s working!
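You can also check from the sending side: your local Prometheus exports its own remote-write metrics (in 2.x releases of this era, e.g. prometheus_remote_storage_succeeded_samples_total - the exact metric name varies by version). A sketch of querying it through the Prometheus HTTP API, with a made-up sample response to show the shape:

```python
import json
from urllib.parse import urlencode

def instant_query_url(base_url, promql):
    # Build an instant-query URL for the Prometheus HTTP API (v1).
    return base_url.rstrip("/") + "/api/v1/query?" + urlencode({"query": promql})

url = instant_query_url(
    "http://localhost:9090",
    "rate(prometheus_remote_storage_succeeded_samples_total[5m])",
)
print(url)

# Shape of a typical /api/v1/query response (the values here are invented):
response = json.loads("""
{"status": "success",
 "data": {"resultType": "vector",
          "result": [{"metric": {"queue": "0:https://prod.promlts.metricfire.com/write"},
                      "value": [1541030400, "59.0"]}]}}
""")
samples_per_sec = float(response["data"]["result"][0]["value"][1])
print(samples_per_sec)  # → 59.0
```

A non-zero rate means samples are leaving your server successfully; pair that with the traffic indicator on the addon page to confirm they are arriving.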
That should be it! Now, when you query your local Prometheus as before, it will send your query to Hosted Prometheus too. We’ll check our index and serve any data we have that matches your query.
When data expires from your local Prometheus server, it will no longer disappear from your dashboards: instead, it will be served from the Hosted Prometheus copy and combined with the leading-edge data from your local server.