Alerting
Alerts allow you to receive a notification when your data does something unexpected, such as going above or below a set threshold or stopping suddenly. Select a recipient for your notifications and you can get immediate feedback via email, PagerDuty, or Slack when critical changes occur.
- We highly recommend creating your alerts within the Hosted Graphite UI, because alerts built within the Grafana UI might not have the expected effect in our system. Our internal alerting system is optimal because it triggers on values upon ingestion, rather than upon render, which results in a faster response time to alert queries.
Alert Overview
Your alerts are listed in the alert overview section. We list them in four categories.
-
Healthy Alerts
Alerts that are currently running and within acceptable boundaries.
-
Triggered Alerts
Alerts that are currently running and outside acceptable boundaries. These alerts will have already notified you via the configured notification channel.
-
Muted Alerts
Alerts which have been silenced manually or by schedule. These alerts will not notify you until they become active again.
-
Inactive Alerts
Alerts that use graphite function metrics but have failed because the query took too long, was malformed, or returned duplicate metrics due to aliasing.
You can click on the metric name to see a recent graph of that metric. Clicking the pencil icon or the alert name opens the edit alert dialog. The mute icon allows you to silence the alert for a certain amount of time.

Creating An Alert
Alert Name and Metric
From within your Hosted Graphite account, click on the "Alert" icon to open the alert creation panel.
-
Name
This name is used in notifications. It is a reminder of why you added it, so make it clear and descriptive! e.g. “European Servers CPU usage”.
-
Metric Pattern
This is the data that is tested against your criteria (which you’ll add on the next screen) e.g. “my.server.cpu”.
-
Alert Info
The alert message sent with notifications. An arbitrary string that may contain a description of the alert, steps to follow, or references to documentation.
You can check a graph of your desired metric with the “Check Metric Graph” button. When you’re finished, click on “Confirm Metric Choice” to proceed to the Alert Criteria screen.
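As an illustration, the fields above map naturally onto a JSON document. The sketch below shows a hypothetical payload shape built from the Name, Metric Pattern, and Alert Info fields; the field names are assumptions, not the exact Hosted Graphite API schema.

```python
# Hypothetical alert definition built from the fields described above.
# Field names are illustrative assumptions, not the exact API schema.
import json

alert = {
    "name": "European Servers CPU usage",  # shown in notifications
    "metric": "my.server.cpu",             # data tested against your criteria
    "info": "High CPU on EU servers; check the load balancer first.",
}

print(json.dumps(alert, indent=2))
```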

Alert Criteria Panel
There are three ways to define the criteria that will result in a notification being sent.
-
Outside of Bounds
An alert notification will be sent if the metric data you’ve selected goes either above the “above” threshold or below the “below” threshold. This is useful when your data fits inside an expected range, e.g. the response time of a webserver.
-
Below / Above a Threshold
If you enter only one of the above or below values, only that threshold is checked. This is useful when there’s an upper or lower bound that this data should not cross, for example the CPU load of a server.
-
Missing
An alert notification will be sent to you if the metric does not arrive at all for a certain time period. This is useful for detecting when a system goes down entirely.
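The three criteria above can be sketched as a single check, where a missing datapoint is represented as `None`. This is illustrative logic only, not Hosted Graphite’s implementation.

```python
# Illustrative evaluation of the three alert criteria described above.
# Not Hosted Graphite's implementation.

def should_alert(value, above=None, below=None):
    """Return True if `value` breaches the configured criteria.

    - Outside of Bounds: set both `above` and `below`.
    - Below / Above a Threshold: set only one of them.
    - Missing: `value` is None because no data arrived.
    """
    if value is None:  # "Missing" criterion
        return True
    if above is not None and value > above:
        return True
    if below is not None and value < below:
        return True
    return False

print(should_alert(0.95, above=0.9))            # True: above threshold
print(should_alert(150, above=500, below=200))  # True: outside of bounds
print(should_alert(None))                       # True: metric missing
print(should_alert(300, above=500, below=200))  # False: within bounds
```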
The section “If the query fails” lets you control the behavior when the graphite function query fails. This option only appears for alerts that use graphite functions as part of their metrics. A graphite function query can fail due to timeouts from matching too many metrics, being malformed, or returning duplicate metrics due to aliasing.
-
Notify me
A notification is sent when the query fails, with a description of the reason.
-
Ignore
Notifications are ignored but the alert still changes state and the failure is visible in the event history log.
Alerting Notification Interval lets you control how often you want to be notified for an alert.
-
On state change
A notification will be sent only when the alert transitions from healthy to triggered or vice versa. An alert that continues alerting will not send subsequent notifications.
-
Every
A notification will be sent when the alert triggers and recovers. Subsequent notifications will then be paused for the configured time period. This prevents ‘flapping’ behavior that would otherwise give you lots of notifications in a short period of time.

Managing An Alert
From the Alert Overview page, you can hover your mouse over an individual alert to see actions related to managing it.

-
View an alert
Click the eye icon to open the overview popup for an alert. This displays an embedded Grafana graph and a history log of the last 3 days of data. There is also a link to the Grafana composer to view more detailed information on the metric being alerted on.

-
Edit an alert
An alert can be edited to change its metric, criteria or notification channel, but any changes may take a few minutes to take effect.
-
Mute an alert
An alert can be silenced from notifying you for a specified time period. Currently the available times are 30 mins, 6hrs, 1 day and 1 week.
-
Delete an alert
An alert can be deleted from your panel here. This action is irreversible.
Notification Channels
Defining a notification channel allows you to receive a notification when an alert triggers. Currently we support seven different ways to notify your team when an event occurs. You can see the available notification channels and add new ones on the Notification Channel Page.
-
Email
Send an email to your team when the alert is triggered.
-
Slack
Send an immediate notification to a Slack channel. The Slack notification requires an endpoint for your channel; see the Slack documentation for details.
-
VictorOps
You can send your alerts into your VictorOps hub to integrate with all your existing monitoring and alerting infrastructure. For details on adding the VictorOps notification channel, see this how-to article.
-
Webhook
Allows you to set up a webhook that we will notify with real-time information for your defined alerts.
The notification will be JSON-encoded in the following format:
{
  "name": "The name of the triggered alert.",
  "criteria": "The defined alert criteria for the alert.",
  "graph": "PNG of the grafana rendered graph.",
  "value": "The current value of the metric.",
  "metric": "The name of the metric.",
  "status": "The current status of the metric.",
  "backoff_minutes": false | 123,
  "info": null | "Info saved with the alert."
}
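For example, a receiver for this payload can be written in a few lines with Python’s standard library. The port and the one-line summary format are assumptions; only the field names come from the payload format above.

```python
# Minimal webhook receiver sketch for the alert payload documented above,
# using only the Python standard library. Port 8080 is an assumption.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def summarize_alert(raw: bytes) -> str:
    """Turn a raw webhook body into a one-line summary."""
    body = json.loads(raw)
    return f"Alert {body['name']!r} is {body['status']}: {body['metric']} = {body['value']}"

class AlertWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        print(summarize_alert(self.rfile.read(length)))
        self.send_response(200)
        self.end_headers()

# To serve: HTTPServer(("", 8080), AlertWebhookHandler).serve_forever()
```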
Auto-Resolve Notifications
For Email, Webhook and Slack notifications, incidents are automatically resolved.
For VictorOps and PagerDuty notification channels, we can automatically resolve your alerts when they have reached a recovered state. This can be enabled on the Notification Channel Page or can be done via our API as outlined in our alerting API docs here.
Scheduled Mutes
Defining a scheduled mute allows you to silence alerts on a one-time or recurring basis for scheduled maintenance or downtime. You can see the available scheduled mutes and add new ones in the UI.
Once a scheduled mute is created, it must be attached to alerts so that they can be silenced by it. This can be done at the alert create and update endpoints, or in the UI.
-
One-time
You can silence alerts on a one-time basis by creating a scheduled mute with no repeat days.
-
Recurring
By providing a list of days of the week for the scheduled mute to repeat, you can silence alerts on a recurring basis.
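For illustration, the two variants differ only in whether repeat days are provided. The field names, values, and date formats below are assumptions, not the exact Hosted Graphite API schema.

```python
# Hypothetical scheduled-mute definitions. Field names and date formats
# are illustrative assumptions, not the exact API schema.
one_time = {
    "name": "Datacenter migration",
    "start": "2024-06-01T02:00:00Z",
    "end": "2024-06-01T06:00:00Z",
    "repeat_days": [],  # no repeat days: silences attached alerts once
}

recurring = {
    "name": "Nightly batch window",
    "start": "02:00",
    "end": "03:00",
    "repeat_days": ["mon", "tue", "wed", "thu", "fri"],  # repeats weekly
}

def repeats(mute):
    """A mute recurs when it lists at least one repeat day."""
    return bool(mute["repeat_days"])

print(repeats(one_time), repeats(recurring))  # False True
```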
Troubleshooting your Alerts
The Alerting feature is in active development. Please contact support if you think you’ve found a bug, or have any questions or suggestions.
- Is your metric arriving?
- If you are not receiving notifications as expected, please check the alert overview page and select the alert in question. You can use this to check that the metric values for the last few hours are as expected.
- Are some events being ignored?
- We alert at a 30-second resolution. This means finer-grained data (5-second, for example) is averaged, and we alert on the 30-second aggregate.
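A sketch of that aggregation, assuming simple averaging of 5-second datapoints into 30-second buckets (illustrative only, not Hosted Graphite’s implementation):

```python
# Average 5-second datapoints into 30-second buckets, as described above.
# Illustrative only: a short spike can be averaged away before the alert
# criteria are checked against the 30-second value.

def aggregate_30s(points):
    """points: iterable of (timestamp_seconds, value) pairs."""
    buckets = {}
    for ts, value in points:
        buckets.setdefault(ts - ts % 30, []).append(value)
    return {start: sum(vs) / len(vs) for start, vs in sorted(buckets.items())}

# Six 5-second points with one spike to 100: the 30-second average is 17.5,
# so an "above 90" threshold would not trigger on the aggregate.
points = [(0, 1), (5, 1), (10, 100), (15, 1), (20, 1), (25, 1)]
print(aggregate_30s(points))  # {0: 17.5}
```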
- Is your alert not triggering as expected?
- Check whether your alert was built within the Grafana UI; a simple fix could be recreating the alert within the Hosted Graphite UI.
- Is your alert triggering but not sending Slack notifications?