Alerts allow you to receive a notification when your data does something unexpected, such as going above or below a set threshold, or stopping suddenly. Select a recipient for your notifications and you can get immediate feedback via email, PagerDuty, Slack, or HipChat when critical changes occur.
Your alerts are listed in the alert overview section. We list them in four categories.
Alerts that are currently running and within acceptable boundaries.
Alerts that are currently running and outside acceptable boundaries. These alerts will have already notified you via the set notification channel.
Alerts which have been silenced manually or by schedule. These alerts will not notify you until they become active again.
Alerts that use Graphite function metrics but have failed due to the query taking too long, being malformed, or returning duplicate metrics due to aliasing.
You can click on the metric name to see a recent graph of that metric. Clicking the pencil icon or the alert name opens the edit alert dialog. The mute icon allows you to silence the alert for a certain amount of time.
Creating An Alert
Alert Name and Metric
Click the “Add Alert” button in the top bar to open the alert creation panel.
This name is used in notifications. It is a reminder of why you added it, so make it clear and descriptive! e.g. “European Servers CPU usage”.
This is the data that is tested against your criteria (which you’ll add on the next screen) e.g. “my.server.cpu”.
The alert message sent with notifications. This can be an arbitrary string containing a description of the alert, steps to follow, or references to documentation.
You can check a graph of your desired metric with the “Check Metric Graph” button. When you’re finished, click on “Confirm Metric Choice” to proceed to the Alert Criteria screen.
Set the Alert Name and Metric
Alert Criteria Panel
There are three ways to define the criteria that will result in a notification being sent.
Outside of Bounds
An alert notification will be sent if the metric data you’ve selected goes either above the “above” threshold, or below the “below” threshold. This is useful when your data fits inside an expected range, e.g. the response time of a webserver.
Below / Above a Threshold
If you just enter one of the above or below values, it will check whichever one you use. This is useful when there’s an upper or lower bound that this data should not cross, for example the CPU load of a server.
Stops Reporting
An alert notification will be sent to you if the metric does not arrive at all for a certain time period. This is useful for detecting when a system goes down entirely.
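The criteria above can be expressed as a single check. The sketch below is a minimal illustration of the logic, not Hosted Graphite's implementation; the function name, parameters, and the missing-data window are assumptions for the example:

```python
import time

def evaluate(value, above=None, below=None, last_seen=None, max_silence=300):
    """Return True if the alert should trigger.

    value       -- latest metric value (None if nothing has arrived)
    above/below -- optional thresholds; set both for an outside-of-bounds check
    last_seen   -- timestamp of the last datapoint, for stopped-reporting checks
    max_silence -- hypothetical no-data window, in seconds
    """
    # Stops reporting: no data within the allowed window triggers the alert.
    if value is None:
        return last_seen is None or time.time() - last_seen > max_silence
    # Outside of bounds / single threshold: either side may be left unset.
    if above is not None and value > above:
        return True
    if below is not None and value < below:
        return True
    return False
```

Passing only `above` or only `below` gives the single-threshold behaviour; passing both gives the outside-of-bounds check.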
The “If the query fails” section lets you control the behavior if the Graphite function query fails. This option only appears for alerts that use Graphite functions as part of their metrics. A Graphite function query can fail due to timeouts from matching too many metrics, being malformed, or returning duplicate metrics due to aliasing.
A notification is sent when the query fails, with a description of the reason.
Notifications are ignored but the alert still changes state and the failure is visible in the event history log.
Alerting Notification Interval lets you control how often you want to be notified for an alert.
On state change
A notification will be sent only when the alert transitions from healthy to triggered or vice versa. An alert that continues alerting will not send subsequent notifications.
Every
A notification will be sent when the alert triggers and recovers. Subsequent notifications will then be paused for the configured time period. This allows you to suppress ‘flapping’ behaviour that would give you lots of notifications in a short period of time.
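The two interval modes can be illustrated with a small notifier sketch. The class and method names are hypothetical; the real scheduling happens on Hosted Graphite's side:

```python
import time

class Notifier:
    def __init__(self, mode="state_change", every_minutes=10):
        self.mode = mode                  # "state_change" or "every"
        self.every = every_minutes * 60   # pause between repeats, in seconds
        self.last_state = "healthy"
        self.last_sent = 0.0

    def should_notify(self, state, now=None):
        now = time.time() if now is None else now
        changed = state != self.last_state
        self.last_state = state
        if self.mode == "state_change":
            # Notify only on healthy -> triggered transitions and back.
            return changed
        # "every": notify on any transition, then repeat while still
        # triggered, pausing for the configured period between repeats.
        if changed or (state == "triggered" and now - self.last_sent >= self.every):
            self.last_sent = now
            return True
        return False
```

In "every" mode a continuously triggered alert re-notifies once per configured period, which is what damps flapping without silencing a genuinely stuck alert.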
Set the Alert Criteria and Select your Notification Channel
Managing An Alert
From the Alert Overview page, you can hover your mouse over an individual alert to see actions related to managing it.
View an alert
Click the eye icon to open the overview popup for an alert. This displays an embedded Grafana graph and a history log of the last 3 days of data. There is also a link to the Grafana composer to view more detailed information on the metric being alerted on.
Edit an alert
An alert can be edited to change its metric, criteria or notification channel, but any changes may take a few minutes to take effect.
Mute an alert
An alert can be silenced from notifying you for a specified time period. Currently the available times are 30 minutes, 6 hours, 1 day and 1 week.
Delete an alert
An alert can be deleted from your panel here. This action is irreversible.
Defining a notification channel allows you to receive a notification when an alert triggers. Currently we support seven different ways to notify your team when an event occurs. You can see the available notification channels and add new ones on the Notification Channel Page.
Send an email to your team when the alert is triggered.
Send an immediate notification to a Slack channel. The Slack notification requires an endpoint for your channel; see the Slack documentation for details.
Send notifications and show an alert overview in a set of HipChat rooms. You need to set up the Hosted Graphite add-on in each room using the HipChat interface before linking it to your alerts. See our blog post for more details.
You can send your alerts into your VictorOps hub to integrate with all your existing monitoring and alerting infrastructure. For details on adding the VictorOps notification channel you can see this how-to article.
Allows you to set up a webhook that we will notify with real-time information for your defined alerts.
The notification will be JSON-encoded in the following format:
"name": "The name of the triggered alert.",
"criteria": "The defined alert criteria for the alert.",
"graph": "PNG of the grafana rendered graph.",
"value": "The current value of the metric.",
"metric": "The name of the metric.",
"status": "The current status of the metric.",
"backoff_minutes": false | 123,
"info": null | "Info saved with the alert."
For Email, HipChat, Webhook and Slack notifications, incidents are automatically resolved.
For VictorOps and PagerDuty notification channels, we can automatically resolve your alerts when they have reached a recovered state. This can be enabled on the Notification Channel Page or can be done via our API as outlined in our alerting API docs here.
Defining a scheduled mute allows you to silence alerts on a one-time or recurring basis for scheduled maintenance or downtime. You can see the available scheduled mutes and add new ones in the UI.
Once a scheduled mute is created, it must be attached to alerts so that they may be silenced by the scheduled mute; this can be done at the alert create and update endpoints, or in the UI.
You can silence alerts on a one-time basis by creating a scheduled mute with no repeat days.
By providing a list of days of the week for the scheduled mute to repeat, you can silence alerts on a recurring basis.
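Conceptually, a scheduled mute suppresses notifications while the current time falls inside its window. A rough sketch of that check, with illustrative field names that are not the API's:

```python
from datetime import datetime

def is_muted(now, start_hour, end_hour, repeat_days=None):
    """True if `now` falls inside the mute window.

    repeat_days -- weekday numbers (0=Monday) the mute repeats on;
                   None/empty approximates a one-time mute in this sketch.
    """
    if repeat_days and now.weekday() not in repeat_days:
        return False
    return start_hour <= now.hour < end_hour

# e.g. is_muted(datetime.now(), 2, 4, repeat_days={5, 6})
# mutes every Saturday and Sunday between 02:00 and 04:00.
```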
Troubleshooting your Alerts
The Alerting feature is new and in active development. Please contact support if you think you’ve found a bug, or have any questions or suggestions.
Is your metric arriving?
If you are not receiving notifications as expected, please check the alert overview page and select the alert in question. You can use this to check that the metric values for the last few hours are as expected.
Are some events being ignored?
We alert at a 30-second resolution. This means finer data (5-second, for example) is averaged, and we alert off the 30-second aggregate.
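As a simplified illustration of why a brief spike can be "ignored", six 5-second datapoints collapse into one 30-second average before the threshold is checked (the function below is a sketch, not our aggregation code):

```python
def aggregate_30s(points, interval=5):
    """Average fine-grained datapoints into 30-second buckets."""
    per_bucket = 30 // interval
    return [sum(points[i:i + per_bucket]) / per_bucket
            for i in range(0, len(points), per_bucket)]

# A 10-second spike to 100 in otherwise-flat data is smoothed in the
# 30-second aggregate, so an "above 90" alert on it would not fire:
# aggregate_30s([0, 0, 100, 100, 0, 0]) averages well below 90.
```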