Alerting docs: restructure Introduction (#84248)
* Rename `Data sources` title
* Relocate and rename `Introduction/Notification templates`
* Rename `alert-rules/alert-instances` to `alert-rules/multi-dimensional-alerts`
* Move `fundamentals/high-availability` to `setup/enable-ha`
* Fix 404 high-availability alerting link on Setup HA Grafana docs
* Move alert manager/contact points/notification templates within Notifications
* Remove `Alerting on numeric data`
* Restructure Introduction v2
* Continue Intro restructuring
* Update docs/sources/alerting/fundamentals/alert-rules/_index.md
  Co-authored-by: brendamuir <100768211+brendamuir@users.noreply.github.com>
* Complete contact point TODO
* Alias: alertManager
* Aliases `annotation-label` + content changes
* Aliases to `templating-labels-annotations`
* Aliases to `queries-conditions`
* Rename `rule-evaluation.md` file
* Aliases: `contact points`
* Aliases to `message-templating`
* Aliases to `alert-rules`
* Update links to new URL slugs
* Remove duplicated alias
* Remove trailing slash for external heading links
* Remove trailing slash in heading links to other grafana pages
* Change URL directory slug `fundamentals/notifications`
* rename title `Configure High Availability`
* Content changes
* Update docs/sources/alerting/fundamentals/alert-rules/_index.md
  Co-authored-by: brendamuir <100768211+brendamuir@users.noreply.github.com>
* Update docs/sources/alerting/set-up/configure-alert-state-history/index.md
  Co-authored-by: brendamuir <100768211+brendamuir@users.noreply.github.com>
* Update docs/sources/alerting/set-up/configure-high-availability/_index.md
  Co-authored-by: brendamuir <100768211+brendamuir@users.noreply.github.com>
* Update docs/sources/alerting/set-up/configure-alert-state-history/index.md
  Co-authored-by: brendamuir <100768211+brendamuir@users.noreply.github.com>
* Update docs/sources/alerting/set-up/configure-high-availability/_index.md
  Co-authored-by: brendamuir <100768211+brendamuir@users.noreply.github.com>
* Update docs/sources/alerting/set-up/configure-high-availability/_index.md
  Co-authored-by: brendamuir <100768211+brendamuir@users.noreply.github.com>
* Update docs/sources/alerting/set-up/configure-high-availability/_index.md
  Co-authored-by: brendamuir <100768211+brendamuir@users.noreply.github.com>
* Update docs/sources/alerting/fundamentals/alert-rules/_index.md
  Co-authored-by: Jack Baldry <jack.baldry@grafana.com>
* Fix broken link reference
* Fix `queries-and-conditions`
* Fix `alert-rule-evaluation` ref link
* Fix aliases + inline doc comments
* Fix broken link

---------

Co-authored-by: brendamuir <100768211+brendamuir@users.noreply.github.com>
Co-authored-by: Jack Baldry <jack.baldry@grafana.com>
This commit is contained in:
parent 321148511b
commit ec42b2a361
@@ -1,8 +1,8 @@
---
aliases:
  - about-alerting/
  - ./unified-alerting/alerting/
  - ./alerting/unified-alerting/
  - about-alerting/ # /docs/grafana/<GRAFANA_VERSION>/about-alerting
  - ./unified-alerting/alerting/ # /docs/grafana/<GRAFANA_VERSION>/unified-alerting/alerting/
  - ./alerting/unified-alerting/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/
canonical: https://grafana.com/docs/grafana/latest/alerting/
description: Learn about the key benefits and features of Grafana Alerting
labels:
@@ -1,8 +1,8 @@
---
aliases:
  - rules/
  - unified-alerting/alerting-rules/
  - ./create-alerts/
  - rules/ # /docs/grafana/<GRAFANA_VERSION>/alerting/rules/
  - unified-alerting/alerting-rules/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/alerting-rules/
  - ./create-alerts/ # /docs/grafana/<GRAFANA_VERSION>/alerting/create-alerts/
canonical: https://grafana.com/docs/grafana/latest/alerting/alerting-rules/
description: Configure alert rules
labels:
@@ -1,6 +1,6 @@
---
aliases:
  - ../unified-alerting/alerting-rules/create-grafana-managed-rule/
  - ../unified-alerting/alerting-rules/create-grafana-managed-rule/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/alerting-rules/create-grafana-managed-rule/
canonical: https://grafana.com/docs/grafana/latest/alerting/alerting-rules/create-grafana-managed-rule/
description: Configure Grafana-managed alert rules to create alerts that can act on data from any of our supported data sources
keywords:
@@ -225,7 +225,7 @@ For more information, see [expressions documentation][expression-queries].

To generate a separate alert for each series, create a multi-dimensional rule. Use `Math`, `Reduce`, or `Resample` expressions to create a multi-dimensional rule. For example:

- Add a `Reduce` expression for each query to aggregate values in the selected time range into a single value. (Not needed for [rules using numeric data][alerting-on-numeric-data].
- Add a `Reduce` expression for each query to aggregate values in the selected time range into a single value. (Not needed for [rules using numeric data][alerting-on-numeric-data]).
- Add a `Math` expression with the condition for the rule. Not needed in case a query or a reduce expression already returns 0 if rule should not fire, or a positive number if it should fire. Some examples: `$B > 70` if it should fire in case value of B query/expression is more than 70. `$B < $C * 100` in case it should fire if value of B is less than value of C multiplied by 100. If queries being compared have multiple series in their results, series from different queries are matched if they have the same labels or one is a subset of the other.


@@ -274,11 +274,11 @@ This will open the alert rule form, allowing you to configure and create your al
[add-a-query]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/query-transform-data#add-a-query"
[add-a-query]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/visualizations/panels-visualizations/query-transform-data#add-a-query"

[alerting-on-numeric-data]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/evaluate-grafana-alerts#alerting-on-numeric-data-1"
[alerting-on-numeric-data]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/evaluate-grafana-alerts#alerting-on-numeric-data-1"
[alerting-on-numeric-data]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/queries-conditions#alert-on-numeric-data"
[alerting-on-numeric-data]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/alert-rules/queries-conditions#alert-on-numeric-data"

[annotation-label]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/annotation-label"
[annotation-label]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/annotation-label"
[annotation-label]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/annotation-label"
[annotation-label]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/annotation-label"

[expression-queries]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/query-transform-data/expression-queries"
[expression-queries]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/visualizations/panels-visualizations/query-transform-data/expression-queries"
@@ -1,7 +1,7 @@
---
aliases:
  - ../unified-alerting/alerting-rules/create-cortex-loki-managed-recording-rule/
  - ../unified-alerting/alerting-rules/create-mimir-loki-managed-recording-rule/
  - ../unified-alerting/alerting-rules/create-cortex-loki-managed-recording-rule/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/alerting-rules/create-cortex-loki-managed-recording-rule/
  - ../unified-alerting/alerting-rules/create-mimir-loki-managed-recording-rule/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/alerting-rules/create-mimir-loki-managed-recording-rule/
canonical: https://grafana.com/docs/grafana/latest/alerting/alerting-rules/create-mimir-loki-managed-recording-rule/
description: Create recording rules for an external Grafana Mimir or Loki instance
keywords:
@@ -25,6 +25,8 @@ weight: 300
You can create and manage recording rules for an external Grafana Mimir or Loki instance.
Recording rules calculate frequently needed expressions or computationally expensive expressions in advance and save the result as a new set of time series. Querying this new time series is faster, especially for dashboards, since they query the same expression every time the dashboards refresh.

For more information on recording rules in Prometheus, refer to [Defining recording rules in Prometheus](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/).

**Note:**

Recording rules are run as instant rules, which means that they run every 10s. To override this configuration, update the `min_interval` in your custom configuration file.
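For illustration, recording rules for Mimir or Loki follow the Prometheus rule group format; the group name, recorded metric name, and expression in this sketch are assumed examples rather than values from this page.

```yaml
groups:
  - name: node-cpu
    rules:
      # Pre-compute the per-instance busy-CPU ratio once, so dashboards and
      # alert rules can query the cheaper, already-aggregated series.
      - record: instance:node_cpu_utilisation:rate1m
        expr: |
          1 - avg by (instance) (
            rate(node_cpu_seconds_total{mode="idle"}[1m])
          )
```

Dashboards and alert rules can then query `instance:node_cpu_utilisation:rate1m` instead of re-evaluating the full expression on every refresh.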
@@ -67,8 +69,8 @@ To create recording rules, follow these steps.
1. Click **Save rule** to save the rule or **Save rule and exit** to save the rule and go back to the Alerting page.

{{% docs/reference %}}
[annotation-label]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/annotation-label"
[annotation-label]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/annotation-label"
[annotation-label]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/annotation-label"
[annotation-label]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/annotation-label"

[configure-grafana]: "/docs/ -> /docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana"
{{% /docs/reference %}}
@@ -1,8 +1,8 @@
---
aliases:
  - ../unified-alerting/alerting-rules/create-cortex-loki-managed-recording-rule/
  - ../unified-alerting/alerting-rules/create-mimir-loki-managed-recording-rule/
  - ../unified-alerting/alerting-rules/create-mimir-loki-managed-rule/
  - ../unified-alerting/alerting-rules/create-cortex-loki-managed-recording-rule/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/alerting-rules/create-cortex-loki-managed-recording-rule/
  - ../unified-alerting/alerting-rules/create-mimir-loki-managed-recording-rule/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/alerting-rules/create-mimir-loki-managed-recording-rule/
  - ../unified-alerting/alerting-rules/create-mimir-loki-managed-rule/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/alerting-rules/create-mimir-loki-managed-rule/
canonical: https://grafana.com/docs/grafana/latest/alerting/alerting-rules/create-mimir-loki-managed-rule/
description: Configure data source-managed alert rules for an external Grafana Mimir or Loki instance
keywords:
@@ -129,6 +129,6 @@ Annotations add metadata to provide more information on the alert in your alert
[alerting]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting"
[alerting]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting"

[annotation-label]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/annotation-label"
[annotation-label]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/annotation-label"
[annotation-label]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/annotation-label"
[annotation-label]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/annotation-label"
{{% /docs/reference %}}
@@ -1,5 +1,7 @@
---
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/annotation-label/variables-label-annotation/
aliases:
  - ../fundamentals/annotation-label/variables-label-annotation/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/annotation-label/variables-label-annotation/
canonical: https://grafana.com/docs/grafana/latest/alerting/alerting-rules/templating-labels-annotations/
description: Learn about how to template labels and annotations
keywords:
  - grafana
@@ -13,13 +15,15 @@ labels:
    - enterprise
    - oss
title: Templating labels and annotations
weight: 117
weight: 500
---

# Templating labels and annotations

You can use templates to include data from queries and expressions in labels and annotations. For example, you might want to set the severity label for an alert based on the value of the query, or use the instance label from the query in a summary annotation so you know which server is experiencing high CPU usage.

When using custom labels with templates, it is important to make sure that the label value does not change between consecutive evaluations of the alert rule, as this will end up creating large numbers of distinct alerts. However, it is OK for the template to produce different label values for different alerts. For example, do not put the value of the query in a custom label, as this will end up creating a new set of alerts each time the value changes. Instead, use annotations.

All templates should be written in [text/template](https://pkg.go.dev/text/template). Regardless of whether you are templating a label or an annotation, you should write each template inline inside the label or annotation that you are templating. This means you cannot share templates between labels and annotations, and instead you will need to copy templates wherever you want to use them.

Each template is evaluated whenever the alert rule is evaluated, and is evaluated for every alert separately. For example, if your alert rule has a templated summary annotation, and the alert rule has 10 firing alerts, then the template will be executed 10 times, once for each alert. You should try to avoid doing expensive computations in your templates as much as possible.
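For illustration, a templated summary annotation might look like the following sketch; the `instance` label and the `A` RefID are assumed examples, and `$labels`/`$values` are the template variables available to labels and annotations.

```yaml
annotations:
  # Evaluated once per alert instance; $labels comes from the query result
  # and $values is keyed by the RefID of a reduce or math expression.
  summary: >-
    High CPU usage on {{ $labels.instance }}
    (current value: {{ $values.A }})
```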
@@ -1,8 +1,9 @@
---
aliases:
  - ../notifications/ # /docs/grafana/latest/alerting/notifications/
  - ../unified-alerting/notifications/ # /docs/grafana/latest/alerting/unified-alerting/notifications/
  - ../alerting-rules/create-notification-policy/ # /docs/grafana/latest/alerting/alerting-rules/create-notification-policy/
  - ../notifications/ # /docs/grafana/<GRAFANA_VERSION>/alerting/notifications/
  - ../unified-alerting/notifications/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/notifications/
  - ../alerting-rules/create-notification-policy/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/create-notification-policy/
  - ../manage-notifications/create-notification-policy/ # /docs/grafana/<GRAFANA_VERSION>/alerting/manage-notifications/create-notification-policy/
canonical: https://grafana.com/docs/grafana/latest/alerting/configure-notifications/create-notification-policy/
description: Configure notification policies to determine how alerts are routed to contact points
keywords:
@@ -107,6 +108,6 @@ An example of an alert configuration.
- Create specific routes for particular teams that handle their own on-call rotations.

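For illustration, the team-specific routing described above corresponds conceptually to an Alertmanager-style routing tree like the following sketch; the receiver names and the `team` label are assumed examples.

```yaml
route:
  receiver: default-contact-point # catch-all for unmatched alerts
  routes:
    # Alerts labelled team=backend go to that team's own contact point,
    # so they handle their own on-call rotation.
    - matchers:
        - team = backend
      receiver: backend-oncall
```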
{{% docs/reference %}}
[notification-policies]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/notification-policies/notifications"
[notification-policies]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/notification-policies/notifications"
[notification-policies]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/notifications/notification-policies"
[notification-policies]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/notifications/notification-policies"
{{% /docs/reference %}}
@@ -1,12 +1,12 @@
---
aliases:
  - ../silences/create-silence/ # /docs/grafana/latest/alerting/silences/create-silence/
  - ../silences/edit-silence/ # /docs/grafana/latest/alerting/silences/edit-silence/
  - ../silences/linking-to-silence-form/ # /docs/grafana/latest/alerting/silences/linking-to-silence-form/
  - ../silences/remove-silence/ # /docs/grafana/latest/alerting/silences/remove-silence/
  - ../unified-alerting/silences/ # /docs/grafana/latest/alerting/unified-alerting/silences/
  - ../silences/ # /docs/grafana/latest/alerting/silences/
  - ../manage-notifications/create-silence/ # /docs/grafana/latest/alerting/manage-notifications/create-silence/
  - ../silences/create-silence/ # /docs/grafana/<GRAFANA_VERSION>/alerting/silences/create-silence/
  - ../silences/edit-silence/ # /docs/grafana/<GRAFANA_VERSION>/alerting/silences/edit-silence/
  - ../silences/linking-to-silence-form/ # /docs/grafana/<GRAFANA_VERSION>/alerting/silences/linking-to-silence-form/
  - ../silences/remove-silence/ # /docs/grafana/<GRAFANA_VERSION>/alerting/silences/remove-silence/
  - ../unified-alerting/silences/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/silences/
  - ../silences/ # /docs/grafana/<GRAFANA_VERSION>/alerting/silences/
  - ../manage-notifications/create-silence/ # /docs/grafana/<GRAFANA_VERSION>/alerting/manage-notifications/create-silence/
canonical: https://grafana.com/docs/grafana/latest/alerting/configure-notifications/create-silence/
description: Create silences to stop notifications from getting created for a specified window of time
keywords:
@@ -1,16 +1,15 @@
---
aliases:
  - ../contact-points/ # /docs/grafana/<GRAFANA_VERSION>/alerting/contact-points/
  - ../contact-points/create-contact-point/ # /docs/grafana/<GRAFANA_VERSION>/alerting/contact-points/create-contact-point/
  - ../contact-points/delete-contact-point/ # /docs/grafana/<GRAFANA_VERSION>/alerting/contact-points/delete-contact-point/
  - ../contact-points/edit-contact-point/ # /docs/grafana/<GRAFANA_VERSION>/alerting/contact-points/edit-contact-point/
  - ../contact-points/test-contact-point/ # /docs/grafana/<GRAFANA_VERSION>/alerting/contact-points/test-contact-point/
  - ../manage-notifications/manage-contact-points/ # /docs/grafana/<GRAFANA_VERSION>/alerting/manage-notifications/manage-contact-points/
  - ../alerting-rules/create-contact-point/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/create-contact-point/
  - ../alerting-rules/manage-contact-points/ # /docs/grafana/latest/alerting/alerting-rules/manage-contact-points/
  - ../alerting-rules/create-notification-policy/ # /docs/grafana/latest/alerting/alerting-rules/create-notification-policy/
  - ../alerting-rules/manage-contact-points/integrations/ # /docs/grafana/latest/alerting/alerting-rules/manage-contact-points/integrations/
  - ../manage-notifications/manage-contact-points/ # /docs/grafana/latest/alerting/manage-notifications/manage-contact-points/
  - ../alerting-rules/manage-contact-points/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/manage-contact-points/
  - ../alerting-rules/create-notification-policy/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/create-notification-policy/
  - ../alerting-rules/manage-contact-points/integrations/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/manage-contact-points/integrations/
  - ../manage-notifications/manage-contact-points/ # /docs/grafana/<GRAFANA_VERSION>/alerting/manage-notifications/manage-contact-points/
canonical: https://grafana.com/docs/grafana/latest/alerting/configure-notifications/manage-contact-points/
description: Configure contact points to define how your contacts are notified when an alert rule fires
keywords:
@@ -1,7 +1,7 @@
---
aliases:
  - ../../../alerting-rules/manage-contact-points/configure-oncall/ # /docs/grafana/latest/alerting/alerting-rules/manage-contact-points/configure-oncall/
  - ../../../alerting-rules/manage-contact-points/integrations/configure-oncall/ # /docs/grafana/latest/alerting/alerting-rules/manage-contact-points/integrations/configure-oncall/
  - ../../../alerting-rules/manage-contact-points/configure-oncall/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/manage-contact-points/configure-oncall/
  - ../../../alerting-rules/manage-contact-points/integrations/configure-oncall/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/manage-contact-points/integrations/configure-oncall/
canonical: https://grafana.com/docs/grafana/latest/alerting/configure-notifications/manage-contact-points/integrations/configure-oncall/
description: Configure the Alerting - Grafana OnCall integration to connect alerts generated by Grafana Alerting with Grafana OnCall
keywords:
@@ -10,7 +10,7 @@ keywords:
  - oncall
  - integration
aliases:
  - ../configure-oncall/ # /docs/grafana/latest/alerting/alerting-rules/manage-contact-points/configure-oncall/
  - ../configure-oncall/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/manage-contact-points/configure-oncall/
labels:
  products:
    - cloud
@@ -1,6 +1,6 @@
---
aliases:
  - ../../../alerting-rules/manage-contact-points/integrations/pager-duty/ # /docs/grafana/latest/alerting/alerting-rules/manage-contact-points/integrations/pager-duty/
  - ../../../alerting-rules/manage-contact-points/integrations/pager-duty/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/manage-contact-points/integrations/pager-duty/
canonical: https://grafana.com/docs/grafana/latest/alerting/configure-notifications/manage-contact-points/integrations/pager-duty/
description: Configure the PagerDuty integration for Alerting
keywords:
@@ -1,10 +1,10 @@
---
aliases:
  - ../../../fundamentals/contact-points/notifiers/webhook-notifier/ # /docs/grafana/latest/alerting/fundamentals/contact-points/notifiers/webhook-notifier/
  - ../../../fundamentals/contact-points/webhook-notifier/ # /docs/grafana/latest/alerting/fundamentals/contact-points/webhook-notifier/
  - ../../../manage-notifications/manage-contact-points/webhook-notifier/ # /docs/grafana/latest/alerting/manage-notifications/manage-contact-points/webhook-notifier/
  - ../../../fundamentals/contact-points/notifiers/webhook-notifier/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/contact-points/notifiers/webhook-notifier/
  - ../../../fundamentals/contact-points/webhook-notifier/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/contact-points/webhook-notifier/
  - ../../../manage-notifications/manage-contact-points/webhook-notifier/ # /docs/grafana/<GRAFANA_VERSION>/alerting/manage-notifications/manage-contact-points/webhook-notifier/
  - alerting/manage-notifications/manage-contact-points/webhook-notifier/
  - ../../../alerting-rules/manage-contact-points/integrations/webhook-notifier/ # /docs/grafana/latest/alerting/alerting-rules/manage-contact-points/integrations/webhook-notifier/
  - ../../../alerting-rules/manage-contact-points/integrations/webhook-notifier/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/manage-contact-points/integrations/webhook-notifier/

canonical: https://grafana.com/docs/grafana/latest/alerting/configure-notifications/manage-contact-points/integrations/webhook-notifier/
description: Configure the webhook notifier integration for Alerting
@@ -1,8 +1,8 @@
---
aliases:
  - ../notifications/mute-timings/ # /docs/grafana/latest/alerting/notifications/mute-timings/
  - ../unified-alerting/notifications/mute-timings/ # /docs/grafana/latest/alerting/unified-alerting/notifications/mute-timings/
  - ../manage-notifications/mute-timings/ # /docs/grafana/latest/alerting/manage-notifications/mute-timings/
  - ../notifications/mute-timings/ # /docs/grafana/<GRAFANA_VERSION>/alerting/notifications/mute-timings/
  - ../unified-alerting/notifications/mute-timings/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/notifications/mute-timings/
  - ../manage-notifications/mute-timings/ # /docs/grafana/<GRAFANA_VERSION>/alerting/manage-notifications/mute-timings/
canonical: /docs/grafana/latest/alerting/configure-notifications/mute-timings/
description: Create mute timings to prevent alerts from firing during a specific and recurring period of time
keywords:
@@ -27,7 +27,7 @@ A mute timing is a recurring interval of time when no new notifications for a po

Similar to silences, mute timings do not prevent alert rules from being evaluated, nor do they stop alert instances from being shown in the user interface. They only prevent notifications from being created.

You can configure Grafana managed mute timings as well as mute timings for an [external Alertmanager data source][datasources/alertmanager]. For more information, refer to [Alertmanager documentation][fundamentals/alertmanager].
You can configure Grafana managed mute timings as well as mute timings for an [external Alertmanager data source][datasources/alertmanager]. For more information, refer to [Alertmanager documentation][intro-alertmanager].

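For illustration, when the mute timing lives in an external Alertmanager it is defined as a named time interval and referenced from a route; the interval name, weekdays, and receivers in this sketch are assumed examples.

```yaml
time_intervals:
  - name: weekends
    time_intervals:
      - weekdays: ['saturday', 'sunday']

route:
  receiver: team-oncall
  routes:
    - matchers:
        - team = backend
      receiver: backend-oncall
      # Suppresses notifications during the interval; evaluation still runs.
      mute_time_intervals:
        - weekends
```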
## Mute timings vs silences

@@ -85,6 +85,6 @@ If you want to specify an exact duration, specify all the options. For example,
[datasources/alertmanager]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/datasources/alertmanager"
[datasources/alertmanager]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/connect-externally-hosted/data-sources/alertmanager"

[fundamentals/alertmanager]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alertmanager"
[fundamentals/alertmanager]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alertmanager"
[intro-alertmanager]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/notifications/alertmanager"
[intro-alertmanager]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/notifications/alertmanager"
{{% /docs/reference %}}
@@ -1,6 +1,6 @@
---
aliases:
  - ../manage-notifications/template-notifications/ # /docs/grafana/latest/alerting/manage-notifications/template-notifications/
  - ../manage-notifications/template-notifications/ # /docs/grafana/<GRAFANA_VERSION>/alerting/manage-notifications/template-notifications/
canonical: https://grafana.com/docs/grafana/latest/alerting/configure-notifications/template-notifications/
description: Customize your notifications using notification templates
keywords:
@@ -1,6 +1,6 @@
---
aliases:
  - ../../manage-notifications/template-notifications/create-notification-templates/ # /docs/grafana/latest/alerting/manage-notifications/template-notifications/create-notification-templates/
  - ../../manage-notifications/template-notifications/create-notification-templates/ # /docs/grafana/<GRAFANA_VERSION>/alerting/manage-notifications/template-notifications/create-notification-templates/
canonical: https://grafana.com/docs/grafana/latest/alerting/configure-notifications/template-notifications/create-notification-templates/
description: Create notification templates to send to your contact points
keywords:
@@ -1,6 +1,6 @@
---
aliases:
  - ../manage-notifications/images-in-notifications/ # /docs/grafana/latest/alerting/manage-notifications/images-in-notifications/
  - ../manage-notifications/images-in-notifications/ # /docs/grafana/<GRAFANA_VERSION>/alerting/manage-notifications/images-in-notifications/
canonical: https://grafana.com/docs/grafana/latest/alerting/configure-notifications/template-notifications/images-in-notifications/
description: Use images in notifications to help users better understand why alerts are firing or have been resolved
keywords:
@@ -1,6 +1,6 @@
---
aliases:
  - ../../manage-notifications/template-notifications/reference/ # /docs/grafana/latest/alerting/manage-notifications/template-notifications/reference/
  - ../../manage-notifications/template-notifications/reference/ # /docs/grafana/<GRAFANA_VERSION>/alerting/manage-notifications/template-notifications/reference/
canonical: https://grafana.com/docs/grafana/latest/alerting/configure-notifications/template-notifications/reference/
description: Learn about notification templating options
keywords:
@@ -1,6 +1,6 @@
---
aliases:
  - ../../manage-notifications/template-notifications/use-notification-templates/ # /docs/grafana/latest/alerting/manage-notifications/template-notifications/use-notification-templates/
  - ../../manage-notifications/template-notifications/use-notification-templates/ # /docs/grafana/<GRAFANA_VERSION>/alerting/manage-notifications/template-notifications/use-notification-templates/
canonical: https://grafana.com/docs/grafana/latest/alerting/configure-notifications/template-notifications/use-notification-templates/
description: Use notification templates in contact points to customize your notifications
keywords:
@@ -1,6 +1,6 @@
---
aliases:
  - ../../manage-notifications/template-notifications/using-go-templating-language/ # /docs/grafana/latest/alerting/manage-notifications/template-notifications/using-go-templating-language/
  - ../../manage-notifications/template-notifications/using-go-templating-language/ # /docs/grafana/<GRAFANA_VERSION>/alerting/manage-notifications/template-notifications/using-go-templating-language/
canonical: https://grafana.com/docs/grafana/latest/alerting/configure-notifications/template-notifications/using-go-templating-language/
description: Use Go's templating language to create your own notification templates
keywords:
@@ -1,7 +1,7 @@
---
aliases:
  - metrics/
  - unified-alerting/fundamentals/
  - ./metrics/ # /docs/grafana/<GRAFANA_VERSION>/alerting/metrics/
  - ./unified-alerting/fundamentals/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/fundamentals/
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/
description: Learn about the fundamentals of Grafana Alerting as well as the key features it offers
labels:
@@ -145,6 +145,6 @@ Alerts are sent to the alert receiver where they are routed, grouped, inhibited,
{{% docs/reference %}}
[external-alertmanagers]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/configure-alertmanager"
[external-alertmanagers]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/set-up/configure-alertmanager"
[notification-policies]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/notification-policies/notifications"
[notification-policies]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/notification-policies/notifications"
[notification-policies]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/notifications/notification-policies"
[notification-policies]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/notifications/notification-policies"
{{% /docs/reference %}}
@@ -1,4 +1,9 @@
---
aliases:
  - ../fundamentals/data-source-alerting/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/data-source-alerting/
  - ../fundamentals/alert-rules/alert-instances/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/alert-instances/
  - ../fundamentals/alert-rules/recording-rules/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/recording-rules/
  - ../fundamentals/alert-rules/alert-rule-types/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/alert-rule-types/
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/
description: Learn about alert rules
keywords:
@@ -16,27 +21,109 @@ weight: 100

# Alert rules

An alert rule is a set of evaluation criteria for when an alert should fire. An alert rule consists of one or more queries and expressions, a condition, and the duration over which the condition needs to be met to start firing.
An alert rule is a set of evaluation criteria for when an alert should fire. An alert rule consists of one or more [queries and expressions, a condition][queries-and-conditions], and the duration over which the condition needs to be met to start firing.

While queries and expressions select the data set to evaluate, a condition sets the threshold that an alert must meet or exceed to create an alert.

An interval specifies how frequently an alert rule is evaluated. Duration, when configured, indicates how long a condition must be met. Alert rules can also define alerting behavior in the absence of data.
An interval specifies how frequently an [alert rule is evaluated][alert-rule-evaluation]. Duration, when configured, indicates how long a condition must be met. Alert rules can also define alerting behavior in the absence of data.

- [Alert rule types][alert-rule-types]
- [Alert instances][alert-instances]
- [Organising alert rules][organising-alerts]
- [Annotation and labels][annotation-label]
Grafana supports two different alert rule types: [Grafana-managed alert rules](#grafana-managed-alert-rules) and [Data source-managed alert rules](#data-source-managed-alert-rules).

## Grafana-managed alert rules

Grafana-managed alert rules are the most flexible alert rule type. They allow you to create alerts that can act on data from any of our supported data sources.

In addition to supporting multiple data sources, you can also add expressions to transform your data and set alert conditions. Using images in alert notifications is also supported. This is the only type of rule that allows alerting from multiple data sources in a single rule definition.

The following diagram shows how Grafana-managed alerting works.

{{< figure src="/media/docs/alerting/grafana-managed-rule.png" max-width="750px" caption="Grafana-managed alerting" >}}

1. Alert rules are created within Grafana based on one or more data sources.

1. Alert rules are evaluated by the Alert Rule Evaluation Engine from within Grafana.

1. Alerts are delivered using the internal Grafana Alertmanager.

Note that you can also configure alerts to be delivered using an external Alertmanager, or use both internal and external Alertmanagers.

### Supported data sources

Grafana-managed alert rules can query the following backend data sources that have alerting enabled:

- Built-in data sources or those developed and maintained by Grafana: `Graphite`, `Prometheus`, `Loki`, `InfluxDB`, `Elasticsearch`,
  `Google Cloud Monitoring`, `CloudWatch`, `Azure Monitor`, `MySQL`, `PostgreSQL`, `MSSQL`, `OpenTSDB`, `Oracle`, and `TestData`.
- Community data sources that specify `{"alerting": true, "backend": true}` in the [plugin.json](https://grafana.com/developers/plugin-tools/reference-plugin-json)

### Multi-dimensional alerts

Grafana-managed alerting supports multi-dimensional alerting. Each alert rule can create multiple alert instances. This is exceptionally powerful if you are observing multiple series in a single expression.

Consider the following PromQL expression:

```promql
sum by(cpu) (
  rate(node_cpu_seconds_total{mode!="idle"}[1m])
)
```

A rule using this expression will create as many alert instances as the number of CPUs we are observing after the first evaluation, allowing a single rule to report the status of each CPU.

{{< figure src="/static/img/docs/alerting/unified/multi-dimensional-alert.png" caption="A multi-dimensional Grafana managed alert rule" >}}

## Data source-managed alert rules

To create data source-managed alert rules, you must have a compatible Prometheus or Loki data source.

You can check if your data source supports rule creation via Grafana by testing the data source and observing if the Ruler API is supported.

For more information on the Ruler API, refer to [Ruler API](/docs/loki/latest/api/#ruler).

The following diagram shows how data source-managed alerting works.

{{< figure src="/media/docs/alerting/loki-mimir-rule.png" max-width="750px" caption="Grafana Mimir/Loki-managed alerting" >}}

1. Alert rules are created and stored within the data source itself.
1. Alert rules can only be created based on Prometheus data.
1. Alert rule evaluation and delivery is distributed across multiple nodes for high availability and fault tolerance.

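For illustration, a data source-managed alert rule stored in Mimir or Loki uses the Prometheus-style rule group format; the alert name, threshold, and labels in this sketch are assumed examples.

```yaml
groups:
  - name: cpu-alerts
    rules:
      - alert: HighCpuUsage
        # Evaluated by the data source's ruler rather than by Grafana.
        expr: 1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) > 0.8
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: 'CPU usage on {{ $labels.instance }} is above 80%.'
```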
### Recording rules

A recording rule allows you to pre-compute frequently needed or computationally expensive expressions and save their result as a new set of time series. This is useful if you want to run alerts on aggregated data or if you have dashboards that query computationally expensive expressions repeatedly.

Querying this new time series is faster, especially for dashboards, since they query the same expression every time the dashboards refresh.

Grafana Enterprise offers an alternative to recording rules in the form of recorded queries that can be executed against any data source.

For more information on recording rules, refer to [Create recording rules][create-recording-rules].

## Comparison between alert rule types

When choosing which alert rule type to use, consider the following comparison between Grafana-managed and data source-managed alert rules.

| <div style="width:200px">Feature</div> | <div style="width:200px">Grafana-managed alert rule</div> | <div style="width:200px">Data source-managed alert rule</div> |
| ------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
| Create alert rules<wbr /> based on data from any of our supported data sources | Yes | No: You can only create alert rules that are based on Prometheus data. The data source must have the Ruler API enabled. |
| Mix and match data sources | Yes | No |
| Includes support for recording rules | No | Yes |
| Add expressions to transform<wbr /> your data and set alert conditions | Yes | No |
| Use images in alert notifications | Yes | No |
| Scaling | More resource intensive; they depend on the database and are likely to suffer from transient errors. They only scale vertically. | Store alert rules within the data source itself and allow for “infinite” scaling. Generate and send alert notifications from the location of your data. |
| Alert rule evaluation and delivery | Alert rule evaluation and delivery is done from within Grafana, using an external Alertmanager, or both. | Alert rule evaluation and alert delivery is distributed, meaning there is no single point of failure. |

**Note:**

If you are using non-Prometheus data, we recommend choosing Grafana-managed alert rules. Otherwise, choose Grafana Mimir or Grafana Loki alert rules where possible.

{{% docs/reference %}}
[alert-instances]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/alert-instances"
[alert-instances]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/alert-instances"

[alert-rule-types]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/alert-rule-types"
[alert-rule-types]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/alert-rule-types"
[create-recording-rules]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/create-mimir-loki-managed-recording-rule"
[create-recording-rules]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/create-mimir-loki-managed-recording-rule"

[annotation-label]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/annotation-label"
[annotation-label]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/annotation-label"
[alert-rule-evaluation]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/rule-evaluation"
[alert-rule-evaluation]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/rule-evaluation"

[queries-and-conditions]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/queries-conditions"
[queries-and-conditions]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/queries-conditions"

[organising-alerts]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/organising-alerts"
[organising-alerts]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/organising-alerts"
{{% /docs/reference %}}
@@ -1,31 +0,0 @@
---
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/alert-instances/
description: Learn about alert instances
keywords:
  - grafana
  - alerting
  - instances
labels:
  products:
    - cloud
    - enterprise
    - oss
title: Alert instances
weight: 105
---

# Alert instances

Grafana managed alerts support multi-dimensional alerting. Each alert rule can create multiple alert instances. This is exceptionally powerful if you are observing multiple series in a single expression.

Consider the following PromQL expression:

```promql
sum by(cpu) (
  rate(node_cpu_seconds_total{mode!="idle"}[1m])
)
```

A rule using this expression will create as many alert instances as the amount of CPUs we are observing after the first evaluation, allowing a single rule to report the status of each CPU.

{{< figure src="/static/img/docs/alerting/unified/multi-dimensional-alert.png" caption="A multi-dimensional Grafana managed alert rule" >}}
@@ -1,77 +0,0 @@
---
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/alert-rule-types/
description: Learn about the different alert rule types that Grafana Alerting supports
keywords:
  - grafana
  - alerting
  - rule types
labels:
  products:
    - cloud
    - enterprise
    - oss
title: Alert rule types
weight: 102
---

# Alert rule types

Grafana supports two different alert rule types. Learn more about each of the alert rule types, how they work, and decide which one is best for your use case.

## Grafana-managed alert rules

Grafana-managed alert rules are the most flexible alert rule type. They allow you to create alerts that can act on data from any of our supported data sources.

In addition to supporting multiple data sources, you can also add expressions to transform your data and set alert conditions. Using images in alert notifications is also supported. This is the only type of rule that allows alerting from multiple data sources in a single rule definition.

The following diagram shows how Grafana-managed alerting works.

{{< figure src="/media/docs/alerting/grafana-managed-rule.png" max-width="750px" caption="Grafana-managed alerting" >}}

1. Alert rules are created within Grafana based on one or more data sources.

1. Alert rules are evaluated by the Alert Rule Evaluation Engine from within Grafana.

1. Alerts are delivered using the internal Grafana Alertmanager.

**Note:**

You can also configure alerts to be delivered using an external Alertmanager; or use both internal and external alertmanagers.
For more information, see Add an external Alertmanager.

## Data source-managed alert rules

To create data source-managed alert rules, you must have a compatible Prometheus or Loki data source.

You can check if your data source supports rule creation via Grafana by testing the data source and observing if the Ruler API is supported.

For more information on the Ruler API, refer to [Ruler API](/docs/loki/latest/api/#ruler).

The following diagram shows how data source-managed alerting works.

{{< figure src="/media/docs/alerting/loki-mimir-rule.png" max-width="750px" caption="Grafana Mimir/Loki-managed alerting" >}}

1. Alert rules are created and stored within the data source itself.
1. Alert rules can only be created based on Prometheus data.
1. Alert rule evaluation and delivery is distributed across multiple nodes for high availability and fault tolerance.

## Choose an alert rule type

When choosing which alert rule type to use, consider the following comparison between Grafana-managed alert rules and Grafana Mimir or Loki alert rules.

{{< responsive-table >}}
| <div style="width:200px">Feature</div> | <div style="width:200px">Grafana-managed alert rule</div> | <div style="width:200px">Loki/Mimir-managed alert rule |
| ----------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
| Create alert rules<wbr /> based on data from any of our supported data sources | Yes | No: You can only create alert rules that are based on Prometheus data. The data source must have the Ruler API enabled. |
| Mix and match data sources | Yes | No |
| Includes support for recording rules | No | Yes |
| Add expressions to transform<wbr /> your data and set alert conditions | Yes | No |
| Use images in alert notifications | Yes | No |
| Scaling | More resource intensive, depend on the database, and are likely to suffer from transient errors. They only scale vertically. | Store alert rules within the data source itself and allow for “infinite” scaling. Generate and send alert notifications from the location of your data. |
| Alert rule evaluation and delivery | Alert rule evaluation and delivery is done from within Grafana, using an external Alertmanager; or both. | Alert rule evaluation and alert delivery is distributed, meaning there is no single point of failure. |

{{< /responsive-table >}}

**Note:**

If you are using non-Prometheus data, we recommend choosing Grafana-managed alert rules. Otherwise, choose Grafana Mimir or Grafana Loki alert rules where possible.
@@ -0,0 +1,141 @@
---
aliases:
  - ../../fundamentals/annotation-label/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/annotation-label/
  - ../../fundamentals/annotation-label/labels-and-label-matchers/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/annotation-label/labels-and-label-matchers/
  - ../../fundamentals/annotation-label/how-to-use-labels/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/annotation-label/how-to-use-labels/
  - ../../alerting-rules/alert-annotation-label/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/alert-annotation-label/
  - ../../unified-alerting/alerting-rules/alert-annotation-label/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/alerting-rules/alert-annotation-label/
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/annotation-label/
description: Learn how to use annotations and labels to store key information about alerts
keywords:
  - grafana
  - alerting
  - guide
  - rules
  - create
labels:
  products:
    - cloud
    - enterprise
    - oss
title: Labels and annotations
weight: 105
---

# Labels and annotations

Labels and annotations contain information about an alert. Labels are used to differentiate an alert from all other alerts, while annotations are used to add additional information to an existing alert.

## Labels

Labels contain information that identifies an alert. An example of a label might be `server=server1` or `team=backend`. Each alert can have more than one label, and the complete set of labels for an alert is called its label set. It is this label set that identifies the alert.

For example, an alert might have the label set `{alertname="High CPU usage",server="server1"}` while another alert might have the label set `{alertname="High CPU usage",server="server2"}`. These are two separate alerts because although their `alertname` labels are the same, their `server` labels are different.

Labels are a fundamental component of alerting:

- The complete set of labels for an alert is what uniquely identifies an alert within Grafana alerts.
- The alerting UI shows labels for every alert instance generated during evaluation of that rule.
- Contact points can access labels to send notification messages that contain specific alert information.
- The Alertmanager uses labels to match alerts for silences and alert groups in notification policies.

### How label matching works

Use labels and label matchers to link alert rules to notification policies and silences. This allows for a flexible way to manage your alert instances, specify which policy should handle them, and which alerts to silence.

A label matcher consists of three distinct parts: the **label**, the **value**, and the **operator**.

- The **Label** field is the name of the label to match. It must exactly match the label name.

- The **Value** field matches against the corresponding value for the specified **Label** name. How it matches depends on the **Operator** value.

- The **Operator** field is the operator to match against the label value. The available operators are:

| Operator | Description                                        |
| -------- | -------------------------------------------------- |
| `=`      | Select labels that are exactly equal to the value. |
| `!=`     | Select labels that are not equal to the value.     |
| `=~`     | Select labels that regex-match the value.          |
| `!~`     | Select labels that do not regex-match the value.   |

If you are using multiple label matchers, they are combined using the AND logical operator. This means that all matchers must match in order to link a rule to a policy.

{{< collapse title="Label matching example" >}}
|
||||
|
||||
If you define the following set of labels for your alert:
|
||||
|
||||
`{ foo=bar, baz=qux, id=12 }`
|
||||
|
||||
then:
|
||||
|
||||
- A label matcher defined as `foo=bar` matches this alert rule.
|
||||
- A label matcher defined as `foo!=bar` does _not_ match this alert rule.
|
||||
- A label matcher defined as `id=~[0-9]+` matches this alert rule.
|
||||
- A label matcher defined as `baz!~[0-9]+` matches this alert rule.
|
||||
- Two label matchers defined as `foo=bar` and `id=~[0-9]+` match this alert rule.
|
||||
|
||||
**Exclude labels**
|
||||
|
||||
You can also write label matchers to exclude labels.
|
||||
|
||||
Here is an example that shows how to exclude the label `Team`. You can choose between any of the values below to exclude labels.
|
||||
|
||||
| Label | Operator | Value |
|
||||
| ------ | -------- | ----- |
|
||||
| `team` | `=` | `""` |
|
||||
| `team` | `!~` | `.+` |
|
||||
| `team` | `=~` | `^$` |
|
||||
|
||||
{{< /collapse >}}
|
||||
|
||||
## Label types

An alert's label set can contain three types of labels:

- Labels from the data source,
- Custom labels specified in the alert rule,
- A series of reserved labels, such as `alertname` or `grafana_folder`.

### Custom Labels

Custom labels are additional labels configured manually in the alert rule.

Ensure the label set for an alert does not have two or more labels with the same name. If a custom label has the same name as a label from the data source, it replaces that label. However, if a custom label has the same name as a reserved label, the custom label is omitted from the alert.

{{< collapse title="Key format" >}}
|
||||
|
||||
Grafana's built-in Alertmanager supports both Unicode label keys and values. If you are using an external Prometheus Alertmanager, label keys must be compatible with their [data model](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels).
|
||||
This means that label keys must only contain **ASCII letters**, **numbers**, as well as **underscores** and match the regex `[a-zA-Z_][a-zA-Z0-9_]*`.
|
||||
Any invalid characters will be removed or replaced by the Grafana alerting engine before being sent to the external Alertmanager according to the following rules:
|
||||
|
||||
- `Whitespace` will be removed.
|
||||
- `ASCII characters` will be replaced with `_`.
|
||||
- `All other characters` will be replaced with their lower-case hex representation. If this is the first character it will be prefixed with `_`.
|
||||
|
||||
Example: A label key/value pair `Alert! 🔔="🔥"` will become `Alert_0x1f514="🔥"`.
|
||||
|
||||
If multiple label keys are sanitized to the same value, the duplicates will have a short hash of the original label appended as a suffix.
|
||||
|
||||
{{< /collapse >}}
|
||||
|
||||
### Reserved labels

Reserved labels can be used in the same way as manually configured custom labels. The current list of available reserved labels is:

| Label          | Description                               |
| -------------- | ----------------------------------------- |
| alertname      | The name of the alert rule.               |
| grafana_folder | Title of the folder containing the alert. |

Labels prefixed with `grafana_` are reserved by Grafana for special use. To stop Grafana Alerting from adding a reserved label, you can disable it via the `disabled_labels` option in [unified_alerting.reserved_labels](/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana#unified_alertingreserved_labels) configuration.

## Annotations
|
||||
|
||||
Both labels and annotations have the same structure: a set of named values; however their intended uses are different. The purpose of annotations is to add additional information to existing alerts.
|
||||
|
||||
There are a number of suggested annotations in Grafana, such as `description`, `summary`, `runbook_url`, `dashboardUId`, and `panelId`. Like custom labels, annotations must have a name, and their value can contain a combination of text and template code that is evaluated when an alert fires.
|
||||
|
||||
{{% docs/reference %}}
|
||||
[variables-label-annotation]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/templating-labels-annotations"
|
||||
[variables-label-annotation]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules/templating-labels-annotations"
|
||||
{{% /docs/reference %}}
|
@ -1,7 +1,7 @@
|
||||
---
|
||||
aliases:
|
||||
- ../unified-alerting/alerting-rules/edit-cortex-loki-namespace-group/
|
||||
- ../unified-alerting/alerting-rules/edit-mimir-loki-namespace-group/
|
||||
- ../../unified-alerting/alerting-rules/edit-cortex-loki-namespace-group/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/alerting-rules/edit-cortex-loki-namespace-group/
|
||||
- ../../unified-alerting/alerting-rules/edit-mimir-loki-namespace-group/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/alerting-rules/edit-mimir-loki-namespace-group/
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/organising-alerts/
|
||||
description: Learn about organizing alerts using namespaces, folders, and groups
|
||||
keywords:
|
||||
@ -14,7 +14,7 @@ labels:
|
||||
- enterprise
|
||||
- oss
|
||||
title: Namespaces, folders, and groups
|
||||
weight: 105
|
||||
weight: 107
|
||||
---
|
||||
|
||||
## Namespaces, folders, and groups
|
||||
|
@ -1,4 +1,7 @@
|
||||
---
|
||||
aliases:
|
||||
- ../../fundamentals/evaluate-grafana-alerts/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/evaluate-grafana-alerts/
|
||||
- ../../unified-alerting/fundamentals/evaluate-grafana-alerts/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/fundamentals/evaluate-grafana-alerts/
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/queries-conditions/
|
||||
description: Define queries to get the data you want to measure and conditions that need to be met before an alert rule fires
|
||||
keywords:
|
||||
@ -148,9 +151,70 @@ To solve this problem, you can set a (custom) recovery threshold, which basicall
|
||||
|
||||
For example, you could set a threshold of 1000ms and a recovery threshold of 900ms. This way, an alert rule will only stop firing when it goes under 900ms and flapping is reduced.
|
||||
|
||||
## Alert on numeric data
|
||||
|
||||
For certain data sources, numeric data that is not time series can be alerted on directly, or passed into Server Side Expressions (SSE). This allows for more processing and better efficiency within the data source, and it can also simplify alert rules.
When alerting on numeric data instead of time series data, there is no need to reduce each labeled time series into a single number. Instead, labeled numbers are returned to Grafana.
|
||||
|
||||
### Tabular Data
|
||||
|
||||
This feature is supported with backend data sources that query tabular data:
|
||||
|
||||
- SQL data sources such as MySQL, Postgres, MSSQL, and Oracle.
|
||||
- The Azure Kusto based services: Azure Monitor (Logs), Azure Monitor (Azure Resource Graph), and Azure Data Explorer.
|
||||
|
||||
A query with Grafana-managed alerts or SSE is considered numeric with these data sources if:

- The "Format as" option is set to "Table" in the data source query.
- The table response returned to Grafana from the query includes only one numeric column (for example, int, double, or float), and optionally additional string columns.
|
||||
|
||||
If there are string columns, those columns become labels. The name of the column becomes the label name, and the value for each row becomes the value of the corresponding label. If multiple rows are returned, each row must be uniquely identified by its labels.
|
||||
|
||||
**Example**
|
||||
|
||||
For a MySQL table called "DiskSpace":
|
||||
|
||||
| Time | Host | Disk | PercentFree |
|
||||
| ----------- | ---- | ---- | ----------- |
|
||||
| 2021-June-7 | web1 | /etc | 3 |
|
||||
| 2021-June-7 | web2 | /var | 4 |
|
||||
| 2021-June-7 | web3 | /var | 8 |
|
||||
| ... | ... | ... | ... |
|
||||
|
||||
You can query the data, filtering on time, without returning the time series to Grafana. For example, an alert that triggers per Host and Disk when there is less than 5% free space:
|
||||
|
||||
```sql
-- Return one row per Host and Disk; non-zero values indicate low free space.
SELECT
  Host,
  Disk,
  CASE WHEN PercentFree < 5.0 THEN PercentFree ELSE 0 END AS PercentFree
FROM (
  SELECT
    Host,
    Disk,
    AVG(PercentFree) AS PercentFree
  FROM DiskSpace
  WHERE __timeFilter(Time) -- Grafana macro that filters on the selected time range
  GROUP BY
    Host,
    Disk
) AS DiskUsage
```
|
||||
|
||||
This query returns the following Table response to Grafana:
|
||||
|
||||
| Host | Disk | PercentFree |
|
||||
| ---- | ---- | ----------- |
|
||||
| web1 | /etc | 3 |
|
||||
| web2 | /var | 4 |
|
||||
| web3 | /var | 0 |
|
||||
|
||||
When this query is used as the **condition** in an alert rule, the rows with a non-zero value are alerting. As a result, three alert instances are produced:
|
||||
|
||||
| Labels | Status |
|
||||
| --------------------- | -------- |
|
||||
| {Host=web1,Disk=/etc} | Alerting |
| {Host=web2,Disk=/var} | Alerting |
| {Host=web3,Disk=/var} | Normal   |
|
||||
|
||||
{{% docs/reference %}}
|
||||
[data-source-alerting]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/data-source-alerting"
|
||||
[data-source-alerting]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/data-source-alerting"
|
||||
[data-source-alerting]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules#supported-data-sources"
|
||||
[data-source-alerting]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules#supported-data-sources"
|
||||
|
||||
[query-transform-data]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/panels-visualizations/query-transform-data"
|
||||
[query-transform-data]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/visualizations/panels-visualizations/query-transform-data"
|
@ -1,27 +0,0 @@
|
||||
---
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/recording-rules/
|
||||
description: Create recording rules to pre-compute frequently needed or computationally expensive expressions and save the result as a new set of time series
|
||||
keywords:
|
||||
- grafana
|
||||
- alerting
|
||||
- recording rules
|
||||
labels:
|
||||
products:
|
||||
- cloud
|
||||
- enterprise
|
||||
- oss
|
||||
title: Recording rules
|
||||
weight: 103
|
||||
---
|
||||
|
||||
# Recording rules
|
||||
|
||||
_Recording rules are only available for compatible Prometheus or Loki data sources._
|
||||
|
||||
A recording rule allows you to pre-compute frequently needed or computationally expensive expressions and save their result as a new set of time series. This is useful if you want to run alerts on aggregated data or if you have dashboards that query computationally expensive expressions repeatedly.
|
||||
|
||||
Querying this new time series is faster, especially for dashboards since they query the same expression every time the dashboards refresh.
|
||||
|
||||
Grafana Enterprise offers an alternative to recorded rules in the form of recorded queries that can be executed against any data source.
|
||||
|
||||
For more information on recording rules in Prometheus, refer to [recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/).
|
@ -11,7 +11,7 @@ labels:
|
||||
- enterprise
|
||||
- oss
|
||||
title: Alert rule evaluation
|
||||
weight: 106
|
||||
weight: 108
|
||||
---
|
||||
|
||||
# Alert rule evaluation
|
@ -1,6 +1,7 @@
|
||||
---
|
||||
aliases:
|
||||
- ../unified-alerting/alerting-rules/state-and-health/
|
||||
- ../../fundamentals/state-and-health/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/state-and-health/
|
||||
- ../../unified-alerting/alerting-rules/state-and-health/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/alerting-rules/state-and-health
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/state-and-health/
|
||||
description: Learn about the state and health of alert rules to understand several key status indicators about your alerts
|
||||
keywords:
|
||||
@ -14,7 +15,7 @@ labels:
|
||||
- enterprise
|
||||
- oss
|
||||
title: State and health of alert rules
|
||||
weight: 405
|
||||
weight: 109
|
||||
---
|
||||
|
||||
# State and health of alert rules
|
||||
|
@ -1,59 +0,0 @@
|
||||
---
|
||||
aliases:
|
||||
- ../fundamentals/alertmanager/
|
||||
- ../metrics/
|
||||
- ../unified-alerting/fundamentals/alertmanager/
|
||||
- alerting/manage-notifications/alertmanager/
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alertmanager/
|
||||
description: Learn about Alertmanagers and the Alertmanager options for Grafana Alerting
|
||||
labels:
|
||||
products:
|
||||
- cloud
|
||||
- enterprise
|
||||
- oss
|
||||
title: Alertmanager
|
||||
weight: 150
|
||||
---
|
||||
|
||||
# Alertmanager
|
||||
|
||||
Alertmanager enables you to quickly and efficiently manage and respond to alerts. It receives alerts, handles silencing, inhibition, grouping, and routing by sending notifications out via your channel of choice, for example, email or Slack.
|
||||
|
||||
In Grafana, you can use the Cloud Alertmanager, Grafana Alertmanager, or an external Alertmanager. You can also run multiple Alertmanagers; your decision depends on your set up and where your alerts are being generated.
|
||||
|
||||
**Cloud Alertmanager**
|
||||
|
||||
Cloud Alertmanager runs in Grafana Cloud and it can receive alerts from Grafana, Mimir, and Loki.
|
||||
|
||||
**Grafana Alertmanager**
|
||||
|
||||
Grafana Alertmanager is an internal Alertmanager that is pre-configured and available for selection by default if you run Grafana on-premises or open-source.
|
||||
|
||||
The Grafana Alertmanager can receive alerts from Grafana, but it cannot receive alerts from outside Grafana, for example, from Mimir or Loki.
|
||||
|
||||
**Note that inhibition rules are not supported in the Grafana Alertmanager.**
|
||||
|
||||
**External Alertmanager**
|
||||
|
||||
If you want to use a single Alertmanager to receive all your Grafana, Loki, Mimir, and Prometheus alerts, you can set up Grafana to use an external Alertmanager. This external Alertmanager can be configured and administered from within Grafana itself.
|
||||
|
||||
Here are two examples of when you may want to configure your own external alertmanager and send your alerts there instead of the Grafana Alertmanager:
|
||||
|
||||
1. You may already have Alertmanagers on-premises in your own Cloud infrastructure that you have set up and still want to use, because you have other alert generators, such as Prometheus.
|
||||
|
||||
2. You want to use both Prometheus on-premises and hosted Grafana to send alerts to the same Alertmanager that runs in your Cloud infrastructure.
|
||||
|
||||
Alertmanagers are visible from the drop-down menu on the Alerting Contact Points, Notification Policies, and Silences pages.
|
||||
|
||||
If you are provisioning your data source, set the flag `handleGrafanaManagedAlerts` in the `jsonData` field to `true` to send Grafana-managed alerts to this Alertmanager.
|
||||
|
||||
**Useful links**
|
||||
|
||||
[Prometheus Alertmanager documentation](https://prometheus.io/docs/alerting/latest/alertmanager/)
|
||||
|
||||
[Add an external Alertmanager][configure-alertmanager]
|
||||
|
||||
{{% docs/reference %}}
|
||||
[configure-alertmanager]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/configure-alertmanager"
|
||||
[configure-alertmanager]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/set-up/configure-alertmanager"
|
||||
{{% /docs/reference %}}
|
@ -1,53 +0,0 @@
|
||||
---
|
||||
aliases:
|
||||
- ../alerting-rules/alert-annotation-label/
|
||||
- ../unified-alerting/alerting-rules/alert-annotation-label/
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/annotation-label/
|
||||
description: Learn how to use annotations and labels to store key information about alerts
|
||||
keywords:
|
||||
- grafana
|
||||
- alerting
|
||||
- guide
|
||||
- rules
|
||||
- create
|
||||
labels:
|
||||
products:
|
||||
- cloud
|
||||
- enterprise
|
||||
- oss
|
||||
title: Labels and annotations
|
||||
weight: 130
|
||||
---
|
||||
|
||||
# Labels and annotations
|
||||
|
||||
Labels and annotations contain information about an alert. Both labels and annotations have the same structure: a set of named values; however their intended uses are different. An example of label, or the equivalent annotation, might be `alertname="test"`.
|
||||
|
||||
The main difference between a label and an annotation is that labels are used to differentiate an alert from all other alerts, while annotations are used to add additional information to an existing alert.
|
||||
|
||||
For example, consider two high CPU alerts: one for `server1` and another for `server2`. In such an example we might have a label called `server` where the first alert has the label `server="server1"` and the second alert has the label `server="server2"`. However, we might also want to add a description to each alert such as `"The CPU usage for server1 is above 75%."`, where `server1` and `75%` are replaced with the name and CPU usage of the server (please refer to the documentation on [templating labels and annotations][variables-label-annotation] for how to do this). This kind of description would be more suitable as an annotation.
|
||||
|
||||
## Labels
|
||||
|
||||
Labels contain information that identifies an alert. An example of a label might be `server=server1`. Each alert can have more than one label, and the complete set of labels for an alert is called its label set. It is this label set that identifies the alert.
|
||||
|
||||
For example, an alert might have the label set `{alertname="High CPU usage",server="server1"}` while another alert might have the label set `{alertname="High CPU usage",server="server2"}`. These are two separate alerts because although their `alertname` labels are the same, their `server` labels are different.
|
||||
|
||||
The label set for an alert is a combination of the labels from the datasource, custom labels from the alert rule, and a number of reserved labels such as `alertname`.
|
||||
|
||||
### Custom Labels
|
||||
|
||||
Custom labels are additional labels from the alert rule. Like annotations, custom labels must have a name, and their value can contain a combination of text and template code that is evaluated when an alert is fired. Documentation on how to template custom labels can be found [here][variables-label-annotation].
|
||||
|
||||
When using custom labels with templates it is important to make sure that the label value does not change between consecutive evaluations of the alert rule as this will end up creating large numbers of distinct alerts. However, it is OK for the template to produce different label values for different alerts. For example, do not put the value of the query in a custom label as this will end up creating a new set of alerts each time the value changes. Instead use annotations.
|
||||
|
||||
It is also important to make sure that the label set for an alert does not have two or more labels with the same name. If a custom label has the same name as a label from the datasource then it will replace that label. However, should a custom label have the same name as a reserved label then the custom label will be omitted from the alert.
|
||||
|
||||
## Annotations
|
||||
|
||||
Annotations are named pairs that add additional information to existing alerts. There are a number of suggested annotations in Grafana such as `description`, `summary`, `runbook_url`, `dashboardUId` and `panelId`. Like custom labels, annotations must have a name, and their value can contain a combination of text and template code that is evaluated when an alert is fired. If an annotation contains template code, the template is evaluated once when the alert is fired. It is not re-evaluated, even when the alert is resolved. Documentation on how to template annotations can be found [here][variables-label-annotation].
|
||||
|
||||
{{% docs/reference %}}
|
||||
[variables-label-annotation]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/annotation-label/variables-label-annotation"
|
||||
[variables-label-annotation]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/annotation-label/variables-label-annotation"
|
||||
{{% /docs/reference %}}
|
@ -1,60 +0,0 @@
|
||||
---
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/annotation-label/how-to-use-labels/
|
||||
description: Learn how to use labels to link alert rules to notification policies and silences
|
||||
keywords:
|
||||
- grafana
|
||||
- alerting
|
||||
- guide
|
||||
- fundamentals
|
||||
labels:
|
||||
products:
|
||||
- cloud
|
||||
- enterprise
|
||||
- oss
|
||||
title: Labels in Grafana Alerting
|
||||
weight: 117
|
||||
---
|
||||
|
||||
# Labels in Grafana Alerting
|
||||
|
||||
This topic explains why labels are a fundamental component of alerting.
|
||||
|
||||
- The complete set of labels for an alert is what uniquely identifies an alert within Grafana alerts.
|
||||
- The Alertmanager uses labels to match alerts for silences and alert groups in notification policies.
|
||||
- The alerting UI shows labels for every alert instance generated during evaluation of that rule.
|
||||
- Contact points can access labels to dynamically generate notifications that contain information specific to the alert that is resulting in a notification.
|
||||
- You can add labels to an [alerting rule][alerting-rules]. Labels are manually configurable, use template functions, and can reference other labels. Labels added to an alerting rule take precedence in the event of a collision between labels (except in the case of [Grafana reserved labels](#grafana-reserved-labels)).
|
||||
|
||||
{{< figure src="/static/img/docs/alerting/unified/rule-edit-details-8-0.png" max-width="550px" caption="Alert details" >}}
|
||||
|
||||
## External Alertmanager Compatibility
|
||||
|
||||
Grafana's built-in Alertmanager supports both Unicode label keys and values. If you are using an external Prometheus Alertmanager, label keys must be compatible with their [data model](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels).
|
||||
This means that label keys must only contain **ASCII letters**, **numbers**, as well as **underscores** and match the regex `[a-zA-Z_][a-zA-Z0-9_]*`.
|
||||
Any invalid characters will be removed or replaced by the Grafana alerting engine before being sent to the external Alertmanager according to the following rules:
|
||||
|
||||
- `Whitespace` will be removed.
|
||||
- `ASCII characters` will be replaced with `_`.
|
||||
- `All other characters` will be replaced with their lower-case hex representation. If this is the first character it will be prefixed with `_`.
|
||||
|
||||
Example: A label key/value pair `Alert! 🔔="🔥"` will become `Alert_0x1f514="🔥"`.
|
||||
|
||||
**Note** If multiple label keys are sanitized to the same value, the duplicates will have a short hash of the original label appended as a suffix.
|
||||
|
||||
## Grafana reserved labels
|
||||
|
||||
{{% admonition type="note" %}}
|
||||
Labels prefixed with `grafana_` are reserved by Grafana for special use. If a manually configured label is added beginning with `grafana_` it may be overwritten in case of collision.
|
||||
To stop the Grafana Alerting engine from adding a reserved label, you can disable it via the `disabled_labels` option in [unified_alerting.reserved_labels](/docs/grafana/<GRAFANA_VERSION>/setup-grafana/configure-grafana#unified_alertingreserved_labels) configuration.
|
||||
{{% /admonition %}}
|
||||
|
||||
Grafana reserved labels can be used in the same way as manually configured labels. The current list of available reserved labels are:
|
||||
|
||||
| Label | Description |
|
||||
| -------------- | ----------------------------------------- |
|
||||
| grafana_folder | Title of the folder containing the alert. |
|
||||
|
||||
{{% docs/reference %}}
|
||||
[alerting-rules]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules"
|
||||
[alerting-rules]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/alerting-rules"
|
||||
{{% /docs/reference %}}
|
@ -1,64 +0,0 @@
|
||||
---
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/annotation-label/labels-and-label-matchers/
|
||||
description: Learn how to use label matchers to link alert rules to notification policies and silences
|
||||
keywords:
|
||||
- grafana
|
||||
- alerting
|
||||
- guide
|
||||
- fundamentals
|
||||
labels:
|
||||
products:
|
||||
- cloud
|
||||
- enterprise
|
||||
- oss
|
||||
menuTitle: Label matchers
|
||||
title: How label matching works
|
||||
weight: 117
|
||||
---
|
||||
|
||||
# How label matching works
|
||||
|
||||
Use labels and label matchers to link alert rules to notification policies and silences. This allows for a very flexible way to manage your alert instances, specify which policy should handle them, and which alerts to silence.
|
||||
|
||||
A label matchers consists of 3 distinct parts, the **label**, the **value** and the **operator**.
|
||||
|
||||
- The **Label** field is the name of the label to match. It must exactly match the label name.
|
||||
|
||||
- The **Value** field matches against the corresponding value for the specified **Label** name. How it matches depends on the **Operator** value.
|
||||
|
||||
- The **Operator** field is the operator to match against the label value. The available operators are:
|
||||
|
||||
| Operator | Description |
|
||||
| -------- | -------------------------------------------------- |
|
||||
| `=` | Select labels that are exactly equal to the value. |
|
||||
| `!=` | Select labels that are not equal to the value. |
|
||||
| `=~` | Select labels that regex-match the value. |
|
||||
| `!~` | Select labels that do not regex-match the value. |
|
||||
|
||||
If you are using multiple label matchers, they are combined using the AND logical operator. This means that all matchers must match in order to link a rule to a policy.
|
||||
|
||||
## Example scenario
|
||||
|
||||
If you define the following set of labels for your alert:
|
||||
|
||||
`{ foo=bar, baz=qux, id=12 }`
|
||||
|
||||
then:
|
||||
|
||||
- A label matcher defined as `foo=bar` matches this alert rule.
|
||||
- A label matcher defined as `foo!=bar` does _not_ match this alert rule.
|
||||
- A label matcher defined as `id=~[0-9]+` matches this alert rule.
|
||||
- A label matcher defined as `baz!~[0-9]+` matches this alert rule.
|
||||
- Two label matchers defined as `foo=bar` and `id=~[0-9]+` match this alert rule.
|
||||
|
||||
## Exclude labels
|
||||
|
||||
You can also write label matchers to exclude labels.
|
||||
|
||||
Here is an example that shows how to exclude the label `Team`. You can choose between any of the values below to exclude labels.
|
||||
|
||||
| Label | Operator | Value |
|
||||
| ------ | -------- | ----- |
|
||||
| `team` | `=` | `""` |
|
||||
| `team` | `!~` | `.+` |
|
||||
| `team` | `=~` | `^$` |
|
@ -1,95 +0,0 @@
|
||||
---
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/data-source-alerting/
|
||||
description: Learn about the data sources supported by Grafana Alerting
|
||||
labels:
|
||||
products:
|
||||
- cloud
|
||||
- enterprise
|
||||
- oss
|
||||
title: Data sources and Grafana Alerting
|
||||
weight: 140
|
||||
---
|
||||
|
||||
# Data sources and Grafana Alerting
|
||||
|
||||
There are a number of data sources that are compatible with Grafana Alerting. Each data source is supported by a plugin. You can use one of the built-in data sources listed below, use [external data source plugins](/grafana/plugins/?type=datasource), or create your own data source plugin.
|
||||
|
||||
If you are creating your own data source plugin, make sure it is a backend plugin as Grafana Alerting requires this in order to be able to evaluate rules using the data source. Frontend data sources are not supported, because the evaluation engine runs on the backend.
|
||||
|
||||
Specifying `{ "alerting": true, “backend”: true }` in the plugin.json file indicates that the data source plugin is compatible with Grafana Alerting and includes the backend data-fetching code. For more information, refer to [Build a data source backend plugin](/tutorials/build-a-data-source-backend-plugin/).
|
||||
|
||||
These are the data sources that are compatible with and supported by Grafana Alerting.
|
||||
|
||||
- [AWS CloudWatch][]
|
||||
- [Azure Monitor][]
|
||||
- [Elasticsearch][]
|
||||
- [Google Cloud Monitoring][]
|
||||
- [Graphite][]
|
||||
- [InfluxDB][]
|
||||
- [Loki][]
|
||||
- [Microsoft SQL Server (MSSQL)][]
|
||||
- [MySQL][]
|
||||
- [Open TSDB][]
|
||||
- [PostgreSQL][]
|
||||
- [Prometheus][]
|
||||
- [Jaeger][]
|
||||
- [Zipkin][]
|
||||
- [Tempo][]
|
||||
- [Testdata][]
|
||||
|
||||
## Useful links
|
||||
|
||||
- [Grafana data sources][]
|
||||
|
||||
{{% docs/reference %}}
|
||||
[Grafana data sources]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/datasources"
|
||||
[Grafana data sources]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/connect-externally-hosted/data-sources"
|
||||
|
||||
[AWS CloudWatch]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/datasources/aws-cloudwatch"
|
||||
[AWS CloudWatch]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/connect-externally-hosted/data-sources/aws-cloudwatch"
|
||||
|
||||
[Azure Monitor]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/datasources/azure-monitor"
|
||||
[Azure Monitor]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/connect-externally-hosted/data-sources/azure-monitor"
|
||||
|
||||
[Elasticsearch]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/datasources/elasticsearch"
|
||||
[Elasticsearch]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/connect-externally-hosted/data-sources/elasticsearch"
|
||||
|
||||
[Google Cloud Monitoring]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/datasources/google-cloud-monitoring"
|
||||
[Google Cloud Monitoring]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/connect-externally-hosted/data-sources/google-cloud-monitoring"
|
||||
|
||||
[Graphite]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/datasources/graphite"
|
||||
[Graphite]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/connect-externally-hosted/data-sources/graphite"
|
||||
|
||||
[InfluxDB]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/datasources/influxdb"
|
||||
[InfluxDB]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/connect-externally-hosted/data-sources/influxdb"
|
||||
|
||||
[Loki]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/datasources/loki"
|
||||
[Loki]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/connect-externally-hosted/data-sources/loki"
|
||||
|
||||
[Microsoft SQL Server (MSSQL)]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/datasources/mssql"
|
||||
[Microsoft SQL Server (MSSQL)]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/connect-externally-hosted/data-sources/mssql"
|
||||
|
||||
[MySQL]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/datasources/mysql"
|
||||
[MySQL]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/connect-externally-hosted/data-sources/mysql"
|
||||
|
||||
[Open TSDB]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/datasources/opentsdb"
|
||||
[Open TSDB]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/connect-externally-hosted/data-sources/opentsdb"
|
||||
|
||||
[PostgreSQL]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/datasources/postgres"
|
||||
[PostgreSQL]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/connect-externally-hosted/data-sources/postgres"
|
||||
|
||||
[Prometheus]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/datasources/prometheus"
|
||||
[Prometheus]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/connect-externally-hosted/data-sources/prometheus"
|
||||
|
||||
[Jaeger]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/datasources/jaeger"
|
||||
[Jaeger]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/connect-externally-hosted/data-sources/jaeger"
|
||||
|
||||
[Zipkin]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/datasources/zipkin"
|
||||
[Zipkin]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/connect-externally-hosted/data-sources/zipkin"
|
||||
|
||||
[Tempo]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/datasources/tempo"
|
||||
[Tempo]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/connect-externally-hosted/data-sources/tempo"
|
||||
|
||||
[Testdata]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/datasources/testdata"
|
||||
[Testdata]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/connect-externally-hosted/data-sources/testdata"
|
||||
{{% /docs/reference %}}
|
@ -1,112 +0,0 @@
|
||||
---
|
||||
aliases:
|
||||
- ../metrics/
|
||||
- ../unified-alerting/fundamentals/evaluate-grafana-alerts/
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/evaluate-grafana-alerts/
|
||||
description: Learn how how Grafana-managed alerts are evaluated by the backend engine as well as how Grafana handles alerting on numeric rather than time series data
|
||||
labels:
|
||||
products:
|
||||
- cloud
|
||||
- enterprise
|
||||
- oss
|
||||
title: Alerting on numeric data
|
||||
weight: 160
|
||||
---
|
||||
|
||||
# Alerting on numeric data
|
||||
|
||||
This topic describes how Grafana managed alerts are evaluated by the backend engine as well as how Grafana handles alerting on numeric rather than time series data.
|
||||
|
||||
- [Alerting on numeric data](#alerting-on-numeric-data)
|
||||
- [Alert evaluation](#alert-evaluation)
|
||||
- [Metrics from the alerting engine](#metrics-from-the-alerting-engine)
|
||||
- [Alerting on numeric data](#alerting-on-numeric-data-1)
|
||||
- [Tabular Data](#tabular-data)
|
||||
- [Example](#example)
|
||||
|
||||
## Alert evaluation
|
||||
|
||||
Grafana managed alerts query the following backend data sources that have alerting enabled:
|
||||
|
||||
- built-in data sources or those developed and maintained by Grafana: `Graphite`, `Prometheus`, `Loki`, `InfluxDB`, `Elasticsearch`,
|
||||
`Google Cloud Monitoring`, `Cloudwatch`, `Azure Monitor`, `MySQL`, `PostgreSQL`, `MSSQL`, `OpenTSDB`, `Oracle`, and `Azure Monitor`
|
||||
- community developed backend data sources with alerting enabled (`backend` and `alerting` properties are set in the [plugin.json](/developers/plugin-tools/reference-plugin-json)
|
||||
|
||||
### Metrics from the alerting engine
|
||||
|
||||
The alerting engine publishes some internal metrics about itself. You can read more about how Grafana publishes [internal metrics][set-up-grafana-monitoring].
|
||||
|
||||
| Metric Name | Type | Description |
|
||||
| ------------------------------------------------- | --------- | ---------------------------------------------------------------------------------------- |
|
||||
| `grafana_alerting_alerts` | gauge | How many alerts by state |
|
||||
| `grafana_alerting_request_duration` | histogram | Histogram of requests to the Alerting API |
|
||||
| `grafana_alerting_active_configurations` | gauge | The number of active, non default Alertmanager configurations for grafana managed alerts |
|
||||
| `grafana_alerting_rule_evaluations_total` | counter | The total number of rule evaluations |
|
||||
| `grafana_alerting_rule_evaluation_failures_total` | counter | The total number of rule evaluation failures |
|
||||
| `grafana_alerting_rule_evaluation_duration` | summary | The duration for a rule to execute |
|
||||
| `grafana_alerting_rule_group_rules` | gauge | The number of rules |
|
||||
|
||||
## Alerting on numeric data
|
||||
|
||||
Among certain data sources numeric data that is not time series can be directly alerted on, or passed into Server Side Expressions (SSE). This allows for more processing and resulting efficiency within the data source, and it can also simplify alert rules.
|
||||
When alerting on numeric data instead of time series data, there is no need to reduce each labeled time series into a single number. Instead labeled numbers are returned to Grafana instead.
|
||||
|
||||
### Tabular Data
|
||||
|
||||
This feature is supported with backend data sources that query tabular data:
|
||||
|
||||
- SQL data sources such as MySQL, Postgres, MSSQL, and Oracle.
|
||||
- The Azure Kusto based services: Azure Monitor (Logs), Azure Monitor (Azure Resource Graph), and Azure Data Explorer.
|
||||
|
||||
A query with Grafana managed alerts or SSE is considered numeric with these data sources, if:
|
||||
|
||||
- The "Format AS" option is set to "Table" in the data source query.
|
||||
- The table response returned to Grafana from the query includes only one numeric (e.g. int, double, float) column, and optionally additional string columns.
|
||||
|
||||
If there are string columns then those columns become labels. The name of column becomes the label name, and the value for each row becomes the value of the corresponding label. If multiple rows are returned, then each row should be uniquely identified their labels.
|
||||
|
||||
### Example
|
||||
|
||||
For a MySQL table called "DiskSpace":
|
||||
|
||||
| Time | Host | Disk | PercentFree |
|
||||
| ----------- | ---- | ---- | ----------- |
|
||||
| 2021-June-7 | web1 | /etc | 3 |
|
||||
| 2021-June-7 | web2 | /var | 4 |
|
||||
| 2021-June-7 | web3 | /var | 8 |
|
||||
| ... | ... | ... | ... |
|
||||
|
||||
You can query the data filtering on time, but without returning the time series to Grafana. For example, an alert that would trigger per Host, Disk when there is less than 5% free space:
|
||||
|
||||
```sql
|
||||
SELECT Host, Disk, CASE WHEN PercentFree < 5.0 THEN PercentFree ELSE 0 END FROM (
|
||||
SELECT
|
||||
Host,
|
||||
Disk,
|
||||
Avg(PercentFree)
|
||||
FROM DiskSpace
|
||||
Group By
|
||||
Host,
|
||||
Disk
|
||||
Where __timeFilter(Time)
|
||||
```
|
||||
|
||||
This query returns the following Table response to Grafana:
|
||||
|
||||
| Host | Disk | PercentFree |
|
||||
| ---- | ---- | ----------- |
|
||||
| web1 | /etc | 3 |
|
||||
| web2 | /var | 4 |
|
||||
| web3 | /var | 0 |
|
||||
|
||||
When this query is used as the **condition** in an alert rule, then the non-zero will be alerting. As a result, three alert instances are produced:
|
||||
|
||||
| Labels | Status |
|
||||
| --------------------- | -------- |
|
||||
| {Host=web1,disk=/etc} | Alerting |
|
||||
| {Host=web2,disk=/var} | Alerting |
|
||||
| {Host=web3,disk=/var} | Normal |
|
||||
|
||||
{{% docs/reference %}}
|
||||
[set-up-grafana-monitoring]: "/docs/ -> /docs/grafana/<GRAFANA_VERSION>/setup-grafana/set-up-grafana-monitoring"
|
||||
{{% /docs/reference %}}
|
@ -1,43 +0,0 @@
|
||||
---
|
||||
aliases:
|
||||
- ../high-availability/
|
||||
- ../unified-alerting/high-availability/
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/high-availability/
|
||||
description: Learn about high availability in Grafana Alerting
|
||||
keywords:
|
||||
- grafana
|
||||
- alerting
|
||||
- tutorials
|
||||
- ha
|
||||
- high availability
|
||||
labels:
|
||||
products:
|
||||
- cloud
|
||||
- enterprise
|
||||
- oss
|
||||
title: Alerting high availability
|
||||
weight: 170
|
||||
---
|
||||
|
||||
# Alerting high availability
|
||||
|
||||
Grafana Alerting uses the Prometheus model of separating the evaluation of alert rules from the delivering of notifications. In this model the evaluation of alert rules is done in the alert generator and the delivering of notifications is done in the alert receiver. In Grafana Alerting, the alert generator is the Scheduler and the receiver is the Alertmanager.
|
||||
|
||||
{{< figure src="/static/img/docs/alerting/unified/high-availability-ua.png" class="docs-image--no-shadow" max-width= "750px" caption="High availability" >}}
|
||||
|
||||
When running multiple instances of Grafana, all alert rules are evaluated on all instances. You can think of the evaluation of alert rules as being duplicated. This is how Grafana Alerting makes sure that as long as at least one Grafana instance is working, alert rules will still be evaluated and notifications for alerts will still be sent. You will see this duplication in state history, and is a good way to tell if you are using high availability.
|
||||
|
||||
While the alert generator evaluates all alert rules on all instances, the alert receiver makes a best-effort attempt to avoid sending duplicate notifications. Alertmanager chooses availability over consistency, which may result in occasional duplicated or out-of-order notifications. It takes the opinion that duplicate or out-of-order notifications are better than no notifications.
|
||||
|
||||
The Alertmanager uses a gossip protocol to share information about notifications between Grafana instances. It also gossips silences, which means a silence created on one Grafana instance is replicated to all other Grafana instances. Both notifications and silences are persisted to the database periodically, and during graceful shut down.
|
||||
|
||||
It is important to make sure that gossiping is configured and tested. You can find the documentation on how to do that [here][configure-high-availability].
|
||||
|
||||
## Useful links
|
||||
|
||||
[Configure alerting high availability][configure-high-availability]
|
||||
|
||||
{{% docs/reference %}}
|
||||
[configure-high-availability]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/configure-high-availability"
|
||||
[configure-high-availability]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/set-up/configure-high-availability"
|
||||
{{% /docs/reference %}}
|
@ -1,6 +1,8 @@
|
||||
---
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/notification-policies/
|
||||
description: Learn about how notification policies work
|
||||
aliases:
|
||||
- ./notification-policies/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/notification-policies/
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/notifications/
|
||||
description: Learn about how notifications work
|
||||
keywords:
|
||||
- grafana
|
||||
- alerting
|
||||
@ -26,6 +28,12 @@ Next, create a notification policy which is a set of rules for where, when and h
|
||||
|
||||
Grafana uses Alertmanagers to send notifications for firing and resolved alerts. Grafana has its own Alertmanager, referred to as "Grafana" in the user interface, and also supports sending notifications from other Alertmanagers, such as the [Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/). The Grafana Alertmanager uses notification policies and contact points to configure how and where a notification is sent, how often a notification should be sent, and whether alerts should all be sent in the same notification, sent in grouped notifications based on a set of labels, or sent as separate notifications.
|
||||
|
||||
## Contact points
|
||||
|
||||
Contact points contain the configuration for sending alert notifications. They specify destinations such as email, Slack, OnCall, and webhooks, and they let you customize notification messages, including with notification templates.

A contact point is a list of integrations, each of which sends a message to a specific destination. You can configure contact points via notification policies or directly from alert rules.
|
||||
|
||||
## Notification policies
|
||||
|
||||
Notification policies control when and where notifications are sent. A notification policy can send all alerts together in the same notification, send alerts in grouped notifications based on a set of labels, or send alerts as separate notifications. You can configure each notification policy to control how often notifications are sent, and each policy can have one or more mute timings that inhibit notifications at certain times of the day and on certain days of the week.
|
@ -0,0 +1,46 @@
|
||||
---
|
||||
aliases:
|
||||
- ../../fundamentals/alertmanager/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alertmanager/
|
||||
- ../../unified-alerting/fundamentals/alertmanager/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/fundamentals/alertmanager/
|
||||
- ../../manage-notifications/alertmanager/ # /docs/grafana/<GRAFANA_VERSION>/alerting/manage-notifications/alertmanager/
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/notifications/alertmanager/
|
||||
description: Learn about Alertmanagers and the Alertmanager options for Grafana Alerting
|
||||
labels:
|
||||
products:
|
||||
- cloud
|
||||
- enterprise
|
||||
- oss
|
||||
title: Alertmanager
|
||||
weight: 111
|
||||
---
|
||||
|
||||
# Alertmanager
|
||||
|
||||
Grafana sends firing and resolved alerts to Alertmanagers. The Alertmanager receives alerts, handles silencing, inhibition, grouping, and routing by sending notifications out via your channel of choice, for example, email or Slack.
|
||||
|
||||
Grafana has its own Alertmanager, referred to as "Grafana" in the user interface, but also supports sending alerts to other Alertmanagers too, such as the [Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/).
|
||||
|
||||
The Grafana Alertmanager uses notification policies and contact points to configure how and where a notification is sent; how often a notification should be sent; and whether alerts should all be sent in the same notification, sent in grouped notifications based on a set of labels, or as separate notifications.
|
||||
|
||||
Alertmanagers are visible from the drop-down menu on the Alerting Contact Points, Notification Policies, and Silences pages.
|
||||
|
||||
In Grafana, you can use the Cloud Alertmanager, the Grafana Alertmanager, or an external Alertmanager. You can also run multiple Alertmanagers; your decision depends on your setup and where your alerts are being generated.
|
||||
|
||||
- **Grafana Alertmanager** is an internal Alertmanager that is pre-configured and available for selection by default if you run Grafana on-premises or open-source.
|
||||
|
||||
The Grafana Alertmanager can receive alerts from Grafana, but it cannot receive alerts from outside Grafana, for example, from Mimir or Loki. Note that inhibition rules are not supported.
|
||||
|
||||
- **Cloud Alertmanager** runs in Grafana Cloud and it can receive alerts from Grafana, Mimir, and Loki.
|
||||
|
||||
- **External Alertmanager** can receive all your Grafana, Loki, Mimir, and Prometheus alerts. External Alertmanagers can be configured and administered from within Grafana itself.
|
||||
|
||||
Here are two examples of when you may want to [add your own external Alertmanager][configure-alertmanager] and send your alerts there instead of to the Grafana Alertmanager:
|
||||
|
||||
1. You may already have Alertmanagers on-premises in your own Cloud infrastructure that you have set up and still want to use, because you have other alert generators, such as Prometheus.
|
||||
|
||||
2. You want to use both Prometheus on-premises and hosted Grafana to send alerts to the same Alertmanager that runs in your Cloud infrastructure.
|
||||
|
||||
{{% docs/reference %}}
|
||||
[configure-alertmanager]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/configure-alertmanager"
|
||||
[configure-alertmanager]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/set-up/configure-alertmanager"
|
||||
{{% /docs/reference %}}
|
@ -1,9 +1,10 @@
|
||||
---
|
||||
aliases:
|
||||
- /docs/grafana/latest/alerting/contact-points/
|
||||
- /docs/grafana/latest/alerting/unified-alerting/contact-points/
|
||||
- /docs/grafana/latest/alerting/fundamentals/contact-points/contact-point-types/
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/contact-points/
|
||||
- ../../fundamentals/contact-points/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/contact-points/
|
||||
- ../../fundamentals/contact-points/contact-point-types/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/contact-points/contact-point-types/
|
||||
- ../../contact-points/ # /docs/grafana/<GRAFANA_VERSION>/alerting/contact-points/
|
||||
- ../../unified-alerting/contact-points/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/contact-points/
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/notifications/contact-points/
|
||||
description: Learn about contact points and the supported contact point integrations
|
||||
keywords:
|
||||
- grafana
|
||||
@ -18,7 +19,7 @@ labels:
|
||||
- enterprise
|
||||
- oss
|
||||
title: Contact points
|
||||
weight: 120
|
||||
weight: 112
|
||||
---
|
||||
|
||||
# Contact points
|
@ -1,9 +1,9 @@
|
||||
---
|
||||
aliases:
|
||||
- ../../contact-points/message-templating/
|
||||
- ../../message-templating/
|
||||
- ../../unified-alerting/message-templating/
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/message-templating/
|
||||
- ../../contact-points/message-templating/ # /docs/grafana/<GRAFANA_VERSION>/alerting/contact-points/message-templating/
|
||||
- ../../alert-rules/message-templating/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alert-rules/message-templating/
|
||||
- ../../unified-alerting/message-templating/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/message-templating/
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/notifications/message-templating/
|
||||
description: Learn about notification templating
|
||||
keywords:
|
||||
- grafana
|
||||
@ -16,11 +16,11 @@ labels:
|
||||
- cloud
|
||||
- enterprise
|
||||
- oss
|
||||
title: Notification templating
|
||||
weight: 415
|
||||
title: Notification templates
|
||||
weight: 114
|
||||
---
|
||||
|
||||
# Notification templating
|
||||
# Notification templates
|
||||
|
||||
Notifications sent via contact points are built using notification templates. Grafana's default templates are based on the [Go templating system](https://golang.org/pkg/text/template) where some fields are evaluated as text, while others are evaluated as HTML (which can affect escaping).
|
||||
|
@ -1,8 +1,7 @@
|
||||
---
|
||||
aliases:
|
||||
- ../notifications/
|
||||
- alerting/manage-notifications/create-notification-policy/
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/notification-policies/notifications/
|
||||
- ../notification-policies/notifications/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/notification-policies/notifications/
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/fundamentals/notifications/notification-policies/
|
||||
description: Learn about how notification policies work and are structured
|
||||
keywords:
|
||||
- grafana
|
||||
@ -17,7 +16,7 @@ labels:
|
||||
- enterprise
|
||||
- oss
|
||||
title: Notification policies
|
||||
weight: 410
|
||||
weight: 113
|
||||
---
|
||||
|
||||
# Notification policies
|
||||
@ -132,6 +131,6 @@ Repeat interval decides how often notifications are repeated if the group has no
|
||||
**Default** 4 hours
|
||||
|
||||
{{% docs/reference %}}
|
||||
[labels-and-label-matchers]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/annotation-label/labels-and-label-matchers"
|
||||
[labels-and-label-matchers]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/annotation-label/labels-and-label-matchers"
|
||||
[labels-and-label-matchers]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules/annotation-label#how-label-matching-works"
|
||||
[labels-and-label-matchers]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules/annotation-label#how-label-matching-works"
|
||||
{{% /docs/reference %}}
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
aliases:
|
||||
- alerting/alerting-rules/declare-incident-from-alert/
|
||||
- ../../alerting/alerting-rules/declare-incident-from-alert/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/declare-incident-from-alert/
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/manage-notifications/declare-incident-from-alert/
|
||||
description: Declare an incident from a firing alert
|
||||
keywords:
|
||||
|
@ -1,10 +1,9 @@
|
||||
---
|
||||
aliases:
|
||||
- -docs/grafana/latest/alerting/manage-notifications/view-alert-groups/
|
||||
- ../alert-groups/
|
||||
- ../alert-groups/filter-alerts/
|
||||
- ../alert-groups/view-alert-grouping/
|
||||
- ../unified-alerting/alert-groups/
|
||||
- ../../alerting/alert-groups/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alert-groups/
|
||||
- ../../alerting/alert-groups/filter-alerts/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alert-groups/filter-alerts/
|
||||
- ../../alerting/alert-groups/view-alert-grouping/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alert-groups/view-alert-grouping/
|
||||
- ../../alerting/unified-alerting/alert-groups/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/alert-groups/
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/manage-notifications/view-alert-groups/
|
||||
description: Alert groups
|
||||
keywords:
|
||||
|
@ -1,8 +1,8 @@
|
||||
---
|
||||
aliases:
|
||||
- ../unified-alerting/alerting-rules/rule-list/
|
||||
- ../view-alert-rules/
|
||||
- rule-list/
|
||||
- ../../alerting/unified-alerting/alerting-rules/rule-list/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/alerting-rules/rule-list
|
||||
- ../../alerting/alerting-rules/view-alert-rules/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/view-alert-rules
|
||||
- ../../alerting/alerting-rules/rule-list/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/rule-list
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/manage-notifications/view-alert-rules/
|
||||
description: View and filter by alert rules
|
||||
keywords:
|
||||
|
@ -1,8 +1,6 @@
|
||||
---
|
||||
aliases:
|
||||
- ../fundamentals/state-and-health/
|
||||
- ../unified-alerting/alerting-rules/state-and-health/
|
||||
- ../view-state-health/
|
||||
- ../../alerting/alerting-rules/view-state-health/ # /docs/grafana/<GRAFANA_VERSION>/alerting/alerting-rules/view-state-health
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/manage-notifications/view-state-health/
|
||||
description: View the state and health of alert rules
|
||||
keywords:
|
||||
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
aliases:
|
||||
- unified-alerting/set-up/
|
||||
- unified-alerting/set-up/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/set-up/
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/set-up/
|
||||
description: Set up or upgrade your implementation of Grafana Alerting
|
||||
labels:
|
||||
@ -66,8 +66,8 @@ The following topics provide you with advanced configuration options for Grafana
|
||||
[configure-high-availability]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/configure-high-availability"
|
||||
[configure-high-availability]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/set-up/configure-high-availability"
|
||||
|
||||
[data-source-alerting]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/data-source-alerting"
|
||||
[data-source-alerting]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/data-source-alerting"
|
||||
[data-source-alerting]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/alert-rules#supported-data-sources"
|
||||
[data-source-alerting]: "/docs/grafana-cloud/ -> /docs/grafana-cloud/alerting-and-irm/alerting/fundamentals/alert-rules#supported-data-sources"
|
||||
|
||||
[data-source-management]: "/docs/ -> /docs/grafana/<GRAFANA_VERSION>/administration/data-source-management"
|
||||
|
||||
|
@ -10,11 +10,11 @@ keywords:
|
||||
labels:
|
||||
products:
|
||||
- oss
|
||||
title: Configure Alert State History
|
||||
weight: 600
|
||||
title: Configure alert state history
|
||||
weight: 250
|
||||
---
|
||||
|
||||
# Configure Alert State History
|
||||
# Configure alert state history
|
||||
|
||||
Starting with Grafana 10, Alerting can record all alert rule state changes for your Grafana managed alert rules in a Loki instance.
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
aliases:
|
||||
- ../configure-alertmanager/
|
||||
- ../configure-alertmanager/ # /docs/grafana/<GRAFANA_VERSION>/configure-alertmanager/
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/set-up/configure-alertmanager/
|
||||
description: Configure an Alertmanager to receive all of your alerts
|
||||
keywords:
|
||||
|
@ -1,9 +1,11 @@
|
||||
---
|
||||
aliases:
|
||||
- ../high-availability/enable-alerting-ha/
|
||||
- ../unified-alerting/high-availability/
|
||||
- ../unified-alerting/high-availability/ # /docs/grafana/<GRAFANA_VERSION>/alerting/unified-alerting/high-availability
|
||||
- ../high-availability/enable-alerting-ha/ # /docs/grafana/<GRAFANA_VERSION>/alerting/high-availability/enable-alerting-ha/
|
||||
- ../high-availability/ # /docs/grafana/<GRAFANA_VERSION>/alerting/high-availability
|
||||
- ../fundamentals/high-availability/ # /docs/grafana/<GRAFANA_VERSION>/alerting/fundamentals/high-availability
|
||||
canonical: https://grafana.com/docs/grafana/latest/alerting/set-up/configure-high-availability/
|
||||
description: Enable alerting high availability
|
||||
description: Configure High Availability
|
||||
keywords:
|
||||
- grafana
|
||||
- alerting
|
||||
@ -14,14 +16,23 @@ labels:
|
||||
products:
|
||||
- enterprise
|
||||
- oss
|
||||
title: Enable alerting high availability
|
||||
weight: 400
|
||||
title: Configure high availability
|
||||
weight: 600
|
||||
---
|
||||
|
||||
# Enable alerting high availability
|
||||
# Configure high availability
|
||||
|
||||
You can enable alerting high availability support by updating the Grafana configuration file. If you run Grafana in a Kubernetes cluster, additional steps are required. Both options are described below.
|
||||
Please note that the deduplication is done for the notification, but the alert will still be evaluated on every Grafana instance. This means that events in alerting state history will be duplicated by the number of Grafana instances running.
|
||||
Grafana Alerting uses the Prometheus model of separating the evaluation of alert rules from the delivering of notifications. In this model, the evaluation of alert rules is done in the alert generator and the delivering of notifications is done in the alert receiver. In Grafana Alerting, the alert generator is the Scheduler and the receiver is the Alertmanager.
|
||||
|
||||
{{< figure src="/static/img/docs/alerting/unified/high-availability-ua.png" class="docs-image--no-shadow" max-width= "750px" caption="High availability" >}}
|
||||
|
||||
When running multiple instances of Grafana, all alert rules are evaluated on all instances. You can think of the evaluation of alert rules as being duplicated by the number of running Grafana instances. This is how Grafana Alerting makes sure that as long as at least one Grafana instance is working, alert rules will still be evaluated and notifications for alerts will still be sent.
|
||||
|
||||
You can see this duplication in state history, and it is a good way to confirm whether high availability is working.
|
||||
|
||||
While the alert generator evaluates all alert rules on all instances, the alert receiver makes a best-effort attempt to avoid sending duplicate notifications. Alertmanager chooses availability over consistency, which may result in occasional duplicated or out-of-order notifications. It takes the opinion that duplicate or out-of-order notifications are better than no notifications.
|
||||
|
||||
The Alertmanager uses a gossip protocol to share information about notifications between Grafana instances. It also gossips silences, which means a silence created on one Grafana instance is replicated to all other Grafana instances. Both notifications and silences are persisted to the database periodically, and during graceful shut down.
|
||||
|
||||
{{% admonition type="note" %}}
|
||||
|
||||
@ -30,31 +41,31 @@ This is because the HA settings (`ha_peers`, etc), only apply to the alert notif
|
||||
|
||||
{{% /admonition %}}
|
||||
|
||||
## Enable alerting high availability in Grafana using Memberlist
|
||||
## Enable alerting high availability using Memberlist
|
||||
|
||||
### Before you begin
|
||||
**Before you begin**
|
||||
|
||||
Since gossiping of notifications and silences uses both TCP and UDP port `9094`, ensure that each Grafana instance is able to accept incoming connections on these ports.
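
For example, on a host whose firewall is managed with `ufw`, opening the gossip port could look like the sketch below. The choice of `ufw` is an assumption for illustration only; the port `9094` is the default described above, so adapt the commands to your own firewall tooling.

```bash
# Allow incoming gossip traffic on the default Alertmanager clustering port (9094).
# ufw is only an example; use the equivalent rules for your firewall of choice.
sudo ufw allow 9094/tcp
sudo ufw allow 9094/udp
```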

**To enable high availability support:**

1. In your custom configuration file ($WORKING_DIR/conf/custom.ini), go to the `[unified_alerting]` section.
2. Set `[ha_peers]` to the number of hosts for each Grafana instance in the cluster (using a format of host:port), for example, `ha_peers=10.0.0.5:9094,10.0.0.6:9094,10.0.0.7:9094`.
1. Set `[ha_peers]` to the number of hosts for each Grafana instance in the cluster (using a format of host:port), for example, `ha_peers=10.0.0.5:9094,10.0.0.6:9094,10.0.0.7:9094`.
   You must have at least one (1) Grafana instance added to the `ha_peers` section.
3. Set `[ha_listen_address]` to the instance IP address using a format of `host:port` (or the [Pod's](https://kubernetes.io/docs/concepts/workloads/pods/) IP in the case of using Kubernetes).
1. Set `[ha_listen_address]` to the instance IP address using a format of `host:port` (or the [Pod's](https://kubernetes.io/docs/concepts/workloads/pods/) IP in the case of using Kubernetes).
   By default, it is set to listen to all interfaces (`0.0.0.0`).
4. Set `[ha_peer_timeout]` in the `[unified_alerting]` section of the custom.ini to specify the time to wait for an instance to send a notification via the Alertmanager. The default value is 15s, but it may increase if Grafana servers are located in different geographic regions or if the network latency between them is high.
1. Set `[ha_peer_timeout]` in the `[unified_alerting]` section of the custom.ini to specify the time to wait for an instance to send a notification via the Alertmanager. The default value is 15s, but it may increase if Grafana servers are located in different geographic regions or if the network latency between them is high.
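
Putting those settings together, a minimal `[unified_alerting]` section for a three-instance Memberlist cluster might look like the following sketch. The IP addresses reuse the example from the steps above; substitute your own hosts.

```bash
[unified_alerting]
enabled = true
# Peers in the cluster, including this instance (host:port).
ha_peers = 10.0.0.5:9094,10.0.0.6:9094,10.0.0.7:9094
# Address this instance listens on for gossip traffic (default: all interfaces).
ha_listen_address = "0.0.0.0:9094"
# How long to wait for a peer before giving up (default 15s).
ha_peer_timeout = 15s
```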

## Enable alerting high availability in Grafana using Redis
## Enable alerting high availability using Redis

As an alternative to Memberlist, you can use Redis for high availability. This is useful if you want to have a central
database for HA and cannot support the meshing of all Grafana servers.

1. Make sure you have a redis server that supports pub/sub. If you use a proxy in front of your redis cluster, make sure the proxy supports pub/sub.
2. In your custom configuration file ($WORKING_DIR/conf/custom.ini), go to the [unified_alerting] section.
3. Set `ha_redis_address` to the redis server address Grafana should connect to.
4. [Optional] Set the username and password if authentication is enabled on the redis server using `ha_redis_username` and `ha_redis_password`.
5. [Optional] Set `ha_redis_prefix` to something unique if you plan to share the redis server with multiple Grafana instances.
1. In your custom configuration file ($WORKING_DIR/conf/custom.ini), go to the [unified_alerting] section.
1. Set `ha_redis_address` to the redis server address Grafana should connect to.
1. [Optional] Set the username and password if authentication is enabled on the redis server using `ha_redis_username` and `ha_redis_password`.
1. [Optional] Set `ha_redis_prefix` to something unique if you plan to share the redis server with multiple Grafana instances.
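
As a rough sketch, the Redis variant of the `[unified_alerting]` section could look like the following. The address, credentials, and prefix values are placeholders, not defaults; replace them with the details of your own Redis server.

```bash
[unified_alerting]
enabled = true
# Redis server Grafana should connect to for HA coordination (placeholder address).
ha_redis_address = "redis:6379"
# Optional: only needed if authentication is enabled on the Redis server.
ha_redis_username = "grafana"
ha_redis_password = "example-password"
# Optional: set a unique prefix when sharing the Redis server with multiple Grafana instances.
ha_redis_prefix = "grafana-prod"
```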

The following metrics can be used for meta monitoring, exposed by Grafana's `/metrics` endpoint:
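
A quick way to eyeball these metrics is to scrape the endpoint directly, as in the sketch below. The URL assumes a local Grafana instance on the default port, and the `grep` pattern is illustrative only; filter for the specific metric names you care about, and adjust host, port, and authentication to your setup.

```bash
# Fetch Grafana's Prometheus metrics and filter for Alertmanager clustering metrics.
curl -s http://localhost:3000/metrics | grep -i "alertmanager"
```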

@ -72,67 +83,67 @@ The following metrics can be used for meta monitoring, exposed by Grafana's `/me

## Enable alerting high availability using Kubernetes

If you are using Kubernetes, you can expose the pod IP [through an environment variable](https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/) via the container definition.
1. You can expose the pod IP [through an environment variable](https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/) via the container definition.

```yaml
env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
```
```yaml
env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
```

1. Add the port 9094 to the Grafana deployment:

```yaml
ports:
  - containerPort: 3000
    name: http-grafana
    protocol: TCP
  - containerPort: 9094
    name: grafana-alert
    protocol: TCP
```
```yaml
ports:
  - containerPort: 3000
    name: http-grafana
    protocol: TCP
  - containerPort: 9094
    name: grafana-alert
    protocol: TCP
```

2. Add the environment variables to the Grafana deployment:
1. Add the environment variables to the Grafana deployment:

```yaml
env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
```
```yaml
env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
```

3. Create a headless service that returns the pod IP instead of the service IP, which is what the `ha_peers` need:
1. Create a headless service that returns the pod IP instead of the service IP, which is what the `ha_peers` need:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana-alerting
  namespace: grafana
  labels:
    app.kubernetes.io/name: grafana-alerting
    app.kubernetes.io/part-of: grafana
spec:
  type: ClusterIP
  clusterIP: 'None'
  ports:
    - port: 9094
  selector:
    app: grafana
```
```yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana-alerting
  namespace: grafana
  labels:
    app.kubernetes.io/name: grafana-alerting
    app.kubernetes.io/part-of: grafana
spec:
  type: ClusterIP
  clusterIP: 'None'
  ports:
    - port: 9094
  selector:
    app: grafana
```

4. Make sure your grafana deployment has the label matching the selector, e.g. `app:grafana`:
1. Make sure your grafana deployment has the label matching the selector, e.g. `app:grafana`:
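
   One way to check (or add) that label is sketched below. It assumes the deployment is named `grafana` and lives in the `grafana` namespace used by the headless service above; adjust the names to match your cluster.

```bash
# Show the labels currently set on the Grafana deployment (names are assumptions).
kubectl get deployment grafana -n grafana --show-labels

# Add the label expected by the headless service selector if it is missing.
kubectl label deployment grafana -n grafana app=grafana
```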

5. Add in the grafana.ini:
1. Add in the grafana.ini:

```bash
[unified_alerting]
enabled = true
ha_listen_address = "${POD_IP}:9094"
ha_peers = "grafana-alerting.grafana:9094"
ha_advertise_address = "${POD_IP}:9094"
ha_peer_timeout = 15s
```
```bash
[unified_alerting]
enabled = true
ha_listen_address = "${POD_IP}:9094"
ha_peers = "grafana-alerting.grafana:9094"
ha_advertise_address = "${POD_IP}:9094"
ha_peer_timeout = 15s
```

@ -1,7 +1,7 @@
---
aliases:
- alerting-limitations/
- alerting/performance-limitations/
- ./alerting-limitations/ # /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/alerting-limitations/
- ../../alerting/performance-limitations/ # /docs/grafana/<GRAFANA_VERSION>/alerting/performance-limitations/
canonical: https://grafana.com/docs/grafana/latest/alerting/set-up/performance-limitations/
description: Learn about performance considerations and limitations
keywords:

@ -1,6 +1,4 @@
---
aliases:
- ../provision-alerting-resources/
canonical: https://grafana.com/docs/grafana/latest/alerting/set-up/provision-alerting-resources/
description: Provision alerting resources
keywords:

@ -1,7 +1,6 @@
---
aliases:
- ../../provision-alerting-resources/view-provisioned-resources/
- ./view-provisioned-resources/
- ../../provision-alerting-resources/view-provisioned-resources/ # /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/view-provisioned-resources/
canonical: https://grafana.com/docs/grafana/latest/alerting/set-up/provision-alerting-resources/export-alerting-resources/
description: Export alerting resources in Grafana
keywords:
@ -189,26 +188,26 @@ These endpoints accept a `download` parameter to download a file containing the

[alerting_file_provisioning_template]: "/docs/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/file-provisioning#import-templates"

[export_rule]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning/#span-idroute-get-alert-rule-exportspan-export-an-alert-rule-in-provisioning-file-format-_routegetalertruleexport_"
[export_rule]: "/docs/grafana-cloud/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning/#span-idroute-get-alert-rule-exportspan-export-an-alert-rule-in-provisioning-file-format-_routegetalertruleexport_"
[export_rule]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning#span-idroute-get-alert-rule-exportspan-export-an-alert-rule-in-provisioning-file-format-_routegetalertruleexport_"
[export_rule]: "/docs/grafana-cloud/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning#span-idroute-get-alert-rule-exportspan-export-an-alert-rule-in-provisioning-file-format-_routegetalertruleexport_"

[export_rule_group]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning/#span-idroute-get-alert-rule-group-exportspan-export-an-alert-rule-group-in-provisioning-file-format-_routegetalertrulegroupexport_"
[export_rule_group]: "/docs/grafana-cloud/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning/#span-idroute-get-alert-rule-group-exportspan-export-an-alert-rule-group-in-provisioning-file-format-_routegetalertrulegroupexport_"
[export_rule_group]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning#span-idroute-get-alert-rule-group-exportspan-export-an-alert-rule-group-in-provisioning-file-format-_routegetalertrulegroupexport_"
[export_rule_group]: "/docs/grafana-cloud/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning#span-idroute-get-alert-rule-group-exportspan-export-an-alert-rule-group-in-provisioning-file-format-_routegetalertrulegroupexport_"

[export_rules]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning/#span-idroute-get-alert-rules-exportspan-export-all-alert-rules-in-provisioning-file-format-_routegetalertrulesexport_"
[export_rules]: "/docs/grafana-cloud/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning/#span-idroute-get-alert-rules-exportspan-export-all-alert-rules-in-provisioning-file-format-_routegetalertrulesexport_"
[export_rules]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning#span-idroute-get-alert-rules-exportspan-export-all-alert-rules-in-provisioning-file-format-_routegetalertrulesexport_"
[export_rules]: "/docs/grafana-cloud/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning#span-idroute-get-alert-rules-exportspan-export-all-alert-rules-in-provisioning-file-format-_routegetalertrulesexport_"

[export_contacts]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning/#span-idroute-get-contactpoints-exportspan-export-all-contact-points-in-provisioning-file-format-_routegetcontactpointsexport_"
[export_contacts]: "/docs/grafana-cloud/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning/#span-idroute-get-contactpoints-exportspan-export-all-contact-points-in-provisioning-file-format-_routegetcontactpointsexport_"
[export_contacts]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning#span-idroute-get-contactpoints-exportspan-export-all-contact-points-in-provisioning-file-format-_routegetcontactpointsexport_"
[export_contacts]: "/docs/grafana-cloud/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning#span-idroute-get-contactpoints-exportspan-export-all-contact-points-in-provisioning-file-format-_routegetcontactpointsexport_"

[export_mute_timing]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning/#span-idroute-get-mute-timing-exportspan-export-a-mute-timing-in-provisioning-file-format-_routegetmutetimingexport_"
[export_mute_timing]: "/docs/grafana-cloud/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning/#span-idroute-get-mute-timing-exportspan-export-a-mute-timing-in-provisioning-file-format-_routegetmutetimingexport_"
[export_mute_timing]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning#span-idroute-get-mute-timing-exportspan-export-a-mute-timing-in-provisioning-file-format-_routegetmutetimingexport_"
[export_mute_timing]: "/docs/grafana-cloud/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning#span-idroute-get-mute-timing-exportspan-export-a-mute-timing-in-provisioning-file-format-_routegetmutetimingexport_"

[export_mute_timings]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning/#span-idroute-get-mute-timings-exportspan-export-all-mute-timings-in-provisioning-file-format-_routegetmutetimingsexport_"
[export_mute_timings]: "/docs/grafana-cloud/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning/#span-idroute-get-mute-timings-exportspan-export-all-mute-timings-in-provisioning-file-format-_routegetmutetimingsexport_"
[export_mute_timings]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning#span-idroute-get-mute-timings-exportspan-export-all-mute-timings-in-provisioning-file-format-_routegetmutetimingsexport_"
[export_mute_timings]: "/docs/grafana-cloud/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning#span-idroute-get-mute-timings-exportspan-export-all-mute-timings-in-provisioning-file-format-_routegetmutetimingsexport_"

[export_notifications]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning/#span-idroute-get-policy-tree-exportspan-export-the-notification-policy-tree-in-provisioning-file-format-_routegetpolicytreeexport_"
[export_notifications]: "/docs/grafana-cloud/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning/#span-idroute-get-policy-tree-exportspan-export-the-notification-policy-tree-in-provisioning-file-format-_routegetpolicytreeexport_"
[export_notifications]: "/docs/grafana/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning#span-idroute-get-policy-tree-exportspan-export-the-notification-policy-tree-in-provisioning-file-format-_routegetpolicytreeexport_"
[export_notifications]: "/docs/grafana-cloud/ -> /docs/grafana/<GRAFANA_VERSION>/alerting/set-up/provision-alerting-resources/http-api-provisioning#span-idroute-get-policy-tree-exportspan-export-the-notification-policy-tree-in-provisioning-file-format-_routegetpolicytreeexport_"
{{% /docs/reference %}}

<!-- prettier-ignore-end -->

@ -1,6 +1,4 @@
---
aliases:
- ../../provision-alerting-resources/file-provisioning/
canonical: https://grafana.com/docs/grafana/latest/alerting/set-up/provision-alerting-resources/file-provisioning/
description: Create and manage resources using file provisioning
keywords:

@ -1,6 +1,4 @@
---
aliases:
- ../../provision-alerting-resources/terraform-provisioning/
canonical: https://grafana.com/docs/grafana/latest/alerting/set-up/provision-alerting-resources/terraform-provisioning/
description: Create and manage alerting resources using Terraform
keywords:

@ -41,11 +41,9 @@ Once you have a Postgres or MySQL database available, you can configure your mul

## Alerting high availability

Grafana Alerting provides a [high availability mode]({{< relref "../alerting/fundamentals/high-availability" >}}).
Grafana Alerting provides a high availability mode. It preserves the semantics of legacy dashboard alerting by executing all alerts on every server and by sending notifications only once per alert. Load distribution between servers is not supported at this time.

It preserves the semantics of legacy dashboard alerting by executing all alerts on every server and by sending notifications only once per alert. Load distribution between servers is not supported at this time.

For instructions on setting up alerting high availability, refer to [Enable alerting high availability]({{< relref "../alerting/set-up/configure-high-availability" >}}).
For further information and instructions on setting up alerting high availability, refer to [Enable alerting high availability]({{< relref "../alerting/set-up/configure-high-availability" >}}).

**Legacy dashboard alerts**