From 998093a280e43c43ce9988ef058170923f93022c Mon Sep 17 00:00:00 2001
From: brendamuir <100768211+brendamuir@users.noreply.github.com>
Date: Thu, 27 Oct 2022 13:21:02 +0100
Subject: [PATCH] Docs: more refactoring for alerting (#57741)

* docs: more refactoring alerting

* performance limitations

* typo

* fix relrefs
---
 docs/sources/alerting/alerting-limitations.md | 28 -------------
 .../sources/alerting/alerting-rules/_index.md |  6 +--
 .../create-grafana-managed-rule.md            |  2 +-
 ...reate-mimir-loki-managed-recording-rule.md |  2 +-
 .../view-alert-rules.md}                      |  3 +-
 .../view-state-health.md}                     |  2 +-
 .../fundamentals/evaluate-grafana-alerts.md   |  2 +-
 .../alerting/manage-notifications/_index.md   |  4 +-
 .../migrating-legacy-alerts.md                |  2 +-
 .../index.md}                                 | 41 +++++++++++++++----
 docs/sources/developers/http_api/admin.md     |  2 +-
 11 files changed, 46 insertions(+), 48 deletions(-)
 delete mode 100644 docs/sources/alerting/alerting-limitations.md
 rename docs/sources/alerting/{view-alert-rules/index.md => alerting-rules/view-alert-rules.md} (97%)
 rename docs/sources/alerting/{view-state-health/index.md => alerting-rules/view-state-health.md} (99%)
 rename docs/sources/alerting/{performance.md => performance-limitations/index.md} (68%)

diff --git a/docs/sources/alerting/alerting-limitations.md b/docs/sources/alerting/alerting-limitations.md
deleted file mode 100644
index b0f36ea5548..00000000000
--- a/docs/sources/alerting/alerting-limitations.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-aliases:
-  - /docs/grafana/latest/alerting/alerting-limitations/
-title: Limitations
-weight: 552
----
-
-# Limitations
-
-## Limited rule sources support
-
-Grafana Alerting can retrieve alerting and recording rules **stored** in most available Prometheus, Loki, Mimir, and Alertmanager compatible data sources.
-
-It does not support reading or writing alerting rules from any other data sources but the ones previously mentioned at this time.
-
-## Prometheus version support
-
-We support the latest two minor versions of both Prometheus and Alertmanager. We cannot guarantee that older versions will work.
-
-As an example, if the current Prometheus version is `2.31.1`, we support >= `2.29.0`.
-
-## Grafana is not an alert receiver
-
-Grafana is not an alert receiver; it is an alert generator. This means that Grafana cannot receive alerts from anything other than its internal alert generator.
-
-Receiving alerts from Prometheus (or anything else) is not supported at the time.
-
-For more information, refer to [this GitHub discussion](https://github.com/grafana/grafana/discussions/45773).
diff --git a/docs/sources/alerting/alerting-rules/_index.md b/docs/sources/alerting/alerting-rules/_index.md
index 4aedb845961..cdc4f8251dc 100644
--- a/docs/sources/alerting/alerting-rules/_index.md
+++ b/docs/sources/alerting/alerting-rules/_index.md
@@ -4,11 +4,11 @@ aliases:
   - /docs/grafana/latest/alerting/old-alerting/create-alerts/
   - /docs/grafana/latest/alerting/rules/
   - /docs/grafana/latest/alerting/unified-alerting/alerting-rules/
-title: Create alert rules
+title: Manage your alert rules
 weight: 130
 ---
 
-# Create alert rules
+# Manage your alert rules
 
 An alert rule is a set of evaluation criteria that determines whether an alert will fire. The alert rule consists of one or more queries and expressions, a condition, the frequency of evaluation, and optionally, the duration over which the condition is met.
 
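As a hedged illustration of that anatomy, a Prometheus-style alerting rule (the format used by the Mimir- and Loki-managed rules this section covers) maps those parts onto fields roughly as in the sketch below; the metric name `node_cpu_usage_percent` and the threshold are placeholders, not part of this change:

```yaml
groups:
  - name: example-rules
    interval: 1m                    # frequency of evaluation
    rules:
      - alert: HighCPU
        # query + condition: the rule fires for every series this returns
        expr: avg by (instance) (node_cpu_usage_percent) > 90
        for: 5m                     # duration over which the condition must be met
        labels:
          severity: warning
        annotations:
          summary: "CPU usage above 90% on {{ $labels.instance }}"
```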
@@ -20,8 +20,6 @@ You can:
 - [Create Grafana Mimir or Loki managed recording rules]({{< relref "create-mimir-loki-managed-recording-rule/" >}})
 - [Edit Grafana Mimir or Loki rule groups and namespaces]({{< relref "edit-mimir-loki-namespace-group/" >}})
 - [Create Grafana managed alert rules]({{< relref "create-grafana-managed-rule/" >}})
-- [View the state and health of alert rules]({{< relref "../view-state-health/" >}})
-- [View and filter alert rules]({{< relref "../view-alert-rules/" >}})
 
 **Note:** Grafana managed alert rules can only be edited or deleted by users with Edit permissions for the folder storing the rules.
 
diff --git a/docs/sources/alerting/alerting-rules/create-grafana-managed-rule.md b/docs/sources/alerting/alerting-rules/create-grafana-managed-rule.md
index edb8869515a..819c10745b9 100644
--- a/docs/sources/alerting/alerting-rules/create-grafana-managed-rule.md
+++ b/docs/sources/alerting/alerting-rules/create-grafana-managed-rule.md
@@ -44,7 +44,7 @@ Watch this video to learn more about creating alerts: {{< vimeo 720001934 >}}
    - Add Runbook URL, panel, dashboard, and alert IDs.
    - Add custom labels.
 1. Click **Save** to save the rule or **Save and exit** to save the rule and go back to the Alerting page.
-1. Next, create a [notification]({{< relref "../notifications/" >}}) for the rule.
+1. Next, create a notification for the rule.
 
 ### Single and multi dimensional rule
 
diff --git a/docs/sources/alerting/alerting-rules/create-mimir-loki-managed-recording-rule.md b/docs/sources/alerting/alerting-rules/create-mimir-loki-managed-recording-rule.md
index 33ed51bba80..43a6fe19584 100644
--- a/docs/sources/alerting/alerting-rules/create-mimir-loki-managed-recording-rule.md
+++ b/docs/sources/alerting/alerting-rules/create-mimir-loki-managed-recording-rule.md
@@ -48,7 +48,7 @@ To create a Grafana Mimir or Loki managed recording rule
    - Add Runbook URL, panel, dashboard, and alert IDs.
    - Add custom labels.
 1. Click **Save** to save the rule or **Save and exit** to save the rule and go back to the Alerting page.
-1. Next, create a [notification]({{< relref "../notifications/" >}}) for the rule.
+1. Next, create a notification for the rule.
 1. In the Grafana menu, click the **Alerting** (bell) icon to open the Alerting page listing existing alerts.
 1. Click **New alert rule**.
 
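For reference, the recording rule that the procedure above creates corresponds, in Prometheus/Mimir rule-file terms, to something like the following sketch; the rule and metric names are invented for illustration:

```yaml
groups:
  - name: recording-rules
    interval: 1m
    rules:
      # Precompute an expensive query and store the result as a new
      # series that alert rules and dashboards can query cheaply.
      - record: instance:request_errors:rate5m
        expr: sum by (instance) (rate(http_requests_total{status=~"5.."}[5m]))
```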
diff --git a/docs/sources/alerting/view-alert-rules/index.md b/docs/sources/alerting/alerting-rules/view-alert-rules.md
similarity index 97%
rename from docs/sources/alerting/view-alert-rules/index.md
rename to docs/sources/alerting/alerting-rules/view-alert-rules.md
index f9136ed16e7..4b8d7fb35df 100644
--- a/docs/sources/alerting/view-alert-rules/index.md
+++ b/docs/sources/alerting/alerting-rules/view-alert-rules.md
@@ -3,6 +3,7 @@ aliases:
   - /docs/grafana/latest/alerting/alerting-rules/rule-list/
   - /docs/grafana/latest/alerting/unified-alerting/alerting-rules/rule-list/
   - /docs/grafana/latest/alerting/view-alert-rules/
+  - /docs/grafana/latest/alerting/alerting-rules/view-alert-rules/
 description: Manage alerting rules
 keywords:
   - grafana
@@ -11,7 +12,7 @@ keywords:
   - rules
   - view
 title: View and filter alert rules
-weight: 140
+weight: 410
 ---
 
 # View and filter alert rules
diff --git a/docs/sources/alerting/view-state-health/index.md b/docs/sources/alerting/alerting-rules/view-state-health.md
similarity index 99%
rename from docs/sources/alerting/view-state-health/index.md
rename to docs/sources/alerting/alerting-rules/view-state-health.md
index 24ad70f11d0..dd74bf0213a 100644
--- a/docs/sources/alerting/view-state-health/index.md
+++ b/docs/sources/alerting/alerting-rules/view-state-health.md
@@ -11,7 +11,7 @@ keywords:
   - state
   - health
 title: View the state and health of alert rules
-weight: 150
+weight: 420
 ---
 
 # View the state and health of alert rules
diff --git a/docs/sources/alerting/fundamentals/evaluate-grafana-alerts.md b/docs/sources/alerting/fundamentals/evaluate-grafana-alerts.md
index 90c8ff4830f..1770c40008d 100644
--- a/docs/sources/alerting/fundamentals/evaluate-grafana-alerts.md
+++ b/docs/sources/alerting/fundamentals/evaluate-grafana-alerts.md
@@ -28,7 +28,7 @@ Grafana managed alerts query the following backend data sources that have alerti
 
 ### Metrics from the alerting engine
 
-The alerting engine publishes some internal metrics about itself. You can read more about how Grafana publishes [internal metrics]({{< relref "../../setup-grafana/set-up-grafana-monitoring/" >}}). See also, [View alert rules and their current state]({{< relref "../view-state-health/" >}}).
+The alerting engine publishes some internal metrics about itself. You can read more about how Grafana publishes [internal metrics]({{< relref "../../setup-grafana/set-up-grafana-monitoring/" >}}).
 
 | Metric Name                                        | Type      | Description                                                                                |
 | ------------------------------------------------- | --------- | ------------------------------------------------------------------------------------------ |
diff --git a/docs/sources/alerting/manage-notifications/_index.md b/docs/sources/alerting/manage-notifications/_index.md
index a6d3c3144fe..32a9d9a3d42 100644
--- a/docs/sources/alerting/manage-notifications/_index.md
+++ b/docs/sources/alerting/manage-notifications/_index.md
@@ -6,11 +6,11 @@ keywords:
   - grafana
   - alert
   - notifications
-title: Manage alert notifications
+title: Manage your alert notifications
 weight: 160
 ---
 
-# Manage alert notifications
+# Manage your alert notifications
 
 Choosing how, when, and where to send your alert notifications is an important part of setting up your alerting system. These decisions will have a direct impact on your ability to resolve issues quickly and not miss anything important.
 
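As one sketch of those how, when, and where decisions, an Alertmanager-style notification policy (Grafana Alerting embeds an Alertmanager) could look like the following; the receiver names and webhook URLs are placeholders:

```yaml
route:
  receiver: team-notifications      # where: the default contact point
  group_by: ['alertname']           # how: bundle related alerts into one notification
  group_wait: 30s                   # when: delay before the first notification of a group
  repeat_interval: 4h               # when: how often to re-notify while still firing
  routes:
    - matchers:
        - severity="critical"       # how: critical alerts take a different path
      receiver: oncall-webhook
receivers:
  - name: team-notifications
    webhook_configs:
      - url: https://example.com/notify/team
  - name: oncall-webhook
    webhook_configs:
      - url: https://example.com/notify/oncall
```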
diff --git a/docs/sources/alerting/migrating-alerts/migrating-legacy-alerts.md b/docs/sources/alerting/migrating-alerts/migrating-legacy-alerts.md
index e7b6fbbd54a..ed52c0b6ad8 100644
--- a/docs/sources/alerting/migrating-alerts/migrating-legacy-alerts.md
+++ b/docs/sources/alerting/migrating-alerts/migrating-legacy-alerts.md
@@ -25,7 +25,7 @@ longer supported. We refer to these as [Differences]({{< relref "#differences" >
    - If there are no dashboard permissions and the dashboard is under a folder, then the rule is linked to this folder and inherits its permissions.
    - If there are no dashboard permissions and the dashboard is under the General folder, then the rule is linked to the `General Alerting` folder, and the rule inherits the default permissions.
 
-3. Since there is no `Keep Last State` option for [`No Data`]({{< relref "../alerting-rules/create-grafana-managed-rule/#no-data--error-handling" >}}) in Grafana Alerting, this option becomes `NoData` during the legacy rules migration. Option "Keep Last State" for [`Error handling`]({{< relref "../alerting-rules/create-grafana-managed-rule/#no-data--error-handling" >}}) is migrated to a new option `Error`. To match the behavior of the `Keep Last State`, in both cases, during the migration Grafana automatically creates a [silence]({{< relref "../silences/" >}}) for each alert rule with a duration of 1 year.
+3. Since there is no `Keep Last State` option for [`No Data`]({{< relref "../alerting-rules/create-grafana-managed-rule/#no-data--error-handling" >}}) in Grafana Alerting, this option becomes `NoData` during the legacy rules migration. The "Keep Last State" option for [`Error handling`]({{< relref "../alerting-rules/create-grafana-managed-rule/#no-data--error-handling" >}}) is migrated to a new option, `Error`. To match the behavior of `Keep Last State`, in both cases Grafana automatically creates a silence with a duration of 1 year for each alert rule during the migration.
 
 4. Notification channels are migrated to an Alertmanager configuration with the appropriate routes and receivers. Default notification channels are added as contact points to the default route. Notification channels not associated with any Dashboard alert go to the `autogen-unlinked-channel-recv` route.
 
diff --git a/docs/sources/alerting/performance.md b/docs/sources/alerting/performance-limitations/index.md
similarity index 68%
rename from docs/sources/alerting/performance.md
rename to docs/sources/alerting/performance-limitations/index.md
index 3c1f4279034..adb926af43f 100644
--- a/docs/sources/alerting/performance.md
+++ b/docs/sources/alerting/performance-limitations/index.md
@@ -1,11 +1,18 @@
-+++
-title = "Performance considerations"
-description = "Understanding alerting performance"
-keywords = ["grafana", "alerting", "performance"]
-weight = 555
-+++
+---
+aliases:
+  - /docs/grafana/latest/alerting/alerting-limitations/
+  - /docs/grafana/latest/alerting/performance-limitations/
+description: Performance considerations and limitations
+keywords:
+  - grafana
+  - alerting
+  - performance
+  - limitations
+title: Performance considerations and limitations
+weight: 450
+---
 
-# Alerting performance considerations
+# Performance considerations and limitations
 
 Grafana Alerting supports multi-dimensional alerting, where one alert rule can generate many alerts. For example, you can configure an alert rule to fire an alert every time the CPU of individual VMs max out. This topic discusses performance considerations resulting from multi-dimensional alerting.
 
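To make the multi-dimensional point concrete: a single rule whose query preserves a label yields one alert instance per label value, so the sketch below (with an invented metric) fires separately for every VM whose CPU maxes out:

```yaml
groups:
  - name: multi-dimensional-example
    rules:
      # One rule, many alert instances: the query keeps the `instance`
      # label, so each VM crossing the threshold becomes its own alert.
      - alert: VMCPUMaxedOut
        expr: avg by (instance) (vm_cpu_busy_percent) > 95
        for: 10m
```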
@@ -22,3 +29,23 @@ Each evaluation of an alert rule generates a set of alert instances; one for eac
 Grafana Alerting exposes a metric, `grafana_alerting_rule_evaluations_total` that counts the number of alert rule evaluations. To get a feel for the influence of rule evaluations on your Grafana instance, you can observe the rate of evaluations and compare it with resource consumption. In a Prometheus-compatible database, you can use the query `rate(grafana_alerting_rule_evaluations_total[5m])` to compute the rate over 5 minute windows of time. It's important to remember that this isn't the full picture of rule evaluation. For example, the load will be unevenly distributed if you have some rules that evaluate every 10 seconds, and others every 30 minutes.
 
 These factors all affect the load on the Grafana instance, but you should also be aware of the performance impact that evaluating these rules has on your data sources. Alerting queries are often the vast majority of queries handled by monitoring databases, so the same load factors that affect the Grafana instance affect them as well.
+
+## Limited rule sources support
+
+Grafana Alerting can retrieve alerting and recording rules **stored** in most available Prometheus, Loki, Mimir, and Alertmanager compatible data sources.
+
+At this time, it does not support reading or writing alerting rules from any data sources other than those listed above.
+
+## Prometheus version support
+
+We support the latest two minor versions of both Prometheus and Alertmanager. We cannot guarantee that older versions will work.
+
+As an example, if the current Prometheus version is `2.31.1`, we support >= `2.29.0`.
+
+## Grafana is not an alert receiver
+
+Grafana is not an alert receiver; it is an alert generator. This means that Grafana cannot receive alerts from anything other than its internal alert generator.
+
+Receiving alerts from Prometheus (or anything else) is not supported at this time.
+
+For more information, refer to [this GitHub discussion](https://github.com/grafana/grafana/discussions/45773).
diff --git a/docs/sources/developers/http_api/admin.md b/docs/sources/developers/http_api/admin.md
index f4e5d87b419..16b691b6336 100644
--- a/docs/sources/developers/http_api/admin.md
+++ b/docs/sources/developers/http_api/admin.md
@@ -471,7 +471,7 @@ Content-Type: application/json
 
 `POST /api/admin/pause-all-alerts`
 
-> **Note:** This API is relevant for the [legacy dashboard alerts](https://grafana.com/docs/grafana/v8.5/alerting/old-alerting/) only. For default alerting, use [silences]({{< relref "../../alerting/silences/" >}}) to stop alerts from being delivered.
+> **Note:** This API is relevant for the [legacy dashboard alerts](https://grafana.com/docs/grafana/v8.5/alerting/old-alerting/) only. For default alerting, use silences to stop alerts from being delivered.
 
 Only works with Basic Authentication (username and password). See [introduction](http://docs.grafana.org/http_api/admin/#admin-api) for an explanation.
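Tying back to the scheduler metric discussed in the performance section above, one way to keep an eye on evaluation load is to record its rate in a Prometheus-compatible database; the recording rule name here is illustrative:

```yaml
groups:
  - name: grafana-alerting-self-monitoring
    rules:
      # The 5-minute evaluation rate suggested above; compare this
      # series against CPU and memory consumption over time.
      - record: grafana:alerting_rule_evaluations:rate5m
        expr: rate(grafana_alerting_rule_evaluations_total[5m])
```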