Docs: adds legacy topics (#69898)

* adds legacy topics

* moves legacy and adds deprecation note

* Merge branch 'alerting-docs-support-escalations' of https://github.com/grafana/grafana into alerting-docs-support-escalations

* adds description

* fixes relrefs

* removes relrefs

* removes relrefs

* fixes links

* Adds description frontmatter

* fixing typo

* adds frontmatter

* fix spelling error
brendamuir 2023-06-12 18:23:03 +02:00 committed by GitHub
parent 266751b96d
commit be196a4ad0
8 changed files with 608 additions and 0 deletions


@@ -0,0 +1,95 @@
---
aliases:
- /docs/grafana-cloud/alerts/
- /docs/grafana-cloud/how-do-i/alerts/
- /docs/grafana-cloud/legacy-alerting/
description: Legacy alerting
title: Legacy alerting
weight: 110
---
# Legacy alerting
**Note:**
Starting with Grafana v9.0.0, legacy alerting is deprecated. It is no longer actively maintained or supported by Grafana and will be removed in Grafana v11.0.0.
You have two options to configure alerts within the Grafana Cloud GUI and a third option that enables you to set Grafana Cloud Alerts using the command line.
- **Grafana alerts** are the same as in an on-prem instance of Grafana.
These alerts are created from a graph panel within a Grafana dashboard.
This is useful when you want to create a simple alert based on one metric from within a panel.
It also has a much simpler learning curve when you are getting started.
- **Grafana Cloud alerts - GUI** are an implementation of Prometheus-style rules that enable you to query your Grafana Cloud Metrics and then set up Prometheus Alertmanager-style alerts based on those rules.
This is useful when you want to create precise, PromQL-based rules or create alerts from across many metrics and logs being collected into your Grafana Cloud Metrics.
This form of alerting is much more powerful and configurable, but that comes with some complexity.
- **Grafana Cloud alerts - CLI** use mimirtool to create and upload the same types of Prometheus-style recording and alerting rules definitions to your Grafana Cloud Metrics instance.
Once created, you will also be able to view these rules from within the Grafana Cloud Alerting page in the GUI.
- **Synthetic Monitoring alerts** are built on Prometheus alerts, just like in Grafana Cloud alerting.
You can configure synthetic monitoring alerts separately using the UI in synthetic monitoring.
Another option to create alerts for synthetic monitoring checks is to simply use Grafana Cloud alerting.
## Using Grafana alerts in Grafana Cloud
Grafana alerts are dashboard panel-driven and can only be created using the Graph panel.
This style of alerting builds on top of the query defined for the graph visualization, so alerts and notifications are sent based on breaking some threshold in the associated panel.
This also means that there is a one-to-one relationship between a Grafana alert and a graph panel.
So although Grafana alerts can be viewed centrally, they can only be managed directly from the panel that they're tied to.
As a result, Grafana alerting is best suited for smaller setups, where there are only a few individuals or teams responsible for a small set of dashboards and where there are few dependencies between the dashboards.
{{% admonition type="note" %}}
Most curated dashboards, such as those provided with an integration or with Synthetic Monitoring, do not allow you to alert from panels.
This is to preserve the ability to upgrade these dashboards automatically when the integration or Synthetic Monitoring abilities are updated.
To create an editable copy that you can edit and alert from, click settings (the gear icon) within any dashboard and then click **Make Editable**.
The copy will not be upgraded when/if the curated dashboard receives an update.
This is one reason why Grafana Cloud Alerts may be considered a better option.
{{% /admonition %}}
### What makes Grafana alerts unique?
With Grafana alerts, alerts are limited to only graph panels within dashboards.
In addition:
- Alerts can be edited by both Editor and Admin roles
- Alerts are visual, with an associated alerting threshold line
- Alerts work with many non-Prometheus data sources, including Graphite
- Alert notifications can be routed to many external notifier systems, directly from Grafana
- Alerts are directly associated with a dashboard
- Alerts can be tested
## Using Grafana Cloud Alerts
Because the metrics you collect and send to Grafana Cloud are centrally stored in one large time-series database, Grafana Cloud Metrics, you can query across these metrics using [PromQL](https://prometheus.io/docs/prometheus/latest/querying/basics/) and build alerts directly around those metrics rather than around a panel.
You can also query across any logs you have sent using Loki.
Grafana Cloud Alerts are directly tied to metrics and log data.
They can be configured either through the UI or by uploading files containing Prometheus and Loki alert rules with mimirtool.
Grafana Cloud Alerting's Prometheus-style alerts are built by querying directly from the data source itself.
Because these alerts are based on the data, they are not tied to a single panel.
This makes it possible to evaluate and centrally manage alerts across several different Prometheus and Loki data source instances.
### What makes Grafana Cloud Alerts unique?
With Grafana Cloud Alerts, alerts are not limited to coming from a graph panel.
In addition, you can:
- Prevent alerts from being edited, except by users with accounts that are assigned Admin roles.
- Centrally manage and create alerts across many systems, teams, and dashboards.
Alerts are not bound to just one system, team, or dashboard.
- Create alerts for both metric _and_ log data, based on Prometheus and Loki, respectively.
- Silence and mute alerts in bulk, even using a schedule, using the Alertmanager.
- Route alert notifications to [many external notifier systems](https://prometheus.io/docs/operating/integrations/#alertmanager-webhook-receiver) using Alertmanager configurations.
- Dedupe alert notifications automatically.
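The routing and deduplication described above come from your Alertmanager configuration. As a minimal sketch, the following sends every notification to a single webhook receiver; the receiver name and URL are placeholders, and a fuller, email-based example appears in the Prometheus rules with mimirtool topic.
```yaml
# Minimal sketch: route all notifications to one webhook receiver.
route:
  receiver: team-webhook
  group_by: ['alertname']
receivers:
  - name: team-webhook
    webhook_configs:
      - url: 'https://example.com/alert-hook' # placeholder endpoint
```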
### Grafana Cloud Alert configuration methods
In a traditional on-prem environment, Prometheus-style alert configuration is done through the combination of defining a [Prometheus configuration file](https://prometheus.io/docs/prometheus/latest/configuration/configuration/) and an [Alertmanager configuration file](https://prometheus.io/docs/alerting/latest/configuration/), which live close to the Prometheus server.
With Grafana Cloud, you can still use this setup as well as more flexible architectures.
- You can use `mimirtool` to upload your configuration files to be hosted and evaluated entirely in Grafana Cloud.
- You can manage both alerting rules and Alertmanager configurations directly through the UI.
Configuration files are unnecessary with this setup.
- You can use both methods concurrently to manage the alerts.
For example, updates made using `mimirtool` are automatically synced and visible within the Grafana Cloud Alerting interface in minutes.
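For a first look at the file-based method, the two `mimirtool` uploads look roughly like this; the full walkthrough, including the example `first_rules.yml` and `alertmanager.yml` files, is in the Prometheus rules with mimirtool topic.
```bash
# Upload rule definitions to your hosted Metrics instance:
$ mimirtool rules load first_rules.yml \
  --address=https://prometheus-us-central1.grafana.net \
  --id=<yourID> \
  --key=<yourKey>

# Upload an Alertmanager configuration to your hosted Alerts instance:
$ mimirtool alertmanager load alertmanager.yml \
  --address=https://alertmanager-us-central1.grafana.net \
  --id=<yourID> \
  --key=<yourKey>
```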


@@ -0,0 +1,200 @@
---
aliases:
- /docs/grafana-cloud/alerts/alerts-rules/
- /docs/grafana-cloud/how-do-i/alerts/alerts-rules/
- /docs/grafana-cloud/legacy-alerting/alerts-rules/
- /docs/grafana-cloud/metrics/prometheus/alerts_rules/
- /docs/hosted-metrics/prometheus/alerts_rules/
description: Prometheus rules with mimirtool
title: Prometheus rules with mimirtool
weight: 100
---
# Prometheus rules with mimirtool
This page outlines the steps to use mimirtool and Prometheus-style rules with Grafana Cloud Alerting. You can load Prometheus alerting and recording rules that are evaluated entirely in Grafana Cloud. This allows for global rule evaluation over all of the metrics and logs stored in your Grafana Cloud stack.
{{% admonition type="note" %}}
`mimirtool` does _not_ support Loki.
{{% /admonition %}}
Prometheus-style alerting is driven by your Grafana Cloud Metrics, Grafana Cloud Logs, and Grafana Cloud Alerts instances. The Metrics and Logs instances hold the rule definitions, while the Alerts instance is in charge of routing and managing the alerts that fire from them. These are separate systems that must be individually configured in order for alerting to work correctly.
The following sections cover all of these concepts:
- How to upload alerting and recording rules definition to your Grafana Cloud Metrics instance
- How to upload alerting rules definition to your Grafana Cloud Logs instance
- How to configure an Alertmanager for your Grafana Cloud Alerts instance, giving you access to the Alertmanager UI.
**Note:** You need an API key with proper permissions. You can use the same API key for your Metrics, Logs, and Alerts instances.
## Download and install mimirtool
mimirtool is a powerful command-line tool for interacting with Mimir, which powers Grafana Cloud Metrics and Alerts. You'll use mimirtool to upload your metric and log rules definition and the Alertmanager configuration using YAML files.
For more information, including installation instructions, see [Grafana Mimirtool](/docs/mimir/latest/operators-guide/tools/mimirtool).
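For example, on a Linux amd64 host, downloading the prebuilt binary usually looks like the following; the release asset name is an assumption here, so check the Mimirtool documentation for the exact name for your platform.
```bash
# Download the mimirtool binary (Linux amd64 shown; adjust for your platform)
curl -fLo mimirtool \
  https://github.com/grafana/mimir/releases/latest/download/mimirtool-linux-amd64
chmod +x mimirtool
./mimirtool --help
```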
{{% admonition type="note" %}}
For mimirtool to interact with Grafana Cloud, you must set the correct configuration variables. Set them using either environment variables or command-line flags.
{{% /admonition %}}
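As a sketch, the environment-variable form mirrors the `--address`, `--id`, and `--key` flags used in the commands later on this page; the variable names below (`MIMIR_ADDRESS`, `MIMIR_TENANT_ID`, `MIMIR_API_KEY`) are the ones commonly documented for mimirtool, so verify them against the version you install.
```bash
# Equivalent to passing --address, --id, and --key on every command
export MIMIR_ADDRESS=https://prometheus-us-central1.grafana.net
export MIMIR_TENANT_ID=<yourID>
export MIMIR_API_KEY=<yourKey>

# With the variables set, the flags can be omitted:
mimirtool rules list
```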
## Upload rules definition to your Grafana Cloud Metrics and Logs instance
First, upload your alerting and recording rules to your Metrics and Logs instances. You'll need the instance ID and the URL, which you can find under /orgs/`<yourOrgName>`/.
### Metrics instance
Your Metrics instance is likely to be in the `us-central1` region. Its address would be in the form of [https://prometheus-us-central1.grafana.net](https://prometheus-us-central1.grafana.net).
### Logs instance
Your Logs instance is likely to be in the `us-central1` region. Its address would be in the form of [https://logs-prod-us-central1.grafana.net](https://logs-prod-us-central1.grafana.net).
### Using mimirtool
With your instance ID, URL, and API key you're now ready to upload your rules to your metrics instance. Use the following commands and files as a reference.
Below is an example alert and rule definition YAML file. Take note of the `namespace` key, which replaces the concept of "files" in this context, given that each instance supports only one configuration file.
```yaml
# first_rules.yml
namespace: 'first_rules'
groups:
  - name: 'shopping_service_rules_and_alerts'
    rules:
      - alert: 'PromScrapeFailed'
        annotations:
          message: 'Prometheus failed to scrape a target {{ $labels.job }} / {{ $labels.instance }}'
        expr: 'up != 1'
        for: '1m'
        labels:
          'severity': 'critical'
      - record: 'job:up:sum'
        expr: 'sum by(job) (up)'
```
Although both recording and alerting rules are defined under the `rules` key, the difference between a recording rule and an alerting rule is _generally_ (though not exclusively) whether the `record` or `alert` key is defined.
With this file, you can run the following commands to upload your rules file to your Metrics or Logs instance. Keep in mind that these are example commands for your Metrics instance, and they use placeholders and command-line flags. Follow a similar pattern for your Logs instances by switching the address to the correct one. The examples also assume that the files are located in the same directory.
```bash
$ mimirtool rules load first_rules.yml \
--address=https://prometheus-us-central1.grafana.net \
--id=<yourID> \
--key=<yourKey>
```
Next, confirm that the rules were uploaded correctly by running:
```bash
$ mimirtool rules list \
--address=https://prometheus-us-central1.grafana.net \
--id=<yourID> \
--key=<yourKey>
```
Output is a list that shows you all the namespaces and rule groups for your instance ID:
```bash
Namespace | Rule Group
first_rules | shopping_service_rules_and_alerts
```
You can also print the rules:
```bash
$ mimirtool rules print \
--address=https://prometheus-us-central1.grafana.net \
--id=<yourID> \
--key=<yourKey>
```
Output from the print command should look like this:
```yaml
first_rules:
  - name: shopping_service_rules_and_alerts
    interval: 0s
    rules:
      - alert: PromScrapeFailed
        expr: up != 1
        for: 1m
        labels:
          severity: critical
        annotations:
          message: Prometheus failed to scrape a target {{ $labels.job }} / {{ $labels.instance }}
      - record: job:up:sum
        expr: sum by(job) (up)
```
## Upload Alertmanager configuration to your Grafana Cloud Alerts instance
To receive alerts you'll need to upload your Alertmanager configuration to your Grafana Cloud Alerts instance. Similar to the previous step, you'll need the corresponding instance ID, URL and API key. These should be part of /orgs/`<yourOrgName>`/.
Your Alerts instance is likely to be in the `us-central1` region. Its address would be in the form of [https://alertmanager-us-central1.grafana.net](https://alertmanager-us-central1.grafana.net).
### Using mimirtool
With your instance ID, URL, and API key you're now ready to upload your Alertmanager configuration to your Alerts instance. Use the following commands and files as a reference.
Ultimately, you'll need to [write your own](https://prometheus.io/docs/alerting/latest/configuration/) or adapt an [example config file](https://github.com/prometheus/alertmanager/blob/master/doc/examples/simple.yml) for alerts to be delivered.
Below is an example Alertmanager configuration. Note that this is not a working configuration: alerts won't be delivered with it, but your Alertmanager UI will be accessible.
```yaml
# alertmanager.yml
global:
  smtp_smarthost: 'localhost:25'
  smtp_from: 'youraddress@example.org'
route:
  receiver: example-email
receivers:
  - name: example-email
    email_configs:
      - to: 'youraddress@example.org'
```
With this file, you can run the following commands to upload your Alertmanager configuration in your Alerts instance.
```bash
$ mimirtool alertmanager load alertmanager.yml \
--address=https://alertmanager-us-central1.grafana.net \
--id=<yourID> \
--key=<yourKey>
```
Then, confirm that the configuration was uploaded correctly by running:
```bash
$ mimirtool alertmanager get \
--address=https://alertmanager-us-central1.grafana.net \
--id=<yourID> \
--key=<yourKey>
```
You should see output similar to the following:
```yaml
global:
  smtp_smarthost: 'localhost:25'
  smtp_from: 'youraddress@example.org'
route:
  receiver: example-email
receivers:
  - name: example-email
    email_configs:
      - to: 'youraddress@example.org'
```
Finally, you can delete the configuration with:
```bash
$ mimirtool alertmanager delete \
--address=https://alertmanager-us-central1.grafana.net \
--id=<yourID> \
--key=<yourKey>
```
### UI access
After you upload a working Alertmanager configuration file, you can access the Alertmanager UI at: https://alertmanager-us-central1.grafana.net/alertmanager.
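If you prefer a quick command-line check that a configuration is being served, something like the following may work. The basic-auth scheme (instance ID and API key) and the `/alertmanager/api/v2/status` path are assumptions here rather than documented endpoints, so treat this as a sketch and verify against your stack's connection details.
```bash
# Assumption: the hosted Alertmanager accepts HTTP basic auth with
# your instance ID as the username and your API key as the password.
curl -u "<yourID>:<yourKey>" \
  https://alertmanager-us-central1.grafana.net/alertmanager/api/v2/status
```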


@@ -0,0 +1,76 @@
---
aliases:
- /docs/grafana-cloud/alerts/grafana-cloud-alerting/
- /docs/grafana-cloud/how-do-i/grafana-cloud-alerting/
- /docs/grafana-cloud/legacy-alerting/grafana-cloud-alerting/
description: Grafana Cloud Alerting
title: Grafana Cloud Alerting
weight: 100
---
# Grafana Cloud Alerting
Grafana Cloud Alerting allows you to create and manage all of your Prometheus-style alerting rules, for both Prometheus metrics and Loki log data. With this feature, you don't need to leave Grafana, upload or edit configuration files, or install additional tools.
![Grafana Cloud Alerting](/static/img/docs/grafana-cloud/grafana-cloud-alerting.png)
## Permissions
All members of an organization that have alerts set up can view alerts in Grafana Cloud Alerting. This includes everyone with a Viewer, Editor, or Admin role.
Users with the organization Admin role can also create, edit, or delete alerts.
## Data sources
Grafana Cloud Alerting supports rule management across multiple data sources, for both metrics and logs, across all of the stacks in your org. If you have more than one Prometheus or Loki data source, there will be a dropdown at the top for you to select the data source to configure rules.
{{% admonition type="note" %}}
Pay attention to which data source you select. Cloud alerts are tied to a specific data source. For example, if you have a Loki data source selected, you will not be able to create an alert based on a Prometheus data source.
![Cloud Alerting Data Source](/static/img/docs/grafana-cloud/grafana-cloud-alerting-data-source.png)
{{% /admonition %}}
## Alerts and recording rules
Prometheus supports two types of rules:
- [Recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) - Recording rules allow you to precompute expressions or queries and save the results as stored rules.
- [Alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) - Alerting rules allow you to define alert conditions and to route those notifications to an external service. An alert fires if metrics meet criteria defined in the alerting rule.
Both of these rules are configurable from the Grafana Cloud Alerting interface and configured in the same way.
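Both rule types use the standard Prometheus rule syntax. As a small sketch with hypothetical names, a group containing one recording rule and one alerting rule looks like this:
```yaml
groups:
  - name: example_rules
    rules:
      # Recording rule: precompute the per-job sum of `up`
      - record: job:up:sum
        expr: sum by(job) (up)
      # Alerting rule: fire after a target has been down for 5 minutes
      - alert: TargetDown
        expr: up == 0
        for: 5m
        labels:
          severity: warning
```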
## Alert states
Alert states are identical to the standard format found in Prometheus rule configurations. In Grafana Cloud Alerting, each individual alert is highlighted by its state to more clearly distinguish between alerts.
- **Firing -** Alerts that have been active for longer than the configured threshold. Alerts are highlighted in red and tagged with a red `firing` label.
- **Pending -** Alerts that have been active for less than the configured threshold. Alerts are highlighted in orange.
- **Inactive -** Alerts that are neither firing nor pending. Alerts are highlighted in green.
## Notifications
The **Notifications** tab is where you can view all current notifications and sort them by various states, receivers, and labels.
![Grafana Cloud Alerting Notifications](/static/img/docs/grafana-cloud/grafana-cloud-alerting-notifications.png)
## Limits
There is a limit on how many rules can be created in a rule group. There is also a limit on how many rule groups can be created.
You can create:
- 20 rules per rule group
- 35 rule groups
> It is possible to increase these limits. Please contact customer support for further information.
If you exceed the limits, you will encounter an error similar to this:
```bash
ERROR[0000] requests failed fields.msg="request failed with response body
per-user rules per rule group limit (limit: 20 actual: 22) exceeded\n"
status="400 Bad Request"
ERROR[0000] unable to load rule group error="failed request to the cortex api"
group=limit_rules_per_group namespace=test
```
To increase the number of rules or rule groups you can configure, contact support to upgrade your account.


@@ -0,0 +1,55 @@
---
aliases:
- /docs/grafana-cloud/alerts/grafana-cloud-alerting/alertmanager/
- /docs/grafana-cloud/how-do-i/grafana-cloud-alerting/alertmanager/
- /docs/grafana-cloud/legacy-alerting/grafana-cloud-alerting/alertmanager/
description: Alertmanager
title: Alertmanager
weight: 500
---
# Alertmanager
Grafana Cloud Alerting allows you to edit and view the configuration for your Alertmanager directly inside of Grafana. See the official [Alertmanager documentation](https://prometheus.io/docs/alerting/latest/configuration/) to learn how to configure it.
{{% admonition type="note" %}}
Only organization Admins can view or update Alertmanager configurations.
{{% /admonition %}}
## Edit a config for Grafana Cloud Alerting
1. In Grafana, hover your cursor over the **Grafana Cloud Alerting** icon and then click **Alertmanager**.
1. If you have more than one Alertmanager source, there will be a dropdown at the top for you to select the data source to edit configurations.
1. The currently active configuration for the Alertmanager is displayed. Click the **Edit** button to enter edit mode and make your changes. Click **Save and finish editing** when you are done to persist your changes.
1. Alternatively, updates to the Alertmanager configuration made using mimirtool will also sync and appear here.
## Use the Grafana Labs-supplied SMTP option to configure email notifications
Grafana Cloud users who do not have an SMTP server available for sending alert emails may use the Grafana Labs-supplied SMTP relay (available at `smtprelay:2525`).
1. In Grafana, hover your cursor over the **Grafana Cloud Alerting** icon and then click **Alertmanager**.
1. If you have more than one Alertmanager source, there will be a dropdown at the top for you to select the data source to edit configurations.
1. Find the info box with the heading **Send alert email notifications from Grafana Cloud** at the top.
1. Enter the desired email address into the **email address** field.
1. Click the **Update configuration** button. The Alertmanager configuration will be updated with the Grafana SMTP relay settings and an "email" receiver that sends to the specified email address.
{{% admonition type="note" %}}
Following these steps will overwrite any custom global SMTP settings that you might have. The default route configuration will send all notifications to the "email" receiver. If you have already customized routes, they will not be updated, and you will have to configure the "email" receiver on the appropriate route.
{{% /admonition %}}
Use these settings in your Grafana Cloud Alerting YAML if you do not find them already set. The most important setting is the `smtp_require_tls: false` line; if it is not set properly, alert emails will not be received. If you use mimirtool to configure the Alertmanager, this defaults to `true`, which will cause problems.
```yaml
global:
  smtp_from: noreply@grafana.net
  smtp_smarthost: smtprelay:2525
  smtp_require_tls: false
```
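Putting it together, a minimal configuration that uses the relay might look like the following; the receiver name matches the "email" receiver described above, and the destination address is a placeholder.
```yaml
global:
  smtp_from: noreply@grafana.net
  smtp_smarthost: smtprelay:2525
  smtp_require_tls: false
route:
  receiver: email
receivers:
  - name: email
    email_configs:
      - to: 'you@example.org' # placeholder address
```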
## Troubleshooting Alertmanager failures
Configuration errors can cause Alertmanager notification failures, for example a typo in an email address recipient or an expired token for a webhook. Grafana Cloud provisions a Loki data source, `grafanacloud-<stack_slug>-usage-insights`, which can be used to display select notification errors with a query similar to the example below. The `instance_type` label of `alerts` is what selects the Grafana Cloud Alertmanager logs.
```logql
{instance_type="alerts"} | logfmt | level="warn"
```


@@ -0,0 +1,59 @@
---
aliases:
- /docs/grafana-cloud/alerts/grafana-cloud-alerting/create-edit-rules/
- /docs/grafana-cloud/how-do-i/grafana-cloud-alerting/create-edit-rules/
- /docs/grafana-cloud/legacy-alerting/grafana-cloud-alerting/create-edit-rules/
description: Create and edit alert rules
title: Create and edit alert rules
weight: 200
---
# Create and edit alert rules
Creating alerts in Grafana Cloud differs from creating alerts directly with Prometheus or Loki. While the rule format is the same, everything is done in the Grafana Cloud Alerting interface, rather than with configuration files.
{{% admonition type="note" %}}
Only organization Admins can create or edit alert rules.
{{% /admonition %}}
## Create an alert rule
1. In Grafana, hover your cursor over the **Grafana Cloud Alerting** icon and then click **Alerts and rules**.
1. If you have more than one Prometheus or Loki data source, there will be a dropdown at the top for you to select the data source to create or edit rules.
1. Click **Edit rules**.
1. Click **Add rule**.
Grafana creates a new rule with placeholders.
```
alert: ""
expr: ""
```
Enter text according to regular Prometheus rule configuration guidelines:
- [Recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/)
- [Alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/)
{{% admonition type="note" %}}
Grafana Cloud Alerting does not support comments.
{{% /admonition %}}
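For example, you might replace the placeholders with an alerting rule like the following; the rule name, threshold, and annotation text are hypothetical, and the snippet intentionally contains no comments, since they are not supported here.
```yaml
alert: InstanceDown
expr: up == 0
for: 5m
labels:
  severity: critical
annotations:
  summary: 'Instance {{ $labels.instance }} has been down for more than 5 minutes'
```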
When you are finished, click **Save**. You can then repeat the process to create more rules or click **Finish editing** to return to the rules list.
## Edit an alert rule
1. In Grafana, hover your cursor over the **Grafana Cloud Alerting** icon and then click **Alerts and rules**.
1. If you have more than one Prometheus or Loki data source, there will be a dropdown at the top for you to select the data source to create or edit rules.
1. Click **Edit rules**.
1. Scroll down to the rule that you want to edit and then click **Edit**.
1. Make any necessary changes to the rule text and then click **Save**.
1. Click **Finish editing** to return to the rules list.
## Delete an alert rule
1. In Grafana, hover your cursor over the **Grafana Cloud Alerting** icon and then click **Alerts and rules**.
1. If you have more than one Prometheus or Loki data source, there will be a dropdown at the top for you to select the data source to create or edit rules.
1. Click **Edit rules**.
1. Scroll down to the rule that you want to edit and then click **Delete**.
1. Click **Finish editing** to return to the rules list.


@@ -0,0 +1,50 @@
---
aliases:
- /docs/grafana-cloud/alerts/grafana-cloud-alerting/namespaces-and-groups/
- /docs/grafana-cloud/how-do-i/grafana-cloud-alerting/namespaces-and-groups/
- /docs/grafana-cloud/legacy-alerting/grafana-cloud-alerting/namespaces-and-groups/
description: Namespaces and rule groups
title: Namespaces and rule groups
weight: 400
---
# Namespaces and rule groups
All alerting and recording rules created in Grafana Cloud Alerting default to a single namespace and a single rule group.
## Managing namespaces
While Grafana Cloud Alerting does support viewing multiple namespaces that have been added through mimirtool, it is currently not possible to add new namespaces or to rename existing ones.
## Managing rule groups
Rule groups can be managed directly within the Grafana Cloud Alerting interface or through mimirtool, similar to managing namespaces.
{{% admonition type="note" %}}
By default, Grafana Cloud limits the number of rule groups to 20, with a limit of up to 15 rules per group. If you wish to increase the default limits, please [open a support ticket](/profile/org#support) or reach out to your account manager.
{{% /admonition %}}
### Create a new rule group
1. In Grafana, hover your cursor over the **Grafana Cloud Alerting** icon and then click **Alerts and rules**.
2. If you have more than one Prometheus or Loki data source, there will be a dropdown at the top for you to select the data source to create or edit rules.
3. Click **Create new rule group**.
4. Enter text to name your new rule group.
5. Enter text for the new rule in your new rule group, according to regular Prometheus rule configuration guidelines:
- [Recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/)
- [Alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/)
6. When you are finished naming your new rule group and adding new rule details, click **Save**.
{{% admonition type="note" %}}
In order to create a new rule group, you must also create a new rule for it.
{{% /admonition %}}
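As a sketch, the rule you enter alongside the new group could be a simple recording rule such as the following; the rule name is hypothetical, and it assumes the `node_cpu_seconds_total` metric from node_exporter is being collected.
```yaml
record: instance:node_cpu_utilisation:rate5m
expr: 1 - avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))
```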
### Update a rule group
Existing rule groups can be renamed by selecting the **pencil** icon next to the rule group name.
### Delete a rule group
Rule groups are automatically deleted once all rules within the group are deleted.


@@ -0,0 +1,31 @@
---
aliases:
- /docs/grafana-cloud/alerts/grafana-cloud-alerting/silences/
- /docs/grafana-cloud/how-do-i/grafana-cloud-alerting/silences/
- /docs/grafana-cloud/legacy-alerting/grafana-cloud-alerting/silences/
description: Silences
title: Silences
weight: 600
---
# Silences
Grafana Cloud Alerting allows you to manage silences for your Alertmanager notifications directly inside of Grafana. This applies to alerting rules created for both Prometheus metrics and Loki logs.
## Create a silence
1. In Grafana, hover your cursor over the **Grafana Cloud Alerting** icon and then click **Silences**.
2. Click **New silence**.
3. Enter a date in **Start of silence** to indicate when the silence should go into effect.
4. Enter a date in **End of silence** to indicate when the silence should expire.
5. Enter one or more matchers by filling out the **Name** and **Value** fields. Matchers determine which rules the silence will apply to.
6. Enter the name of the owner in **Creator**.
7. Enter a **Comment**.
8. To view which rules will be affected by your silence, click **Preview alerts**.
9. Otherwise, when you are finished, click **Create**.
## Update an existing silence
You can always update an existing silence by clicking the **Edit silence** button under the silence.
It is also possible to expire a silence on demand by clicking the **Expire silence** button under the silence. This overrides the silence's original scheduled expiration date.


@@ -0,0 +1,42 @@
---
aliases:
- /docs/grafana-cloud/alerts/grafana-cloud-alerting/view-filter-rules/
- /docs/grafana-cloud/how-do-i/grafana-cloud-alerting/view-filter-alerts/
- /docs/grafana-cloud/legacy-alerting/grafana-cloud-alerting/view-filter-rules/
description: View and filter alert rules
title: View and filter alert rules
weight: 300
---
# View and filter alert rules
Grafana Cloud Alerting displays a list of all recording and alerting rules assigned to a selected data source in the Alerts and rules tab.
All members of an organization that have access to a particular data source can view the list of rules and filter or reorder their view.
## View alert rules
1. Hover your cursor over the **Grafana Cloud Alerting** icon (alarm bell with Prometheus logo) and then click **Alerts and rules**.
1. In the list at the top of the tab, select the data source for which you want to view rules.
Grafana displays rules according to rule groups. If namespaces and alert groups have been added to your instance, they are ordered alphabetically. Otherwise, you will have one namespace called `default` and an alert group called `rules`.
If an alert is firing, click the down caret arrow to see additional information. The Labels and annotations section appears.
## Filter your alert rule view
You can control which alerts you see and in what order they appear in several ways. Combine different filters to personalize your view so that you can quickly find the information that you need.
- **Filter by alert state -** Click the toggles to show or hide alerts in different states. Turn off the toggle to hide alerts matching the state.
- **Filter by rule type -** Click the toggles to show or hide alerting rules or recording rules.
- **View options -** Click the toggle to show or hide the Prometheus annotations shown in the Labels and annotations section.
- **Rule sorting -** Click an option to sort alert rules within each rule group.
  - **None -** No special sort is applied; rules appear in the same order as in the editing list, as if in a file.
  - **A-Z -** Sorts rules alphabetically according to the rule name.
  - **Alert state -** Sorts rules according to the alert state (Firing, Pending, or Inactive).
## View alert in Explore
Click **View in Explore** or click the `expr` link to open the `expr` in [Explore](/docs/grafana/latest/explore/).
> **Note:** Only users with Admin or Editor roles in an organization can use the Explore feature, unless viewers have been granted edit access.