Alerting: add docs for file provisioning (#53101)

Jean-Philippe Quéméner 2022-08-17 18:53:36 +02:00 committed by GitHub
parent c7212643c2
commit 2fef8e6f2c
5 changed files with 773 additions and 0 deletions

View File

@@ -0,0 +1,186 @@
# config file version
apiVersion: 1

# # List of rule groups to import or update
# groups:
#   # <int> organization ID, default = 1
#   - orgId: 1
#     # <string, required> name of the rule group
#     name: my_rule_group
#     # <string, required> name of the folder the rule group will be stored in
#     folder: my_first_folder
#     # <duration, required> interval of the rule group evaluation
#     interval: 60s
#     # <list, required> list of rules that are part of the rule group
#     rules:
#       # <string, required> unique identifier for the rule
#       - uid: my_id_1
#         # <string, required> title of the rule, will be displayed in the UI
#         title: my_first_rule
#         # <string, required> query used for the condition
#         condition: A
#         # <list, required> list of query objects that should be executed on each
#         # evaluation - should be obtained via the API
#         data:
#           - refId: A
#             datasourceUid: "-100"
#             model:
#               conditions:
#                 - evaluator:
#                     params:
#                       - 3
#                     type: gt
#                   operator:
#                     type: and
#                   query:
#                     params:
#                       - A
#                   reducer:
#                     type: last
#                   type: query
#               datasource:
#                 type: __expr__
#                 uid: "-100"
#               expression: 1==0
#               intervalMs: 1000
#               maxDataPoints: 43200
#               refId: A
#               type: math
#         # <string> UID of a dashboard that the alert rule should be linked to
#         dashboardUid: my_dashboard
#         # <int> ID of the panel that the alert rule should be linked to
#         panelId: 123
#         # <string> state of the alert rule when no data is returned
#         # possible values: "NoData", "Alerting", "OK", default = NoData
#         noDataState: Alerting
#         # <string> state of the alert rule when the query execution
#         # fails - possible values: "Error", "Alerting", "OK"
#         # default = Alerting
#         execErrState: Alerting
#         # <duration, required> how long the alert condition should be breached before
#         # Firing. Before this time has elapsed, the alert is considered to be Pending
#         for: 60s
#         # <map<string, string>> map of strings to attach arbitrary custom data
#         annotations:
#           some_key: some_value
#         # <map<string, string>> map of strings to filter and
#         # route alerts
#         labels:
#           team: sre_team_1

# # List of alert rule UIDs that should be deleted
# deleteRules:
#   # <int> organization ID, default = 1
#   - orgId: 1
#     # <string, required> unique identifier for the rule
#     uid: my_id_1

# # List of contact points to import or update
# contactPoints:
#   # <int> organization ID, default = 1
#   - orgId: 1
#     # <string, required> name of the contact point
#     name: cp_1
#     receivers:
#       # <string, required> unique identifier for the receiver
#       - uid: first_uid
#         # <string, required> type of the receiver
#         type: prometheus-alertmanager
#         # <object, required> settings for the specific receiver type
#         settings:
#           url: http://test:9000

# # List of receivers that should be deleted
# deleteContactPoints:
#   - orgId: 1
#     uid: first_uid

# # List of notification policies to import or update
# policies:
#   # <int> organization ID, default = 1
#   - orgId: 1
#     # <string> name of the receiver that should be used for this route
#     receiver: grafana-default-email
#     # <list<string>> The labels by which incoming alerts are grouped together. For example,
#     # multiple alerts coming in for cluster=A and alertname=LatencyHigh would
#     # be batched into a single group.
#     #
#     # To aggregate by all possible labels, use the special value '...' as
#     # the sole label name, for example:
#     # group_by: ['...']
#     # This effectively disables aggregation entirely, passing through all
#     # alerts as-is. This is unlikely to be what you want, unless you have
#     # a very low alert volume or your upstream notification system performs
#     # its own grouping.
#     group_by:
#       - grafana_folder
#       - alertname
#     # <list> a list of matchers that an alert has to fulfill to match the node
#     matchers:
#       - alertname = Watchdog
#       - severity =~ "warning|critical"
#     # <list> Times when the route should be muted. These must match the name of a
#     # mute time interval.
#     # Additionally, the root node cannot have any mute times.
#     # When a route is muted it will not send any notifications, but
#     # otherwise acts normally (including ending the route-matching process
#     # if the `continue` option is not set)
#     mute_time_intervals:
#       - abc
#     # <duration> How long to initially wait to send a notification for a group
#     # of alerts. Allows to collect more initial alerts for the same group.
#     # (Usually ~0s to few minutes), default = 30s
#     group_wait: 30s
#     # <duration> How long to wait before sending a notification about new alerts that
#     # are added to a group of alerts for which an initial notification has
#     # already been sent. (Usually ~5m or more), default = 5m
#     group_interval: 5m
#     # <duration> How long to wait before sending a notification again if it has already
#     # been sent successfully for an alert. (Usually ~3h or more), default = 4h
#     repeat_interval: 4h
#     # <list> Zero or more child routes
#     routes:
#       # ...

# # List of orgIds that should be reset to the default policy
# resetPolicies:
#   - 1

# # List of templates to import or update
# templates:
#   # <int> organization ID, default = 1
#   - orgId: 1
#     # <string, required> name of the template, must be unique
#     name: my_first_template
#     # <string, required> content of the template
#     template: Alerting with a custom text template

# # List of templates that should be deleted
# deleteTemplates:
#   # <int> organization ID, default = 1
#   - orgId: 1
#     # <string, required> name of the template, must be unique
#     name: my_first_template

# # List of mute time intervals to import or update
# muteTimes:
#   # <int> organization ID, default = 1
#   - orgId: 1
#     # <string, required> name of the mute time interval, must be unique
#     name: mti_1
#     # <list> time intervals that should trigger the muting
#     # refer to https://prometheus.io/docs/alerting/latest/configuration/#time_interval-0
#     time_intervals:
#       - times:
#           - start_time: '06:00'
#             end_time: '23:59'
#         weekdays: ['monday:wednesday', 'saturday', 'sunday']
#         months: ['1:3', 'may:august', 'december']
#         years: ['2020:2022', '2030']
#         days_of_month: ['1:5', '-3:-1']

# # List of mute time intervals that should be deleted
# deleteMuteTimes:
#   # <int> organization ID, default = 1
#   - orgId: 1
#     # <string, required> name of the mute time interval, must be unique
#     name: mti_1

View File

@@ -364,8 +364,590 @@ providers:
> **Note:** To provision dashboards to the General folder, store them in the root of your `path`.
## Alerting
You can manage alert objects in Grafana by adding one or more YAML or JSON
configuration files to the [`provisioning/alerting`]({{< relref "../../setup-grafana/configure-grafana/" >}})
directory. These files are applied when Grafana starts. While Grafana is
running, you can trigger a hot reload through the
[Admin API]({{< relref "../../developers/http_api/admin/#reload-provisioning-configurations" >}}).
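A single file can combine any of the sections documented below. As a quick orientation, this sketch names every supported top-level key (all optional; the empty lists are placeholders, not working values):

```yaml
# config file version
apiVersion: 1

# fill in only the sections you need, in the formats shown
# in the sections that follow
groups: [] # alert rule groups to import or update
deleteRules: [] # alert rule UIDs to delete
contactPoints: [] # contact points to import or update
deleteContactPoints: [] # contact point receivers to delete
policies: [] # notification policies to import or update
resetPolicies: [] # orgIds whose notification policy should be reset to the default
templates: [] # message templates to import or update
deleteTemplates: [] # message templates to delete
muteTimes: [] # mute time intervals to import or update
deleteMuteTimes: [] # mute time intervals to delete
```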
### Rules
Creation
```yaml
# config file version
apiVersion: 1

# List of rule groups to import or update
groups:
  # <int> organization ID, default = 1
  - orgId: 1
    # <string, required> name of the rule group
    name: my_rule_group
    # <string, required> name of the folder the rule group will be stored in
    folder: my_first_folder
    # <duration, required> interval at which the rule group is evaluated
    interval: 60s
    # <list, required> list of rules that are part of the rule group
    rules:
      # <string, required> unique identifier for the rule
      - uid: my_id_1
        # <string, required> title of the rule that will be displayed in the UI
        title: my_first_rule
        # <string, required> which query should be used for the condition
        condition: A
        # <list, required> list of query objects that should be executed on each
        # evaluation - should be obtained through the API
        data:
          - refId: A
            datasourceUid: '-100'
            model:
              conditions:
                - evaluator:
                    params:
                      - 3
                    type: gt
                  operator:
                    type: and
                  query:
                    params:
                      - A
                  reducer:
                    type: last
                  type: query
              datasource:
                type: __expr__
                uid: '-100'
              expression: 1==0
              intervalMs: 1000
              maxDataPoints: 43200
              refId: A
              type: math
        # <string> UID of a dashboard that the alert rule should be linked to
        dashboardUid: my_dashboard
        # <int> ID of the panel that the alert rule should be linked to
        panelId: 123
        # <string> the state the alert rule will have when no data is returned
        # possible values: "NoData", "Alerting", "OK", default = NoData
        noDataState: Alerting
        # <string> the state the alert rule will have when the query execution
        # fails - possible values: "Error", "Alerting", "OK"
        # default = Alerting
        execErrState: Alerting
        # <duration, required> how long the alert condition must be breached before
        # the alert fires; until then it is considered Pending
        for: 60s
        # <map<string, string>> a map of strings to attach arbitrary custom data
        annotations:
          some_key: some_value
        # <map<string, string>> a map of strings that can be used to filter and
        # route alerts
        labels:
          team: sre_team_1
```
Deletion
```yaml
# config file version
apiVersion: 1

# List of alert rule UIDs that should be deleted
deleteRules:
  # <int> organization ID, default = 1
  - orgId: 1
    # <string, required> unique identifier for the rule
    uid: my_id_1
```
### Contact points
Creation
```yaml
# config file version
apiVersion: 1

# List of contact points to import or update
contactPoints:
  # <int> organization ID, default = 1
  - orgId: 1
    # <string, required> name of the contact point
    name: cp_1
    receivers:
      # <string, required> unique identifier for the receiver
      - uid: first_uid
        # <string, required> type of the receiver
        type: prometheus-alertmanager
        # <object, required> settings for the specific receiver type
        settings:
          url: http://test:9000
```
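Since `receivers` is a list, a single contact point can notify several integrations at once. A sketch with illustrative uids, combining two receiver types from the settings reference below:

```yaml
apiVersion: 1
contactPoints:
  - orgId: 1
    name: cp_multi
    receivers:
      # both receivers fire for every notification routed to cp_multi;
      # uids and settings values are illustrative
      - uid: multi_uid_1
        type: email
        settings:
          addresses: me@example.com
      - uid: multi_uid_2
        type: slack
        settings:
          recipient: alerting-dev
          token: xxx
```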
Deletion
```yaml
# config file version
apiVersion: 1

# List of receivers that should be deleted
deleteContactPoints:
  # <int> organization ID, default = 1
  - orgId: 1
    # <string, required> unique identifier for the receiver
    uid: first_uid
```
#### Settings
The following examples show the settings available for each contact point type.
##### Alertmanager
```yaml
type: prometheus-alertmanager
settings:
  # <string, required>
  url: http://localhost:9093
  # <string>
  basicAuthUser: abc
  # <string>
  basicAuthPassword: abc123
```
##### DingDing
```yaml
type: dingding
settings:
  # <string, required>
  url: https://oapi.dingtalk.com/robot/send?access_token=xxxxxxxxx
  # <string> options: link, actionCard
  msgType: link
  # <string>
  message: |
    {{ template "default.message" . }}
```
##### Discord
```yaml
type: discord
settings:
  # <string, required>
  url: https://discord/webhook
  # <string>
  avatar_url: https://my_avatar
  # <bool>
  use_discord_username: true
  # <string>
  message: |
    {{ template "default.message" . }}
```
##### E-Mail
```yaml
type: email
settings:
  # <string, required>
  addresses: me@example.com;you@example.com
  # <bool>
  singleEmail: false
  # <string>
  message: my optional message to include
  # <string>
  subject: |
    {{ template "default.title" . }}
```
##### Google Hangouts Chat
```yaml
type: googlechat
settings:
  # <string, required>
  url: https://google/webhook
  # <string>
  message: |
    {{ template "default.message" . }}
```
##### Kafka
```yaml
type: kafka
settings:
  # <string, required>
  kafkaRestProxy: http://localhost:8082
  # <string, required>
  kafkaTopic: topic1
```
##### LINE
```yaml
type: line
settings:
  # <string, required>
  token: xxx
```
##### Microsoft Teams
```yaml
type: teams
settings:
  # <string, required>
  url: https://ms_teams_url
  # <string>
  title: |
    {{ template "default.title" . }}
  # <string>
  sectiontitle: ''
  # <string>
  message: |
    {{ template "default.message" . }}
```
##### OpsGenie
```yaml
type: opsgenie
settings:
  # <string, required>
  apiKey: xxx
  # <string, required>
  apiUrl: https://api.opsgenie.com/v2/alerts
  # <string>
  message: |
    {{ template "default.title" . }}
  # <string>
  description: some descriptive description
  # <bool>
  autoClose: false
  # <bool>
  overridePriority: false
  # <string> options: tags, details, both
  sendTagsAs: both
```
##### PagerDuty
```yaml
type: pagerduty
settings:
  # <string, required>
  integrationKey: XXX
  # <string> options: critical, error, warning, info
  severity: critical
  # <string>
  class: ping failure
  # <string>
  component: Grafana
  # <string>
  group: app-stack
  # <string>
  summary: |
    {{ template "default.message" . }}
```
##### Pushover
```yaml
type: pushover
settings:
  # <string, required>
  apiToken: XXX
  # <string, required>
  userKey: user1,user2
  # <string>
  device: device1,device2
  # <string> options (high to low): 2,1,0,-1,-2
  priority: '2'
  # <string>
  retry: '30'
  # <string>
  expire: '120'
  # <string>
  sound: siren
  # <string>
  okSound: magic
  # <string>
  message: |
    {{ template "default.message" . }}
```
##### Slack
```yaml
type: slack
settings:
  # <string, required>
  recipient: alerting-dev
  # <string, required>
  token: xxx
  # <string>
  username: grafana_bot
  # <string>
  icon_emoji: heart
  # <string>
  icon_url: https://icon_url
  # <string>
  mentionUsers: user_1,user_2
  # <string>
  mentionGroups: group_1,group_2
  # <string> options: here, channel
  mentionChannel: here
  # <string> Optionally provide a Slack incoming webhook URL for sending
  # messages; in this case the token isn't necessary
  url: https://some_webhook_url
  # <string>
  endpointUrl: https://custom_url/api/chat.postMessage
  # <string>
  title: |
    {{ template "slack.default.title" . }}
  # <string>
  text: |
    {{ template "slack.default.text" . }}
```
##### Sensu Go
```yaml
type: sensugo
settings:
  # <string, required>
  url: http://sensu-api.local:8080
  # <string, required>
  apikey: xxx
  # <string>
  entity: default
  # <string>
  check: default
  # <string>
  handler: some_handler
  # <string>
  namespace: default
  # <string>
  message: |
    {{ template "default.message" . }}
```
##### Telegram
```yaml
type: telegram
settings:
  # <string, required>
  bottoken: xxx
  # <string, required>
  chatid: some_chat_id
  # <string>
  message: |
    {{ template "default.message" . }}
```
##### Threema Gateway
```yaml
type: threema
settings:
  # <string, required>
  api_secret: xxx
  # <string, required>
  gateway_id: A5K94S9
  # <string, required>
  recipient_id: A9R4KL4S
```
##### VictorOps
```yaml
type: victorops
settings:
  # <string, required>
  url: XXX
  # <string> options: CRITICAL, WARNING
  messageType: CRITICAL
```
##### Webhook
```yaml
type: webhook
settings:
  # <string, required>
  url: https://endpoint_url
  # <string> options: POST, PUT
  httpMethod: POST
  # <string>
  username: abc
  # <string>
  password: abc123
  # <string>
  authorization_scheme: Bearer
  # <string>
  authorization_credentials: abc123
  # <string>
  maxAlerts: '10'
```
##### WeCom
```yaml
type: wecom
settings:
  # <string, required>
  url: https://qyapi.weixin.qq.com/cgi-bin/webhook/send?key=xxxxxxxx
  # <string>
  message: |
    {{ template "default.message" . }}
  # <string>
  title: |
    {{ template "default.title" . }}
```
### Notification policies
Creation
```yaml
# config file version
apiVersion: 1

# List of notification policies to import or update
policies:
  # <int> organization ID, default = 1
  - orgId: 1
    # <string> name of the contact point that should be used for this route
    receiver: grafana-default-email
    # <list> The labels by which incoming alerts are grouped together. For example,
    # multiple alerts coming in for cluster=A and alertname=LatencyHigh would
    # be batched into a single group.
    #
    # To aggregate by all possible labels, use the special value '...' as
    # the sole label name, for example:
    # group_by: ['...']
    # This effectively disables aggregation entirely, passing through all
    # alerts as-is. This is unlikely to be what you want, unless you have
    # a very low alert volume or your upstream notification system performs
    # its own grouping.
    group_by: ['...']
    # <list> a list of matchers that an alert has to fulfill to match the node
    matchers:
      - alertname = Watchdog
      - severity =~ "warning|critical"
    # <list> Times when the route should be muted. These must match the name of a
    # mute time interval.
    # Additionally, the root node cannot have any mute times.
    # When a route is muted it will not send any notifications, but
    # otherwise acts normally (including ending the route-matching process
    # if the `continue` option is not set)
    mute_time_intervals:
      - abc
    # <duration> How long to initially wait to send a notification for a group
    # of alerts. Allows to collect more initial alerts for the same group.
    # (Usually ~0s to few minutes), default = 30s
    group_wait: 30s
    # <duration> How long to wait before sending a notification about new alerts that
    # are added to a group of alerts for which an initial notification has
    # already been sent. (Usually ~5m or more), default = 5m
    group_interval: 5m
    # <duration> How long to wait before sending a notification again if it has already
    # been sent successfully for an alert. (Usually ~3h or more), default = 4h
    repeat_interval: 4h
    # <list> Zero or more child routes
    # routes:
    #   ...
```
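The commented-out `routes` key takes child routes that accept the same fields as their parent. A hypothetical sketch (the contact point name `cp_1` is illustrative) that diverts critical alerts to a dedicated contact point:

```yaml
apiVersion: 1
policies:
  - orgId: 1
    receiver: grafana-default-email
    group_by: ['grafana_folder', 'alertname']
    routes:
      # alerts matching severity = critical go to cp_1 instead of
      # the root receiver
      - receiver: cp_1
        matchers:
          - severity = critical
        # set to true to keep matching sibling routes after this one,
        # default = false
        continue: false
```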
Reset
```yaml
# config file version
apiVersion: 1

# List of orgIds that should be reset to the default policy
resetPolicies:
  - 1
```
### Templates
Creation
```yaml
# config file version
apiVersion: 1

# List of templates to import or update
templates:
  # <int> organization ID, default = 1
  - orgId: 1
    # <string, required> name of the template, must be unique
    name: my_first_template
    # <string, required> content of the template
    template: Alerting with a custom text template
```
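The `template` field holds the raw template content, so anything beyond static text uses Go templating. A sketch under the assumption that the `{{ define }}` name matches the template name, which lets contact points reference it:

```yaml
apiVersion: 1
templates:
  - orgId: 1
    name: my_first_template
    # multi-line Go template; a contact point message field could
    # then use {{ template "my_first_template" . }}
    template: |
      {{ define "my_first_template" }}
        Custom alert text for {{ .Status }} notifications
      {{ end }}
```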
Deletion
```yaml
# config file version
apiVersion: 1

# List of templates that should be deleted
deleteTemplates:
  # <int> organization ID, default = 1
  - orgId: 1
    # <string, required> name of the template, must be unique
    name: my_first_template
```
### Mute timings
Creation
```yaml
# config file version
apiVersion: 1

# List of mute time intervals to import or update
muteTimes:
  # <int> organization ID, default = 1
  - orgId: 1
    # <string, required> name of the mute time interval, must be unique
    name: mti_1
    # <list> time intervals that should trigger the muting
    # refer to https://prometheus.io/docs/alerting/latest/configuration/#time_interval-0
    time_intervals:
      - times:
          - start_time: '06:00'
            end_time: '23:59'
        weekdays: ['monday:wednesday', 'saturday', 'sunday']
        months: ['1:3', 'may:august', 'december']
        years: ['2020:2022', '2030']
        days_of_month: ['1:5', '-3:-1']
```
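A mute timing only takes effect once a notification policy references it by name. A sketch tying `mti_1` to a child route (mute times cannot be set on the root node, as noted in the policies section):

```yaml
apiVersion: 1
policies:
  - orgId: 1
    receiver: grafana-default-email
    routes:
      # the referenced name must match an existing mute time
      # interval, such as mti_1 above
      - receiver: grafana-default-email
        matchers:
          - team = sre_team_1
        mute_time_intervals:
          - mti_1
```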
Deletion
```yaml
# config file version
apiVersion: 1

# List of mute time intervals that should be deleted
deleteMuteTimes:
  # <int> organization ID, default = 1
  - orgId: 1
    # <string, required> name of the mute time interval, must be unique
    name: mti_1
```
## Alert Notification Channels
> **Note:** Alert Notification Channels are part of legacy alerting, which is deprecated and will be removed in Grafana 10. Use Contact Points in the alerting section above.
Alert Notification Channels can be provisioned by adding one or more YAML config files in the [`provisioning/notifiers`](/administration/configuration/#provisioning) directory.
Each config file can contain the following top-level fields:

View File

@@ -650,6 +650,8 @@ Content-Type: application/json
`POST /api/admin/provisioning/access-control/reload`
`POST /api/admin/provisioning/alerting/reload`
Reloads the provisioning config files for the specified type and provisions the entities again. It does not return
until the newly provisioned entities are stored in the database. In the case of dashboards, it stops
polling for changes in dashboard files and then restarts it with the new configuration after returning.
@@ -667,6 +669,7 @@ See note in the [introduction]({{< ref "#admin-api" >}}) for an explanation.
| provisioning:reload | provisioners:datasources | datasources |
| provisioning:reload | provisioners:plugins | plugins |
| provisioning:reload | provisioners:notifications | notifications |
| provisioning:reload | provisioners:alerting | alerting |
**Example Request**:

View File

@@ -54,6 +54,7 @@ case "$1" in
if [ ! -d $PROVISIONING_CFG_DIR/alerting ]; then
    mkdir -p $PROVISIONING_CFG_DIR/alerting
    cp /usr/share/grafana/conf/provisioning/alerting/sample.yaml $PROVISIONING_CFG_DIR/alerting/sample.yaml
fi
# configuration files should not be modifiable by grafana user, as this can be a security issue

View File

@@ -68,6 +68,7 @@ if [ $1 -eq 1 ] ; then
if [ ! -d $PROVISIONING_CFG_DIR/alerting ]; then
    mkdir -p $PROVISIONING_CFG_DIR/alerting
    cp /usr/share/grafana/conf/provisioning/alerting/sample.yaml $PROVISIONING_CFG_DIR/alerting/sample.yaml
fi
# Set user permissions on /var/log/grafana, /var/lib/grafana