Docs: Adding the right syntax highlighting in a few places (#71141)

parent 00e9185b1a
commit 536146de5f
@@ -36,12 +36,12 @@ Since gossiping of notifications and silences uses both TCP and UDP port `9094`,

 If you are using Kubernetes, you can expose the pod IP [through an environment variable](https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/) via the container definition.

-```bash
+```yaml
 env:
-- name: POD_IP
-  valueFrom:
-    fieldRef:
-      fieldPath: status.podIP
+  - name: POD_IP
+    valueFrom:
+      fieldRef:
+        fieldPath: status.podIP
 ```

 1. Add the port 9094 to the Grafana deployment:
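For context on the step this hunk ends with: exposing port 9094 for gossip over both TCP and UDP in a Kubernetes deployment might look like the sketch below (illustrative only, not part of this diff):

```yaml
# Sketch: publish the Alertmanager gossip port on both protocols
# in the Grafana container spec.
ports:
  - containerPort: 9094
    name: gossip-tcp
    protocol: TCP
  - containerPort: 9094
    name: gossip-udp
    protocol: UDP
```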
@@ -21,7 +21,7 @@ Meta monitoring of Grafana Managed Alerts requires having a Prometheus server, o

 Here is an example of how this might look:

-```
+```yaml
 - job_name: grafana
   honor_timestamps: true
   scrape_interval: 15s
@@ -30,8 +30,8 @@ Here is an example of how this might look:
   scheme: http
   follow_redirects: true
   static_configs:
-  - targets:
-    - grafana:3000
+    - targets:
+        - grafana:3000
 ```

 The Grafana ruler, which is responsible for evaluating alert rules, and the Grafana Alertmanager, which is responsible for sending notifications of firing and resolved alerts, provide a number of metrics that let you observe them.
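Once Prometheus scrapes this job, the ruler metrics mentioned above can be queried. A hedged PromQL sketch, assuming the documented `grafana_alerting_rule_evaluations_total` and `grafana_alerting_rule_evaluation_failures_total` counters:

```promql
# Sketch: fraction of rule evaluations that failed over the last 5 minutes.
sum(rate(grafana_alerting_rule_evaluation_failures_total[5m]))
/
sum(rate(grafana_alerting_rule_evaluations_total[5m]))
```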
@@ -76,7 +76,7 @@ Meta monitoring in Alertmanager also requires having a Prometheus/Mimir server,

 Here is an example of how this might look:

-```
+```yaml
 - job_name: alertmanager
   honor_timestamps: true
   scrape_interval: 15s
@@ -85,8 +85,8 @@ Here is an example of how this might look:
   scheme: http
   follow_redirects: true
   static_configs:
-  - targets:
-    - alertmanager:9093
+    - targets:
+        - alertmanager:9093
 ```

 #### alertmanager_alerts
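The `alertmanager_alerts` heading that closes this hunk refers to a gauge exported by Alertmanager itself; a sketch of a typical query (the `state` label values come from upstream Alertmanager):

```promql
# Sketch: count of alerts currently active (firing) in Alertmanager.
sum(alertmanager_alerts{state="active"})
```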
@@ -50,7 +50,7 @@ When upgrading to Grafana > 9.0, existing installations that use legacy alerting

 1. Go to your custom configuration file ($WORKING_DIR/conf/custom.ini).
 2. Enter the following in your configuration:

-   ```
+   ```toml
    [alerting]
    enabled = true
@@ -73,7 +73,7 @@ You can deactivate both Grafana Alerting and legacy alerting in Grafana.

 1. Go to your custom configuration file ($WORKING_DIR/conf/custom.ini).
 1. Enter the following in your configuration:

-   ```
+   ```toml
    [alerting]
    enabled = false
@@ -93,7 +93,7 @@ All new alerts and changes made exclusively in Grafana Alerting will be deleted.

 To roll back to legacy alerting, enter the following in your configuration:

-```
+```toml
 force_migration = true

 [alerting]
@@ -113,7 +113,7 @@ If you have been using legacy alerting up until now your existing alerts will be

 To opt in to Grafana Alerting, enter the following in your configuration:

-```
+```toml
 [alerting]
 enabled = false
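The hunk is truncated here. For context, opting in pairs this toggle with the `[unified_alerting]` section, as documented elsewhere in the migration guide:

```toml
# Sketch: disable legacy alerting and enable Grafana Alerting.
[alerting]
enabled = false

[unified_alerting]
enabled = true
```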
@@ -52,7 +52,7 @@ Grafana Alerting support is included as part of the [Grafana Terraform provider]

 The following is an example you can use to configure the Terraform provider.

-```terraform
+```HCL
 terraform {
   required_providers {
     grafana = {
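The hunk cuts off inside `required_providers`; a self-contained sketch of what a typical provider configuration looks like (the version constraint, URL, and token are placeholders, not taken from this diff):

```HCL
terraform {
  required_providers {
    grafana = {
      source  = "grafana/grafana"
      version = ">= 1.40.0" # placeholder constraint
    }
  }
}

provider "grafana" {
  url  = "http://localhost:3000"   # placeholder Grafana URL
  auth = "<service account token>" # placeholder credential
}
```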
@@ -78,7 +78,7 @@ To provision contact points and templates, complete the following steps.

 This example creates a contact point that sends alert notifications to Slack.

-```terraform
+```HCL
 resource "grafana_contact_point" "my_slack_contact_point" {
   name = "Send to My Slack Channel"
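A sketch of what the complete resource might look like, assuming the provider's `slack` notifier block (the webhook URL is a placeholder):

```HCL
resource "grafana_contact_point" "my_slack_contact_point" {
  name = "Send to My Slack Channel"

  slack {
    url  = "https://hooks.slack.com/services/XXX/YYY/ZZZ" # placeholder webhook
    text = "{{ len .Alerts.Firing }} firing alert(s)"
  }
}
```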
@@ -114,7 +114,7 @@ You can re-use the same templates across many contact points. In the example abo

 This fragment can then be managed separately in Terraform:

-```terraform
+```HCL
 resource "grafana_message_template" "my_alert_template" {
   name = "Alert Instance Template"
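A sketch of the complete template resource, assuming the provider's `template` attribute (the template body is illustrative):

```HCL
resource "grafana_message_template" "my_alert_template" {
  name = "Alert Instance Template"

  template = <<-EOT
    {{ define "alert_instance_template" }}
      Firing: {{ .Labels.alertname }}
    {{ end }}
  EOT
}
```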
@@ -139,7 +139,7 @@ In this example, the alerts are grouped by `alertname`, which means that any not

 If you want to route specific notifications differently, you can add sub-policies. Sub-policies allow you to apply routing to different alerts based on label matching. In this example, we apply a mute timing to all alerts with the label a=b.

-```terraform
+```HCL
 resource "grafana_notification_policy" "my_policy" {
   group_by      = ["alertname"]
   contact_point = grafana_contact_point.my_slack_contact_point.name
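A sketch of the full policy with the sub-policy the surrounding text describes, assuming the provider's nested `policy` and `matcher` blocks:

```HCL
resource "grafana_notification_policy" "my_policy" {
  group_by      = ["alertname"]
  contact_point = grafana_contact_point.my_slack_contact_point.name

  # Sub-policy: mute alerts labeled a=b using the mute timing below.
  policy {
    contact_point = grafana_contact_point.my_slack_contact_point.name
    mute_timings  = [grafana_mute_timing.my_mute_timing.name]

    matcher {
      label = "a"
      match = "="
      value = "b"
    }
  }
}
```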
@@ -193,7 +193,7 @@ To provision mute timings, complete the following steps.

 In this example, alert notifications are muted on weekends.

-```terraform
+```HCL
 resource "grafana_mute_timing" "my_mute_timing" {
   name = "My Mute Timing"
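A sketch of the weekend mute timing the text describes, assuming the provider's `intervals` block:

```HCL
resource "grafana_mute_timing" "my_mute_timing" {
  name = "My Mute Timing"

  intervals {
    weekdays = ["saturday", "sunday"] # mute notifications on weekends
  }
}
```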
@@ -232,7 +232,7 @@ In this example, the [TestData]({{< relref "../../../../datasources/testdata" >}}

 Alerts can be defined against any backend datasource in Grafana.

-```terraform
+```HCL
 resource "grafana_data_source" "testdata_datasource" {
   name = "TestData"
   type = "testdata"
@@ -251,7 +251,7 @@ For more information on alert rules, refer to [how to create Grafana-managed ale

 In this example, the `grafana_rule_group` resource group is used.

-```terraform
+```HCL
 resource "grafana_rule_group" "my_rule_group" {
   name       = "My Alert Rules"
   folder_uid = grafana_folder.rule_folder.uid
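A heavily simplified sketch of the rest of the resource, assuming the provider's `rule` and `data` blocks (a production rule would normally chain reduce/threshold expression nodes rather than alert directly on query `A`):

```HCL
resource "grafana_rule_group" "my_rule_group" {
  name             = "My Alert Rules"
  folder_uid       = grafana_folder.rule_folder.uid
  interval_seconds = 60

  rule {
    name      = "My Random Walk Alert"
    condition = "A" # simplified; usually points at an expression node

    data {
      ref_id         = "A"
      datasource_uid = grafana_data_source.testdata_datasource.uid

      relative_time_range {
        from = 600
        to   = 0
      }

      model = jsonencode({
        refId         = "A"
        intervalMs    = 1000
        maxDataPoints = 43200
      })
    }
  }
}
```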