diff --git a/docs/sources/alerting/set-up/configure-high-availability/_index.md b/docs/sources/alerting/set-up/configure-high-availability/_index.md
index d2ba8f0ed30..fb2709a1cc0 100644
--- a/docs/sources/alerting/set-up/configure-high-availability/_index.md
+++ b/docs/sources/alerting/set-up/configure-high-availability/_index.md
@@ -36,12 +36,12 @@ Since gossiping of notifications and silences uses both TCP and UDP port `9094`,
 
 If you are using Kubernetes, you can expose the pod IP [through an environment variable](https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/) via the container definition.
 
-```bash
+```yaml
 env:
-- name: POD_IP
-  valueFrom:
-    fieldRef:
-      fieldPath: status.podIP
+  - name: POD_IP
+    valueFrom:
+      fieldRef:
+        fieldPath: status.podIP
 ```
 
 1. Add the port 9094 to the Grafana deployment:
diff --git a/docs/sources/alerting/set-up/meta-monitoring/_index.md b/docs/sources/alerting/set-up/meta-monitoring/_index.md
index 467266dc320..b20eeac1d77 100644
--- a/docs/sources/alerting/set-up/meta-monitoring/_index.md
+++ b/docs/sources/alerting/set-up/meta-monitoring/_index.md
@@ -21,7 +21,7 @@ Meta monitoring of Grafana Managed Alerts requires having a Prometheus server, o
 
 Here is an example of how this might look:
 
-```
+```yaml
 - job_name: grafana
   honor_timestamps: true
   scrape_interval: 15s
@@ -30,8 +30,8 @@ Here is an example of how this might look:
   scheme: http
   follow_redirects: true
   static_configs:
-  - targets:
-    - grafana:3000
+    - targets:
+        - grafana:3000
 ```
 
 The Grafana ruler, which is responsible for evaluating alert rules, and the Grafana Alertmanager, which is responsible for sending notifications of firing and resolved alerts, provide a number of metrics that let you observe them.
@@ -76,7 +76,7 @@ Meta monitoring in Alertmanager also requires having a Prometheus/Mimir server,
 
 Here is an example of how this might look:
 
-```
+```yaml
 - job_name: alertmanager
   honor_timestamps: true
   scrape_interval: 15s
@@ -85,8 +85,8 @@ Here is an example of how this might look:
   scheme: http
   follow_redirects: true
   static_configs:
-  - targets:
-    - alertmanager:9093
+    - targets:
+        - alertmanager:9093
 ```
 
 #### alertmanager_alerts
diff --git a/docs/sources/alerting/set-up/migrating-alerts/_index.md b/docs/sources/alerting/set-up/migrating-alerts/_index.md
index 8a8de277772..f3ac26197bc 100644
--- a/docs/sources/alerting/set-up/migrating-alerts/_index.md
+++ b/docs/sources/alerting/set-up/migrating-alerts/_index.md
@@ -50,7 +50,7 @@ When upgrading to Grafana > 9.0, existing installations that use legacy alerting
 1. Go to your custom configuration file ($WORKING_DIR/conf/custom.ini).
 2. Enter the following in your configuration:
 
-```
+```toml
 [alerting]
 enabled = true
 
@@ -73,7 +73,7 @@ You can deactivate both Grafana Alerting and legacy alerting in Grafana.
 1. Go to your custom configuration file ($WORKING_DIR/conf/custom.ini).
 1. Enter the following in your configuration:
 
-```
+```toml
 [alerting]
 enabled = false
 
@@ -93,7 +93,7 @@ All new alerts and changes made exclusively in Grafana Alerting will be deleted.
 
 To roll back to legacy alerting, enter the following in your configuration:
 
-```
+```toml
 force_migration = true
 
 [alerting]
@@ -113,7 +113,7 @@ If you have been using legacy alerting up until now your existing alerts will be
 
 To opt in to Grafana Alerting, enter the following in your configuration:
 
-```
+```toml
 [alerting]
 enabled = false
 
diff --git a/docs/sources/alerting/set-up/provision-alerting-resources/terraform-provisioning/index.md b/docs/sources/alerting/set-up/provision-alerting-resources/terraform-provisioning/index.md
index 4b469959a86..7e68e8c367b 100644
--- a/docs/sources/alerting/set-up/provision-alerting-resources/terraform-provisioning/index.md
+++ b/docs/sources/alerting/set-up/provision-alerting-resources/terraform-provisioning/index.md
@@ -52,7 +52,7 @@ Grafana Alerting support is included as part of the [Grafana Terraform provider]
 
 The following is an example you can use to configure the Terraform provider.
 
-```terraform
+```HCL
 terraform {
   required_providers {
     grafana = {
@@ -78,7 +78,7 @@ To provision contact points and templates, complete the following steps.
 
 This example creates a contact point that sends alert notifications to Slack.
 
-```terraform
+```HCL
 resource "grafana_contact_point" "my_slack_contact_point" {
   name = "Send to My Slack Channel"
 
@@ -114,7 +114,7 @@ You can re-use the same templates across many contact points. In the example abo
 
 This fragment can then be managed separately in Terraform:
 
-```terraform
+```HCL
 resource "grafana_message_template" "my_alert_template" {
   name = "Alert Instance Template"
 
@@ -139,7 +139,7 @@ In this example, the alerts are grouped by `alertname`, which means that any not
 
 If you want to route specific notifications differently, you can add sub-policies. Sub-policies allow you to apply routing to different alerts based on label matching. In this example, we apply a mute timing to all alerts with the label a=b.
 
-```terraform
+```HCL
 resource "grafana_notification_policy" "my_policy" {
   group_by      = ["alertname"]
   contact_point = grafana_contact_point.my_slack_contact_point.name
@@ -193,7 +193,7 @@ To provision mute timings, complete the following steps.
 
 In this example, alert notifications are muted on weekends.
 
-```terraform
+```HCL
 resource "grafana_mute_timing" "my_mute_timing" {
   name = "My Mute Timing"
 
@@ -232,7 +232,7 @@ In this example, the [TestData]({{< relref "../../../../datasources/testdata" >}
 Alerts can be defined against any backend datasource in Grafana.
 
-```terraform
+```HCL
 resource "grafana_data_source" "testdata_datasource" {
   name = "TestData"
   type = "testdata"
 }
@@ -251,7 +251,7 @@ For more information on alert rules, refer to [how to create Grafana-managed ale
 
 In this example, the `grafana_rule_group` resource group is used.
 
-```terraform
+```HCL
 resource "grafana_rule_group" "my_rule_group" {
   name       = "My Alert Rules"
   folder_uid = grafana_folder.rule_folder.uid