From 14225b07b2c72a90a119190a5716a5c0da25c73c Mon Sep 17 00:00:00 2001 From: achatterjee-grafana <70489351+achatterjee-grafana@users.noreply.github.com> Date: Wed, 27 Oct 2021 16:57:54 -0400 Subject: [PATCH] Docs: Cleanup alerting documentation, part 1 (#40737) * First commit. * Adding shared content. * More changes. * More changes * Updated few more topics, fixed broken relrefs. * Checking in changes. * Some more topics scrubbed. * Minor update. * Few more changes. * Index pages are finally somewhat sorted. Added relevant information and new topics. * Updated Alert grouping. * Last bunch of changes for today. * Updated folder names, relrefs, and some topic weights. * Fixed typo in L37, notifications topic. * Fixed another typo. * Run prettier. * Fixed remaining broken relrefs. * Minor reorg, added link to basics some overview topic. * Some more re-org of the basics section. * Some more changes. * More changes. * Update docs/sources/shared/alerts/grafana-managed-alerts.md Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com> * Update docs/sources/alerting/unified-alerting/_index.md Co-authored-by: Eve Meelan <81647476+Eve832@users.noreply.github.com> * Update docs/sources/alerting/unified-alerting/_index.md Co-authored-by: Eve Meelan <81647476+Eve832@users.noreply.github.com> * Update docs/sources/alerting/unified-alerting/opt-in.md Co-authored-by: Eve Meelan <81647476+Eve832@users.noreply.github.com> * Update docs/sources/alerting/unified-alerting/notification-policies.md Co-authored-by: Eve Meelan <81647476+Eve832@users.noreply.github.com> * Update docs/sources/alerting/unified-alerting/alert-groups.md Co-authored-by: Eve Meelan <81647476+Eve832@users.noreply.github.com> * Update docs/sources/alerting/unified-alerting/alerting-rules/_index.md Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com> * Update docs/sources/alerting/unified-alerting/alerting-rules/alert-annotation-label.md Co-authored-by: Eve Meelan 
<81647476+Eve832@users.noreply.github.com> * Update docs/sources/alerting/unified-alerting/alerting-rules/alert-annotation-label.md Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com> * Update docs/sources/alerting/unified-alerting/alerting-rules/alert-annotation-label.md Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com> * Update docs/sources/alerting/unified-alerting/alerting-rules/create-cortex-loki-managed-recording-rule.md Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com> * Ran prettier and applied suggestion from code review. * Update docs/sources/alerting/unified-alerting/message-templating/_index.md Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com> * Update docs/sources/alerting/unified-alerting/contact-points.md Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com> * Update docs/sources/alerting/unified-alerting/contact-points.md Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com> * Change from code review. Also fixed typo "bos" in playlist topic. * Ran prettier to fix formatting issues. 
* Update docs/sources/alerting/unified-alerting/alerting-rules/edit-cortex-loki-namespace-group.md Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com> * Update docs/sources/alerting/unified-alerting/contact-points.md Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com> * Update docs/sources/alerting/unified-alerting/basics/alertmanager.md Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com> * Update docs/sources/alerting/unified-alerting/basics/alertmanager.md Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com> * Update docs/sources/alerting/unified-alerting/basics/evaluate-grafana-alerts.md Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com> * Update docs/sources/alerting/unified-alerting/contact-points.md Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com> * More changes from code review. * Replaced drop down with drop-down * Fix broken relrefs * Update docs/sources/alerting/unified-alerting/alerting-rules/create-cortex-loki-managed-rule.md Co-authored-by: Eve Meelan <81647476+Eve832@users.noreply.github.com> * Update docs/sources/alerting/unified-alerting/alerting-rules/rule-list.md Co-authored-by: Eve Meelan <81647476+Eve832@users.noreply.github.com> * Few more. * Couple more. 
Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com> Co-authored-by: Eve Meelan <81647476+Eve832@users.noreply.github.com> --- .yarn/sdks/eslint/bin/eslint.js | 8 +- .yarn/sdks/eslint/lib/api.js | 8 +- .yarn/sdks/prettier/index.js | 8 +- .yarn/sdks/stylelint/bin/stylelint.js | 8 +- .yarn/sdks/stylelint/lib/index.js | 8 +- .yarn/sdks/typescript/lib/tsc.js | 8 +- .yarn/sdks/typescript/lib/tsserver.js | 134 ++++++++------- .yarn/sdks/typescript/lib/tsserverlibrary.js | 134 ++++++++------- .yarn/sdks/typescript/lib/typescript.js | 8 +- docs/sources/alerting/_index.md | 26 ++- docs/sources/alerting/difference-old-new.md | 26 --- docs/sources/alerting/old-alerting/_index.md | 37 +---- .../alerting/unified-alerting/_index.md | 60 ++----- .../alerting/unified-alerting/alert-groups.md | 23 ++- .../unified-alerting/alerting-rules/_index.md | 12 +- .../alerting-rules/alert-annotation-label.md | 60 +++++++ ...eate-cortex-loki-managed-recording-rule.md | 63 +++----- .../create-cortex-loki-managed-rule.md | 80 ++++----- .../create-grafana-managed-rule.md | 152 +++++------------- .../edit-cortex-loki-namespace-group.md | 29 ++-- .../alerting-rules/rule-list.md | 67 ++++---- .../alerting-rules/state-and-health.md | 35 ---- .../unified-alerting/contact-points.md | 83 ++++------ .../unified-alerting/difference-old-new.md | 26 +++ .../unified-alerting/fundamentals/_index.md | 13 ++ .../fundamentals/alertmanager.md | 17 ++ .../fundamentals/evaluate-grafana-alerts.md | 95 +++++++++++ .../fundamentals/state-and-health.md | 30 ++++ .../grafana-managed-numeric-rule.md | 67 -------- .../message-templating/_index.md | 69 ++++---- .../unified-alerting/notification-policies.md | 101 ++++++------ .../alerting/unified-alerting/opt-in.md | 40 +++-- .../alerting/unified-alerting/silences.md | 44 ++--- docs/sources/dashboards/playlist.md | 4 +- docs/sources/datasources/alertmanager.md | 2 +- .../legacy/defaults-and-editor-mode.md | 4 +- 
docs/sources/panels/panel-library.md | 4 +- .../release-notes/release-notes-7-3-0.md | 2 +- .../release-notes/release-notes-7-4-0.md | 6 +- .../release-notes/release-notes-8-1-2.md | 2 +- .../shared/alerts/grafana-managed-alerts.md | 31 ++++ docs/sources/sharing/playlists.md | 2 +- docs/sources/visualizations/geomap.md | 4 +- docs/sources/whatsnew/whats-new-in-v5-4.md | 2 +- docs/sources/whatsnew/whats-new-in-v7-1.md | 2 +- docs/sources/whatsnew/whats-new-in-v8-0.md | 2 +- 46 files changed, 824 insertions(+), 822 deletions(-) delete mode 100644 docs/sources/alerting/difference-old-new.md create mode 100644 docs/sources/alerting/unified-alerting/alerting-rules/alert-annotation-label.md delete mode 100644 docs/sources/alerting/unified-alerting/alerting-rules/state-and-health.md create mode 100644 docs/sources/alerting/unified-alerting/difference-old-new.md create mode 100644 docs/sources/alerting/unified-alerting/fundamentals/_index.md create mode 100644 docs/sources/alerting/unified-alerting/fundamentals/alertmanager.md create mode 100644 docs/sources/alerting/unified-alerting/fundamentals/evaluate-grafana-alerts.md create mode 100644 docs/sources/alerting/unified-alerting/fundamentals/state-and-health.md delete mode 100644 docs/sources/alerting/unified-alerting/grafana-managed-numeric-rule.md create mode 100644 docs/sources/shared/alerts/grafana-managed-alerts.md diff --git a/.yarn/sdks/eslint/bin/eslint.js b/.yarn/sdks/eslint/bin/eslint.js index 4d327a49a06..774cf54870e 100755 --- a/.yarn/sdks/eslint/bin/eslint.js +++ b/.yarn/sdks/eslint/bin/eslint.js @@ -1,10 +1,10 @@ #!/usr/bin/env node -const {existsSync} = require(`fs`); -const {createRequire, createRequireFromPath} = require(`module`); -const {resolve} = require(`path`); +const { existsSync } = require(`fs`); +const { createRequire, createRequireFromPath } = require(`module`); +const { resolve } = require(`path`); -const relPnpApiPath = "../../../../.pnp.cjs"; +const relPnpApiPath = '../../../../.pnp.cjs'; 
const absPnpApiPath = resolve(__dirname, relPnpApiPath); const absRequire = (createRequire || createRequireFromPath)(absPnpApiPath); diff --git a/.yarn/sdks/eslint/lib/api.js b/.yarn/sdks/eslint/lib/api.js index 97a052442a8..0f2216b364a 100644 --- a/.yarn/sdks/eslint/lib/api.js +++ b/.yarn/sdks/eslint/lib/api.js @@ -1,10 +1,10 @@ #!/usr/bin/env node -const {existsSync} = require(`fs`); -const {createRequire, createRequireFromPath} = require(`module`); -const {resolve} = require(`path`); +const { existsSync } = require(`fs`); +const { createRequire, createRequireFromPath } = require(`module`); +const { resolve } = require(`path`); -const relPnpApiPath = "../../../../.pnp.cjs"; +const relPnpApiPath = '../../../../.pnp.cjs'; const absPnpApiPath = resolve(__dirname, relPnpApiPath); const absRequire = (createRequire || createRequireFromPath)(absPnpApiPath); diff --git a/.yarn/sdks/prettier/index.js b/.yarn/sdks/prettier/index.js index f6882d80972..f5040146272 100755 --- a/.yarn/sdks/prettier/index.js +++ b/.yarn/sdks/prettier/index.js @@ -1,10 +1,10 @@ #!/usr/bin/env node -const {existsSync} = require(`fs`); -const {createRequire, createRequireFromPath} = require(`module`); -const {resolve} = require(`path`); +const { existsSync } = require(`fs`); +const { createRequire, createRequireFromPath } = require(`module`); +const { resolve } = require(`path`); -const relPnpApiPath = "../../../.pnp.cjs"; +const relPnpApiPath = '../../../.pnp.cjs'; const absPnpApiPath = resolve(__dirname, relPnpApiPath); const absRequire = (createRequire || createRequireFromPath)(absPnpApiPath); diff --git a/.yarn/sdks/stylelint/bin/stylelint.js b/.yarn/sdks/stylelint/bin/stylelint.js index 04ffaf8eefa..ebc40d7412a 100755 --- a/.yarn/sdks/stylelint/bin/stylelint.js +++ b/.yarn/sdks/stylelint/bin/stylelint.js @@ -1,10 +1,10 @@ #!/usr/bin/env node -const {existsSync} = require(`fs`); -const {createRequire, createRequireFromPath} = require(`module`); -const {resolve} = require(`path`); +const { 
existsSync } = require(`fs`); +const { createRequire, createRequireFromPath } = require(`module`); +const { resolve } = require(`path`); -const relPnpApiPath = "../../../../.pnp.cjs"; +const relPnpApiPath = '../../../../.pnp.cjs'; const absPnpApiPath = resolve(__dirname, relPnpApiPath); const absRequire = (createRequire || createRequireFromPath)(absPnpApiPath); diff --git a/.yarn/sdks/stylelint/lib/index.js b/.yarn/sdks/stylelint/lib/index.js index 1b0b443f818..09d3a62c27f 100644 --- a/.yarn/sdks/stylelint/lib/index.js +++ b/.yarn/sdks/stylelint/lib/index.js @@ -1,10 +1,10 @@ #!/usr/bin/env node -const {existsSync} = require(`fs`); -const {createRequire, createRequireFromPath} = require(`module`); -const {resolve} = require(`path`); +const { existsSync } = require(`fs`); +const { createRequire, createRequireFromPath } = require(`module`); +const { resolve } = require(`path`); -const relPnpApiPath = "../../../../.pnp.cjs"; +const relPnpApiPath = '../../../../.pnp.cjs'; const absPnpApiPath = resolve(__dirname, relPnpApiPath); const absRequire = (createRequire || createRequireFromPath)(absPnpApiPath); diff --git a/.yarn/sdks/typescript/lib/tsc.js b/.yarn/sdks/typescript/lib/tsc.js index 16042d01d4f..51dc5720100 100644 --- a/.yarn/sdks/typescript/lib/tsc.js +++ b/.yarn/sdks/typescript/lib/tsc.js @@ -1,10 +1,10 @@ #!/usr/bin/env node -const {existsSync} = require(`fs`); -const {createRequire, createRequireFromPath} = require(`module`); -const {resolve} = require(`path`); +const { existsSync } = require(`fs`); +const { createRequire, createRequireFromPath } = require(`module`); +const { resolve } = require(`path`); -const relPnpApiPath = "../../../../.pnp.cjs"; +const relPnpApiPath = '../../../../.pnp.cjs'; const absPnpApiPath = resolve(__dirname, relPnpApiPath); const absRequire = (createRequire || createRequireFromPath)(absPnpApiPath); diff --git a/.yarn/sdks/typescript/lib/tsserver.js b/.yarn/sdks/typescript/lib/tsserver.js index 71e35cf6119..d26f9c85c96 100644 --- 
a/.yarn/sdks/typescript/lib/tsserver.js +++ b/.yarn/sdks/typescript/lib/tsserver.js @@ -1,28 +1,30 @@ #!/usr/bin/env node -const {existsSync} = require(`fs`); -const {createRequire, createRequireFromPath} = require(`module`); -const {resolve} = require(`path`); +const { existsSync } = require(`fs`); +const { createRequire, createRequireFromPath } = require(`module`); +const { resolve } = require(`path`); -const relPnpApiPath = "../../../../.pnp.cjs"; +const relPnpApiPath = '../../../../.pnp.cjs'; const absPnpApiPath = resolve(__dirname, relPnpApiPath); const absRequire = (createRequire || createRequireFromPath)(absPnpApiPath); -const moduleWrapper = tsserver => { +const moduleWrapper = (tsserver) => { if (!process.versions.pnp) { return tsserver; } - const {isAbsolute} = require(`path`); + const { isAbsolute } = require(`path`); const pnpApi = require(`pnpapi`); - const isVirtual = str => str.match(/\/(\$\$virtual|__virtual__)\//); - const normalize = str => str.replace(/\\/g, `/`).replace(/^\/?/, `/`); + const isVirtual = (str) => str.match(/\/(\$\$virtual|__virtual__)\//); + const normalize = (str) => str.replace(/\\/g, `/`).replace(/^\/?/, `/`); - const dependencyTreeRoots = new Set(pnpApi.getDependencyTreeRoots().map(locator => { - return `${locator.name}@${locator.reference}`; - })); + const dependencyTreeRoots = new Set( + pnpApi.getDependencyTreeRoots().map((locator) => { + return `${locator.name}@${locator.reference}`; + }) + ); // VSCode sends the zip paths to TS using the "zip://" prefix, that TS // doesn't understand. 
This layer makes sure to remove the protocol @@ -64,33 +66,43 @@ const moduleWrapper = tsserver => { // Before | ^zip:/c:/foo/bar.zip/package.json // After | ^/zip//c:/foo/bar.zip/package.json // - case `vscode <1.61`: { - str = `^zip:${str}`; - } break; + case `vscode <1.61`: + { + str = `^zip:${str}`; + } + break; - case `vscode`: { - str = `^/zip/${str}`; - } break; + case `vscode`: + { + str = `^/zip/${str}`; + } + break; // To make "go to definition" work, // We have to resolve the actual file system path from virtual path // and convert scheme to supported by [vim-rzip](https://github.com/lbrayner/vim-rzip) - case `coc-nvim`: { - str = normalize(resolved).replace(/\.zip\//, `.zip::`); - str = resolve(`zipfile:${str}`); - } break; + case `coc-nvim`: + { + str = normalize(resolved).replace(/\.zip\//, `.zip::`); + str = resolve(`zipfile:${str}`); + } + break; // Support neovim native LSP and [typescript-language-server](https://github.com/theia-ide/typescript-language-server) // We have to resolve the actual file system path from virtual path, // everything else is up to neovim - case `neovim`: { - str = normalize(resolved).replace(/\.zip\//, `.zip::`); - str = `zipfile:${str}`; - } break; + case `neovim`: + { + str = normalize(resolved).replace(/\.zip\//, `.zip::`); + str = `zipfile:${str}`; + } + break; - default: { - str = `zip:${str}`; - } break; + default: + { + str = `zip:${str}`; + } + break; } } } @@ -101,22 +113,24 @@ const moduleWrapper = tsserver => { function fromEditorPath(str) { switch (hostInfo) { case `coc-nvim`: - case `neovim`: { - str = str.replace(/\.zip::/, `.zip/`); - // The path for coc-nvim is in format of //zipfile://.yarn/... - // So in order to convert it back, we use .* to match all the thing - // before `zipfile:` - return process.platform === `win32` - ? 
str.replace(/^.*zipfile:\//, ``) - : str.replace(/^.*zipfile:/, ``); - } break; + case `neovim`: + { + str = str.replace(/\.zip::/, `.zip/`); + // The path for coc-nvim is in format of //zipfile://.yarn/... + // So in order to convert it back, we use .* to match all the thing + // before `zipfile:` + return process.platform === `win32` ? str.replace(/^.*zipfile:\//, ``) : str.replace(/^.*zipfile:/, ``); + } + break; case `vscode`: - default: { - return process.platform === `win32` - ? str.replace(/^\^?(zip:|\/zip)\/+/, ``) - : str.replace(/^\^?(zip:|\/zip)\/+/, `/`); - } break; + default: + { + return process.platform === `win32` + ? str.replace(/^\^?(zip:|\/zip)\/+/, ``) + : str.replace(/^\^?(zip:|\/zip)\/+/, `/`); + } + break; } } @@ -128,8 +142,8 @@ const moduleWrapper = tsserver => { // TypeScript already does local loads and if this code is running the user trusts the workspace // https://github.com/microsoft/vscode/issues/45856 const ConfiguredProject = tsserver.server.ConfiguredProject; - const {enablePluginsWithOptions: originalEnablePluginsWithOptions} = ConfiguredProject.prototype; - ConfiguredProject.prototype.enablePluginsWithOptions = function() { + const { enablePluginsWithOptions: originalEnablePluginsWithOptions } = ConfiguredProject.prototype; + ConfiguredProject.prototype.enablePluginsWithOptions = function () { this.projectService.allowLocalPluginLoads = true; return originalEnablePluginsWithOptions.apply(this, arguments); }; @@ -139,12 +153,12 @@ const moduleWrapper = tsserver => { // like an absolute path of ours and normalize it. 
const Session = tsserver.server.Session; - const {onMessage: originalOnMessage, send: originalSend} = Session.prototype; + const { onMessage: originalOnMessage, send: originalSend } = Session.prototype; let hostInfo = `unknown`; Object.assign(Session.prototype, { onMessage(/** @type {string} */ message) { - const parsedMessage = JSON.parse(message) + const parsedMessage = JSON.parse(message); if ( parsedMessage != null && @@ -153,21 +167,33 @@ const moduleWrapper = tsserver => { typeof parsedMessage.arguments.hostInfo === `string` ) { hostInfo = parsedMessage.arguments.hostInfo; - if (hostInfo === `vscode` && process.env.VSCODE_IPC_HOOK && process.env.VSCODE_IPC_HOOK.match(/Code\/1\.([1-5][0-9]|60)\./)) { + if ( + hostInfo === `vscode` && + process.env.VSCODE_IPC_HOOK && + process.env.VSCODE_IPC_HOOK.match(/Code\/1\.([1-5][0-9]|60)\./) + ) { hostInfo += ` <1.61`; } } - return originalOnMessage.call(this, JSON.stringify(parsedMessage, (key, value) => { - return typeof value === `string` ? fromEditorPath(value) : value; - })); + return originalOnMessage.call( + this, + JSON.stringify(parsedMessage, (key, value) => { + return typeof value === `string` ? fromEditorPath(value) : value; + }) + ); }, send(/** @type {any} */ msg) { - return originalSend.call(this, JSON.parse(JSON.stringify(msg, (key, value) => { - return typeof value === `string` ? toEditorPath(value) : value; - }))); - } + return originalSend.call( + this, + JSON.parse( + JSON.stringify(msg, (key, value) => { + return typeof value === `string` ? 
toEditorPath(value) : value; + }) + ) + ); + }, }); return tsserver; diff --git a/.yarn/sdks/typescript/lib/tsserverlibrary.js b/.yarn/sdks/typescript/lib/tsserverlibrary.js index 7a2d65ea220..d0516a25036 100644 --- a/.yarn/sdks/typescript/lib/tsserverlibrary.js +++ b/.yarn/sdks/typescript/lib/tsserverlibrary.js @@ -1,28 +1,30 @@ #!/usr/bin/env node -const {existsSync} = require(`fs`); -const {createRequire, createRequireFromPath} = require(`module`); -const {resolve} = require(`path`); +const { existsSync } = require(`fs`); +const { createRequire, createRequireFromPath } = require(`module`); +const { resolve } = require(`path`); -const relPnpApiPath = "../../../../.pnp.cjs"; +const relPnpApiPath = '../../../../.pnp.cjs'; const absPnpApiPath = resolve(__dirname, relPnpApiPath); const absRequire = (createRequire || createRequireFromPath)(absPnpApiPath); -const moduleWrapper = tsserver => { +const moduleWrapper = (tsserver) => { if (!process.versions.pnp) { return tsserver; } - const {isAbsolute} = require(`path`); + const { isAbsolute } = require(`path`); const pnpApi = require(`pnpapi`); - const isVirtual = str => str.match(/\/(\$\$virtual|__virtual__)\//); - const normalize = str => str.replace(/\\/g, `/`).replace(/^\/?/, `/`); + const isVirtual = (str) => str.match(/\/(\$\$virtual|__virtual__)\//); + const normalize = (str) => str.replace(/\\/g, `/`).replace(/^\/?/, `/`); - const dependencyTreeRoots = new Set(pnpApi.getDependencyTreeRoots().map(locator => { - return `${locator.name}@${locator.reference}`; - })); + const dependencyTreeRoots = new Set( + pnpApi.getDependencyTreeRoots().map((locator) => { + return `${locator.name}@${locator.reference}`; + }) + ); // VSCode sends the zip paths to TS using the "zip://" prefix, that TS // doesn't understand. 
This layer makes sure to remove the protocol @@ -64,33 +66,43 @@ const moduleWrapper = tsserver => { // Before | ^zip:/c:/foo/bar.zip/package.json // After | ^/zip//c:/foo/bar.zip/package.json // - case `vscode <1.61`: { - str = `^zip:${str}`; - } break; + case `vscode <1.61`: + { + str = `^zip:${str}`; + } + break; - case `vscode`: { - str = `^/zip/${str}`; - } break; + case `vscode`: + { + str = `^/zip/${str}`; + } + break; // To make "go to definition" work, // We have to resolve the actual file system path from virtual path // and convert scheme to supported by [vim-rzip](https://github.com/lbrayner/vim-rzip) - case `coc-nvim`: { - str = normalize(resolved).replace(/\.zip\//, `.zip::`); - str = resolve(`zipfile:${str}`); - } break; + case `coc-nvim`: + { + str = normalize(resolved).replace(/\.zip\//, `.zip::`); + str = resolve(`zipfile:${str}`); + } + break; // Support neovim native LSP and [typescript-language-server](https://github.com/theia-ide/typescript-language-server) // We have to resolve the actual file system path from virtual path, // everything else is up to neovim - case `neovim`: { - str = normalize(resolved).replace(/\.zip\//, `.zip::`); - str = `zipfile:${str}`; - } break; + case `neovim`: + { + str = normalize(resolved).replace(/\.zip\//, `.zip::`); + str = `zipfile:${str}`; + } + break; - default: { - str = `zip:${str}`; - } break; + default: + { + str = `zip:${str}`; + } + break; } } } @@ -101,22 +113,24 @@ const moduleWrapper = tsserver => { function fromEditorPath(str) { switch (hostInfo) { case `coc-nvim`: - case `neovim`: { - str = str.replace(/\.zip::/, `.zip/`); - // The path for coc-nvim is in format of //zipfile://.yarn/... - // So in order to convert it back, we use .* to match all the thing - // before `zipfile:` - return process.platform === `win32` - ? 
str.replace(/^.*zipfile:\//, ``) - : str.replace(/^.*zipfile:/, ``); - } break; + case `neovim`: + { + str = str.replace(/\.zip::/, `.zip/`); + // The path for coc-nvim is in format of //zipfile://.yarn/... + // So in order to convert it back, we use .* to match all the thing + // before `zipfile:` + return process.platform === `win32` ? str.replace(/^.*zipfile:\//, ``) : str.replace(/^.*zipfile:/, ``); + } + break; case `vscode`: - default: { - return process.platform === `win32` - ? str.replace(/^\^?(zip:|\/zip)\/+/, ``) - : str.replace(/^\^?(zip:|\/zip)\/+/, `/`); - } break; + default: + { + return process.platform === `win32` + ? str.replace(/^\^?(zip:|\/zip)\/+/, ``) + : str.replace(/^\^?(zip:|\/zip)\/+/, `/`); + } + break; } } @@ -128,8 +142,8 @@ const moduleWrapper = tsserver => { // TypeScript already does local loads and if this code is running the user trusts the workspace // https://github.com/microsoft/vscode/issues/45856 const ConfiguredProject = tsserver.server.ConfiguredProject; - const {enablePluginsWithOptions: originalEnablePluginsWithOptions} = ConfiguredProject.prototype; - ConfiguredProject.prototype.enablePluginsWithOptions = function() { + const { enablePluginsWithOptions: originalEnablePluginsWithOptions } = ConfiguredProject.prototype; + ConfiguredProject.prototype.enablePluginsWithOptions = function () { this.projectService.allowLocalPluginLoads = true; return originalEnablePluginsWithOptions.apply(this, arguments); }; @@ -139,12 +153,12 @@ const moduleWrapper = tsserver => { // like an absolute path of ours and normalize it. 
const Session = tsserver.server.Session; - const {onMessage: originalOnMessage, send: originalSend} = Session.prototype; + const { onMessage: originalOnMessage, send: originalSend } = Session.prototype; let hostInfo = `unknown`; Object.assign(Session.prototype, { onMessage(/** @type {string} */ message) { - const parsedMessage = JSON.parse(message) + const parsedMessage = JSON.parse(message); if ( parsedMessage != null && @@ -153,21 +167,33 @@ const moduleWrapper = tsserver => { typeof parsedMessage.arguments.hostInfo === `string` ) { hostInfo = parsedMessage.arguments.hostInfo; - if (hostInfo === `vscode` && process.env.VSCODE_IPC_HOOK && process.env.VSCODE_IPC_HOOK.match(/Code\/1\.([1-5][0-9]|60)\./)) { + if ( + hostInfo === `vscode` && + process.env.VSCODE_IPC_HOOK && + process.env.VSCODE_IPC_HOOK.match(/Code\/1\.([1-5][0-9]|60)\./) + ) { hostInfo += ` <1.61`; } } - return originalOnMessage.call(this, JSON.stringify(parsedMessage, (key, value) => { - return typeof value === `string` ? fromEditorPath(value) : value; - })); + return originalOnMessage.call( + this, + JSON.stringify(parsedMessage, (key, value) => { + return typeof value === `string` ? fromEditorPath(value) : value; + }) + ); }, send(/** @type {any} */ msg) { - return originalSend.call(this, JSON.parse(JSON.stringify(msg, (key, value) => { - return typeof value === `string` ? toEditorPath(value) : value; - }))); - } + return originalSend.call( + this, + JSON.parse( + JSON.stringify(msg, (key, value) => { + return typeof value === `string` ? 
toEditorPath(value) : value; + }) + ) + ); + }, }); return tsserver; diff --git a/.yarn/sdks/typescript/lib/typescript.js b/.yarn/sdks/typescript/lib/typescript.js index cbdbf1500fb..6185deec60f 100644 --- a/.yarn/sdks/typescript/lib/typescript.js +++ b/.yarn/sdks/typescript/lib/typescript.js @@ -1,10 +1,10 @@ #!/usr/bin/env node -const {existsSync} = require(`fs`); -const {createRequire, createRequireFromPath} = require(`module`); -const {resolve} = require(`path`); +const { existsSync } = require(`fs`); +const { createRequire, createRequireFromPath } = require(`module`); +const { resolve } = require(`path`); -const relPnpApiPath = "../../../../.pnp.cjs"; +const relPnpApiPath = '../../../../.pnp.cjs'; const absPnpApiPath = resolve(__dirname, relPnpApiPath); const absRequire = (createRequire || createRequireFromPath)(absPnpApiPath); diff --git a/docs/sources/alerting/_index.md b/docs/sources/alerting/_index.md index 51aae41f55b..26f10d688b3 100644 --- a/docs/sources/alerting/_index.md +++ b/docs/sources/alerting/_index.md @@ -7,25 +7,21 @@ weight = 110 Alerts allow you to learn about problems in your systems moments after they occur. Robust and actionable alerts help you identify and resolve issues quickly, minimizing disruption to your services. -Grafana 8.0 has new and improved alerts that centralize alerting information for Grafana managed alerts as well as alerts from Prometheus-compatible data sources into one user interface and API. +Grafana 8.0 has new and improved alerting that centralizes alerting information in a single, searchable view. It allows you to: -> **Note:** Grafana 8 alerts are an [opt-in]({{< relref "./unified-alerting/opt-in.md" >}}) feature. Out of the box, Grafana still supports old [legacy dashboard alerts]({{< relref "./old-alerting/_index.md" >}}). We encourage you to create issues in the Grafana GitHub repository for bugs found while testing Grafana 8 alerts. 
+- Create and manage Grafana managed alerts +- Create and manage Cortex and Loki managed alerts +- View alerting information from Prometheus compatible data sources -Alerts have four main components: +Grafana 8 alerting has four key components: -- Alerting rule - One or more queries and/or expressions, conditions, evaluation frequencies, and the (optional) duration that a condition must be met before creating an alert. -- Contact point - A channel for sending notifications when the conditions of an alerting rule are met. -- Notification policy - A set of matching and grouping criteria used to determine where, and how frequently, to send notifications. +- Alerting rule - Evaluation criteria that determine whether an alert will fire. It consists of one or more queries and expressions, a condition, the frequency of evaluation, and optionally, the duration over which the condition is met. +- Contact point - Channel for sending notifications when the conditions of an alerting rule are met. +- Notification policy - Set of matching and grouping criteria used to determine where and how frequently to send notifications. - Silences - Date and matching criteria used to silence notifications. -You can create and edit alerting rules for Grafana managed alerts, Cortex alerts, and Loki alerts, as well as see alerting information from Prometheus-compatible data sources, in a single searchable view. For more information on how to create and edit alerts and notifications, refer to [Overview of Grafana 8.0 alerts]({{< relref "../alerting/unified-alerting/_index.md" >}}). +To learn more, see [What's New with Grafana 8 alerting]({{< relref "../alerting/unified-alerting/difference-old-new.md" >}}). -For handling notifications for Grafana managed alerts, we use an embedded alert manager. 
You can configure its contact points, notification policies, silences, and templates from the new Grafana alerting UI by selecting `Grafana` from the Alertmanager dropdown on the top of the respective tab. +For information on how to create and manage Grafana 8 alerts and notifications, refer to [Overview of Grafana 8.0 alerts]({{< relref "../alerting/unified-alerting/_index.md" >}}) and [Create and manage Grafana 8 alerting rules]({{< relref "./unified-alerting/alerting-rules/_index.md" >}}). -> **Note:** Currently the configuration of this embedded Alertmanager is shared across organisations. Therefore, users are advised to use the new Grafana 8 Alerts only if they have just one organization. Otherwise all contact points, notification policies, silences, and templates for Grafana managed alerts will be visible by all organizations. - -As part of the new alert changes, we have introduced a new data source, Alertmanager, which includes built-in support for Prometheus Alertmanager. It is presently in alpha and it is not accessible unless alpha plugins are enabled in Grafana settings. For more information, refer to [Alertmanager data source]({{< relref "../datasources/alertmanager.md" >}}). If such a data source is present, then you can view and modify its silences, contact points and notification policies from the Grafana alerting UI by selecting it from the Alertmanager dropdown on the top of the respective tab. - -> **Note:** Out of the box, Grafana still supports old Grafana alerts. They are legacy alerts at this time, and will be deprecated in a future release. For more information, refer to [Legacy Grafana alerts]({{< relref "./old-alerting/_index.md" >}}). - -To learn more about the differences between new alerts and the legacy alerts, refer to [What's New with Grafana 8 Alerts]({{< relref "../alerting/difference-old-new.md" >}}). +> **Note:** Grafana 8 alerting is an [opt-in]({{< relref "./unified-alerting/opt-in.md" >}}) feature. 
Out of the box, Grafana still supports old [legacy dashboard alerts]({{< relref "./old-alerting/_index.md" >}}). We encourage you to create issues in the Grafana GitHub repository for bugs found while testing Grafana 8 alerts. diff --git a/docs/sources/alerting/difference-old-new.md b/docs/sources/alerting/difference-old-new.md deleted file mode 100644 index ff4f760cb73..00000000000 --- a/docs/sources/alerting/difference-old-new.md +++ /dev/null @@ -1,26 +0,0 @@ -+++ -title = "What's New with Grafana 8 alerts" -description = "What's New with Grafana 8 Alerts" -keywords = ["grafana", "alerting", "guide"] -weight = 112 -+++ - -# What's New with Grafana 8 alerts - -The alerts released with Grafana 8.0 centralizes alerting information for Grafana managed alerts and alerts from Prometheus-compatible datasources in one UI and API. You can create and edit alerting rules for Grafana managed alerts, Cortex alerts, and Loki alerts as well as see alerting information from prometheus-compatible datasources in a single, searchable view. - -## Multi-dimensional alerting - -Create alerts that will give you system-wide visibility with a single alerting rule. With Grafana 8 alerts, you are able to generate multiple alert instances from a single rule eg. creating a rule to monitor disk usage for multiple mount points on a single host. The evaluation engine is able to return multiple time series from a single query. Each time series is identified by its label set. - -## Create alerts outside of Dashboards - -Grafana legacy alerts were tied to a dashboard. Grafana 8 Alerts allow you to create queries and expressions that can combine data from multiple sources, in unique ways. You are still able to link dashboards and panels to alerting rules, allowing you to quickly troubleshoot the system under observation, by linking a dashboard and/or panel ID to the alerting rule. 
- -## Create Loki and Cortex alerting rules - -With Grafana 8 Alerts you are able to manage your Loki and Cortex alerting rules using the same UI and API as your Grafana managed alerts. - -## View and search for alerts from Prometheus - -You can now display all of your alerting information in one, searchable UI. Alerts for Prometheus compatible datasources are listed below Grafana managed alerts. Search for labels across multiple datasources to quickly find all of the relevant alerts. diff --git a/docs/sources/alerting/old-alerting/_index.md b/docs/sources/alerting/old-alerting/_index.md index 710cb1169ef..262edccbf27 100644 --- a/docs/sources/alerting/old-alerting/_index.md +++ b/docs/sources/alerting/old-alerting/_index.md @@ -5,14 +5,7 @@ weight = 114 # Legacy Grafana alerts -Grafana 8.0 has [new and improved alerts]({{< relref "../unified-alerting/_index.md" >}}). The new alerting system are an opt-in feature that centralizes alerting information for Grafana managed alerts and alerts from Prometheus-compatible data sources in one UI and API. - -Out of the box, Grafana still supports legacy dashboard alerts. Legacy Grafana alerts consists of two parts: - -Alert rules - When the alert is triggered. Alert rules are defined by one or more conditions that are regularly evaluated by Grafana. -Notification channel - How the alert is delivered. When the conditions of an alert rule are met, the Grafana notifies the channels configured for that alert. - -Currently only the graph panel visualization supports alerts. +Out of the box, Grafana still supports legacy dashboard alerts. If you are using version 8.0 or later, you can [opt-in]({{< relref "../unified-alerting/opt-in.md" >}}) to use Grafana 8 alerts. See [What's New with Grafana 8 alerting]({{< relref "../unified-alerting/difference-old-new.md" >}}) for more information. 
Legacy alerts have two main components: @@ -28,30 +21,4 @@ You can perform the following tasks for alerts: - [Test alert rules and troubleshoot]({{< relref "troubleshoot-alerts.md" >}}) - [Add or edit an alert contact point]({{< relref "notifications.md" >}}) -## Clustering - -Currently alerting supports a limited form of high availability. Since v4.2.0 of Grafana, alert notifications are deduped when running multiple servers. This means all alerts are executed on every server but no duplicate alert notifications are sent due to the deduping logic. Proper load balancing of alerts will be introduced in the future. - -## Alert evaluation - -Grafana managed alerts are evaluated by the Grafana backend. Rule evaluations are scheduled, according to the alert rule configuration, and queries are evaluated by an engine that is part of core Grafana. - -Alert rules can only query backend data sources with alerting enabled: - -- builtin or developed and maintained by grafana: `Graphite`, `Prometheus`, `Loki`, `InfluxDB`, `Elasticsearch`, - `Google Cloud Monitoring`, `Cloudwatch`, `Azure Monitor`, `MySQL`, `PostgreSQL`, `MSSQL`, `OpenTSDB`, `Oracle`, and `Azure Data Explorer` -- any community backend data sources with alerting enabled (`backend` and `alerting` properties are set in the [plugin.json]({{< relref "../../developers/plugins/metadata.md" >}})) - -## Metrics from the alert engine - -The alert engine publishes some internal metrics about itself. You can read more about how Grafana publishes [internal metrics]({{< relref "../../administration/view-server/internal-metrics.md" >}}). 
- -| Metric Name | Type | Description | -| ------------------------------------------- | --------- | ---------------------------------------------------------------------------------------- | -| `alerting.alerts` | gauge | How many alerts by state | -| `alerting.request_duration_seconds` | histogram | Histogram of requests to the Alerting API | -| `alerting.active_configurations` | gauge | The number of active, non default alertmanager configurations for grafana managed alerts | -| `alerting.rule_evaluations_total` | counter | The total number of rule evaluations | -| `alerting.rule_evaluation_failures_total` | counter | The total number of rule evaluation failures | -| `alerting.rule_evaluation_duration_seconds` | summary | The duration for a rule to execute | -| `alerting.rule_group_rules` | gauge | The number of rules | +{{< docs/shared "alerts/grafana-managed-alerts.md" >}} diff --git a/docs/sources/alerting/unified-alerting/_index.md b/docs/sources/alerting/unified-alerting/_index.md index 20bc47451b2..f6f5a4e9c24 100644 --- a/docs/sources/alerting/unified-alerting/_index.md +++ b/docs/sources/alerting/unified-alerting/_index.md @@ -4,59 +4,23 @@ aliases = ["/docs/grafana/latest/alerting/metrics/"] weight = 113 +++ -# Overview of Grafana 8 alerts +# Overview of Grafana 8 alerting -Grafana 8.0 has a new and improved alerting sub-system that centralizes alerting information for Grafana managed alerts and alerts from Prometheus-compatible data sources into one user interface and API. +Grafana 8.0 has new and improved alerting that centralizes alerting information in a single, searchable view. It is an [opt-in]({{< relref "./opt-in.md" >}}) feature. We encourage you to create issues in the Grafana GitHub repository for bugs found while testing Grafana 8 alerting. See also, [What's New with Grafana 8 alerting]({{< relref "./difference-old-new.md" >}}). -> **Note:** Grafana 8 alerts is an [opt-in]({{< relref "../unified-alerting/opt-in.md" >}}) feature. 
Out of the box, Grafana still supports old [legacy dashboard alerts]({{< relref "../old-alerting/_index.md" >}}). We encourage you to create issues in the Grafana GitHub repository for bugs found while testing Grafana 8 alerts. +When Grafana 8 alerting is enabled, you can: -Grafana 8 alerts have four main components: - -- Alerting rule - One or more query and/or expression, a condition, the frequency of evaluation, and the (optional) duration that a condition must be met before creating an alert. -- Contact point - A channel for sending notifications when the conditions of an alerting rule are met. -- Notification policy - A set of matching and grouping criteria used to determine where, and how frequently, to send notifications. -- Silences - Date and matching criteria used to silence notifications. - -## Alerting tasks - -You can perform the following tasks for alerts: - -- [Create a Grafana managed alert rule]({{< relref "alerting-rules/create-grafana-managed-rule.md" >}}) -- [Create a Cortex or Loki managed alert rule]({{< relref "alerting-rules/create-cortex-loki-managed-rule.md" >}}) -- [View existing alert rules and their current state]({{< relref "alerting-rules/rule-list.md" >}}) -- [View state and health of alerting rules]({{< relref "alerting-rules/state-and-health.md" >}}) +- [Create Grafana managed alerting rules]({{< relref "alerting-rules/create-grafana-managed-rule.md" >}}) +- [Create Cortex or Loki managed alerting rules]({{< relref "alerting-rules/create-cortex-loki-managed-rule.md" >}}) +- [View existing alerting rules and manage their current state]({{< relref "alerting-rules/rule-list.md" >}}) +- [View the state and health of alerting rules]({{< relref "./fundamentals/state-and-health.md" >}}) - [Add or edit an alert contact point]({{< relref "./contact-points.md" >}}) - [Add or edit notification policies]({{< relref "./notification-policies.md" >}}) -- [Create and edit silences]({{< relref "./silences.md" >}}) +- [Add or edit silences]({{<
relref "./silences.md" >}}) -## Clustering +Before you begin using Grafana 8 alerting, we recommend that you familiarize yourself with some [basic concepts]({{< relref "./fundamentals/_index.md" >}}) of Grafana 8 alerting. -The current alerting system doesn't support high availability. Alert notifications are not deduplicated and load balancing is not supported between instances e.g. silences from one instance will not appear in the other. The Grafana team aims to have this feature by Grafana version 8.1+. +## Limitations -## Alert evaluation - -Grafana managed alerts are evaluated by the Grafana backend. Rule evaluations are scheduled, according to the alert rule configuration, and queries are evaluated by an engine that is part of core Grafana. - -Alerting rules can only query backend data sources with alerting enabled: - -- builtin or developed and maintained by grafana: `Graphite`, `Prometheus`, `Loki`, `InfluxDB`, `Elasticsearch`, - `Google Cloud Monitoring`, `Cloudwatch`, `Azure Monitor`, `MySQL`, `PostgreSQL`, `MSSQL`, `OpenTSDB`, `Oracle`, and `Azure Data Explorer` -- any community backend data sources with alerting enabled (`backend` and `alerting` properties are set in the [plugin.json]({{< relref "../../developers/plugins/metadata.md" >}})) - -## Metrics from the alerting engine - -The alerting engine publishes some internal metrics about itself. You can read more about how Grafana publishes [internal metrics]({{< relref "../../administration/view-server/internal-metrics.md" >}}). See also, [View alert rules and their current state]({{< relref "alerting-rules/rule-list.md" >}}). 
- -| Metric Name | Type | Description | -| ------------------------------------------------- | --------- | ---------------------------------------------------------------------------------------- | -| `grafana_alerting_alerts` | gauge | How many alerts by state | -| `grafana_alerting_request_duration` | histogram | Histogram of requests to the Alerting API | -| `grafana_alerting_active_configurations` | gauge | The number of active, non default Alertmanager configurations for grafana managed alerts | -| `grafana_alerting_rule_evaluations_total` | counter | The total number of rule evaluations | -| `grafana_alerting_rule_evaluation_failures_total` | counter | The total number of rule evaluation failures | -| `grafana_alerting_rule_evaluation_duration` | summary | The duration for a rule to execute | -| `grafana_alerting_rule_group_rules` | gauge | The number of rules | - -## Limitation - -Grafana 8 alerting system can retrieve rules from all available Prometheus, Loki, and Alertmanager data sources. It might not be able to fetch rules from all other supported data sources at this time. +- Grafana 8 alerting doesn’t support high availability. Alert notifications are not de-duplicated and load balancing is not supported between instances. For example, silences from one instance will not appear in another. +- The Grafana 8 alerting system can retrieve rules from all available Prometheus, Loki, and Alertmanager data sources. It might not be able to fetch rules from other supported data sources. diff --git a/docs/sources/alerting/unified-alerting/alert-groups.md b/docs/sources/alerting/unified-alerting/alert-groups.md index 3aeca4cc782..2a947bf212e 100644 --- a/docs/sources/alerting/unified-alerting/alert-groups.md +++ b/docs/sources/alerting/unified-alerting/alert-groups.md @@ -5,23 +5,22 @@ keywords = ["grafana", "alerting", "alerts", "groups"] weight = 400 +++ -# View alert groups +# Alert groups -Alert groups shows grouped alerts from an alertmanager instance. 
Alertmanager will group alerts based on common label values. This prevents duplicate alerts from being fired by grouping common alerts into a single alert group. By default, the alerts are grouped by the label keys for the root policy in [notification policies]({{< relref "./notification-policies.md" >}}). +Alert groups show grouped alerts from an Alertmanager instance. By default, the alerts are grouped by the label keys for the root policy in [notification policies]({{< relref "./notification-policies.md" >}}). Grouping common alerts into a single alert group prevents duplicate alerts from being fired. -## Show alert groups for an external Alertmanager +## View alert groupings -Grafana alerting UI supports alert groups from external Alertmanager data sources. Once you add an [Alertmanager data source]({{< relref "../../datasources/alertmanager.md" >}}), a dropdown displays at the top of the page where you can select either `Grafana` or an external Alertmanager as your data source. +1. In the Grafana menu, click the **Alerting** (bell) icon to open the Alerting page listing existing alerts. +1. Click **Alert grouping** to open the page listing existing groups. +1. From the **Alertmanager** drop-down, select an external Alertmanager as your data source. By default, the `Grafana` Alertmanager is selected. +1. From the **custom group by** drop-down, select a combination of labels to view a grouping other than the default. This is useful for debugging and verifying your grouping of notification policies. -## View different alert groupings - -To view a grouping other than the default use the **custom group by** dropdown to select combinations of labels to group alerts by. This is useful for debugging and verifying your notification policies grouping. - -If an alert does not contain labels specified in the grouping of the route policy or the custom grouping it will be added to a catch all group with a header of `No grouping`.
+If an alert does not contain labels specified either in the grouping of the root policy or the custom grouping, then the alert is added to a catch-all group with a header of `No grouping`. ## Filter alerts -You can use the following filters to view only alerts that match specific criteria: +You can use the following filters to view alerts that match specific criteria: -- **Filter alerts by label -** Search by alert labels using label selectors in the **Search** input. eg: `environment=production,region=~US|EU,severity!=warning` -- **Filter alerts by state -** In **States** Select which alert states you want to see. All others are hidden. +- **Search by label:** In **Search**, enter an existing label to view alerts matching the label. For example, `environment=production,region=~US|EU,severity!=warning` +- **Filter alerts by state:** In **States**, select from Active, Suppressed, or Unprocessed states to view alerts matching your selected state. All other alerts are hidden. diff --git a/docs/sources/alerting/unified-alerting/alerting-rules/_index.md b/docs/sources/alerting/unified-alerting/alerting-rules/_index.md index 57b3517b5cf..2de5ab16144 100644 --- a/docs/sources/alerting/unified-alerting/alerting-rules/_index.md +++ b/docs/sources/alerting/unified-alerting/alerting-rules/_index.md @@ -4,13 +4,17 @@ aliases = ["/docs/grafana/latest/alerting/rules/"] weight = 130 +++ -# Create and manage alerting rules +# Create and manage Grafana 8 alerting rules -One or more queries and/or expressions, a condition, the frequency of evaluation, and the (optional) duration that a condition must be met before creating an alert. Alerting rules are how you express the criteria for creating an alert. Queries and expressions select and can operate on the data you wish to alert on. A condition sets the threshold that an alert must meet or exceed to create an alert. The interval specifies how frequently the rule should be evaluated.
The duration, when configured, sets a period that a condition must be met or exceeded before an alert is created. Alerting rules also can contain settings for what to do when your query does not return any data, or there is an error attempting to execute the query. +An alerting rule is a set of evaluation criteria that determines whether an alert will fire. The rule consists of one or more queries and expressions, a condition, the frequency of evaluation, and optionally, the duration over which the condition is met. + +While queries and expressions select the data set to evaluate, a condition sets the threshold that an alert must meet or exceed to create an alert. An interval specifies how frequently an alerting rule is evaluated. Duration, when configured, indicates how long a condition must be met. The rules can also define alerting behavior in the absence of data. + +In Grafana 8 alerting, you can: - [Create Cortex or Loki managed alert rule]({{< relref "./create-cortex-loki-managed-rule.md" >}}) - [Create Cortex or Loki managed recording rule]({{< relref "./create-cortex-loki-managed-recording-rule.md" >}}) - [Edit Cortex or Loki rule groups and namespaces]({{< relref "./edit-cortex-loki-namespace-group.md" >}}) - [Create Grafana managed alert rule]({{< relref "./create-grafana-managed-rule.md" >}}) -- [State and Health of alerting rules]({{< relref "./state-and-health.md" >}}) -- [View existing alert rules and their current state]({{< relref "./rule-list.md" >}}) +- [State and health of alerting rules]({{< relref "../fundamentals/state-and-health.md" >}}) +- [Manage alerting rules]({{< relref "./rule-list.md" >}}) diff --git a/docs/sources/alerting/unified-alerting/alerting-rules/alert-annotation-label.md b/docs/sources/alerting/unified-alerting/alerting-rules/alert-annotation-label.md new file mode 100644 index 00000000000..f85016c4bff --- /dev/null +++ b/docs/sources/alerting/unified-alerting/alerting-rules/alert-annotation-label.md @@ -0,0 +1,60
@@ ++++ +title = "Annotations and labels for alerting rules" +description = "Annotations and labels for alerting" +keywords = ["grafana", "alerting", "guide", "rules", "create"] +weight = 401 ++++ + +# Annotations and labels for alerting rules + +Annotations and labels help customize alert messages so that you can quickly identify the service or application that needs attention. + +## Annotations + +Annotations are key-value pairs that provide additional meta-information about an alert. For example: a description, a summary, and a runbook URL. These are displayed in rule and alert details in the UI and can be used in contact type message templates. Annotations can also be templated, for example `Instance {{ $labels.instance }} down` will have the evaluated `instance` label value added for every alert this rule produces. + +## Labels + +Labels are key-value pairs that categorize or identify an alert. Labels are used to match alerts in silences or match and group alerts in notification policies. Labels are also shown in rule or alert details in the UI and can be used in contact type message templates. For example, you can add a `severity` label, then configure a separate notification policy for each severity. You can also add, for example, a `team` label and configure notification policies specific to the team or silence all alerts for a particular team. Labels can also be templated like annotations, for example, `{{ $labels.namespace }}/{{ $labels.job }}` will produce a new rule label that will have the evaluated `namespace` and `job` label values added for every alert this rule produces. The rule labels take precedence over the labels produced by the query/condition. + +{{< figure src="/static/img/docs/alerting/unified/rule-edit-details-8-0.png" max-width="550px" caption="Alert details" >}} + +#### Template variables + +The following template variables are available when expanding annotations and labels.
+ +| Name | Description | +| ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| $labels | The labels from the query or condition. For example, `{{ $labels.instance }}` and `{{ $labels.job }}`. This is unavailable when the rule uses a classic condition. | +| $values | The values of all reduce and math expressions that were evaluated for this alert rule. For example, `{{ $values.A }}`, `{{ $values.A.Labels }}` and `{{ $values.A.Value }}` where `A` is the `refID` of the expression. This is unavailable when the rule uses a [classic condition]({{< relref "./create-grafana-managed-rule/#single-and-multi-dimensional-rule" >}}) | +| $value | The value string of the alert instance. For example, `[ var='A' labels={instance=foo} value=10 ]`. | + +#### Template functions + +The following template functions are available when expanding annotations and labels. + +| Name | Argument | Return | Description | +| ------------------ | -------------------------- | ---------------------- | ---------------------------------------------------------------------------------------------------------------------------------- | +| humanize | number or string | string | Converts a number to a more readable format, using metric prefixes. | +| humanize1024 | number or string | string | Like humanize, but uses 1024 as the base rather than 1000. | +| humanizeDuration | number or string | string | Converts a duration in seconds to a more readable format. | +| humanizePercentage | number or string | string | Converts a ratio value to a fraction of 100. | +| humanizeTimestamp | number or string | string | Converts a Unix timestamp in seconds to a more readable format. 
| +| title | string | string | strings.Title, capitalizes first character of each word. | +| toUpper | string | string | strings.ToUpper, converts all characters to upper case. | +| toLower | string | string | strings.ToLower, converts all characters to lower case. | +| match | pattern, text | boolean | regexp.MatchString Tests for an unanchored regexp match. | +| reReplaceAll | pattern, replacement, text | string | Regexp.ReplaceAllString Regexp substitution, unanchored. | +| graphLink | expr | string | Not supported | +| tableLink | expr | string | Not supported | +| args | []interface{} | map[string]interface{} | Converts a list of objects to a map with keys, for example, arg0, arg1. Use this function to pass multiple arguments to templates. | +| externalURL | nothing | string | Returns a string representing the external URL. | +| pathPrefix | nothing | string | Returns the path of the external URL. | +| tmpl | string, []interface{} | nothing | Not supported | +| safeHtml | string | string | Not supported | +| query | query string | []sample | Not supported | +| first | []sample | sample | Not supported | +| label | label, sample | string | Not supported | +| strvalue | []sample | string | Not supported | +| value | sample | float64 | Not supported | +| sortByLabel | label, []samples | []sample | Not supported | diff --git a/docs/sources/alerting/unified-alerting/alerting-rules/create-cortex-loki-managed-recording-rule.md b/docs/sources/alerting/unified-alerting/alerting-rules/create-cortex-loki-managed-recording-rule.md index 0f0380218ba..0b2dc3a6305 100644 --- a/docs/sources/alerting/unified-alerting/alerting-rules/create-cortex-loki-managed-recording-rule.md +++ b/docs/sources/alerting/unified-alerting/alerting-rules/create-cortex-loki-managed-recording-rule.md @@ -9,47 +9,32 @@ weight = 400 You can create and manage recording rules for an external Cortex or Loki instance.
Recording rules calculate frequently needed expressions or computationally expensive expressions in advance and save the result as a new set of time series. Querying this new time series is faster, especially for dashboards since they query the same expression every time the dashboards refresh. -For both Cortex and Loki data sources to work with Grafana 8.0 alerting, enable the ruler API by configuring their respective services. The `local` rule storage type (default for Loki data source), only supports viewing of rules. If you want to edit rules, then configure one of the other rule storage types. +## Before you begin -When configuring a Grafana Prometheus data source to point to Cortex, use the legacy /api/prom prefix, not /prometheus. Only single-binary mode is currently supported, provide a separate URL for the ruler API. +For Cortex and Loki data sources to work with Grafana 8.0 alerting, enable the ruler API by configuring their respective services. + +**Loki** - The `local` rule storage type, default for the Loki data source, supports only viewing of rules. To edit rules, configure one of the other rule storage types. + +**Cortex** - When configuring a Grafana Prometheus data source to point to Cortex, use the legacy `/api/prom` prefix, not `/prometheus`. Currently, we support only single-binary mode and you cannot provide a separate URL for the ruler API. + +> **Note:** If you do not want to manage alerting rules for a particular Loki or Prometheus data source, go to its settings page and clear the **Manage alerts via Alerting UI** checkbox. ## Add a Cortex or Loki managed recording rule -1. Hover your cursor over the Alerting (bell) icon. +1. In the Grafana menu, click the **Alerting** (bell) icon to open the Alerting page listing existing alerts. 1. Click **New alert rule**. -1. Click on the **Alert type** drop down and select **Cortex / Loki managed recording rule**. -1. 
Enter the recording rule details using instructions in the [Recording rule fields](#recording-rule-fields) section. -1. Click **Save** in the upper right corner to save the rule. - -## Edit a Cortex or Loki managed recording rule - -1. Hover your cursor over the Alerting (bell) icon in the side menu. -1. Expand an existing recording rule in the **Cortex / Loki** section and click **Edit**. -1. Update the recording rule details using instructions in the [Recording rule fields](#recording-rule-fields) section. -1. Click **Save and exit** to save and exit rule editing. - -## Recording rule fields - -This section describes the fields you fill out to create a recording rule. - -### Rule type - -- **Rule name -** Enter a descriptive name. The name will get displayed in the alert rule list. It will also get added as an `alertname` label to every alert instance that is created from this rule. Recording rules names must be valid [metric names](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels). -- **Rule type -** Select **Cortex / Loki managed recording rule**. -- **Data source -** Select a Prometheus or Loki data source. Only data sources that support Cortex ruler API are available. -- **Namespace -** Select an existing rule namespace or click **Add new** and enter a name to create a new one. Namespaces can contain one or more rule groups and have only organizational purpose. -- **Group -** Select an existing group within the selected namespace or click **Add new** to create a new group. Newly created rules are added to the end of this group. Rules within a group are run sequentially at a regular interval, with the same evaluation time. - -![Rule type section screenshot](/static/img/docs/alerting/unified/rule-edit-cortex-recording-rule-8-2.png 'Rule type section screenshot') - -### Query - -Enter a PromQL or LogQL expression. The result of this expression will get recorded as the value for the new metric. 
- -![Query section](/static/img/docs/alerting/unified/rule-edit-cortex-recording-rule-query-8-2.png 'Query section screenshot') - -### Details - -You can optionally define labels in the details section. - -![Details section](/static/img/docs/alerting/unified/rule-recording-rule-labels-8-2.png 'Details section screenshot') +1. In Step 1, add the rule name, type, and storage location. + - In **Rule name**, add a descriptive name. This name is displayed in the alert rule list. Recording rule names must be valid [metric names](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels). + - From the **Rule type** drop-down, select **Cortex / Loki managed recording rule**. + - From the **Select data source** drop-down, select an external Prometheus, an external Loki, or a Grafana Cloud data source. + - From the **Namespace** drop-down, select an existing rule namespace. Otherwise, click **Add new** and enter a name to create a new one. Namespaces can contain one or more rule groups and only have an organizational purpose. + - From the **Group** drop-down, select an existing group within the selected namespace. Otherwise, click **Add new** and enter a name to create a new one. Newly created rules are appended to the end of the group. Rules within a group are run sequentially at a regular interval, with the same evaluation time. + {{< figure src="/static/img/docs/alerting/unified/rule-edit-cortex-alert-type-8-0.png" max-width="550px" caption="Alert details" >}} +1. In Step 2, add the query to evaluate. + - Enter a PromQL or LogQL expression. The result of the expression is recorded as the value of the new metric. + {{< figure src="/static/img/docs/alerting/unified/rule-edit-cortex-query-8-0.png" max-width="550px" caption="Alert details" >}} +1. In Step 3, add additional metadata associated with the rule. + - Add a description and summary to customize alert messages.
Use the guidelines in [Annotations and labels for alerting]({{< relref "./alert-annotation-label.md" >}}). + - Add Runbook URL, panel, dashboard, and alert IDs. + - Add custom labels. +1. Click **Save** to save the rule or **Save and exit** to save the rule and go back to the Alerting page. diff --git a/docs/sources/alerting/unified-alerting/alerting-rules/create-cortex-loki-managed-rule.md b/docs/sources/alerting/unified-alerting/alerting-rules/create-cortex-loki-managed-rule.md index caa1fef9626..8101128a158 100644 --- a/docs/sources/alerting/unified-alerting/alerting-rules/create-cortex-loki-managed-rule.md +++ b/docs/sources/alerting/unified-alerting/alerting-rules/create-cortex-loki-managed-rule.md @@ -7,62 +7,38 @@ weight = 400 # Create a Cortex or Loki managed alerting rule -Grafana allows you manage alerting rules for an external Cortex or Loki instance. +Grafana allows you to create alerting rules for an external Cortex or Loki instance. -In order for both Cortex and Loki data sources to work with Grafana 8.0 alerting, enable the ruler API by configuring their respective services. The`local` rule storage type, default for Loki, only supports viewing of rules. If you want to edit rules, then configure one of the other rule storage types. When configuring a Grafana Prometheus data source to point to Cortex, use the legacy `/api/prom` prefix, not `/prometheus`. Only single-binary mode is currently supported, and it is not possible to provide a separate URL for the ruler API. +## Before you begin -## Add or edit a Cortex or Loki managed alerting rule +For Cortex and Loki data sources to work with Grafana 8.0 alerting, enable the ruler API by configuring their respective services. -1. In the Grafana menu hover your cursor over the Alerting (bell) icon. -1. To create a new alert rule, click **New alert rule**. To edit an existing rule, expand one of the rules in the **Cortex / Loki** section and click **Edit**. -1. 
Click on the **Rule type** drop down and select **Cortex / Loki managed alert**. -1. Fill out the rest of the fields. Descriptions are listed below in [Alert rule fields](#alert-rule-fields). -1. When you have finished writing your rule, click **Save** in the upper right corner to save the rule, or **Save and exit** to save and exit rule editing. +**Loki** - The `local` rule storage type, default for the Loki data source, supports only viewing of rules. To edit rules, configure one of the other rule storage types. -## Alert rule fields +**Cortex** - When configuring a Grafana Prometheus data source to point to Cortex, use the legacy `/api/prom` prefix, not `/prometheus`. Currently, we support only single-binary mode and you cannot provide a separate URL for the ruler API. -This section describes the fields you fill out to create an alert. +> **Note:** If you do not want to manage alerting rules for a particular Loki or Prometheus data source, go to its settings and clear the **Manage alerts via Alerting UI** checkbox. -### Rule type +## Add a Cortex or Loki managed alerting rule -- **Rule name -** Enter a descriptive name. The name will be displayed in the alert rule list, as well as added as `alertname` label to every alert instance that is created from this rule. -- **Rule type -** Select **Cortex / Loki managed alert**. -- **Data source -** Select a Prometheus or Loki data source. Only Prometheus data sources that support Cortex ruler API will be available. -- **Namespace -** Select an existing rule namespace or click **Add new** and enter a name to create a new one. Namespaces can contain one or more rule groups and have only organizational purpose. -- **Group -** Select an existing group within the selected namespace or click **Add new** and enter a name to create a new one. Newly created rules will be added to the end of the rule group. Rules within a group are run sequentially at a regular interval, with the same evaluation time. 
- -![Alert type section screenshot](/static/img/docs/alerting/unified/rule-edit-cortex-alert-type-8-0.png 'Alert type section screenshot') - -### Query - -Enter a PromQL or LogQL expression. Rule will fire if evaluation result has at least one series with value > 0. An alert will be created per each such series. - -![Query section](/static/img/docs/alerting/unified/rule-edit-cortex-query-8-0.png 'Query section screenshot') - -### Conditions - -- **For -** For how long the selected condition should violated before an alert enters `Firing` state. When condition threshold is violated for the first time, an alert becomes `Pending`. If the **for** time elapses and the condition is still violated, it becomes `Firing`. Else it reverts back to `Normal`. - -![Conditions section](/static/img/docs/alerting/unified/rule-edit-cortex-conditions-8-0.png 'Conditions section screenshot') - -### Details - -Annotations and labels can be optionally added in the details section. - -#### Annotations - -Annotations are key and value pairs that provide additional meta information about the alert, for example description, summary, runbook URL. They are displayed in rule and alert details in the UI and can be used in contact type message templates. Annotations can also be templated, for example `Instance {{ $labels.instance }} down` will have the evaluated `instance` label value added for every alert this rule produces. - -#### Labels - -Labels are key value pairs that categorize or identify an alert. Labels are used to match alerts in silences or match and groups alerts in notification policies. Labels are also shown in rule or alert details in the UI and can be used in contact type message templates. For example, it is common to add a `severity` label and then configure a separate notification policy for each severity. Or one could add a `team` label and configure team specific notification policies, or silence all alerts for a particular team. 
- -![Details section](/static/img/docs/alerting/unified/rule-edit-details-8-0.png 'Details section screenshot') - -## Preview alerts - -To evaluate the rule and see what alerts it would produce, click **Preview alerts**. It will display a list of alerts with state and value of for each one. - -## Opt-out a Loki or Prometheus data source - -If you do not want to allow creating rules for a particular Loki or Prometheus data source, go to its settings page and clear the **Manage alerts via Alerting UI** checkbox. +1. In the Grafana menu, click the **Alerting** (bell) icon to open the Alerting page listing existing alerts. +1. Click **New alert rule**. +1. In Step 1, add the rule name, type, and storage location. + - In **Rule name**, add a descriptive name. This name is displayed in the alert rule list. It is also the `alertname` label for every alert instance that is created from this rule. + - From the **Rule type** drop-down, select **Cortex / Loki managed alert**. + - From the **Select data source** drop-down, select an external Prometheus, an external Loki, or a Grafana Cloud data source. + - From the **Namespace** drop-down, select an existing rule namespace. Otherwise, click **Add new** and enter a name to create a new one. Namespaces can contain one or more rule groups and only have an organizational purpose. For more information, see [Cortex or Loki rule groups and namespaces]({{< relref "./edit-cortex-loki-namespace-group.md" >}}). + - From the **Group** drop-down, select an existing group within the selected namespace. Otherwise, click **Add new** and enter a name to create a new one. Newly created rules are appended to the end of the group. Rules within a group are run sequentially at a regular interval, with the same evaluation time. + {{< figure src="/static/img/docs/alerting/unified/rule-edit-cortex-alert-type-8-0.png" max-width="550px" caption="Alert details" >}} +1. In Step 2, add the query to evaluate. + - Enter a PromQL or LogQL expression. 
The rule fires if the evaluation result has at least one series with a value that is greater than 0. An alert is created for each series.
+ {{< figure src="/static/img/docs/alerting/unified/rule-edit-cortex-query-8-0.png" max-width="550px" caption="Alert details" >}}
+1. In Step 3, add conditions.
+ - In the **For** text box, specify the duration for which the condition must be true before an alert fires. If you specify `5m`, the condition must be true for 5 minutes before the alert fires.
+ > **Note:** Once a condition is met, the alert goes into the `Pending` state. If the condition remains active for the duration specified, the alert transitions to the `Firing` state; otherwise, it reverts to the `Normal` state.
+1. In Step 4, add additional metadata associated with the rule.
+ - Add a description and summary to customize alert messages. Use the guidelines in [Annotations and labels for alerting]({{< relref "./alert-annotation-label.md" >}}).
+ - Add Runbook URL, panel, dashboard, and alert IDs.
+ - Add custom labels.
+1. To evaluate the rule and see what alerts it would produce, click **Preview alerts**. It displays a list of alerts with the state and value for each one.
+1. Click **Save** to save the rule or **Save and exit** to save the rule and go back to the Alerting page.
diff --git a/docs/sources/alerting/unified-alerting/alerting-rules/create-grafana-managed-rule.md b/docs/sources/alerting/unified-alerting/alerting-rules/create-grafana-managed-rule.md
index 3db8db5ddff..e62f079115b 100644
--- a/docs/sources/alerting/unified-alerting/alerting-rules/create-grafana-managed-rule.md
+++ b/docs/sources/alerting/unified-alerting/alerting-rules/create-grafana-managed-rule.md
@@ -7,75 +7,61 @@ weight = 400

# Create a Grafana managed alerting rule

-Grafana allows you to create alerting rules that query one or more data sources, reduce or transform the results and compare them to each other or to fix thresholds.
These rules will be executed and notifications sent by Grafana itself.
+Grafana allows you to create alerting rules that query one or more data sources, reduce or transform the results, and compare them to each other or to fixed thresholds. When these rules are executed, Grafana sends notifications to the contact point.

-## Add or edit a Grafana managed alerting rule
+## Add a Grafana managed rule

-1. In the Grafana menu hover your cursor over the Alerting (bell) icon.
-1. To create a new alert rule, click **New alert rule**. To edit an existing rule, expand one of the rules in the **Grafana** section and click **Edit**.
-1. Click on the **Alert type** drop down and select **Grafana managed alert**.
-1. Fill out the rest of the fields. Descriptions are listed below in [Alert rule fields](#alert-rule-fields).
-1. When you have finished writing your rule, click **Save** in the upper right corner to save the rule,, or **Save and exit** to save and exit rule editing.
+1. In the Grafana menu, click the **Alerting** (bell) icon to open the Alerting page listing existing alerts.
+1. Click **New alert rule**.
+1. In Step 1, add the rule name, type, and storage location.
+ - In **Rule name**, add a descriptive name. This name is displayed in the alert rule list. It is also the `alertname` label for every alert instance that is created from this rule.
+ - From the **Rule type** drop-down, select **Grafana managed alert**.
+ - From the **Folder** drop-down, select the folder where you want to store the rule. If you do not select a folder, the rule is stored in the General folder. To create a new folder, click the drop-down and enter the new folder name.
+1. In Step 2, add queries and expressions to evaluate.
+ - Keep the default name or hover over and click the edit icon to change the name.
+ - For queries, select a data source from the drop-down.
+ - Add one or more [queries]({{< relref "../../../panels/queries.md" >}}) or [expressions]({{< relref "../../../panels/expressions.md" >}}).
+ - For each expression, select either **Classic condition** to create a single alert rule, or choose from the **Math**, **Reduce**, or **Resample** options to generate a separate alert for each series. For details on these options, see [Single and multi dimensional rule](#single-and-multi-dimensional-rule).
+ - Click **Run queries** to verify that the query is successful.
+1. In Step 3, add conditions.
+ - From the **Condition** drop-down, select the query or expression to trigger the alert rule.
+ - For **Evaluate every**, specify the frequency of evaluation. This must be a multiple of 10 seconds, for example, `1m` or `30s`.
+ - For **Evaluate for**, specify the duration for which the condition must be true before an alert fires.
+ > **Note:** Once a condition is breached, the alert goes into the Pending state. If the condition remains breached for the duration specified, the alert transitions to the Firing state; otherwise, it reverts to the Normal state.
+ - In **Configure no data and error handling**, configure alerting behavior in the absence of data. Use the guidelines in [No data and error handling](#no-data-and-error-handling).
+ - Click **Preview alerts** to check the result of running the query at this moment. Preview excludes no data and error handling.
+1. In Step 4, add additional metadata associated with the rule.
+ - Add a description and summary to customize alert messages. Use the guidelines in [Annotations and labels for alerting]({{< relref "./alert-annotation-label.md" >}}).
+ - Add Runbook URL, panel, dashboard, and alert IDs.
+ - Add custom labels.
+1. Click **Save** to save the rule or **Save and exit** to save the rule and go back to the Alerting page.

-## Alert rule fields
+### Single and multi dimensional rule

-This section describes the fields you fill out to create an alert.
+For Grafana managed alerts, you can create a rule with a classic condition or you can create a multi-dimensional rule.
-### Alert type
+**Rule with classic condition**

-- **Alert name -** Enter a descriptive name. The name will be displayed in the alert rule list, as well as added as `alertname` label to every alert instance that is created from this rule.
-- **Alert type -** Select **Grafana managed alert**.
-- **Folder -** Select a folder this alert rule will belong to. To create a new folder, click on the drop down and type in a new folder name.
+Use the classic condition expression to create a rule that triggers a single alert when its condition is met. For a query that returns multiple series, Grafana does not track the alert state of each series. As a result, Grafana sends only a single alert even when alert conditions are met for multiple series.

-![Alert type section screenshot](/static/img/docs/alerting/unified/rule-edit-grafana-alert-type-8-0.png 'Alert type section screenshot')
+**Multi dimensional rule**

-### Query
+To generate a separate alert for each series, create a multi-dimensional rule using `Math`, `Reduce`, or `Resample` expressions. For example:

-Add one or more [queries]({{< relref "../../../panels/queries.md" >}}) or [expressions]({{< relref "../../../panels/expressions.md" >}}). You can use classic condition expression to create a rule that will trigger a single alert if it's threshold is met, or use reduce and math expressions to create a multi dimensional alert rule that can trigger multiple alerts, one per matching series in the query result.
+- Add a `Reduce` expression for each query to aggregate values in the selected time range into a single value. (Not needed for [rules using numeric data]({{< relref "../fundamentals/grafana-managed-numeric-rule.md" >}}).)
+- Add a `Math` expression with the condition for the rule. This is not needed if a query or a reduce expression already returns 0 when the rule should not fire, or a positive number when it should fire.
Some examples: `$B > 70` fires if the value of the B query or expression is greater than 70; `$B < $C * 100` fires if the value of B is less than the value of C multiplied by 100. If the queries being compared return multiple series, series from different queries are matched if they have the same labels or the labels of one are a subset of the labels of the other.
+
+![Query section multi dimensional](/static/img/docs/alerting/unified/rule-edit-multi-8-0.png 'Query section multi dimensional screenshot')

> **Note:** Grafana does not support alert queries with template variables. More information is available at .

#### Rule with classic condition

-You can use classic condition expression to create a rule that will trigger a single alert if it's conditions is met. It works about the same way as dashboard alerts in previous versions of Grafana.
+For more information, see [expressions documentation]({{< relref "../../../panels/expressions.md" >}}).

-1. Add one or more queries
-1. Add an expression. Click on **Operation** dropdown and select **Classic condition**.
-1. Add one or more conditions. For each condition you can specify operator (`AND` / `OR`), aggregation function, query letter and threshold value.
+### No data and error handling

-If a query returns multiple series, then the aggregation function and threshold check will be evaluated for each series.It will not track alert state **per series**. This has implications that are detailed in the scenario below.
-
-- Alert condition with query that returns 2 series: **server1** and **server2**
-- **server1** series causes the alert rule to fire and switch to state `Firing`
-- Notifications are sent out with message: _load peaking (server1)_
-- In a subsequent evaluation of the same alert rule, the **server2** series also causes the alert rule to fire
-- No new notifications are sent as the alert rule is already in state `Firing`.
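The single-notification behavior of a classic condition can be sketched as follows. This is an illustrative simplification only, not Grafana's actual implementation: the point is that state is tracked for the whole rule, so a second breaching series does not trigger a new notification.

```python
# Illustrative sketch only: classic conditions track state per rule, not per series.
rule_state = "Normal"

def evaluate(series_values, threshold=70):
    """Return a notification message on the Normal -> Firing transition, else None."""
    global rule_state
    breached = [name for name, value in series_values.items() if value > threshold]
    if breached:
        if rule_state != "Firing":
            rule_state = "Firing"
            return "load peaking ({})".format(", ".join(breached))
        return None  # rule already Firing: no new notification for extra series
    rule_state = "Normal"
    return None
```

In the server1/server2 scenario, the first evaluation (only server1 breaching) produces one notification; when server2 also starts breaching in a later evaluation, the rule is already `Firing` and stays silent. A multi-dimensional rule avoids this by tracking one alert per series.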
- -So, as you can see from the above scenario Grafana will not send out notifications when other series cause the alert to fire if the rule already is in state `Firing`. If you want to have alert per series, create a multi dimensional alert rule as described in the section below. - -![Query section classic condition](/static/img/docs/alerting/unified/rule-edit-classic-8-0.png 'Query section classic condition screenshot') - -#### Multi dimensional rule - -You can use reduce and math expressions to create a rule that will create an alert per series returned by the query. - -1. Add one or more queries -2. Add a `reduce` expression for each query to aggregate values in the selected time range into a single value. With some data sources this is not needed for [rules using numeric data]({{< relref "../grafana-managed-numeric-rule.md" >}}). -3. Add a `math` expressions with the condition for the rule. Not needed in case a query or a reduce expression already returns 0 if rule should not be firing, or > 0 if it should be firing. Some examples: `$B > 70` if it should fire in case value of B query/expression is more than 70. `$B < $C * 100` in case it should fire if value of B is less than value of C multiplied by 100. If queries being compared have multiple series in their results, series from different queries are matched if they have the same labels or one is a subset of the other. - -See or [expressions documentation]({{< relref "../../../panels/expressions.md" >}}) for in depth explanation of `math` and `reduce` expressions. - -![Query section multi dimensional](/static/img/docs/alerting/unified/rule-edit-multi-8-0.png 'Query section multi dimensional screenshot') - -### Conditions - -- **Condition -** Select the letter of the query or expression whose result will trigger the alert rule. You will likely want to select either a `classic condition` or a `math` expression. 
-- **Evaluate every -** How often the rule should be evaluated, executing the defined queries and expressions. Must be no less than 10 seconds and a multiple of 10 seconds. Examples: `1m`, `30s` -- **Evaluate for -** For how long the selected condition should violated before an alert enters `Alerting` state. When condition threshold is violated for the first time, an alert becomes `Pending`. If the **for** time elapses and the condition is still violated, it becomes `Alerting`. Else it reverts back to `Normal`. - -#### No Data & Error handling - -Toggle **Configure no data and error handling** switch to configure how the rule should handle cases where evaluation results in error or returns no data. +Configure alerting behavior in the absence of data using information in the following tables. | No Data Option | Description | | -------------- | ----------------------------------------------------------------------------------------------------- | @@ -87,63 +73,3 @@ Toggle **Configure no data and error handling** switch to configure how the rule | ----------------------- | ---------------------------------- | | Alerting | Set alert rule state to `Alerting` | | OK | Set alert rule state to `Normal` | - -![Conditions section](/static/img/docs/alerting/unified/rule-edit-grafana-conditions-8-0.png 'Conditions section screenshot') - -### Details - -Annotations and labels can be optionally added in the details section. - -#### Annotations - -Annotations are key and value pairs that provide additional meta information about the alert, for example description, summary, runbook URL. They are displayed in rule and alert details in the UI and can be used in contact type message templates. Annotations can also be templated, for example `Instance {{ $labels.instance }} down` will have the evaluated `instance` label value added for every alert this rule produces. - -#### Labels - -Labels are key value pairs that categorize or identify an alert. 
Labels are used to match alerts in silences or match and groups alerts in notification policies. Labels are also shown in rule or alert details in the UI and can be used in contact type message templates. For example, it is common to add a `severity` label and then configure a separate notification policy for each severity. Or one could add a `team` label and configure team specific notification policies, or silence all alerts for a particular team. Labels can also be templated like annotations, for example `{{ $labels.namespace }}/{{ $labels.job }}` will produce a new rule label that will have the evaluated `namespace` and `job` label value added for every alert this rule produces. The rule labels take precedence over the labels produced by the query/condition. - -![Details section](/static/img/docs/alerting/unified/rule-edit-details-8-0.png 'Details section screenshot') - -#### Template variables - -The following template variables are available when expanding annotations and labels. - -| Name | Description | -| ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| $labels | The labels from the query or condition. For example, `{{ $labels.instance }}` and `{{ $labels.job }}`. This is unavailable when the rule uses a classic condition. | -| $values | The values of all reduce and math expressions that were evaluated for this alert rule. For example, `{{ $values.A }}`, `{{ $values.A.Labels }}` and `{{ $values.A.Value }}` where `A` is the `refID` of the expression. This is unavailable when the rule uses a classic condition. | -| $value | The value string of the alert instance. For example, `[ var='A' labels={instance=foo} value=10 ]`. 
| - -#### Template functions - -The following template functions are available when expanding annotations and labels. - -| Name | Argument | Return | Description | -| ------------------ | -------------------------- | ---------------------- | ---------------------------------------------------------------------------------------------------------------------------------- | -| humanize | number or string | string | Converts a number to a more readable format, using metric prefixes. | -| humanize1024 | number or string | string | Like humanize, but uses 1024 as the base rather than 1000. | -| humanizeDuration | number or string | string | Converts a duration in seconds to a more readable format. | -| humanizePercentage | number or string | string | Converts a ratio value to a fraction of 100. | -| humanizeTimestamp | number or string | string | Converts a Unix timestamp in seconds to a more readable format. | -| title | string | string | strings.Title, capitalises first character of each word. | -| toUpper | string | string | strings.ToUpper, converts all characters to upper case. | -| toLower | string | string | strings.ToLower, converts all characters to lower case. | -| match | pattern, text | boolean | regexp.MatchString Tests for a unanchored regexp match. | -| reReplaceAll | pattern, replacement, text | string | Regexp.ReplaceAllString Regexp substitution, unanchored. | -| graphLink | expr | string | Not supported | -| tableLink | expr | string | Not supported | -| args | []interface{} | map[string]interface{} | Converts a list of objects to a map with keys, for example, arg0, arg1. Use this function to pass multiple arguments to templates. | -| externalURL | nothing | string | Returns a string representing the external URL. | -| pathPrefix | nothing | string | Returns the path of the external URL. | -| tmpl | string, []interface{} | nothing | Not supported. | -| safeHtml | string | string | Not supported. | -| query | query string | []sample | Not supported. 
| -| first | []sample | sample | Not supported. | -| label | label, sample | string | Not supported. | -| strvalue | []sample | string | Not supported. | -| value | sample | float64 | Not supported. | -| sortByLabel | label, []samples | []sample | Not supported. | - -## Preview alerts - -To evaluate the rule and see what alerts it would produce, click **Preview alerts**. It will display a list of alerts with state and value for each one. diff --git a/docs/sources/alerting/unified-alerting/alerting-rules/edit-cortex-loki-namespace-group.md b/docs/sources/alerting/unified-alerting/alerting-rules/edit-cortex-loki-namespace-group.md index 5ebcbb21a0e..5c49fd4dbf3 100644 --- a/docs/sources/alerting/unified-alerting/alerting-rules/edit-cortex-loki-namespace-group.md +++ b/docs/sources/alerting/unified-alerting/alerting-rules/edit-cortex-loki-namespace-group.md @@ -1,34 +1,39 @@ +++ -title = "Edit Cortex or Loki rule groups and namespaces" +title = "Cortex or Loki rule groups and namespaces" description = "Edit Cortex or Loki rule groups and namespaces" keywords = ["grafana", "alerting", "guide", "group", "namespace", "cortex", "loki"] -weight = 400 +weight = 405 +++ -# Edit Cortex or Loki rule groups and namespaces +# Cortex or Loki rule groups and namespaces -You can rename Cortex or Loki rule namespaces and groups and edit group evaluation intervals. +A namespace contains one or more groups. The rules within a group are run sequentially at a regular interval. The default interval is one (1) minute. You can rename Cortex or Loki rule namespaces and groups, and edit group evaluation intervals. + +![Group list](/static/img/docs/alerting/unified/rule-list-edit-cortex-loki-icon-8-2.png 'Rule group list screenshot') + +{{< figure src="/static/img/docs/alerting/unified/rule-list-edit-cortex-loki-icon-8-2.png" max-width="550px" caption="Alert details" >}} ## Rename a namespace -A namespace contains one or more groups. 
To rename a namespace, find a group that belongs to the namespace, then update the namespace.
+To rename a namespace:

-1. Hover your cursor over the Alerting (bell) icon in the side menu.
-1. Locate a group that belongs to the namespace you want to edit and click the edit (pen) icon.
+1. In the Grafana menu, click the **Alerting** (bell) icon to open the Alerting page listing existing alerts.
+1. Find a Cortex or Loki managed rule with the group that belongs to the namespace you want to edit.
+1. Click the **Edit** (pen) icon.
1. Enter a new name in the **Namespace** field, then click **Save changes**.

A new namespace is created and all groups are copied into this namespace from the old one. The old namespace is deleted.

-## Rename rule group or change rule group evaluation interval
+## Rename a rule group or change the rule group evaluation interval

The rules within a group are run sequentially at a regular interval, the default interval is one (1) minute. You can modify this interval using the following instructions.

-1. Hover your cursor over the Alerting (bell) icon in the side menu.
-1. Find the group you want to edit and click the edit (pen) icon.
+1. In the Grafana menu, click the **Alerting** (bell) icon to open the Alerting page listing existing alerts.
+1. Find a Cortex or Loki managed rule with the group you want to edit.
+1. Click the **Edit** (pen) icon.
1. Modify the **Rule group** and **Rule group evaluation interval** information as necessary.
1. Click **Save changes**.

-If you remaned the group, a new group is created that has all the rules from the old group, and the old group deleted.
+When you rename the group, a new group with all the rules from the old group is created. The old group is deleted.
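For context, each group you rename here corresponds to a Prometheus-style rule group stored by the ruler, and the namespace is the container that holds groups. A minimal sketch of one group, where all names and values are hypothetical:

```yaml
# One rule group inside a namespace; rules in the group are evaluated
# sequentially at the group's interval.
name: my-group        # hypothetical group name
interval: 1m          # the evaluation interval you can edit in this UI
rules:
  - alert: HighRequestLatency
    expr: job:request_latency_seconds:mean5m{job="myjob"} > 0.5
    for: 10m
    labels:
      severity: critical
    annotations:
      summary: Request latency is above 500ms
```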
-![Group list](/static/img/docs/alerting/unified/rule-list-edit-cortex-loki-icon-8-2.png 'Rule group list screenshot') ![Group edit modal](/static/img/docs/alerting/unified/rule-list-cortex-loki-edit-ns-group-8-2.png 'Rule group edit modal screenshot') diff --git a/docs/sources/alerting/unified-alerting/alerting-rules/rule-list.md b/docs/sources/alerting/unified-alerting/alerting-rules/rule-list.md index 8c74162d28c..b88d31d6252 100644 --- a/docs/sources/alerting/unified-alerting/alerting-rules/rule-list.md +++ b/docs/sources/alerting/unified-alerting/alerting-rules/rule-list.md @@ -1,54 +1,55 @@ +++ -title = "View alert rules" -description = "View alert rules" +title = "Manage alerting rules" +description = "Manage alerting rules" keywords = ["grafana", "alerting", "guide", "rules", "view"] -weight = 400 +weight = 402 +++ -# View alert rules +# Manage alerting rules -To view alerts: +The Alerting page lists existing Grafana 8 alerting rules. By default, rules are grouped by types of data sources. The Grafana section lists all Grafana managed rules. Alerting rules for Prometheus compatible data sources are also listed here. You can view alerting rules for Prometheus compatible data sources but you cannot edit them. -1. In the Grafana menu hover your cursor over the Alerting (bell) icon. -1. Click **Alert Rules**. You can see all configured Grafana alert rules as well as any rules from Loki or Prometheus data sources. - By default, the group view is shown. You can toggle between group or state views by clicking the relevant **View as** buttons in the options area at the top of the page. +The Cortex/Loki rules section lists all rules for external Prometheus or Loki data sources. Cloud alerting rules are also listed in this section. + +- [View alerting rules](#view-alerting-rule) +- [Filter alerting rules](#filter-alerting-rules) +- [Edit or delete an alerting rule](#edit-or-delete-an-alerting-rule) + +## View alerting rules + +To view alerting details: + +1. 
In the Grafana menu, click the **Alerting** (bell) icon to open the Alerting page. By default, the group view displays. +1. In **View as**, toggle between group or state views by clicking the relevant option. See [Group view](#group-view) and [State view](#state-view) for more information. +1. Expand the rule row to view the rule labels, annotations, data sources the rule queries, and a list of alert instances resulting from this rule. + +{{< figure src="/static/img/docs/alerting/unified/rule-details-8-0.png" max-width="650px" caption="Alerting rule details" >}} ### Group view -Group view shows Grafana alert rules grouped by folder and Loki or Prometheus alert rules grouped by `namespace` + `group`. This is the default rule list view, intended for managing rules. You can expand each group to view a list of rules in this group. Each rule can be further expanded to view its details. Action buttons and any alerts spawned by this rule, and each alert can be further expanded to view its details. +Group view shows Grafana alert rules grouped by folder and Loki or Prometheus alert rules grouped by `namespace` + `group`. This is the default rule list view, intended for managing rules. You can expand each group to view a list of rules in this group. Expand a rule further to view its details. You can also expand action buttons and alerts resulting from the rule to view their details. -![Grouped alert rule view](/static/img/docs/alerting/unified/rule-list-group-view-8-0.png 'Screenshot of grouped alert rule view') +{{< figure src="/static/img/docs/alerting/unified/rule-list-group-view-8-0.png" max-width="800px" caption="Alerting grouped view" >}} ### State view -State view shows alert rules grouped by state. Use this view to get an overview of which rules are in what state. Each rule can be expanded to view its details. Action buttons and any alerts spawned by this rule, and each alert can be further expanded to view its details. 
+
+State view shows alert rules grouped by state. Use this view to get an overview of which rules are in what state. Each rule can be expanded to view its details. You can also expand the action buttons and any alerts generated by this rule to view their details.

-![Alert rule state view](/static/img/docs/alerting/unified/rule-list-state-view-8-0.png 'Screenshot of alert rule state view')
+{{< figure src="/static/img/docs/alerting/unified/rule-list-state-view-8-0.png" max-width="800px" caption="Alerting state view" >}}

-## Filter alert rules
+## Filter alerting rules

-You can use the following filters to view only alert rules that match specific criteria:
+To filter alerting rules:

-- **Filter alerts by label -** Search by alert labels using label selectors in the **Search** input. eg: `environment=production,region=~US|EU,severity!=warning`
-- **Filter alerts by state -** In **States** Select which alert states you want to see. All others are hidden.
-- **Filter alerts by data source -** Click the **Select data source** and select an alerting data source. Only alert rules that query selected data source will be visible.
+- From **Select data sources**, select a data source. You can see alerting rules that query the selected data source.
+- In the **Search by label** field, enter search criteria using label selectors. For example, `environment=production,region=~US|EU,severity!=warning`.
+- From **Filter alerts by state**, select the alerting state you want to see. You can see alerting rules that match the state. Rules matching other states are hidden.

-## Rule details
-
-A rule row shows the rule state, health, and summary annotation if the rule has one. You can expand the rule row to display rule labels, all annotations, data sources this rule queries, and a list of alert instances spawned from this rule.
- -![Alert rule details](/static/img/docs/alerting/unified/rule-details-8-0.png 'Screenshot of alert rule details') - -### Edit or delete rule - -Grafana rules can only be edited or deleted by users with Edit permissions for the folder which contains the rule. Prometheus or Loki rules can be edited or deleted by users with Editor or Admin roles. +## Edit or delete an alerting rule +Grafana managed alerting rules can only be edited or deleted by users with Edit permissions for the folder storing the rules. Alerting rules for an external Cortex or Loki instance can be edited or deleted by users with Editor or Admin roles. To edit or delete a rule: -1. Expand this rule to reveal rule controls. -1. Click **Edit** to go to the rule editing form. Make changes following [instructions listed here]({{< relref "./create-grafana-managed-rule.md" >}}). -1. Click **Delete"** to delete a rule. - -## Opt-out a Loki or Prometheus data source - -If you do not want rules to be loaded from a Prometheus or Loki data source, go to its settings page and clear the **Manage alerts via Alerting UI** checkbox. +1. Expand a rule row until you can see the rule controls of **View**, **Edit**, and **Delete**. +1. Click **Edit** to open the create rule page. Make updates following instructions in [Create a Grafana managed alerting rule]({{< relref "./create-grafana-managed-rule.md" >}}) or [Create a Cortex or Loki managed alerting rule]({{< relref "./create-cortex-loki-managed-rule.md" >}}). +1. Click **Delete** to delete a rule. 
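The label selectors accepted by the **Search by label** filter follow Prometheus-style matcher syntax. A few illustrative examples; the label names are hypothetical, and the `!~` operator is assumed from Prometheus matcher syntax rather than stated in this page:

```
severity=critical           # equals
severity!=warning           # not equals
region=~US|EU               # matches the regular expression
team!~backend|infra         # does not match the regular expression
```

Combine matchers with commas, for example `environment=production,severity!=warning`.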
diff --git a/docs/sources/alerting/unified-alerting/alerting-rules/state-and-health.md b/docs/sources/alerting/unified-alerting/alerting-rules/state-and-health.md deleted file mode 100644 index 76ffec61de6..00000000000 --- a/docs/sources/alerting/unified-alerting/alerting-rules/state-and-health.md +++ /dev/null @@ -1,35 +0,0 @@ -+++ -title = "State and Health of alerting rules" -description = "State and Health of alerting rules" -keywords = ["grafana", "alerting", "guide", "state"] -+++ - -# State and Health of alerting rule - -The concepts of state and health for alerting rules help you understand, at a glance, several key status indicators about your alerts. Alert state, alerting rule state, and alerting rule health are related, but they each convey subtly different information. - -## Alerting rule state - -Indicates whether any of the timeseries resulting from evaluation of the alerting rule are in an alerting state. Alerting rule state only requires a single alerting instance to be in a pending or firing state for the alerting rule state to not be normal. - -- Normal: none of the timeseries returned are in an alerting state. -- Pending: at least one of the timeseries returned are in a pending state. -- Firing: at least one of the timeseries returned are in an alerting state. - -## Alert state - -Alert state is an indication of the output of the alerting evaluation engine. - -- Normal: the condition for the alerting rule has evaluated to **false** for every timeseries returned by the evaluation engine. -- Alerting: the condition for the alerting rule has evaluated to **true** for at least one timeseries returned by the evaluation engine and the duration, if set, **has** been met or exceeded. -- Pending: the condition for the alerting rule has evaluated to **true** for at least one timeseries returned by the evaluation engine and the duration, if set, **has not** been met or exceeded. 
-- NoData: the alerting rule has not returned a timeseries, all values for the timeseries are null, or all values for the timeseries are zero.
-- Error: There was an error encountered when attempting to evaluate the alerting rule.
-
-## Alerting rule health
-
-Indicates the status of alerting rule evaluation.
-
-- Ok: the rule is being evaluated, data is being returned, and no errors have been encountered.
-- Error: an error was encountered when evaluating the alerting rule.
-- NoData: at least one of the timeseries returned during evaluation is in a NoData state.
diff --git a/docs/sources/alerting/unified-alerting/contact-points.md b/docs/sources/alerting/unified-alerting/contact-points.md
index 3f947acac0b..15c727230e4 100644
--- a/docs/sources/alerting/unified-alerting/contact-points.md
+++ b/docs/sources/alerting/unified-alerting/contact-points.md
@@ -2,39 +2,53 @@
title = "Contact points"
description = "Create or edit contact point"
keywords = ["grafana", "alerting", "guide", "contact point", "notification channel", "create"]
-weight = 400
+weight = 430
+++

# Contact points

-Contact points define where to send notifications about alerts that match a particular [notification policy]({{< relref "./notification-policies.md" >}}). A contact point can contain one or more contact point types, eg email, slack, webhook and so on. A notification will dispatched to all contact point types defined on a contact point. [Templating]({{< relref "./message-templating/_index.md" >}}) can be used to customize contact point type message with alert data. Grafana alerting UI can be used to configure both Grafana managed contact points and contact points for an [external Alertmanager if one is configured]({{< relref "../../datasources/alertmanager.md" >}}).
+Use contact points to define how your contacts are notified when an alert fires. A contact point can have one or more contact point types, for example, email, Slack, webhook, and so on.
When an alert fires, a notification is sent to all contact point types listed for a contact point. Optionally, use [message templates]({{< relref "./message-templating/_index.md" >}}) to customize notification messages for the contact point types.
-Grafana alerting UI allows you to configure contact points for the Grafana managed alerts (handled by the embedded Alertmanager) as well as contact points for an [external Alertmanager if one is configured]({{< relref "../../datasources/alertmanager.md" >}}), using the Alertmanager dropdown.
-
-> **Note:** In v8.0 and v8.1, the configuration of the embedded Alertmanager was shared across organisations. Users running one of these versions are advised to use the new Grafana 8 Alerts only if they have one organisation otherwise contact points for the Grafana managed alerts will be visible by all organizations.
+You can configure Grafana managed contact points as well as contact points for an [external Alertmanager data source]({{< relref "../../datasources/alertmanager.md" >}}). For more information, see [Alertmanager]({{< relref "./fundamentals/alertmanager.md" >}}).

## Add a contact point

-1. In the Grafana side bar, hover your cursor over the **Alerting** (bell) icon and then click **Contact points**.
-1. Click **Add contact point**.
-1. Enter a **Name** for the contact point
-1. Select contact point type and fill out mandatory fields. **Optional settings** can be expanded for more options.
-1. If you'd like this contact point to notify via multiple channels, for example both email and slack, click **New contact point type** and fill out additional contact point type details.
-1. Click **Save contact point** button at the bottom of the page.
+1. In the Grafana menu, click the **Alerting** (bell) icon to open the Alerting page listing existing alerts.
+1. Click **Contact points** to open the page listing existing contact points.
+1. Click **New contact point**.
+1.
From the **Alertmanager** dropdown, select an Alertmanager. By default, Grafana Alertmanager is selected. +1. In **Name**, enter a descriptive name for the contact point. +1. From **Contact point type**, select a type and fill out mandatory fields. For example, if you choose email, enter the email addresses. Or if you choose Slack, enter the Slack channel(s) and users who should be contacted. +1. Some contact point types, like email or webhook, have optional settings. In **Optional settings**, specify additional settings for the selected contact point type. +1. In Notification settings, optionally select **Disable resolved message** if you do not want to be notified when an alert resolves. +1. To add another contact point type, click **New contact point type** and repeat steps 6 through 8. +1. Click **Save contact point** to save your changes. -## Editing a contact point +## Edit a contact point -1. In the Grafana side bar, hover your cursor over the **Alerting** (bell) icon and then click **Contact points**. -1. Find the contact point you want to edit in the contact points table and click the **pen icon** on the right side. -1. Make any changes and click **Save contact point** button at the bottom of the page. +1. In the Alerting page, click **Contact points** to open the page listing existing contact points. +1. Find the contact point to edit, then click **Edit** (pen icon). +1. Make any changes and click **Save contact point**. -## Deleting a contact point +## Delete a contact point -1. In the Grafana side bar, hover your cursor over the **Alerting** (bell) icon and then click **Contact points**. -1. Find the contact point you want to edit in the contact points table and click the **trash can icon** on the right side. -1. A confirmation dialog will open. Click **Yes, delete**. +1. In the Alerting page, click **Contact points** to open the page listing existing contact points. +1. Find the contact point to delete, then click **Delete** (trash icon). +1. 
In the confirmation dialog, click **Yes, delete**.
-**Note** You will not be able to delete contact points that are currently used by any notification policy. If you want to delete such contact point, you will have to first go to [notification policies]({{< relref "./notification-policies.md" >}}) and delete the policy or update it to use another contact point.
+> **Note:** You cannot delete contact points that are in use by a notification policy. You will have to either delete the [notification policy]({{< relref "./notification-policies.md" >}}) or update it to use another contact point.
+
+## Edit Alertmanager global config
+
+To edit global configuration options for an external Alertmanager, such as the SMTP server used by default for all email contact types:
+
+1. In the Alerting page, click **Contact points** to open the page listing existing contact points.
+1. From the **Alertmanager** drop-down, select an external Alertmanager data source.
+1. Click the **Edit global config** option.
+1. Add global configuration settings.
+1. Click **Save global config** to save your changes.
+
+> **Note:** This option is available only for external Alertmanagers. You can configure some global options for Grafana contact types, like email settings, via [Grafana configuration]({{< relref "../../administration/configuration.md" >}}).

## List of notifiers supported by Grafana

@@ -60,7 +74,7 @@ Grafana alerting UI allows you to configure contact points for the Grafana manag

| [Webhook](#webhook) | `webhook` |
| [Zenduty](#zenduty) | `webhook` |

-## Webhook
+### Webhook

Example JSON body:

@@ -166,33 +180,6 @@ Example JSON body:

| dashboardURL | string | **Will be deprecated soon** |
| panelURL | string | **Will be deprecated soon** |

-### Breaking changes when updating to unified alerting
-
-Grafana 8 alerts introduce a new way to manage your alerting rules and alerts in Grafana.
-As part of this change, there are some breaking changes that we will explain in details.
- -#### Multiple Alerts in one payload - -As we now enable [multi dimensional alerting]({{< relref "../difference-old-new.md#multi-dimensional-alerting" >}}) a payload -consists of an array of alerts. - #### Removed fields related to dashboards Alerts are not coupled to dashboards anymore therefore the fields related to dashboards `dashboardId` and `panelId` have been removed. - -## Manage contact points for an external Alertmanager - -Grafana alerting UI supports managing external Alertmanager configuration. Once you add an [Alertmanager data source]({{< relref "../../datasources/alertmanager.md" >}}), a dropdown displays at the top of the page where you can select either `Grafana` or an external Alertmanager as your data source. - -{{< figure max-width="40%" src="/static/img/docs/alerting/unified/contact-points-select-am-8-0.gif" caption="Select Alertmanager" >}} - -### Edit Alertmanager global config - -To edit global configuration options for an alertmanager, like SMTP server that is used by default for all email contact types: - -1. In the Grafana side bar, hover your cursor over the **Alerting** (bell) icon and then click **Contact points**. -1. In the dropdown at the top of the page, select an Alertmanager data source. -1. Click **Edit global config** button at the bottom of the page. -1. Fill out the form and click **Save global config**. - -**Note** this is only for external Alertmanagers. Some global options for Grafana contact types, like email settings, can be configured via [Grafana configuration]({{< relref "../../administration/configuration.md" >}}). 
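The webhook notifier section above notes that a Grafana 8 notification payload carries an array of alerts and no longer includes the dashboard-related `dashboardId` and `panelId` fields. A minimal receiver might decode it as in this sketch; the struct models only a pared-down assumption of the documented fields (`status`, `alerts`, `labels`), and the endpoint path is hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
)

// payload is a simplified view of a Grafana 8 webhook notification:
// one POST body carries an array of alert instances. Only a subset of
// fields is modeled here; consult the payload reference for the schema.
type payload struct {
	Status string `json:"status"`
	Alerts []struct {
		Status string            `json:"status"`
		Labels map[string]string `json:"labels"`
	} `json:"alerts"`
}

// decode parses a webhook body into the simplified payload.
func decode(body []byte) (payload, error) {
	var p payload
	err := json.Unmarshal(body, &p)
	return p, err
}

// handle logs every alert instance carried by one notification.
func handle(w http.ResponseWriter, r *http.Request) {
	b, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	p, err := decode(b)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	for _, a := range p.Alerts {
		fmt.Printf("%s: %v\n", a.Status, a.Labels)
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	// The path is hypothetical; point the webhook contact point at it.
	http.HandleFunc("/grafana-webhook", handle)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```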
diff --git a/docs/sources/alerting/unified-alerting/difference-old-new.md b/docs/sources/alerting/unified-alerting/difference-old-new.md new file mode 100644 index 00000000000..832ec387ffc --- /dev/null +++ b/docs/sources/alerting/unified-alerting/difference-old-new.md @@ -0,0 +1,26 @@ ++++ +title = "What's new in Grafana 8 alerting" +description = "What's New with Grafana 8 Alerts" +keywords = ["grafana", "alerting", "guide"] +weight = 114 ++++ + +# What's new in Grafana 8 alerting + +Grafana 8.0 alerting has several enhancements over legacy dashboard alerting. + +## Multi-dimensional alerting + +You can now create alerts that give you system-wide visibility with a single alerting rule. Generate multiple alert instances from a single alert rule. For example, you can create a rule to monitor the disk usage of multiple mount points on a single host. The evaluation engine returns multiple time series from a single query, with each time series identified by its label set. + +## Create alerts outside of Dashboards + +Unlike legacy dashboard alerts, Grafana 8 alerts allow you to create queries and expressions that combine data from multiple sources in unique ways. You can still link dashboards and panels to alerting rules using their ID and quickly troubleshoot the system under observation. + +## Create Loki and Cortex alerting rules + +In Grafana 8 alerting, you can manage Loki and Cortex alerting rules using the same UI and API as your Grafana managed alerts. + +## View and search for alerts from Prometheus compatible data sources + +Alerts for Prometheus compatible data sources are now listed under the Grafana alerts section. You can search for labels across multiple data sources to quickly find relevant alerts. 
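Multi-dimensional alerting, described above, means one rule evaluation yields one alert instance per labeled time series. A hypothetical Go sketch of that idea (the label names, values, and condition are invented for illustration; this is not Grafana's evaluation engine):

```go
package main

import "fmt"

// seriesValue is one labeled value returned by the evaluation engine:
// a single query can yield many series, each identified by its label set.
type seriesValue struct {
	Labels map[string]string
	Value  float64
}

// evaluate applies one rule condition to every series, producing one
// alert instance per label set.
func evaluate(series []seriesValue, firing func(float64) bool) map[string]string {
	states := make(map[string]string)
	for _, s := range series {
		key := fmt.Sprintf("{host=%s,mountpoint=%s}", s.Labels["host"], s.Labels["mountpoint"])
		if firing(s.Value) {
			states[key] = "Alerting"
		} else {
			states[key] = "Normal"
		}
	}
	return states
}

func main() {
	// Percent of free disk space per mount point on one host.
	diskFree := []seriesValue{
		{Labels: map[string]string{"host": "web1", "mountpoint": "/"}, Value: 3},
		{Labels: map[string]string{"host": "web1", "mountpoint": "/var"}, Value: 42},
	}
	// One rule ("free space below 5%") produces an instance per series.
	states := evaluate(diskFree, func(v float64) bool { return v < 5 })
	fmt.Println(states["{host=web1,mountpoint=/}"])    // Alerting
	fmt.Println(states["{host=web1,mountpoint=/var}"]) // Normal
}
```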
diff --git a/docs/sources/alerting/unified-alerting/fundamentals/_index.md b/docs/sources/alerting/unified-alerting/fundamentals/_index.md new file mode 100644 index 00000000000..4b1cddeb243 --- /dev/null +++ b/docs/sources/alerting/unified-alerting/fundamentals/_index.md @@ -0,0 +1,13 @@ ++++ +title = "Alerting fundamentals" +aliases = ["/docs/grafana/latest/alerting/metrics/"] +weight = 120 ++++ + +# Alerting fundamentals + +This section covers the fundamental concepts of Grafana 8 alerting. + +- [Alertmanager]({{< relref "./alertmanager.md" >}}) +- [State and health of alerting rules]({{< relref "./state-and-health.md" >}}) +- [Evaluating Grafana managed alerts]({{< relref "./evaluate-grafana-alerts.md" >}}) diff --git a/docs/sources/alerting/unified-alerting/fundamentals/alertmanager.md b/docs/sources/alerting/unified-alerting/fundamentals/alertmanager.md new file mode 100644 index 00000000000..e6a7d5207cc --- /dev/null +++ b/docs/sources/alerting/unified-alerting/fundamentals/alertmanager.md @@ -0,0 +1,17 @@ ++++ +title = "Alertmanager" +aliases = ["/docs/grafana/latest/alerting/metrics/"] +weight = 116 ++++ + +# Alertmanager + +The Alertmanager helps both group and manage alert rules, adding a layer of orchestration on top of the alerting engines. To learn more, see [Prometheus Alertmanager documentation](https://prometheus.io/docs/alerting/latest/alertmanager/). + +Grafana includes built-in support for Prometheus Alertmanager. By default, notifications for Grafana managed alerts are handled by the embedded Alertmanager that is part of core Grafana. You can configure the Alertmanager's contact points, notification policies, silences, and templates from the alerting UI by selecting the `Grafana` option from the Alertmanager drop-down. + +> **Note:** Before v8.2, the configuration of the embedded Alertmanager was shared across organizations. If you are on an older Grafana version, we recommend that you use Grafana 8 Alerts only if you have one organization. 
Otherwise, your contact points are visible to all organizations.
+
+Grafana 8 alerting added support for external Alertmanager configuration. When you add an [Alertmanager data source]({{< relref "../../../datasources/alertmanager.md" >}}), the Alertmanager drop-down shows a list of available external Alertmanager data sources. Select a data source to create and manage alerting for standalone Cortex or Loki data sources.
+
+{{< figure src="/static/img/docs/alerting/unified/contact-points-select-am-8-0.gif" max-width="250px" caption="Select Alertmanager" >}}
diff --git a/docs/sources/alerting/unified-alerting/fundamentals/evaluate-grafana-alerts.md b/docs/sources/alerting/unified-alerting/fundamentals/evaluate-grafana-alerts.md
new file mode 100644
index 00000000000..8509319f4d5
--- /dev/null
+++ b/docs/sources/alerting/unified-alerting/fundamentals/evaluate-grafana-alerts.md
@@ -0,0 +1,95 @@
++++
+title = "Alerting on numeric data"
+aliases = ["/docs/grafana/latest/alerting/metrics/"]
+weight = 116
++++
+
+# Alerting on numeric data
+
+This topic describes how Grafana managed alerts are evaluated by the backend engine as well as how Grafana handles alerting on numeric rather than time series data.
+
- [Alert evaluation](#alert-evaluation)
- [Alerting on numeric data](#alerting-on-numeric-data)

## Alert evaluation

Grafana managed alerts query the following backend data sources that have alerting enabled:

- built-in data sources or those developed and maintained by Grafana: `Graphite`, `Prometheus`, `Loki`, `InfluxDB`, `Elasticsearch`,
+ `Google Cloud Monitoring`, `Cloudwatch`, `Azure Monitor`, `MySQL`, `PostgreSQL`, `MSSQL`, `OpenTSDB`, and `Oracle`
- community developed backend data sources with alerting enabled (`backend` and `alerting` properties are set in the [plugin.json]({{< relref "../../../developers/plugins/metadata.md" >}}))

### Metrics from the alerting engine

The alerting engine publishes some internal metrics about itself. You can read more about how Grafana publishes [internal metrics]({{< relref "../../../administration/view-server/internal-metrics.md" >}}). See also, [View alert rules and their current state]({{< relref "../alerting-rules/rule-list.md" >}}).
+
| Metric Name | Type | Description |
| ------------------------------------------------- | --------- | ---------------------------------------------------------------------------------------- |
| `grafana_alerting_alerts` | gauge | How many alerts by state |
| `grafana_alerting_request_duration` | histogram | Histogram of requests to the Alerting API |
| `grafana_alerting_active_configurations` | gauge | The number of active, non-default Alertmanager configurations for Grafana managed alerts |
| `grafana_alerting_rule_evaluations_total` | counter | The total number of rule evaluations |
| `grafana_alerting_rule_evaluation_failures_total` | counter | The total number of rule evaluation failures |
| `grafana_alerting_rule_evaluation_duration` | summary | The duration for a rule to execute |
| `grafana_alerting_rule_group_rules` | gauge | The number of rules |

## Alerting on numeric data

With some data sources, numeric data that is not time series can be directly alerted on, or passed into Server Side Expressions (SSE). This allows for more processing and efficiency within the data source, and it can also simplify alert rules. When alerting on numeric data instead of time series data, there is no need to reduce each labeled time series into a single number; instead, labeled numbers are returned to Grafana.

### Tabular Data

This feature is supported with backend data sources that query tabular data:

- SQL data sources such as MySQL, Postgres, MSSQL, and Oracle.
- The Azure Kusto based services: Azure Monitor (Logs), Azure Monitor (Azure Resource Graph), and Azure Data Explorer.

A query with Grafana managed alerts or SSE is considered numeric with these data sources, if:

- The "Format AS" option is set to "Table" in the data source query.
- The table response returned to Grafana from the query includes only one numeric (e.g. int, double, float) column, and optionally additional string columns.
+
If there are string columns, then those columns become labels. The name of the column becomes the label name, and the value for each row becomes the value of the corresponding label. If multiple rows are returned, then each row should be uniquely identified by its labels.

### Example

For a MySQL table called "DiskSpace":

| Time | Host | Disk | PercentFree |
| ----------- | ---- | ---- | ----------- |
| 2021-June-7 | web1 | /etc | 3 |
| 2021-June-7 | web2 | /var | 4 |
| 2021-June-7 | web3 | /var | 8 |
| ... | ... | ... | ... |

You can query the data filtering on time, but without returning the time series to Grafana. For example, an alert that would trigger per Host, Disk when there is less than 5% free space:

```sql
SELECT
  Host,
  Disk,
  CASE WHEN PercentFree < 5.0 THEN PercentFree ELSE 0 END AS PercentFree
FROM (
  SELECT
    Host,
    Disk,
    AVG(PercentFree) AS PercentFree
  FROM DiskSpace
  WHERE $__timeFilter(Time)
  GROUP BY
    Host,
    Disk
) AS avg_disk_free
```

This query returns the following Table response to Grafana:

| Host | Disk | PercentFree |
| ---- | ---- | ----------- |
| web1 | /etc | 3 |
| web2 | /var | 4 |
| web3 | /var | 0 |

When this query is used as the **condition** in an alert rule, rows with a non-zero value are alerting.
As a result, three alert instances are produced:
+
+| Labels | Status |
+| --------------------- | -------- |
+| {Host=web1,disk=/etc} | Alerting |
+| {Host=web2,disk=/var} | Alerting |
+| {Host=web3,disk=/var} | Normal |
diff --git a/docs/sources/alerting/unified-alerting/fundamentals/state-and-health.md b/docs/sources/alerting/unified-alerting/fundamentals/state-and-health.md
new file mode 100644
index 00000000000..be0779aba3e
--- /dev/null
+++ b/docs/sources/alerting/unified-alerting/fundamentals/state-and-health.md
@@ -0,0 +1,30 @@
++++
+title = "State and health of alerting rules"
+description = "State and Health of alerting rules"
+keywords = ["grafana", "alerting", "guide", "state"]
+aliases = ["/docs/grafana/latest/alerting/unified-alerting/alerting-rules/state-and-health/"]
++++
+
+# State and health of alerting rules
+
+The state and health of alerting rules help you understand several key status indicators about your alerts. There are three key components: alert state, alerting rule state, and alerting rule health. Although related, each component conveys subtly different information.
+
+## Alerting rule state
+
+- **Normal**: None of the time series returned by the evaluation engine is in a Pending or Firing state.
+- **Pending**: At least one time series returned by the evaluation engine is Pending.
+- **Firing**: At least one time series returned by the evaluation engine is Firing.
+
+## Alert state
+
+- **Normal**: Condition for the alerting rule is **false** for every time series returned by the evaluation engine.
+- **Alerting**: Condition of the alerting rule is **true** for at least one time series returned by the evaluation engine. The duration for which the condition must be true before an alert fires, if set, has been met or exceeded.
+- **Pending**: Condition of the alerting rule is **true** for at least one time series returned by the evaluation engine.
The duration for which the condition must be true before an alert fires, if set, **has not** been met. +- **NoData**: the alerting rule has not returned a time series, all values for the time series are null, or all values for the time series are zero. +- **Error**: Error when attempting to evaluate an alerting rule. + +## Alerting rule health + +- **Ok**: No error when evaluating an alerting rule. +- **Error**: Error when evaluating an alerting rule. +- **NoData**: The absence of data in at least one time series returned during a rule evaluation. diff --git a/docs/sources/alerting/unified-alerting/grafana-managed-numeric-rule.md b/docs/sources/alerting/unified-alerting/grafana-managed-numeric-rule.md deleted file mode 100644 index 2b5e4727662..00000000000 --- a/docs/sources/alerting/unified-alerting/grafana-managed-numeric-rule.md +++ /dev/null @@ -1,67 +0,0 @@ -+++ -title = "Grafana managed alert rules for numeric data" -description = "Grafana managed alert rules for numeric data" -keywords = ["grafana", "alerting", "guide", "rules", "create"] -weight = 400 -+++ - -# Alerting on numeric data - -Among certain data sources numeric data that is not time series can be directly alerted on, or passed into Server Side Expressions (SSE). This allows for more processing and resulting efficiency within the data source, and it can also simplify alert rules. -When alerting on numeric data instead of time series data, there is no need to reduce each labeled time series into a single number. Instead labeled numbers are returned to Grafana instead. - -## Tabular Data - -This feature is supported with backend data sources that query tabular data: - -- SQL data sources such as MySQL, Postgres, MSSQL, and Oracle. -- The Azure Kusto based services: Azure Monitor (Logs), Azure Monitor (Azure Resource Graph), and Azure Data Explorer. 
- -A query with Grafana managed alerts or SSE is considered numeric with these data sources, if: - -- The "Format AS" option is set to "Table" in the data source query. -- The table response returned to Grafana from the query includes only one numeric (e.g. int, double, float) column, and optionally additional string columns. - -If there are string columns then those columns become labels. The name of column becomes the label name, and the value for each row becomes the value of the corresponding label. If multiple rows are returned, then each row should be uniquely identified their labels. - -## Example - -For a MySQL table called "DiskSpace": - -| Time | Host | Disk | PercentFree | -| ----------- | ---- | ---- | ----------- | -| 2021-June-7 | web1 | /etc | 3 | -| 2021-June-7 | web2 | /var | 4 | -| 2021-June-7 | web3 | /var | 8 | -| ... | ... | ... | ... | - -You can query the data filtering on time, but without returning the time series to Grafana. For example, an alert that would trigger per Host, Disk when there is less than 5% free space: - -```sql -SELECT Host, Disk, CASE WHEN PercentFree < 5.0 THEN PercentFree ELSE 0 END FROM ( - SELECT - Host, - Disk, - Avg(PercentFree) - FROM DiskSpace - Group By - Host, - Disk - Where __timeFilter(Time) -``` - -This query returns the following Table response to Grafana: - -| Host | Disk | PercentFree | -| ---- | ---- | ----------- | -| web1 | /etc | 3 | -| web2 | /var | 4 | -| web3 | /var | 0 | - -When this query is used as the **condition** in an alert rule, then the non-zero will be alerting. 
As a result, three alert instances are produced: - -| Labels | Status | -| --------------------- | -------- | -| {Host=web1,disk=/etc} | Alerting | -| {Host=web2,disk=/var} | Alerting | -| {Host=web3,disk=/var} | Normal | diff --git a/docs/sources/alerting/unified-alerting/message-templating/_index.md b/docs/sources/alerting/unified-alerting/message-templating/_index.md index 93e72fe2abe..19fc8a0a319 100644 --- a/docs/sources/alerting/unified-alerting/message-templating/_index.md +++ b/docs/sources/alerting/unified-alerting/message-templating/_index.md @@ -3,63 +3,58 @@ title = "Message templating" description = "Message templating" aliases = ["/docs/grafana/latest/alerting/message-templating/"] keywords = ["grafana", "alerting", "guide", "contact point", "templating"] -weight = 400 +weight = 440 +++ # Message templating -Notifications sent via [contact points]({{< relref "../contact-points.md" >}}) are built using templates. Grafana comes with default templates which you can customize. Grafana's notification templates are based on the [Go templating system](https://golang.org/pkg/text/template) where some fields are evaluated as text, while others are evaluated as HTML which can affect escaping. Since most of the contact point fields can be templated, you can create reusable templates and them in multiple contact points. See [template data reference]({{< relref "./template-data.md" >}}) to check what variables are available in the templates. The default template is defined in [default_template.go](https://github.com/grafana/grafana/blob/main/pkg/services/ngalert/notifier/channels/default_template.go) which can serve as a useful reference or starting point for custom templates. +Notifications sent via [contact points]({{< relref "../contact-points.md" >}}) are built using messaging templates. 
Grafana's default templates are based on the [Go templating system](https://golang.org/pkg/text/template) where some fields are evaluated as text, while others are evaluated as HTML (which can affect escaping). The default template, defined in [default_template.go](https://github.com/grafana/grafana/blob/main/pkg/services/ngalert/notifier/channels/default_template.go), is a useful reference for custom templates.
-## Using templating in contact point fields
+Since most of the contact point fields can be templated, you can create reusable custom templates and use them in multiple contact points. The [template data]({{< relref "./template-data.md" >}}) topic lists variables that are available for templating.
-This section shows an example of using templating to render a number of firing or resolved alerts in Slack message title, and listing alerts with status and name in the message body:
+### Using templates
-
+The following example shows the use of default templates to render an alert message in Slack. The message title contains a count of firing or resolved alerts and the message body has a list of alerts with status.
-## Reusable templates
+
-You can create named templates and then reuse them in contact point fields or other templates.
+The following example shows the use of a custom template within one of the contact point fields.
-Grafana alerting UI allows you to configure templates for the Grafana managed alerts (handled by the embedded Alertmanager) as well as templates for an [external Alertmanager if one is configured]({{< relref "../../../datasources/alertmanager.md" >}}), using the Alertmanager dropdown.
+
-> **Note:** Before Grafana v8.2, the configuration of the embedded Alertmanager was shared across organisations.
Users of Grafana 8.0 and 8.1 are advised to use the new Grafana 8 Alerts only if they have one organisation. Otherwise, silences for the Grafana managed alerts will be visible by all organizations.
+### Create a message template
-### Create a template
+> **Note:** Before Grafana v8.2, the configuration of the embedded Alertmanager was shared across organizations. Users of Grafana 8.0 and 8.1 are advised to use the new Grafana 8 alerts only if they have one organization. Otherwise, silences for the Grafana managed alerts will be visible by all organizations.
-1. In the Grafana side bar, hover your cursor over the **Alerting** (bell) icon and then click **Contact points**.
+1. In the Grafana menu, click the **Alerting** (bell) icon to open the Alerting page listing existing alerts.
+1. In the Alerting page, click **Contact points** to open the page listing existing contact points.
+1. From the [Alertmanager]({{< relref "../contact-points.md#alertmanager" >}}) drop-down, select an external Alertmanager to create and manage templates for the external data source. Otherwise, keep the default option of Grafana.
+ {{< figure max-width="250px" src="/static/img/docs/alerting/unified/contact-points-select-am-8-0.gif" caption="Select Alertmanager" >}}
1. Click **Add template**.
-1. Fill in **Name** and **Content** fields.
+1. In **Name**, add a descriptive name.
+1. In **Content**, add the content of the template.
1. Click **Save template** button at the bottom of the page.
+
-**Note** The template name used to reference this template in templating is not the value of the **Name** field, but the parameter to `define` tag in the content. When creating a template you can omit `define` entirely and it will be added automatically with same value as **Name** field. It's recommended to use the same name for `define` and **Name** field to avoid confusion.
+The `define` tag in the Content section assigns the template name.
This tag is optional, and when omitted, the template name is derived from the **Name** field. When both are specified, it is a best practice to ensure that they are the same. - +### Edit a message template -### Edit a template +1. In the Alerting page, click **Contact points** to open the page listing existing contact points. +1. In the Template table, find the template you want to edit, then click the **Edit** (pen icon). +1. Make your changes, then click **Save template**. -1. In the Grafana side bar, hover your cursor over the **Alerting** (bell) icon and then click **Contact points**. -1. Find the template you want to edit in the templates table and click the **pen icon** on the right side. -1. Make any changes and click **Save template** button at the bottom of the page. +### Delete a message template -### Delete a template +1. In the Alerting page, click **Contact points** to open the page listing existing contact points. +1. In the Template table, find the template you want to delete, then click the **Delete** (trash icon). +1. In the confirmation dialog, click **Yes, delete** to delete the template. -1. In the Grafana side bar, hover your cursor over the **Alerting** (bell) icon and then click **Contact points**. -1. Find the template you want to edit in the templates table and click the **trash can icon** on the right side. -1. A confirmation dialog will open. Click **Yes, delete**. +Use caution when deleting a template since Grafana does not prevent you from deleting templates that are in use. -**Note** You are not prevented from deleting templates that are in use somewhere in contact points or other templates. Be careful! +### Custom template examples -### Use a template in a contact point field - -To use a template: - -Enter `{{ template "templatename" . }}` into a contact point field, where `templatename` is the `define` parameter of a template. 
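Grafana's notification templates build on Go's text/template package, so the relationship between `define` and the name used in `{{ template "templatename" . }}` can be illustrated with a standalone Go sketch (the template name and data below are hypothetical):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// A named template is declared with `define`; the name passed to
// `define`, not the **Name** field, is what `{{ template ... }}` resolves.
const content = `{{ define "mytemplate" }}{{ .Count }} alert(s) firing{{ end }}`

// render parses the named template, then invokes it the way a contact
// point field would: {{ template "mytemplate" . }}.
func render(count int) string {
	base := template.Must(template.New("base").Parse(content))
	field := template.Must(base.New("field").Parse(`Summary: {{ template "mytemplate" . }}`))
	var buf bytes.Buffer
	if err := field.Execute(&buf, map[string]int{"Count": count}); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Println(render(2)) // Summary: 2 alert(s) firing
}
```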
- - - -### Template examples - -Here is an example of a template to render a single alert: +Template to render a single alert: ``` {{ define "alert" }} @@ -100,9 +95,3 @@ Template to render entire notification message: {{ end }} {{ end }} ``` - -## Manage templates for an external Alertmanager - -Grafana alerting UI supports managing external Alertmanager configuration. Once you add an [Alertmanager data source]({{< relref "../../../datasources/alertmanager.md" >}}), a dropdown displays at the top of the page, allowing you to select either `Grafana` or an external Alertmanager data source. - -{{< figure max-width="40%" src="/static/img/docs/alerting/unified/contact-points-select-am-8-0.gif" caption="Select Alertmanager" >}} diff --git a/docs/sources/alerting/unified-alerting/notification-policies.md b/docs/sources/alerting/unified-alerting/notification-policies.md index 81e82147e31..4a92369d7a7 100644 --- a/docs/sources/alerting/unified-alerting/notification-policies.md +++ b/docs/sources/alerting/unified-alerting/notification-policies.md @@ -2,72 +2,79 @@ title = "Notification policies" description = "Notification policies" keywords = ["grafana", "alerting", "guide", "notification policies", "routes"] -weight = 400 +weight = 450 +++ # Notification policies -Notification policies determine how alerts are routed to contact points. Policies have a tree structure, where each policy can have one or more child policies. Each policy except for the root policy can also match specific alert labels. Each alert enters policy tree at the root and then traverses each child policy. If `Continue matching subsequent sibling nodes` is not checked, it stops at the first matching node, otherwise, it continues matching it's siblings as well. If an alert does not match any children of a policy, the alert is handled based on the configuration settings of this policy and notified to the contact point configured on this policy. 
Alert that does not match any specific policy is handled by the root policy. +Notification policies determine how alerts are routed to contact points. Policies have a tree structure, where each policy can have one or more child policies. Each policy, except for the root policy, can also match specific alert labels. Each alert is evaluated by the root policy and subsequently by each child policy. If the `Continue matching subsequent sibling nodes` option is enabled for a specific policy, then evaluation continues even after one or more matches. A parent policy’s configuration settings and contact point information govern the behavior of an alert that does not match any of the child policies. A root policy governs any alert that does not match a specific policy. -Grafana alerting UI allows you to configure notification policies for the Grafana managed alerts (handled by the embedded Alertmanager) as well as notification policies for an [external Alertmanager if one is configured]({{< relref "../../datasources/alertmanager.md" >}}), using the Alertmanager dropdown. +You can configure Grafana managed notification policies as well as notification policies for an [external Alertmanager data source]({{< relref "../../datasources/alertmanager.md" >}}). For more information, see [Alertmanager]({{< relref "./fundamentals/alertmanager.md" >}}). + +## Edit root notification policy > **Note:** Before Grafana v8.2, the configuration of the embedded Alertmanager was shared across organisations. Users of Grafana 8.0 and 8.1 are advised to use the new Grafana 8 Alerts only if they have one organisation. Otherwise, silences for the Grafana managed alerts will be visible by all organizations. -## Edit notification policies +1. In the Grafana menu, click the **Alerting** (bell) icon to open the Alerting page listing existing alerts. +1. Click **Notification policies**. +1. From the **Alertmanager** dropdown, select an external Alertmanager.
By default, the Grafana Alertmanager is selected. +1. In the Root policy section, click **Edit** (pen icon). +1. In **Default contact point**, update the [contact point]({{< relref "./contact-points.md" >}}) to whom notifications are sent when alert rules do not match any specific policy. +1. In **Group by**, choose labels to group alerts by. If multiple alerts are matched for this policy, then they are grouped by these labels. A notification is sent per group. If the field is empty (default), then all notifications are sent in a single group. Use a special label `...` to group alerts by all labels (which effectively disables grouping). +1. In **Timing options**, select from the following options: + - **Group wait** Time to wait to buffer alerts of the same group before sending an initial notification. Default is 30 seconds. + - **Group interval** Minimum time interval between two notifications for a group. Default is 5 minutes. + - **Repeat interval** Minimum time interval for re-sending a notification if no new alerts were added to the group. Default is 4 hours. +1. Click **Save** to save your changes. -To access notification policy editing page, In the Grafana side bar, hover your cursor over the **Alerting (bell)** icon and then click **Notification policies**. ## Add new specific policy -### Edit root notification policy +1. In the Grafana menu, click the **Alerting** (bell) icon to open the Alerting page listing existing alerts. +1. Click **Notification policies**. +1. From the **Alertmanager** dropdown, select an Alertmanager. By default, the Grafana Alertmanager is selected. +1. To add a top-level specific policy, go to the **Specific routing** section and click **New specific policy**. +1. In the **Matching labels** section, add one or more rules for matching alert labels. For more information, see ["How label matching works"](#how-label-matching-works). +1.
In **Contact point**, add the [contact point]({{< relref "./contact-points.md" >}}) to send notifications to if the alert matches only this specific policy and not any of the nested policies. +1. Optionally, enable **Continue matching subsequent sibling nodes** to continue matching subsequent sibling policies even after the alert matches this policy. When this option is enabled, you can get more than one notification. Use it to send notifications to a catch-all contact point as well as to one or more specific contact points handled by subsequent policies. +1. Optionally, enable **Override grouping** to specify grouping options for this policy, using the same fields as the root policy. If this option is not enabled, the root policy grouping is used. +1. Optionally, enable **Override general timings** to override the timing options configured in the root notification policy. +1. Click **Save policy** to save your changes. -1. Click **edit** button on the top right of the root policy box. -1. Make changes and click **save** button. ## Add nested policy -### Add new specific policy +1. Expand the specific policy you want to update. +1. Click **Add nested policy**, then add the details using information in [Add new specific policy](#add-new-specific-policy). +1. Click **Save policy** to save your changes. -To add a top level specific policy, click **New policy** button in the **Specific routing** section, fill in the form and click **Save policy**. ## Edit specific policy -To add a nested policy to an existing specific policy, expand the parent policy in specific routing table and click **Add nested policy**. fill in the form and click **Save policy**. +1. In the Alerting page, click **Notification policies** to open the page listing existing policies. +1. Find the policy you want to edit, then click **Edit** (pen icon). +1. Make any changes using instructions in [Add new specific policy](#add-new-specific-policy). +1. Click **Save policy**.
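The routing behavior described in the steps above (first matching policy wins unless `Continue matching subsequent sibling nodes` is enabled) can be sketched as follows. This is an illustration of the semantics only, not Grafana's implementation; the dictionary shape and the simplified equality-only `matches` helper are hypothetical.

```python
def matches(matchers, labels):
    """Simplified equality-only label matching (illustration only)."""
    return all(labels.get(name) == value for name, value in matchers.items())


def route(policy, alert_labels):
    """Walk a policy tree and collect the contact points to notify.

    `policy` is a hypothetical dict model of a notification policy:
    {"contact_point": str, "matchers": dict, "continue": bool, "children": list}
    """
    notified = []
    for child in policy.get("children", []):
        if matches(child.get("matchers", {}), alert_labels):
            notified.extend(route(child, alert_labels))
            if not child.get("continue", False):
                break  # first match wins unless "continue" is enabled
    if not notified:
        # No child matched: this policy's own contact point handles the alert.
        notified.append(policy["contact_point"])
    return notified


root = {
    "contact_point": "default-slack",
    "children": [
        {"contact_point": "dev-channel", "matchers": {"cluster": "dev"}},
        {"contact_point": "pagerduty", "matchers": {"severity": "critical"},
         "continue": True},
    ],
}

print(route(root, {"cluster": "dev"}))        # ['dev-channel']
print(route(root, {"severity": "critical"}))  # ['pagerduty']
print(route(root, {"team": "db"}))            # ['default-slack']
```

Note how an alert that matches no specific policy falls through to the parent's contact point, mirroring the root-policy behavior described above.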
-### Edit specific policy +## How label matching works -To edit a specific policy, find it in the specific routing table and click **Edit** button. Make your changes and click **Save policy**. - -### Root policy fields - -- **Default contact point -** The [contact point]({{< relref "./contact-points.md" >}}) to send notifications to that did not match any specific policy. -- **Group by -** Labels to group alerts by. If multiple alerts are matched for this policy, they will be grouped based on these labels and a notification will be sent per group. If the field is empty (default), then all notifications are sent in a single group. Use a special label `...` to group alerts by all labels, which effectively disables grouping. - -Group timing options - -- **Group wait -** - How long to wait to buffer alerts of the same group before sending a notification initially. Default is 30 seconds. -- **Group interval -** - How long to wait before sending an notification when an alert has been added to a group for which there has already been a notification. Default is 5 minutes. -- **Repeat interval -** - How long to wait before re-sending a notification after one has already been sent and no new alerts were added to the group. Default is 4 hours. - -### Specific routing policy fields - -- **Contact point -** The [contact point]({{< relref "./contact-points.md" >}}) to send notification to if alert matched this specific policy but did not match any of it's nested policies, or there were no nested specific policies. -- **Matching labels -** Rules for matching alert labels. See ["How label matching works"](#how-label-matching-works) below for details. -- **Continue matching subsequent sibling nodes -** If not enabled and an alert matches this policy but not any of it's nested policies, matching will stop and a notification will be sent to the contact point defined on this policy. 
If enabled, notification will be sent but alert will continue matching subsequent siblings of this policy, thus sending more than one notification. Use this if for example you want to send notification to a catch-all contact point as well as to one of more specific contact points handled by subsequent policies. -- **Override grouping** - Toggle if you want to override grouping for this policy. If toggled, you will be able to specify grouping same as for root policy described above. If not toggled, root policy grouping will be used. -- **Override group timings** Toggle if you want to override group timings for this policy. If toggled, you will be able to specify group timings same as for root policy described above. If not toggled, root policy group timings will be used. - -### How label matching works - -A policy will match an alert if alert's labels match all of the "Matching Labels" specified on the policy. +A policy will match an alert if the alert's labels match all the "Matching Labels" specified on the policy. - The **Label** field is the name of the label to match. It must exactly match the label name. -- The **Value** field matches against the corresponding value for the specified **Label** name. How it matches depends on the **Regex** and **Equal** checkboxes. -- The **Regex** checkbox specifies if the inputted **Value** should be matched against labels as a regular expression. The regular expression is always anchored. If not selected it is an exact string match. -- The **Equal** checkbox specifies if the match should include alert instances that match or do not match. If not checked, the silence includes alert instances _do not_ match. +- The **Operator** field is the operator to match against the label value. The available operators are: -## Example setup + - `=`: Select labels that are exactly equal to the provided string. + - `!=`: Select labels that are not equal to the provided string. + - `=~`: Select labels that regex-match the provided string. 
+ - `!~`: Select labels that do not regex-match the provided string. -One usage example would be: +- The **Value** field matches against the corresponding value for the specified **Label** name. How it matches depends on the **Operator** value. -- Create a "default" contact point for most alerts with a non invasive contact point type, like a slack message, and set it on root policy -- Edit root policy grouping to group alerts by `cluster`, `namespace` and `alertname` so you get a notification per alert rule and specific k8s cluster & namespace. -- Create specific route for alerts coming from development cluster with an appropriate contact point -- Create a specific route for alerts with "critical" severity with a more invasive contact point type, like pager duty notification -- Create specific routes for particular teams that handle their own onduty rotations +## Example -![Notification policies screenshot](/static/img/docs/alerting/unified/notification-policies-8-0.png 'Notification policies screenshot') +The following is an example alert configuration: + +- Create a "default" contact point for Slack notifications, and set it on the root policy. +- Edit the root policy grouping to group alerts by `cluster`, `namespace` and `severity` so that you get a notification per Kubernetes cluster, namespace, and severity. +- Create a specific route for alerts coming from the development cluster with an appropriate contact point. +- Create a specific route for alerts with "critical" severity with a more invasive contact point type, like a PagerDuty notification. +- Create specific routes for particular teams that handle their own on-duty rotations.
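The operator semantics from "How label matching works" can be illustrated with a short sketch. This is not Grafana's implementation, just an assumed model of one matcher evaluated against a label value; note that regular expression matchers are anchored, as in Alertmanager.

```python
import re


def label_matches(label_value, operator, value):
    """Evaluate one matcher against a label value (illustration only)."""
    if operator == "=":
        return label_value == value
    if operator == "!=":
        return label_value != value
    if operator == "=~":
        # fullmatch() anchors the regex at both ends
        return re.fullmatch(value, label_value) is not None
    if operator == "!~":
        return re.fullmatch(value, label_value) is None
    raise ValueError(f"unknown operator: {operator}")


# For an alert labeled severity=critical, cluster=dev-us-east:
print(label_matches("critical", "=", "critical"))    # True
print(label_matches("dev-us-east", "=~", "dev-.*"))  # True
print(label_matches("dev-us-east", "!~", "prod-.*")) # True
print(label_matches("critical", "!=", "critical"))   # False
```

A policy matches an alert only when every one of its matchers evaluates to true in this way.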
+ +{{< figure src="/static/img/docs/alerting/unified/notification-policies-8-0.png" max-width="650px" caption="Notification policies" >}} diff --git a/docs/sources/alerting/unified-alerting/opt-in.md b/docs/sources/alerting/unified-alerting/opt-in.md index 04d0a987346..2c6536ded7c 100644 --- a/docs/sources/alerting/unified-alerting/opt-in.md +++ b/docs/sources/alerting/unified-alerting/opt-in.md @@ -1,42 +1,40 @@ +++ -title = "Opt-in to Grafana 8 Alerts" +title = "Opt-in to Grafana 8 alerting" description = "Enable Grafana 8 Alerts" -weight = 128 +weight = 115 +++ -# Opt-in to Grafana 8 alerts +# Opt-in to Grafana 8 alerting -This topic describes how to enable Grafana 8 alerts as well as the rules and restrictions that govern the migration of existing dashboard alerts to this new alerting system. You can also [disable Grafana 8 alerts]({{< relref "./opt-in.md#disable-grafana-8-alerts" >}}) if needed. +This topic describes how to opt in to Grafana 8 alerting and the rules and restrictions that govern the migration of existing dashboard alerts to the new alerting system. You can [disable Grafana 8 alerts]({{< relref "./opt-in.md#disable-grafana-8-alerts" >}}) and use legacy dashboard alerting if needed. Before you begin, we recommend that you backup Grafana's database. If you are using PostgreSQL as the backend database, then the minimum required version is 9.5. -## Enable Grafana 8 alerts +## Enable Grafana 8 alerting To enable Grafana 8 alerts: -1. Go to your custom configuration file located in $WORKING_DIR/conf/custom.ini. -1. In the [unified alerts]({{< relref "../../administration/configuration.md#unified_alerting" >}}) section, set the `enabled` property to `true`. -1. Next, in the [alerting]({{< relref "../../administration/configuration.md#alerting" >}}) section of the configuration file, update the configuration for the legacy dashboard alerts by setting the `enabled` property to `false`. +1.
In your custom configuration file ($WORKING_DIR/conf/custom.ini), go to the [unified alerts]({{< relref "../../administration/configuration.md#unified_alerting" >}}) section. +1. Set the `enabled` property to `true`. +1. Next, for [legacy dashboard alerting]({{< relref "../../administration/configuration.md#alerting" >}}), set the `enabled` flag to `false`. 1. Restart Grafana for the configuration changes to take effect. -> **Note:** Before Grafana v8.2, to enable or disable Grafana 8 alerts, users configured the `ngalert` feature toggle. This toggle option is no longer available. +> **Note:** The `ngalert` toggle previously used to enable or disable Grafana 8 alerting is no longer available. -> **Note:** There is no `Keep Last State` option for [`No Data` and `Error handling`]({{< relref "./alerting-rules/create-grafana-managed-rule/#no-data--error-handling" >}}) in Grafana 8 alerts. This option becomes `Alerting` during the legacy rule migration. - -Moreover, before v8.2, notification logs and silences were stored on a disk. If you did not use persistent disks, any configured silences and logs would get lost on a restart, resulting in unwanted or duplicate notifications. - -As of Grafana 8.2, we no longer require the use of a persistent disk. Instead, the notification logs and silences are stored regularly (every 15 minutes), and a clean shutdown to the database. If you used the file-based approach, Grafana will read the existing file and persisting it eventually. +Before v8.2, notification logs and silences were stored on a disk. If you did not use persistent disks, you would have lost any configured silences and logs on a restart, resulting in unwanted or duplicate notifications. We no longer require the use of a persistent disk. Instead, the notification logs and silences are stored in the database regularly (every 15 minutes) and during a clean shutdown. If you used the file-based approach, Grafana reads the existing file and persists it eventually.
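Taken together, the steps above amount to a change like the following in `custom.ini`. This is a sketch based on the section names linked above; check the configuration reference for your Grafana version before applying it.

```ini
# $WORKING_DIR/conf/custom.ini — enable Grafana 8 (unified) alerting
[unified_alerting]
enabled = true

# disable the legacy dashboard alerting engine
[alerting]
enabled = false
```

Swapping the two `enabled` values reverses the change, which is what the "Disable Grafana 8 alerts" procedure below describes.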
## Migrating legacy alerts to Grafana 8 alerting system -When Grafana 8 alerting is enabled, existing legacy dashboard alerts migrate in a format compatible with the Grafana 8 alerting system. In the Alerting page of your Grafana instance, you can view the migrated alerts alongside new alerts. +When Grafana 8 alerting is enabled, existing legacy dashboard alerts migrate in a format compatible with Grafana 8 alerting. In the Alerting page of your Grafana instance, you can view the migrated alerts alongside new alerts. -Read and write access to legacy dashboard alerts was governed by the dashboard and folder permissions storing them. In Grafana 8, alerts inherit the permissions of the folders they are stored in. During migration, legacy dashboard alert permissions are matched to the new rules permissions as follows: +Read and write access to legacy dashboard alerts and Grafana 8 alerts is governed by the permissions of the folders storing them. During migration, legacy dashboard alert permissions are matched to the new rules permissions as follows: - If alert's dashboard has permissions, it will create a folder named like `Migrated {"dashboardUid": "UID", "panelId": 1, "alertId": 1}` to match permissions of the dashboard (including the inherited permissions from the folder). - If there are no dashboard permissions and the dashboard is under a folder, then the rule is linked to this folder and inherits its permissions. - If there are no dashboard permissions and the dashboard is under the General folder, then the rule is linked to the `General Alerting` folder, and the rule inherits the default permissions. +> **Note:** Since there is no `Keep Last State` option for [`No Data` and `Error handling`]({{< relref "./alerting-rules/create-grafana-managed-rule/#no-data--error-handling" >}}) in Grafana 8 alerting, this option becomes `Alerting` during the legacy rules migration.
+ Notification channels are migrated to an Alertmanager configuration with the appropriate routes and receivers. Default notification channels are added as contact points to the default route. Notification channels not associated with any Dashboard alert go to the `autogen-unlinked-channel-recv` route. Since `Hipchat` and `Sensu` notification channels are no longer supported, legacy alerts associated with these channels are not automatically migrated to Grafana 8 alerting. Assign the legacy alerts to a supported notification channel so that you continue to receive notifications for those alerts. @@ -44,15 +42,15 @@ Silences (expiring after one year) are created for all paused dashboard alerts. ### Limitation -Grafana 8 alerting system can retrieve rules from all available Prometheus, Loki, and Alertmanager data sources. It might not be able to fetch rules from all other supported data sources at this time. +Grafana 8 alerting system can retrieve rules from all available Prometheus, Loki, and Alertmanager data sources. It might not be able to fetch alerting rules from all other supported data sources at this time. ## Disable Grafana 8 alerts To disable Grafana 8 alerts and enable legacy dashboard alerts: -1. Go to your custom configuration file located in $WORKING_DIR/conf/custom.ini. -1. In the [unified alerts]({{< relref "../../administration/configuration.md#unified_alerting" >}}) section, set the `enabled` property to `false`. -1. Next, in the [alerting]({{< relref "../../administration/configuration.md#alerting" >}}) section of the configuration file, update the configuration for the legacy dashboard alerts by setting the `enabled` property to `true`. +1. In your custom configuration file ($WORKING_DIR/conf/custom.ini), go to the [Grafana 8 alerting]({{< relref "../../administration/configuration.md#unified_alerting" >}}) section. +1. Set the `enabled` property to `false`. +1. 
For [legacy dashboard alerting]({{< relref "../../administration/configuration.md#alerting" >}}), set the `enabled` flag to `true`. 1. Restart Grafana for the configuration changes to take effect. -> **Note:** If you choose to migrate from Grafana 8 alerts to legacy dashboard alerts, you will lose any new alerts that you created in the Grafana 8 alerting system. +> **Note:** If you choose to migrate from Grafana 8 alerting to legacy dashboard alerting, you will lose any new alerts created in the Grafana 8 alerting system. diff --git a/docs/sources/alerting/unified-alerting/silences.md b/docs/sources/alerting/unified-alerting/silences.md index 2f395d8982a..6804cd6fddc 100644 --- a/docs/sources/alerting/unified-alerting/silences.md +++ b/docs/sources/alerting/unified-alerting/silences.md @@ -1,17 +1,17 @@ +++ -title = "Silence alert notifications" -description = "Silence alert notifications" +title = "Silences" +description = "Silences alert notifications" keywords = ["grafana", "alerting", "silence", "mute"] weight = 400 +++ -# Silence alert notifications +# Silences -Grafana allows to you to prevent notifications from one or more alert rules by creating a silence. This silence lasts for a specified window of time. +Use silences to stop notifications from one or more alerting rules. A silence lasts only for a specified window of time. Silences do not prevent alert rules from being evaluated, nor do they stop alert instances from being shown in the user interface. Silences only stop notifications from being created.
-Grafana alerting UI allows you to configure silences for the Grafana managed alerts (handled by the embedded Alertmanager) as well as silences for an [external Alertmanager if one is configured]({{< relref "../../datasources/alertmanager.md" >}}), using the Alertmanager dropdown. +You can configure Grafana managed silences as well as silences for an [external Alertmanager data source]({{< relref "../../datasources/alertmanager.md" >}}). For more information, see [Alertmanager]({{< relref "./fundamentals/alertmanager.md" >}}). > **Note:** Before Grafana v8.2, the configuration of the embedded Alertmanager was shared across organisations. Users of Grafana 8.0 and 8.1 are advised to use the new Grafana 8 Alerts only if they have one organisation. Otherwise, silences for the Grafana managed alerts will be visible by all organizations. @@ -19,18 +19,20 @@ Grafana alerting UI allows you to configure silences for the Grafana managed ale To add a silence: -1. In the Grafana menu, hover your cursor over the **Alerting** (bell) icon and then select **Silences** (crossed out bell icon). -1. Click the **New Silence** button. -1. Select the start and end date in **Silence start and end** to indicate when the silence should go into effect and expire. -1. Optionally, update the **Duration** to alter the time for the end of silence in the previous step to correspond to the start plus the duration. -1. Enter one or more _Matching Labels_ by filling out the **Name** and **Value** fields. Matchers determine which rules the silence will apply to. -1. Enter a **Comment**. -1. Enter the name of the owner in **Creator**. +1. In the Grafana menu, click the **Alerting** (bell) icon to open the Alerting page listing existing alerts. +1. In the Alerting page, click **Silences** to open the page listing existing silences. +1.
From the [Alertmanager]({{< relref "./contact-points.md/#alertmanager" >}}) drop-down, select an external Alertmanager to create and manage silences for the external data source. Otherwise, keep the default option of Grafana. +1. Click **New Silence** to open the Create silence page. +1. In **Silence start and end**, select the start and end date to indicate when the silence should go into effect and expire. +1. Optionally, in **Duration**, specify how long the silence is enforced. This automatically updates the end time in the **Silence start and end** field. +1. In the **Name** and **Value** fields, enter one or more _Matching Labels_. Matchers determine which rules the silence will apply to. For more information, see [Label matching for alert suppression](#label-matching-for-alert-suppression). +1. In **Comment**, add details about the silence. +1. In **Creator**, enter the name of the silence owner or keep the default owner. 1. Click **Create**. -## How label matching works +### Label matching for alert suppression -Alert instances that have labels that match all of the "Matching Labels" specified in the silence will have their notifications suppressed. +Grafana suppresses notifications only for alerts with labels that match all the "Matching Labels" specified in the silence. - The **Label** field is the name of the label to match. It must exactly match the label name. - The **Operator** field is the operator to match against the label value. The available operators are: @@ -42,16 +44,16 @@ Alert instances that have labels that match all of the "Matching Labels" specifi - The **Value** field matches against the corresponding value for the specified **Label** name. How it matches depends on the **Operator** value. -## Viewing and editing silences +## Edit silences +1. In the Alerting page, click **Silences** to view the list of existing silences. +1. Find the silence you want to edit, then click **Edit** (pen icon). +1. Make changes, then click **Submit** to save your changes. -1.
In the Grafana menu hover your cursor over the **Alerting** (bell) icon, then select **Silences** (crossed out bell icon). -1. To end the silence, click the **Unsilence** option next to the listed silence. Silences that have ended are still listed and are automatically removed after 5 days. There is no method for manual removal. -1. To edit a silence, click the pencil icon next to the listed silence. Edit the silence using instructions on how to create a silence. -1. Click **Submit** to save your changes. +## Remove silences -## Manage silences for an external Alertmanager +1. In the Alerting page, click **Silences** to view the list of existing silences. +1. Find the silence you want to end, then click **Unsilence**. -Grafana alerting UI supports managing external Alertmanager silences. Once you add an [Alertmanager data source]({{< relref "../../datasources/alertmanager.md" >}}), a dropdown displays at the top of the page where you can select either `Grafana` or an external Alertmanager as your data source. +> **Note:** Silences that have ended are retained and listed for five days. You cannot remove a silence manually. ## Create a URL to silence form with defaults filled in diff --git a/docs/sources/dashboards/playlist.md b/docs/sources/dashboards/playlist.md index 51bea340289..054547f6ea1 100644 --- a/docs/sources/dashboards/playlist.md +++ b/docs/sources/dashboards/playlist.md @@ -51,7 +51,7 @@ You can control a playlist in **Normal** or **TV** mode after it's started, usin | Stop (square) | Ends the playlist, and exits to the current dashboard. | | Cycle view mode (monitor icon) | Rotates the display of the dashboards in different view modes. | | Time range | Displays data within a time range. It can be set to display the last 5 minutes up to 5 years ago, or a custom time range, using the down arrow. | -| Refresh (circle arrow) | Reloads the dashboard, to display the current data. 
It can be set to reload automatically every 5 seconds to 1 day, using the drop down arrow. | +| Refresh (circle arrow) | Reloads the dashboard, to display the current data. It can be set to reload automatically every 5 seconds to 1 day, using the drop-down arrow. | ## Create a playlist @@ -59,7 +59,7 @@ You can create a playlist to present dashboards in a sequence, with a set order 1. In the playlist page, click **New playlist**. The New playlist page opens. 1. In the **Name** text box, enter a descriptive name. -1. In the **Interval** text bos, enter a time interval. Grafana displays a particular dashboard for the interval of time specified here before moving on to the next dashboard. +1. In the **Interval** text box, enter a time interval. Grafana displays a particular dashboard for the interval of time specified here before moving on to the next dashboard. 1. In Dashboards, add existing dashboards to the playlist using **Add by title** and **Add by tag** drop-down options. The dashboards you add are listed in a sequential order. 1. If needed: - Search for a dashboard by its name, a regular expression, or a tag. diff --git a/docs/sources/datasources/alertmanager.md b/docs/sources/datasources/alertmanager.md index 7ecf06977ca..49fd3759e0c 100644 --- a/docs/sources/datasources/alertmanager.md +++ b/docs/sources/datasources/alertmanager.md @@ -8,7 +8,7 @@ weight = 150 # Alertmanager data source -Grafana includes built-in support for Prometheus Alertmanager. It is presently in alpha and not accessible unless [alpha plugins are enabled in Grafana settings](https://grafana.com/docs/grafana/latest/administration/configuration/#enable_alpha). Once you add it as a data source, you can use the [Grafana alerting UI](https://grafana.com/docs/grafana/latest/alerting/) to manage silences, contact points as well as notification policies. A drop down option in these pages allows you to switch between Grafana and any configured Alertmanager data sources . 
+Grafana includes built-in support for Prometheus Alertmanager. It is presently in alpha and not accessible unless [alpha plugins are enabled in Grafana settings](https://grafana.com/docs/grafana/latest/administration/configuration/#enable_alpha). Once you add it as a data source, you can use the [Grafana alerting UI](https://grafana.com/docs/grafana/latest/alerting/) to manage silences, contact points, and notification policies. A drop-down option in these pages allows you to switch between Grafana and any configured Alertmanager data sources. > **Note:** Currently, the [Cortex implementation of Prometheus Alertmanager](https://cortexmetrics.io/docs/proposals/scalable-alertmanager/) is required to edit rules. diff --git a/docs/sources/developers/plugins/legacy/defaults-and-editor-mode.md b/docs/sources/developers/plugins/legacy/defaults-and-editor-mode.md index 682e250ac01..a4648e67f25 100644 --- a/docs/sources/developers/plugins/legacy/defaults-and-editor-mode.md +++ b/docs/sources/developers/plugins/legacy/defaults-and-editor-mode.md @@ -104,9 +104,9 @@ Note that there are some Angular attributes here. _ng-model_ will update the pan {{< figure class="float-right" src="/assets/img/blog/clock-panel-editor.png" caption="Panel Editor" >}} -On the editor tab we use a drop down for 12/24 hour clock, an input field for font size and a color picker for the background color. +On the editor tab we use a drop-down for 12/24 hour clock, an input field for font size and a color picker for the background color. -The drop down/select has its own _gf-form-select-wrapper_ css class and looks like this: +The drop-down/select has its own _gf-form-select-wrapper_ css class and looks like this: ```html
diff --git a/docs/sources/panels/panel-library.md b/docs/sources/panels/panel-library.md index 03868e8c2b1..b3bdd019236 100644 --- a/docs/sources/panels/panel-library.md +++ b/docs/sources/panels/panel-library.md @@ -38,7 +38,7 @@ Once created, you can modify the library panel using any dashboard on which it a To add a library panel to a dashboard: -1. Hover over the **+** option on the left menu, then select **Create** from the drop down options. The Add panel dialog opens. +1. Hover over the **+** option on the left menu, then select **Create** from the drop-down options. The Add panel dialog opens. {{< figure src="/static/img/docs/library-panels/add-library-panel-8-0.png" class="docs-image--no-shadow" max-width= "900px" caption="Screenshot of the edit panel" >}} 1. Click the **Add a panel from the panel library** option. You will see a list of your library panels. 1. Filter the list or search to find the panel you want to add. @@ -71,7 +71,7 @@ Before you delete a library panel, verify that it is no longer in use on any das To delete a library panel: -1. Hover over **Dashboard** on the left menu, and select Library panels from the drop down options. +1. Hover over **Dashboard** on the left menu, and select Library panels from the drop-down options. 1. Select a library panel that is being used in different dashboards. You will see a list of all the dashboards. 1. Select the panel you want to delete. 1. Click the delete icon next to the library panel name. diff --git a/docs/sources/release-notes/release-notes-7-3-0.md b/docs/sources/release-notes/release-notes-7-3-0.md index 9eb98b4a3c6..16f0149c6fa 100644 --- a/docs/sources/release-notes/release-notes-7-3-0.md +++ b/docs/sources/release-notes/release-notes-7-3-0.md @@ -44,7 +44,7 @@ list = false - **API**: Fix short URLs. [#28300](https://github.com/grafana/grafana/pull/28300), [@aknuds1](https://github.com/aknuds1) - **BackendSrv**: Fixes queue countdown when unsubscribe is before response. 
[#28323](https://github.com/grafana/grafana/pull/28323), [@hugohaggmark](https://github.com/hugohaggmark) - **CloudWatch/Athena - valid metrics and dimensions.**. [#28436](https://github.com/grafana/grafana/pull/28436), [@kwarunek](https://github.com/kwarunek) -- **Dashboard links**: Places drop down list so it's always visible. [#28330](https://github.com/grafana/grafana/pull/28330), [@maknik](https://github.com/maknik) +- **Dashboard links**: Places drop-down list so it's always visible. [#28330](https://github.com/grafana/grafana/pull/28330), [@maknik](https://github.com/maknik) - **Graph**: Fix for graph size not taking up full height or width. [#28314](https://github.com/grafana/grafana/pull/28314), [@jackw](https://github.com/jackw) - **Loki**: Base maxDataPoints limits on query type. [#28298](https://github.com/grafana/grafana/pull/28298), [@aocenas](https://github.com/aocenas) - **Loki**: Run instant query only when doing metric query. [#28325](https://github.com/grafana/grafana/pull/28325), [@aocenas](https://github.com/aocenas) diff --git a/docs/sources/release-notes/release-notes-7-4-0.md b/docs/sources/release-notes/release-notes-7-4-0.md index 3c6d8931d2b..11e2a455c9f 100644 --- a/docs/sources/release-notes/release-notes-7-4-0.md +++ b/docs/sources/release-notes/release-notes-7-4-0.md @@ -20,7 +20,7 @@ list = false ### Bug fixes - **Admin**: Fixes so form values are filled in from backend. [#30544](https://github.com/grafana/grafana/pull/30544), [@hugohaggmark](https://github.com/hugohaggmark) -- **Admin**: Fixes so whole org drop down is visible when adding users to org. [#30481](https://github.com/grafana/grafana/pull/30481), [@hugohaggmark](https://github.com/hugohaggmark) +- **Admin**: Fixes so whole org drop-down is visible when adding users to org. [#30481](https://github.com/grafana/grafana/pull/30481), [@hugohaggmark](https://github.com/hugohaggmark) - **Alerting**: Hides threshold handle for percentual thresholds. 
[#30431](https://github.com/grafana/grafana/pull/30431), [@hugohaggmark](https://github.com/hugohaggmark) - **CloudWatch**: Prevent field config from being overwritten. [#30437](https://github.com/grafana/grafana/pull/30437), [@sunker](https://github.com/sunker) - **Decimals**: Big Improvements to auto decimals and fixes to auto decimals bug found in 7.4-beta1. [#30519](https://github.com/grafana/grafana/pull/30519), [@torkelo](https://github.com/torkelo) @@ -33,7 +33,7 @@ list = false - **Panels**: Fixes so panels are refreshed when scrolling past them fast. [#30784](https://github.com/grafana/grafana/pull/30784), [@hugohaggmark](https://github.com/hugohaggmark) - **Prometheus**: Fix show query instead of Value if no **name** and metric. [#30511](https://github.com/grafana/grafana/pull/30511), [@zoltanbedi](https://github.com/zoltanbedi) - **TimeSeriesPanel**: Fixes default value for Gradient mode. [#30484](https://github.com/grafana/grafana/pull/30484), [@torkelo](https://github.com/torkelo) -- **Variables**: Clears drop down state when leaving dashboard. [#30810](https://github.com/grafana/grafana/pull/30810), [@hugohaggmark](https://github.com/hugohaggmark) +- **Variables**: Clears drop-down state when leaving dashboard. [#30810](https://github.com/grafana/grafana/pull/30810), [@hugohaggmark](https://github.com/hugohaggmark) - **Variables**: Fixes display value when using capture groups in regex. [#30636](https://github.com/grafana/grafana/pull/30636), [@hugohaggmark](https://github.com/hugohaggmark) - **Variables**: Fixes so queries work for numbers values too. [#30602](https://github.com/grafana/grafana/pull/30602), [@hugohaggmark](https://github.com/hugohaggmark) - **Variables**: Fixes so text format will show All instead of custom all value. [#30730](https://github.com/grafana/grafana/pull/30730), [@hugohaggmark](https://github.com/hugohaggmark) @@ -145,7 +145,7 @@ list = false - **Variables**: Fixes Constant variable persistence confusion. 
[#29407](https://github.com/grafana/grafana/pull/29407), [@hugohaggmark](https://github.com/hugohaggmark) - **Variables**: Fixes Textbox current value persistence. [#29481](https://github.com/grafana/grafana/pull/29481), [@hugohaggmark](https://github.com/hugohaggmark) - **Variables**: Fixes loading with a custom all value in url. [#28958](https://github.com/grafana/grafana/pull/28958), [@hugohaggmark](https://github.com/hugohaggmark) -- **Variables**: Fixes so clicking on Selected in drop down will exclude All value from selection. [#29844](https://github.com/grafana/grafana/pull/29844), [@hugohaggmark](https://github.com/hugohaggmark) +- **Variables**: Fixes so clicking on Selected in drop-down will exclude All value from selection. [#29844](https://github.com/grafana/grafana/pull/29844), [@hugohaggmark](https://github.com/hugohaggmark) ### Breaking changes diff --git a/docs/sources/release-notes/release-notes-8-1-2.md b/docs/sources/release-notes/release-notes-8-1-2.md index eb0deb6ae30..d7cb57ac301 100644 --- a/docs/sources/release-notes/release-notes-8-1-2.md +++ b/docs/sources/release-notes/release-notes-8-1-2.md @@ -31,7 +31,7 @@ list = false - **Explore:** Fix showing of full log context. [#37442](https://github.com/grafana/grafana/pull/37442), [@ivanahuckova](https://github.com/ivanahuckova) - **PanelEdit:** Fix 'Actual' size by passing the correct panel size to Das…. [#37885](https://github.com/grafana/grafana/pull/37885), [@ashharrison90](https://github.com/ashharrison90) - **Plugins:** Fix TLS data source settings. [#37797](https://github.com/grafana/grafana/pull/37797), [@wbrowne](https://github.com/wbrowne) -- **Variables:** Fix issue with empty drop downs on navigation. [#37776](https://github.com/grafana/grafana/pull/37776), [@hugohaggmark](https://github.com/hugohaggmark) +- **Variables:** Fix issue with empty drop-downs on navigation. 
[#37776](https://github.com/grafana/grafana/pull/37776), [@hugohaggmark](https://github.com/hugohaggmark) - **Variables:** Fix URL util converting `false` into `true`. [#37402](https://github.com/grafana/grafana/pull/37402), [@simPod](https://github.com/simPod) ### Plugin development fixes & changes diff --git a/docs/sources/shared/alerts/grafana-managed-alerts.md b/docs/sources/shared/alerts/grafana-managed-alerts.md new file mode 100644 index 00000000000..3df2ec2a06b --- /dev/null +++ b/docs/sources/shared/alerts/grafana-managed-alerts.md @@ -0,0 +1,31 @@ +--- +title: Grafana managed alerts +--- + +## Clustering + +The current alerting system doesn't support high availability. Alert notifications are not deduplicated and load balancing is not supported between instances; for example, silences from one instance will not appear in the other. + +## Alert evaluation + +Grafana managed alerts are evaluated by the Grafana backend. Rule evaluations are scheduled, according to the alert rule configuration, and queries are evaluated by an engine that is part of core Grafana. + +Alerting rules can only query backend data sources with alerting enabled: + +- built-in or developed and maintained by Grafana: `Graphite`, `Prometheus`, `Loki`, `InfluxDB`, `Elasticsearch`, + `Google Cloud Monitoring`, `Cloudwatch`, `Azure Monitor`, `MySQL`, `PostgreSQL`, `MSSQL`, `OpenTSDB`, `Oracle`, and `Azure Data Explorer` +- any community backend data sources with alerting enabled (`backend` and `alerting` properties are set in the [plugin.json]({{< relref "../../developers/plugins/metadata.md" >}})) + +## Metrics from the alerting engine + +The alerting engine publishes some internal metrics about itself. You can read more about how Grafana publishes [internal metrics]({{< relref "../../administration/view-server/internal-metrics.md" >}}). See also [View alert rules and their current state]({{< relref "../../alerting/old-alerting/view-alerts.md" >}}).
+ +| Metric Name | Type | Description | +| ------------------------------------------- | --------- | ---------------------------------------------------------------------------------------- | +| `alerting.alerts` | gauge | How many alerts by state | +| `alerting.request_duration_seconds` | histogram | Histogram of requests to the Alerting API | +| `alerting.active_configurations` | gauge | The number of active, non-default Alertmanager configurations for Grafana managed alerts | +| `alerting.rule_evaluations_total` | counter | The total number of rule evaluations | +| `alerting.rule_evaluation_failures_total` | counter | The total number of rule evaluation failures | +| `alerting.rule_evaluation_duration_seconds` | summary | The duration for a rule to execute | +| `alerting.rule_group_rules` | gauge | The number of rules | diff --git a/docs/sources/sharing/playlists.md b/docs/sources/sharing/playlists.md index 6e8891275cb..6963bdf88cd 100644 --- a/docs/sources/sharing/playlists.md +++ b/docs/sources/sharing/playlists.md @@ -132,7 +132,7 @@ You can control a playlist in **Normal** or **TV** mode after it's started, usin | Stop (square) | Ends the playlist, and exits to the current dashboard. | | Cycle view mode (monitor icon) | Rotates the display of the dashboards in different view modes. | | Time range | Displays data within a time range. It can be set to display the last 5 minutes up to 5 years ago, or a custom time range, using the down arrow. | -| Refresh (circle arrow) | Reloads the dashboard, to display the current data. It can be set to reload automatically every 5 seconds to 1 day, using the drop down arrow. | +| Refresh (circle arrow) | Reloads the dashboard, to display the current data. It can be set to reload automatically every 5 seconds to 1 day, using the drop-down arrow. | > Shortcut: Press the Esc key to stop the playlist from your keyboard.
diff --git a/docs/sources/visualizations/geomap.md b/docs/sources/visualizations/geomap.md index 55946519b3f..22a4dafffa5 100644 --- a/docs/sources/visualizations/geomap.md +++ b/docs/sources/visualizations/geomap.md @@ -122,7 +122,7 @@ The markers layer allows you to display data points as different marker shapes s The heatmap layer clusters various data points to visualize locations with different densities. To add a heatmap layer: -Click on the drop down menu under Data Layer and choose `Heatmap`. +Click on the drop-down menu under Data Layer and choose `Heatmap`. Similar to `Markers`, you are prompted with various options to determine which data points to visualize and how. @@ -130,6 +130,6 @@ Similar to `Markers`, you are prompted with various options to determine which d ![Heatmap Layer Options](/static/img/docs/geomap-panel/geomap-heatmap-options-8-1-0.png) -- **Weight values** configures the intensity of the heatmap clusters. `Fixed value` keeps a constant weight value throughout all data points. This value should be in the range of 0~1. Similar to Markers, there is an alternate option in the drop down to automatically scale the weight values depending on data values. +- **Weight values** configures the intensity of the heatmap clusters. `Fixed value` keeps a constant weight value throughout all data points. This value should be in the range of 0~1. Similar to Markers, there is an alternate option in the drop-down to automatically scale the weight values depending on data values. - **Radius** configures the size of the heatmap clusters. - **Blur** configures the amount of blur on each cluster. 
diff --git a/docs/sources/whatsnew/whats-new-in-v5-4.md b/docs/sources/whatsnew/whats-new-in-v5-4.md index e069f066190..db0b137b684 100644 --- a/docs/sources/whatsnew/whats-new-in-v5-4.md +++ b/docs/sources/whatsnew/whats-new-in-v5-4.md @@ -37,7 +37,7 @@ Additionally, there's now support for disable the sending of `OK` alert notifica Grafana v5.3 included built-in support for [Google Stackdriver](https://cloud.google.com/stackdriver/) which enables you to visualize your Stackdriver metrics in Grafana. One important feature missing was support for templating queries. This is now included together with a brand new templating query editor for Stackdriver. -The Stackdriver templating query editor lets you choose from a set of different Query Types. This will in turn reveal additional drop downs to help you +The Stackdriver templating query editor lets you choose from a set of different Query Types. This will in turn reveal additional drop-downs to help you find, filter and select the templating values you're interested in, see screenshot for details. The templating query editor also supports chaining multiple variables making it easy to define variables that's dependent on other variables. diff --git a/docs/sources/whatsnew/whats-new-in-v7-1.md b/docs/sources/whatsnew/whats-new-in-v7-1.md index a5a917a9a3e..1d3fd56569b 100644 --- a/docs/sources/whatsnew/whats-new-in-v7-1.md +++ b/docs/sources/whatsnew/whats-new-in-v7-1.md @@ -79,7 +79,7 @@ Grafana v7.1 adds support for provisioning of app plugins. This allows app plugi Support for multiple dimensions has been added to all services in the Azure Monitor datasource. This means you can now group by more than one dimension with time series queries. With the Kusto based services, Log Analytics and Application Insights Analytics, you can also select multiple metrics as well as multiple dimensions. 
-Additionally, the Raw Edit mode for Application Insights Analytics has been replaced with a new service in the drop down for the data source and is called Insights Analytics. The new query editor behaves in the same way as Log Analytics. +Additionally, the Raw Edit mode for Application Insights Analytics has been replaced with a new service in the drop-down for the data source and is called Insights Analytics. The new query editor behaves in the same way as Log Analytics. ## Deep linking for Google Cloud Monitoring (formerly named Google Stackdriver) data source diff --git a/docs/sources/whatsnew/whats-new-in-v8-0.md b/docs/sources/whatsnew/whats-new-in-v8-0.md index 391f22ebbc9..a5e73e63e28 100644 --- a/docs/sources/whatsnew/whats-new-in-v8-0.md +++ b/docs/sources/whatsnew/whats-new-in-v8-0.md @@ -24,7 +24,7 @@ As part of the new alert changes, we have introduced a new data source, Alertman > **Note:** Out of the box, Grafana still supports old Grafana alerts. They are legacy alerts at this time, and will be deprecated in a future release. -To learn more about the differences between new alerts and the legacy alerts, refer to [What's New with Grafana 8 Alerts]({{< relref "../alerting/difference-old-new.md" >}}). +To learn more about the differences between new alerts and the legacy alerts, refer to [What's New with Grafana 8 Alerts]({{< relref "../alerting/unified-alerting/difference-old-new.md" >}}). ### Library panels