PackageJson: Prettify markdown/mdx on commit with lint-staged (#37616)

* Format md,mdx files with prettier on lint-staged

* Manually run prettier on docs/sources
Connor Lindsey
2021-08-06 07:52:36 -06:00
committed by GitHub
parent e9c032f10f
commit b78a67cec7
301 changed files with 3216 additions and 2980 deletions

View File

@@ -10,18 +10,21 @@ weight = 400
Grafana CLI is a small executable that is bundled with the Grafana server. It can be executed on the same machine the Grafana server is running on. Grafana CLI has `plugins` and `admin` commands, as well as global options.
To list all commands and options:
```
grafana-cli -h
```
## Invoking Grafana CLI
To invoke Grafana CLI, add the path to the Grafana binaries to your `PATH` environment variable. Alternatively, if your current directory is the `bin` directory, use `./grafana-cli`. Otherwise, you can specify the full path to the CLI. For example, on Linux `/usr/share/grafana/bin/grafana-cli` and on Windows `C:\Program Files\GrafanaLabs\grafana\bin\grafana-cli.exe`.
>**Note:** Some commands, such as installing or removing plugins, require `sudo` on Linux. If you are on Windows, run Windows PowerShell as Administrator.
> **Note:** Some commands, such as installing or removing plugins, require `sudo` on Linux. If you are on Windows, run Windows PowerShell as Administrator.
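For example, on a typical Linux install you can either extend `PATH` or call the binary by its full path. A minimal sketch using the default paths mentioned above (adjust for your installation):

```bash
# Option 1: add the Grafana bin directory to PATH, then invoke the CLI by name
export PATH=$PATH:/usr/share/grafana/bin
grafana-cli -h

# Option 2: invoke the binary by its full path
/usr/share/grafana/bin/grafana-cli -h
```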
## Grafana CLI command syntax
The general syntax for commands in Grafana CLI is:
```bash
grafana-cli [global options] command [command options] [arguments...]
```
@@ -37,6 +40,7 @@ Each global option applies only to the command in which it is used. For example,
`--help` or `-h` displays the help, including default paths and Docker configuration information.
**Example:**
```bash
grafana-cli -h
```
@@ -46,6 +50,7 @@ grafana-cli -h
`--version` or `-v` prints the version of Grafana CLI currently running.
**Example:**
```bash
grafana-cli -v
```
@@ -55,6 +60,7 @@ grafana-cli -v
`--pluginsDir value` overrides the path to where your local Grafana instance stores plugins. Use this option if you want to install, update, or remove a plugin somewhere other than the default directory ("/var/lib/grafana/plugins") [$GF_PLUGIN_DIR].
**Example:**
```bash
grafana-cli --pluginsDir "/var/lib/grafana/devplugins" plugins install <plugin-id>
```
@@ -64,6 +70,7 @@ grafana-cli --pluginsDir "/var/lib/grafana/devplugins" plugins install <plugin-i
`--repo value` allows you to download and install or update plugins from a repository other than the default Grafana repo.
**Example:**
```bash
grafana-cli --repo "https://example.com/plugins" plugins install <plugin-id>
```
@@ -73,6 +80,7 @@ grafana-cli --repo "https://example.com/plugins" plugins install <plugin-id>
`--pluginUrl value` allows you to download a .zip file containing a plugin from a local URL instead of downloading it from the default Grafana source.
**Example:**
```bash
grafana-cli --pluginUrl https://company.com/grafana/plugins/<plugin-id>-<plugin-version>.zip plugins install <plugin-id>
```
@@ -84,6 +92,7 @@ grafana-cli --pluginUrl https://company.com/grafana/plugins/<plugin-id>-<plugin-
`--insecure` allows you to turn off Transport Layer Security (TLS) verification (insecure). You might want to do this if you are downloading a plugin from a non-default source.
**Example:**
```bash
grafana-cli --insecure --pluginUrl https://company.com/grafana/plugins/<plugin-id>-<plugin-version>.zip plugins install <plugin-id>
```
@@ -93,6 +102,7 @@ grafana-cli --insecure --pluginUrl https://company.com/grafana/plugins/<plugin-i
`--debug` or `-d` enables debug logging. Debug output is returned and shown in the terminal.
**Example:**
```bash
grafana-cli --debug plugins install <plugin-id>
```
@@ -104,6 +114,7 @@ grafana-cli --debug plugins install <plugin-id>
For example, you can use it to redirect logging to another file (maybe to log plugin installations in Grafana Cloud) or when resetting the admin password while you have non-default values for some important configuration values (like where the database is located).
**Example:**
```bash
grafana-cli --configOverrides cfg:default.paths.log=/dev/null plugins install <plugin-id>
```
@@ -113,6 +124,7 @@ grafana-cli --configOverrides cfg:default.paths.log=/dev/null plugins install <p
Sets the Grafana install/home path; defaults to the working directory. You do not need to use this if you run the CLI from the Grafana installation directory.
**Example:**
```bash
grafana-cli --homepath "/usr/share/grafana" admin reset-admin-password <new password>
```
@@ -122,6 +134,7 @@ grafana-cli --homepath "/usr/share/grafana" admin reset-admin-password <new pass
`--config value` overrides the default location where Grafana expects the configuration file. Refer to [Configuration]({{< relref "../administration/configuration.md" >}}) for more information about configuring Grafana and default configuration file locations.
**Example:**
```bash
grafana-cli --config "/etc/configuration/" admin reset-admin-password mynewpassword
```
@@ -157,6 +170,7 @@ grafana-cli plugins ls
```
### Update all installed plugins
```bash
grafana-cli plugins update-all
```
@@ -208,6 +222,7 @@ If you need to set the password in a script, then you can use the [Grafana User
`encrypt-datasource-passwords` migrates passwords from unsecured fields to the `secure_json_data` field. Returns `ok` unless there is an error. Safe to execute multiple times.
**Example:**
```bash
grafana-cli admin data-migration encrypt-datasource-passwords
```

View File

@@ -206,7 +206,6 @@ Another way is to put a web server like Nginx or Apache in front of Grafana and
### domain
### enforce_domain
Redirect to the correct domain if the host header does not match the domain. Prevents DNS rebinding attacks. Default is `false`.
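As a sketch of how these two settings look together in `conf/grafana.ini` (the domain value is illustrative):

```
[server]
domain = grafana.example.com
enforce_domain = true
```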
@@ -423,6 +422,7 @@ For more details check the [Transport.MaxConnsPerHost](https://golang.org/pkg/ne
The maximum number of idle connections that Grafana will maintain. Default is `100`. For more details check the [Transport.MaxIdleConns](https://golang.org/pkg/net/http/#Transport.MaxIdleConns) documentation.
### max_idle_connections_per_host
[Deprecated - use max_idle_connections instead]
The maximum number of idle connections per host that Grafana will maintain. Default is `2`. For more details check the [Transport.MaxIdleConnsPerHost](https://golang.org/pkg/net/http/#Transport.MaxIdleConnsPerHost) documentation.
@@ -588,7 +588,7 @@ As of Grafana v7.3, this also limits the refresh interval options in Explore.
Path to the default home dashboard. If this value is empty, then Grafana uses StaticRootPath + "dashboards/home.json".
>**Note:** On Linux, Grafana uses `/usr/share/grafana/public/dashboards/home.json` as the default home dashboard location.
> **Note:** On Linux, Grafana uses `/usr/share/grafana/public/dashboards/home.json` as the default home dashboard location.
<hr />
@@ -828,7 +828,7 @@ Azure cloud environment where Grafana is hosted:
| Azure Cloud | Value |
| ------------------------------------------------ | ---------------------- |
| Microsoft Azure public cloud | AzureCloud (*default*) |
| Microsoft Azure public cloud | AzureCloud (_default_) |
| Microsoft Chinese national cloud | AzureChinaCloud |
| US Government cloud | AzureUSGovernment |
| Microsoft German national cloud ("Black Forest") | AzureGermanCloud |
@@ -1520,7 +1520,7 @@ The `allowed_origins` option is a comma-separated list of additional origins (`O
If not set (default), then the origin is matched over [root_url]({{< relref "#root_url" >}}) which should be sufficient for most scenarios.
Origin patterns support wildcard symbol "*".
Origin patterns support wildcard symbol "\*".
For example:
@@ -1718,4 +1718,4 @@ default_baselayer_config = `{
### enable_custom_baselayers
Set this to `true` to disable loading other custom base maps and hide them in the Grafana UI. Default is `false`.
Set this to `true` to disable loading other custom base maps and hide them in the Grafana UI. Default is `false`.

View File

@@ -40,14 +40,14 @@ docker run -d --user $ID --volume "$PWD/data:/var/lib/grafana" -p 3000:3000 graf
The following settings are hard-coded when launching the Grafana Docker container and can only be overridden using environment variables, not in `conf/grafana.ini`.
Setting | Default value
----------------------|---------------------------
GF_PATHS_CONFIG | /etc/grafana/grafana.ini
GF_PATHS_DATA | /var/lib/grafana
GF_PATHS_HOME | /usr/share/grafana
GF_PATHS_LOGS | /var/log/grafana
GF_PATHS_PLUGINS | /var/lib/grafana/plugins
GF_PATHS_PROVISIONING | /etc/grafana/provisioning
| Setting | Default value |
| --------------------- | ------------------------- |
| GF_PATHS_CONFIG | /etc/grafana/grafana.ini |
| GF_PATHS_DATA | /var/lib/grafana |
| GF_PATHS_HOME | /usr/share/grafana |
| GF_PATHS_LOGS | /var/log/grafana |
| GF_PATHS_PLUGINS | /var/lib/grafana/plugins |
| GF_PATHS_PROVISIONING | /etc/grafana/provisioning |
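For example, a sketch of overriding two of these paths with environment variables when starting the container (the container paths shown are illustrative):

```bash
docker run -d -p 3000:3000 \
  -e "GF_PATHS_CONFIG=/etc/grafana/custom.ini" \
  -e "GF_PATHS_DATA=/srv/grafana-data" \
  grafana/grafana
```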
## Logging
@@ -92,5 +92,5 @@ You may also specify multiple profiles to `GF_AWS_PROFILES` (e.g.
Supported variables:
- `GF_AWS_${profile}_ACCESS_KEY_ID`: AWS access key ID (required).
- `GF_AWS_${profile}_SECRET_ACCESS_KEY`: AWS secret access key (required).
- `GF_AWS_${profile}_SECRET_ACCESS_KEY`: AWS secret access key (required).
- `GF_AWS_${profile}_REGION`: AWS region (optional).
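For example, a sketch of defining a single profile named `default` through environment variables when starting the container (the key values are placeholders):

```bash
docker run -d -p 3000:3000 \
  -e "GF_AWS_PROFILES=default" \
  -e "GF_AWS_default_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID" \
  -e "GF_AWS_default_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY" \
  -e "GF_AWS_default_REGION=us-east-1" \
  grafana/grafana
```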

View File

@@ -9,7 +9,7 @@ weight = 300
Grafana supports automatic rendering of panels as PNG images. This allows Grafana to automatically generate images of your panels to include in [alert notifications]({{< relref "../alerting/old-alerting/notifications.md" >}}).
>**Note:** Image rendering of dashboards is not supported at this time.
> **Note:** Image rendering of dashboards is not supported at this time.
While an image is being rendered, the PNG image is temporarily written to the file system. When the image is rendered, the PNG image is temporarily written to the `png` folder in the Grafana `data` folder.
@@ -35,7 +35,7 @@ To install the plugin, refer to the [Grafana Image Renderer Installation instruc
## Run in custom Grafana Docker image
We recommend setting up another Docker container for rendering and using remote rendering. Refer to [Remote rendering service]({{< relref "#remote-rendering-service" >}}) for instructions.
We recommend setting up another Docker container for rendering and using remote rendering. Refer to [Remote rendering service]({{< relref "#remote-rendering-service" >}}) for instructions.
If you still want to install the plugin in the Grafana Docker image, refer to [Build with Grafana Image Renderer plugin pre-installed]({{< relref "../installation/docker/#build-with-grafana-image-renderer-plugin-pre-installed" >}}).
@@ -60,7 +60,7 @@ services:
grafana:
image: grafana/grafana:main
ports:
- "3000:3000"
- '3000:3000'
environment:
GF_RENDERING_SERVER_URL: http://renderer:8081/render
GF_RENDERING_CALLBACK_URL: http://grafana:3000/
@@ -84,24 +84,24 @@ The following example describes how to build and run the remote HTTP rendering s
1. Clone the [Grafana image renderer plugin](https://grafana.com/grafana/plugins/grafana-image-renderer) Git repository.
1. Install dependencies and build:
```bash
yarn install --pure-lockfile
yarn run build
```
```bash
yarn install --pure-lockfile
yarn run build
```
1. Run the server:
```bash
node build/app.js server --port=8081
```
```bash
node build/app.js server --port=8081
```
1. Update Grafana configuration:
```
[rendering]
server_url = http://localhost:8081/render
callback_url = http://localhost:3000/
```
```
[rendering]
server_url = http://localhost:8081/render
callback_url = http://localhost:3000/
```
1. Restart Grafana.
@@ -128,7 +128,7 @@ Rendering failed: Error: Failed to launch chrome!/var/lib/grafana/plugins/grafan
error while loading shared libraries: libX11.so.6: cannot open shared object file: No such file or directory\n\n\nTROUBLESHOOTING: https://github.com/GoogleChrome/puppeteer/blob/master/docs/troubleshooting.md
```
In general you can use the [`ldd`](https://en.wikipedia.org/wiki/Ldd_(Unix)) utility to figure out what shared libraries
In general you can use the [`ldd`](<https://en.wikipedia.org/wiki/Ldd_(Unix)>) utility to figure out what shared libraries
are not installed in your system:
```bash

View File

@@ -11,6 +11,6 @@ Grafana supports [Jaeger tracing](https://www.jaegertracing.io/).
Grafana can emit Jaeger traces for its HTTP API endpoints and propagate Jaeger trace information to data sources.
All HTTP endpoints are logged evenly (annotations, dashboard, tags, and so on).
When a trace ID is propagated, it is reported with operation 'HTTP /datasources/proxy/:id/*'.
When a trace ID is propagated, it is reported with operation 'HTTP /datasources/proxy/:id/\*'.
Refer to [Configuration]({{< relref "configuration.md#tracing-jaeger" >}}) for information about enabling Jaeger tracing.
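As a minimal sketch, enabling it in `conf/grafana.ini` can look like this, assuming a local Jaeger agent listening on its default port (refer to the Configuration page for the full set of options):

```
[tracing.jaeger]
address = localhost:6831
```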

View File

@@ -20,10 +20,11 @@ Follow these instructions if you are a Grafana Server Admin.
{{< docs/list >}}
{{< docs/shared "manage-users/view-server-org-list.md" >}}
1. In the organization list, click the name of the organization that you want to change.
1. In **Name**, enter the new organization name.
1. Click **Update**.
{{< /docs/list >}}
{{< /docs/list >}}
### Organization Admin change organization name
@@ -31,14 +32,16 @@ If you are an Organization Admin, follow these steps:
{{< docs/list >}}
{{< docs/shared "preferences/org-preferences-list.md" >}}
1. In **Organization name**, enter the new name.
1. Click **Update organization name**.
{{< /docs/list >}}
{{< /docs/list >}}
## Change team name or email
Organization administrators and team administrators can change team names and email addresses.
To change the team name or email, follow these steps:
1. Hover your cursor over the **Configuration** (gear) icon in the side menu.
1. Click **Teams**. Grafana displays the team list.
1. In the team list, click the name of the team that you want to change.

View File

@@ -52,9 +52,10 @@ Organization and team administrators can change the UI theme for all users in a
{{< docs/list >}}
{{< docs/shared "manage-users/view-team-list.md" >}}
1. Click on the team that you want to change the UI theme for and then navigate to the **Settings** tab.
{{< docs/shared "preferences/select-ui-theme-list.md" >}}
{{< /docs/list >}}
{{< docs/shared "preferences/select-ui-theme-list.md" >}}
{{< /docs/list >}}
## Change your personal UI theme

View File

@@ -30,9 +30,10 @@ Organization administrators and team administrators can choose a default timezon
{{< docs/list >}}
{{< docs/shared "manage-users/view-team-list.md" >}}
1. Click on the team you that you want to change the timezone for and then navigate to the **Settings** tab.
{{< docs/shared "preferences/select-timezone-list.md" >}}
{{< /docs/list >}}
{{< docs/shared "preferences/select-timezone-list.md" >}}
{{< /docs/list >}}
## Set your personal timezone
@@ -41,4 +42,4 @@ You can change the timezone for your user account. This setting overrides timezo
{{< docs/list >}}
{{< docs/shared "preferences/navigate-user-preferences-list.md" >}}
{{< docs/shared "preferences/select-timezone-list.md" >}}
{{< /docs/list >}}
{{< /docs/list >}}

View File

@@ -40,7 +40,7 @@ Users with the Grafana Server Admin flag on their account or access to the confi
default_home_dashboard_path = data/main-dashboard.json
```
>**Note:** On Linux, Grafana uses `/usr/share/grafana/public/dashboards/home.json` as the default home dashboard location.
> **Note:** On Linux, Grafana uses `/usr/share/grafana/public/dashboards/home.json` as the default home dashboard location.
## Set the home dashboard for your organization
@@ -59,9 +59,10 @@ Organization administrators and Team Admins can choose a home dashboard for a te
{{< docs/list >}}
{{< docs/shared "preferences/navigate-to-the-dashboard-list.md" >}}
{{< docs/shared "manage-users/view-team-list.md" >}}
1. Click on the team that you want to change the home dashboard for and then navigate to the **Settings** tab.
{{< docs/shared "preferences/select-home-dashboard-list.md" >}}
{{< /docs/list >}}
{{< docs/shared "preferences/select-home-dashboard-list.md" >}}
{{< /docs/list >}}
## Set your personal home dashboard

View File

@@ -141,49 +141,49 @@ Please refer to each datasource documentation for specific provisioning examples
Since not all datasources have the same configuration settings, we only have the most common ones as fields. The rest should be stored as a JSON blob in the `jsonData` field. Here are the most common settings that the core datasources use.
| Name | Type | Datasource | Description |
| ----------------------- | ------- | ---------------------------------------------------------------- | ------------------------------------------------------------------------------------------- |
| tlsAuth | boolean | _All_ | Enable TLS authentication using client cert configured in secure json data |
| tlsAuthWithCACert | boolean | _All_ | Enable TLS authentication using CA cert |
| tlsSkipVerify | boolean | _All_ | Controls whether a client verifies the server's certificate chain and host name. |
| Name | Type | Datasource | Description |
| ----------------------- | ------- | ---------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- |
| tlsAuth | boolean | _All_ | Enable TLS authentication using client cert configured in secure json data |
| tlsAuthWithCACert | boolean | _All_ | Enable TLS authentication using CA cert |
| tlsSkipVerify | boolean | _All_ | Controls whether a client verifies the server's certificate chain and host name. |
| serverName | string | _All_ | Optional. Controls the server name used for certificate common name/subject alternative name verification. Defaults to using the data source URL. |
| timeout | string | _All_ | Request timeout in seconds. Overrides dataproxy.timeout option |
| graphiteVersion | string | Graphite | Graphite version |
| timeInterval | string | Prometheus, Elasticsearch, InfluxDB, MySQL, PostgreSQL and MSSQL | Lowest interval/step value that should be used for this data source. |
| httpMode | string | Influxdb | HTTP Method. 'GET', 'POST', defaults to GET |
| maxSeries | number | Influxdb | Max number of series/tables that Grafana processes |
| httpMethod | string | Prometheus | HTTP Method. 'GET', 'POST', defaults to POST |
| customQueryParameters | string | Prometheus | Query parameters to add, as a URL-encoded string. |
| esVersion | string | Elasticsearch | Elasticsearch version (E.g. `7.0.0`, `7.6.1`) |
| timeField | string | Elasticsearch | Which field that should be used as timestamp |
| interval | string | Elasticsearch | Index date time format. nil(No Pattern), 'Hourly', 'Daily', 'Weekly', 'Monthly' or 'Yearly' |
| logMessageField | string | Elasticsearch | Which field should be used as the log message |
| logLevelField | string | Elasticsearch | Which field should be used to indicate the priority of the log message |
| sigV4Auth | boolean | Elasticsearch and Prometheus | Enable usage of SigV4 |
| sigV4AuthType | string | Elasticsearch and Prometheus | SigV4 auth provider. default/credentials/keys |
| sigV4ExternalId | string | Elasticsearch and Prometheus | Optional SigV4 External ID |
| sigV4AssumeRoleArn | string | Elasticsearch and Prometheus | Optional SigV4 ARN role to assume |
| sigV4Region | string | Elasticsearch and Prometheus | SigV4 AWS region |
| sigV4Profile | string | Elasticsearch and Prometheus | Optional SigV4 credentials profile |
| authType | string | Cloudwatch | Auth provider. default/credentials/keys |
| externalId | string | Cloudwatch | Optional External ID |
| assumeRoleArn | string | Cloudwatch | Optional ARN role to assume |
| defaultRegion | string | Cloudwatch | Optional default AWS region |
| customMetricsNamespaces | string | Cloudwatch | Namespaces of Custom Metrics |
| profile | string | Cloudwatch | Optional credentials profile |
| tsdbVersion | string | OpenTSDB | Version |
| tsdbResolution | string | OpenTSDB | Resolution |
| sslmode | string | PostgreSQL | SSLmode. 'disable', 'require', 'verify-ca' or 'verify-full' |
| tlsConfigurationMethod | string | PostgreSQL | SSL Certificate configuration, either by 'file-path' or 'file-content' |
| sslRootCertFile | string | PostgreSQL | SSL server root certificate file, must be readable by the Grafana user |
| sslCertFile | string | PostgreSQL | SSL client certificate file, must be readable by the Grafana user |
| sslKeyFile | string | PostgreSQL | SSL client key file, must be readable by _only_ the Grafana user |
| encrypt | string | MSSQL | Connection SSL encryption handling. 'disable', 'false' or 'true' |
| postgresVersion | number | PostgreSQL | Postgres version as a number (903/904/905/906/1000) meaning v9.3, v9.4, ..., v10 |
| timescaledb | boolean | PostgreSQL | Enable usage of TimescaleDB extension |
| maxOpenConns | number | MySQL, PostgreSQL and MSSQL | Maximum number of open connections to the database (Grafana v5.4+) |
| maxIdleConns | number | MySQL, PostgreSQL and MSSQL | Maximum number of connections in the idle connection pool (Grafana v5.4+) |
| connMaxLifetime | number | MySQL, PostgreSQL and MSSQL | Maximum amount of time in seconds a connection may be reused (Grafana v5.4+) |
| timeout | string | _All_ | Request timeout in seconds. Overrides dataproxy.timeout option |
| graphiteVersion | string | Graphite | Graphite version |
| timeInterval | string | Prometheus, Elasticsearch, InfluxDB, MySQL, PostgreSQL and MSSQL | Lowest interval/step value that should be used for this data source. |
| httpMode | string | Influxdb | HTTP Method. 'GET', 'POST', defaults to GET |
| maxSeries | number | Influxdb | Max number of series/tables that Grafana processes |
| httpMethod | string | Prometheus | HTTP Method. 'GET', 'POST', defaults to POST |
| customQueryParameters | string | Prometheus | Query parameters to add, as a URL-encoded string. |
| esVersion | string | Elasticsearch | Elasticsearch version (E.g. `7.0.0`, `7.6.1`) |
| timeField | string | Elasticsearch | Which field that should be used as timestamp |
| interval | string | Elasticsearch | Index date time format. nil(No Pattern), 'Hourly', 'Daily', 'Weekly', 'Monthly' or 'Yearly' |
| logMessageField | string | Elasticsearch | Which field should be used as the log message |
| logLevelField | string | Elasticsearch | Which field should be used to indicate the priority of the log message |
| sigV4Auth | boolean | Elasticsearch and Prometheus | Enable usage of SigV4 |
| sigV4AuthType | string | Elasticsearch and Prometheus | SigV4 auth provider. default/credentials/keys |
| sigV4ExternalId | string | Elasticsearch and Prometheus | Optional SigV4 External ID |
| sigV4AssumeRoleArn | string | Elasticsearch and Prometheus | Optional SigV4 ARN role to assume |
| sigV4Region | string | Elasticsearch and Prometheus | SigV4 AWS region |
| sigV4Profile | string | Elasticsearch and Prometheus | Optional SigV4 credentials profile |
| authType | string | Cloudwatch | Auth provider. default/credentials/keys |
| externalId | string | Cloudwatch | Optional External ID |
| assumeRoleArn | string | Cloudwatch | Optional ARN role to assume |
| defaultRegion | string | Cloudwatch | Optional default AWS region |
| customMetricsNamespaces | string | Cloudwatch | Namespaces of Custom Metrics |
| profile | string | Cloudwatch | Optional credentials profile |
| tsdbVersion | string | OpenTSDB | Version |
| tsdbResolution | string | OpenTSDB | Resolution |
| sslmode | string | PostgreSQL | SSLmode. 'disable', 'require', 'verify-ca' or 'verify-full' |
| tlsConfigurationMethod | string | PostgreSQL | SSL Certificate configuration, either by 'file-path' or 'file-content' |
| sslRootCertFile | string | PostgreSQL | SSL server root certificate file, must be readable by the Grafana user |
| sslCertFile | string | PostgreSQL | SSL client certificate file, must be readable by the Grafana user |
| sslKeyFile | string | PostgreSQL | SSL client key file, must be readable by _only_ the Grafana user |
| encrypt | string | MSSQL | Connection SSL encryption handling. 'disable', 'false' or 'true' |
| postgresVersion | number | PostgreSQL | Postgres version as a number (903/904/905/906/1000) meaning v9.3, v9.4, ..., v10 |
| timescaledb | boolean | PostgreSQL | Enable usage of TimescaleDB extension |
| maxOpenConns | number | MySQL, PostgreSQL and MSSQL | Maximum number of open connections to the database (Grafana v5.4+) |
| maxIdleConns | number | MySQL, PostgreSQL and MSSQL | Maximum number of connections in the idle connection pool (Grafana v5.4+) |
| connMaxLifetime | number | MySQL, PostgreSQL and MSSQL | Maximum amount of time in seconds a connection may be reused (Grafana v5.4+) |
#### Secure Json Data
@@ -191,15 +191,15 @@ Since not all datasources have the same configuration settings we only have the
Secure JSON data is a map of settings that will be encrypted with the [secret key]({{< relref "configuration.md#secret-key" >}}) from the Grafana config. The purpose of this is only to hide content from the users of the application. This should be used for storing TLS certificates and passwords that Grafana will append to the request on the server side. All of these settings are optional.
| Name | Type | Datasource | Description |
| ----------------- | ------ | ---------- | --------------------------------------- |
| tlsCACert | string | _All_ | CA cert for out going requests |
| tlsClientCert | string | _All_ | TLS Client cert for outgoing requests |
| tlsClientKey | string | _All_ | TLS Client key for outgoing requests |
| password | string | _All_ | password |
| basicAuthPassword | string | _All_ | password for basic authentication |
| accessKey | string | Cloudwatch | Access key for connecting to Cloudwatch |
| secretKey | string | Cloudwatch | Secret key for connecting to Cloudwatch |
| Name | Type | Datasource | Description |
| ----------------- | ------ | ---------------------------- | -------------------------------------------------------- |
| tlsCACert | string | _All_ | CA cert for out going requests |
| tlsClientCert | string | _All_ | TLS Client cert for outgoing requests |
| tlsClientKey | string | _All_ | TLS Client key for outgoing requests |
| password | string | _All_ | password |
| basicAuthPassword | string | _All_ | password for basic authentication |
| accessKey | string | Cloudwatch | Access key for connecting to Cloudwatch |
| secretKey | string | Cloudwatch | Secret key for connecting to Cloudwatch |
| sigV4AccessKey | string | Elasticsearch and Prometheus | SigV4 access key. Required when using keys auth provider |
| sigV4SecretKey | string | Elasticsearch and Prometheus | SigV4 secret key. Required when using keys auth provider |
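To illustrate how `jsonData` and `secureJsonData` fit together, here is a hedged sketch of provisioning a Prometheus data source (top-level fields follow the standard data source provisioning format; values are placeholders, and only settings from the tables above appear under the two maps):

```yaml
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    basicAuth: true
    basicAuthUser: grafana
    jsonData:
      httpMethod: POST
      timeInterval: 15s
    secureJsonData:
      basicAuthPassword: <password>
```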
@@ -319,9 +319,11 @@ By default, Grafana deletes dashboards in the database if the file is removed. Y
> or `uid` within the same installation as this will cause weird behaviors.
### Provision folders structure from filesystem to Grafana
If you already store your dashboards using folders in a Git repo or on a filesystem, and you also want to have the same folder names in the Grafana menu, you can use the `foldersFromFilesStructure` option.
For example, to replicate this dashboard structure from the filesystem to Grafana,
```
/etc/dashboards
├── /server
@@ -331,18 +333,21 @@ For example, to replicate these dashboards structure from the filesystem to Graf
├── /requests_dashboard.json
└── /resources_dashboard.json
```
you need to specify just this short provisioning configuration file.
```yaml
apiVersion: 1
providers:
- name: dashboards
type: file
updateIntervalSeconds: 30
options:
path: /etc/dashboards
foldersFromFilesStructure: true
- name: dashboards
type: file
updateIntervalSeconds: 30
options:
path: /etc/dashboards
foldersFromFilesStructure: true
```
`server` and `application` will become new folders in the Grafana menu.
> **Note:** `folder` and `folderUid` options should be empty or missing to make `foldersFromFilesStructure` work.
@@ -426,7 +431,7 @@ The following sections detail the supported settings and secure settings for eac
#### Alert notification `pushover`
| Name | Secure setting |
| -------- | -------------- |
| ---------- | -------------- |
| apiToken | yes |
| userKey | yes |
| device | |
@@ -439,11 +444,11 @@ The following sections detail the supported settings and secure settings for eac
#### Alert notification `discord`
| Name | Secure setting |
| -------------- | -------------- |
| url | yes |
| avatar_url | |
| message | |
| Name | Secure setting |
| ---------- | -------------- |
| url | yes |
| avatar_url | |
| message | |
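For instance, a hedged sketch of provisioning a Discord channel with the settings from this table, placing the webhook URL under `secure_settings` (names and URL are placeholders):

```yaml
apiVersion: 1

notifiers:
  - name: Team Discord
    type: discord
    uid: discord-alerts
    org_id: 1
    settings:
      avatar_url: https://example.com/bot-avatar.png
    secure_settings:
      url: https://discord.com/api/webhooks/<id>/<token>
```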
#### Alert notification `slack`
@@ -500,7 +505,7 @@ The following sections detail the supported settings and secure settings for eac
#### Alert notification `sensugo`
| Name | Secure setting |
| -------- | -------------- |
| --------- | -------------- |
| url | |
| apikey | yes |
| entity | |

View File

@@ -28,7 +28,7 @@ Require all network requests being made by Grafana to go through a proxy server.
## Limit Viewer query permissions
Users with the Viewer role can enter *any possible query* in *any* of the data sources available in the **organization**, not just the queries that are defined on the dashboards for which the user has Viewer permissions.
Users with the Viewer role can enter _any possible query_ in _any_ of the data sources available in the **organization**, not just the queries that are defined on the dashboards for which the user has Viewer permissions.
**For example:** In a Grafana instance with one data source, one dashboard, and one panel that has one query defined, you might assume that a Viewer can only see the result of the query defined in that panel. Actually, the Viewer has access to send any query to the data source. With a command-line tool like curl (there are lots of tools for this), the Viewer can make their own query to the data source and potentially access sensitive data.
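As a sketch of what that looks like (assuming a Prometheus data source whose internal ID is 1 and an API key belonging to a Viewer; the exact proxy path depends on the data source type):

```bash
# A Viewer can query the data source directly through Grafana's proxy endpoint,
# independent of what any dashboard panel defines.
curl -H "Authorization: Bearer <viewer-api-key>" \
  "http://localhost:3000/api/datasources/proxy/1/api/v1/query?query=up"
```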
@@ -41,6 +41,6 @@ To address this vulnerability, you can restrict data source query access in the
When you enable anonymous access to a dashboard, it is publicly available. This section lists the security implications of enabling Anonymous access.
- Anyone with the URL can access the dashboard.
- Anyone with the URL can access the dashboard.
- Anyone can make view calls to the API and list all folders, dashboards, and data sources.
- Anyone can make arbitrary queries to any data source that the Grafana instance is configured with.

View File

@@ -59,6 +59,7 @@ These instructions assume you have already added Prometheus as a data source in
static_configs:
- targets: ['localhost:3000']
```
1. Restart Prometheus. Your new job should appear on the Targets tab.
1. In Grafana, hover your mouse over the **Configuration** (gear) icon on the left sidebar and then click **Data Sources**.
1. Select the **Prometheus** data source.
@@ -81,6 +82,7 @@ These instructions assume you have already added Graphite as a data source in Gr
```
1. Enable [metrics.graphite] options:
```
# Send internal metrics to Graphite
[metrics.graphite]

View File

@@ -47,5 +47,5 @@ If a user belongs to several organizations, then that user is counted once as a
For example, if Sofia is a Viewer in two organizations, an Editor in two organizations, and an Admin in three organizations, then she would be reflected in the stats as:
- Total users 1
- Total admins 1
- Total users 1
- Total admins 1

View File

@@ -26,4 +26,4 @@ As part of the new alert changes, we have introduced a new data source, Alertman
> **Note:** Out of the box, Grafana still supports old Grafana alerts. They are legacy alerts at this time, and will be deprecated in a future release. For more information, refer to [Legacy Grafana alerts]({{< relref "./old-alerting/_index.md" >}}).
To learn more about the differences between new alerts and the legacy alerts, refer to [What's New with Grafana 8 Alerts]({{< relref "../alerting/difference-old-new.md" >}}).
To learn more about the differences between new alerts and the legacy alerts, refer to [What's New with Grafana 8 Alerts]({{< relref "../alerting/difference-old-new.md" >}}).

View File

@@ -6,16 +6,21 @@ weight = 112
+++
# What's New with Grafana 8 Alerts
The Alerts released with Grafana 8.0 are an opt-in feature that centralizes alerting information for Grafana managed alerts and alerts from Prometheus-compatible datasources in one UI and API. You are able to create and edit alerting rules for Grafana managed alerts, Cortex alerts, and Loki alerts, as well as see alerting information from Prometheus-compatible datasources in a single, searchable view.
## Multi-dimensional alerting
Create alerts that will give you system-wide visibility with a single alerting rule. With Grafana 8 alerts, you are able to generate multiple alert instances from a single rule, e.g., creating a rule to monitor disk usage for multiple mount points on a single host. The evaluation engine is able to return multiple time series from a single query. Each time series is identified by its label set.
Create alerts that will give you system-wide visibility with a single alerting rule. With Grafana 8 alerts, you are able to generate multiple alert instances from a single rule, e.g., creating a rule to monitor disk usage for multiple mount points on a single host. The evaluation engine is able to return multiple time series from a single query. Each time series is identified by its label set.
## Create alerts outside of Dashboards
Grafana legacy alerts were tied to a dashboard. Grafana 8 Alerts allow you to create queries and expressions that can combine data from multiple sources, in unique ways. You are still able to link dashboards and panels to alerting rules, allowing you to quickly troubleshoot the system under observation, by linking a dashboard and/or panel ID to the alerting rule.
Grafana legacy alerts were tied to a dashboard. Grafana 8 Alerts allow you to create queries and expressions that can combine data from multiple sources, in unique ways. You are still able to link dashboards and panels to alerting rules, allowing you to quickly troubleshoot the system under observation, by linking a dashboard and/or panel ID to the alerting rule.
## Create Loki and Cortex alerting rules
With Grafana 8 Alerts you are able to manage your Loki and Cortex alerting rules using the same UI and API as your Grafana managed alerts.
With Grafana 8 Alerts you are able to manage your Loki and Cortex alerting rules using the same UI and API as your Grafana managed alerts.
## View and search for alerts from Prometheus
You can now display all of your alerting information in one, searchable UI. Alerts for Prometheus compatible datasources are listed below Grafana managed alerts. Search for labels across multiple datasources to quickly find all of the relevant alerts.

View File

@@ -17,7 +17,7 @@ Currently only the graph panel visualization supports alerts.
Legacy alerts have two main components:
- Alert rule - When the alert is triggered. Alert rules are defined by one or more conditions that are regularly evaluated by Grafana.
- Notification channel - How the alert is delivered. When the conditions of an alert rule are met, Grafana notifies the channels configured for that alert.
- Notification channel - How the alert is delivered. When the conditions of an alert rule are met, Grafana notifies the channels configured for that alert.
## Alert tasks
@@ -37,20 +37,21 @@ Currently alerting supports a limited form of high availability. Since v4.2.0 of
Grafana managed alerts are evaluated by the Grafana backend. Rule evaluations are scheduled, according to the alert rule configuration, and queries are evaluated by an engine that is part of core Grafana.
Alert rules can only query backend data sources with alerting enabled:
- built-in or developed and maintained by Grafana: `Graphite`, `Prometheus`, `Loki`, `InfluxDB`, `Elasticsearch`,
`Google Cloud Monitoring`, `Cloudwatch`, `Azure Monitor`, `MySQL`, `PostgreSQL`, `MSSQL`, `OpenTSDB`, `Oracle`, and `Azure Data Explorer`
`Google Cloud Monitoring`, `Cloudwatch`, `Azure Monitor`, `MySQL`, `PostgreSQL`, `MSSQL`, `OpenTSDB`, `Oracle`, and `Azure Data Explorer`
- any community backend data sources with alerting enabled (`backend` and `alerting` properties are set in the [plugin.json]({{< relref "../../developers/plugins/metadata.md" >}}))
## Metrics from the alert engine
The alert engine publishes some internal metrics about itself. You can read more about how Grafana publishes [internal metrics]({{< relref "../../administration/view-server/internal-metrics.md" >}}).
Metric Name | Type | Description
---------- | ----------- | ----------
`alerting.alerts` | gauge | How many alerts by state
`alerting.request_duration_seconds` | histogram | Histogram of requests to the Alerting API
`alerting.active_configurations` | gauge | The number of active, non default alertmanager configurations for grafana managed alerts
`alerting.rule_evaluations_total` | counter | The total number of rule evaluations
`alerting.rule_evaluation_failures_total` | counter | The total number of rule evaluation failures
`alerting.rule_evaluation_duration_seconds` | summary | The duration for a rule to execute
`alerting.rule_group_rules` | gauge | The number of rules
| Metric Name | Type | Description |
| ------------------------------------------- | --------- | ---------------------------------------------------------------------------------------- |
| `alerting.alerts` | gauge | How many alerts by state |
| `alerting.request_duration_seconds` | histogram | Histogram of requests to the Alerting API |
| `alerting.active_configurations` | gauge | The number of active, non default alertmanager configurations for grafana managed alerts |
| `alerting.rule_evaluations_total` | counter | The total number of rule evaluations |
| `alerting.rule_evaluation_failures_total` | counter | The total number of rule evaluation failures |
| `alerting.rule_evaluation_duration_seconds` | summary | The duration for a rule to execute |
| `alerting.rule_group_rules` | gauge | The number of rules |

View File

@@ -96,16 +96,16 @@ Below are conditions you can configure how the rule evaluation engine should han
| --------------- | ------------------------------------------------------------------------------------------ |
| No Data | Set alert rule state to `NoData` |
| Alerting | Set alert rule state to `Alerting` |
| Keep Last State | Keep the current alert rule state, whatever it is. |
| Keep Last State | Keep the current alert rule state, whatever it is. |
| Ok | Not sure why you would want to send yourself an alert when things are okay, but you could. |
### Execution errors or timeouts
Tell Grafana how to handle execution or timeout errors.
| Error or timeout option | Description |
| ----------------------- | --------------------------------------------------- |
| Alerting | Set alert rule state to `Alerting` |
| Error or timeout option | Description |
| ----------------------- | -------------------------------------------------- |
| Alerting | Set alert rule state to `Alerting` |
| Keep Last State | Keep the current alert rule state, whatever it is. |
If you have an unreliable time series store from which queries sometimes time out or fail randomly, you can set this option to `Keep Last State` to effectively ignore them.
@@ -124,4 +124,3 @@ The actual notifications are configured and shared between multiple alerts. Read
## Alert state history and annotations
Alert state changes are recorded in the internal annotation table in Grafana's database. The state changes are visualized as annotations in the alert rule's graph panel. You can also go into the `State history` submenu in the alert tab to view and clear state history.

View File

@@ -37,41 +37,41 @@ This is done from the Notification channels page.
These examples show how often and when reminders are sent for a triggered alert.
Alert rule evaluation interval | Send reminders every | Reminder sent every (after last alert notification)
---------- | ----------- | -----------
`30s` | `15s` | ~30 seconds
`1m` | `5m` | ~5 minutes
`5m` | `15m` | ~15 minutes
`6m` | `20m` | ~24 minutes
`1h` | `15m` | ~1 hour
`1h` | `2h` | ~2 hours
| Alert rule evaluation interval | Send reminders every | Reminder sent every (after last alert notification) |
| ------------------------------ | -------------------- | --------------------------------------------------- |
| `30s` | `15s` | ~30 seconds |
| `1m` | `5m` | ~5 minutes |
| `5m` | `15m` | ~15 minutes |
| `6m` | `20m` | ~24 minutes |
| `1h` | `15m` | ~1 hour |
| `1h` | `2h` | ~2 hours |
<div class="clearfix"></div>
## List of supported notifiers
Name | Type | Supports images | Support alert rule tags
-----|------|---------------- | -----------------------
[DingDing](#dingdingdingtalk) | `dingding` | yes, external only | no
[Discord](#discord) | `discord` | yes | no
[Email](#email) | `email` | yes | no
[Google Hangouts Chat](#google-hangouts-chat) | `googlechat` | yes, external only | no
Hipchat | `hipchat` | yes, external only | no
[Kafka](#kafka) | `kafka` | yes, external only | no
Line | `line` | yes, external only | no
Microsoft Teams | `teams` | yes, external only | no
[Opsgenie](#opsgenie) | `opsgenie` | yes, external only | yes
[Pagerduty](#pagerduty) | `pagerduty` | yes, external only | yes
Prometheus Alertmanager | `prometheus-alertmanager` | yes, external only | yes
[Pushover](#pushover) | `pushover` | yes | no
Sensu | `sensu` | yes, external only | no
[Sensu Go](#sensu-go) | `sensugo` | yes, external only | no
[Slack](#slack) | `slack` | yes | no
Telegram | `telegram` | yes | no
Threema | `threema` | yes, external only | no
VictorOps | `victorops` | yes, external only | yes
[Webhook](#webhook) | `webhook` | yes, external only | yes
[Zenduty](#zenduty) | `webhook` | yes, external only | yes
| Name | Type | Supports images | Support alert rule tags |
| --------------------------------------------- | ------------------------- | ------------------ | ----------------------- |
| [DingDing](#dingdingdingtalk) | `dingding` | yes, external only | no |
| [Discord](#discord) | `discord` | yes | no |
| [Email](#email) | `email` | yes | no |
| [Google Hangouts Chat](#google-hangouts-chat) | `googlechat` | yes, external only | no |
| Hipchat | `hipchat` | yes, external only | no |
| [Kafka](#kafka) | `kafka` | yes, external only | no |
| Line | `line` | yes, external only | no |
| Microsoft Teams | `teams` | yes, external only | no |
| [Opsgenie](#opsgenie) | `opsgenie` | yes, external only | yes |
| [Pagerduty](#pagerduty) | `pagerduty` | yes, external only | yes |
| Prometheus Alertmanager | `prometheus-alertmanager` | yes, external only | yes |
| [Pushover](#pushover) | `pushover` | yes | no |
| Sensu | `sensu` | yes, external only | no |
| [Sensu Go](#sensu-go) | `sensugo` | yes, external only | no |
| [Slack](#slack) | `slack` | yes | no |
| Telegram | `telegram` | yes | no |
| Threema | `threema` | yes, external only | no |
| VictorOps | `victorops` | yes, external only | yes |
| [Webhook](#webhook) | `webhook` | yes, external only | yes |
| [Zenduty](#zenduty) | `webhook` | yes, external only | yes |
### Email
@@ -83,10 +83,10 @@ able to access the image.
> **Note:** Template variables are not supported in email alerts.
Setting | Description
---------- | -----------
Single email | Send a single email to all recipients. Disabled by default.
Addresses | Email addresses to recipients. You can enter multiple email addresses using a ";" separator.
| Setting | Description |
| ------------ | -------------------------------------------------------------------------------------------- |
| Single email | Send a single email to all recipients. Disabled by default. |
| Addresses | Email addresses to recipients. You can enter multiple email addresses using a ";" separator. |
### Slack
@@ -98,17 +98,17 @@ firing alerts in the Slack messages you have to configure either the [external i
in Grafana or a bot integration via Slack Apps. [Follow Slack's guide to set up a bot integration](https://api.slack.com/bot-users) and use the token
provided, which starts with "xoxb".
Setting | Description
---------- | -----------
Url | Slack incoming webhook URL, or eventually the [chat.postMessage](https://api.slack.com/methods/chat.postMessage) Slack API endpoint.
Username | Set the username for the bot's message.
Recipient | Allows you to override the Slack recipient. You must either provide a channel Slack ID, a user Slack ID, a username reference (@&lt;user&gt;, all lowercase, no whitespace), or a channel reference (#&lt;channel&gt;, all lowercase, no whitespace). If you use the `chat.postMessage` Slack API endpoint, this is required.
Icon emoji | Provide an emoji to use as the icon for the bot's message. Ex :smile:
Icon URL | Provide a URL to an image to use as the icon for the bot's message.
Mention Users | Optionally mention one or more users in the Slack notification sent by Grafana. You have to refer to users, comma-separated, via their corresponding Slack IDs (which you can find by clicking the overflow button on each user's Slack profile).
Mention Groups | Optionally mention one or more groups in the Slack notification sent by Grafana. You have to refer to groups, comma-separated, via their corresponding Slack IDs (which you can get from each group's Slack profile URL).
Mention Channel | Optionally mention either all channel members or just active ones.
Token | If provided, Grafana will upload the generated image via Slack's file.upload API method, not the external image destination. If you use the `chat.postMessage` Slack API endpoint, this is required.
| Setting | Description |
| --------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Url | Slack incoming webhook URL, or eventually the [chat.postMessage](https://api.slack.com/methods/chat.postMessage) Slack API endpoint. |
| Username | Set the username for the bot's message. |
| Recipient | Allows you to override the Slack recipient. You must either provide a channel Slack ID, a user Slack ID, a username reference (@&lt;user&gt;, all lowercase, no whitespace), or a channel reference (#&lt;channel&gt;, all lowercase, no whitespace). If you use the `chat.postMessage` Slack API endpoint, this is required. |
| Icon emoji | Provide an emoji to use as the icon for the bot's message. Ex :smile: |
| Icon URL | Provide a URL to an image to use as the icon for the bot's message. |
| Mention Users | Optionally mention one or more users in the Slack notification sent by Grafana. You have to refer to users, comma-separated, via their corresponding Slack IDs (which you can find by clicking the overflow button on each user's Slack profile). |
| Mention Groups | Optionally mention one or more groups in the Slack notification sent by Grafana. You have to refer to groups, comma-separated, via their corresponding Slack IDs (which you can get from each group's Slack profile URL). |
| Mention Channel | Optionally mention either all channel members or just active ones. |
| Token | If provided, Grafana will upload the generated image via Slack's file.upload API method, not the external image destination. If you use the `chat.postMessage` Slack API endpoint, this is required. |
If you are using the token for a slack bot, then you have to invite the bot to the channel you want to send notifications and add the channel to the recipient field.
@@ -116,12 +116,12 @@ If you are using the token for a slack bot, then you have to invite the bot to t
To setup Opsgenie you will need an API Key and the Alert API Url. These can be obtained by configuring a new [Grafana Integration](https://docs.opsgenie.com/docs/grafana-integration).
Setting | Description
--------|------------
Alert API URL | The API URL for your Opsgenie instance. This will normally be either `https://api.opsgenie.com` or, for EU customers, `https://api.eu.opsgenie.com`.
API Key | The API Key as provided by Opsgenie for your configured Grafana integration.
Override priority | Configures the alert priority using the `og_priority` tag. The `og_priority` tag must have one of the following values: `P1`, `P2`, `P3`, `P4`, or `P5`. Default is `False`.
Send notification tags as | Specify how you would like [Notification Tags]({{< relref "create-alerts.md/#notifications" >}}) delivered to Opsgenie. They can be delivered as `Tags`, `Extra Properties` or both. Default is Tags. See note below for more information.
| Setting | Description |
| ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Alert API URL | The API URL for your Opsgenie instance. This will normally be either `https://api.opsgenie.com` or, for EU customers, `https://api.eu.opsgenie.com`. |
| API Key | The API Key as provided by Opsgenie for your configured Grafana integration. |
| Override priority | Configures the alert priority using the `og_priority` tag. The `og_priority` tag must have one of the following values: `P1`, `P2`, `P3`, `P4`, or `P5`. Default is `False`. |
| Send notification tags as | Specify how you would like [Notification Tags]({{< relref "create-alerts.md/#notifications" >}}) delivered to Opsgenie. They can be delivered as `Tags`, `Extra Properties` or both. Default is Tags. See note below for more information. |
> **Note:** When notification tags are sent as `Tags` they are concatenated into a string with a `key:value` format. If you prefer to receive the notifications tags as key/values under Extra Properties in Opsgenie then change the `Send notification tags as` to either `Extra Properties` or `Tags & Extra Properties`.
@@ -129,19 +129,19 @@ Send notification tags as | Specify how you would like [Notification Tags]({{< r
To set up PagerDuty, all you have to do is to provide an integration key.
Setting | Description
---------- | -----------
Integration Key | Integration key for PagerDuty.
Severity | Level for dynamic notifications, default is `critical` (1)
Auto resolve incidents | Resolve incidents in PagerDuty once the alert goes back to ok
Message in details | Removes the Alert message from the PD summary field and puts it into custom details instead (2)
| Setting | Description |
| ---------------------- | ----------------------------------------------------------------------------------------------- |
| Integration Key | Integration key for PagerDuty. |
| Severity | Level for dynamic notifications, default is `critical` (1) |
| Auto resolve incidents | Resolve incidents in PagerDuty once the alert goes back to ok |
| Message in details | Removes the Alert message from the PD summary field and puts it into custom details instead (2) |
>**Note:** The tags `Severity`, `Class`, `Group`, `dedup_key`, and `Component` have special meaning in the [Pagerduty Common Event Format - PD-CEF](https://support.pagerduty.com/docs/pd-cef). If an alert panel defines these tag keys, then they are transposed to the root of the event sent to Pagerduty. This means they will be available within the Pagerduty UI and Filtering tools. A Severity tag set on an alert overrides the global Severity set on the notification channel if it's a valid level.
> **Note:** The tags `Severity`, `Class`, `Group`, `dedup_key`, and `Component` have special meaning in the [Pagerduty Common Event Format - PD-CEF](https://support.pagerduty.com/docs/pd-cef). If an alert panel defines these tag keys, then they are transposed to the root of the event sent to Pagerduty. This means they will be available within the Pagerduty UI and Filtering tools. A Severity tag set on an alert overrides the global Severity set on the notification channel if it's a valid level.
>Using Message In Details will change the structure of the `custom_details` field in the PagerDuty Event.
This might break custom event rules in your PagerDuty rules if you rely on the fields in `payload.custom_details`.
Move any existing rules using `custom_details.myMetric` to `custom_details.queries.myMetric`.
This behavior will become the default in a future version of Grafana.
> Using Message In Details will change the structure of the `custom_details` field in the PagerDuty Event.
> This might break custom event rules in your PagerDuty rules if you rely on the fields in `payload.custom_details`.
> Move any existing rules using `custom_details.myMetric` to `custom_details.queries.myMetric`.
> This behavior will become the default in a future version of Grafana.
> **Note:** The `dedup_key` tag overrides the Grafana-generated `dedup_key` with a custom key.
@@ -152,22 +152,22 @@ This behavior will become the default in a future version of Grafana.
To configure VictorOps, provide the URL from the Grafana Integration and substitute `$routing_key` with a valid key.
> **Note:** The tag `Severity` has special meaning in the [VictorOps Incident Fields](https://help.victorops.com/knowledge-base/incident-fields-glossary/). If an alert panel defines this key, then it replaces the `message_type` in the root of the event sent to VictorOps.
### Pushover
To set up Pushover, you must provide a user key and an API token. Refer to [What is Pushover and how do I use it](https://support.pushover.net/i7-what-is-pushover-and-how-do-i-use-it) for instructions on how to generate them.
Setting | Description
---------- | -----------
API Token | Application token
User key(s) | A comma-separated list of user keys
Device(s) | A comma-separated list of devices
Priority | The priority alerting notifications are sent
OK priority | The priority OK notifications are sent; if not set, then OK notifications are sent with the priority set for alerting notifications
Retry | How often (in seconds) the Pushover servers send the same notification to the user. (minimum 30 seconds)
Expire | How many seconds your notification will continue to be retried for (maximum 86400 seconds)
Alerting sound | The sound for alerting notifications
OK sound | The sound for OK notifications
| Setting | Description |
| -------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| API Token | Application token |
| User key(s) | A comma-separated list of user keys |
| Device(s) | A comma-separated list of devices |
| Priority | The priority alerting notifications are sent |
| OK priority | The priority OK notifications are sent; if not set, then OK notifications are sent with the priority set for alerting notifications |
| Retry | How often (in seconds) the Pushover servers send the same notification to the user. (minimum 30 seconds) |
| Expire | How many seconds your notification will continue to be retried for (maximum 86400 seconds) |
| Alerting sound | The sound for alerting notifications |
| OK sound | The sound for OK notifications |
### Webhook
@@ -178,26 +178,26 @@ Example json body:
```json
{
"dashboardId":1,
"evalMatches":[
"dashboardId": 1,
"evalMatches": [
{
"value":1,
"metric":"Count",
"tags":{}
"value": 1,
"metric": "Count",
"tags": {}
}
],
"imageUrl":"https://grafana.com/assets/img/blog/mixed_styles.png",
"message":"Notification Message",
"orgId":1,
"panelId":2,
"ruleId":1,
"ruleName":"Panel Title alert",
"ruleUrl":"http://localhost:3000/d/hZ7BuVbWz/test-dashboard?fullscreen\u0026edit\u0026tab=alert\u0026panelId=2\u0026orgId=1",
"state":"alerting",
"tags":{
"tag name":"tag value"
"imageUrl": "https://grafana.com/assets/img/blog/mixed_styles.png",
"message": "Notification Message",
"orgId": 1,
"panelId": 2,
"ruleId": 1,
"ruleName": "Panel Title alert",
"ruleUrl": "http://localhost:3000/d/hZ7BuVbWz/test-dashboard?fullscreen\u0026edit\u0026tab=alert\u0026panelId=2\u0026orgId=1",
"state": "alerting",
"tags": {
"tag name": "tag value"
},
"title":"[Alerting] Panel Title alert"
"title": "[Alerting] Panel Title alert"
}
```
@@ -213,7 +213,7 @@ In DingTalk PC Client:
2. Click "Robot Manage" item in the pop menu, there will be a new panel call "Robot Manage".
3. In the "Robot Manage" panel, select "customized: customized robot with Webhook".
3. In the "Robot Manage" panel, select "customized: customized robot with Webhook".
4. In the next new panel named "robot detail", click "Add" button.
@@ -226,11 +226,11 @@ In DingTalk PC Client:
To set up Discord, you must create a Discord channel webhook. For instructions on how to create the channel, refer to
[Intro to Webhooks](https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks).
| Setting         | Description                                                                        |
| --------------- | ---------------------------------------------------------------------------------- |
| Webhook URL     | Discord webhook URL.                                                                |
| Message Content | Mention a group using @ or a user using <@ID> when notifying in a channel.          |
| Avatar URL      | Optionally, provide a URL to an image to use as the avatar for the bot's message.   |
Alternately, use the [Slack](#slack) notifier by appending `/slack` to a Discord webhook URL.
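For illustration, a hedged sketch of posting a test message through that Slack-compatible endpoint (the webhook ID and token are placeholders, and the body uses Slack's simple `text` payload format):

```bash
# Send a test message to a Discord channel through its Slack-compatible
# endpoint; replace <id> and <token> with the values from your webhook URL.
curl -H "Content-Type: application/json" \
  -d '{"text": "Test message from Grafana setup"}' \
  "https://discord.com/api/webhooks/<id>/<token>/slack"
```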

View File

@@ -12,4 +12,4 @@ Pausing the evaluation of an alert rule can sometimes be useful. For example, du
1. In the Grafana side bar, hover your cursor over the Alerting (bell) icon and then click **Alert Rules**. All configured alert rules are listed, along with their current state.
1. Find your alert in the list, and click the **Pause** icon on the right. The **Pause** icon turns into a **Play** icon.
1. Click the **Play** icon to resume evaluation of your alert.

View File

@@ -17,4 +17,4 @@ You can do several things while viewing alerts.
- **Filter alerts by name -** Type an alert name in the **Search alerts** field.
- **Filter alerts by state -** In **States**, select which alert states you want to see. All others will be hidden.
- **Pause or resume an alert -** Click the **Pause** or **Play** icon next to the alert to pause or resume evaluation. See [Pause an alert rule]({{< relref "pause-an-alert-rule.md" >}}) for more information.
- **Access alert rule settings -** Click the alert name or the **Edit alert rule** (gear) icon. Grafana opens the Alert tab of the panel where the alert rule is defined. This is helpful when an alert is firing but you don't know which panel it is defined in.

View File

@@ -5,9 +5,10 @@ weight = 113
+++
# Overview of Grafana 8 alerts
Alerts allow you to know about problems in your systems moments after they occur. Robust and actionable alerts help you identify and resolve issues quickly, minimizing disruption to your services.
>**Note:** This information is for the new, Grafana 8 Alerts. This is an [opt-in]({{< relref"./opt-in.md" >}}) feature released in Grafana 8.0. Grafana still supports [legacy dashboard alerts]({{< relref "../old-alerting/_index.md" >}}) out of the box
> **Note:** This information is for the new, Grafana 8 Alerts. This is an [opt-in]({{< relref"./opt-in.md" >}}) feature released in Grafana 8.0. Grafana still supports [legacy dashboard alerts]({{< relref "../old-alerting/_index.md" >}}) out of the box
Alerts have four main components:
@@ -20,7 +21,6 @@ Alerts have four main components:
You can perform the following tasks for alerts:
- [Create a Grafana managed alert rule]({{< relref "alerting-rules/create-grafana-managed-rule.md" >}})
- [Create a Cortex or Loki managed alert rule]({{< relref "alerting-rules/create-cortex-loki-managed-rule.md" >}})
- [View existing alert rules and their current state]({{< relref "alerting-rules/rule-list.md" >}})
@@ -38,6 +38,7 @@ The current alerting system doesn't support high availability. Alert notificatio
Grafana managed alerts are evaluated by the Grafana backend. Rule evaluations are scheduled, according to the alert rule configuration, and queries are evaluated by an engine that is part of core Grafana.
Alerting rules can only query backend data sources with alerting enabled:
- built-in, or developed and maintained by Grafana: `Graphite`, `Prometheus`, `Loki`, `InfluxDB`, `Elasticsearch`,
`Google Cloud Monitoring`, `Cloudwatch`, `Azure Monitor`, `MySQL`, `PostgreSQL`, `MSSQL`, `OpenTSDB`, `Oracle`, and `Azure Data Explorer`
- any community backend data sources with alerting enabled (`backend` and `alerting` properties are set in the [plugin.json]({{< relref "../../developers/plugins/metadata.md" >}}))
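As a quick check, a sketch of verifying those two properties in an installed plugin (the path assumes the default plugins directory, and `<plugin-id>` is a placeholder):

```bash
# Both "backend" and "alerting" should be present and set to true in the
# plugin.json of a data source plugin that supports alerting.
grep -E '"(backend|alerting)"' "/var/lib/grafana/plugins/<plugin-id>/plugin.json"
```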
@@ -46,15 +47,14 @@ Alerting rules can only query backend data sources with alerting enabled:
The alerting engine publishes some internal metrics about itself. You can read more about how Grafana publishes [internal metrics]({{< relref "../../administration/view-server/internal-metrics.md" >}}).
| Metric Name                                  | Type      | Description                                                                               |
| -------------------------------------------- | --------- | ----------------------------------------------------------------------------------------- |
| `alerting.alerts`                            | gauge     | How many alerts by state                                                                   |
| `alerting.request_duration_seconds`          | histogram | Histogram of requests to the Alerting API                                                  |
| `alerting.active_configurations`             | gauge     | The number of active, non-default Alertmanager configurations for Grafana managed alerts   |
| `alerting.rule_evaluations_total`            | counter   | The total number of rule evaluations                                                       |
| `alerting.rule_evaluation_failures_total`    | counter   | The total number of rule evaluation failures                                               |
| `alerting.rule_evaluation_duration_seconds`  | summary   | The duration for a rule to execute                                                         |
| `alerting.rule_group_rules`                  | gauge     | The number of rules                                                                        |
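For example, a minimal sketch of listing these metrics from a local instance (this assumes internal metrics are enabled and Grafana is listening on the default port; the exposed names may be prefixed and use underscores instead of dots):

```bash
# Fetch Grafana's internal metrics endpoint and show the alerting-related series.
curl -s http://localhost:3000/metrics | grep alerting
```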
- [View alert rules and their current state]({{< relref "alerting-rules/rule-list.md" >}})

View File

@@ -5,7 +5,8 @@ weight = 130
+++
# Create and manage alerting rules
An alerting rule consists of one or more queries and/or expressions, a condition, the frequency of evaluation, and, optionally, the duration for which the condition must be met before an alert is created. Alerting rules are how you express the criteria for creating an alert. Queries and expressions select and can operate on the data you wish to alert on. A condition sets the threshold that an alert must meet or exceed to create an alert. The interval specifies how frequently the rule should be evaluated. The duration, when configured, sets a period that a condition must be met or exceeded before an alert is created. Alerting rules can also contain settings for what to do when your query returns no data, or when there is an error attempting to execute the query.
- [Create Cortex or Loki managed alert rule]({{< relref "./create-cortex-loki-managed-rule.md" >}})
- [Create Grafana managed alert rule]({{< relref "./create-grafana-managed-rule.md" >}})

View File

@@ -7,10 +7,9 @@ weight = 400
# Create a Cortex or Loki managed alerting rule
Grafana allows you to manage alerting rules for an external Cortex or Loki instance.
In order for both Cortex and Loki data sources to work with Grafana 8.0 alerting, enable the ruler API by configuring their respective services. The `local` rule storage type, default for Loki, only supports viewing of rules. If you want to edit rules, then configure one of the other rule storage types. When configuring a Grafana Prometheus data source to point to Cortex, use the legacy `/api/prom` prefix, not `/prometheus`. Only single-binary mode is currently supported, and it is not possible to provide a separate URL for the ruler API.
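As an illustration only, a sketch of what a provisioned Prometheus data source pointing at Cortex through the legacy prefix might look like (the file path, data source name, and URL are placeholders):

```bash
# Write a minimal data source provisioning file; Grafana picks it up on restart.
cat <<'EOF' > /etc/grafana/provisioning/datasources/cortex.yaml
apiVersion: 1
datasources:
  - name: Cortex
    type: prometheus
    access: proxy
    url: http://cortex.example.com/api/prom
EOF
```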
## Add or edit a Cortex or Loki managed alerting rule
@@ -26,11 +25,11 @@ This section describes the fields you fill out to create an alert.
### Alert type
- **Alert name -** Enter a descriptive name. The name will be displayed in the alert rule list, as well as added as the `alertname` label to every alert instance that is created from this rule.
- **Alert type -** Select **Cortex / Loki managed alert**.
- **Data source -** Select a Prometheus or Loki data source. Only Prometheus data sources that support the Cortex ruler API will be available.
- **Namespace -** Select an existing rule namespace or click **Add new** to create a new one.
- **Group -** Select an existing group within the selected namespace or click **Add new** to create a new one. Newly created rules will be added to the end of the rule group.
![Alert type section screenshot](/static/img/docs/alerting/unified/rule-edit-cortex-alert-type-8-0.png 'Alert type section screenshot')
@@ -42,7 +41,7 @@ Enter a PromQL or LogQL expression. Rule will fire if evaluation result has at l
### Conditions
- **For -** For how long the selected condition should be violated before an alert enters the `Firing` state. When the condition threshold is violated for the first time, an alert becomes `Pending`. If the **for** time elapses and the condition is still violated, it becomes `Firing`. Otherwise it reverts back to `Normal`.
![Conditions section](/static/img/docs/alerting/unified/rule-edit-cortex-conditions-8-0.png 'Conditions section screenshot')
@@ -52,11 +51,11 @@ Annotations and labels can be optionally added in the details section.
#### Annotations
Annotations are key and value pairs that provide additional meta information about the alert, for example a description, summary, or runbook URL. They are displayed in rule and alert details in the UI and can be used in contact point message templates. Annotations can also be templated, for example `Instance {{ $labels.instance }} down` will have the evaluated `instance` label value added for every alert this rule produces.
#### Labels
Labels are key-value pairs that categorize or identify an alert. Labels are used to match alerts in silences or to match and group alerts in notification policies. Labels are also shown in rule or alert details in the UI and can be used in contact point message templates. For example, it is common to add a `severity` label and then configure a separate notification policy for each severity. Or one could add a `team` label and configure team-specific notification policies, or silence all alerts for a particular team.
![Details section](/static/img/docs/alerting/unified/rule-edit-details-8-0.png 'Details section screenshot')

View File

@@ -7,7 +7,7 @@ weight = 400
# Create a Grafana managed alerting rule
Grafana allows you to create alerting rules that query one or more data sources, reduce or transform the results, and compare them to each other or to fixed thresholds. These rules are executed, and notifications sent, by Grafana itself.
## Add or edit a Grafana managed alerting rule
@@ -23,15 +23,15 @@ This section describes the fields you fill out to create an alert.
### Alert type
- **Alert name -** Enter a descriptive name. The name will be displayed in the alert rule list, as well as added as the `alertname` label to every alert instance that is created from this rule.
- **Alert type -** Select **Grafana managed alert**.
- **Folder -** Select a folder this alert rule will belong to. To create a new folder, click on the drop down and type in a new folder name.
![Alert type section screenshot](/static/img/docs/alerting/unified/rule-edit-grafana-alert-type-8-0.png 'Alert type section screenshot')
### Query
Add one or more [queries]({{< relref "../../../panels/queries.md" >}}) or [expressions]({{< relref "../../../panels/expressions.md" >}}). You can use a classic condition expression to create a rule that will trigger a single alert if its threshold is met, or use reduce and math expressions to create a multi-dimensional alert rule that can trigger multiple alerts, one per matching series in the query result.
#### Rule with classic condition
@@ -41,7 +41,7 @@ You can use classic condition expression to create a rule that will trigger a si
1. Add an expression. Click on **Operation** dropdown and select **Classic condition**.
1. Add one or more conditions. For each condition you can specify operator (`AND` / `OR`), aggregation function, query letter and threshold value.
If a query returns multiple series, then the aggregation function and threshold check will be evaluated for each series. Grafana does not track alert state **per series**. This has implications that are detailed in the scenario below.
- Alert condition with query that returns 2 series: **server1** and **server2**
- **server1** series causes the alert rule to fire and switch to state `Firing`
@@ -67,25 +67,24 @@ See or [expressions documentation]({{< relref "../../../panels/expressions.md" >
### Conditions
- **Condition -** Select the letter of the query or expression whose result will trigger the alert rule. You will likely want to select either a `classic condition` or a `math` expression.
- **Evaluate every -** How often the rule should be evaluated, executing the defined queries and expressions. Must be no less than 10 seconds and a multiple of 10 seconds. Examples: `1m`, `30s`
- **Evaluate for -** For how long the selected condition should be violated before an alert enters the `Alerting` state. When the condition threshold is violated for the first time, an alert becomes `Pending`. If the **for** time elapses and the condition is still violated, it becomes `Alerting`. Otherwise it reverts back to `Normal`.
#### No Data & Error handling
Toggle the **Configure no data and error handling** switch to configure how the rule should handle cases where evaluation results in an error or returns no data.

| No Data Option | Description                                             |
| -------------- | ------------------------------------------------------- |
| No Data        | Set alert state to `NoData` and rule state to `Normal`  |
| Alerting       | Set alert rule state to `Alerting`                      |
| Ok             | Set alert rule state to `Normal`                        |
| Error or timeout option | Description                        |
| ----------------------- | ---------------------------------- |
| Alerting                | Set alert rule state to `Alerting` |
| OK                      | Set alert rule state to `Normal`   |
![Conditions section](/static/img/docs/alerting/unified/rule-edit-grafana-conditions-8-0.png 'Conditions section screenshot')
@@ -95,11 +94,11 @@ Annotations and labels can be optionally added in the details section.
#### Annotations
Annotations are key and value pairs that provide additional meta information about the alert, for example a description, summary, or runbook URL. They are displayed in rule and alert details in the UI and can be used in contact point message templates. Annotations can also be templated, for example `Instance {{ $labels.instance }} down` will have the evaluated `instance` label value added for every alert this rule produces.
#### Labels
Labels are key-value pairs that categorize or identify an alert. Labels are used to match alerts in silences or to match and group alerts in notification policies. Labels are also shown in rule or alert details in the UI and can be used in contact point message templates. For example, it is common to add a `severity` label and then configure a separate notification policy for each severity. Or one could add a `team` label and configure team-specific notification policies, or silence all alerts for a particular team. Labels can also be templated like annotations, for example `{{ $labels.namespace }}/{{ $labels.job }}` will produce a new rule label that will have the evaluated `namespace` and `job` label value added for every alert this rule produces. The rule labels take precedence over the labels produced by the query/condition.
![Details section](/static/img/docs/alerting/unified/rule-edit-details-8-0.png 'Details section screenshot')
@@ -107,11 +106,11 @@ Labels are key value pairs that categorize or identify an alert. Labels are use
The following template variables are available when expanding annotations and labels.
| Name    | Description                                                                                                                                                                                                               |
| ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| $labels | The labels from the query or condition. For example, `{{ $labels.instance }}` and `{{ $labels.job }}`.                                                                                                                    |
| $values | The values of all reduce and math expressions that were evaluated for this alert rule. For example, `{{ $values.A }}`, `{{ $values.A.Labels }}` and `{{ $values.A.Value }}` where `A` is the `refID` of the expression.  |
| $value  | The value string of the alert instance. For example, `[ var='A' labels={instance=foo} value=10 ]`.                                                                                                                        |
## Preview alerts

View File

@@ -9,13 +9,17 @@ keywords = ["grafana", "alerting", "guide", "state"]
The concepts of state and health for alerting rules help you understand, at a glance, several key status indicators about your alerts. Alert state, alerting rule state, and alerting rule health are related, but they each convey subtly different information.
## Alerting rule state
Indicates whether any of the timeseries resulting from evaluation of the alerting rule are in an alerting state. Alerting rule state only requires a single alerting instance to be in a pending or firing state for the alerting rule state to not be normal.
- Normal: none of the timeseries returned are in an alerting state.
- Pending: at least one of the timeseries returned is in a pending state.
- Firing: at least one of the timeseries returned is in an alerting state.
## Alert state
Alert state is an indication of the output of the alerting evaluation engine.
- Normal: the condition for the alerting rule has evaluated to **false** for every timeseries returned by the evaluation engine.
- Alerting: the condition for the alerting rule has evaluated to **true** for at least one timeseries returned by the evaluation engine and the duration, if set, **has** been met or exceeded.
- Pending: the condition for the alerting rule has evaluated to **true** for at least one timeseries returned by the evaluation engine and the duration, if set, **has not** been met or exceeded.
@@ -23,7 +27,9 @@ Alert state is an indication of the output of the alerting evaluation engine.
- Error: There was an error encountered when attempting to evaluate the alerting rule.
## Alerting rule health
Indicates the status of alerting rule evaluation.
- Ok: the rule is being evaluated, data is being returned, and no errors have been encountered.
- Error: an error was encountered when evaluating the alerting rule.
- NoData: at least one of the timeseries returned during evaluation is in a NoData state.

View File

@@ -17,17 +17,19 @@ Grafana alerting UI allows you to configure contact points for the Grafana manag
1. In the Grafana side bar, hover your cursor over the **Alerting** (bell) icon and then click **Contact points**.
1. Click **Add contact point**.
1. Enter a **Name** for the contact point.
1. Select contact point type and fill out mandatory fields. **Optional settings** can be expanded for more options.
1. If you'd like this contact point to notify via multiple channels, for example both email and Slack, click **New contact point type** and fill out additional contact point type details.
1. Click **Save contact point** button at the bottom of the page.
## Editing a contact point
1. In the Grafana side bar, hover your cursor over the **Alerting** (bell) icon and then click **Contact points**.
1. Find the contact point you want to edit in the contact points table and click the **pen icon** on the right side.
1. Make any changes and click **Save contact point** button at the bottom of the page.
## Deleting a contact point
1. In the Grafana side bar, hover your cursor over the **Alerting** (bell) icon and then click **Contact points**.
1. Find the contact point you want to delete in the contact points table and click the **trash can icon** on the right side.
1. A confirmation dialog will open. Click **Yes, delete**.
@@ -36,31 +38,31 @@ Grafana alerting UI allows you to configure contact points for the Grafana manag
## List of notifiers supported by Grafana
| Name                                          | Type                      |
| --------------------------------------------- | ------------------------- |
| [DingDing](#dingdingdingtalk)                 | `dingding`                |
| [Discord](#discord)                           | `discord`                 |
| [Email](#email)                               | `email`                   |
| [Google Hangouts Chat](#google-hangouts-chat) | `googlechat`              |
| [Kafka](#kafka)                               | `kafka`                   |
| Line                                          | `line`                    |
| Microsoft Teams                               | `teams`                   |
| [Opsgenie](#opsgenie)                         | `opsgenie`                |
| [Pagerduty](#pagerduty)                       | `pagerduty`               |
| Prometheus Alertmanager                       | `prometheus-alertmanager` |
| [Pushover](#pushover)                         | `pushover`                |
| Sensu                                         | `sensu`                   |
| [Sensu Go](#sensu-go)                         | `sensugo`                 |
| [Slack](#slack)                               | `slack`                   |
| Telegram                                      | `telegram`                |
| Threema                                       | `threema`                 |
| VictorOps                                     | `victorops`               |
| [Webhook](#webhook)                           | `webhook`                 |
| [Zenduty](#zenduty)                           | `webhook`                 |
## Manage contact points for an external Alertmanager
Grafana alerting UI supports managing external Alertmanager configuration. Once you add an [Alertmanager data source]({{< relref "../../datasources/alertmanager.md" >}}), a dropdown displays at the top of the page where you can select either `Grafana` or an external Alertmanager as your data source.
{{< figure max-width="40%" src="/static/img/docs/alerting/unified/contact-points-select-am-8-0.gif" caption="Select Alertmanager" >}}
@@ -68,7 +70,6 @@ Grafana alerting UI supports managing external Alertmanager configuration. Once
To edit global configuration options for an Alertmanager, such as the SMTP server that is used by default for all email contact types:
1. In the Grafana side bar, hover your cursor over the **Alerting** (bell) icon and then click **Contact points**.
1. In the dropdown at the top of the page, select an Alertmanager data source.
1. Click **Edit global config** button at the bottom of the page.

View File

@@ -28,40 +28,40 @@ If there are string columns then those columns become labels. The name of column
For a MySQL table called "DiskSpace":
| Time        | Host | Disk | PercentFree |
| ----------- | ---- | ---- | ----------- |
| 2021-June-7 | web1 | /etc | 3           |
| 2021-June-7 | web2 | /var | 4           |
| 2021-June-7 | web3 | /var | 8           |
| ...         | ...  | ...  | ...         |
You can query the data filtering on time, but without returning the time series to Grafana. For example, an alert that would trigger per Host, Disk when there is less than 5% free space:
```sql
-- Average free space per Host and Disk over the dashboard time range,
-- returning 0 for any series that is not below the 5% threshold.
SELECT Host, Disk, CASE WHEN PercentFree < 5.0 THEN PercentFree ELSE 0 END AS PercentFree FROM (
  SELECT
    Host,
    Disk,
    Avg(PercentFree) AS PercentFree
  FROM DiskSpace
  WHERE __timeFilter(Time)
  GROUP BY
    Host,
    Disk
) AS DiskAverages
```
This query returns the following Table response to Grafana:
| Host | Disk | PercentFree |
| ---- | ---- | ----------- |
| web1 | /etc | 3           |
| web2 | /var | 4           |
| web3 | /var | 0           |
When this query is used as the **condition** in an alert rule, then any series with a non-zero value will be alerting. As a result, three alert instances are produced:
| Labels                | Status   |
| --------------------- | -------- |
| {Host=web1,disk=/etc} | Alerting |
| {Host=web2,disk=/var} | Alerting |
| {Host=web3,disk=/var} | Normal   |

View File

@@ -25,6 +25,7 @@ Grafana alerting UI allows you to configure templates for the Grafana managed al
> **Note:** Currently the configuration of the embedded Alertmanager is shared across organizations. Therefore, users are advised to use the new Grafana 8 alerts only if they have one organization; otherwise, templates for the Grafana managed alerts will be visible to all organizations.
### Create a template
1. In the Grafana side bar, hover your cursor over the **Alerting** (bell) icon and then click **Contact points**.
1. Click **Add template**.
1. Fill in **Name** and **Content** fields.
@@ -32,18 +33,19 @@ Grafana alerting UI allows you to configure templates for the Grafana managed al
**Note** The template name used to reference this template in templating is not the value of the **Name** field, but the parameter to the `define` tag in the content. When creating a template you can omit `define` entirely and it will be added automatically with the same value as the **Name** field. It's recommended to use the same name for `define` and the **Name** field to avoid confusion.
<img src="/static/img/docs/alerting/unified/templates-create-8-0.png" width="600px">
### Edit a template
1. In the Grafana side bar, hover your cursor over the **Alerting** (bell) icon and then click **Contact points**.
1. Find the template you want to edit in the templates table and click the **pen icon** on the right side.
1. Make any changes and click **Save template** button at the bottom of the page.
### Delete a template
1. In the Grafana side bar, hover your cursor over the **Alerting** (bell) icon and then click **Contact points**.
1. Find the template you want to delete in the templates table and click the **trash can icon** on the right side.
1. A confirmation dialog will open. Click **Yes, delete**.
**Note** You are not prevented from deleting templates that are in use somewhere in contact points or other templates. Be careful!
@@ -51,23 +53,23 @@ Grafana alerting UI allows you to configure templates for the Grafana managed al
To use a template:
Enter `{{ template "templatename" . }}` into a contact point field, where `templatename` is the `define` parameter of a template.
<img src="/static/img/docs/alerting/unified/contact-points-use-template-8-0.png" width="600px">
### Template examples
Here is an example of a template to render a single alert:
```
{{ define "alert" }}
[{{.Status}}] {{ .Labels.alertname }}
Labels:
{{ range .Labels.SortedPairs }}
{{ .Name }}: {{ .Value }}
{{ end }}
{{ if gt (len .Annotations) 0 }}
Annotations:
{{ range .Annotations.SortedPairs }}
@@ -85,6 +87,7 @@ Here is an example of a template to render a single alert:
```
Template to render entire notification message:
```
{{ define "message" }}
{{ if gt (len .Alerts.Firing) 0 }}
@@ -100,6 +103,6 @@ Template to render entire notification message:
## Manage templates for an external Alertmanager
Grafana alerting UI supports managing external Alertmanager configuration. Once you add an [Alertmanager data source]({{< relref "../../../datasources/alertmanager.md" >}}), a dropdown displays at the top of the page, allowing you to select either `Grafana` or an external Alertmanager data source.
{{< figure max-width="40%" src="/static/img/docs/alerting/unified/contact-points-select-am-8-0.gif" caption="Select Alertmanager" >}}

View File

@@ -3,41 +3,40 @@ title = "Template data"
keywords = ["grafana", "alerting", "guide", "contact point", "templating"]
+++
# Template data
Template data is passed on to [message templates]({{< relref "./_index.md" >}}) as well as sent as payload to webhook pushes.
| Name              | Type     | Notes                                                                                                                  |
| ----------------- | -------- | ---------------------------------------------------------------------------------------------------------------------- |
| Receiver          | string   | Name of the contact point that the notification is being sent to.                                                       |
| Status            | string   | `firing` if at least one alert is firing, otherwise `resolved`.                                                         |
| Alerts            | Alert    | List of alert objects that are included in this notification (see below).                                               |
| GroupLabels       | KeyValue | Labels these alerts were grouped by.                                                                                     |
| CommonLabels      | KeyValue | Labels common to all the alerts included in this notification.                                                          |
| CommonAnnotations | KeyValue | Annotations common to all the alerts included in this notification.                                                     |
| ExternalURL       | string   | Back link to the Grafana that sent the notification. If using external Alertmanager, back link to this Alertmanager.    |
The `Alerts` type exposes functions for filtering alerts:
- `Alerts.Firing` returns a list of firing alerts.
- `Alerts.Resolved` returns a list of resolved alerts.
## Alert
| Name         | Type      | Notes                                                                                                                                           |
| ------------ | --------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| Status       | string    | `firing` or `resolved`.                                                                                                                            |
| Labels       | KeyValue  | A set of labels attached to the alert.                                                                                                             |
| Annotations  | KeyValue  | A set of annotations attached to the alert.                                                                                                        |
| StartsAt     | time.Time | Time the alert started firing.                                                                                                                     |
| EndsAt       | time.Time | Only set if the end time of an alert is known. Otherwise set to a configurable timeout period from the time since the last alert was received.    |
| GeneratorURL | string    | A back link to Grafana or external Alertmanager.                                                                                                   |
| SilenceURL   | string    | Link to the Grafana silence form, with labels for this alert pre-filled. Only for Grafana managed alerts.                                          |
| DashboardURL | string    | Link to the Grafana dashboard, if the alert rule belongs to one. Only for Grafana managed alerts.                                                  |
| PanelURL     | string    | Link to the Grafana dashboard panel, if the alert rule belongs to one. Only for Grafana managed alerts.                                            |
| Fingerprint  | string    | Fingerprint that can be used to identify the alert.                                                                                                |
| ValueString  | string    | A string that contains the labels and value of each reduced expression in the alert.                                                               |
## KeyValue
@@ -54,23 +53,23 @@ Here is an example containing two annotations:
In addition to direct access of data (labels and annotations) stored as KeyValue, there are also methods for sorting, removing and transforming.
| Name        | Arguments | Returns                                  | Notes                                                        |
| ----------- | --------- | ---------------------------------------- | ------------------------------------------------------------ |
| SortedPairs |           | Sorted list of key & value string pairs  |                                                               |
| Remove      | []string  | KeyValue                                 | Returns a copy of the Key/Value map without the given keys.  |
| Names       |           | []string                                 | List of label names                                           |
| Values      |           | []string                                 | List of label values                                          |
## Functions
Some functions to transform values are also available, along with [default functions provided by Go templating](https://golang.org/pkg/text/template/#hdr-Functions).
| Name         | Arguments                    | Returns                                                                                                        |
| ------------ | ---------------------------- | --------------------------------------------------------------------------------------------------------------- |
| title        | string                       | Capitalizes first character of each word.                                                                         |
| toUpper      | string                       | Converts all characters to upper case.                                                                            |
| match        | pattern, string              | Match a string using RegExp.                                                                                      |
| reReplaceAll | pattern, replacement, string | RegExp substitution, unanchored.                                                                                  |
| join         | string, []string             | Concatenates the elements of the second argument to create a single string. First argument is the separator.     |
| safeHtml     | string                       | Marks string as HTML, not requiring auto-escaping.                                                                |
| stringSlice  | ...string                    | Returns passed strings as slice of strings.                                                                       |

View File

@@ -20,7 +20,7 @@ To access notification policy editing page, In the Grafana side bar, hover your
### Edit root notification policy
1. Click **edit** button on the top right of the root policy box.
1. Make changes and click **save** button.
### Add new specific policy
@@ -32,7 +32,6 @@ To add a nested policy to an existing specific policy, expand the parent policy
To edit a specific policy, find it in the specific routing table and click **Edit** button. Make your changes and click **Save policy**.
### Root policy fields
- **Default contact point -** The [contact point]({{< relref "./contact-points.md" >}}) to send notifications to that did not match any specific policy.
@@ -44,16 +43,14 @@ Group timing options
- **Group interval -** How long to wait before sending a notification when an alert has been added to a group for which there has already been a notification. Default is 5 minutes.
- **Repeat interval -** How long to wait before re-sending a notification after one has already been sent and no new alerts were added to the group. Default is 4 hours.
### Specific policy fields
- **Contact point -** The [contact point]({{< relref "./contact-points.md" >}}) to send notification to if alert matched this specific policy but did not match any of its nested policies, or there were no nested specific policies.
- **Matching labels -** Rules for matching alert labels. See ["How label matching works"](#how-label-matching-works) below for details.
- **Continue matching subsequent sibling nodes -** If not enabled and an alert matches this policy but not any of its nested policies, matching will stop and a notification will be sent to the contact point defined on this policy. If enabled, the notification will be sent but the alert will continue matching subsequent siblings of this policy, thus sending more than one notification. Use this if, for example, you want to send a notification to a catch-all contact point as well as to one or more specific contact points handled by subsequent policies.
- **Override grouping -** Toggle if you want to override grouping for this policy. If toggled, you will be able to specify grouping in the same way as for the root policy described above. If not toggled, root policy grouping will be used.
- **Override group timings -** Toggle if you want to override group timings for this policy. If toggled, you will be able to specify group timings in the same way as for the root policy described above. If not toggled, root policy group timings will be used.
### How label matching works
A policy will match an alert if the alert's labels match all of the "Matching Labels" specified on the policy.
@@ -63,15 +60,14 @@ A policy will match an alert if alert's labels match all of the "Matching Labels
- The **Regex** checkbox specifies if the inputted **Value** should be matched against labels as a regular expression. The regular expression is always anchored. If not selected it is an exact string match.
- The **Equal** checkbox specifies if the match should include alert instances that match or do not match. If not checked, the policy matches alert instances that _do not_ match.
## Example setup
One usage example would be:
- Create a "default" contact point for most alerts with a non-invasive contact point type, like a Slack message, and set it on the root policy.
- Edit the root policy grouping to group alerts by `cluster`, `namespace` and `alertname` so you get a notification per alert rule and specific k8s cluster & namespace.
- Create a specific route for alerts coming from the development cluster with an appropriate contact point.
- Create a specific route for alerts with "critical" severity with a more invasive contact point type, like a PagerDuty notification.
- Create specific routes for particular teams that handle their own on-duty rotations.
![Notification policies screenshot](/static/img/docs/alerting/unified/notification-policies-8-0.png 'Notification policies screenshot')

View File

@@ -8,11 +8,11 @@ weight = 128
Setting the `ngalert` feature toggle enables the new Grafana 8 alerting system.
> **Note:** We recommend that you backup Grafana's database before enabling this feature. If you are using PostgreSQL as the backend data source, then the minimum required version is 9.5.
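For reference, a minimal sketch of enabling the toggle through the configuration file (the path shown is a typical package-install location and may differ on your system):

```bash
# Append the feature toggle to grafana.ini, then restart Grafana.
cat <<'EOF' >> /etc/grafana/grafana.ini

[feature_toggles]
enable = ngalert
EOF
```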
At startup, when [the feature toggle is enabled]({{< relref "../../administration/configuration.md">}}#feature_toggles), the legacy Grafana dashboard alerting is disabled and existing dashboard alerts are migrated into a format that is compatible with the Grafana 8 alerting system. You can view these migrated rules, alongside any new alerts you create after the migration, from the Alerting page of your Grafana instance.
> **Note:** Since the new system stores the notification log and silences on disk, we require the use of persistent disks for using Grafana 8 alerts. Otherwise, the silences and notification log will get lost on a restart, and you might get unwanted or duplicate notifications.
Read and write access to dashboard alerts in Grafana versions 7 and earlier were governed by the dashboard and folder permissions under which the alerts were stored. In Grafana 8, alerts are stored in folders and inherit the permissions of those folders. During the migration, dashboard alert permissions are matched to the new rules permissions as follows:
@@ -20,7 +20,7 @@ Read and write access to dashboard alerts in Grafana versions 7 and earlier were
- If there are no dashboard permissions and the dashboard is under a folder, then the rule is linked to this folder and inherits its permissions.
- If there are no dashboard permissions and the dashboard is under the General folder, then the rule is linked to the `General Alerting` folder and the rule inherits the default permissions.
During beta, the Grafana 8 alerting system can retrieve rules from all available Prometheus, Loki, and Alertmanager data sources. It might not be able to fetch rules from all other supported data sources at this time.
Notification channels are also migrated to an Alertmanager configuration with the appropriate routes and receivers. Default notification channels are added as contact points to the default route. Notification channels not associated with any dashboard alert go to the `autogen-unlinked-channel-recv` route.
@@ -28,6 +28,7 @@ Since `Hipchat` and `Sensu` are discontinued, they are not migrated to the new a
Finally, silences (expiring after one year) are created for all paused dashboard alerts.
## Disabling Grafana 8 Alerting after migration
To disable Grafana 8 Alerting, remove or disable the `ngalert` feature toggle. Dashboard alerts will be re-enabled and any alerts created during or after the migration are deleted.
> **Note:** Any alerting rules created in the Grafana 8 Alerting system will be lost when migrating back to dashboard alerts.

View File

@@ -23,7 +23,7 @@ To add a silence:
1. Click the **New Silence** button.
1. Select the start and end date in **Silence start and end** to indicate when the silence should go into effect and expire.
1. Optionally, update the **Duration** to alter the time for the end of silence in the previous step to correspond to the start plus the duration.
1. Enter one or more _Matching Labels_ by filling out the **Name** and **Value** fields. Matchers determine which rules the silence will apply to.
1. Enter a **Comment**.
1. Enter the name of the owner in **Creator**.
1. Click **Create**.
@@ -46,12 +46,12 @@ Alert instances that have labels that match all of the "Matching Labels" specifi
## Manage silences for an external Alertmanager
Grafana alerting UI supports managing external Alertmanager silences. Once you add an [Alertmanager data source]({{< relref "../../datasources/alertmanager.md" >}}), a dropdown displays at the top of the page where you can select either `Grafana` or an external Alertmanager as your data source.
## Create a URL to silence form with defaults filled in
When linking to the silence form, you can provide default matching labels and a comment via the `matchers` and `comment` query parameters. `matchers` expects one or more matching labels of type `[label][operator][value]` joined by a comma. `operator` can be one of `=` (equals, not regex), `!=` (not equals, not regex), `=~` (equals, regex), `!~` (not equals, regex).
For example, to link to the silence form with matching labels `severity=critical` & `cluster!~europe-.*` and comment `Silence critical EU alerts`, create a URL `https://mygrafana/alerting/silence/new?matchers=severity%3Dcritical%2Ccluster!~europe-.*&comment=Silence%20critical%20EU%20alerts`.
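A small sketch of assembling such a URL from its parts (the host, matchers, and comment are placeholders and must already be URL-encoded, as in the example above):

```bash
# Build a pre-filled silence URL; paste the printed link into a browser.
GRAFANA_URL='https://mygrafana'
MATCHERS='severity%3Dcritical%2Ccluster!~europe-.*'
COMMENT='Silence%20critical%20EU%20alerts'
echo "${GRAFANA_URL}/alerting/silence/new?matchers=${MATCHERS}&comment=${COMMENT}"
```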
To link to a new silence page for an [external Alertmanager]({{< relref "../../datasources/alertmanager.md" >}}), add an `alertmanager` query parameter with the Alertmanager data source name.

View File

@@ -12,14 +12,14 @@ Here is a table showing all supported authentication providers and the features
See also, [Grafana Authentication]({{< relref "grafana.md" >}}).
| Provider                                                          | Support | Role mapping | Team sync<br> _(Enterprise only)_ | Active sync<br> _(Enterprise only)_ |
| ----------------------------------------------------------------- | :-----: | :----------: | :-------------------------------: | :---------------------------------: |
| [Auth Proxy]({{< relref "auth-proxy.md" >}})                      | v2.1+   | -            | v6.3+                             | -                                   |
| [Azure AD OAuth]({{< relref "azuread.md" >}})                     | v6.7+   | v6.7+        | v6.7+                             | -                                   |
| [Generic OAuth]({{< relref "generic-oauth.md" >}})                | v4.0+   | v6.5+        | -                                 | -                                   |
| [GitHub OAuth]({{< relref "github.md" >}})                        | v2.0+   | -            | v6.3+                             | -                                   |
| [GitLab OAuth]({{< relref "gitlab.md" >}})                        | v5.3+   | -            | v6.4+                             | -                                   |
| [Google OAuth]({{< relref "google.md" >}})                        | v2.0+   | -            | -                                 | -                                   |
| [LDAP]({{< relref "ldap.md" >}})                                  | v2.1+   | v2.1+        | v5.3+                             | v6.3+                               |
| [Okta OAuth]({{< relref "okta.md" >}})                            | v7.0+   | v7.0+        | v7.0+                             | -                                   |
| [SAML]({{< relref "../enterprise/saml.md" >}}) (Enterprise only)  | v6.3+   | v7.0+        | v7.0+                             | -                                   |

View File

@@ -73,7 +73,6 @@ I'll demonstrate how to use Apache for authenticating users. In this example w
In this example we use Apache as a reverse proxy in front of Grafana. Apache handles the authentication of users before forwarding requests to the Grafana backend service.
#### Apache configuration
```bash
@@ -107,11 +106,11 @@ In this example we use Apache as a reverse proxy in front of Grafana. Apache han
- We use a **\<proxy>** configuration block for applying our authentication rules to every proxied request. These rules include requiring basic authentication where user:password credentials are stored in the **/etc/apache2/grafana_htpasswd** file. This file can be created with the `htpasswd` command.
- The next part of the configuration is the tricky part. We use Apache's rewrite engine to create our **X-WEBAUTH-USER header**, populated with the authenticated user.
- **RewriteRule .\* - [E=PROXY_USER:%{LA-U:REMOTE_USER}, NS]**: This line is a little bit of magic. For every request, it uses the rewrite engine's look-ahead (LA-U) feature to determine what the REMOTE_USER variable would be set to after processing the request, and assigns the result to the variable PROXY_USER. This is necessary because the REMOTE_USER variable is not available to the RequestHeader function.
- **RequestHeader set X-WEBAUTH-USER “%{PROXY_USER}e”**: With the authenticated username now stored in the PROXY_USER variable, we create a new HTTP request header that will be sent to our backend Grafana containing the username.
- The **RequestHeader unset Authorization** removes the Authorization header from the HTTP request before it is forwarded to Grafana. This ensures that Grafana does not try to authenticate the user using these credentials (BasicAuth is a supported authentication handler in Grafana).
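Taken together, the directives described in the bullets above amount to something like the following condensed sketch. It is not a complete virtual host configuration, and the `RewriteEngine On` line is an assumption about the surrounding setup:

```bash
# Enable the rewrite engine (assumed to be needed in this context)
RewriteEngine On
# Look ahead (LA-U) to resolve the authenticated user and store it in PROXY_USER
RewriteRule .* - [E=PROXY_USER:%{LA-U:REMOTE_USER},NS]
# Forward the username to Grafana in the auth proxy header
RequestHeader set X-WEBAUTH-USER "%{PROXY_USER}e"
# Strip the basic auth credentials so Grafana does not try to use them itself
RequestHeader unset Authorization
```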
@@ -204,15 +203,15 @@ ProxyPassReverse / http://grafana:3000/
- Create a htpasswd file. We create a new user **anthony** with the password **password**
```bash
htpasswd -bc htpasswd anthony password
```
- Launch the httpd container using our custom httpd.conf and our htpasswd file. The container will listen on port 80, and we create a link to the **grafana** container so that this container can resolve the hostname **grafana** to the Grafana container's IP address.
```bash
docker run -i -p 80:80 --link grafana:grafana -v $(pwd)/httpd.conf:/usr/local/apache2/conf/httpd.conf -v $(pwd)/htpasswd:/tmp/htpasswd httpd:2.4
```
### Use Grafana
@@ -224,7 +223,7 @@ With our Grafana and Apache containers running, you can now connect to http://lo
With Team Sync, it's possible to set up synchronization between teams in your authentication provider and Grafana. You can send Grafana values as part of an HTTP header and have Grafana map them to your team structure. This allows you to put users into specific teams automatically.
To support the feature, auth proxy allows optional headers to map additional user attributes. The specific attribute to support team sync is `Groups`.
```bash
# Optionally define more headers to sync other user attributes
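# For example, a sketch of a header mapping used for team sync (the header name is just a convention):
headers = "Groups:X-WEBAUTH-GROUPS"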
@@ -293,7 +292,6 @@ With this, the user `leonard` will be automatically placed into the Loki team as
[Learn more about Team Sync]({{< relref "team-sync.md" >}})
## Login token and session cookie
With `enable_login_token` set to `true` Grafana will, after successful auth proxy header validation, assign the user

View File

@@ -42,49 +42,49 @@ To enable the Azure AD OAuth2 you must register your application with Azure AD.
- Define the required Application Role values for Grafana: Viewer, Editor, Admin. Otherwise, all users will have the Viewer role.
- Every role requires a unique ID.
- Generate the unique ID on Linux with `uuidgen`, and on Windows through Microsoft
PowerShell with `New-Guid`.
- Include the unique ID in the configuration file:
```json
"appRoles": [
{
"allowedMemberTypes": [
"User"
],
"description": "Grafana admin Users",
"displayName": "Grafana Admin",
"id": "SOME_UNIQUE_ID",
"isEnabled": true,
"lang": null,
"origin": "Application",
"value": "Admin"
},
{
"allowedMemberTypes": [
"User"
],
"description": "Grafana read only Users",
"displayName": "Grafana Viewer",
"id": "SOME_UNIQUE_ID",
"isEnabled": true,
"lang": null,
"origin": "Application",
"value": "Viewer"
},
{
"allowedMemberTypes": [
"User"
],
"description": "Grafana Editor Users",
"displayName": "Grafana Editor",
"id": "SOME_UNIQUE_ID",
"isEnabled": true,
"lang": null,
"origin": "Application",
"value": "Editor"
}
],
```
1. Go to **Azure Active Directory** and then to **Enterprise Applications**. Search for your application and click on it.
@@ -139,7 +139,7 @@ allowed_domains = mycompany.com mycompany.org
### Team Sync (Enterprise only)
> Only available in Grafana Enterprise v6.7+
With Team Sync you can map your Azure AD groups to teams in Grafana so that your users will automatically be added to
the correct teams.

View File

@@ -8,6 +8,7 @@ weight = 500
# Generic OAuth authentication
You can configure many different OAuth2 authentication services with Grafana using the generic OAuth2 feature. Examples:
- [Generic OAuth authentication](#generic-oauth-authentication)
- [Set up OAuth2 with Auth0](#set-up-oauth2-with-auth0)
- [Set up OAuth2 with Bitbucket](#set-up-oauth2-with-bitbucket)
@@ -45,6 +46,7 @@ tls_client_ca =
Set `api_url` to the resource that returns [OpenID UserInfo](https://connect2id.com/products/server/docs/api/userinfo) compatible information.
You can also specify the SSL/TLS configuration used by the client.
- Set `tls_client_cert` to the path of the certificate.
- Set `tls_client_key` to the path containing the key.
- Set `tls_client_ca` to the path containing a trusted certificate authority list.
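As a minimal sketch, the three TLS client settings mentioned above could look like this in the `[auth.generic_oauth]` section (the file paths are placeholders, not defaults):

```bash
[auth.generic_oauth]
# Client certificate and key presented to the OAuth provider
tls_client_cert = /etc/grafana/oauth_client.crt
tls_client_key = /etc/grafana/oauth_client.key
# Trusted certificate authority list used to verify the provider
tls_client_ca = /etc/grafana/oauth_ca.crt
```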
@@ -57,7 +59,7 @@ Grafana will attempt to determine the user's e-mail address by querying the OAut
1. Check for the presence of an e-mail address via the `email` field encoded in the OAuth `id_token` parameter.
1. Check for the presence of an e-mail address using the [JMESPath](http://jmespath.org/examples.html) specified via the `email_attribute_path` configuration option. The JSON used for the path lookup is the HTTP response obtained from querying the UserInfo endpoint specified via the `api_url` configuration option.
**Note**: Only available in Grafana v6.4+.
1. Check for the presence of an e-mail address in the `attributes` map encoded in the OAuth `id_token` parameter. By default Grafana will perform a lookup into the attributes map using the `email:primary` key, however, this is configurable and can be adjusted by using the `email_attribute_name` configuration option.
1. Query the `/emails` endpoint of the OAuth provider's API (configured with `api_url`) and check for the presence of an e-mail address marked as a primary address.
1. If no e-mail address is found in steps (1-4), then the e-mail address of the user is set to the empty string.
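For steps 2 and 3 above, the relevant settings are `email_attribute_path` and `email_attribute_name`. A sketch with illustrative values (the JMESPath shown is an example of the shape your UserInfo response might have, not something Grafana requires):

```bash
[auth.generic_oauth]
# Step 2: JMESPath evaluated against the UserInfo response returned by api_url
email_attribute_path = contact.email
# Step 3: key looked up in the id_token attributes map (email:primary is the default)
email_attribute_name = email:primary
```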
@@ -86,28 +88,30 @@ You can set the user's display name with JMESPath using the `name_attribute_path
## Set up OAuth2 with Auth0
1. Create a new Client in Auth0
- Name: Grafana
- Type: Regular Web Application
1. Go to the Settings tab and set:
- Allowed Callback URLs: `https://<grafana domain>/login/generic_oauth`
1. Click Save Changes, then use the values at the top of the page to configure Grafana:
```bash
[auth.generic_oauth]
enabled = true
allow_sign_up = true
team_ids =
allowed_organizations =
name = Auth0
client_id = <client id>
client_secret = <client secret>
scopes = openid profile email
auth_url = https://<domain>/authorize
token_url = https://<domain>/oauth/token
api_url = https://<domain>/userinfo
```
## Set up OAuth2 with Bitbucket
@@ -144,53 +148,57 @@ allowed_organizations =
1. Configure Grafana as follows:
```bash
[auth.generic_oauth]
name = Centrify
enabled = true
allow_sign_up = true
client_id = <OpenID Connect Client ID from Centrify>
client_secret = <your generated OpenID Connect Client Secret>
scopes = openid profile email
auth_url = https://<your domain>.my.centrify.com/OAuth2/Authorize/<Application ID>
token_url = https://<your domain>.my.centrify.com/OAuth2/Token/<Application ID>
api_url = https://<your domain>.my.centrify.com/OAuth2/UserInfo/<Application ID>
```
## Set up OAuth2 with OneLogin
1. Create a new Custom Connector with the following settings:
- Name: Grafana
- Sign On Method: OpenID Connect
- Redirect URI: `https://<grafana domain>/login/generic_oauth`
- Signing Algorithm: RS256
- Login URL: `https://<grafana domain>/login/generic_oauth`
then:
1. Add an App to the Grafana Connector:
- Display Name: Grafana
then:
1. Under the SSO tab on the Grafana App details page you'll find the Client ID and Client Secret.
Your OneLogin Domain will match the URL you use to access OneLogin.
Configure Grafana as follows:
```bash
[auth.generic_oauth]
name = OneLogin
enabled = true
allow_sign_up = true
client_id = <client id>
client_secret = <client secret>
scopes = openid email name
auth_url = https://<onelogin domain>.onelogin.com/oidc/2/auth
token_url = https://<onelogin domain>.onelogin.com/oidc/2/token
api_url = https://<onelogin domain>.onelogin.com/oidc/2/me
team_ids =
allowed_organizations =
```
## JMESPath examples
@@ -205,6 +213,7 @@ If  the`role_attribute_path` property does not return a role, then the user is
In the following example, the user gets the `Editor` role when authenticating. The value of the `role` property becomes the resulting role, provided it is a valid Grafana role, i.e. `Viewer`, `Editor`, or `Admin`.
Payload:
```json
{
...
@@ -214,6 +223,7 @@ Payload:
```
Config:
```bash
role_attribute_path = role
```
@@ -223,6 +233,7 @@ role_attribute_path = role
In the following example, the user gets the `Admin` role when authenticating because they have the role `admin`. A user with the role `editor` gets the `Editor` role; otherwise, the user gets `Viewer`.
Payload:
```json
{
...
@@ -239,14 +250,14 @@ Payload:
```
Config:
```bash
role_attribute_path = contains(info.roles[*], 'admin') && 'Admin' || contains(info.roles[*], 'editor') && 'Editor' || 'Viewer'
```
### Groups mapping
> Available in Grafana Enterprise v8.1 and later versions.
With Team Sync you can map your Generic OAuth groups to teams in Grafana so that the users are automatically added to the correct teams.
@@ -261,6 +272,7 @@ groups_attribute_path = info.groups
```
Payload:
```json
{
...
@@ -274,4 +286,4 @@ Payload:
},
...
}
```

View File

@@ -11,7 +11,7 @@ To enable the GitHub OAuth2 you must register your application with GitHub. GitH
## Configure GitHub OAuth application
You need to create a GitHub OAuth application (you will find this under the GitHub
settings page). When you create the application you will need to specify
a callback URL. Specify this as callback:
@@ -96,7 +96,7 @@ allowed_organizations = github google
### Team Sync (Enterprise only)
> Only available in Grafana Enterprise v6.3+
With Team Sync you can map your GitHub org teams to teams in Grafana so that your users will automatically be added to
the correct teams.

View File

@@ -12,7 +12,7 @@ To enable GitLab OAuth2 you must register the application in GitLab. GitLab will
## Create GitLab OAuth keys
You need to [create a GitLab OAuth application](https://docs.gitlab.com/ce/integration/oauth_provider.html).
Choose a descriptive _Name_, and use the following _Redirect URI_:
```
https://grafana.example.com/login/gitlab
@@ -26,12 +26,12 @@ instance, if you access Grafana at `http://203.0.113.31:3000`, you should use
http://203.0.113.31:3000/login/gitlab
```
Finally, select _read_api_ as the _Scope_ and submit the form. Note that if you're
not going to use GitLab groups for authorization (i.e. not setting
`allowed_groups`, see below), you can select _read_user_ instead of _read_api_ as
the _Scope_, thus giving a more restricted access to your GitLab API.
You'll get an _Application Id_ and a _Secret_ in return; we'll call them
`GITLAB_APPLICATION_ID` and `GITLAB_SECRET` respectively for the rest of this
section.
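With those two values in hand, a minimal `[auth.gitlab]` section looks roughly like the sketch below, assuming the public `gitlab.com` endpoints (substitute your own hostname if you run a self-managed instance):

```bash
[auth.gitlab]
enabled = true
allow_sign_up = true
client_id = GITLAB_APPLICATION_ID
client_secret = GITLAB_SECRET
scopes = read_api
auth_url = https://gitlab.com/oauth/authorize
token_url = https://gitlab.com/oauth/token
api_url = https://gitlab.com/api/v4
```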
@@ -63,7 +63,7 @@ If you use your own instance of GitLab instead of `gitlab.com`, adjust
hostname with your own.
With `allow_sign_up` set to `false`, only existing users will be able to login
using their GitLab account, but with `allow_sign_up` set to `true`, _any_ user
who can authenticate on GitLab will be able to login on your Grafana instance;
if you use the public `gitlab.com`, it means anyone in the world would be able
to login on your Grafana instance.
@@ -78,7 +78,6 @@ groups](https://docs.gitlab.com/ce/user/group/index.html), set `allowed_groups`
to a comma- or space-separated list of groups. For instance, if you want to
only give access to members of the `example` group, set
```ini
allowed_groups = example
```

View File

@@ -20,7 +20,7 @@ These short-lived tokens are rotated each `token_rotation_interval_minutes` for
An active authenticated user that gets its token rotated will extend the `login_maximum_inactive_lifetime_duration` time from "now" that Grafana will remember the user.
This means that a user can close their browser and come back before `now + login_maximum_inactive_lifetime_duration` and still be authenticated.
This is true as long as the time since the user logged in is less than `login_maximum_lifetime_duration`.
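These three settings live in the `[auth]` section of the Grafana configuration file; the values below are only illustrative:

```bash
[auth]
# How often tokens are rotated for active users (minutes)
token_rotation_interval_minutes = 10
# How long a user can be inactive before having to log in again
login_maximum_inactive_lifetime_duration = 7d
# Hard upper bound on session age, counted from login
login_maximum_lifetime_duration = 30d
```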
#### Remote logout

View File

@@ -8,13 +8,15 @@ weight = 250
# JWT authentication
You can configure Grafana to accept a JWT token provided in the HTTP header. The token is verified using any of the following:
- PEM-encoded key file
- JSON Web Key Set (JWKS) in a local file
- JWKS provided by the configured JWKS endpoint
## Enable JWT
To use JWT authentication:
1. Enable JWT in the [main config file]({{< relref "../administration/configuration.md" >}}).
1. Specify the header name that contains a token.
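A minimal `[auth.jwt]` sketch covering those two steps might look like this; the header name, claims, and key locations are examples, not required values:

```bash
[auth.jwt]
enabled = true
# Header that carries the token
header_name = X-JWT-Assertion
# Claims mapped to the Grafana user
username_claim = sub
email_claim = email
# Verify tokens with one of: a PEM key file, a local JWKS file, or a JWKS endpoint
# key_file = /etc/grafana/jwt.pem
# jwk_set_file = /etc/grafana/jwks.json
jwk_set_url = https://idp.example.com/.well-known/jwks.json
```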

View File

@@ -45,6 +45,7 @@ Depending on which LDAP server you're using and how that's configured your Grafa
See [configuration examples](#configuration-examples) for more information.
**LDAP specific configuration file (ldap.toml) example:**
```bash
[[servers]]
# Ldap server host (specify multiple hosts space separated)
@@ -104,12 +105,11 @@ Within this view, you'll be able to see which LDAP servers are currently reachab
{{< figure src="/static/img/docs/ldap_debug.png" class="docs-image--no-shadow" max-width="600px" >}}
To use the debug view:
1. Type the username of a user that exists within any of your LDAP server(s)
1. Then, press "Run"
1. If the user is found within any of your LDAP instances, the mapping information is displayed
{{< figure src="/static/img/docs/ldap_debug_mapping_testing.png" class="docs-image--no-shadow" max-width="600px" >}}
@@ -138,6 +138,7 @@ In this case you skip providing a `bind_password` and instead provide a `bind_dn
The search filter and search bases settings are still needed to perform the LDAP search to retrieve the other LDAP information (like LDAP groups and email).
### POSIX schema
If your LDAP server does not support the memberOf attribute add these options:
```bash
@@ -151,13 +152,14 @@ group_search_filter_user_attribute = "uid"
### Group Mappings
In `[[servers.group_mappings]]` you can map an LDAP group to a Grafana organization and role. These will be synced every time the user logs in, with LDAP being
the authoritative source. So, if you change a user's role in the Grafana Org. Users page, this change will be reset the next time the user logs in. If you
change the LDAP groups of a user, the change will take effect the next time the user logs in.
The first group mapping that an LDAP user is matched to will be used for the sync. If you have LDAP users that fit multiple mappings, the topmost mapping in the TOML configuration will be used.
**LDAP specific configuration file (ldap.toml) example:**
```bash
[[servers]]
# other settings omitted for clarity
@@ -180,12 +182,12 @@ group_dn = "*"
org_role = "Viewer"
```
| Setting | Required | Description | Default |
| --------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------- |
| `group_dn` | Yes | LDAP distinguished name (DN) of LDAP group. If you want to match all (or no LDAP groups) then you can use wildcard (`"*"`) |
| `org_role` | Yes | Assign users of `group_dn` the organization role `"Admin"`, `"Editor"` or `"Viewer"` |
| `org_id` | No | The Grafana organization database id. Setting this allows for multiple group_dn's to be assigned to the same `org_role` provided the `org_id` differs | `1` (default org id) |
| `grafana_admin` | No | When `true` makes user of `group_dn` Grafana server admin. A Grafana server admin has admin access over all organizations and users. Available in Grafana v5.3 and above | `false` |
### Nested/recursive group membership
@@ -193,6 +195,7 @@ Users with nested/recursive group membership must have an LDAP server that suppo
and configure `group_search_filter` in a way that it returns the groups the submitted username is a member of.
To configure `group_search_filter`:
- You can set `group_search_base_dns` to specify where the matching groups are defined.
- If you do not use `group_search_base_dns`, then the previously defined `search_base_dns` is used.
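For Active Directory, one common approach is a filter based on the LDAP_MATCHING_RULE_IN_CHAIN OID; the sketch below is illustrative and the base DN is a placeholder:

```bash
[[servers]]
# other settings omitted for clarity
# Return all groups the user is a (nested) member of
group_search_filter = "(member:1.2.840.113556.1.4.1941:=%s)"
group_search_filter_user_attribute = "distinguishedName"
group_search_base_dns = ["ou=groups,dc=corp,dc=local"]
```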
@@ -224,6 +227,7 @@ For troubleshooting, by changing `member_of` in `[servers.attributes]` to "dn" i
[OpenLDAP](http://www.openldap.org/) is an open source directory service.
**LDAP specific configuration file (ldap.toml):**
```bash
[[servers]]
host = "127.0.0.1"
@@ -248,6 +252,7 @@ email = "email"
Grafana does support receiving information from multiple LDAP servers.
**LDAP specific configuration file (ldap.toml):**
```bash
# --- First LDAP Server ---
@@ -300,7 +305,7 @@ org_role = "Viewer"
### Active Directory
[Active Directory](<https://technet.microsoft.com/en-us/library/hh831484(v=ws.11).aspx>) is a directory service which is commonly used in Windows environments.
Assuming the following Active Directory server setup:
@@ -309,6 +314,7 @@ Assuming the following Active Directory server setup:
- DNS name: `corp.local`
**LDAP specific configuration file (ldap.toml):**
```bash
[[servers]]
host = "10.0.0.1"
@@ -327,12 +333,10 @@ email = "mail"
# [[servers.group_mappings]] omitted for clarity
```
#### Port requirements
In the above example, SSL is enabled and an encrypted port has been configured. If your Active Directory does not support SSL, change `enable_ssl = false` and `port = 389`.
Please inspect your Active Directory configuration and documentation to find the correct settings. For more information about Active Directory and port requirements see [link](<https://technet.microsoft.com/en-us/library/dd772723(v=ws.10)>).
## Troubleshooting

View File

@@ -10,18 +10,18 @@ Grafana provides many ways to authenticate users. Some authentication integratio
Here is a table showing all supported authentication providers and the features available for them. [Team sync]({{< relref "../enterprise/team-sync.md" >}}) and [active sync]({{< relref "../enterprise/enhanced_ldap.md#active-ldap-synchronization" >}}) are only available in Grafana Enterprise.
| Provider | Support | Role mapping | Team sync<br> _(Enterprise only)_ | Active sync<br> _(Enterprise only)_ |
| ---------------------------------------------------------------- | :-----: | :----------: | :-------------------------------: | :---------------------------------: |
| [Auth Proxy]({{< relref "auth-proxy.md" >}}) | v2.1+ | - | v6.3+ | - |
| [Azure AD OAuth]({{< relref "azuread.md" >}}) | v6.7+ | v6.7+ | v6.7+ | - |
| [Generic OAuth]({{< relref "generic-oauth.md" >}}) | v4.0+ | v6.5+ | - | - |
| [GitHub OAuth]({{< relref "github.md" >}}) | v2.0+ | - | v6.3+ | - |
| [GitLab OAuth]({{< relref "gitlab.md" >}}) | v5.3+ | - | v6.4+ | - |
| [Google OAuth]({{< relref "google.md" >}}) | v2.0+ | - | - | - |
| [JWT]({{< relref "jwt.md" >}}) | v8.0+ | - | - | - |
| [LDAP]({{< relref "ldap.md" >}}) | v2.1+ | v2.1+ | v5.3+ | v6.3+ |
| [Okta OAuth]({{< relref "okta.md" >}}) | v7.0+ | v7.0+ | v7.0+ | - |
| [SAML]({{< relref "../enterprise/saml.md" >}}) (Enterprise only) | v6.3+ | v7.0+ | v7.0+ | - |
## Grafana Auth
@@ -38,7 +38,7 @@ These short-lived tokens are rotated each `token_rotation_interval_minutes` for
An active authenticated user that gets its token rotated will extend the `login_maximum_inactive_lifetime_duration` time from "now" that Grafana will remember the user.
This means that a user can close their browser and come back before `now + login_maximum_inactive_lifetime_duration` and still be authenticated.
This is true as long as the time since the user logged in is less than `login_maximum_lifetime_duration`.
#### Remote logout
@@ -138,4 +138,3 @@ URL to redirect the user to after signing out from Grafana. This can for example
[auth]
signout_redirect_url =
```

View File

@@ -19,4 +19,4 @@ This mechanism allows Grafana to remove an existing synchronized user from a tea
<div class="clearfix"></div>
> Team Sync is available in Grafana Enterprise Cloud Pro and Advanced and in Grafana Enterprise. For more information, refer to [Team sync]({{< relref "../enterprise/team-sync.md" >}}) in [Grafana Enterprise]({{< relref "../enterprise" >}}).

View File

@@ -17,15 +17,14 @@ To examine the details of an exemplar trace:
1. Place your cursor over an exemplar (highlighted star). Depending on your backend trace data source, you will see a blue button with the label `Query with <data source name>`. In the following example, the tracing data source is Tempo.
{{< figure src="/static/img/docs/basics/exemplar-details.png" class="docs-image--no-shadow" max-width= "275px" caption="Screenshot showing Exemplar details" >}}
1. Click the **Query with Tempo** option next to the `traceID` property. The trace details, including the spans within the trace are listed in a separate panel on the right.
{{< figure src="/static/img/docs/basics/exemplar-explore-view.png" class="docs-image--no-shadow" max-width= "750px" caption="Explorer view with panel showing trace details" >}}
For more information on how to drill down and analyze the trace and span details, refer to the [Analyze trace and span details](#analyze-trace-and-spans) section.
## In logs
You can also view exemplar trace details from the Loki logs in Explore. Use regex within the Derived fields links for Loki to extract the `traceID` information. Now when you expand Loki logs, you can see a `traceID` property under the **Detected fields** section. To learn more about how to extract a part of a log message into an internal or external link, refer to [using derived fields in Loki]({{< relref "../../explore/logs-integration.md" >}}).
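As a sketch, a derived field on the Loki data source could be configured with values like the following; the regex and names are hypothetical and depend entirely on your log format:

```bash
# Derived field settings (Loki data source configuration page), illustrative values:
# Name:           TraceID
# Regex:          traceID=(\w+)
# Query:          ${__value.raw}
# Internal link:  enabled, pointing at your tracing data source (for example, Tempo)
```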
@@ -40,24 +39,24 @@ To view the details of an exemplar trace:
For more information on how to drill down and analyze the trace and span details, refer to the [Analyze trace and span details](#analyze-trace-and-spans) section.
## Analyze trace and spans
This panel shows the details of the trace in different segments.
- The top segment shows the Trace ID to indicate that the query results correspond to the specific trace.
You can add more traces to the results using the `Add query` button.
- The next segment shows the entire span for the specific trace as a narrow strip. All levels of the trace, from the client all the way down to the database query, are displayed, which provides a bird's eye view of the time distribution across all layers over which the HTTP request was processed.
1. You can click within this strip view to display a magnified view of a smaller time segment within the span. This magnified view shows up in the bottom segment of the panel.
1. In the magnified view, you can expand or collapse the various levels of the trace to drill down to the specific span of interest.
For example, if the strip view shows that most of the latency was within the app layer, you can expand the trace down the app layer to investigate the problem further. To expand a particular layer of span, click the icon on the left. The same button can collapse an expanded span.
- To see the details of the span at any level, click the span itself.
This displays additional metadata associated with the span. The metadata itself is initially shown in a narrow strip but you can see more details by clicking the metadata strip.
{{< figure src="/static/img/docs/basics/exemplar-span-details.png" class="docs-image--no-shadow" max-width= "750px" caption="Span details" >}}

View File

@@ -43,7 +43,7 @@ For more information about heatmap visualization options, refer to [Heatmap]({{<
There are a number of data sources supporting histogram over time like Elasticsearch (by using a Histogram bucket
aggregation) or Prometheus (with [histogram](https://prometheus.io/docs/concepts/metric_types/#histogram) metric type
and _Format as_ option set to Heatmap). But generally, any data source could be used if it meets the requirements:
returns series with names representing bucket bound or returns series sorted by the bound in ascending order.
## Raw data vs aggregated

View File

@@ -15,7 +15,7 @@ Imagine you wanted to know how the temperature outside changes throughout the da
| 10:00 | 26°C |
| 11:00 | 27°C |
Temperature data like this is one example of what we call a _time series_—a sequence of measurements, ordered in time. Every row in the table represents one individual measurement at a specific time.
Tables are useful when you want to identify individual measurements but make it difficult to see the big picture. A more common visualization for time series is the _graph_, which instead places each measurement along a time axis. Visual representations like the graph make it easier to discover patterns and features of the data that otherwise would be difficult to see.
@@ -91,15 +91,15 @@ Here are some of the TSDBs supported by Grafana:
- [InfluxDB](https://www.influxdata.com/products/influxdb-overview/)
- [Prometheus](https://prometheus.io/)
```
weather,location=us-midwest temperature=82 1465839830100400200
| -------------------- -------------- |
| | | |
| | | |
+-----------+--------+-+---------+-+---------+
|measurement|,tag_set| |field_set| |timestamp|
+-----------+--------+-+---------+-+---------+
```
### Collecting time series data
@@ -114,10 +114,9 @@ Here are some examples of collectors:
A collector either _pushes_ data to a database or lets the database _pull_ the data from it. Both methods come with their own set of pros and cons:
| | Pros | Cons |
| ---- | ------------------------------------------------------------------------- | ------------------------------------------------------------------------ |
| Push | Easier to replicate data to multiple destinations. | The TSDB has no control over how much data gets sent. |
| Pull | Better control of how much data that gets ingested, and its authenticity. | Firewalls, VPNs or load balancers can make it hard to access the agents. |
Since it would be inefficient to write every measurement to the database, collectors pre-aggregate the data and write to the time series database at regular intervals.

View File

@@ -23,6 +23,7 @@ Keep your graphs simple and focused on answering the question that you are askin
_Cognitive load_ is basically how hard you need to think about something in order to figure it out. Make your dashboard easy to interpret. Other users and future you (when you're trying to figure out what broke at 2AM) will appreciate it.
Ask yourself:
- Can I tell what exactly each graph represents? Is it obvious, or do I have to think about it?
- If I show this to someone else, how long will it take them to figure it out? Will they get lost?
@@ -46,8 +47,8 @@ Once you have a strategy or design guidelines, write them down to help maintain
- Grafana retrieves data from a data source. A basic understanding of [data sources]({{< relref "../datasources/_index.md" >}}) in general, and of your specific data source, is important.
- Avoid unnecessary dashboard refreshing to reduce the load on the network or backend. For example, if your data changes every hour, then you don't need to set the dashboard refresh rate to 30 seconds.
- Use the left and right Y-axes when displaying time series with different units or ranges.
- Add documentation to dashboards and panels.
- To add documentation to a dashboard, add a [Text panel visualization]({{< relref "../panels/visualizations/text-panel.md" >}}) to the dashboard. Record things like the purpose of the dashboard, useful resource links, and any instructions users might need to interact with the dashboard. Check out this [Wikimedia example](https://grafana.wikimedia.org/d/000000066/resourceloader?orgId=1).
- To add documentation to a panel, [edit the panel settings]({{< relref "../panels/add-a-panel.md#edit-panel-settings" >}}) and add a description. Any text you add will appear if you hover your cursor over the small `i` in the top left corner of the panel.
- Reuse your dashboards and enforce consistency by using [templates and variables]({{< relref "../variables/_index.md" >}}).
- Be careful with stacking graph data. The visualizations can be misleading, and hide important data. We recommend turning it off in most cases.

View File

@@ -32,6 +32,6 @@ What is your dashboard maturity level? Analyze your current dashboard setup and
- In many cases copies are being made to simply customize the view by setting template parameters. This should instead be done by maintaining a link to the master dashboard and customizing the view with [URL parameters]({{< relref "../linking/data-link-variables.md" >}}).
- When you must copy a dashboard, clearly rename it and _do not_ copy the dashboard tags. Tags are important metadata for dashboards that are used during search. Copying tags can result in false matches.
- Maintain a dashboard of dashboards or cross-reference dashboards. This can be done in several ways:
- Create dashboard links, panel, or data links. Links can go to other dashboards or to external systems. For more information, refer to [Linking]({{< relref "../linking/_index.md" >}}).
- Add a [Dashboard list panel]({{< relref "../panels/visualizations/dashboard-list-panel.md" >}}). You can then customize what you see by doing tag or folder searches.
- Add a [Text panel]({{< relref "../panels/visualizations/text-panel.md" >}}) and use markdown to customize the display.

View File

@@ -48,13 +48,13 @@ Dashboards can be tagged, and the dashboard picker provides quick, searchable ac
## Rows
A _row_ is a logical divider within a dashboard. It is used to group panels together.
Rows are always 12 “units” wide. These units are automatically scaled dependent on the horizontal resolution of your browser. You can control the relative width of panels within a row by setting their specific width.
We use a unit abstraction so that Grafana looks great on all screen sizes.
> **Note:** With MaxDataPoint functionality, Grafana can show you the perfect number of data points, regardless of resolution or time range.
Collapse a row by clicking on the row title. If you save a dashboard with a row collapsed, then it saves in that state and does not load those graphs until you expand the row.

View File

@@ -41,8 +41,7 @@ can still show them if you add a new **Annotation Query** and filter by tags. Bu
### Query by tag
You can create new queries to fetch annotations from the native annotation store via the `-- Grafana --` data source by setting _Filter by_ to `Tags`.
Grafana v8.1 and later versions also support typeahead of existing tags, provided at least one tag exists.
@@ -61,6 +60,7 @@ open the dashboard settings menu, then select `Annotations`. This will open the
settings view. To create a new annotation query hit the `New` button.
<!--![](/static/img/docs/v50/annotation_new_query.png)-->
{{< figure src="/static/img/docs/v50/annotation_new_query.png" max-width="600px" >}}
Specify a name for the annotation query. This name is given to the toggle (checkbox) that will allow

View File

@@ -46,6 +46,5 @@ The Dashboard Folder Page is similar to the Manage Dashboards page and is where
Permissions can be assigned to a folder and inherited by the containing dashboards. An Access Control List (ACL) is used where
**Organization Role**, **Team** and Individual **User** can be assigned permissions. Read the
[Dashboard and Folder Permissions]({{< relref "../permissions/dashboard-folder-permissions.md" >}}) docs for more detail
on the permission system.

View File

@@ -5,7 +5,6 @@ aliases = ["/docs/grafana/latest/reference/dashboard_history/"]
weight = 100
+++
# Dashboard Version History
Whenever you save a version of your dashboard, a copy of that version is saved so that previous versions of your dashboard are never lost. A list of these versions is available by entering the dashboard settings and then selecting "Versions" in the left side menu.

View File

@@ -56,6 +56,7 @@ Dashboards exported from Grafana 3.1+ have a new json section `__inputs`
that defines what data sources and metric prefixes the dashboard uses.
Example:
```json
{
"__inputs": [
@@ -76,7 +77,6 @@ Example:
}
]
}
```
These are then referenced in the dashboard panels like this:
@@ -84,14 +84,14 @@ These are then referenced in the dashboard panels like this:
```json
{
  "rows": [
    {
      "panels": [
        {
          "type": "graph",
          "datasource": "${DS_GRAPHITE}"
        }
      ]
    }
  ]
}
```
@@ -106,4 +106,5 @@ data source. Another alternative is to open the json file in a text editor and u
to value that matches a name of your data source.
## Note
In Grafana v5.3.4+ the export modal has a new checkbox for sharing for external use (other instances). If the checkbox is not checked, then the `__inputs` section will not be included in the exported JSON file.

View File

@@ -11,9 +11,9 @@ A dashboard in Grafana is represented by a JSON object, which stores metadata of
To view the JSON of a dashboard:
1. Navigate to a dashboard.
1. In the top navigation menu, click the **Dashboard settings** (gear) icon.
1. Click **JSON Model**.
## JSON fields
@@ -53,6 +53,7 @@ When a user creates a new dashboard, a new dashboard JSON object is initialized
"links": []
}
```
Each field in the dashboard JSON is explained below with its usage:
| Name | Usage |
@@ -99,7 +100,7 @@ Panels are the building blocks of a dashboard. It consists of data source querie
The gridPos property describes the panel size and position in grid coordinates.
- `w` 1-24 (the width of the dashboard is divided into 24 columns)
- `h` In grid height units, each represents 30 pixels.
- `x` The x position, in same unit as `w`.
- `y` The y position, in same unit as `h`.

View File

@@ -5,7 +5,6 @@ aliases = ["/docs/grafana/latest/reference/playlist/"]
weight = 4
+++
# Playlist
A playlist is a list of dashboards that are displayed in a sequence. You might use a playlist to build situational awareness or to present your metrics to your team or visitors.
@@ -87,7 +86,7 @@ You can save a playlist to add it to your **Playlists** page, where you can star
1. Click **Playlists**.
1. Click on the playlist.
1. Edit the playlist.
- Ensure that your playlist has a **Name**, **Interval**, and at least one **Dashboard** added to it.
1. Click **Save**.
## Start a playlist
@@ -100,38 +99,38 @@ By default, each dashboard is displayed for the amount of time entered in the In
1. Next to the playlist you want to start, click **Start playlist**.
1. In the dropdown, select the mode you want the playlist to display in.
- **Normal mode:**
- The side menu remains visible.
- The navbar, row and panel controls appear at the top of the screen.
- **TV mode:**
- The side menu is hidden/removed.
- The navbar, row and panel controls appear at the top of the screen.
- Enabled automatically after one minute of user inactivity.
- You can enable it manually using the `d v` sequence shortcut, or by appending the parameter `?inactive` to the dashboard URL.
- You can disable it with any mouse movement or keyboard action.
- **TV mode (with auto fit panels):**
- The side menu is hidden/removed.
- The navbar, row and panel controls appear at the top of the screen.
- Dashboard panels automatically adjust to optimize space on screen.
- **Kiosk mode:**
- The side menu, navbar, row and panel controls are completely hidden/removed from view.
- You can enable it manually using the `d v` sequence shortcut after the playlist has started.
- You can disable it manually with the same shortcut.
- **Kiosk mode (with auto fit panels):**
- The side menu, navbar, row and panel controls are completely hidden/removed from view.
- Dashboard panels automatically adjust to optimize space on screen.
## Control a playlist
You can control a playlist in **Normal** or **TV** mode after it's started, using the navigation bar at the top of your screen.
| Button | Result |
| ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| Next (double-right arrow) | Advances to the next dashboard. |
| Back (left arrow) | Returns to the previous dashboard. |
| Stop (square) | Ends the playlist, and exits to the current dashboard. |
| Cycle view mode (monitor icon) | Rotates the display of the dashboards in different view modes. |
| Time range | Displays data within a time range. It can be set to display the last 5 minutes up to 5 years ago, or a custom time range, using the down arrow. |
| Refresh (circle arrow) | Reloads the dashboard, to display the current data. It can be set to reload automatically every 5 seconds to 1 day, using the drop down arrow. |
> Shortcut: Press the Esc key to stop the playlist from your keyboard.
@@ -139,11 +138,13 @@ You can control a playlist in **Normal** or **TV** mode after it's started, usin
You can share a playlist by copying the link address on the view mode you prefer, and pasting the URL to your destination.
1. From the Dashboards submenu, click **Playlists**.
1. Next to the playlist you want to share, click **Start playlist**.
1. In the dropdown, right click the view mode you prefer.
1. Click **Copy Link Address** to copy the URL to your clipboard.
Example: The URL for the first playlist on the Grafana Play site in Kiosk mode will look like this:
[https://play.grafana.org/playlists/play/1?kiosk](https://play.grafana.org/playlists/play/1?kiosk).
1. Paste the URL to your destination.

View File

@@ -5,7 +5,6 @@ aliases =["/docs/grafana/latest/reference/search/"]
weight = 5
+++
# Dashboard Search
Dashboards can be searched by the dashboard name, filtered by one (or many) tags or filtered by starred status. The dashboard search is accessed through the dashboard picker, available in the dashboard top nav area. The dashboard search can also be opened by using the shortcut `F`.
@@ -26,9 +25,10 @@ When using only a keyboard, you can use your keyboard arrow keys to navigate the
Begin typing any part of the desired dashboard names in the search bar. Search will return results for any partial string match in real-time, as you type.
Dashboard search is:
- Real-time
- _Not_ case sensitive
- Functional across stored _and_ file based dashboards.
## Filter by Tag(s)
@@ -38,6 +38,6 @@ To filter the dashboard list by tag, click on any tag appearing in the right col
Alternately, to see a list of all available tags, click the tags dropdown menu. All tags will be shown, and when a tag is selected, the dashboard search will be instantly filtered:
When using only a keyboard: `tab` to focus on the _tags_ link, `▼` down arrow key to find a tag and select with the `Enter` key.
> **Note:** When multiple tags are selected, Grafana will show dashboards that include **all**.

View File

@@ -14,19 +14,19 @@ To add a data source:
1. Move your cursor to the cog icon on the side menu which will show the configuration options.
{{< figure src="/static/img/docs/v75/sidemenu-datasource-7-5.png" max-width="150px" class="docs-image--no-shadow">}}
1. Click on **Data sources**. The data sources page opens showing a list of previously configured data sources for the Grafana instance.
1. Click **Add data source** to see a list of all supported data sources.
{{< figure src="/static/img/docs/v75/add-data-source-7-5.png" max-width="600px" class="docs-image--no-shadow">}}
1. Search for a specific data source by entering the name in the search dialog. Or you can scroll through supported data sources grouped into time series, logging, tracing and other categories.
1. Move the cursor over the data source you want to add.
{{< figure src="/static/img/docs/v75/select-data-source-7-5.png" max-width="700px" class="docs-image--no-shadow">}}
1. Click **Select**. The data source configuration page opens.

View File

@@ -10,9 +10,9 @@ weight = 1300
Grafana includes built-in support for Prometheus Alertmanager. It is presently in alpha and not accessible unless [alpha plugins are enabled in Grafana settings](https://grafana.com/docs/grafana/latest/administration/configuration/#enable_alpha). Once you add it as a data source, you can use the [Grafana alerting UI](https://grafana.com/docs/grafana/latest/alerting/) to manage silences, contact points, and notification policies. A dropdown option on these pages allows you to switch between Grafana and any configured Alertmanager data sources.
> **Note:** New in Grafana 8.0.
> **Note:** Currently only the [Cortex implementation of Prometheus alertmanager](https://cortexmetrics.io/docs/proposals/scalable-alertmanager/) is supported.
## Provision the Alertmanager data source

View File

@@ -61,7 +61,7 @@ Available Elasticsearch versions are `2.x`, `5.x`, `5.6+`, `6.0+`, `7.0+`, `7.7+
Grafana assumes that you are running the lowest possible version for a specified range. This ensures that new features or breaking changes in a future Elasticsearch release will not affect your configuration.
For example, suppose you are running Elasticsearch `7.6.1` and you selected `7.0+`. If a new feature is made available for Elasticsearch `7.5.0` or newer releases, then a `7.5+` option will be available. However, your configuration will not be affected until you explicitly select the new `7.5+` option in your settings.
### Min time interval
@@ -87,6 +87,7 @@ Enables `X-Pack` specific features and options, providing the query editor with
#### Include frozen indices
When `X-Pack enabled` is active and the configured Elasticsearch version is higher than `6.6.0`, you can configure Grafana to not ignore [frozen indices](https://www.elastic.co/guide/en/elasticsearch/reference/7.13/frozen-indices.html) when performing search requests.
### Logs
There are two parameters, `Message field name` and `Level field name`, that can optionally be configured from the data source settings page that determine
@@ -94,7 +95,7 @@ which fields will be used for log messages and log levels when visualizing logs
For example, if you're using a default setup of Filebeat for shipping logs to Elasticsearch the following configuration should work:
- **Message field name:** message
- **Level field name:** fields.level
### Data links
@@ -102,6 +103,7 @@ For example, if you're using a default setup of Filebeat for shipping logs to El
Data links create a link from a specified field that can be accessed in logs view in Explore.
Each data link configuration consists of:
- **Field -** Name of the field used by the data link.
- **URL/query -** If the link is external, then enter the full link URL. If the link is internal, then this input serves as a query for the target data source. In both cases, you can interpolate the value from the field with the `${__value.raw}` macro.
- **URL Label -** (Optional) Set a custom display label for the link. The link label defaults to the full external URL or name of the linked internal data source and is overridden by this setting.
@@ -126,7 +128,7 @@ You can control the name for time series via the `Alias` input field.
## Pipeline metrics
Some metric aggregations are called Pipeline aggregations, for example, _Moving Average_ and _Derivative_. Elasticsearch pipeline metrics require another metric to be based on. Use the eye icon next to the metric to hide metrics from appearing in the graph. This is useful for metrics you only have in the query for use in a pipeline metric.
![Pipeline aggregation editor](/static/img/docs/elasticsearch/pipeline-aggregation-editor-7-4.png)
@@ -141,7 +143,7 @@ types of template variables.
### Query variable
The Elasticsearch data source supports two types of queries you can use in the _Query_ field of _Query_ variables. The query is written using a custom JSON string.
| Query | Description |
| -------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
@@ -172,16 +174,16 @@ To keep terms in the doc count order, set the variable's Sort dropdown to **Disa
There are two syntaxes:
- `$<varname>` Example: @hostname:$hostname
- `[[varname]]` Example: @hostname:[[hostname]]
Why two ways? The first syntax is easier to read and write but does not allow you to use a variable in the middle of a word. When the _Multi-value_ or _Include all value_
options are enabled, Grafana converts the labels from plain text to a Lucene-compatible condition.
![Query with template variables](/static/img/docs/elasticsearch/elastic-templating-query-7-4.png)
In the above example, we have a Lucene query that filters documents based on the `@hostname` property using a variable named `$hostname`. It is also using
a variable in the _Terms_ group by field input box. This allows you to use a variable to quickly change how the data is grouped.
Example dashboard:
[Elasticsearch Templated Dashboard](https://play.grafana.org/d/CknOEXDMk/elasticsearch-templated?orgId=1d)
@@ -193,7 +195,7 @@ queries via the Dashboard menu / Annotations view. Grafana can query any Elastic
for annotation events.
| Name | Description |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| `Query` | You can leave the search query blank or specify a Lucene query. |
| `Time` | The name of the time field; it needs to be a date field. |
| `Time End` | Optional name of the time end field; it needs to be a date field. If set, then annotations will be marked as a region between time and time-end. |
@@ -230,11 +232,11 @@ datasources:
- name: Elastic
type: elasticsearch
access: proxy
database: '[metrics-]YYYY.MM.DD'
url: http://localhost:9200
jsonData:
interval: Daily
timeField: '@timestamp'
```
or, for logs:
@@ -246,18 +248,18 @@ datasources:
- name: elasticsearch-v7-filebeat
type: elasticsearch
access: proxy
database: '[filebeat-]YYYY.MM.DD'
url: http://localhost:9200
jsonData:
interval: Daily
timeField: '@timestamp'
esVersion: '7.0.0'
logMessageField: message
logLevelField: fields.level
dataLinks:
- datasourceUid: my_jaeger_uid # Target UID needs to be known
field: traceID
url: '$${__value.raw}' # Careful about the double "$$" because of env var expansion
```
## Amazon Elasticsearch Service


@@ -9,6 +9,7 @@ weight = 10
# Preconfigured Cloud Monitoring dashboards
The Google Cloud Monitoring data source ships with pre-configured dashboards for some of the most popular GCP services. These curated dashboards are based on similar dashboards in the GCP dashboard samples repository. See also [Using Google Cloud Monitoring in Grafana]({{< relref "./_index.md" >}}) for detailed instructions on how to add and configure the Google Cloud Monitoring data source.
## Curated dashboards
To import the curated dashboards:
@@ -19,6 +20,6 @@ To import the curated dashboards:
The data source of the newly created dashboard panels will be the one selected above. The dashboards have a template variable that is populated with the projects accessible by the configured service account every time the dashboard is loaded. After the dashboard is loaded, you can select the project you prefer from the drop-down list.
If you want to customize a dashboard, we recommend that you save it under a different name. Otherwise, it will be overwritten when a new version of the dashboard is released.
{{< figure src="/static/img/docs/google-cloud-monitoring/curated-dashboards-7-4.png" max-width= "650px" >}}


@@ -18,22 +18,22 @@ Refer to [Add a data source]({{< relref "add-a-data-source.md" >}}) for instruct
To access Graphite settings, hover your mouse over the **Configuration** (gear) icon, then click **Data Sources**, and then click the Graphite data source.
| Name | Description |
| --------------------- | ------------------------------------------------------------------------------------------------------------------------------------- |
| `Name` | The data source name. This is how you refer to the data source in panels and queries. |
| `Default` | Default data source means that it will be pre-selected for new panels. |
| `URL` | The HTTP protocol, IP, and port of your graphite-web or graphite-api install. |
| `Access` | Server (default) = URL needs to be accessible from the Grafana backend/server, Browser = URL needs to be accessible from the browser. |
| `Auth` | Refer to [Authentication]({{< relref "../auth/_index.md" >}}) for more information. |
| `Basic Auth` | Enable basic authentication to the data source. |
| `User` | User name for basic authentication. |
| `Password` | Password for basic authentication. |
| `Custom HTTP Headers` | Click **Add header** to add a custom HTTP header. |
| `Header` | Enter the custom header name. |
| `Value` | Enter the custom header value. |
| `Graphite details` |  |
| `Version` | Select your version of Graphite. |
| `Type` | Select your type of Graphite. |
Access mode controls how requests to the data source will be handled. Server should be the preferred way if nothing else is stated.
@@ -62,6 +62,7 @@ Click **Select metric** to start navigating the metric space. Once you start, yo
Click the plus icon next to **Function** to add a function. You can search for the function or select it from the menu. Once
a function is selected, it will be added and your focus will be in the text box of the first parameter.
- To edit or change a parameter, click on it and it will turn into a text box.
- To delete a function, click the function name followed by the x icon.
@@ -78,13 +79,13 @@ If you want consistent ordering, use sortByName. This can be particularly annoyi
### Nested queries
You can reference queries by the row “letter” that they're on (similar to Microsoft Excel). If you add a second query to a graph, you can reference the first query simply by typing in #A. This provides an easy and convenient way to build compounded queries.
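For example, if query A is `apps.frontend.server01.requests.count` (an illustrative metric path), a second query can scale it by referencing the row letter:

```bash
scale(#A, 100)
```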
### Avoiding many queries by using wildcards
Occasionally you might want to see multiple time series plotted on the same graph. For example, you might want to see how the CPU is being utilized on a machine. You might
initially create the graph by adding a query for each time series, such as `cpu.percent.user.g`,
`cpu.percent.system.g`, and so on. This results in _n_ queries made to the data source, which is inefficient.
To be more efficient, you can use wildcards in your search, returning all the time series in one query. For example, `cpu.percent.*.g`.
@@ -114,7 +115,7 @@ When exploring data, previously-selected tags are used to filter the remaining r
The Grafana query builder does this for you automatically when you select a tag.
> **Tip:** The regular expression search can be quite slow on high-cardinality tags, so try to use other tags to reduce the scope first.
> Starting off with a particular name/namespace can help reduce the results.
## Template variables
@@ -126,13 +127,13 @@ For more information, refer to [Variables and templates]({{< relref "../variable
Graphite 1.1 introduced tags and Grafana added support for Graphite queries with tags in version 5.0. To create a variable using tag values, use the Grafana functions `tags` and `tag_values`.
| Query | Description |
| ----------------------------------------------------------- | -------------------------------------------------------------------------------------------------- |
| `tags()` | Returns all tags. |
| `tags(server=~backend\*)` | Returns only tags that occur in series matching the filter expression. |
| `tag_values(server)` | Return tag values for the specified tag. |
| `tag_values(server, server=~backend\*)` | Returns filtered tag values that occur for the specified tag in series matching those expressions. |
| `tag_values(server, server=~backend\*, app=~${apps:regex})` | Multiple filter expressions and expressions can contain other variables. |
For more details, see the [Graphite docs on the autocomplete API for tags](http://graphite.readthedocs.io/en/latest/tags.html#auto-complete-support).
@@ -149,20 +150,21 @@ use expand function (`expand(*.servers.*)`).
The expanded query returns the full names of matching metrics. In combination with regex, it can extract any part of the metric name. By contrast, a non-expanded query only returns the last part of the metric name. It does not allow you to extract other parts of metric names.
Here are some example metrics:
- `prod.servers.001.cpu`
- `prod.servers.002.cpu`
- `test.servers.001.cpu`
The following examples show how expanded and non-expanded queries can be used to fetch specific parts of the metrics name.
| non-expanded query | results | expanded query | expanded results |
| ------------------ | ---------- | ------------------------- | ---------------------------------------------------------------- |
| `*` | prod, test | `expand(*)` | prod, test |
| `*.servers` | servers | `expand(*.servers)` | prod.servers, test.servers |
| `test.servers` | servers | `expand(test.servers)` | test.servers |
| `*.servers.*` | 001,002 | `expand(*.servers.*)` | prod.servers.001, prod.servers.002, test.servers.001 |
| `test.servers.*` | 001 | `expand(test.servers.*)` | test.servers.001 |
| `*.servers.*.cpu` | cpu | `expand(*.servers.*.cpu)` | prod.servers.001.cpu, prod.servers.002.cpu, test.servers.001.cpu |
As you can see from the results, the non-expanded query is the same as an expanded query with a regex matching the last part of the name.
@@ -170,6 +172,7 @@ You can also create nested variables that use other variables in their definitio
`apps.$app.servers.*` uses the variable `$app` in its query definition.
#### Using `__searchFilter` to filter query variable results
> Available from Grafana 6.5 and above
Using `__searchFilter` in the query field will filter the query result based on what the user types in the dropdown select box.
@@ -178,11 +181,13 @@ When nothing has been entered by the user the default value for `__searchFilter`
The example below shows how to use `__searchFilter` as part of the query field to enable searching for `server` while the user types in the dropdown select box.
Query
```bash
apps.$app.servers.$__searchFilter
```
TagValues
```bash
tag_values(server, server=~${__searchFilter:regex})
```
@@ -194,11 +199,11 @@ You can use a variable in a metric node path or as a parameter to a function.
There are two syntaxes:
- `$<varname>` Example: apps.frontend.$server.requests.count
- `${varname}` Example: apps.frontend.${server}.requests.count
Why two ways? The first syntax is easier to read and write but does not allow you to use a variable in the middle of a word. Use
the second syntax in expressions like `my.server${serverNumber}.count`.
Example:
[Graphite Templated Dashboard](https://play.grafana.org/dashboard/db/graphite-templated-nested)
@@ -242,7 +247,7 @@ datasources:
access: proxy
url: http://localhost:8080
jsonData:
graphiteVersion: '1.1'
```
## Integration with Loki


@@ -19,25 +19,26 @@ To access data source settings, hover your mouse over the **Configuration** (gea
InfluxDB data source options differ depending on which [query language](#query-languages) you select: InfluxQL or Flux.
> **Note:** Though not required, it's a good practice to append the language choice to the data source name. For example:
>
- InfluxDB-InfluxQL
- InfluxDB-Flux
### InfluxQL (classic InfluxDB query)
These options apply if you are using the InfluxQL query language. If you are using Flux, refer to [Flux support in Grafana]({{< relref "influxdb-flux.md" >}}).
| Name | Description |
| --------------------- | ----------- |
| `Name` | The data source name. This is how you refer to the data source in panels and queries. We recommend something like `InfluxDB-InfluxQL`. |
| `Default` | Default data source means that it will be pre-selected for new panels. |
| `URL` | The HTTP protocol, IP address and port of your InfluxDB API. InfluxDB API port is by default 8086. |
| `Access` | Server (default) = URL needs to be accessible from the Grafana backend/server, Browser = URL needs to be accessible from the browser. |
| `Whitelisted Cookies` | Cookies that will be forwarded to the data source. All other cookies will be deleted. |
| `Database` | The ID of the bucket you want to query from, copied from the [Buckets page](https://docs.influxdata.com/influxdb/v2.0/organizations/buckets/view-buckets/) of the InfluxDB UI. |
| `User` | The username you use to sign into InfluxDB. |
| `Password` | The token you use to query the bucket above, copied from the [Tokens page](https://docs.influxdata.com/influxdb/v2.0/security/tokens/view-tokens/) of the InfluxDB UI. |
| `HTTP mode` | How to query the database (`GET` or `POST` HTTP verb). The `POST` verb allows heavy queries that would return an error using the `GET` verb. Default is `GET`. |
| `Min time interval` | (Optional) Refer to [Min time interval]({{< relref "#min-time-interval" >}}). |
| `Max series` | (Optional) Limits the number of series/tables that Grafana processes. Lower this number to prevent abuse, and increase it if you have lots of small time series and not all are shown. Defaults to 1000. |

**Note**: Browser access is deprecated and will be removed in a future release.
@@ -51,16 +52,16 @@ For information on data source settings and using Flux in Grafana, refer to [Flu
A lower limit for the auto group by time interval. Recommended to be set to write frequency, for example `1m` if your data is written every minute.
This option can also be overridden/configured in a dashboard panel under data source options. It's important to note that this value _must_ be formatted as a number followed by a valid time identifier, e.g. `1m` (1 minute) or `30s` (30 seconds). The following time identifiers are supported:
| Identifier | Description |
| ---------- | ----------- |
| `y` | year |
| `M` | month |
| `w` | week |
| `d` | day |
| `h` | hour |
| `m` | minute |
| `s` | second |
| `ms` | millisecond |
## Query languages
@@ -104,6 +105,7 @@ Use the plus button and select Field > field to add another SELECT clause. You c
specify an asterisk (`*`) to select all fields.
### Group By
To group by a tag, click the plus icon at the end of the GROUP BY row. Pick a tag from the dropdown that appears.
You can remove the "Group By" by clicking on the `tag` and then clicking on the x icon.
@@ -119,7 +121,7 @@ You can switch to raw query mode by clicking hamburger icon and then `Switch edi
- $m = replaced with measurement name
- $measurement = replaced with measurement name
- $col = replaced with column name
- $tag_exampletag = replaced with the value of the `exampletag` tag. The syntax is `$tag_yourTagName` (must start with `$tag_`). To use your tag as an alias in the ALIAS BY field, the tag must be used to group by in the query.
- You can also use [[tag_hostname]] pattern replacement syntax. For example, in the ALIAS BY field using this text `Host: [[tag_hostname]]` would substitute in the `hostname` tag value for each legend value and an example legend value would be: `Host: server1`.
## Querying logs
@@ -147,4 +149,4 @@ An example query:
SELECT title, description from events WHERE $timeFilter ORDER BY time ASC
```
For InfluxDB, you need to enter a query like the one in the example above. The `where $timeFilter` component is required. If you only select one column, then you do not need to enter anything in the column mapping fields. The **Tags** field can be a comma-separated string.


@@ -6,18 +6,18 @@ weight = 200
# Flux query language in Grafana
Grafana supports Flux running on InfluxDB 1.8+. See [1.8 compatibility](https://github.com/influxdata/influxdb-client-go/#influxdb-18-api-compatibility) for more information and connection details.
| Name | Description |
| ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Name` | The data source name. This is how you refer to the data source in panels and queries. We recommend something like `InfluxDB-Flux`. |
| `Default` | Default data source means that it will be pre-selected for new panels. |
| `URL` | The HTTP protocol, IP address and port of your InfluxDB API. InfluxDB 2.0 API port is by default 8086. |
| `Organization` | The [Influx organization](https://v2.docs.influxdata.com/v2.0/organizations/) that will be used for Flux queries. This is also used for the `v.organization` query macro. |
| `Token` | The authentication token used for Flux queries. With Influx 2.0, use the [influx authentication token to function](https://v2.docs.influxdata.com/v2.0/security/tokens/create-token/). For influx 1.8, the token is `username:password`. |
| `Default bucket` | (Optional) The [Influx bucket](https://v2.docs.influxdata.com/v2.0/organizations/buckets/) that will be used for the `v.defaultBucket` macro in Flux queries. |
| `Min time interval` | (Optional) Refer to [Min time interval]({{< relref "#min-time-interval" >}}). |
| `Max series` | (Optional) Limits the number of series/tables that Grafana processes. Lower this number to prevent abuse, and increase it if you have lots of small time series and not all are shown. Defaults to 1000. |
## Min time interval
@@ -25,16 +25,16 @@ A lower limit for the auto group by time interval. Recommended to be set to writ
This option can also be overridden/configured in a dashboard panel under data source options. It's important to note that this value **needs** to be formatted as a
number followed by a valid time identifier, e.g. `1m` (1 minute) or `30s` (30 seconds). The following time identifiers are supported:
| Identifier | Description |
| ---------- | ----------- |
| `y` | year |
| `M` | month |
| `w` | week |
| `d` | day |
| `h` | hour |
| `m` | minute |
| `s` | second |
| `ms` | millisecond |
You can use the [Flux query and scripting language](https://www.influxdata.com/products/flux/). Grafana's Flux query editor is a text editor for raw Flux queries with Macro support.
@@ -42,13 +42,13 @@ You can use the [Flux query and scripting language](https://www.influxdata.com/p
The macros support copying and pasting from [Chronograph](https://www.influxdata.com/time-series-platform/chronograf/).
| Macro example | Description |
| ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `v.timeRangeStart` | Will be replaced by the start of the currently active time selection. For example, _2020-06-11T13:31:00Z_ |
| `v.timeRangeStop` | Will be replaced by the end of the currently active time selection. For example, _2020-06-11T14:31:00Z_ |
| `v.windowPeriod` | Will be replaced with an interval string compatible with Flux that corresponds to Grafana's calculated interval based on the time range of the active time selection. For example, _5s_ |
| `v.defaultBucket` | Will be replaced with the data source configuration's "Default Bucket" setting |
| `v.organization` | Will be replaced with the data source configuration's "Organization" setting |
For example, the following query will be interpolated as the query that follows it, with interval and time period values changing according to the active time selection:
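A minimal sketch, assuming a hypothetical `telegraf` bucket and a `cpu` measurement:

```
from(bucket: v.defaultBucket)
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> aggregateWindow(every: v.windowPeriod, fn: mean)
```

might be interpolated to something like:

```
from(bucket: "telegraf")
  |> range(start: 2020-06-11T13:31:00Z, stop: 2020-06-11T14:31:00Z)
  |> filter(fn: (r) => r._measurement == "cpu")
  |> aggregateWindow(every: 5s, fn: mean)
```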


@@ -8,19 +8,19 @@ weight = 300
Instead of hard-coding things like server, application and sensor name in your metric queries you can use variables in their place.
For more information, refer to [Templates and variables]({{< relref "../../variables/_index.md" >}}).
## Using variables in InfluxDB queries
There are two syntaxes:
`$<varname>` Example:
```sql
SELECT mean("value") FROM "logins" WHERE "hostname" =~ /^$host$/ AND $timeFilter GROUP BY time($__interval), "hostname"
```
`[[varname]]` Example:
```sql
SELECT mean("value") FROM "logins" WHERE "hostname" =~ /^[[host]]$/ AND $timeFilter GROUP BY time($__interval), "hostname"
@@ -40,6 +40,7 @@ For example, you can have a variable that contains all values for tag `hostname`
```sql
SHOW TAG VALUES WITH KEY = "hostname"
```
## Chained or nested variables
You can also create nested variables, sometimes called [chained variables]({{< relref "../../variables/variable-types/chained-variables.md" >}}).


@@ -63,5 +63,4 @@ datasources:
httpHeaderName1: 'Authorization'
secureJsonData:
httpHeaderValue1: 'Token <token>'
```


@@ -21,8 +21,8 @@ To access Loki settings, click the **Configuration** (gear) icon, then click **D
| Name | Description |
| --------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Name` | The data source name. This is how you refer to the data source in panels, queries, and Explore. |
| `Default` | Default data source that is pre-selected for new panels. |
| `URL` | URL of the Loki instance, e.g., `http://localhost:3100`. |
| `Whitelisted Cookies` | Grafana Proxy deletes forwarded cookies by default. Specify cookies by name that should be forwarded to the data source. |
| `Maximum lines` | Upper limit for the number of log lines returned by Loki (default is 1000). Lower this limit if your browser is sluggish when displaying logs in Explore. |
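Like the other data sources, Loki can also be configured with provisioning. A minimal sketch, assuming the standard provisioning format and that the `maxLines` option mirrors the `Maximum lines` setting above:

```yaml
apiVersion: 1

datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://localhost:3100
    jsonData:
      # Assumed to correspond to the "Maximum lines" setting
      maxLines: 1000
```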
@@ -149,18 +149,20 @@ Check out the [Templating]({{< relref "../variables/_index.md" >}}) documentatio
Variable of the type _Query_ allows you to query Loki for a list of labels or label values. The Loki data source plugin
provides the following functions you can use in the `Query` input field.
| Name | Description |
| ------------------------------------------ | -------------------------------------------------------------------------------------- |
| `label_names()` | Returns a list of label names. |
| `label_values(label)` | Returns a list of label values for the `label`. |
| `label_values(log stream selector, label)` | Returns a list of label values for the `label` in the specified `log stream selector`. |
### Ad hoc filters variable
Loki supports the special ad hoc filters variable type. It allows you to specify any number of label/value filters on the fly. These filters are automatically applied to all your Loki queries.
### Using interval and range variables
You can use some global built-in variables in query variables; `$__interval`, `$__interval_ms`, `$__range`, `$__range_s` and `$__range_ms`. For more information, refer to [Global built-in variables]({{< relref "../variables/variable-types/global-variables.md" >}}).
## Annotations
You can use any non-metric Loki query as a source for [annotations]({{< relref "../dashboards/annotations" >}}). Log content will be used as annotation text and your log stream labels as tags, so there is no need for additional mapping.


@@ -8,7 +8,7 @@ weight = 1000
# Using MySQL in Grafana
> Starting from Grafana v5.1 you can name the time column _time_ in addition to earlier supported _time_sec_. Usage of _time_sec_ will eventually be deprecated.
Grafana ships with a built-in MySQL data source plugin that allows you to query and visualize data from a MySQL compatible database.
@@ -17,22 +17,22 @@ Grafana ships with a built-in MySQL data source plugin that allows you to query
1. Open the side menu by clicking the Grafana icon in the top header.
1. In the side menu under the `Dashboards` link you should find a link named `Data Sources`.
1. Click the `+ Add data source` button in the top header.
1. Select _MySQL_ from the _Type_ dropdown.
### Data source options
| Name | Description |
| ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Name` | The data source name. This is how you refer to the data source in panels and queries. |
| `Default` | Default data source means that it will be pre-selected for new panels. |
| `Host` | The IP address/hostname and optional port of your MySQL instance. |
| `Database` | Name of your MySQL database. |
| `User` | Database user's login/username |
| `Password` | Database user's password |
| `Session Timezone` | Specify the time zone used in the database session, such as `Europe/Berlin` or `+02:00`. This is necessary if the time zone of the database (or the host of the database) is set to something other than UTC. Set the value used in the session with `SET time_zone='...'`. If you leave this field empty, then the time zone is not updated. For more information, refer to the [MySQL documentation](https://dev.mysql.com/doc/refman/8.0/en/time-zone-support.html). |
| `Max open` | The maximum number of open connections to the database, default `unlimited` (Grafana v5.4+). |
| `Max idle` | The maximum number of connections in the idle connection pool, default `2` (Grafana v5.4+). |
| `Max lifetime` | The maximum amount of time in seconds a connection may be reused, default `14400`/4 hours. This should always be lower than configured [wait_timeout](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_wait_timeout) in MySQL (Grafana v5.4+). |
### Min time interval
@@ -66,7 +66,7 @@ Example:
GRANT SELECT ON mydatabase.mytable TO 'grafanaReader';
```
You can use wildcards (`*`) in place of database or table if you want to grant access to more databases and tables.
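For example, to grant read access to every table in a hypothetical `mydatabase` schema:

```sql
GRANT SELECT ON mydatabase.* TO 'grafanaReader';
```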
## Query Editor
@@ -102,10 +102,12 @@ If you use aggregate functions you need to group your resultset. The editor will
You may add further value columns by clicking the plus button and selecting `Column` from the menu. Multiple value columns will be plotted as separate series in the graph panel.
### Filter data (WHERE)
To add a filter click the plus icon to the right of the `WHERE` condition. You can remove filters by clicking on
the filter and selecting `Remove`. A filter for the currently selected time range is automatically added to new queries.
### Group By
To group by time or any other columns click the plus icon at the end of the GROUP BY row. The suggestion dropdown will only show text columns of your currently selected table but you may manually enter any column.
You can remove the group by clicking on the item and then selecting `Remove`.
@@ -116,6 +118,7 @@ If you add any grouping, all selected columns need to have an aggregate function
Grafana can fill in missing values when you group by time. The time function accepts two arguments. The first argument is the time window that you would like to group by, and the second argument is the value you want Grafana to fill missing items with.
### Text Editor Mode (RAW)
You can switch to the raw query editor mode by clicking the hamburger icon and selecting `Switch editor mode` or by clicking `Edit SQL` below the query.
> If you use the raw query editor, be sure your query at minimum has `ORDER BY time` and a filter on the returned time range.
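A minimal raw query that satisfies both requirements might look like the following sketch (the `metrics` table and its columns are illustrative):

```sql
SELECT
  UNIX_TIMESTAMP(created_at) AS time,  -- epoch seconds work as the time column
  load_average AS value
FROM metrics
WHERE $__timeFilter(created_at)        -- restricts rows to the dashboard time range
ORDER BY time
```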
@@ -124,26 +127,26 @@ You can switch to the raw query editor mode by clicking the hamburger icon and s
To simplify syntax and to allow for dynamic parts, like date range filters, the query can contain macros.
| Macro example | Description |
| ----------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `$__time(dateColumn)` | Will be replaced by an expression to convert to a UNIX timestamp and rename the column to `time_sec`. For example, _UNIX_TIMESTAMP(dateColumn) as time_sec_ |
| `$__timeEpoch(dateColumn)` | Will be replaced by an expression to convert to a UNIX timestamp and rename the column to `time_sec`. For example, _UNIX_TIMESTAMP(dateColumn) as time_sec_ |
| `$__timeFilter(dateColumn)` | Will be replaced by a time range filter using the specified column name. For example, _dateColumn BETWEEN FROM_UNIXTIME(1494410783) AND FROM_UNIXTIME(1494410983)_ |
| `$__timeFrom()` | Will be replaced by the start of the currently active time selection. For example, _FROM_UNIXTIME(1494410783)_ |
| `$__timeTo()` | Will be replaced by the end of the currently active time selection. For example, _FROM_UNIXTIME(1494410983)_ |
| `$__timeGroup(dateColumn,'5m')` | Will be replaced by an expression usable in GROUP BY clause. For example, `cast(cast(UNIX_TIMESTAMP(dateColumn)/(300) as signed)*300 as signed)` |
| `$__timeGroup(dateColumn,'5m', 0)` | Same as above but with a fill parameter so missing points in that series will be added by grafana and 0 will be used as value. |
| `$__timeGroup(dateColumn,'5m', NULL)` | Same as above but NULL will be used as value for missing points. |
| `$__timeGroup(dateColumn,'5m', previous)` | Same as above but the previous value in that series will be used as fill value if no value has been seen yet NULL will be used (only available in Grafana 5.3+). |
| `$__timeGroupAlias(dateColumn,'5m')` | Will be replaced identical to $\_\_timeGroup but with an added column alias (only available in Grafana 5.3+). |
| `$__unixEpochFilter(dateColumn)` | Will be replaced by a time range filter using the specified column name with times represented as Unix timestamp. For example, _dateColumn > 1494410783 AND dateColumn < 1494497183_ |
| `$__unixEpochFrom()` | Will be replaced by the start of the currently active time selection as Unix timestamp. For example, _1494410783_ |
| `$__unixEpochTo()` | Will be replaced by the end of the currently active time selection as Unix timestamp. For example, _1494497183_ |
| `$__unixEpochNanoFilter(dateColumn)` | Will be replaced by a time range filter using the specified column name with times represented as nanosecond timestamp. For example, _dateColumn > 1494410783152415214 AND dateColumn < 1494497183142514872_ |
| `$__unixEpochNanoFrom()` | Will be replaced by the start of the currently active time selection as nanosecond timestamp. For example, _1494410783152415214_ |
| `$__unixEpochNanoTo()` | Will be replaced by the end of the currently active time selection as nanosecond timestamp. For example, _1494497183142514872_ |
| `$__unixEpochGroup(dateColumn,'5m', [fillmode])` | Same as $\_\_timeGroup but for times stored as Unix timestamp (only available in Grafana 5.3+). |
| `$__unixEpochGroupAlias(dateColumn,'5m', [fillmode])` | Same as above but also adds a column alias (only available in Grafana 5.3+). |
We plan to add many more macros. If you have suggestions for what macros you would like to see, please [open an issue](https://github.com/grafana/grafana) in our GitHub repo.
@@ -197,7 +200,7 @@ GROUP BY time
ORDER BY time
```
**Example using the fill parameter in the $\_\_timeGroup macro to convert null values to be zero instead:**
```sql
SELECT
@@ -240,7 +243,7 @@ Check out the [Templating]({{< relref "../variables/_index.md" >}}) documentatio
If you add a template variable of the type `Query`, you can write a MySQL query that can
return things like measurement names, key names or key values that are shown as a dropdown select box.
For example, you can have a variable that contains all values for the `hostname` column in a table if you specify a query like this in the templating variable _Query_ setting.
```sql
SELECT hostname FROM my_host
@@ -252,7 +255,7 @@ A query can return multiple columns and Grafana will automatically create a list
SELECT my_host.hostname, my_other_host.hostname2 FROM my_host JOIN my_other_host ON my_host.city = my_other_host.city
```
To use time range dependent macros like `$__timeFilter(column)` in your query the refresh mode of the template variable needs to be set to _On Time Range Change_.
```sql
SELECT event_name FROM event_log WHERE $__timeFilter(time_column)
@@ -272,6 +275,7 @@ SELECT hostname FROM my_host WHERE region IN($region)
```
#### Using `__searchFilter` to filter results in Query Variable
> Available from Grafana 6.5 and above
Using `__searchFilter` in the query field will filter the query result based on what the user types in the dropdown select box.
@@ -282,6 +286,7 @@ When nothing has been entered by the user the default value for `__searchFilter`
The example below shows how to use `__searchFilter` as part of the query field to enable searching for `hostname` while the user types in the dropdown select box.
Query
```sql
SELECT hostname FROM my_host WHERE hostname LIKE '$__searchFilter'
```
@@ -296,7 +301,7 @@ If the variable is a multi-value variable then use the `IN` comparison operator
There are two syntaxes:
`$<varname>` Example with a template variable named `hostname`:
```sql
SELECT
@@ -308,7 +313,7 @@ WHERE $__timeFilter(atimestamp) and hostname in($hostname)
ORDER BY atimestamp ASC
```
`[[varname]]` Example with a template variable named `hostname`:
```sql
SELECT
@@ -374,12 +379,12 @@ WHERE
$__timeFilter(native_date_time)
```
| Name | Description |
| --------- | --------------------------------------------------------------------------------------------------------------------------------- |
| `time` | The name of the date/time field. Could be a column with a native SQL date/time data type or epoch value. |
| `timeend` | Optional name of the end date/time field. Could be a column with a native SQL date/time data type or epoch value. (Grafana v6.6+) |
| `text` | Event description field. |
| `tags` | Optional field name to use for event tags as a comma separated string. |
## Alerting
@@ -402,7 +407,7 @@ datasources:
user: grafana
password: password
jsonData:
maxOpenConns: 0 # Grafana v5.4+
maxIdleConns: 2 # Grafana v5.4+
connMaxLifetime: 14400 # Grafana v5.4+
```


@@ -14,15 +14,15 @@ Grafana ships with advanced support for OpenTSDB. This topic explains options, v
To access OpenTSDB settings, hover your mouse over the **Configuration** (gear) icon, then click **Data Sources**, and then click the OpenTSDB data source.
| Name | Description |
| --------------------- | --------------------------------------------------------------------------------------- |
| `Name` | The data source name. This is how you refer to the data source in panels and queries. |
| `Default` | Default data source means that it will be pre-selected for new panels. |
| `URL` | The HTTP protocol, IP, and port of your OpenTSDB server (default port is usually 4242) |
| `Whitelisted Cookies` | List the names of cookies to forward to the data source. |
| `Version` | The OpenTSDB version, either `<=2.1` or `2.2`. |
| `Resolution` | Metrics from OpenTSDB may have data points with either second or millisecond resolution. |
| `Lookup Limit` | Default is 1000. |
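A provisioning sketch for these settings, assuming the standard provisioning format; the `jsonData` keys shown here are assumptions and should be checked against the provisioning reference for your Grafana version:

```yaml
apiVersion: 1

datasources:
  - name: OpenTSDB
    type: opentsdb
    access: proxy
    url: http://localhost:4242
    jsonData:
      tsdbVersion: 1 # assumed mapping for the Version setting
      tsdbResolution: 1 # assumed mapping for the Resolution setting
```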
## Query editor


@@ -14,22 +14,22 @@ Grafana ships with a built-in PostgreSQL data source plugin that allows you to q
To access PostgreSQL settings, hover your mouse over the **Configuration** (gear) icon, then click **Data Sources**, and then click the PostgreSQL data source.
| Name | Description |
| ------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Name` | The data source name. This is how you refer to the data source in panels and queries. |
| `Default` | Default data source means that it will be pre-selected for new panels. |
| `Host` | The IP address/hostname and optional port of your PostgreSQL instance. _Do not_ include the database name. The connection string for connecting to Postgres will not be correct and it may cause errors. |
| `Database` | Name of your PostgreSQL database. |
| `User` | Database user's login/username |
| `Password` | Database user's password |
| `SSL Mode` | Determines whether or with what priority a secure SSL TCP/IP connection will be negotiated with the server. When SSL Mode is disabled, SSL Method and Auth Details would not be visible. |
| `SSL Auth Details Method` | Determines whether the SSL Auth details will be configured as a file path or file content. Grafana v7.5+ |
| `SSL Auth Details Value` | File path or file content of SSL root certificate, client certificate and client key |
| `Max open` | The maximum number of open connections to the database, default `unlimited` (Grafana v5.4+). |
| `Max idle` | The maximum number of connections in the idle connection pool, default `2` (Grafana v5.4+). |
| `Max lifetime` | The maximum amount of time in seconds a connection may be reused, default `14400`/4 hours (Grafana v5.4+). |
| `Version` | Determines which functions are available in the query builder (only available in Grafana 5.3+). |
| `TimescaleDB` | A time-series database built as a PostgreSQL extension. When enabled, Grafana uses `time_bucket` in the `$__timeGroup` macro to display TimescaleDB specific aggregate functions in the query builder (only available in Grafana 5.3+). |
### Min time interval
@@ -109,10 +109,12 @@ avg(tx_bytes) OVER (ORDER BY "time" ROWS 5 PRECEDING) AS "tx_bytes"
You may add further value columns by clicking the plus button and selecting `Column` from the menu. Multiple value columns will be plotted as separate series in the graph panel.
### Filter data (WHERE)
To add a filter, click the plus icon to the right of the `WHERE` condition. You can remove filters by clicking on
the filter and selecting `Remove`. A filter for the currently selected time range is automatically added to new queries.
### Group by
To group by time or any other column, click the plus icon at the end of the GROUP BY row. The suggestion dropdown only shows text columns of your currently selected table, but you may manually enter any column.
You can remove the group by clicking on the item and then selecting `Remove`.
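
To make the effect of these builder options concrete, here is a rough sketch of the kind of SQL the editor produces. It assumes a hypothetical `metrics` table with `time`, `hostname`, and `value` columns; `$__timeFilter` stands in for the automatically added time range filter and `$__timeGroupAlias` for the time grouping (both macros are described below):

```sql
SELECT
  $__timeGroupAlias("time", '5m'),
  hostname AS "metric",
  avg(value) AS "value"
FROM metrics
WHERE
  $__timeFilter("time")
GROUP BY 1, 2
ORDER BY 1
```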
@@ -123,6 +125,7 @@ If you add any grouping, all selected columns need to have an aggregate function
Grafana can fill in missing values when you group by time. The time function accepts two arguments. The first argument is the time window that you would like to group by, and the second argument is the value you want Grafana to fill missing items with.
### Text editor mode (RAW)
You can switch to the raw query editor mode by clicking the hamburger icon and selecting `Switch editor mode` or by clicking `Edit SQL` below the query.
> If you use the raw query editor, be sure your query at minimum has `ORDER BY time` and a filter on the returned time range.
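
For example, a minimal raw query that satisfies both requirements might look like the sketch below, assuming a hypothetical `metrics` table; `$__timeFilter` (described in the Macros section below) restricts the result to the selected time range:

```sql
SELECT
  time,
  value
FROM metrics
WHERE
  $__timeFilter(time)
ORDER BY time
```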
@@ -131,26 +134,26 @@ You can switch to the raw query editor mode by clicking the hamburger icon and s
Macros can be used within a query to simplify syntax and allow for dynamic parts.
| Macro example | Description |
| ----------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `$__time(dateColumn)` | Will be replaced by an expression to convert to a UNIX timestamp and rename the column to `time_sec`. For example, _UNIX_TIMESTAMP(dateColumn) as time_sec_ |
| `$__timeEpoch(dateColumn)` | Will be replaced by an expression to convert to a UNIX timestamp and rename the column to `time_sec`. For example, _UNIX_TIMESTAMP(dateColumn) as time_sec_ |
| `$__timeFilter(dateColumn)` | Will be replaced by a time range filter using the specified column name. For example, _dateColumn BETWEEN FROM_UNIXTIME(1494410783) AND FROM_UNIXTIME(1494410983)_ |
| `$__timeFrom()` | Will be replaced by the start of the currently active time selection. For example, _FROM_UNIXTIME(1494410783)_ |
| `$__timeTo()` | Will be replaced by the end of the currently active time selection. For example, _FROM_UNIXTIME(1494410983)_ |
| `$__timeGroup(dateColumn,'5m')` | Will be replaced by an expression usable in a GROUP BY clause. For example, `cast(cast(UNIX_TIMESTAMP(dateColumn)/(300) as signed)*300 as signed)` |
| `$__timeGroup(dateColumn,'5m', 0)` | Same as above, but with a fill parameter so missing points in that series will be added by Grafana and 0 will be used as the value. |
| `$__timeGroup(dateColumn,'5m', NULL)` | Same as above, but NULL will be used as the value for missing points. |
| `$__timeGroup(dateColumn,'5m', previous)` | Same as above, but the previous value in that series will be used as the fill value; if no value has been seen yet, NULL will be used (only available in Grafana 5.3+). |
| `$__timeGroupAlias(dateColumn,'5m')` | Will be replaced identically to `$__timeGroup` but with an added column alias (only available in Grafana 5.3+). |
| `$__unixEpochFilter(dateColumn)` | Will be replaced by a time range filter using the specified column name with times represented as Unix timestamp. For example, _dateColumn > 1494410783 AND dateColumn < 1494497183_ |
| `$__unixEpochFrom()` | Will be replaced by the start of the currently active time selection as Unix timestamp. For example, _1494410783_ |
| `$__unixEpochTo()` | Will be replaced by the end of the currently active time selection as Unix timestamp. For example, _1494497183_ |
| `$__unixEpochNanoFilter(dateColumn)` | Will be replaced by a time range filter using the specified column name with times represented as nanosecond timestamp. For example, _dateColumn > 1494410783152415214 AND dateColumn < 1494497183142514872_ |
| `$__unixEpochNanoFrom()` | Will be replaced by the start of the currently active time selection as nanosecond timestamp. For example, _1494410783152415214_ |
| `$__unixEpochNanoTo()` | Will be replaced by the end of the currently active time selection as nanosecond timestamp. For example, _1494497183142514872_ |
| `$__unixEpochGroup(dateColumn,'5m', [fillmode])` | Same as $\_\_timeGroup but for times stored as Unix timestamp (only available in Grafana 5.3+). |
| `$__unixEpochGroupAlias(dateColumn,'5m', [fillmode])` | Same as above but also adds a column alias (only available in Grafana 5.3+). |
We plan to add many more macros. If you have suggestions for what macros you would like to see, please [open an issue](https://github.com/grafana/grafana) in our GitHub repo.
@@ -162,7 +165,6 @@ Query editor with example query:
![](/static/img/docs/v46/postgres_table_query.png)
The query:
```sql
@@ -203,7 +205,7 @@ GROUP BY time
ORDER BY time
```
**Example using the fill parameter in the $__timeGroup macro to convert null values to be zero instead:**
**Example using the fill parameter in the $\_\_timeGroup macro to convert null values to be zero instead:**
```sql
SELECT
@@ -241,7 +243,7 @@ Refer to [Templates and variables]({{< relref "../variables/_index.md" >}}) for
If you add a template variable of the type `Query`, you can write a PostgreSQL query that can
return things like measurement names, key names or key values that are shown as a dropdown select box.
For example, you can have a variable that contains all values for the `hostname` column in a table if you specify a query like this in the templating variable *Query* setting.
For example, you can have a variable that contains all values for the `hostname` column in a table if you specify a query like this in the templating variable _Query_ setting.
```sql
SELECT hostname FROM host
@@ -253,7 +255,7 @@ A query can return multiple columns and Grafana will automatically create a list
SELECT host.hostname, other_host.hostname2 FROM host JOIN other_host ON host.city = other_host.city
```
To use time range dependent macros like `$__timeFilter(column)` in your query the refresh mode of the template variable needs to be set to *On Time Range Change*.
To use time range dependent macros like `$__timeFilter(column)` in your query the refresh mode of the template variable needs to be set to _On Time Range Change_.
```sql
SELECT event_name FROM event_log WHERE $__timeFilter(time_column)
@@ -273,6 +275,7 @@ SELECT hostname FROM host WHERE region IN($region)
```
#### Using `__searchFilter` to filter results in Query Variable
> Available from Grafana 6.5 and above
Using `__searchFilter` in the query field will filter the query result based on what the user types in the dropdown select box.
@@ -283,6 +286,7 @@ When nothing has been entered by the user the default value for `__searchFilter`
The example below shows how to use `__searchFilter` as part of the query field to enable searching for `hostname` while the user types in the dropdown select box.
Query
```sql
SELECT hostname FROM my_host WHERE hostname LIKE '$__searchFilter'
```
@@ -297,7 +301,7 @@ If the variable is a multi-value variable then use the `IN` comparison operator
There are two syntaxes:
`$<varname>` Example with a template variable named `hostname`:
`$<varname>` Example with a template variable named `hostname`:
```sql
SELECT
@@ -308,7 +312,7 @@ WHERE $__timeFilter(atimestamp) and hostname in($hostname)
ORDER BY atimestamp ASC
```
`[[varname]]` Example with a template variable named `hostname`:
`[[varname]]` Example with a template variable named `hostname`:
```sql
SELECT
@@ -373,12 +377,12 @@ WHERE
$__timeFilter(native_date_time)
```
| Name | Description |
| --------- | --------------------------------------------------------------------------------------------------------------------------------- |
| `time` | The name of the date/time field. Could be a column with a native SQL date/time data type or epoch value. |
| `timeend` | Optional name of the end date/time field. Could be a column with a native SQL date/time data type or epoch value. (Grafana v6.6+) |
| `text` | Event description field. |
| `tags` | Optional field name to use for event tags as a comma separated string. |
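
As an illustration, a minimal annotation query returning these fields could look like the sketch below; the `events` table and its column names are assumptions, aliased to the field names Grafana expects:

```sql
SELECT
  created_at AS time,
  resolved_at AS timeend,
  description AS text,
  tag_list AS tags
FROM events
WHERE
  $__timeFilter(created_at)
ORDER BY created_at
```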
## Alerting
@@ -401,17 +405,19 @@ datasources:
database: grafana
user: grafana
secureJsonData:
password: "Password!"
password: 'Password!'
jsonData:
sslmode: "disable" # disable/require/verify-ca/verify-full
maxOpenConns: 0 # Grafana v5.4+
maxIdleConns: 2 # Grafana v5.4+
connMaxLifetime: 14400 # Grafana v5.4+
sslmode: 'disable' # disable/require/verify-ca/verify-full
maxOpenConns: 0 # Grafana v5.4+
maxIdleConns: 2 # Grafana v5.4+
connMaxLifetime: 14400 # Grafana v5.4+
postgresVersion: 903 # 903=9.3, 904=9.4, 905=9.5, 906=9.6, 1000=10
timescaledb: false
```
> **Note:** In the above code, the `postgresVersion` value of `10` refers to PostgreSQL version 10 and above.
If you encounter metric request errors or other issues:
- Make sure your data source YAML file parameters exactly match the example. This includes parameter names and use of quotation marks.
- Make sure the `database` name is not included in the `url`.
@@ -36,6 +36,7 @@ Once you provided the numbers, `TestData DB` distributes them evenly based on th
## Dashboards
`TestData DB` also contains some dashboards with examples.
1. Click **Configuration** > **Data Sources** > **TestData DB** > **Dashboards**.
1. **Import** the **Simple Streaming Example** dashboard.
@@ -5,93 +5,107 @@ aliases = ["/docs/grafana/latest/project/cla", "docs/contributing/cla.html"]
+++
# Grafana Labs Software Grant and Contributor License Agreement ("Agreement")
This agreement is based on the Apache Software Foundation Contributor License Agreement.
(v r190612)
Thank you for your interest in software projects stewarded by Raintank, Inc. dba Grafana Labs (“Grafana Labs”). In order to clarify the intellectual property license
granted with Contributions from any person or entity, Grafana Labs
must have a Contributor License Agreement (CLA) on file that has been
agreed to by each Contributor, indicating agreement to the license terms
below. This license is for your protection as a Contributor as well
as the protection of Grafana Labs and its users; it does not change
your rights to use your own Contributions for any other purpose.
This Agreement allows an individual to contribute to Grafana Labs on that individual's own behalf, or an entity (the "Corporation") to
submit Contributions to Grafana Labs, to authorize Contributions
submitted by its designated employees to Grafana Labs, and to grant
copyright and patent licenses thereto.
You accept and agree to the following terms and conditions for Your
present and future Contributions submitted to Grafana Labs. Except
for the license granted herein to Grafana Labs and recipients of
software distributed by Grafana Labs, You reserve all right, title,
and interest in and to Your Contributions.
## 1. Definitions.
"You" (or "Your") shall mean the copyright owner or legal entity
authorized by the copyright owner that is making this Agreement
with Grafana Labs. For legal entities, the entity making a
Contribution and all other entities that control, are controlled by,
or are under common control with that entity are considered to be a
single Contributor. For the purposes of this definition, "control"
means (i) the power, direct or indirect, to cause the direction or
management of such entity, whether by contract or otherwise, or
(ii) ownership of fifty percent (50%) or more of the outstanding
shares, or (iii) beneficial ownership of such entity.
"Contribution" shall mean any work, as well as
any modifications or additions to an existing work, that is intentionally
submitted by You to Grafana Labs for inclusion in, or
documentation of, any of the products owned or managed by Grafana Labs (the "Work"). For the purposes of this definition,
"submitted" means any form of electronic, verbal, or written
communication sent to Grafana Labs or its representatives,
including but not limited to communication on electronic mailing
lists, source code control systems (such as GitHub), and issue tracking systems
that are managed by, or on behalf of, Grafana Labs for the
purpose of discussing and improving the Work, but excluding
communication that is conspicuously marked or otherwise designated
in writing by You as "Not a Contribution."
## 2. Grant of Copyright License. Subject to the terms and conditions
of this Agreement, You hereby grant to Grafana Labs and to
recipients of software distributed by Grafana Labs a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare derivative works of,
publicly display, publicly perform, sublicense, and distribute
Your Contributions and such derivative works.
## 3. Grant of Patent License. Subject to the terms and conditions of
this Agreement, You hereby grant to Grafana Labs and to recipients
of software distributed by Grafana Labs a perpetual, worldwide,
non-exclusive, no-charge, royalty-free, irrevocable (except as
stated in this section) patent license to make, have made, use,
offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by You that are necessarily infringed by Your Contribution(s)
alone or by combination of Your Contribution(s) with the Work to
which such Contribution(s) were submitted. If any entity institutes
patent litigation against You or any other entity (including a
cross-claim or counterclaim in a lawsuit) alleging that your
Contribution, or the Work to which you have contributed, constitutes
direct or contributory patent infringement, then any patent licenses
granted to that entity under this Agreement for that Contribution or
Work shall terminate as of the date such litigation is filed.
## 4. You represent that You are legally entitled to grant the above
license. If You are an individual, and if Your employer(s) has rights to intellectual property
that you create that includes Your Contributions, you represent
that You have received permission to make Contributions on behalf
of that employer, or that Your employer has waived such rights for
your Contributions to Grafana Labs. If You are a Corporation, any individual who makes a contribution from an account associated with You will be considered authorized to Contribute on Your behalf.
## 5. You represent that each of Your Contributions is Your original
creation (see section 7 for submissions on behalf of others).
## 6. You are not expected to provide support for Your Contributions,
except to the extent You desire to provide support. You may provide
support for free, for a fee, or not at all. Unless required by
applicable law or agreed to in writing, You provide Your
Contributions on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
OF ANY KIND, either express or implied, including, without
limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT,
MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.
## 7. Should You wish to submit work that is not Your original creation,
You may submit it to Grafana Labs separately from any
Contribution, identifying the complete details of its source and
of any license or other restriction (including, but not limited
to, related patents, trademarks, and license agreements) of which
you are personally aware, and conspicuously marking the work as
"Submitted on behalf of a third-party: [named here]".
@@ -83,5 +83,3 @@ Learn more about Grafana options and packages.
#### Go
- [Grafana Plugin SDK for Go]({{< relref "backend/grafana-plugin-sdk-for-go" >}})
@@ -3,7 +3,7 @@ title = "Add authentication for data source plugins"
aliases = ["/docs/grafana/latest/plugins/developing/auth-for-datasources/", "/docs/grafana/next/developers/plugins/authentication/"]
+++
# Add authentication for data source plugins
# Add authentication for data source plugins
This page explains how to configure your data source plugin to authenticate against a third-party API.
@@ -146,11 +146,10 @@ To forward requests through the Grafana proxy, you need to configure one or more
```ts
const routePath = '/example';
getBackendSrv()
.datasourceRequest({
url: this.url + routePath + '/v1/users',
method: 'GET',
});
getBackendSrv().datasourceRequest({
url: this.url + routePath + '/v1/users',
method: 'GET',
});
```
### Add a dynamic proxy route to your plugin
@@ -13,9 +13,7 @@ By adding a help component to your plugin, you can for example create "cheat she
import { QueryEditorHelpProps } from '@grafana/data';
export default (props: QueryEditorHelpProps) => {
return (
<h2>My cheat sheet</h2>
);
return <h2>My cheat sheet</h2>;
};
```
@@ -62,7 +60,7 @@ By adding a help component to your plugin, you can for example create "cheat she
{item.expression ? (
<div
className="cheat-sheet-item__example"
onClick={e => props.onClickExample({ refId: 'A', queryText: item.expression } as DataQuery)}
onClick={(e) => props.onClickExample({ refId: 'A', queryText: item.expression } as DataQuery)}
>
<code>{item.expression}</code>
</div>
@@ -29,5 +29,6 @@ To enable annotation support for your data source, add the following two lines o
**datasource.ts**
```ts
annotations: {};
annotations: {
}
```
@@ -29,9 +29,7 @@ The query editor for Explore is similar to the query editor for the data source
export type Props = ExploreQueryFieldProps<DataSource, MyQuery, MyDataSourceOptions>;
export default (props: Props) => {
return (
<h2>My query editor</h2>
);
return <h2>My query editor</h2>;
};
```
@@ -92,6 +90,7 @@ Explore should by default select a reasonable visualization for your data so use
If this does not work for you or you want to show some data in a specific visualization, add a hint to your returned data frame using the `preferredVisualisationType` meta attribute.
You can construct a data frame with specific metadata:
```
const firstResult = new MutableDataFrame({
fields: [...],
@@ -30,10 +30,10 @@ Add `replaceVariables` to the argument list, and pass it a user-defined template
```ts
export const SimplePanel: React.FC<Props> = ({ options, data, width, height, replaceVariables }) => {
const query = replaceVariables('Now displaying $service')
const query = replaceVariables('Now displaying $service');
return <div>{ query }</div>
}
return <div>{query}</div>;
};
```
## Interpolate variables in data source plugins
@@ -67,7 +67,7 @@ A data source can define the default format option when no format is specified b
Let's change the SQL query to use CSV format by default:
```ts
getTemplateSrv().replace('SELECT * FROM services WHERE id IN ($service)', options.scopedVars, "csv");
getTemplateSrv().replace('SELECT * FROM services WHERE id IN ($service)', options.scopedVars, 'csv');
```
Now, when users write `$service`, the query looks like this:
@@ -177,7 +177,13 @@ Let's create a custom query editor to allow the user to edit the query model.
</div>
<div className="gf-form">
<span className="gf-form-label width-10">Query</span>
<input name="rawQuery" className="gf-form-input" onBlur={saveQuery} onChange={handleChange} value={state.rawQuery} />
<input
name="rawQuery"
className="gf-form-input"
onBlur={saveQuery}
onChange={handleChange}
value={state.rawQuery}
/>
</div>
</>
);
@@ -20,6 +20,7 @@ Because Grafana maintains the plugin protocol, the plugin protocol attempts to f
## Writing plugins without Go
If you want to write a backend plugin in a language other than Go, it's possible as long as the language supports [gRPC](https://grpc.io/). However, writing a plugin in Go is recommended and has several advantages that should be carefully taken into account before proceeding:
- There's an official [SDK]({{< relref "grafana-plugin-sdk-for-go.md" >}}) available.
- Single binary as the compiled output.
- Building and compiling for multiple platforms is easy.
@@ -45,8 +45,8 @@ Grafana uses [RxJS](https://rxjs.dev/) to continuously send data from a data sou
```
```ts
const observables = options.targets.map(target => {
return new Observable<DataQueryResponse>(subscriber => {
const observables = options.targets.map((target) => {
return new Observable<DataQueryResponse>((subscriber) => {
// ...
});
});
@@ -99,38 +99,38 @@ Grafana uses [RxJS](https://rxjs.dev/) to continuously send data from a data sou
Here's the final `query` method.
```ts
query(options: DataQueryRequest<MyQuery>): Observable<DataQueryResponse> {
  const streams = options.targets.map(target => {
    const query = defaults(target, defaultQuery);

    return new Observable<DataQueryResponse>(subscriber => {
      const frame = new CircularDataFrame({
        append: 'tail',
        capacity: 1000,
      });

      frame.refId = query.refId;
      frame.addField({ name: 'time', type: FieldType.time });
      frame.addField({ name: 'value', type: FieldType.number });

      const intervalId = setInterval(() => {
        frame.add({ time: Date.now(), value: Math.random() });

        subscriber.next({
          data: [frame],
          key: query.refId,
        });
      }, 100);

      return () => {
        clearInterval(intervalId);
      };
    });
  });

  return merge(...streams);
}
```
One limitation with this example is that the panel visualization is cleared every time you update the dashboard. If you have access to historical data, you can add, or _backfill_, it to the data frame before the first call to `subscriber.next()`.
@@ -27,14 +27,13 @@ To use a custom panel option editor, use the `addCustomEditor` on the `OptionsUI
**module.ts**
```ts
export const plugin = new PanelPlugin<SimpleOptions>(SimplePanel).setPanelOptions(builder => {
return builder
.addCustomEditor({
id: 'label',
path: 'label',
name: 'Label',
editor: SimpleEditor,
});
export const plugin = new PanelPlugin<SimpleOptions>(SimplePanel).setPanelOptions((builder) => {
return builder.addCustomEditor({
id: 'label',
path: 'label',
name: 'Label',
editor: SimpleEditor,
});
});
```
@@ -52,7 +51,7 @@ interface Settings {
to: number;
}
export const SimpleEditor: React.FC<StandardEditorProps<number, Settings>> = ({ item, value, onChange }) => {
export const SimpleEditor: React.FC<StandardEditorProps<number, Settings>> = ({ item, value, onChange }) => {
const options: Array<SelectableValue<number>> = [];
// Default values
@@ -66,25 +65,24 @@ export const SimpleEditor: React.FC<StandardEditorProps<number, Settings>> = ({
});
}
return <Select options={options} value={value} onChange={selectableValue => onChange(selectableValue.value)} />;
return <Select options={options} value={value} onChange={(selectableValue) => onChange(selectableValue.value)} />;
};
```
You can now configure the editor for each option, by configuring the `settings` property in the call to `addCustomEditor`.
```ts
export const plugin = new PanelPlugin<SimpleOptions>(SimplePanel).setPanelOptions(builder => {
return builder
.addCustomEditor({
id: 'index',
path: 'index',
name: 'Index',
editor: SimpleEditor,
settings: {
from: 1,
to: 10,
}
});
export const plugin = new PanelPlugin<SimpleOptions>(SimplePanel).setPanelOptions((builder) => {
return builder.addCustomEditor({
id: 'index',
path: 'index',
name: 'Index',
editor: SimpleEditor,
settings: {
from: 1,
to: 10,
},
});
});
```
@@ -113,7 +111,7 @@ export const SimpleEditor: React.FC<StandardEditorProps<string>> = ({ item, valu
}
}
return <Select options={options} value={value} onChange={selectableValue => onChange(selectableValue.value)} />;
return <Select options={options} value={value} onChange={(selectableValue) => onChange(selectableValue.value)} />;
};
```
@@ -16,7 +16,7 @@ Allow the user to learn your plugin in small steps. Provide a useful default con
For example, by selecting the first field of an expected type, the panel can display a visualization without any user configuration. If a user explicitly selects a field, then use that one. Otherwise, default to the first field of type `string`:
```ts
const numberField = frame.fields.find(field =>
const numberField = frame.fields.find((field) =>
options.numberFieldName ? field.name === options.numberFieldName : field.type === FieldType.number
);
```
@@ -54,10 +54,10 @@ Users have full freedom when they create data source queries for panels. If your
```ts
if (!numberField) {
throw new Error('Query result is missing a number field')
throw new Error('Query result is missing a number field');
}
if (frame.length === 0) {
throw new Error('Query returned an empty result')
throw new Error('Query returned an empty result');
}
```
@@ -45,15 +45,15 @@ The javascript object that communicates with the database and transforms data to
The Data source should contain the following functions:
```javascript
query(options) // used by panels to get data
testDatasource() // used by data source configuration page to make sure the connection is working
annotationQuery(options) // used by dashboards to get annotations
metricFindQuery(options) // used by query editor to get metric suggestions.
query(options); // used by panels to get data
testDatasource(); // used by data source configuration page to make sure the connection is working
annotationQuery(options); // used by dashboards to get annotations
metricFindQuery(options); // used by query editor to get metric suggestions.
```
### testDatasource
When a user clicks on the *Save & Test* button when adding a new data source, the details are first saved to the database and then the `testDatasource` function that is defined in your data source plugin will be called. It is recommended that this function makes a query to the data source that will also test that the authentication details are correct. This is so the data source is correctly configured when the user tries to write a query in a new dashboard.
When a user clicks on the _Save & Test_ button when adding a new data source, the details are first saved to the database and then the `testDatasource` function that is defined in your data source plugin will be called. It is recommended that this function makes a query to the data source that will also test that the authentication details are correct. This is so the data source is correctly configured when the user tries to write a query in a new dashboard.
### Query
@@ -81,15 +81,15 @@ An array of:
```json
[
{
"target":"upper_75",
"datapoints":[
"target": "upper_75",
"datapoints": [
[622, 1450754160000],
[365, 1450754220000]
]
},
{
"target":"upper_90",
"datapoints":[
"target": "upper_90",
"datapoints": [
[861, 1450754160000],
[767, 1450754220000]
]
@@ -118,16 +118,8 @@ An array of:
}
],
"rows": [
[
1457425380000,
null,
null
],
[
1457425370000,
1002.76215352,
1002.76215352
]
[1457425380000, null, null],
[1457425370000, 1002.76215352, 1002.76215352]
],
"type": "table"
}
@@ -141,7 +133,7 @@ Request object passed to datasource.annotationQuery function:
```json
{
"range": { "from": "2016-03-04T04:07:55.144Z", "to": "2016-03-04T07:07:55.144Z" },
"rangeRaw": { "from": "now-3h", to: "now" },
"rangeRaw": { "from": "now-3h", "to": "now" },
"annotation": {
"datasource": "generic datasource",
"enable": true,
@@ -160,7 +152,7 @@ Expected result from datasource.annotationQuery:
"name": "annotation name", //should match the annotation name in grafana
"enabled": true,
"datasource": "generic datasource"
},
},
"title": "Cluster outage",
"time": 1457075272576,
"text": "Joe causes brain split",
@@ -61,13 +61,13 @@ Grafana conventions mean all you need to do is to hook up an Angular template wi
## Using Events
To add an editor tab you need to hook into the event model so that the tab is added when the *init-edit-mode* event is triggered. The following code should be added to the constructor of the plugin Ctrl class:
To add an editor tab you need to hook into the event model so that the tab is added when the _init-edit-mode_ event is triggered. The following code should be added to the constructor of the plugin Ctrl class:
```javascript
this.events.on('init-edit-mode', this.onInitEditMode.bind(this));
```
Then you need to create a handler function that is bound to the event. In the example above, the handler is called `onInitEditMode`. The tab is added by calling the controller function _addEditorTab_. This function has three parameters: the tab name, the path to an HTML template for the new editor tab, and the tab number. It can be a bit tricky to figure out the path; the path name is based on the id specified in the plugin.json file, for example **grafana-clock-panel**. The code below hooks up an Angular template called editor.html that is located in the `src/partials` directory.
```javascript
onInitEditMode() {
@@ -82,31 +82,42 @@ For editor tabs html, it is best to use Grafana css styles rather than custom st
Most editor tabs should use the [gf-form css class](https://github.com/grafana/grafana/blob/main/public/sass/components/_gf-form.scss) from Grafana. The example below has one row with a couple of columns and each column is wrapped in a div like this:
```html
<div class="section gf-form-group"></div>
```
Then each pair, label and field is wrapped in a div with a gf-form class.
```html
<div class="gf-form">
<label class="gf-form-label width-8">Font Size</label>
<input type="text" class="gf-form-input width-4" ng-model="ctrl.panel.fontSize" ng-change="ctrl.render()" ng-model-onblur>
<input
type="text"
class="gf-form-input width-4"
ng-model="ctrl.panel.fontSize"
ng-change="ctrl.render()"
ng-model-onblur
/>
</div>
```
Note that there are some Angular attributes here. *ng-model* will update the panel data. *ng-change* will render the panel when you change the value. This change will occur on the onblur event due to the *ng-model-onblur* attribute. This means you can see the effect of your changes on the panel while editing.
Note that there are some Angular attributes here. _ng-model_ will update the panel data. _ng-change_ will render the panel when you change the value. This change will occur on the onblur event due to the _ng-model-onblur_ attribute. This means you can see the effect of your changes on the panel while editing.
{{< figure class="float-right" src="/assets/img/blog/clock-panel-editor.png" caption="Panel Editor" >}}
On the editor tab we use a drop down for 12/24 hour clock, an input field for font size and a color picker for the background color.
The drop down/select has its own *gf-form-select-wrapper* css class and looks like this:
The drop down/select has its own _gf-form-select-wrapper_ css class and looks like this:
```html
<div class="gf-form">
<label class="gf-form-label width-9">12 or 24 hour</label>
<div class="gf-form-select-wrapper max-width-9">
<select class="input-small gf-form-input" ng-model="ctrl.panel.clockType" ng-options="t for t in ['12 hour', '24 hour', 'custom']" ng-change="ctrl.render()"></select>
<select
class="input-small gf-form-input"
ng-model="ctrl.panel.clockType"
ng-options="t for t in ['12 hour', '24 hour', 'custom']"
ng-change="ctrl.render()"
></select>
</div>
</div>
```
@@ -114,11 +125,11 @@ The drop down/select has its own *gf-form-select-wrapper* css class and looks li
The color picker (or spectrum picker) is a component that already exists in Grafana. We use it like this for the background color:
```html
<spectrum-picker class="gf-form-input" ng-model="ctrl.panel.bgColor" ng-change="ctrl.render()" ></spectrum-picker>
<spectrum-picker class="gf-form-input" ng-model="ctrl.panel.bgColor" ng-change="ctrl.render()"></spectrum-picker>
```
## Editor Tab Finished
To reiterate, this all ties together quite neatly. We specify properties and panel defaults in the constructor for the panel controller and these can then be changed in the editor. Grafana takes care of saving the changes.
One thing to be aware of is that panel defaults are used the first time a panel is created to set the initial values of the panel properties. After the panel is saved, the saved value is used instead. Be aware that if you update the panel defaults, they will not automatically update the property in an existing panel. For example, if you set the default font size to 60px first and then in version 2 of the plugin change it to 50px, existing panels will still have 60px and only new panels will get the new 50px value.
@@ -11,7 +11,8 @@ Panels are the main building blocks of dashboards.
## Panel development
### Scrolling
The Grafana dashboard framework controls the panel height. To enable a scrollbar within the panel, the PanelCtrl needs to set the `scrollable` static variable:
```javascript
export class MyPanelCtrl extends PanelCtrl {
@@ -19,7 +20,7 @@ export class MyPanelCtrl extends PanelCtrl {
...
```
In this case, make sure the template has a single `<div>...</div>` root. The plugin loader will modify that element, adding a scrollbar.
### Examples

"dependencies": {
"grafanaVersion": "3.x.x",
"plugins": [ ]
"plugins": []
}
}
```
- The convention for the plugin id is **[grafana.com username/org]-[plugin name]-[datasource|app|panel]** and it has to be unique. The org **cannot** be `grafana` unless it is a plugin created by the Grafana core team.
Examples:
- raintank-worldping-app
- ryantxu-ajax-panel
- alexanderzobnin-zabbix-app
- hawkular-datasource
Examples:
- raintank-worldping-app
- ryantxu-ajax-panel
- alexanderzobnin-zabbix-app
- hawkular-datasource
- The `type` field should be either `datasource`, `app`, or `panel`.
- The `version` field should be in the form x.x.x, e.g. `1.0.0` or `0.4.1`.
@@ -118,11 +117,23 @@ Below is a minimal example of an editor row with one form group and two fields,
<div class="gf-form">
<label class="gf-form-label width-10">Label1</label>
<div class="gf-form-select-wrapper max-width-10">
<select class="input-small gf-form-input" ng-model="ctrl.panel.mySelectProperty" ng-options="t for t in ['option1', 'option2', 'option3']" ng-change="ctrl.onSelectChange()"></select>
<select
class="input-small gf-form-input"
ng-model="ctrl.panel.mySelectProperty"
ng-options="t for t in ['option1', 'option2', 'option3']"
ng-change="ctrl.onSelectChange()"
></select>
</div>
<div class="gf-form">
<label class="gf-form-label width-10">Label2</label>
<input type="text" class="input-small gf-form-input width-10" ng-model="ctrl.panel.myProperty" ng-change="ctrl.onFieldChange()" placeholder="suggestion for user" ng-model-onblur />
<input
type="text"
class="input-small gf-form-input width-10"
ng-model="ctrl.panel.myProperty"
ng-change="ctrl.onFieldChange()"
placeholder="suggestion for user"
ng-model-onblur
/>
</div>
</div>
</div>
@@ -132,22 +143,19 @@ Below is a minimal example of an editor row with one form group and two fields,
Use the `width-x` and `max-width-x` classes to control the width of your labels and input fields. Try to get labels and input fields to line up neatly by having the same width for all the labels in a group and the same width for all inputs in a group if possible.
## Data Sources
For more information about data sources, refer to the [basic guide for data sources](http://docs.grafana.org/plugins/developing/datasources/).
### Configuration Page Guidelines
- It should be as easy as possible for a user to configure a URL. If the data source is using the `datasource-http-settings` component, it should use the `suggest-url` attribute to suggest the default URL or a URL that is similar to what it should be (especially important if the URL refers to a REST endpoint that is not common knowledge for most users e.g. `https://yourserver:4000/api/custom-endpoint`).
```html
<datasource-http-settings
current="ctrl.current"
suggest-url="http://localhost:8080">
</datasource-http-settings>
```
```html
<datasource-http-settings current="ctrl.current" suggest-url="http://localhost:8080"> </datasource-http-settings>
```
- The `testDatasource` function should make a query to the data source that will also test that the authentication details are correct. This is so the data source is correctly configured when the user tries to write a query in a new dashboard.
#### Password Security
If possible, any passwords or secrets should be saved in the `secureJsonData` blob. To encrypt sensitive data, the Grafana server's proxy feature must be used. The Grafana server has support for token authentication (OAuth) and HTTP Header authentication. If the calls have to be sent directly from the browser to a third-party API, this will not be possible and sensitive data will not be encrypted.
@@ -156,9 +164,8 @@ Read more here about how [authentication for data sources]({{< relref "../add-au
If using the proxy feature, the Configuration page should use the `secureJsonData` blob like this:
- good: `<input type="password" class="gf-form-input" ng-model='ctrl.current.secureJsonData.password' placeholder="password"></input>`
- bad: `<input type="password" class="gf-form-input" ng-model='ctrl.current.password' placeholder="password"></input>`
- good: `<input type="password" class="gf-form-input" ng-model='ctrl.current.secureJsonData.password' placeholder="password"></input>`
- bad: `<input type="password" class="gf-form-input" ng-model='ctrl.current.password' placeholder="password"></input>`
### Query Editor
@@ -14,7 +14,7 @@ The plugin.json file is required for all plugins. When Grafana starts, it scans
## Properties
| Property | Type | Required | Description |
|-----------------|-------------------------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| --------------- | ----------------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `dependencies` | [object](#dependencies) | **Yes** | Dependencies needed by the plugin. |
| `id` | string | **Yes** | Unique name of the plugin. If the plugin is published on grafana.com, then the plugin id has to follow the naming conventions. |
| `info` | [object](#info) | **Yes** | Metadata for the plugin. Some fields are used on the plugins page in Grafana and others on grafana.com if the plugin is published. |
@@ -33,7 +33,7 @@ The plugin.json file is required for all plugins. When Grafana starts, it scans
| `metrics` | boolean | No | For data source plugins. If the plugin supports metric queries. Used in the Explore feature. |
| `preload` | boolean | No | Initialize plugin on startup. By default, the plugin initializes on first use. |
| `queryOptions` | [object](#queryoptions) | No | For data source plugins. There is a query options section in the plugin's query editor and these options can be turned on if needed. |
| `routes` | [object](#routes)[] | No | For data source plugins. Proxy routes used for plugin authentication and adding headers to HTTP requests made by the plugin. For more information, refer to [Add authentication for data source plugins]({{< relref "add-authentication-for-data-source-plugins.md">}}). |
| `routes` | [object](#routes)[] | No | For data source plugins. Proxy routes used for plugin authentication and adding headers to HTTP requests made by the plugin. For more information, refer to [Add authentication for data source plugins]({{< relref "add-authentication-for-data-source-plugins.md">}}). |
| `skipDataQuery` | boolean | No | For panel plugins. Hides the query editor. |
| `state` | string | No | Marks a plugin as a pre-release. Possible values are: `alpha`, `beta`. |
| `streaming` | boolean | No | For data source plugins. If the plugin supports streaming. |
@@ -47,7 +47,7 @@ Dependencies needed by the plugin.
### Properties
| Property | Type | Required | Description |
|---------------------|----------------------|----------|-------------------------------------------------------------------------------------------------------------------------------|
| ------------------- | -------------------- | -------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `grafanaDependency` | string | **Yes** | Required Grafana version for this plugin. Validated using https://github.com/npm/node-semver. |
| `grafanaVersion` | string | No | (Deprecated) Required Grafana version for this plugin, e.g. `6.x.x 7.x.x` to denote plugin requires Grafana v6.x.x or v7.x.x. |
| `plugins` | [object](#plugins)[] | No | An array of required plugins on which this plugin depends. |
@@ -59,7 +59,7 @@ Plugin dependency. Used to display information about plugin dependencies in the
#### Properties
| Property | Type | Required | Description |
|-----------|--------|----------|----------------------------------------------------|
| --------- | ------ | -------- | -------------------------------------------------- |
| `id` | string | **Yes** | |
| `name` | string | **Yes** | |
| `type` | string | **Yes** | Possible values are: `app`, `datasource`, `panel`. |
@@ -70,7 +70,7 @@ Plugin dependency. Used to display information about plugin dependencies in the
### Properties
| Property | Type | Required | Description |
|--------------|---------|----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| ------------ | ------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `addToNav` | boolean | No | Add the include to the side menu. |
| `component` | string | No | (Legacy) The Angular component to use for a page. |
| `defaultNav` | boolean | No | Page or dashboard when user clicks the icon in the side menu. |
@@ -87,7 +87,7 @@ Metadata for the plugin. Some fields are used on the plugins page in Grafana and
### Properties
| Property | Type | Required | Description |
|---------------|--------------------------|----------|-------------------------------------------------------------------------------------------------------------------------------|
| ------------- | ------------------------ | -------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `keywords` | string[] | **Yes** | Array of plugin keywords. Used for search on grafana.com. |
| `logos` | [object](#logos) | **Yes** | SVG images that are used as plugin icons. |
| `updated` | string | **Yes** | Date when this plugin was built. |
@@ -105,7 +105,7 @@ Information about the plugin author.
#### Properties
| Property | Type | Required | Description |
|----------|--------|----------|---------------------------|
| -------- | ------ | -------- | ------------------------- |
| `email`  | string | No       | Author's email address.   |
| `name` | string | No | Author's name. |
| `url` | string | No | Link to author's website. |
@@ -117,7 +117,7 @@ Build information
#### Properties
| Property | Type | Required | Description |
|----------|--------|----------|------------------------------------------------------|
| -------- | ------ | -------- | ---------------------------------------------------- |
| `branch` | string | No | Git branch the plugin was built from. |
| `hash` | string | No | Git hash of the commit the plugin was built from |
| `number` | number | No | |
@@ -130,7 +130,7 @@ Build information
#### Properties
| Property | Type | Required | Description |
|----------|--------|----------|-------------|
| -------- | ------ | -------- | ----------- |
| `name` | string | No | |
| `url` | string | No | |
@@ -141,7 +141,7 @@ SVG images that are used as plugin icons.
#### Properties
| Property | Type | Required | Description |
|----------|--------|----------|------------------------------------------------------------------------------------------------------------------------------|
| -------- | ------ | -------- | ---------------------------------------------------------------------------------------------------------------------------- |
| `large` | string | **Yes** | Link to the "large" version of the plugin logo, which must be an SVG image. "Large" and "small" logos can be the same image. |
| `small` | string | **Yes** | Link to the "small" version of the plugin logo, which must be an SVG image. "Large" and "small" logos can be the same image. |
@@ -150,7 +150,7 @@ SVG images that are used as plugin icons.
#### Properties
| Property | Type | Required | Description |
|----------|--------|----------|-------------|
| -------- | ------ | -------- | ----------- |
| `name` | string | No | |
| `path` | string | No | |
@@ -161,7 +161,7 @@ For data source plugins. There is a query options section in the plugin's query
### Properties
| Property | Type | Required | Description |
|-----------------|---------|----------|----------------------------------------------------------------------------------------------------------------------------|
| --------------- | ------- | -------- | -------------------------------------------------------------------------------------------------------------------------- |
| `cacheTimeout` | boolean | No | For data source plugins. Whether the `cache timeout` option should be shown in the query options section of the query editor. |
| `maxDataPoints` | boolean | No | For data source plugins. Whether the `max data points` option should be shown in the query options section of the query editor. |
| `minInterval` | boolean | No | For data source plugins. Whether the `min interval` option should be shown in the query options section of the query editor. |
@@ -173,7 +173,7 @@ For data source plugins. Proxy routes used for plugin authentication and adding
### Properties
| Property | Type | Required | Description |
|----------------|-------------------------|----------|---------------------------------------------------------------------------------------------------------|
| -------------- | ----------------------- | -------- | ------------------------------------------------------------------------------------------------------- |
| `body` | [object](#body) | No | For data source plugins. Sets the body content and length of the proxied request. |
| `headers` | array | No | For data source plugins. Adds HTTP headers to the proxied request. |
| `jwtTokenAuth` | [object](#jwttokenauth) | No | For data source plugins. Token authentication section used with a JWT OAuth API. |
@@ -189,7 +189,7 @@ For data source plugins. Proxy routes used for plugin authentication and adding
For data source plugins. Sets the body content and length of the proxied request.
| Property | Type | Required | Description |
|----------|------|----------|-------------|
| -------- | ---- | -------- | ----------- |
### jwtTokenAuth
@@ -198,7 +198,7 @@ For data source plugins. Token authentication section used with an JWT OAuth API
#### Properties
| Property | Type | Required | Description |
|----------|-------------------|----------|------------------------------------------------------|
| -------- | ----------------- | -------- | ---------------------------------------------------- |
| `params` | [object](#params) | No | Parameters for the JWT token authentication request. |
| `scopes` | string | No | |
| `url` | string | No | URL to fetch the JWT token. |
@@ -210,7 +210,7 @@ Parameters for the JWT token authentication request.
##### Properties
| Property | Type | Required | Description |
|----------------|----------|----------|-------------|
| -------------- | -------- | -------- | ----------- |
| `client_email` | string | No | |
| `private_key` | string | No | |
| `scopes` | string[] | No | |
@@ -223,7 +223,7 @@ For data source plugins. Token authentication section used with an OAuth API.
#### Properties
| Property | Type | Required | Description |
|----------|-------------------|----------|--------------------------------------------------|
| -------- | ----------------- | -------- | ------------------------------------------------ |
| `params` | [object](#params) | No | Parameters for the token authentication request. |
| `scopes` | string | No | |
| `url` | string | No | URL to fetch the authentication token. |
@@ -235,10 +235,8 @@ Parameters for the token authentication request.
##### Properties
| Property | Type | Required | Description |
|-----------------|--------|----------|-------------------------------------------------------------------------------------------|
| --------------- | ------ | -------- | ----------------------------------------------------------------------------------------- |
| `client_id` | string | No | OAuth client ID |
| `client_secret` | string | No | OAuth client secret. Usually populated by decrypting the secret from the SecureJson blob. |
| `grant_type` | string | No | OAuth grant type |
| `resource` | string | No | OAuth resource |

View File

@@ -59,7 +59,7 @@ To sign a plugin, you need to decide the _signature level_ you want to sign it u
You can sign your plugin under three different _signature levels_.
| **Plugin Level** | **Paid Subscription Required?** | **Description** |
|------------------|-------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| ---------------- | ----------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Private | No;<br>Free of charge | <p>You can create and sign a Private Plugin for any technology at no charge.</p><p>Private Plugins are for use on your own Grafana. They may not be distributed to the Grafana community, and are not published in the Grafana catalog.</p> |
| Community | No;<br>Free of charge | <p>You can create, sign and distribute plugins at no charge, provided that all dependent technologies are open source and not for profit.</p><p>Community Plugins are published in the official Grafana catalog, and are available to the Grafana community.</p> |
| Commercial | Yes;<br>Commercial Plugin Subscription required | <p>You can create, sign and distribute plugins with dependent technologies that are closed source or commercially backed, by entering into a Commercial Plugin Subscription with Grafana Labs.</p><p>Commercial Plugins are published on the official Grafana catalog, and are available to the Grafana community.</p> |
@@ -73,7 +73,7 @@ For instructions on how to sign a plugin under the Private signature level, refe
For Grafana to verify the digital signature of a plugin, the plugin must include a signed manifest file, _MANIFEST.txt_. The signed manifest file contains two sections:
- **Signed message -** The signed message contains plugin metadata and plugin files with their respective checksums (SHA256).
- **Digital signature -** The digital signature is created by encrypting the signed message using a private key. Grafana has a built-in public key that can be used to verify that the digital signature has been encrypted using the expected private key.
- **Digital signature -** The digital signature is created by encrypting the signed message using a private key. Grafana has a built-in public key that can be used to verify that the digital signature has been encrypted using the expected private key.
**Example manifest file:**

View File

@@ -23,11 +23,11 @@ const numberValues = [12.3, 28.6];
// Create data frame from values.
const frame = toDataFrame({
name: "http_requests_total",
fields: [
{ name: "Time", type: FieldType.time, values: timeValues },
{ name: "Value", type: FieldType.number, values: numberValues }
]
name: 'http_requests_total',
fields: [
{ name: 'Time', type: FieldType.time, values: timeValues },
{ name: 'Value', type: FieldType.number, values: numberValues },
],
});
```
@@ -37,12 +37,12 @@ As you can see from the example, creating data frames like this requires that yo
```ts
const series = [
{ Time: 1599471973065, Value: 12.3 },
{ Time: 1599471975729, Value: 28.6 }
{ Time: 1599471973065, Value: 12.3 },
{ Time: 1599471975729, Value: 28.6 },
];
const frame = toDataFrame(series);
frame.name = 'http_requests_total'
frame.name = 'http_requests_total';
```
## Read values from a data frame
@@ -54,22 +54,22 @@ const SimplePanel: React.FC<Props> = ({ data }) => {
const frame = data.series[0];
// ...
}
};
```
Before you start reading the data, think about what data you expect. For example, to visualize a time series we'd need at least one time field, and one number field.
```ts
const timeField = frame.fields.find(field => field.type === FieldType.time);
const valueField = frame.fields.find(field => field.type === FieldType.number);
const timeField = frame.fields.find((field) => field.type === FieldType.time);
const valueField = frame.fields.find((field) => field.type === FieldType.number);
```
Other types of visualizations might need multiple dimensions. For example, a bubble chart that uses three numeric fields: the X-axis, Y-axis, and one for the radius of each bubble. In this case, instead of hard coding the field names, we recommend that you let the user choose the field to use for each dimension.
```ts
const x = frame.fields.find(field => field.name === xField);
const y = frame.fields.find(field => field.name === yField);
const size = frame.fields.find(field => field.name === sizeField);
const x = frame.fields.find((field) => field.name === xField);
const y = frame.fields.find((field) => field.name === yField);
const size = frame.fields.find((field) => field.name === sizeField);
for (let i = 0; i < frame.length; i++) {
const row = [x?.values.get(i), y?.values.get(i), size?.values.get(i)];
@@ -83,9 +83,9 @@ Alternatively, you can use the [DataFrameView]({{< relref "../../packages_api/da
```ts
const view = new DataFrameView(frame);
view.forEach(row => {
view.forEach((row) => {
console.log(row[options.xField], row[options.yField], row[options.sizeField]);
})
});
```
## Display values from a data frame
@@ -95,12 +95,12 @@ Field options let the user control how Grafana displays the data in a data frame
To apply the field options to a value, use the `display` method on the corresponding field. The result contains information such as the color and suffix to use when displaying the value.
```ts
const valueField = frame.fields.find(field => field.type === FieldType.number);
const valueField = frame.fields.find((field) => field.type === FieldType.number);
return (
<div>
{valueField
? valueField.values.toArray().map(value => {
? valueField.values.toArray().map((value) => {
const displayValue = valueField.display!(value);
return (
<p style={{ color: displayValue.color }}>
@@ -116,7 +116,6 @@ return (
To apply field options to the name of a field, use [getFieldDisplayName]({{< relref "../../packages_api/data/getfielddisplayname.md" >}}).
```ts
const valueField = frame.fields.find(field => field.type === FieldType.number);
const valueField = frame.fields.find((field) => field.type === FieldType.number);
const valueFieldName = getFieldDisplayName(valueField, frame);
```

View File

@@ -9,7 +9,7 @@ weight = 100
> **Note:** Fine-grained access control is in beta, and you can expect changes in future releases.
Fine-grained access control provides a standardized way of granting, changing, and revoking access when it comes to viewing and modifying Grafana resources, such as users and reports.
Fine-grained access control provides a standardized way of granting, changing, and revoking access when it comes to viewing and modifying Grafana resources, such as users and reports.
Fine-grained access control works alongside the current [Grafana permissions]({{< relref "../../permissions/_index.md" >}}), and it gives you granular control over users' actions.
To learn more about how fine-grained access control works, refer to [Roles]({{< relref "./roles.md" >}}) and [Permissions]({{< relref "./permissions.md" >}}).

View File

@@ -6,31 +6,32 @@ weight = 130
+++
# Fine-grained access control references
The reference information that follows complements conceptual information about [Roles]({{< relref "./roles.md" >}}).
## Fine-grained access fixed roles
Fixed roles | Permissions | Descriptions
--- | --- | ---
`fixed:permissions:admin:read` | `roles:read`<br>`roles:list`<br>`roles.builtin:list` | Allows to list and get available roles and built-in role assignments.
`fixed:permissions:admin:edit` | All permissions from `fixed:permissions:admin:read` and <br>`roles:write`<br>`roles:delete`<br>`roles.builtin:add`<br>`roles.builtin:remove` | Allows every read action and in addition allows to create, change and delete custom roles and create or remove built-in role assignments.
`fixed:reporting:admin:read` | `reports:read`<br>`reports:send`<br>`reports.settings:read` | Allows to read reports and report settings.
`fixed:reporting:admin:edit` | All permissions from `fixed:reporting:admin:read` and <br>`reports.admin:write`<br>`reports:delete`<br>`reports.settings:write` | Allows every read action for reports and in addition allows to administer reports.
`fixed:users:admin:read` | `users.authtoken:list`<br>`users.quotas:list`<br>`users:read`<br>`users.teams:read` | Allows to list and get users and related information.
`fixed:users:admin:edit` | All permissions from `fixed:users:admin:read` and <br>`users.password:update`<br>`users:write`<br>`users:create`<br>`users:delete`<br>`users:enable`<br>`users:disable`<br>`users.permissions:update`<br>`users:logout`<br>`users.authtoken:update`<br>`users.quotas:update` | Allows every read action for users and in addition allows to administer users.
`fixed:users:org:read` | `org.users:read` | Allows to get user organizations.
`fixed:users:org:edit` | All permissions from `fixed:users:org:read` and <br>`org.users:add`<br>`org.users:remove`<br>`org.users.role:update` | Allows every read action for user organizations and in addition allows to administer user organizations.
`fixed:ldap:admin:read` | `ldap.user:read`<br>`ldap.status:read` | Allows to read LDAP information and status.
`fixed:ldap:admin:edit` | All permissions from `fixed:ldap:admin:read` and <br>`ldap.user:sync`<br>`ldap.config:reload` | Allows every read action for LDAP and in addition allows to administer LDAP.
`fixed:server:admin:read` | `server.stats:read` | Read server stats
`fixed:settings:admin:read` | `settings:read` | Read settings
`fixed:settings:admin:edit` | All permissions from `fixed:settings:admin:read` and<br>`settings:write` | Update settings
`fixed:datasource:editor:read` | `datasources:explore` | Explore datasources
| Fixed roles | Permissions | Descriptions |
| ------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
| `fixed:permissions:admin:read` | `roles:read`<br>`roles:list`<br>`roles.builtin:list` | Allows to list and get available roles and built-in role assignments. |
| `fixed:permissions:admin:edit` | All permissions from `fixed:permissions:admin:read` and <br>`roles:write`<br>`roles:delete`<br>`roles.builtin:add`<br>`roles.builtin:remove` | Allows every read action and in addition allows to create, change and delete custom roles and create or remove built-in role assignments. |
| `fixed:reporting:admin:read` | `reports:read`<br>`reports:send`<br>`reports.settings:read` | Allows to read reports and report settings. |
| `fixed:reporting:admin:edit` | All permissions from `fixed:reporting:admin:read` and <br>`reports.admin:write`<br>`reports:delete`<br>`reports.settings:write` | Allows every read action for reports and in addition allows to administer reports. |
| `fixed:users:admin:read` | `users.authtoken:list`<br>`users.quotas:list`<br>`users:read`<br>`users.teams:read` | Allows to list and get users and related information. |
| `fixed:users:admin:edit` | All permissions from `fixed:users:admin:read` and <br>`users.password:update`<br>`users:write`<br>`users:create`<br>`users:delete`<br>`users:enable`<br>`users:disable`<br>`users.permissions:update`<br>`users:logout`<br>`users.authtoken:update`<br>`users.quotas:update` | Allows every read action for users and in addition allows to administer users. |
| `fixed:users:org:read` | `org.users:read` | Allows to get user organizations. |
| `fixed:users:org:edit` | All permissions from `fixed:users:org:read` and <br>`org.users:add`<br>`org.users:remove`<br>`org.users.role:update` | Allows every read action for user organizations and in addition allows to administer user organizations. |
| `fixed:ldap:admin:read` | `ldap.user:read`<br>`ldap.status:read` | Allows to read LDAP information and status. |
| `fixed:ldap:admin:edit` | All permissions from `fixed:ldap:admin:read` and <br>`ldap.user:sync`<br>`ldap.config:reload` | Allows every read action for LDAP and in addition allows to administer LDAP. |
| `fixed:server:admin:read` | `server.stats:read` | Read server stats |
| `fixed:settings:admin:read` | `settings:read` | Read settings |
| `fixed:settings:admin:edit` | All permissions from `fixed:settings:admin:read` and<br>`settings:write` | Update settings |
| `fixed:datasource:editor:read` | `datasources:explore` | Explore datasources |
## Default built-in role assignments
Built-in roles | Associated roles | Descriptions
--- | --- | ---
Grafana Admin | `fixed:permissions:admin:edit`<br>`fixed:permissions:admin:read`<br>`fixed:reporting:admin:edit`<br>`fixed:reporting:admin:read`<br>`fixed:users:admin:edit`<br>`fixed:users:admin:read`<br>`fixed:users:org:edit`<br>`fixed:users:org:read`<br>`fixed:ldap:admin:edit`<br>`fixed:ldap:admin:read`<br>`fixed:server:admin:read`<br>`fixed:settings:admin:read`<br>`fixed:settings:admin:edit` | Allows access to resources which [Grafana Server Admin]({{< relref "../../permissions/_index.md#grafana-server-admin-role" >}}) has permissions by default.
Admin | `fixed:users:org:edit`<br>`fixed:users:org:read`<br>`fixed:reporting:admin:edit`<br>`fixed:reporting:admin:read` | Allows access to resource which [Admin]({{< relref "../../permissions/organization_roles.md" >}}) has permissions by default.
Editor | `fixed:datasource:editor:read`
| Built-in roles | Associated roles | Descriptions |
| -------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Grafana Admin | `fixed:permissions:admin:edit`<br>`fixed:permissions:admin:read`<br>`fixed:reporting:admin:edit`<br>`fixed:reporting:admin:read`<br>`fixed:users:admin:edit`<br>`fixed:users:admin:read`<br>`fixed:users:org:edit`<br>`fixed:users:org:read`<br>`fixed:ldap:admin:edit`<br>`fixed:ldap:admin:read`<br>`fixed:server:admin:read`<br>`fixed:settings:admin:read`<br>`fixed:settings:admin:edit` | Allows access to resources for which [Grafana Server Admin]({{< relref "../../permissions/_index.md#grafana-server-admin-role" >}}) has permissions by default. |
| Admin | `fixed:users:org:edit`<br>`fixed:users:org:read`<br>`fixed:reporting:admin:edit`<br>`fixed:reporting:admin:read` | Allows access to resources for which [Admin]({{< relref "../../permissions/organization_roles.md" >}}) has permissions by default. |
| Editor | `fixed:datasource:editor:read` |

View File

@@ -9,7 +9,7 @@ weight = 115
A permission is an action and a scope. When creating a fine-grained access control, consider what specific action a user should be allowed to perform, and on what resources (its scope).
To grant permissions to a user, you create a built-in role assignment to map a role to a built-in role. A built-in role assignment *modifies* to one of the existing built-in roles in Grafana (Viewer, Editor, Admin). For more information, refer to [Built-in role assignments]({{< relref "./roles.md#built-in-role-assignments" >}}).
To grant permissions to a user, you create a built-in role assignment to map a role to a built-in role. A built-in role assignment _modifies_ one of the existing built-in roles in Grafana (Viewer, Editor, Admin). For more information, refer to [Built-in role assignments]({{< relref "./roles.md#built-in-role-assignments" >}}).
To learn more about which permissions are used for which resources, refer to [Resources with fine-grained permissions]({{< relref "./_index.md#resources-with-fine-grained-permissions" >}}).
@@ -23,61 +23,61 @@ scope
The following list contains fine-grained access control actions.
Actions | Applicable scopes | Descriptions
--- | --- | ---
`roles:list` | `roles:*` | List available roles without permissions.
`roles:read` | `roles:*` | Read a specific role with it's permissions.
`roles:write` | `permissions:delegate` | Create or update a custom role.
`roles:delete` | `permissions:delegate` | Delete a custom role.
`roles.builtin:list` | `roles:*` | List built-in role assignments.
`roles.builtin:add` | `permissions:delegate` | Create a built-in role assignment.
`roles.builtin:remove` | `permissions:delegate` | Delete a built-in role assignment.
`reports.admin:create` | `reports:*` | Create reports.
`reports.admin:write` | `reports:*` | Update reports.
`reports:delete` | `reports:*` | Delete reports.
`reports:read` | `reports:*` | List all available reports or get a specific report.
`reports:send` | `reports:*` | Send a report email.
`reports.settings:write` | n/a | Update report settings.
`reports.settings:read` | n/a | Read report settings.
`provisioning:reload` | `service:accesscontrol` | Reload provisioning files.
`users:read` | `global:users:*` | Read or search user profiles.
`users:write` | `global:users:*` | Update a users profile.
`users.teams:read` | `global:users:*` | Read a users teams.
`users.authtoken:list` | `global:users:*` | List authentication tokens that are assigned to a user.
`users.authtoken:update` | `global:users:*` | Update authentication tokens that are assigned to a user.
`users.password:update` | `global:users:*` | Update a users password.
`users:delete` | `global:users:*` | Delete a user.
`users:create` | n/a | Create a user.
`users:enable` | `global:users:*` | Enable a user.
`users:disable` | `global:users:*` | Disable a user.
`users.permissions:update` | `global:users:*` | Update a users organization-level permissions.
`users:logout` | `global:users:*` | Log out a user.
`users.quotas:list` | `global:users:*` | List a users quotas.
`users.quotas:update` | `global:users:*` | Update a users quotas.
`org.users.read` | `users:*` | Get user profiles within an organization.
`org.users.add` | `users:*` | Add a user to an organization.
`org.users.remove` | `users:*` | Remove a user from an organization.
`org.users.role:update` | `users:*` | Update the organization role (`Viewer`, `Editor`, `Admin`) for an organization.
`ldap.user:read` | n/a | Get a user via LDAP.
`ldap.user:sync` | n/a | Sync a user via LDAP.
`ldap.status:read` | n/a | Verify the LDAP servers availability.
`ldap.config:reload` | n/a | Reload the LDAP configuration.
`status:accesscontrol` | `service:accesscontrol` | Get access-control enabled status.
`settings:read` | `settings:**`<br>`settings:auth.saml:*`<br>`settings:auth.saml:enabled` (property level) | Read settings
`settings:write` | `settings:**`<br>`settings:auth.saml:*`<br>`settings:auth.saml:enabled` (property level) | Update settings
`server.stats:read` | n/a | Read server stats
`datasources:explore` | n/a | Enable explore
| Actions | Applicable scopes | Descriptions |
| -------------------------- | ---------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------- |
| `roles:list` | `roles:*` | List available roles without permissions. |
| `roles:read` | `roles:*` | Read a specific role with its permissions. |
| `roles:write` | `permissions:delegate` | Create or update a custom role. |
| `roles:delete` | `permissions:delegate` | Delete a custom role. |
| `roles.builtin:list` | `roles:*` | List built-in role assignments. |
| `roles.builtin:add` | `permissions:delegate` | Create a built-in role assignment. |
| `roles.builtin:remove` | `permissions:delegate` | Delete a built-in role assignment. |
| `reports.admin:create` | `reports:*` | Create reports. |
| `reports.admin:write` | `reports:*` | Update reports. |
| `reports:delete` | `reports:*` | Delete reports. |
| `reports:read` | `reports:*` | List all available reports or get a specific report. |
| `reports:send` | `reports:*` | Send a report email. |
| `reports.settings:write` | n/a | Update report settings. |
| `reports.settings:read` | n/a | Read report settings. |
| `provisioning:reload` | `service:accesscontrol` | Reload provisioning files. |
| `users:read` | `global:users:*` | Read or search user profiles. |
| `users:write` | `global:users:*` | Update a user's profile. |
| `users.teams:read` | `global:users:*` | Read a user's teams. |
| `users.authtoken:list` | `global:users:*` | List authentication tokens that are assigned to a user. |
| `users.authtoken:update` | `global:users:*` | Update authentication tokens that are assigned to a user. |
| `users.password:update` | `global:users:*` | Update a user's password. |
| `users:delete` | `global:users:*` | Delete a user. |
| `users:create` | n/a | Create a user. |
| `users:enable` | `global:users:*` | Enable a user. |
| `users:disable` | `global:users:*` | Disable a user. |
| `users.permissions:update` | `global:users:*` | Update a user's organization-level permissions. |
| `users:logout` | `global:users:*` | Log out a user. |
| `users.quotas:list` | `global:users:*` | List a user's quotas. |
| `users.quotas:update` | `global:users:*` | Update a user's quotas. |
| `org.users:read` | `users:*` | Get user profiles within an organization. |
| `org.users:add` | `users:*` | Add a user to an organization. |
| `org.users:remove` | `users:*` | Remove a user from an organization. |
| `org.users.role:update` | `users:*` | Update the organization role (`Viewer`, `Editor`, `Admin`) for an organization. |
| `ldap.user:read` | n/a | Get a user via LDAP. |
| `ldap.user:sync` | n/a | Sync a user via LDAP. |
| `ldap.status:read` | n/a | Verify the LDAP server's availability. |
| `ldap.config:reload` | n/a | Reload the LDAP configuration. |
| `status:accesscontrol` | `service:accesscontrol` | Get access-control enabled status. |
| `settings:read` | `settings:**`<br>`settings:auth.saml:*`<br>`settings:auth.saml:enabled` (property level) | Read settings |
| `settings:write` | `settings:**`<br>`settings:auth.saml:*`<br>`settings:auth.saml:enabled` (property level) | Update settings |
| `server.stats:read` | n/a | Read server stats |
| `datasources:explore` | n/a | Enable explore |
## Scope definitions
The following list contains fine-grained access control scopes.
Scopes | Descriptions
--- | ---
`roles:*` | Restrict an action to a set of roles. For example, `roles:*` matches any role, `roles:randomuid` matches only the role with UID `randomuid` and `roles:custom:reports:{editor,viewer}` matches both `custom:reports:editor` and `custom:reports:viewer` roles.
`permissions:delegate` | The scope is only applicable for roles associated with the Access Control itself and indicates that you can delegate your permissions only, or a subset of it, by creating a new role or making an assignment.
`reports:*` | Restrict an action to a set of reports. For example, `reports:*` matches any report and `reports:1` matches the report with id `1`.
`service:accesscontrol` | Restrict an action to target only the fine-grained access control service. For example, you can use this in conjunction with the `provisioning:reload` or the `status:accesscontrol` actions.
`global:users:*` | Restrict an action to a set of global users.
`users:*` | Restrict an action to a set of users from an organization.
`settings:**` | Restrict an action to a subset of settings. For example, `settings:**` matches all settings, `settings:auth.saml:*` matches all SAML settings, and `settings:auth.saml:enabled` matches the enable property on the SAML settings.
| Scopes | Descriptions |
| ----------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `roles:*` | Restrict an action to a set of roles. For example, `roles:*` matches any role, `roles:randomuid` matches only the role with UID `randomuid` and `roles:custom:reports:{editor,viewer}` matches both `custom:reports:editor` and `custom:reports:viewer` roles. |
| `permissions:delegate` | The scope is only applicable to roles associated with the Access Control itself and indicates that you can delegate only your own permissions, or a subset of them, by creating a new role or making an assignment. |
| `reports:*` | Restrict an action to a set of reports. For example, `reports:*` matches any report and `reports:1` matches the report with id `1`. |
| `service:accesscontrol` | Restrict an action to target only the fine-grained access control service. For example, you can use this in conjunction with the `provisioning:reload` or the `status:accesscontrol` actions. |
| `global:users:*` | Restrict an action to a set of global users. |
| `users:*` | Restrict an action to a set of users from an organization. |
| `settings:**` | Restrict an action to a subset of settings. For example, `settings:**` matches all settings, `settings:auth.saml:*` matches all SAML settings, and `settings:auth.saml:enabled` matches the enable property on the SAML settings. |

View File

@@ -6,7 +6,7 @@ weight = 120
+++
# Provisioning
You can create, change or remove [Custom roles]({{< relref "./roles.md#custom-roles" >}}) and create or remove [built-in role assignments]({{< relref "./roles.md#built-in-role-assignments" >}}), by adding one or more YAML configuration files in the [`provisioning/access-control/`]({{< relref "../../administration/configuration/#provisioning" >}}) directory.
Refer to [Grafana provisioning]({{< relref "../../administration/configuration/#provisioning" >}}) to learn more about provisioning.
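For example, a minimal sketch assuming a Linux package install where provisioning files live under `/etc/grafana/provisioning` (the directory and file name below are illustrative; adjust them for your setup):

```bash
# Hypothetical file name; any .yaml file in this directory is read by the provisioner.
sudo cp custom-roles.yaml /etc/grafana/provisioning/access-control/custom-roles.yaml

# Provisioning files are applied at startup, so restart Grafana to pick up the change.
sudo systemctl restart grafana-server
```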
@@ -38,16 +38,16 @@ apiVersion: 1
# Roles to insert into the database, or roles to update in the database
roles:
- name: custom:users:editor
description: "This role allows users to list, create, or update other users within the organization."
description: 'This role allows users to list, create, or update other users within the organization.'
version: 1
orgId: 1
permissions:
- action: "users:read"
scope: "users:*"
- action: "users:write"
scope: "users:*"
- action: "users:create"
scope: "users:*"
- action: 'users:read'
scope: 'users:*'
- action: 'users:write'
scope: 'users:*'
- action: 'users:create'
scope: 'users:*'
```
Here is an example YAML file to create a global role with a set of permissions, where the `global:true` option makes a role global:
@@ -59,26 +59,28 @@ apiVersion: 1
# Roles to insert into the database, or roles to update in the database
roles:
- name: custom:users:editor
description: "This role allows users to list, create, or update other users within the organization."
description: 'This role allows users to list, create, or update other users within the organization.'
version: 1
global: true
permissions:
- action: "users:read"
scope: "users:*"
- action: "users:write"
scope: "users:*"
- action: "users:create"
scope: "users:*"
- action: 'users:read'
scope: 'users:*'
- action: 'users:write'
scope: 'users:*'
- action: 'users:create'
scope: 'users:*'
```
The `orgId` is lost when the role is set to global.
### Delete roles
### Delete roles
To delete a role, add a list of roles under the `deleteRoles` section in the configuration file.
To delete a role, add a list of roles under the `deleteRoles` section in the configuration file.
> **Note:** Any role in the `deleteRoles` section is deleted before any role in the `roles` section is saved.
Here is an example YAML file to delete a role:
```yaml
# config file version
apiVersion: 1
@@ -105,19 +107,19 @@ apiVersion: 1
# Roles to insert/update in the database
roles:
- name: custom:users:editor
description: "This role allows users to list/create/update other users in the organization"
description: 'This role allows users to list/create/update other users in the organization'
version: 1
orgId: 1
permissions:
- action: "users:read"
scope: "users:*"
- action: "users:write"
scope: "users:*"
- action: "users:create"
scope: "users:*"
- action: 'users:read'
scope: 'users:*'
- action: 'users:write'
scope: 'users:*'
- action: 'users:create'
scope: 'users:*'
builtInRoles:
- name: "Editor"
- name: "Admin"
- name: 'Editor'
- name: 'Admin'
```
## Manage default built-in role assignments
@@ -129,15 +131,15 @@ During startup, Grafana creates [default built-in role assignments]({{< relref "
To remove default built-in role assignments, use the `removeDefaultAssignments` element in the configuration file. You need to provide the built-in role name and fixed role name.
Here is an example:
```yaml
# config file version
apiVersion: 1
# list of default built-in role assignments that should be removed
removeDefaultAssignments:
- builtInRole: "Grafana Admin"
fixedRole: "fixed:permissions:admin"
- builtInRole: 'Grafana Admin'
fixedRole: 'fixed:permissions:admin'
```
### Restore default assignment
@@ -145,14 +147,15 @@ removeDefaultAssignments:
To restore the default built-in role assignment, use the `addDefaultAssignments` element in the configuration file. You need to provide the built-in role name and the fixed-role name.
Here is an example:
```yaml
# config file version
apiVersion: 1
# list of default built-in role assignments that should be added back
addDefaultAssignments:
- builtInRole: "Admin"
fixedRole: "fixed:reporting:admin:read"
- builtInRole: 'Admin'
fixedRole: 'fixed:reporting:admin:read'
```
## Full example of a role configuration file
@@ -164,29 +167,29 @@ apiVersion: 1
# list of default built-in role assignments that should be removed
removeDefaultAssignments:
# <string>, must be one of the Organization roles (`Viewer`, `Editor`, `Admin`) or `Grafana Admin`
- builtInRole: "Grafana Admin"
- builtInRole: 'Grafana Admin'
# <string>, must be one of the existing fixed roles
fixedRole: "fixed:permissions:admin"
fixedRole: 'fixed:permissions:admin'
# list of default built-in role assignments that should be added back
addDefaultAssignments:
# <string>, must be one of the Organization roles (`Viewer`, `Editor`, `Admin`) or `Grafana Admin`
- builtInRole: "Admin"
- builtInRole: 'Admin'
# <string>, must be one of the existing fixed roles
fixedRole: "fixed:reporting:admin:read"
fixedRole: 'fixed:reporting:admin:read'
# list of roles that should be deleted
deleteRoles:
# <string> name of the role you want to create. Required if no uid is set
- name: "custom:reports:editor"
- name: 'custom:reports:editor'
# <string> uid of the role. Required if no name
uid: "customreportseditor1"
uid: 'customreportseditor1'
# <int> org id. will default to Grafana's default if not specified
orgId: 1
# <bool> force deletion revoking all grants of the role
force: true
- name: "custom:global:reports:reader"
uid: "customglobalreportsreader1"
- name: 'custom:global:reports:reader'
uid: 'customglobalreportsreader1'
# <bool> overwrite org id and removes a global role
global: true
force: true
@@ -194,44 +197,44 @@ deleteRoles:
# list of roles to insert/update depending on what is available in the database
roles:
# <string, required> name of the role you want to create. Required
- name: "custom:users:editor"
- name: 'custom:users:editor'
# <string> uid of the role. Has to be unique for all orgs.
uid: customuserseditor1
# <string> description of the role, informative purpose only.
description: "Role for our custom user editors"
description: 'Role for our custom user editors'
# <int> version of the role, Grafana will update the role when increased
version: 2
# <int> org id. will default to Grafana's default if not specified
orgId: 1
orgId: 1
# <list> list of the permissions granted by this role
permissions:
# <string, required> action allowed
- action: "users:read"
- action: 'users:read'
#<string> scope it applies to
scope: "users:*"
- action: "users:write"
scope: "users:*"
- action: "users:create"
scope: "users:*"
scope: 'users:*'
- action: 'users:write'
scope: 'users:*'
- action: 'users:create'
scope: 'users:*'
# <list> list of builtIn roles the role should be assigned to
builtInRoles:
# <string, required> name of the builtin role you want to assign the role to
- name: "Editor"
- name: 'Editor'
# <int> org id. will default to the role org id
orgId: 1
- name: "custom:global:users:reader"
uid: "customglobalusersreader1"
description: "Global Role for custom user readers"
orgId: 1
- name: 'custom:global:users:reader'
uid: 'customglobalusersreader1'
description: 'Global Role for custom user readers'
version: 1
# <bool> overwrite org id and creates a global role
global: true
permissions:
- action: "users:read"
scope: "users:*"
- action: 'users:read'
scope: 'users:*'
builtInRoles:
- name: "Viewer"
orgId: 1
- name: "Editor"
- name: 'Viewer'
orgId: 1
- name: 'Editor'
# <bool> overwrite org id and assign role globally
global: true
```
@@ -251,7 +254,7 @@ A basic set of validation rules are applied to the input `yaml` files.
### Roles
- `name` must not be empty
- `name` must not have `fixed:` prefix.
- `name` must not have `fixed:` prefix.
### Permissions
@@ -259,9 +262,9 @@ A basic set of validation rules are applied to the input `yaml` files.
### Built-in role assignments
- `name` must be one of the Organization roles (`Viewer`, `Editor`, `Admin`) or `Grafana Admin`.
- `name` must be one of the Organization roles (`Viewer`, `Editor`, `Admin`) or `Grafana Admin`.
- When `orgId` is not specified, it inherits the `orgId` from `role`. For global roles the default `orgId` is used.
- `orgId` in the `role` and in the assignment must be the same for non-global roles.
- `orgId` in the `role` and in the assignment must be the same for non-global roles.
### Role deletion

View File

@@ -10,6 +10,7 @@ weight = 105
A role represents a set of permissions that allow you to perform specific actions on Grafana resources. Refer to [Permissions]({{< relref "./permissions.md" >}}) to understand how permissions work.
There are two types of roles:
- [Fixed roles]({{< relref "./roles.md#fixed-roles" >}}), which provide granular access for specific resources within Grafana and are managed by Grafana itself.
- [Custom roles]({{< relref "./roles.md#custom-roles" >}}), which provide granular access based on a user-specified set of permissions.
@@ -25,7 +26,7 @@ Fixed roles provide convenience and guarantee of consistent behaviour by combini
There are a few basic rules for fixed roles:
- All fixed roles are _global_.
- All fixed roles have a `fixed:` prefix.
- All fixed roles have a `fixed:` prefix.
- You can't change or delete a fixed role.
For more information, refer to [Fine-grained access control references]({{< relref "./fine-grained-access-control-references.md#fine-grained-access-fixed-roles" >}}).
@@ -68,7 +69,7 @@ Note that you won't be able to create, update or delete a custom role with permi
## Built-in role assignments
To control what your users can and cannot access, you can assign or unassign [Custom roles]({{< ref "#custom-roles" >}}) or [Fixed roles]({{< ref "#fixed-roles" >}}) to the existing [Organization roles]({{< relref "../../permissions/organization_roles.md" >}}) or to the [Grafana Server Admin]({{< relref "../../permissions/_index.md#grafana-server-admin-role" >}}) role.
To control what your users can and cannot access, you can assign or unassign [Custom roles]({{< ref "#custom-roles" >}}) or [Fixed roles]({{< ref "#fixed-roles" >}}) to the existing [Organization roles]({{< relref "../../permissions/organization_roles.md" >}}) or to the [Grafana Server Admin]({{< relref "../../permissions/_index.md#grafana-server-admin-role" >}}) role.
These assignments are called built-in role assignments.
During startup, Grafana will create default assignments for you. When you make any changes to the built-in role assignments, Grafana will take them into account and won't overwrite them during the next start.
@@ -82,4 +83,4 @@ You can create or remove built-in role assignments using [Fine-grained access co
### Scope of assignments
A built-in role assignment can be either _global_ or _organization local_. _Global_ assignments are not mapped to any specific organization and will be applied to all organizations, whereas _organization local_ assignments are only applied for that specific organization.
You can only create _organization local_ assignments for _organization local_ roles.
You can only create _organization local_ assignments for _organization local_ roles.

View File

@@ -13,15 +13,17 @@ Before you get started, make sure to [enable fine-grained access control]({{< re
## Check all built-in role assignments
You can use the [Fine-grained access control HTTP API]({{< relref "../../http_api/access_control.md#get-all-built-in-role-assignments" >}}) to see all available built-in role assignments.
You can use the [Fine-grained access control HTTP API]({{< relref "../../http_api/access_control.md#get-all-built-in-role-assignments" >}}) to see all available built-in role assignments.
The response contains a mapping from one of the organization roles (`Viewer`, `Editor`, `Admin`) or `Grafana Admin` to the custom or fixed roles.
Example request:
```
curl --location --request GET '<grafana_url>/api/access-control/builtin-roles' --header 'Authorization: Basic YWRtaW46cGFzc3dvcmQ='
```
Example response:
```
{
"Admin": [
@@ -34,7 +36,7 @@ Example response:
"global": true,
"updated": "2021-05-17T20:49:18+02:00",
"created": "2021-05-13T16:24:26+02:00"
},
},
{
"version": 1,
"uid": "Kz9m_YjGz",
@@ -56,7 +58,7 @@ Example response:
"global": true,
"updated": "2021-05-17T20:49:18+02:00",
"created": "2021-05-13T16:24:26+02:00"
},
},
{
"version": 2,
"uid": "ajum_YjGk",
@@ -74,8 +76,8 @@ Example response:
"global": true,
"updated": "2021-05-17T20:49:17+02:00",
"created": "2021-05-13T16:24:26+02:00"
},
...
},
...
]
}
```
@@ -134,6 +136,7 @@ You can create your custom role by either using an [HTTP API]({{< relref "../../
You can take a look at [actions and scopes]({{< relref "./provisioning.md#action-definitions" >}}) to decide what permissions you would like to map to your role.
Example HTTP request:
```
curl --location --request POST '<grafana_url>/api/access-control/roles/' \
--header 'Authorization: Basic YWRtaW46cGFzc3dvcmQ=' \
@@ -163,7 +166,7 @@ Example response:
"global": true,
"permissions": [
{
"action": "users:create"
"action": "users:create"
"updated": "2021-05-17T22:07:31.569936+02:00",
"created": "2021-05-17T22:07:31.569935+02:00"
}
@@ -173,7 +176,7 @@ Example response:
}
```
Once the custom role is created, you can create a built-in role assignment by using an [HTTP API]({{< relref "../../http_api/access_control.md#create-a-built-in-role-assignment" >}}).
Once the custom role is created, you can create a built-in role assignment by using an [HTTP API]({{< relref "../../http_api/access_control.md#create-a-built-in-role-assignment" >}}).
If you created your role using [Grafana provisioning]({{< relref "./provisioning.md" >}}), you can also create the assignment with it.
Example HTTP request:
@@ -212,8 +215,8 @@ In order to create users, you would need to have `users:create` permission. By d
If you want to prevent Grafana Admin from creating users, you can do the following:
1. [Check all built-in role assignments]({{< ref "#check-all-built-in-role-assignments" >}}) to see what built-in role assignments are available.
1. From built-in role assignments, find the role which gives `users:create` permission. Refer to [fixed roles]({{< relref "./roles.md#fixed-roles" >}}) for full list of permission assignments.
1. [Check all built-in role assignments]({{< ref "#check-all-built-in-role-assignments" >}}) to see what built-in role assignments are available.
1. From built-in role assignments, find the role that grants the `users:create` permission. Refer to [fixed roles]({{< relref "./roles.md#fixed-roles" >}}) for a full list of permission assignments.
1. Remove the built-in role assignment by using the [Fine-grained access control HTTP API]({{< relref "../../http_api/access_control.md" >}}) (a sketch follows this list) or by using [Grafana provisioning]({{< relref "./provisioning" >}}).
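The following is a hedged sketch of the HTTP API option. The endpoint path, built-in role, and role UID are placeholders and assumptions; verify the exact syntax against the Fine-grained access control HTTP API reference.

```bash
# Hypothetical sketch: delete the built-in role assignment that grants `users:create`.
# <built-in_role> is the assignment's built-in role (for example, Grafana Admin) and
# <role_uid> is the UID of the fixed or custom role being unassigned.
curl --location --request DELETE '<grafana_url>/api/access-control/builtin-roles/<built-in_role>/roles/<role_uid>' \
--header 'Authorization: Basic YWRtaW46cGFzc3dvcmQ='
```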
## Allow Editors to create new custom roles
@@ -223,4 +226,4 @@ By default, Grafana Server Admin is the only user who can create and manage cust
1. The first option is to create a built-in role assignment and map the `fixed:permissions:admin:edit` and `fixed:permissions:admin:read` fixed roles to the `Editor` built-in role.
1. The second option is to [create a custom role]({{< ref "#create-your-custom-role" >}}) with `roles.builtin:add` and `roles:write` permissions, and create a built-in role assignment for the `Editor` organization role.
Note that in any scenario, your `Editor` would be able to create and manage roles only with the permissions they have, or with a subset of them.
Note that in any scenario, your `Editor` would be able to create and manage roles only with the permissions they have, or with a subset of them.

View File

@@ -22,43 +22,43 @@ Audit logs are JSON objects representing user actions like:
Audit logs contain the following fields. The fields followed by **\*** are always available, the others depend on the type of action logged.
| Field name | Type | Description |
| ---------- | ---- | ----------- |
| `timestamp`\* | string | The date and time the request was made, in coordinated universal time (UTC) using the [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.6) format. |
| `user`\* | object | Information about the user that made the request. Either one of the `UserID` or `ApiKeyID` fields will contain content if `isAnonymous=false`. |
| `user.userId` | number | ID of the Grafana user that made the request. |
| `user.orgId`\* | number | Current organization of the user that made the request. |
| `user.orgRole` | string | Current role of the user that made the request. |
| `user.name` | string | Name of the Grafana user that made the request. |
| `user.tokenId` | number | ID of the user authentication token. |
| `user.apiKeyId` | number | ID of the Grafana API key used to make the request. |
| `user.isAnonymous`\* | boolean | If an anonymous user made the request, `true`. Otherwise, `false`. |
| `action`\* | string | The request action. For example, `create`, `update`, or `manage-permissions`. |
| `request`\* | object | Information about the HTTP request. |
| `request.params` | object | Requests path parameters. |
| `request.query` | object | Requests query parameters. |
| `request.body` | string | Requests body. |
| `result`\* | object | Information about the HTTP response. |
| `result.statusType` | string | If the request action was successful, `success`. Otherwise, `failure`. |
| `result.statusCode` | number | HTTP status of the request. |
| `result.failureMessage` | string | HTTP error message. |
| `result.body` | string | Response body. |
| `resources` | array | Information about the resources that the request action affected. This field can be null for non-resource actions such as `login` or `logout`. |
| `resources[x].id`\* | number | ID of the resource. |
| `resources[x].type`\* | string | The type of the resource that was logged: `alert`, `alert-notification`, `annotation`, `api-key`, `auth-token`, `dashboard`, `datasource`, `folder`, `org`, `panel`, `playlist`, `report`, `team`, `user`, or `version`. |
| `requestUri`\* | string | Request URI. |
| `ipAddress`\* | string | IP address that the request was made from. |
| `userAgent`\* | string | Agent through which the request was made. |
| `grafanaVersion`\* | string | Current version of Grafana when this log is created. |
| `additionalData` | object | Additional information that can be provided about the request. |
| Field name | Type | Description |
| ----------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `timestamp`\* | string | The date and time the request was made, in coordinated universal time (UTC) using the [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.6) format. |
| `user`\* | object | Information about the user that made the request. Either one of the `UserID` or `ApiKeyID` fields will contain content if `isAnonymous=false`. |
| `user.userId` | number | ID of the Grafana user that made the request. |
| `user.orgId`\* | number | Current organization of the user that made the request. |
| `user.orgRole` | string | Current role of the user that made the request. |
| `user.name` | string | Name of the Grafana user that made the request. |
| `user.tokenId` | number | ID of the user authentication token. |
| `user.apiKeyId` | number | ID of the Grafana API key used to make the request. |
| `user.isAnonymous`\* | boolean | If an anonymous user made the request, `true`. Otherwise, `false`. |
| `action`\* | string | The request action. For example, `create`, `update`, or `manage-permissions`. |
| `request`\* | object | Information about the HTTP request. |
| `request.params` | object | Request's path parameters. |
| `request.query` | object | Request's query parameters. |
| `request.body` | string | Request's body. |
| `result`\* | object | Information about the HTTP response. |
| `result.statusType` | string | If the request action was successful, `success`. Otherwise, `failure`. |
| `result.statusCode` | number | HTTP status of the request. |
| `result.failureMessage` | string | HTTP error message. |
| `result.body` | string | Response body. |
| `resources` | array | Information about the resources that the request action affected. This field can be null for non-resource actions such as `login` or `logout`. |
| `resources[x].id`\* | number | ID of the resource. |
| `resources[x].type`\* | string | The type of the resource that was logged: `alert`, `alert-notification`, `annotation`, `api-key`, `auth-token`, `dashboard`, `datasource`, `folder`, `org`, `panel`, `playlist`, `report`, `team`, `user`, or `version`. |
| `requestUri`\* | string | Request URI. |
| `ipAddress`\* | string | IP address that the request was made from. |
| `userAgent`\* | string | Agent through which the request was made. |
| `grafanaVersion`\* | string | Current version of Grafana when this log is created. |
| `additionalData` | object | Additional information that can be provided about the request. |
The `additionalData` field can contain the following information:
| Field name | Action | Description |
| ---------- | ------ | ----------- |
| `loginUsername` | `login` | Login used in the Grafana authentication form. |
| `extUserInfo` | `login` | User information provided by the external system that was used to log in. |
| `extUserInfo` | `login` | User information provided by the external system that was used to log in. |
| `authTokenCount` | `login` | Number of active authentication tokens for the user that logged in. |
| `terminationReason` | `logout` | The reason why the user logged out, such as a manual logout or a token expiring. |
| `terminationReason` | `logout` | The reason why the user logged out, such as a manual logout or a token expiring. |
### Recorded actions

View File

@@ -52,7 +52,7 @@ After you have enabled permissions for a data source you can assign query permis
If you have enabled permissions for a data source and want to return data source permissions to the default, then you can disable permissions with a click of a button.
Note that *all* existing permissions created for the data source will be deleted.
Note that _all_ existing permissions created for the data source will be deleted.
**Disable permissions for a data source:**

View File

@@ -362,6 +362,7 @@ This value limits the size of a single cache value. If a cache value (or query r
The default is `1`.
## [caching.encryption]
### enabled
When 'enabled' is `true`, query values in the cache are encrypted.
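For example, a minimal configuration sketch that turns on cache value encryption:

```bash
# grafana.ini -- encrypt query values stored in the cache
[caching.encryption]
enabled = true
```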

View File

@@ -17,7 +17,7 @@ To download your Grafana Enterprise license:
1. Log in to your [Grafana Cloud Account](https://grafana.com).
1. Go to your **Org Profile**.
1. Go to the section for Grafana Enterprise licenses in the side menu.
1. At the bottom of the license details page there is **Download Token** link that will download the *license.jwt* file containing your license to your computer.
1. At the bottom of the license details page, there is a **Download Token** link that will download the _license.jwt_ file containing your license to your computer.
## Step 2. Add your license to a Grafana instance
@@ -34,29 +34,29 @@ This is the preferred option for single instance installations of Grafana Enterp
### Place the license.jwt file in Grafana's data folder
The data folder is usually `/var/lib/grafana` on Linux systems.
The data folder is usually `/var/lib/grafana` on Linux systems.
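For example, a minimal sketch for a typical Linux install (the download location and the `grafana` system user are assumptions; adjust them for your environment):

```bash
# Copy the downloaded license into Grafana's data folder and make it readable
# by the user the Grafana server runs as.
sudo cp ~/Downloads/license.jwt /var/lib/grafana/license.jwt
sudo chown grafana:grafana /var/lib/grafana/license.jwt
```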
You can also configure a custom location for the license file using the grafana.ini setting:
You can also configure a custom location for the license file using the grafana.ini setting:
```bash
[enterprise]
license_path = /company/secrets/license.jwt
```
```bash
[enterprise]
license_path = /company/secrets/license.jwt
```
This setting can also be set with an environment variable, which is useful if you're running Grafana with Docker and have a custom volume where you have placed the license file. In this case, set the environment variable `GF_ENTERPRISE_LICENSE_PATH` to point to the location of your license file.
This setting can also be set with an environment variable, which is useful if you're running Grafana with Docker and have a custom volume where you have placed the license file. In this case, set the environment variable `GF_ENTERPRISE_LICENSE_PATH` to point to the location of your license file.
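For example, a sketch of running the Grafana Enterprise Docker image with the license mounted from a host directory (the volume path mirrors the example above and is only illustrative):

```bash
# Mount the directory that contains license.jwt and point Grafana at the file.
docker run -d -p 3000:3000 \
  -v /company/secrets:/company/secrets \
  -e GF_ENTERPRISE_LICENSE_PATH=/company/secrets/license.jwt \
  grafana/grafana-enterprise
```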
### Set the content of the license file as a configuration option
You can add a license by pasting the content of the `license.jwt`
to the grafana.ini configuration file:
You can add a license by pasting the content of the `license.jwt`
to the grafana.ini configuration file:
```bash
[enterprise]
license_text = eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ0aGlzIjoiaXMiLCJub3QiOiJhIiwidmFsaWQiOiJsaWNlbnNlIn0.bxDzxIoJlYMwiEYKYT_l2s42z0Y30tY-6KKoyz9RuLE
```
This option can be set using the `GF_ENTERPRISE_LICENSE_TEXT`
environment variable.
```bash
[enterprise]
license_text = eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ0aGlzIjoiaXMiLCJub3QiOiJhIiwidmFsaWQiOiJsaWNlbnNlIn0.bxDzxIoJlYMwiEYKYT_l2s42z0Y30tY-6KKoyz9RuLE
```
This option can be set using the `GF_ENTERPRISE_LICENSE_TEXT`
environment variable.
## Step 3. Ensure that the license file's root URL matches the root_url configuration option

View File

@@ -21,13 +21,14 @@ Grafana licenses allow for a certain number of active users per instance. An act
In the context of licensing, each user is classified as either a viewer or an editor/admin. This classification is the user's **licensed role**, and it can be different from that user's [organization role]({{< relref "../../permissions/organization_roles.md" >}}) in Grafana.
- An editor/admin is a user who has permission to edit and save a dashboard. Examples of editors are as follows:
- Grafana server administrators.
- Users who are assigned an organization role of Editor or Admin.
- Users who have been granted admin or edit permissions at the dashboard or folder level. Refer to [Dashboard and folder permissions]({{< relref "../../permissions/dashboard-folder-permissions.md" >}}). This means that even if a user is assigned to an organization role of Viewer they will be counted as an editor.
- Grafana server administrators.
- Users who are assigned an organization role of Editor or Admin.
  - Users who have been granted admin or edit permissions at the dashboard or folder level. Refer to [Dashboard and folder permissions]({{< relref "../../permissions/dashboard-folder-permissions.md" >}}). This means that even if a user is assigned an organization role of Viewer, they will be counted as an editor.
- A viewer is a user with the Viewer role, which does not permit the user to save a dashboard.
Additional details:
- When the number of maximum active viewers or editor/admins is reached, only those currently active users can sign in. New users or non-active users cannot sign in.
- When the number of maximum active viewers or editor/admins is reached, only those currently active users can sign in. New users or non-active users cannot sign in.
- A license limit banner will appear to admins when Grafana reaches its active user limit. Editor/admins and viewers will not see the banner.
- To see how many active users you have in each licensed role (Viewer or Editor/Admin), refer to the Licensing page in the Server Admin section of Grafana, which is located at `[your-grafana-url.com]/admin/licensing`. Please note that _licensed_ roles can differ from the Active Viewer/Editor/Admin counts on the /admin/stats page in Grafana. This is because the Stats page only counts a user's assigned organization role and does not account for dashboard and folder permissions.
- Restrictions are applied separately for viewers and editor/admins. If a Grafana instance reaches its limit of active viewers but not its limit of active editor/admins, new editors and admins will still be able to sign in.
@@ -59,6 +60,7 @@ License URL is the root URL of your Grafana instance. The license will not work
This CSV report helps to identify users, teams, and roles that have been granted Admin or Edit permissions at the dashboard or folder level.
To download the report:
1. Hover your cursor over the **Server Admin** (shield) icon in the side menu and then click **Licensing**.
1. At the bottom of the page, click **Download report**.

View File

@@ -30,6 +30,7 @@ You can make a panel retrieve fresh data more frequently by increasing the **Max
## Data sources that work with query caching
Query caching works for all [Enterprise data sources](https://grafana.com/grafana/plugins/?type=datasource&enterprise=1), and it works for the following [built-in data sources]({{< relref "../datasources/_index.md" >}}):
- CloudWatch
- Google Cloud Monitoring
- InfluxDB
@@ -49,11 +50,12 @@ To tell if a data source works with query caching, follow the instructions below
You must be an Org admin or Grafana admin to enable query caching for a data source. For more information on Grafana roles and permissions, visit the [Permissions page]({{< relref "../permissions/_index.md" >}}).
By default, data source queries are not cached. To enable query caching for a single data source:
1. On the side menu, click Configuration > Data Sources.
1. In the data source list, click the data source that you want to turn on caching for.
1. In the Cache tab, click Enable.
1. Open the Cache tab.
1. Press the Enable button.
1. Press the Enable button.
1. (Optional) Choose a custom TTL for that data source. If you skip this step, then Grafana uses the default TTL.
> **Note:** If query caching is enabled and the Cache tab is not visible in a data source's settings, then query caching is not available for that data source.
@@ -62,7 +64,8 @@ To configure global settings for query caching, refer the the [Query caching sec
## Disable query caching
To disable query caching for a single data source:
To disable query caching for a single data source:
1. On the side menu, click Configuration > Data Sources.
1. In the data source list, click the data source that you want to turn off caching for.
1. In the Cache tab, click Disable.

Some files were not shown because too many files have changed in this diff.