Docs: Refactor data sources content (#57573)

* Docs: Revise data source index

* Docs: Consolidate data source administration docs

* Docs: Revise panels docs related to data sources

* Docs: Revise Alertmanager data source

* Docs: Reorganize AWS CloudWatch data source docs

* Docs: Reorganize Azure Monitor data source docs

* Docs: Move azuremonitor to azure-monitor

* Docs: Revise Elasticsearch docs

* Docs: Move Elasticsearch index into bundle

* Docs: Revise GCM docs

* Docs: Revise Graphite docs

* Docs: Move Graphite index into bundle

* Docs: Revise InfluxDB docs

* Docs: Revise Jaeger docs

* Docs: Move Jaeger index into bundle

* Docs: Revise Loki docs

* Docs: Move Loki index into bundle

* Docs: Revise MS SQL docs

* Docs: Move MS SQL index into bundle

* Docs: Revise Prometheus docs

* Docs: Move Prometheus index into bundle

* Docs: Revise Tempo docs

* Docs: Move Tempo index into bundle

* Docs: Revise TestData DB docs

* Docs: Move TestData DB index into bundle

* Docs: Revise Zipkin docs

* Docs: Move Zipkin index into bundle

* Docs: Move other data sources' index pages into bundles

* Docs: Revise frontmatter

* Fixing hugo markdown errors

* Docs: Add query editor and template var sections to overview doc

* Docs: Remove CTAs across data source docs

* Docs: Remove CTA

* Docs: Remove CTA

* Docs: Fix links, images, typos, and usage consistency.

* Docs: Fix typos

* Docs: Fix CI issues

* Update docs/sources/datasources/_index.md

Co-authored-by: Torkel Ödegaard <torkel@grafana.com>

* Update docs/sources/datasources/_index.md

Co-authored-by: Torkel Ödegaard <torkel@grafana.com>

* Docs: Fix query editor links

* Update docs/sources/panels-visualizations/_index.md

Co-authored-by: Torkel Ödegaard <torkel@grafana.com>

* Update docs/sources/panels-visualizations/_index.md

Co-authored-by: Torkel Ödegaard <torkel@grafana.com>

* Docs: Rebundle child pages per writers' toolkit

* Docs: Fix prettier for CI

* Docs: Fix relrefs from outside data sources docs

* Docs: Fix broken relrefs within datasources

* Docs: Fix relrefs to data sources docs

* Fixed some more refs

Co-authored-by: Torkel Ödegaard <torkel@grafana.com>
Garrett Guillotte 2022-11-01 08:22:06 -07:00 committed by GitHub
parent 22648d8581
commit 852d069a3c
77 changed files with 5722 additions and 4410 deletions


@ -105,7 +105,7 @@ title: Grafana documentation
<img src="/static/img/docs/logos/icon_cloudwatch.svg">
<h5>AWS CloudWatch</h5>
</a>
<a href="{{< relref "datasources/azuremonitor/" >}}" class="nav-cards__item nav-cards__item--ds">
<a href="{{< relref "datasources/azure-monitor/" >}}" class="nav-cards__item nav-cards__item--ds">
<img src="/static/img/docs/logos/icon_azure_monitor.jpg">
<h5>Azure Monitor</h5>
</a>


@ -1,6 +1,7 @@
---
aliases:
- /docs/grafana/latest/datasources/add-a-data-source/
- /docs/grafana/latest/datasources/datasource_permissions/
- /docs/grafana/latest/features/datasources/add-a-data-source/
- /docs/grafana/latest/enterprise/datasource_permissions/
- /docs/grafana/latest/permissions/datasource_permissions/
@ -13,39 +14,50 @@ weight: 100
# Data source management
Grafana supports many different storage backends for your time series data (data source). Refer to [data sources]({{< relref "../../datasources/" >}}) for more information about using data sources in Grafana. Only users with the organization admin role can add data sources.
Grafana supports many different storage backends for your time series data (data source).
Refer to [data sources]({{< relref "../../datasources" >}}) for more information about using data sources in Grafana.
Only users with the organization admin role can add data sources.
## Add a data source
Before you can create your first dashboard, you need to add your data source.
> **Note:** Only users with the organization Admin role can add data sources.
> **Note:** Only users with the organization admin role can add data sources.
To add a data source:
**To add a data source:**
1. Move your cursor to the cog icon on the side menu which will show the configuration options.
1. Select the cog icon on the side menu to show the configuration options.
{{< figure src="/static/img/docs/v75/sidemenu-datasource-7-5.png" max-width="150px" class="docs-image--no-shadow">}}
1. Click on **Data sources**. The data sources page opens showing a list of previously configured data sources for the Grafana instance.
1. Select **Data sources**.
1. Click **Add data source** to see a list of all supported data sources.
This opens the data sources page, which displays a list of previously configured data sources for the Grafana instance.
1. Select **Add data source** to see a list of all supported data sources.
{{< figure src="/static/img/docs/v75/add-data-source-7-5.png" max-width="600px" class="docs-image--no-shadow">}}
1. Search for a specific data source by entering the name in the search dialog. Or you can scroll through supported data sources grouped into time series, logging, tracing and other categories.
1. Enter the name of a specific data source in the search dialog.
You can also scroll through supported data sources grouped into time series, logging, tracing, and other categories.
1. Move the cursor over the data source you want to add.
{{< figure src="/static/img/docs/v75/select-data-source-7-5.png" max-width="700px" class="docs-image--no-shadow">}}
1. Click **Select**. The data source configuration page opens.
1. Select **Select**.
1. Configure the data source following instructions specific to that data source. See [Data sources]({{< relref "../../datasources" >}}) for links to configuration instructions for all supported data sources.
This opens the data source configuration page.
1. Configure the data source following instructions specific to that data source.
For links to data source-specific documentation, see [Data sources]({{< relref "../../datasources" >}}).
## Data source permissions
Data source permissions allow you to restrict access for users to query a data source. For each data source there is a permission page that allows you to enable permissions and restrict query permissions to specific **Users** and **Teams**.
You can configure data source permissions to allow or deny certain users the ability to query a data source.
Each data source's configuration includes a permissions page where you can enable permissions and restrict query permissions to specific **Users** and **Teams**.
> **Note:** Available in [Grafana Enterprise]({{< relref "../../introduction/grafana-enterprise/" >}}) and [Grafana Cloud Pro and Advanced](/docs/grafana-cloud).
@ -182,3 +194,22 @@ If you experience performance issues or repeated queries become slower to execut
### Sending a request without cache
If a data source query request contains an `X-Cache-Skip` header, then Grafana skips the caching middleware, and does not search the cache for a response. This can be particularly useful when debugging data source queries using cURL.
## Add data source plugins
Grafana ships with several [built-in data sources]({{< relref "../../datasources#built-in-core-data-sources" >}}).
You can add additional data sources as plugins, which you can install or create yourself.
### Find data source plugins in the plugin catalog
To view available data source plugins, go to the [plugin catalog](/grafana/plugins/?type=datasource) and select the "Data sources" filter.
For details about the plugin catalog, refer to [Plugin management]({{< relref "../../administration/plugin-management/" >}}).
You can further filter the plugin catalog's results for data sources provided by the Grafana community, Grafana Labs, and partners.
If you use [Grafana Enterprise]({{< relref "../../enterprise/" >}}), you can also filter by Enterprise-supported plugins.
For more documentation on a specific data source plugin's features, including its query language and editor, refer to its plugin catalog page.
### Create a data source plugin
To build your own data source plugin, refer to the ["Build a data source plugin"](/tutorials/build-a-data-source-plugin/) tutorial and our documentation about [building a plugin](/developers/plugins/).


@ -64,86 +64,109 @@ Currently we do not provide any scripts/manifests for configuring Grafana. Rathe
## Data sources
> This feature is available from v5.0
> **Note:** Available in Grafana v5.0 and higher.
It's possible to manage data sources in Grafana by adding one or more YAML config files in the [`provisioning/datasources`](/administration/configuration/#provisioning) directory. Each config file can contain a list of `datasources` that will get added or updated during start up. If the data source already exists, then Grafana updates it to match the configuration file. The config file can also contain a list of data sources that should be deleted. That list is called `deleteDatasources`. Grafana will delete data sources listed in `deleteDatasources` before inserting/updating those in the `datasource` list.
You can manage data sources in Grafana by adding YAML configuration files in the [`provisioning/datasources`]({{< relref "../../setup-grafana/configure-grafana#provisioning" >}}) directory.
Each config file can contain a list of `datasources` to add or update during startup.
If the data source already exists, Grafana reconfigures it to match the provisioned configuration file.
### Running Multiple Grafana Instances
The configuration file can also list data sources to automatically delete, called `deleteDatasources`.
Grafana deletes the data sources listed in `deleteDatasources` _before_ adding or updating those in the `datasources` list.
If you are running multiple instances of Grafana you might run into problems if they have different versions of the `datasource.yaml` configuration file. The best way to solve this problem is to add a version number to each datasource in the configuration and increase it when you update the config. Grafana will only update datasources with the same or lower version number than specified in the config. That way, old configs cannot overwrite newer configs if they restart at the same time.
### Running multiple Grafana instances
### Example data source Config File
If you run multiple instances of Grafana, add a version number to each data source in the configuration and increase it when you update the configuration.
Grafana updates only data sources with the same or lower version number than specified in the config.
This prevents old configurations from overwriting newer ones if you have different versions of the `datasource.yaml` file that don't define version numbers, and then restart instances at the same time.
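For example, a minimal sketch of a versioned data source entry; the data source details are illustrative, and only the `version` field matters here:
```yaml
apiVersion: 1

datasources:
  - name: Graphite
    type: graphite
    access: proxy
    url: http://localhost:8080
    # Increment this each time you change the file. Instances that
    # already store version 2 or higher ignore this definition.
    version: 2
```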
### Example data source config file
This example provisions a [Graphite data source]({{< relref "../../datasources/graphite" >}}):
```yaml
# config file version
# Configuration file version
apiVersion: 1
# list of datasources that should be deleted from the database
# List of data sources to delete from the database.
deleteDatasources:
- name: Graphite
orgId: 1
# list of datasources to insert/update depending
# what's available in the database
# List of data sources to insert/update depending on what's
# available in the database.
datasources:
# <string, required> name of the datasource. Required
# <string, required> Sets the name you use to refer to
# the data source in panels and queries.
- name: Graphite
# <string, required> datasource type. Required
# <string, required> Sets the data source type.
type: graphite
# <string, required> access mode. proxy or direct (Server or Browser in the UI). Required
# <string, required> Sets the access mode, either
# proxy or direct (Server or Browser in the UI).
# Some data sources are incompatible with any setting
# but proxy (Server).
access: proxy
# <int> org id. will default to orgId 1 if not specified
# <int> Sets the organization id. Defaults to orgId 1.
orgId: 1
# <string> custom UID which can be used to reference this datasource in other parts of the configuration, if not specified will be generated automatically
# <string> Sets a custom UID to reference this
# data source in other parts of the configuration.
# If not specified, Grafana generates one.
uid: my_unique_uid
# <string> url
# <string> Sets the data source's URL, including the
# port.
url: http://localhost:8080
# <string> database user, if used
# <string> Sets the database user, if necessary.
user:
# <string> database name, if used
# <string> Sets the database name, if necessary.
database:
# <bool> enable/disable basic auth
# <bool> Enables basic authorization.
basicAuth:
# <string> basic auth username
# <string> Sets the basic authorization username.
basicAuthUser:
# <bool> enable/disable with credentials headers
# <bool> Enables credential headers.
withCredentials:
# <bool> mark as default datasource. Max one per org
# <bool> Toggles whether the data source is pre-selected
# for new panels. You can set only one default
# data source per organization.
isDefault:
# <map> fields that will be converted to json and stored in jsonData
# <map> Fields to convert to JSON and store in jsonData.
jsonData:
# <string> Defines the Graphite service's version.
graphiteVersion: '1.1'
# <bool> Enables TLS authentication using a client
# certificate configured in secureJsonData.
tlsAuth: true
# <bool> Enables TLS authentication using a CA
# certificate.
tlsAuthWithCACert: true
# <string> json object of data that will be encrypted.
# <map> Fields to encrypt before storing in jsonData.
secureJsonData:
# <string> Defines the CA cert, client cert, and
# client key for encrypted authentication.
tlsCACert: '...'
tlsClientCert: '...'
tlsClientKey: '...'
# <string> database password, if used
# <string> Sets the database password, if necessary.
password:
# <string> basic auth password
# <string> Sets the basic authorization password.
basicAuthPassword:
version: 1
# <bool> allow users to edit datasources from the UI.
# <bool> Allows users to edit data sources from the
# Grafana UI.
editable: false
```
#### Custom Settings per Datasource
Please refer to each datasource documentation for specific provisioning examples.
| Datasource | Misc |
| ------------- | ---------------------------------------------------------------------------------- |
| Elasticsearch | Elasticsearch uses the `database` property to configure the index for a datasource |
For provisioning examples of specific data sources, refer to that [data source's documentation]({{< relref "../../datasources" >}}).
#### JSON Data
Since not all datasources have the same configuration settings we only have the most common ones as fields. The rest should be stored as a json blob in the `jsonData` field. Here are the most common settings that the core datasources use.
Since not all data sources have the same configuration settings, we include only the most common ones as fields.
To provision the rest of a data source's settings, include them as a JSON blob in the `jsonData` field.
> **Note:** Datasources tagged with _HTTP\*_ below denotes any data source which communicates using the HTTP protocol, e.g. all core data source plugins except MySQL, PostgreSQL and MSSQL.
Common settings in the [built-in core data sources]({{< relref "../../datasources#built-in-core-data-sources" >}}) include:
| Name | Type | Datasource | Description |
> **Note:** Data sources tagged with _HTTP\*_ communicate using the HTTP protocol, which includes all core data source plugins except MySQL, PostgreSQL, and MSSQL.
| Name | Type | Data source | Description |
| -------------------------- | ------- | ---------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| tlsAuth | boolean | _HTTP\*_, MySQL | Enable TLS authentication using client cert configured in secure json data |
| tlsAuthWithCACert | boolean | _HTTP\*_, MySQL, PostgreSQL | Enable TLS authentication using CA cert |
@ -189,19 +212,19 @@ Since not all datasources have the same configuration settings we only have the
| maxOpenConns | number | MySQL, PostgreSQL and MSSQL | Maximum number of open connections to the database (Grafana v5.4+) |
| maxIdleConns | number | MySQL, PostgreSQL and MSSQL | Maximum number of connections in the idle connection pool (Grafana v5.4+) |
| connMaxLifetime | number | MySQL, PostgreSQL and MSSQL | Maximum amount of time in seconds a connection may be reused (Grafana v5.4+) |
| keepCookies | array | _HTTP\*_ | Cookies that needs to be passed along while communicating with datasources |
| prometheusVersion | string | Prometheus | The version of the Prometheus datasource (e.g. `2.37.0`, `2.24.0`) |
| prometheusType | string | Prometheus | The type of the Prometheus datasources (i.e. `Prometheus`, `Cortex`, `Thanos`, or `Mimir`) |
| keepCookies | array | _HTTP\*_ | Cookies that need to be passed along while communicating with data sources |
| prometheusVersion | string | Prometheus | The version of the Prometheus data source, such as `2.37.0`, `2.24.0` |
| prometheusType | string | Prometheus | The type of the Prometheus data source, such as `Prometheus`, `Cortex`, `Thanos`, or `Mimir` |
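For example, a hedged sketch of a Prometheus data source provisioned with a few of the `jsonData` settings above; the URL and values are placeholders:
```yaml
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    jsonData:
      # Common settings from the table above, stored as-is in jsonData.
      prometheusType: Prometheus
      prometheusVersion: '2.37.0'
      # Hypothetical cookie name to forward to the data source.
      keepCookies: ['grafana_session']
```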
#### Secure Json Data
For examples of specific data sources' JSON data, refer to that [data source's documentation]({{< relref "../../datasources" >}}).
`{"authType":"keys","defaultRegion":"us-west-2","timeField":"@timestamp"}`
#### Secure JSON Data
Secure json data is a map of settings that will be encrypted with [secret key]({{< relref "../../setup-grafana/configure-grafana/#secret_key" >}}) from the Grafana config. The purpose of this is only to hide content from the users of the application. This should be used for storing TLS Cert and password that Grafana will append to the request on the server side. All of these settings are optional.
Secure JSON data is a map of settings that will be encrypted with the [secret key]({{< relref "../../setup-grafana/configure-grafana#secret_key" >}}) from the Grafana config. The purpose of this is only to hide content from the users of the application. Use it to store the TLS certificates and passwords that Grafana appends to requests on the server side. All of these settings are optional.
> **Note:** Datasources tagged with _HTTP\*_ below denotes any data source which communicates using the HTTP protocol, e.g. all core data source plugins except MySQL, PostgreSQL and MSSQL.
> **Note:** The _HTTP\*_ tag denotes data sources that communicate using the HTTP protocol, including all core data source plugins except MySQL, PostgreSQL, and MSSQL.
| Name | Type | Datasource | Description |
| Name | Type | Data source | Description |
| ----------------- | ------ | ---------------------------------- | -------------------------------------------------------- |
| tlsCACert | string | _HTTP\*_, MySQL, PostgreSQL | CA cert for outgoing requests |
| tlsClientCert | string | _HTTP\*_, MySQL, PostgreSQL | TLS Client cert for outgoing requests |
@ -213,10 +236,10 @@ Secure json data is a map of settings that will be encrypted with [secret key]({
| sigV4AccessKey | string | Elasticsearch and Prometheus | SigV4 access key. Required when using keys auth provider |
| sigV4SecretKey | string | Elasticsearch and Prometheus | SigV4 secret key. Required when using keys auth provider |
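For example, a sketch of a MySQL data source whose password is provisioned through `secureJsonData`; the connection details are placeholders:
```yaml
apiVersion: 1

datasources:
  - name: MySQL
    type: mysql
    url: localhost:3306
    database: metrics
    user: grafana
    secureJsonData:
      # Encrypted with the Grafana secret key before storage.
      password: '<your database password>'
```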
#### Custom HTTP headers for datasources
#### Custom HTTP headers for data sources
Data sources managed by Grafana's provisioning can be configured to add HTTP headers to all requests
going to that datasource. The header name is configured in the `jsonData` field and the header value should be
going to that data source. The header name is configured in the `jsonData` field and the header value should be
configured in `secureJsonData`.
```yaml
@ -234,12 +257,12 @@ datasources:
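A hedged sketch of the pattern just described, where the header name lives in `jsonData` and its value in `secureJsonData`; the header name and value here are hypothetical:
```yaml
apiVersion: 1

datasources:
  - name: Graphite
    type: graphite
    url: http://localhost:8080
    jsonData:
      # Name of the custom header to send with every request.
      httpHeaderName1: 'X-Custom-Header'
    secureJsonData:
      # Matching value for the header, stored encrypted.
      httpHeaderValue1: 'example-value'
```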
## Plugins
> This feature is available from v7.1
> **Note:** Available in Grafana v7.1 and higher.
You can manage plugin applications in Grafana by adding one or more YAML config files in the [`provisioning/plugins`]({{< relref "../../setup-grafana/configure-grafana/#provisioning" >}}) directory. Each config file can contain a list of `apps` that will be updated during start up. Grafana updates each app to match the configuration file.
You can manage plugin applications in Grafana by adding one or more YAML config files in the [`provisioning/plugins`]({{< relref "../../setup-grafana/configure-grafana#provisioning" >}}) directory. Each config file can contain a list of `apps` that will be updated during start up. Grafana updates each app to match the configuration file.
> **Note:** This feature enables you to provision plugin configurations, not the plugins themselves.
> The plugins must already be installed on the grafana instance
> The plugins must already be installed on the Grafana instance.
### Example plugin configuration file
@ -267,7 +290,7 @@ apps:
## Dashboards
You can manage dashboards in Grafana by adding one or more YAML config files in the [`provisioning/dashboards`]({{< relref "../../setup-grafana/configure-grafana/" >}}) directory. Each config file can contain a list of `dashboards providers` that load dashboards into Grafana from the local filesystem.
You can manage dashboards in Grafana by adding one or more YAML config files in the [`provisioning/dashboards`]({{< relref "../../setup-grafana/configure-grafana#dashboards" >}}) directory. Each config file can contain a list of `dashboards providers` that load dashboards into Grafana from the local filesystem.
The dashboard provider config file looks somewhat like this:
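A representative sketch, assuming the file-based provider and a local dashboards directory; the provider name and path are placeholders:
```yaml
apiVersion: 1

providers:
  - name: 'default' # A unique provider name.
    orgId: 1 # The organization to load dashboards into.
    folder: '' # The Grafana folder to file dashboards under.
    type: file # Reads dashboard JSON files from disk.
    disableDeletion: false
    updateIntervalSeconds: 10 # How often Grafana scans the path for changes.
    options:
      path: /var/lib/grafana/dashboards
```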


@ -20,7 +20,7 @@ Dashboards and panels allow you to show your data in visual form. Each panel nee
- Ensure that you have the proper permissions. For more information about permissions, refer to [About users and permissions]({{< relref "../../../administration/roles-and-permissions/" >}}).
- Identify the dashboard to which you want to add the panel.
- Understand the query language of the target data source.
- Ensure that data source for which you are writing a query has been added. For more information about adding a data source, refer to [Add a data source]({{< relref "../../../datasources/add-a-data-source/" >}}) if you need instructions.
- Ensure that the data source for which you are writing a query has been added. For more information about adding a data source, refer to [Add a data source]({{< relref "../../../administration/data-source-management#add-a-data-source" >}}).
**To create a dashboard**:


@ -85,7 +85,7 @@ Query expressions can contain references to other variables and in effect create
> **Note:** Query expressions are different for each data source. For more information, refer to the documentation for your [data source]({{< relref "../../../datasources/" >}}).
1. [Enter general options](#enter-general-options).
1. In the **Data source** list, select the target data source for the query. For more information about data sources, refer to [Add a data source]({{< relref "../../../datasources/add-a-data-source/" >}}).
1. In the **Data source** list, select the target data source for the query. For more information about data sources, refer to [Add a data source]({{< relref "../../../administration/data-source-management#add-a-data-source" >}}).
1. In the **Refresh** list, select when the variable should update options.
- **On Dashboard Load:** Queries the data source every time the dashboard loads. This slows down dashboard loading, because the variable query needs to be completed before the dashboard can be initialized.
- **On Time Range Change:** Queries the data source when the dashboard time range changes. Only use this option if your variable options query contains a time range filter or is dependent on the dashboard time range.
@ -139,7 +139,7 @@ Constant variables are useful when you have complex values that you need to incl
_Data source_ variables enable you to quickly change the data source for an entire dashboard. They are useful if you have multiple instances of a data source, perhaps in different environments.
1. [Enter general options](#enter-general-options).
1. In the **Type** list, select the target data source for the variable. For more information about data sources, refer to [Add a data source]({{< relref "../../../datasources/add-a-data-source/" >}}).
1. In the **Type** list, select the target data source for the variable. For more information about data sources, refer to [Add a data source]({{< relref "../../../administration/data-source-management#add-a-data-source" >}}).
1. (Optional) In **Instance name filter**, enter a regex filter for which data source instances to choose from in the variable value drop-down list. Leave this field empty to display all instances.
1. (Optional) Enter [Selection Options]({{< relref "#configure-variable-selection-options" >}}).
1. In **Preview of values**, Grafana displays a list of the current variable values. Review them to ensure they match what you expect.
@ -180,7 +180,7 @@ _Ad hoc filters_ enable you to add key/value filters that are automatically adde
> **Note:** Ad hoc filter variables only work with Prometheus, Loki, InfluxDB, and Elasticsearch data sources.
1. [Enter general options](#enter-general-options).
1. In the **Data source** list, select the target data source. For more information about data sources, refer to [Add a data source]({{< relref "../../../datasources/add-a-data-source/" >}}).
1. In the **Data source** list, select the target data source. For more information about data sources, refer to [Add a data source]({{< relref "../../../administration/data-source-management#add-a-data-source" >}}).
1. Click **Add** to add the variable to the dashboard.
### Create ad hoc filters
@ -301,7 +301,7 @@ Currently only supported for Prometheus and Loki data sources. This variable rep
### $\_\_rate_interval
Currently only supported for Prometheus data sources. The `$__rate_interval` variable is meant to be used in the rate function. Refer to [Prometheus query variables]({{< relref "../../../datasources/prometheus.md#using-__rate_interval">}}) for details.
Currently only supported for Prometheus data sources. The `$__rate_interval` variable is meant to be used in the rate function. Refer to [Prometheus query variables]({{< relref "../../../datasources/prometheus/template-variables#use-__rate_interval" >}}) for details.
### $timeFilter or $\_\_timeFilter


@ -2,51 +2,74 @@
aliases:
- /docs/grafana/latest/datasources/
- /docs/grafana/latest/datasources/overview/
- /docs/grafana/latest/data-sources/
title: Data sources
weight: 60
---
# Data sources
Grafana supports many different storage backends for your time series data (data source). Refer to [Add a data source]({{< relref "../administration/data-source-management/#add-a-data-source/" >}}) for instructions on how to add a data source to Grafana. Only users with the organization admin role can add data sources.
Grafana can query and integrate with many different types of databases. This is done by adding a **data source** of the type you want to query or integrate with.
When you have created and configured a data source you are ready to start exploring and visualizing data, either in Explore or in a new Dashboard. A dashboard is composed of [panels]({{< relref "../panels-visualizations/" >}}), each panel contains a set of queries to one or more data sources.
## Querying
You can also create a new alert rule from a data source query and have Grafana continuously evaluate it and notify you when things change.
Each data source has a specific Query Editor that is customized for the features and capabilities that the particular data source exposes. The query language and capabilities of each data source are obviously very different. You can combine data from multiple data sources onto a single Dashboard, but each Panel is tied to a specific data source that belongs to a particular Organization.
You can also query data sources without building a dashboard by using the [Explore]({{< relref "../explore/" >}}) feature.
## Supported data sources
## Manage data sources
> **Note:** For a list of all data source plugins created by Grafana Labs, view the [plugin catalog](https://grafana.com/grafana/plugins/?type=datasource) and select the "Data sources" and "Grafana Labs created" filters. For a list of all Enterprise-level plugins, which have additional support, also enable the [Enterprise filter](https://grafana.com/grafana/plugins/?enterprise=1&type=datasource).
Only users with the [organization administrator role]({{< relref "../administration/roles-and-permissions#organization-roles" >}}) can add or remove data sources.
To access data source management tools in Grafana as an administrator, navigate to **Configuration > Data Sources** in the Grafana sidebar.
These data sources have additional documentation:
For details on data source management, including instructions on how to add data sources and configure user permissions for queries, refer to the [administration documentation]({{< relref "../administration/data-source-management" >}}).
## Use query editors
{{< figure src="/static/img/docs/queries/influxdb-query-editor-7-2.png" class="docs-image--no-shadow" max-width="1000px" caption="The InfluxDB query editor" >}}
Each data source's **query editor** provides a customized user interface that helps you write queries that take advantage of its unique capabilities.
You use a data source's query editor when you create queries in [dashboard panels]({{< relref "../panels-visualizations/query-transform-data" >}}) or [Explore]({{< relref "../explore/" >}}).
Because of the differences between query languages, each data source query editor looks and functions differently.
Depending on your data source, the query editor might provide auto-completion features, metric names, variable suggestions, or a visual query-building interface.
For example, this video demonstrates the visual Prometheus query builder:
{{< vimeo 720004179 >}}
For general information about querying in Grafana, and common options and user interface elements across all query editors, refer to [Query and transform data]({{< relref "../panels-visualizations/query-transform-data/" >}}).
## Built-in core data sources
These built-in core data sources are included in the Grafana documentation:
- [Alertmanager]({{< relref "./alertmanager/" >}})
- [AWS CloudWatch]({{< relref "./aws-cloudwatch/" >}})
- [Azure Monitor]({{< relref "./azuremonitor/" >}})
- [Azure Monitor]({{< relref "./azure-monitor/" >}})
- [Elasticsearch]({{< relref "./elasticsearch/" >}})
- [Google Cloud Monitoring]({{< relref "./google-cloud-monitoring/" >}})
- [Graphite]({{< relref "./graphite/" >}})
- [InfluxDB]({{< relref "./influxdb/" >}})
- [Jaeger]({{< relref "./jaeger/" >}})
- [Loki]({{< relref "./loki/" >}})
- [Microsoft SQL Server (MSSQL)]({{< relref "./mssql/" >}})
- [MySQL]({{< relref "./mysql/" >}})
- [OpenTSDB]({{< relref "./opentsdb/" >}})
- [PostgreSQL]({{< relref "./postgres/" >}})
- [Prometheus]({{< relref "./prometheus/" >}})
- [Jaeger]({{< relref "./jaeger/" >}})
- [Zipkin]({{< relref "./zipkin/" >}})
- [Tempo]({{< relref "./tempo/" >}})
- [Testdata]({{< relref "./testdata/" >}})
- [Zipkin]({{< relref "./zipkin/" >}})
In addition to the data sources that you have configured in your Grafana instance, there are three special data sources available:
## Special data sources
- **Grafana -** A built-in data source that generates random walk data. Useful for testing visualizations and running experiments.
- **Mixed -** Select this to query multiple data sources in the same panel. When this data source is selected, Grafana allows you to select a data source for every new query that you add.
- The first query will use the data source that was selected before you selected **Mixed**.
- You cannot change an existing query to use the Mixed Data Source.
Grafana also includes three special data sources:
- **Grafana:** A built-in data source that generates random walk data and can poll the [Testdata]({{< relref "./testdata/" >}}) data source.
This helps you test visualizations and run experiments.
- **Mixed:** An abstraction that lets you query multiple data sources in the same panel.
When you select Mixed, you can then select a different data source for each new query that you add.
- The first query uses the data source that was selected before you selected **Mixed**.
- You can't change an existing query to use the Mixed data source.
- Grafana Play example: [Mixed data sources](https://play.grafana.org/d/000000100/mixed-datasources?orgId=1)
- **Dashboard -** Select this to use a result set from another panel in the same dashboard.
## Data source plugins
You can install additional data sources as plugins. To view available data source plugins, see the [Grafana Plugins catalog](https://grafana.com/plugins). To build your own, see the ["Build a data source plugin"](https://grafana.com/tutorials/build-a-data-source-plugin/) tutorial and our documentation about [building a plugin](/developers/plugins/).
- **Dashboard:** A data source that uses the result set from another panel in the same dashboard.


@ -1,35 +0,0 @@
---
aliases:
- /docs/grafana/latest/datasources/add-a-data-source/
- /docs/grafana/latest/features/datasources/add-a-data-source/
title: Add data source
weight: 100
---
# Add a data source
Before you can create your first dashboard, you need to add your data source.
> **Note:** Only users with the organization Admin role can add data sources.
To add a data source:
1. Move your cursor to the cog icon on the side menu which will show the configuration options.
{{< figure src="/static/img/docs/v75/sidemenu-datasource-7-5.png" max-width="150px" class="docs-image--no-shadow">}}
1. Click on **Data sources**. The data sources page opens showing a list of previously configured data sources for the Grafana instance.
1. Click **Add data source** to see a list of all supported data sources.
{{< figure src="/static/img/docs/v75/add-data-source-7-5.png" max-width="600px" class="docs-image--no-shadow">}}
1. Search for a specific data source by entering the name in the search dialog. Or you can scroll through supported data sources grouped into time series, logging, tracing and other categories.
1. Move the cursor over the data source you want to add.
{{< figure src="/static/img/docs/v75/select-data-source-7-5.png" max-width="700px" class="docs-image--no-shadow">}}
1. Click **Select**. The data source configuration page opens.
1. Configure the data source following instructions specific to that data source. See [Data sources]({{< relref "../administration/data-source-management/_index.md" >}}) for links to configuration instructions for all supported data sources.


@ -1,42 +0,0 @@
---
aliases:
- /docs/grafana/latest/datasources/alertmanager/
- /docs/grafana/latest/features/datasources/alertmanager/
description: Guide for using Alertmanager in Grafana
keywords:
- grafana
- prometheus
- guide
title: Alertmanager
weight: 150
---
# Alertmanager data source
Grafana includes built-in support for Prometheus Alertmanager. Once you add it as a data source, you can use the [Grafana Alerting UI](https://grafana.com/docs/grafana/latest/alerting/) to manage silences, contact points as well as notification policies. A drop-down option in these pages allows you to switch between Grafana and any configured Alertmanager data sources.
## Alertmanager implementations
[Prometheus](https://prometheus.io/) and [Grafana Mimir](https://grafana.com/docs/mimir/latest/) (default) implementations of Alertmanager are supported. You can specify implementation in the data source settings page. In case of Prometheus contact points and notification policies are read-only in the Grafana Alerting UI, as it does not support updating configuration via HTTP API.
## Provision the Alertmanager data source
Configure the Alertmanager data sources by updating Grafana's configuration files. For more information on how it works and the settings available, refer to the [provisioning docs page]({{< relref "../administration/provisioning/#datasources" >}}).
Here is an example for provisioning the Alertmanager data source:
```yaml
apiVersion: 1
datasources:
- name: Alertmanager
type: alertmanager
url: http://localhost:9093
access: proxy
jsonData:
# optionally
basicAuth: true
basicAuthUser: my_user
secureJsonData:
basicAuthPassword: test_password
```


@ -0,0 +1,46 @@
---
aliases:
- /docs/grafana/latest/features/datasources/alertmanager/
- /docs/grafana/latest/datasources/alertmanager/
- /docs/grafana/latest/data-sources/alertmanager/
description: Guide for using Alertmanager as a data source in Grafana
keywords:
- grafana
- prometheus
- alertmanager
- guide
- queries
menuTitle: Alertmanager
title: Alertmanager data source
weight: 150
---
# Alertmanager data source
Grafana includes built-in support for Prometheus Alertmanager. Once you add it as a data source, you can use the [Grafana Alerting UI](/docs/grafana/latest/alerting/) to manage silences, contact points, and notification policies. A drop-down option on these pages allows you to switch between Grafana and any configured Alertmanager data sources.
## Alertmanager implementations
[Prometheus](https://prometheus.io/) and [Grafana Mimir](/docs/mimir/latest/) (default) implementations of Alertmanager are supported. You can specify the implementation on the data source's settings page. With the Prometheus implementation, contact points and notification policies are read-only in the Grafana Alerting UI, because the Prometheus Alertmanager doesn't support updating its configuration via the HTTP API.
## Provision the data source
Configure the Alertmanager data sources by updating Grafana's configuration files. For more information on how it works and the settings available, refer to the [provisioning docs page]({{< relref "../../administration/provisioning#data-sources" >}}).
For example, this YAML provisions an Alertmanager data source running on port 9093, with proxy access and basic authentication:
```yaml
apiVersion: 1
datasources:
- name: Alertmanager
type: alertmanager
url: http://localhost:9093
access: proxy
jsonData:
# optionally
basicAuth: true
basicAuthUser: my_user
secureJsonData:
basicAuthPassword: test_password
```


@ -1,37 +1,56 @@
---
aliases:
- /docs/grafana/latest/datasources/aws-cloudwatch/
- /docs/grafana/latest/datasources/cloudwatch/
description: Guide for using CloudWatch in Grafana
- /docs/grafana/latest/datasources/aws-cloudwatch/
- /docs/grafana/latest/datasources/aws-cloudwatch/provision-cloudwatch/
- /docs/grafana/latest/datasources/aws-cloudwatch/preconfig-cloudwatch-dashboards/
- /docs/grafana/latest/data-sources/aws-cloudwatch/
- /docs/grafana/latest/data-sources/aws-cloudwatch/provision-cloudwatch/
- /docs/grafana/latest/data-sources/aws-cloudwatch/preconfig-cloudwatch-dashboards/
description: Guide for using AWS CloudWatch in Grafana
keywords:
- grafana
- cloudwatch
- guide
title: AWS CloudWatch
menuTitle: AWS CloudWatch
title: AWS CloudWatch data source
weight: 200
---
# AWS CloudWatch data source
Grafana ships with built-in support for CloudWatch. This topic describes queries, templates, variables, and other configuration specific to the CloudWatch data source. For instructions on how to add a data source to Grafana, refer to [Add a data source]({{< relref "../add-a-data-source/" >}}). Only users with the organization admin role can add data sources.
Grafana ships with built-in support for AWS CloudWatch.
This topic describes queries, templates, variables, and other configuration specific to the CloudWatch data source.
Once you have added the CloudWatch data source, you can build dashboards or use Explore with CloudWatch metrics and CloudWatch Logs.
For instructions on how to add a data source to Grafana, refer to the [administration documentation]({{< relref "../../administration/data-source-management/" >}}).
Only users with the organization administrator role can add data sources.
Administrators can also [provision the data source]({{< relref "#provision-the-data-source" >}}) with Grafana's provisioning system, and should [control pricing]({{< relref "#control-pricing" >}}) and [manage service quotas]({{< relref "#manage-service-quotas" >}}) accordingly.
> **Note:** For troubleshooting issues when setting up the Cloudwatch data source, check the `/var/log/grafana/grafana.log` file.
Once you've added the data source, you can [configure it]({{< relref "#configure-the-data-source" >}}) so that your Grafana instance's users can create queries in its [query editor]({{< relref "./query-editor/" >}}) when they [build dashboards]({{< relref "../../dashboards/build-dashboards/" >}}) and use [Explore]({{< relref "../../explore/" >}}).
## Configure the CloudWatch data source
> **Note:** To troubleshoot issues while setting up the CloudWatch data source, check the `/var/log/grafana/grafana.log` file.
To access data source settings, hover your mouse over the **Configuration** (gear) icon, then click **Data Sources**, and then click the AWS Cloudwatch data source.
## Configure the data source
For authentication options and configuration details, see [AWS authentication]({{< relref "aws-authentication/" >}}) topic.
**To access the data source configuration page:**
### CloudWatch specific data source configuration
1. Hover the cursor over the **Configuration** (gear) icon.
1. Select **Data Sources**.
1. Select the CloudWatch data source.
#### IAM policies
### Configure AWS authentication
Grafana needs permissions granted via IAM to be able to read CloudWatch metrics and EC2 tags/instances/regions/alarms. You can attach these permissions to the IAM role or IAM user configured in the previous step.
A Grafana plugin's requests to AWS are made on behalf of an AWS Identity and Access Management (IAM) role or IAM user.
The IAM user or IAM role must have the associated policies to perform certain API actions.
##### Metrics only example:
For authentication options and configuration details, refer to [AWS authentication]({{< relref "./aws-authentication/" >}}).
#### IAM policy examples
To read CloudWatch metrics and EC2 tags, instances, regions, and alarms, you must grant Grafana permissions via IAM.
You can attach these permissions to the IAM role or IAM user you configured in [AWS authentication]({{< relref "./aws-authentication/" >}}).
**Metrics-only:**
```json
{
@ -66,7 +85,7 @@ Grafana needs permissions granted via IAM to be able to read CloudWatch metrics
}
```
##### Logs only example:
**Logs-only:**
```json
{
@ -101,7 +120,7 @@ Grafana needs permissions granted via IAM to be able to read CloudWatch metrics
}
```
##### Metrics and Logs example:
**Metrics and Logs:**
```json
{
@ -149,178 +168,160 @@ Grafana needs permissions granted via IAM to be able to read CloudWatch metrics
}
```
### Configure CloudWatch settings
#### Namespaces of Custom Metrics
Grafana is not able to load custom namespaces through the GetMetricData API. If you still want your custom metrics to show up in the fields in the query editor, you can specify the names of the namespaces containing the custom metrics in the _Namespaces of Custom Metrics_ field. The field accepts a multiple namespaces, separated by a comma.
Grafana can't load custom namespaces through the CloudWatch [GetMetricData API](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricData.html).
To make custom metrics appear in the data source's query editor fields, specify the names of the namespaces containing the custom metrics in the data source configuration's _Namespaces of Custom Metrics_ field.
The field accepts multiple namespaces separated by commas.
#### Timeout
Timeout specifically, for CloudWatch Logs queries. Log queries don't recognize standard Grafana query timeout as they don't keep a single request open and instead periodically poll for results. Because of limits on concurrently running queries in CloudWatch they can also take a longer time to finish.
Configure the timeout specifically for CloudWatch Logs queries.
Log queries don't keep a single request open, and instead periodically poll for results.
Therefore, they don't recognize the standard Grafana query timeout.
Because of limits on concurrently running queries in CloudWatch, they can also take longer to finish.
#### X-Ray trace links
Link an X-Ray data source in the "X-Ray trace link" section of the configuration page to automatically add links in your logs when the log contains `@xrayTraceId` field.
To automatically add links in your logs when the log contains the `@xrayTraceId` field, link an X-Ray data source in the "X-Ray trace link" section of the data source configuration.
![Trace link configuration](/static/img/docs/cloudwatch/xray-trace-link-configuration-8-2.png 'Trace link configuration')
{{< figure src="/static/img/docs/cloudwatch/xray-trace-link-configuration-8-2.png" max-width="800px" class="docs-image--no-shadow" caption="Trace link configuration" >}}
The data source select will contain only existing data source instances of type X-Ray so in order to use this feature you need to have existing X-Ray data source already configured, see [X-Ray docs](https://grafana.com/grafana/plugins/grafana-x-ray-datasource/) for details.
The data source select contains only existing data source instances of type X-Ray.
To use this feature, you must already have an X-Ray data source configured.
For details, see the [X-Ray data source docs](/grafana/plugins/grafana-x-ray-datasource/).
The X-Ray link will then appear in the log details section which is accessible by clicking on the log row either in Explore or in dashboard [Logs panel]({{< relref "../../panels-visualizations/visualizations/logs/" >}}). To log the `@xrayTraceId` in your logs see the [AWS X-Ray documentation](https://docs.amazonaws.cn/en_us/xray/latest/devguide/xray-services.html). To provide the field to Grafana your log queries also have to contain the `@xrayTraceId` field, for example using query `fields @message, @xrayTraceId`.
To view the X-Ray link, select the log row in either the Explore view or dashboard [Logs panel]({{< relref "../../panels-visualizations/visualizations/logs" >}}) to view the log details section.
![Trace link in log details](/static/img/docs/cloudwatch/xray-link-log-details-8-2.png 'Trace link in log details')
To log the `@xrayTraceId`, see the [AWS X-Ray documentation](https://docs.amazonaws.cn/en_us/xray/latest/devguide/xray-services.html).
## CloudWatch query editor
To provide the field to Grafana, your log queries must also contain the `@xrayTraceId` field, for example by using the query `fields @message, @xrayTraceId`.
The CloudWatch data source can query data from both CloudWatch metrics and CloudWatch Logs APIs, each with its own specialized query editor. You select which API you want to query with using the query mode switch on top of the editor.
{{< figure src="/static/img/docs/cloudwatch/xray-link-log-details-8-2.png" max-width="800px" class="docs-image--no-shadow" caption="Trace link in log details" >}}
![CloudWatch API modes](/static/img/docs/cloudwatch/cloudwatch-query-editor-api-modes-8.3.0.png)
### Configure the data source with grafana.ini
### Metrics query editor
The Grafana [configuration file]({{< relref "../../setup-grafana/configure-grafana#aws" >}}) includes an `AWS` section where you can configure data source options:
The metrics query editor allows you to build two types of queries - **Metric Search** and **Metric Query**.
| Configuration option | Description |
| ------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `allowed_auth_providers` | Specifies which authentication providers are allowed for the CloudWatch data source. The following providers are enabled by default in open-source Grafana: `default` (AWS SDK default), keys (Access and secret key), credentials (Credentials file), ec2_IAM_role (EC2 IAM role). |
| `assume_role_enabled` | Allows you to disable `assume role (ARN)` in the CloudWatch data source. The assume role (ARN) is enabled by default in open-source Grafana. |
| `list_metrics_page_limit` | Sets the limit of List Metrics API pages. When a custom namespace is specified in the query editor, the [List Metrics API](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_ListMetrics.html) populates the _Metrics_ field and _Dimension_ fields. The API is paginated and returns up to 500 results per page, and the data source also limits the number of pages to 500 by default. This setting customizes that limit. |
#### Using the Metric Search option
### Provision the data source
To create a valid Metric Search query specify the namespace, metric name and at least one statistic.
You can define and configure the data source in YAML files as part of Grafana's provisioning system.
For more information about provisioning, and for available configuration options, refer to [Provisioning Grafana]({{< relref "../../administration/provisioning/#data-sources" >}}).
If `Match Exact` is enabled, you also need to specify all the dimensions of the metric youre querying, so that the [metric schema](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/search-expression-syntax.html) matches exactly. If `Match Exact` is disabled, you can specify any number of dimensions by which youd like to filter. Up to 100 metrics matching your filter criteria will be returned.
#### Provisioning examples
##### Dynamic queries using dimension wildcards
You can monitor a dynamic list of metrics by using the asterisk (\*) wildcard for one or more dimension values.
![CloudWatch dimension wildcard](/static/img/docs/cloudwatch/cloudwatch-dimension-wildcard-8.3.0.png)
In this example, the query returns all metrics in the namespace `AWS/EC2` with a metric name of `CPUUtilization` and ANY value for the `InstanceId` dimension are queried. This can help you monitor metrics for AWS resources, like EC2 instances or containers. When new instances are created as part of an auto scaling event, they will automatically appear in the graph without you having to track the new instance IDs. This capability is currently limited to retrieving up to 100 metrics.
You can expand the [Query inspector](https://grafana.com/docs/grafana/latest/panels/queries/#query-inspector-button) button and click `Meta Data` to see the search expression that is automatically built to support wildcards. To learn more about search expressions, visit the [CloudWatch documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/search-expression-syntax.html). By default, the search expression is defined in such a way that the queried metrics must match the defined dimension names exactly. This means that in the example only metrics with exactly one dimension with the name InstanceId will be returned.
![CloudWatch Meta Inspector](/static/img/docs/cloudwatch/cloudwatch-meta-inspector-8.3.0.png)
You can disable `Match Exact` to include metrics that have other dimensions defined. Disabling `Match Exact` also creates a search expression even if you dont use wildcards. We simply search for any metric that matches at least the namespace, metric name, and all defined dimensions.
##### Multi-value template variables
When defining dimension values based on multi-valued template variables, a search expression is used to query for the matching metrics. This enables the use of multiple template variables in one query and also allows you to use template variables for queries that have the `Match Exact` option disabled.
Search expressions are currently limited to 1024 characters, so your query may fail if you have a long list of values. We recommend using the asterisk (`*`) wildcard instead of the `All` option if you want to query all metrics that have any value for a certain dimension name.
The use of multi-valued template variables is only supported for dimension values. Using multi-valued template variables for `Region`, `Namespace`, or `Metric Name` is not supported.
##### Metric math expressions
You can create new time series metrics by operating on top of CloudWatch metrics using mathematical functions. Arithmetic operators, unary subtraction and other functions are supported and can be applied to CloudWatch metrics. More details on the available functions can be found on [AWS Metric Math](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-metric-math.html)
As an example, if you want to apply arithmetic operations on a metric, you can do it by giving an id (a unique string) to the raw metric as shown below. You can then use this id and apply arithmetic operations to it in the Expression field of the new metric.
Please note that in the case you use the expression field to reference another query, like `queryA * 2`, it will not be possible to create an alert rule based on that query.
##### Deep linking from Grafana panels to the CloudWatch console
{{< figure src="/static/img/docs/v65/cloudwatch-deep-linking.png" max-width="500px" class="docs-image--right" caption="CloudWatch deep linking" >}}
Left clicking a time series in the panel shows a context menu with a link to `View in CloudWatch console`. Clicking that link will open a new tab that will take you to the CloudWatch console and display all the metrics for that query. If you're not currently logged in to the CloudWatch console, the link will forward you to the login page. The provided link is valid for any account but will only display the right metrics if you're logged in to the account that corresponds to the selected data source in Grafana.
This feature is not available for metrics that are based on metric math expressions.
### Using the Metric Query option
> **Note:** This query option is available in Grafana 8.3 and higher versions only.
Metrics Query in the CloudWatch plugin is what is referred to as **Metric Insights** in the AWS console. It's a fast, flexible, SQL-based query engine that enables you to identify trends and patterns across millions of operational metrics in real time. It uses a dialect of SQL. The query syntax is as follows.
**Using AWS SDK (default):**
```yaml
apiVersion: 1
datasources:
- name: CloudWatch
type: cloudwatch
jsonData:
authType: default
defaultRegion: eu-west-2
```
SELECT FUNCTION(MetricName)
FROM Namespace | SCHEMA(...)
[ WHERE labelKey OPERATOR labelValue [AND|...]]
[ GROUP BY labelKey [, ...]]
[ ORDER BY FUNCTION() [DESC | ASC] ]
[ LIMIT number]
**Using credentials' profile name (non-default):**
```yaml
apiVersion: 1
datasources:
- name: CloudWatch
type: cloudwatch
jsonData:
authType: credentials
defaultRegion: eu-west-2
customMetricsNamespaces: 'CWAgent,CustomNameSpace'
profile: secondary
```
The following table provides basic explanation of the query keywords. For details about the Metrics Insights syntax, refer to the [AWS documentation](https://docs.aws.amazon.com/console/cloudwatch/metricsinsights-syntax).
**Using accessKey and secretKey:**
```yaml
apiVersion: 1
datasources:
- name: CloudWatch
type: cloudwatch
jsonData:
authType: keys
defaultRegion: eu-west-2
secureJsonData:
accessKey: '<your access key>'
secretKey: '<your secret key>'
```
**Using AWS SDK Default and ARN of IAM Role to Assume:**
```yaml
apiVersion: 1
datasources:
- name: CloudWatch
type: cloudwatch
jsonData:
authType: default
assumeRoleArn: arn:aws:iam::123456789012:root
defaultRegion: eu-west-2
```
## Query the data source
The CloudWatch data source can query data from both CloudWatch metrics and CloudWatch Logs APIs, each with its own specialized query editor.
For details, see the [query editor documentation]({{< relref "./query-editor/" >}}).
## Use template variables
Instead of hard-coding details such as server, application, and sensor names in metric queries, you can use variables.
Grafana lists these variables in dropdown select boxes at the top of the dashboard to help you change the data displayed in your dashboard.
Grafana refers to such variables as template variables.
For details, see the [template variables documentation]({{< relref "./template-variables/" >}}).
## Import pre-configured dashboards
The CloudWatch data source ships with curated and pre-configured dashboards for five of the most popular AWS services:
- **Amazon Elastic Compute Cloud:** `Amazon EC2`
- **Amazon Elastic Block Store:** `Amazon EBS`
- **AWS Lambda:** `AWS Lambda`
- **Amazon CloudWatch Logs:** `Amazon CloudWatch Logs`
- **Amazon Relational Database Service:** `Amazon RDS`
**To import curated dashboards:**
1. Navigate to the data source's [configuration page]({{< relref "#configure-the-data-source" >}}).
1. Select the **Dashboards** tab.
This displays the curated selection of importable dashboards.
1. Select **Import** for the dashboard to import.
{{< figure src="/static/img/docs/v65/cloudwatch-dashboard-import.png" caption="CloudWatch dashboard import" >}}
**To customize an imported dashboard:**
To customize one of these dashboards, we recommend that you save it under a different name.
If you don't, upgrading Grafana can overwrite the customized dashboard with the new version.
## Create queries for alerting
Alerting requires queries that return numeric data, which CloudWatch Logs support.
For example, you can enable alerts through the use of the `stats` command.
This is also a valid query for alerting on messages that include the text "Exception":
```
filter @message like /Exception/
| stats count(*) as exceptionCount by bin(30m)
| sort exceptionCount desc
```
> **Note:** If you receive an error like `input data must be a wide series but got ...` when trying to alert on a query, make sure that your query returns valid numeric data that can be output to a Time series panel.
For more information on Grafana alerts, refer to [Alerting]({{< relref "../../alerting" >}}).
## Configure CloudWatch with grafana.ini
The Grafana [configuration]({{< relref "../../setup-grafana/configure-grafana/#aws" >}}) file includes an `AWS` section where you can customize the data source.
| Configuration option | Description |
| ------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `allowed_auth_providers` | Specifies which authentication providers are allowed for the CloudWatch data source. The following providers are enabled by default in OSS Grafana: `default` (AWS SDK default), `keys` (Access and secret key), `credentials` (Credentials file), `ec2_IAM_role` (EC2 IAM role). |
| `assume_role_enabled` | Allows you to disable `assume role (ARN)` in the CloudWatch data source. By default, assume role (ARN) is enabled for OSS Grafana. |
| `list_metrics_page_limit` | When a custom namespace is specified in the query editor, the [List Metrics API](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_ListMetrics.html) is used to populate the _Metrics_ field and the _Dimension_ fields. The API is paginated and returns up to 500 results per page. The CloudWatch data source also limits the number of pages to 500. However, you can change this limit using the `list_metrics_page_limit` variable in the [grafana configuration file](https://grafana.com/docs/grafana/latest/administration/configuration/#aws). |
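For example, a `grafana.ini` excerpt that sets these options explicitly might look like the following sketch (section and option names taken from the table above; the values shown match the stated defaults):

```ini
[aws]
# Authentication providers allowed for the CloudWatch data source
allowed_auth_providers = default, keys, credentials
# Assume role (ARN) is enabled by default in OSS Grafana
assume_role_enabled = true
# Maximum number of List Metrics API pages to fetch
list_metrics_page_limit = 500
```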
## Control pricing
The Amazon CloudWatch data source for Grafana uses [`ListMetrics`](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_ListMetrics.html) and [`GetMetricData`](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricData.html) CloudWatch API calls to list and retrieve metrics.
Pricing for CloudWatch Logs is based on the amount of data ingested, archived, and analyzed via CloudWatch Logs Insights queries.
Each time you select a dimension in the query editor, Grafana issues a `ListMetrics` API request.
Each time you change queries in the query editor, Grafana issues a new request to the `GetMetricData` API.
> **Note:** Grafana v6.5 and higher replaced all `GetMetricStatistics` API requests with calls to GetMetricData to provide better support for CloudWatch metric math, and enables the automatic generation of search expressions when using wildcards or disabling the `Match Exact` option.
> The `GetMetricStatistics` API qualified for the CloudWatch API free tier, but `GetMetricData` calls don't.
For more information, refer to the [CloudWatch pricing page](https://aws.amazon.com/cloudwatch/pricing/).
## Manage service quotas
AWS defines quotas, or limits, for resources, actions, and items in your AWS account.
Depending on the number of queries in your dashboard and the number of users accessing the dashboard, you might reach the usage limits for various CloudWatch and CloudWatch Logs resources.
Quotas are defined per account and per region.
If you use multiple regions or have configured more than one CloudWatch data source to query against multiple accounts, you must request a quota increase for each account and region in which you reach the limit.
To request a quota increase, visit the [AWS Service Quotas console](https://console.aws.amazon.com/servicequotas/home?r#!/services/monitoring/quotas/L-5E141212).
For more information, refer to the AWS documentation for [Service Quotas](https://docs.aws.amazon.com/servicequotas/latest/userguide/intro.html) and [CloudWatch limits](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_limits.html).
---
aliases:
- /docs/grafana/latest/datasources/aws-cloudwatch/aws-authentication/
- /docs/grafana/latest/datasources/cloudwatch/
description: AWS authentication
keywords:
- grafana
- aws
- authentication
title: Authentication
weight: 5
---
# AWS authentication
Requests from a Grafana plugin to AWS are made on behalf of an IAM role or an IAM user. The IAM user or IAM role must have the associated policies to perform certain API actions. Since these policies are specific to each data source, refer to the data source documentation for details.
All requests to AWS APIs are performed on the server side by the Grafana backend using the official AWS SDK.
This topic has the following sections:
- [Authentication methods](#authentication-methods)
- [Assuming a role](#assuming-a-role)
- [Endpoint](#endpoint)
- [AWS credentials file](#aws-credentials-file)
- [EKS IAM roles for service accounts](#eks-iam-roles-for-service-accounts)
## Authentication methods
You can use one of the following authentication methods. Currently, `AWS SDK Default`, `Credentials file` and `Access and secret key` are enabled by default in open source Grafana. If you have server configuration access, you can enable or disable them as needed. For more information, refer to [allowed_auth_providers]({{< relref "../../setup-grafana/configure-grafana/#allowed_auth_providers" >}}) documentation.
- `AWS SDK Default` performs no custom configuration and instead uses the [default provider](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html) as specified by the AWS SDK for Go. It requires you to configure your AWS credentials separately, such as if you've [configured the CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html), if you're [running on an EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html), [in an ECS task](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html), or for a [Service Account in a Kubernetes cluster](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html).
- `Credentials file` corresponds directly to the [SharedCredentialsProvider](https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials/#SharedCredentialsProvider) provider in the Go SDK. It reads the AWS shared credentials file to find a given profile. While `AWS SDK Default` will also find the shared credentials file, this option allows you to specify which profile to use without using environment variables. This option doesn't have any implicit fallbacks to other credential providers, and it fails if the credentials provided from the file aren't correct.
- `Access and secret key` corresponds to the [StaticProvider](https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials/#StaticProvider) and uses the given access key ID and secret key to authenticate. This method doesn't have any fallbacks, and will fail if the provided key pair doesn't work.
- `Workspace IAM role` corresponds to the [EC2RoleProvider](https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials/ec2rolecreds/#EC2RoleProvider). The EC2RoleProvider pulls credentials for a role attached to the EC2 instance that Grafana runs on. You can also achieve this by using the authentication method AWS SDK Default, but this option is different as it doesnt have any fallbacks. This option is currently only enabled by default in Amazon Managed Grafana.
## Assuming a role
The `Assume Role ARN` field allows you to specify which IAM role to assume. When left blank, the provided credentials are used directly and the associated role or user should have the required permissions. If this field is non-blank, on the other hand, the provided credentials are used to perform an [sts:AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) call.
You can disable this feature in the Grafana configuration. For more information, refer to [assume_role_enabled]({{< relref "../../setup-grafana/configure-grafana/#assume_role_enabled" >}}) documentation.
### External ID
If you are assuming a role in another account that was created with an external ID, then specify the external ID in this field. For more information, refer to the [AWS documentation on external ID](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html).
## Endpoint
The `Endpoint` field allows you to specify a custom endpoint URL that overrides the default generated endpoint for the AWS service API. Leave this field blank if you want to use the default generated endpoint. For more information on why and how to use Service endpoints, refer to the [AWS service endpoints documentation](https://docs.aws.amazon.com/general/latest/gr/rande.html).
## AWS credentials file
Create a file at `~/.aws/credentials`. That is the `HOME` path for the user running grafana-server.
> **Note:** If you think you have the credentials file in the right place and it is still not working, you might try moving your .aws file to '/usr/share/grafana/' and make sure your credentials file has at most 0644 permissions.
Example content:
```bash
[default]
aws_access_key_id = asdsadasdasdasd
aws_secret_access_key = dasdasdsadasdasdasdsa
region = us-west-2
```
## EKS IAM roles for service accounts
The Grafana process in the container runs as user 472 (called "grafana"). When Kubernetes mounts your projected credentials, they will by default only be available to the root user. To allow user 472 to access the credentials (and avoid falling back to the IAM role attached to the EC2 instance), you need to provide a [security context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) for your pod.
```yaml
securityContext:
fsGroup: 472
runAsUser: 472
runAsGroup: 472
```
---
aliases:
- /docs/grafana/latest/datasources/aws-cloudwatch/aws-authentication/
- /docs/grafana/latest/datasources/cloudwatch/
- /docs/grafana/latest/data-sources/aws-cloudwatch/aws-authentication/
- /docs/grafana/latest/data-sources/elasticsearch/aws-authentication/
description: Guide to configuring AWS authentication in Grafana
keywords:
- grafana
- aws
- authentication
menuTitle: AWS authentication
title: Configure AWS authentication
weight: 200
---
# Configure AWS authentication
A Grafana plugin's requests to AWS are made on behalf of an AWS Identity and Access Management (IAM) role or IAM user.
The IAM user or IAM role must have the associated policies to perform certain API actions.
Since these policies are specific to each data source, refer to the data source documentation for details.
All requests to AWS APIs are performed on the server side by the Grafana backend using the official AWS SDK.
This topic has the following sections:
- [Select an authentication method]({{< relref "#select-an-authentication-method" >}})
- [Assume a role]({{< relref "#assume-a-role" >}})
- [Use a custom endpoint]({{< relref "#use-a-custom-endpoint" >}})
- [Use an AWS credentials file]({{< relref "#use-an-aws-credentials-file" >}})
- [Use EKS IAM roles for service accounts]({{< relref "#use-eks-iam-roles-for-service-accounts" >}})
## Select an authentication method
You can use one of the following authentication methods.
Open source Grafana enables the `AWS SDK Default`, `Credentials file`, and `Access and secret key` methods by default.
- `AWS SDK Default` performs no custom configuration and instead uses the [default provider](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html) as specified by the AWS SDK for Go.
It requires you to configure your AWS credentials separately, such as if you've [configured the CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html), if you're [running on an EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html), [in an ECS task](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html), or for a [Service Account in a Kubernetes cluster](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html).
- `Credentials file` corresponds directly to the [SharedCredentialsProvider](https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials/#SharedCredentialsProvider) provider in the Go SDK.
It reads the AWS shared credentials file to find a given profile.
While `AWS SDK Default` will also find the shared credentials file, this option allows you to specify which profile to use without using environment variables.
This option doesn't have any implicit fallbacks to other credential providers, and it fails if the credentials provided from the file aren't correct.
- `Access and secret key` corresponds to the [StaticProvider](https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials/#StaticProvider) and uses the given access key ID and secret key to authenticate.
This method doesn't have any fallbacks, and will fail if the provided key pair doesn't work.
- `Workspace IAM role` corresponds to the [EC2RoleProvider](https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials/ec2rolecreds/#EC2RoleProvider).
The EC2RoleProvider pulls credentials for a role attached to the EC2 instance that Grafana runs on.
You can also achieve this by using the authentication method AWS SDK Default, but this option is different as it doesn't have any fallbacks.
This option is enabled by default only in Amazon Managed Grafana.
If you have server configuration access, you can enable or disable these methods as needed.
For more information, refer to the [`allowed_auth_providers` documentation]({{< relref "../../../setup-grafana/configure-grafana#allowed_auth_providers" >}}).
## Assume a role
You can specify which IAM role to assume in the **Assume Role ARN** field.
If this field is left blank, Grafana uses the provided credentials directly, and the associated role or user should have the required permissions.
If this field isn't blank, Grafana uses the provided credentials to perform an [sts:AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) call.
To disable this feature, refer to the [`assume_role_enabled` documentation]({{< relref "../../../setup-grafana/configure-grafana#assume_role_enabled" >}}).
### Use an external ID
To assume a role in another account that was created with an external ID, specify the external ID in the **External ID** field.
For more information, refer to the [AWS documentation on external ID](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html).
## Use a custom endpoint
You can specify a custom endpoint URL in the **Endpoint** field, which overrides the default generated endpoint for the AWS service API.
Leave this field blank to use the default generated endpoint.
For more information on why and how to use service endpoints, refer to the [AWS service endpoints documentation](https://docs.aws.amazon.com/general/latest/gr/rande.html).
## Use an AWS credentials file
Create a file at `~/.aws/credentials`, the `HOME` path for the user running the `grafana-server` service.
> **Note:** If you think you have the credentials file in the right location, but it's not working, try moving your `.aws` file to `/usr/share/grafana/` and grant your credentials file at most 0644 permissions.
### Credentials file example
```bash
[default]
aws_access_key_id = asdsadasdasdasd
aws_secret_access_key = dasdasdsadasdasdasdsa
region = us-west-2
```
## Use EKS IAM roles for service accounts
The Grafana process in the container runs as user 472 (called "grafana").
When Kubernetes mounts your projected credentials, they're available by default to only the root user.
To grant user 472 permission to access the credentials, and avoid falling back to the IAM role attached to the EC2 instance, you must provide a [security context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) for your pod.
### Security context example
```yaml
securityContext:
fsGroup: 472
runAsUser: 472
runAsGroup: 472
```
---
aliases:
- /docs/grafana/latest/datasources/aws-cloudwatch/preconfig-cloudwatch-dashboards/
- /docs/grafana/latest/datasources/cloudwatch/
description: Guide for using AWS CloudWatch in Grafana
keywords:
- grafana
- stackdriver
- google
- guide
- cloud
- monitoring
title: Curated CloudWatch dashboards
weight: 15
---
# Curated CloudWatch dashboards
The updated CloudWatch data source ships with pre-configured dashboards for five of the most popular AWS services:
- Amazon Elastic Compute Cloud `Amazon EC2`,
- Amazon Elastic Block Store `Amazon EBS`,
- AWS Lambda `AWS Lambda`,
- Amazon CloudWatch Logs `Amazon CloudWatch Logs`, and
- Amazon Relational Database Service `Amazon RDS`.
To import curated dashboards:
1. On the configuration page of your CloudWatch data source, click the **Dashboards** tab.
1. Click **Import** for the dashboard you would like to use.
In case you want to customize a dashboard, we recommend that you save it under a different name. Otherwise the dashboard will be overwritten when a new version of the dashboard is released.
{{< figure src="/static/img/docs/v65/cloudwatch-dashboard-import.png" caption="CloudWatch dashboard import" >}}
---
aliases:
- /docs/grafana/latest/datasources/aws-cloudwatch/provision-cloudwatch/
- /docs/grafana/latest/datasources/cloudwatch/
description: Guide for provisioning CloudWatch
title: Provision CloudWatch
weight: 400
---
# Provision CloudWatch data source
You can configure the CloudWatch data source by customizing configuration files in Grafana's provisioning system. To learn more about provisioning and the available configuration options, refer to the [Provisioning Grafana]({{< relref "../../administration/provisioning/#datasources" >}}) topic.
Here are some provisioning examples for this data source.
## Using AWS SDK (default)
```yaml
apiVersion: 1
datasources:
- name: CloudWatch
type: cloudwatch
jsonData:
authType: default
defaultRegion: eu-west-2
```
## Using credentials' profile name (non-default)
```yaml
apiVersion: 1
datasources:
- name: CloudWatch
type: cloudwatch
jsonData:
authType: credentials
defaultRegion: eu-west-2
customMetricsNamespaces: 'CWAgent,CustomNameSpace'
profile: secondary
```
## Using accessKey and secretKey
```yaml
apiVersion: 1
datasources:
- name: CloudWatch
type: cloudwatch
jsonData:
authType: keys
defaultRegion: eu-west-2
secureJsonData:
accessKey: '<your access key>'
secretKey: '<your secret key>'
```
## Using AWS SDK Default and ARN of IAM Role to Assume
```yaml
apiVersion: 1
datasources:
- name: CloudWatch
type: cloudwatch
jsonData:
authType: default
assumeRoleArn: arn:aws:iam::123456789012:root
defaultRegion: eu-west-2
```
---
aliases:
- /docs/grafana/latest/datasources/aws-cloudwatch/
- /docs/grafana/latest/datasources/cloudwatch/
- /docs/grafana/latest/data-sources/aws-cloudwatch/query-editor/
description: Guide for using the AWS CloudWatch data source's query editor
keywords:
- grafana
- aws
- cloudwatch
- guide
- queries
menuTitle: Query editor
title: AWS CloudWatch query editor
weight: 300
---
# AWS CloudWatch query editor
This topic explains querying specific to the CloudWatch data source.
For general documentation on querying data sources in Grafana, see [Query and transform data]({{< relref "../../../panels-visualizations/query-transform-data" >}}).
## Choose a query editing mode
The CloudWatch data source can query data from both CloudWatch metrics and CloudWatch Logs APIs, each with its own specialized query editor.
- [CloudWatch metrics]({{< relref "#query-cloudwatch-metrics" >}})
- [CloudWatch Logs]({{< relref "#query-cloudwatch-logs" >}})
{{< figure src="/static/img/docs/cloudwatch/cloudwatch-query-editor-api-modes-8.3.0.png" max-width="500px" class="docs-image--right" caption="CloudWatch API modes" >}}
Select which API to query by using the query mode switch on top of the editor.
## Query CloudWatch metrics
You can build two types of queries with the CloudWatch query editor:
- [Metric Search]({{< relref "#create-a-metric-search-query" >}})
- [Metric Query]({{< relref "#create-a-metric-insights-query" >}}), which uses the Metrics Insights feature
### Create a Metric Search query
To create a valid Metric Search query, specify the namespace, metric name, and at least one statistic.
If you enable `Match Exact`, you must also specify all dimensions of the metric you're querying so that the [metric schema](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/search-expression-syntax.html) matches exactly.
If `Match Exact` is disabled, you can specify any number of dimensions on which you'd like to filter.
The data source returns up to 100 metrics matching your filter criteria.
You can also augment queries by using [template variables]({{< relref "./template-variables/" >}}).
#### Create dynamic queries with dimension wildcards
Use the asterisk (`*`) wildcard for one or more dimension values to monitor a dynamic list of metrics.
{{< figure src="/static/img/docs/cloudwatch/cloudwatch-dimension-wildcard-8.3.0.png" max-width="500px" class="docs-image--right" caption="CloudWatch dimension wildcard" >}}
In this example, the query returns all metrics in the namespace `AWS/EC2` with a metric name of `CPUUtilization`, and also queries ANY value for the `InstanceId` dimension.
This can help you monitor metrics for AWS resources, like EC2 instances or containers.
When an auto-scaling event creates new instances, they automatically appear in the graph without you having to track the new instance IDs.
This capability is currently limited to retrieving up to 100 metrics.
You can expand the [Query inspector]({{< relref "../../../panels-visualizations/query-transform-data/#navigate-the-query-tab" >}}) button and click `Meta Data` to see the search expression that's automatically built to support wildcards.
To learn more about search expressions, refer to the [CloudWatch documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/search-expression-syntax.html).
The search expression is defined by default in such a way that the queried metrics must match the defined dimension names exactly.
This means that in the example, the query returns only metrics with exactly one dimension containing the name 'InstanceId'.
{{< figure src="/static/img/docs/cloudwatch/cloudwatch-meta-inspector-8.3.0.png" max-width="500px" class="docs-image--right" caption="CloudWatch Meta Inspector" >}}
You can disable `Match Exact` to include metrics that have other dimensions defined.
Disabling `Match Exact` also creates a search expression even if you don't use wildcards. Grafana then searches for any metric that matches at least the namespace, metric name, and all defined dimensions.
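As an illustration, a wildcard query for `CPUUtilization` across all `InstanceId` values might produce a search expression similar to the following sketch (the exact expression Grafana generates can differ):

```
SEARCH('{AWS/EC2,InstanceId} MetricName="CPUUtilization"', 'Average', 300)
```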
#### Use multi-value template variables
When defining dimension values based on multi-valued template variables, the data source uses a search expression to query for the matching metrics.
This enables the use of multiple template variables in one query, and also lets you use template variables for queries that have the `Match Exact` option disabled.
Search expressions are limited to 1,024 characters, so your query might fail if you have a long list of values.
We recommend using the asterisk (`*`) wildcard instead of the `All` option to query all metrics that have any value for a certain dimension name.
The use of multi-valued template variables is supported only for dimension values.
Using multi-valued template variables for `Region`, `Namespace`, or `Metric Name` is not supported.
#### Use metric math expressions
You can create new time series metrics by operating on top of CloudWatch metrics using mathematical functions.
This includes support for arithmetic operators, unary subtraction, and other functions, and can be applied to CloudWatch metrics.
For details on the available functions, refer to [AWS Metric Math](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-metric-math.html).
For example, to apply arithmetic operations to a metric, apply a unique string id to the raw metric, then use this id and apply arithmetic operations to it in the Expression field of the new metric.
> **Note:** If you use the expression field to reference another query, like `queryA * 2`, you can't create an alert rule based on that query.
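For example, given two query rows with the hypothetical IDs `m1` (an error count) and `m2` (a request count), an expression like the following sketch computes an error percentage:

```
m1 / m2 * 100
```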
#### Deep-link Grafana panels to the CloudWatch console
{{< figure src="/static/img/docs/v65/cloudwatch-deep-linking.png" max-width="500px" class="docs-image--right" caption="CloudWatch deep linking" >}}
Left-clicking a time series in the panel shows a context menu with a link to `View in CloudWatch console`.
Clicking that link opens a new tab that takes you to the CloudWatch console and displays all metrics for that query.
If you're not logged in to the CloudWatch console, the link forwards you to the login page.
The provided link is valid for any account but displays the expected metrics only if you're logged in to the account that corresponds to the selected data source in Grafana.
This feature is not available for metrics based on [metric math expressions](#use-metric-math-expressions).
### Create a Metric Insights query
> **Note:** This query option is available only in Grafana v8.3 and higher.
The Metrics Query option in the CloudWatch data source is referred to as **Metric Insights** in the AWS console.
It's a fast, flexible, SQL-based query engine that you can use to identify trends and patterns across millions of operational metrics in real time.
The metrics query editor's Metrics Query option has two editing modes:
- [Builder mode](#create-a-query-in-builder-mode), which provides a visual query-building interface
- [Code mode](#create-a-query-in-code-mode), which provides a code editor for writing queries
#### Use Metric Insights syntax
Metric Insights uses a dialect of SQL and this query syntax:
```sql
SELECT FUNCTION(MetricName)
FROM Namespace | SCHEMA(...)
[ WHERE labelKey OPERATOR labelValue [AND|...]]
[ GROUP BY labelKey [, ...]]
[ ORDER BY FUNCTION() [DESC | ASC] ]
[ LIMIT number]
```
For details about the Metrics Insights syntax, refer to the [AWS reference documentation](https://docs.aws.amazon.com/console/cloudwatch/metricsinsights-syntax).
For information about Metrics Insights limits, refer to the [AWS feature documentation](https://docs.aws.amazon.com/console/cloudwatch/metricsinsights).
You can also augment queries by using [template variables]({{< relref "./template-variables/" >}}).
#### Use Metrics Insights keywords
This table summarizes common Metrics Insights query keywords:
| Keyword | Description |
| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `FUNCTION` | Required. Specifies the aggregate function to use, and also specifies the name of the metric to be queried. Valid values are AVG, COUNT, MAX, MIN, and SUM. |
| `MetricName` | Required. For example, `CPUUtilization`. |
| `FROM` | Required. Specifies the metric's source. You can specify either the metric namespace that contains the metric to be queried, or a SCHEMA table function. Namespace examples include `AWS/EC2`, `AWS/Lambda`. |
| `SCHEMA` | Optional. Narrows the query results to only the metrics that are an exact match, or to metrics that do not match. |
| `WHERE` | Optional. Filters the query results to only the metrics that match your specified expression. For example, `WHERE InstanceType != 'c3.4xlarge'`. |
| `GROUP BY` | Optional. Groups the query results into multiple time series. For example, `GROUP BY ServiceName`. |
| `ORDER BY` | Optional. Specifies the order in which time series are returned. Options are `ASC`, `DESC`. |
| `LIMIT` | Optional. Limits the number of time series returned. |
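For example, the following query sketch (with hypothetical values) returns the ten EC2 instances with the highest average CPU utilization:

```sql
SELECT AVG(CPUUtilization)
FROM SCHEMA("AWS/EC2", InstanceId)
GROUP BY InstanceId
ORDER BY AVG() DESC
LIMIT 10
```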
#### Create a query in Builder mode
**To create a query in Builder mode:**
1. Browse and select a metric namespace, metric name, filter, group, and order options using information from the [Metrics Insights keywords table](#use-metrics-insights-keywords).
1. For each of these keywords, choose from the list of possible options.
Grafana constructs a SQL query based on your selections.
#### Create a query in Code mode
You can also write your SQL query directly in a code editor by using Code mode.
The code editor has a built-in autocomplete feature that suggests keywords, aggregations, namespaces, metrics, labels, and label values.
The suggestions appear after typing a space, comma, or dollar (`$`) character, or the keyboard combination <key>CTRL</key>+<key>Space</key>.
{{< figure src="/static/img/docs/cloudwatch/cloudwatch-code-editor-autocomplete-8.3.0.png" max-width="500px" class="docs-image--right" caption="Code editor autocomplete" >}}
> **Note:** Template variables in the code editor can interfere with autocompletion.
To run the query, click **Run query** above the code editor.
### Common query editor fields
Three fields located at the bottom of the metrics query editor are common to both Metric Search and Metric Query editors.
#### Id
The GetMetricData API requires that all queries have a unique ID. Use this field to specify an ID of choice. The ID can include numbers, letters, and underscores, and must start with a lowercase letter. If no ID is specified, Grafana generates an ID using the pattern `query[refId of the current query row]`, such as `queryA` for the first query row in the panel editor.
The ID can be used to reference queries in Metric Math expressions.
#### Period
A period is the length of time associated with a specific Amazon CloudWatch statistic. Periods are defined in numbers of seconds, and valid values for period are 1, 5, 10, 30, or any multiple of 60.
If the period field is left blank or set to `auto`, then it calculates automatically based on the time range and [CloudWatch's retention policy](https://aws.amazon.com/about-aws/whats-new/2016/11/cloudwatch-extends-metrics-retention-and-new-user-interface/). The formula used is `time range in seconds / 2000`, and then it snaps to the next higher value in an array of predefined periods `[60, 300, 900, 3600, 21600, 86400]` after removing periods based on retention. Click `Show Query Preview` in the query editor to see which period Grafana used.
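The following Go sketch illustrates the snapping logic described above; it omits the retention-based filtering and is not Grafana's actual implementation:

```go
package main

import "fmt"

// autoPeriod sketches the "auto" period rule: divide the time range
// by 2000, then snap up to the next predefined period.
func autoPeriod(timeRangeSeconds int) int {
	predefined := []int{60, 300, 900, 3600, 21600, 86400}
	target := timeRangeSeconds / 2000
	for _, p := range predefined {
		if p >= target {
			return p
		}
	}
	return predefined[len(predefined)-1]
}

func main() {
	fmt.Println(autoPeriod(24 * 60 * 60)) // 24-hour range -> 60
}
```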
#### Label
The label field allows you to override the default name of the metric legend using CloudWatch dynamic labels. If you're using a time-based dynamic label such as `${MIN_MAX_TIME_RANGE}`, then the legend value is derived from the current timezone specified in the time range picker. To see the full list of label patterns and the dynamic label limitations, refer to the [CloudWatch dynamic labels](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/graph-dynamic-labels.html) documentation.
> **Alias pattern deprecation:** Since Grafana v9.0, dynamic labels replaced alias patterns in the CloudWatch data source.
> Any existing alias pattern is migrated upon upgrade to a corresponding dynamic label pattern.
> To use alias patterns instead of dynamic labels, set the feature toggle `cloudWatchDynamicLabels` to `false` in the [Grafana configuration file]({{< relref "../../../setup-grafana/configure-grafana/" >}}).
> This reverts to the alias pattern system and uses the previous alias formatting logic.
The alias field will be deprecated and removed in a release.
During this interim period, we won't fix bugs related to the alias pattern system.
For details on why we're changing this feature, refer to [issue #48434](https://github.com/grafana/grafana/issues/48434).
## Query CloudWatch Logs
The logs query editor helps you write CloudWatch Logs Query Language queries across defined regions and log groups.
### Create a CloudWatch Logs query
1. Select the region and up to 20 log groups to query.
1. Use the main input area to write your query in [CloudWatch Logs Query Language](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax.html).
You can also write queries returning time series data by using the [`stats` command](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_Insights-Visualizing-Log-Data.html).
When making `stats` queries in [Explore]({{< relref "../../../explore/" >}}), make sure you are in Metrics Explore mode.
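For example, a `stats` query like the following sketch counts matching log events per five-minute interval, which Grafana can render as a time series:

```
filter @message like /error/
| stats count(*) by bin(5m)
```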
{{< figure src="/static/img/docs/v70/explore-mode-switcher.png" max-width="500px" class="docs-image--right" caption="Explore mode switcher" >}}
### Deep-link Grafana panels to the CloudWatch console
{{< figure src="/static/img/docs/v70/cloudwatch-logs-deep-linking.png" max-width="500px" class="docs-image--right" caption="CloudWatch Logs deep linking" >}}
To view your query in the CloudWatch Logs Insights console, click the `CloudWatch Logs Insights` button next to the query editor.
If you're not logged in to the CloudWatch console, the link forwards you to the login page.
The provided link is valid for any account, but displays the expected metrics only if you're logged in to the account that corresponds to the selected data source in Grafana.
---
aliases:
- /docs/grafana/latest/datasources/aws-cloudwatch/template-queries-cloudwatch/
- /docs/grafana/latest/datasources/cloudwatch/
description: Template variables in CloudWatch query
title: Template variables in CloudWatch query
weight: 10
---
# Using template variables in CloudWatch queries
Instead of hard-coding server, application, and sensor names in your metric queries, you can use variables. The variables are listed as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the display of data in your dashboard.
For an introduction to templating and template variables, refer to the [Templating]({{< relref "../../dashboards/variables" >}}) documentation.
## Query variable
You can use the following CloudWatch data source queries to specify the `Query Type` field in the Variable edit view. Use them to fill a variable's options list with values like `regions`, `namespaces`, `metric names`, and `dimension keys/values`.
Read more about the available dimensions in the [CloudWatch Metrics and Dimensions Reference](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CW_Support_For_AWS.html).
| Name | Description |
| ------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Regions` | Returns a list of all AWS regions |
| `Namespaces` | Returns a list of all the namespaces CloudWatch supports. |
| `Metrics` | Returns a list of metrics in the namespace. (specify region or use "default" for custom metrics) |
| `Dimension Keys` | Returns a list of dimension keys in the namespace. |
| `Dimension Values` | Returns a list of dimension values matching the specified `region`, `namespace`, `metric`, and `dimension_key`. You can use dimension `filters` to get more specific results. |
| `EBS Volume IDs` | Returns a list of volume ids matching the specified `region` and `instance_id`. |
| `EC2 Instance Attributes` | Returns a list of attributes matching the specified `region`, `attribute_name`, and `filters`. |
| `Resource ARNs` | Returns a list of ARNs matching the specified `region`, `resource_type` and `tags`. |
| `Statistics` | Returns a list of all the standard statistics. |
| `LogGroups` | Returns a list of all log groups matching the specified `region`. |
For details about the metrics CloudWatch provides, please refer to the [CloudWatch documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/CW_Support_For_AWS.html).
### Using variables in queries
You can use variables in queries. Refer to the [variable syntax documentation]({{< relref "../../dashboards/variables/variable-syntax" >}}).
## ec2_instance_attribute examples
### Filters
The `ec2_instance_attribute` query takes in `filters` as a filter name and a comma-separated list of values.
You can specify [pre-defined filters of ec2:DescribeInstances](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html).
### Selecting attributes
Only one attribute per instance can be returned. You can select any flat attribute, that is, an attribute that has a single value and isn't an object or array. Below is a list of available flat attributes:
- `AmiLaunchIndex`
- `Architecture`
- `ClientToken`
- `EbsOptimized`
- `EnaSupport`
- `Hypervisor`
- `IamInstanceProfile`
- `ImageId`
- `InstanceId`
- `InstanceLifecycle`
- `InstanceType`
- `KernelId`
- `KeyName`
- `LaunchTime`
- `Platform`
- `PrivateDnsName`
- `PrivateIpAddress`
- `PublicDnsName`
- `PublicIpAddress`
- `RamdiskId`
- `RootDeviceName`
- `RootDeviceType`
- `SourceDestCheck`
- `SpotInstanceRequestId`
- `SriovNetSupport`
- `SubnetId`
- `VirtualizationType`
- `VpcId`
You can select tags by prepending the tag name with `Tags.`. For example, the tag `Name` is selected with `Tags.Name`.
---
aliases:
- /docs/grafana/latest/datasources/aws-cloudwatch/template-queries-cloudwatch/
- /docs/grafana/latest/data-sources/aws-cloudwatch/template-variables/
description: Guide on using template variables in CloudWatch queries
keywords:
- grafana
- aws
- cloudwatch
- templates
- variables
menuTitle: Template variables
title: CloudWatch template variables
weight: 400
---
# CloudWatch template variables
Instead of hard-coding details such as server, application, and sensor names in metric queries, you can use variables.
Grafana lists these variables in dropdown select boxes at the top of the dashboard to help you change the data displayed in your dashboard.
Grafana refers to such variables as template variables.
For an introduction to templating and template variables, refer to the [Templating]({{< relref "../../../dashboards/variables" >}}) and [Add and manage variables]({{< relref "../../../dashboards/variables/add-template-variables" >}}) documentation.
## Use query variables
You can specify these CloudWatch data source queries in the Variable edit view's **Query Type** field.
Use them to fill a variable's options list with values like `regions`, `namespaces`, `metric names`, and `dimension keys/values`.
| Name | List returned |
| --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Regions** | All AWS regions. |
| **Namespaces** | All namespaces CloudWatch supports. |
| **Metrics** | Metrics in the namespace. (Specify region or use "default" for custom metrics.) |
| **Dimension Keys** | Dimension keys in the namespace. |
| **Dimension Values** | Dimension values matching the specified `region`, `namespace`, `metric`, and `dimension_key`. Use dimension `filters` for more specific results. |
| **EBS Volume IDs** | Volume ids matching the specified `region` and `instance_id`. |
| **EC2 Instance Attributes** | Attributes matching the specified `region`, `attribute_name`, and `filters`. |
| **Resource ARNs** | ARNs matching the specified `region`, `resource_type`, and `tags`. |
| **Statistics** | All standard statistics. |
| **LogGroups** | All log groups matching the specified `region`. |
For details on the available dimensions, refer to the [CloudWatch Metrics and Dimensions Reference](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CW_Support_For_AWS.html).
For details about the metrics CloudWatch provides, refer to the [CloudWatch documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/CW_Support_For_AWS.html).
### Use variables in queries
Use Grafana's variable syntax to include variables in queries.
For details, refer to the [variable syntax documentation]({{< relref "../../../dashboards/variables/variable-syntax" >}}).
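For example, assuming a dashboard variable named `instance` (hypothetical), you can reference it in a Metrics Insights query:

```sql
SELECT AVG(CPUUtilization)
FROM "AWS/EC2"
WHERE InstanceId = '$instance'
```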
## Use ec2_instance_attribute
### Filters
The `ec2_instance_attribute` query takes `filters` as a filter name and a comma-separated list of values.
You can specify [pre-defined filters of ec2:DescribeInstances](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html).
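For example, the following variable query sketch (with hypothetical region and tag values) returns the instance IDs of all instances tagged `Environment = production`:

```
ec2_instance_attribute(us-east-1, InstanceId, { "tag:Environment": [ "production" ] })
```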
### Select attributes
A query returns only one attribute per instance.
You can select any attribute that has a single value and isn't an object or array, also known as a flat attribute:
- `AmiLaunchIndex`
- `Architecture`
- `ClientToken`
- `EbsOptimized`
- `EnaSupport`
- `Hypervisor`
- `IamInstanceProfile`
- `ImageId`
- `InstanceId`
- `InstanceLifecycle`
- `InstanceType`
- `KernelId`
- `KeyName`
- `LaunchTime`
- `Platform`
- `PrivateDnsName`
- `PrivateIpAddress`
- `PublicDnsName`
- `PublicIpAddress`
- `RamdiskId`
- `RootDeviceName`
- `RootDeviceType`
- `SourceDestCheck`
- `SpotInstanceRequestId`
- `SriovNetSupport`
- `SubnetId`
- `VirtualizationType`
- `VpcId`
You can select tags by prepending the tag name with `Tags.`.
For example, select the tag `Name` by using `Tags.Name`.
---
aliases:
- /docs/grafana/latest/datasources/azuremonitor/
- /docs/grafana/latest/features/datasources/azuremonitor/
- /docs/grafana/latest/datasources/azuremonitor/deprecated-application-insights/
- /docs/grafana/latest/datasources/azure-monitor/
- /docs/grafana/latest/data-sources/azure-monitor/
description: Guide for using Azure Monitor in Grafana
keywords:
- grafana
- microsoft
- azure
- monitor
- application
- insights
- log
- analytics
- guide
menuTitle: Azure Monitor
title: Azure Monitor data source
weight: 300
---
# Azure Monitor data source
Grafana ships with built-in support for Azure Monitor, the Azure service to maximize the availability and performance of applications and services in the Azure Cloud.
This topic explains configuring and querying specific to the Azure Monitor data source.
For instructions on how to add a data source to Grafana, refer to the [administration documentation]({{< relref "../../administration/data-source-management" >}}).
Only users with the organization administrator role can add data sources.
Once you've added the Azure Monitor data source, you can [configure it]({{< relref "#configure-the-data-source" >}}) so that your Grafana instance's users can create queries in its [query editor]({{< relref "./query-editor" >}}) when they [build dashboards]({{< relref "../../dashboards/build-dashboards" >}}) and use [Explore]({{< relref "../../explore" >}}).
The Azure Monitor data source supports visualizing data from three Azure services:
- **Azure Monitor Metrics:** Collect numeric data from resources in your Azure account.
- **Azure Monitor Logs:** Collect log and performance data from your Azure account, and query using the Kusto Query Language (KQL).
- **Azure Resource Graph:** Query your Azure resources across subscriptions.
## Configure the data source
**To access the data source configuration page:**
1. Hover the cursor over the **Configuration** (gear) icon.
1. Select **Data Sources**.
1. Select the Azure Monitor data source.
### Configure Azure Active Directory (AD) authentication
You must create an app registration and service principal in Azure AD to authenticate the data source.
For configuration details, refer to the [Azure documentation for service principals](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#get-tenant-and-app-id-values-for-signing-in).
The app registration you create must have the `Reader` role assigned on the subscription.
For more information, refer to [Azure documentation for role assignments](https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal?tabs=current).
If you host Grafana in Azure, such as in App Service or Azure Virtual Machines, you can configure the Azure Monitor data source to use Managed Identity for secure authentication without entering credentials into Grafana.
For details, refer to [Configuring using Managed Identity]({{< relref "#configuring-using-managed-identity" >}}).
| Name | Description |
| --------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Authentication** | Enables Managed Identity. Selecting Managed Identity hides many of the other fields. For details, see [Configuring using Managed Identity](#configuring-using-managed-identity). |
| **Azure Cloud** | Sets the national cloud for your Azure account. For most users, this is the default "Azure". For details, see the [Azure documentation](https://docs.microsoft.com/en-us/azure/active-directory/develop/authentication-national-cloud). |
| **Directory (tenant) ID** | Sets the directory/tenant ID for the Azure AD app registration to use for authentication. For details, see the [Azure tenant and app ID docs](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#get-tenant-and-app-id-values-for-signing-in). |
| **Application (client) ID** | Sets the application/client ID for the Azure AD app registration to use for authentication. |
| **Client secret** | Sets the application client secret for the Azure AD app registration to use for authentication. For details, see the [Azure application secret docs](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#option-2-create-a-new-application-secret). |
| **Default subscription** | _(Optional)_ Sets a default subscription for template variables to use. |
| **Default workspace** | _(Optional)_ Sets a default workspace for Log Analytics-based template variable queries to use. |
### Provision the data source
You can define and configure the data source in YAML files as part of Grafana's provisioning system.
For more information about provisioning, and for available configuration options, refer to [Provisioning Grafana]({{< relref "../../administration/provisioning/#data-sources" >}}).
#### Provisioning examples
**Azure AD App Registration (client secret):**
```yaml
apiVersion: 1 # config file version
datasources:
- name: Azure Monitor
type: grafana-azure-monitor-datasource
access: proxy
jsonData:
azureAuthType: clientsecret
cloudName: azuremonitor # See table below
tenantId: <tenant-id>
clientId: <client-id>
subscriptionId: <subscription-id> # Optional, default subscription
secureJsonData:
clientSecret: <client-secret>
version: 1
```
**Managed Identity:**
```yaml
apiVersion: 1 # config file version
datasources:
- name: Azure Monitor
type: grafana-azure-monitor-datasource
access: proxy
jsonData:
azureAuthType: msi
subscriptionId: <subscription-id> # Optional, default subscription
version: 1
```
#### Supported cloud names
| Azure Cloud | `cloudName` Value |
| ---------------------------------------------------- | -------------------------- |
| **Microsoft Azure public cloud** | `azuremonitor` (_Default_) |
| **Microsoft Chinese national cloud** | `chinaazuremonitor` |
| **US Government cloud** | `govazuremonitor` |
| **Microsoft German national cloud ("Black Forest")** | `germanyazuremonitor` |
### Configure Managed Identity
If you host Grafana in Azure, such as in App Service or on Azure Virtual Machines, and have managed identity enabled on your VM, you can use managed identity to configure Azure Monitor in Grafana.
This lets you securely authenticate data sources without manually configuring credentials via Azure AD App Registrations for each data source.
For details on Azure managed identities, refer to the [Azure documentation](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview).
**To enable managed identity for Grafana:**
1. Set the `managed_identity_enabled` flag in the `[azure]` section of the [Grafana server configuration]({{< relref "../../setup-grafana/configure-grafana/#azure" >}}).
```ini
[azure]
managed_identity_enabled = true
```
2. In the Azure Monitor data source configuration, set **Authentication** to **Managed Identity**.
This hides the directory ID, application ID, and client secret fields, and the data source uses managed identity to authenticate to Azure Monitor Metrics and Logs, and Azure Resource Graph.
{{< figure src="/static/img/docs/azure-monitor/managed-identity.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Monitor Metrics screenshot showing Dimensions" >}}
## Query the data source
The Azure Monitor data source can query data from Azure Monitor Metrics and Logs, and the Azure Resource Graph, each with its own specialized query editor.
For details, see the [query editor documentation]({{< relref "./query-editor/" >}}).
## Use template variables
Instead of hard-coding details such as server, application, and sensor names in metric queries, you can use variables.
Grafana lists these variables in dropdown select boxes at the top of the dashboard to help you change the data displayed in your dashboard.
Grafana refers to such variables as template variables.
For details, see the [template variables documentation]({{< relref "./template-variables/" >}}).
## Application Insights and Insights Analytics (removed)
Until Grafana v8.0, you could query the same Azure Application Insights data using Application Insights and Insights Analytics.
These queries were deprecated in Grafana v7.5. In Grafana v8.0, Application Insights and Insights Analytics were made read-only in favor of querying this data through Metrics and Logs. These query methods were completely removed in Grafana v9.0.
If you're upgrading from a Grafana version prior to v9.0 and relied on Application Insights and Analytics queries, refer to the [Grafana v9.0 documentation](/v9.0/datasources/azuremonitor/deprecated-application-insights/) for help migrating these queries to Metrics and Logs queries.

@ -0,0 +1,289 @@
---
aliases:
- /docs/grafana/latest/data-sources/azure-monitor/query-editor/
description: Guide for using the Azure Monitor data source's query editor
keywords:
- grafana
- microsoft
- azure
- monitor
- metrics
- logs
- resources
- queries
menuTitle: Query editor
title: Azure Monitor query editor
weight: 300
---
# Azure Monitor query editor
This topic explains querying specific to the Azure Monitor data source.
For general documentation on querying data sources in Grafana, see [Query and transform data]({{< relref "../../../panels-visualizations/query-transform-data" >}}).
## Choose a query editing mode
The Azure Monitor data source's query editor has three modes depending on which Azure service you want to query:
- **Metrics** for [Azure Monitor Metrics]({{< relref "#query-azure-monitor-metrics" >}})
- **Logs** for [Azure Monitor Logs]({{< relref "#query-azure-monitor-logs" >}})
- [**Azure Resource Graph**]({{< relref "#query-azure-resource-graph" >}})
## Query Azure Monitor Metrics
Azure Monitor Metrics collects numeric data from [supported resources](https://docs.microsoft.com/en-us/azure/azure-monitor/monitor-reference), and you can query them to investigate your resources' health and usage and maximize availability and performance.
Azure Monitor Metrics uses a lightweight format that stores only numeric data in a specific structure and supports near real-time scenarios, making it useful for fast detection of issues.
In contrast, Azure Monitor Logs can store a variety of data types, each with their own structure.
{{< figure src="/static/img/docs/azure-monitor/query-editor-metrics.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Logs Metrics sample query visualizing CPU percentage over time" >}}
### Create a Metrics query
**To create a Metrics query:**
1. In a Grafana panel, select the **Azure Monitor** data source.
1. Select the **Metrics** service.
1. Select a resource to query metrics from by using the subscription, resource group, resource type, and resource fields.
1. To select a namespace other than the default, such as for storage accounts that organize metrics under multiple namespaces, use the **Namespace** option.
> **Note:** Not all metrics returned by the Azure Monitor Metrics API have values.
> The data source retrieves lists of supported metrics for each subscription and ignores metrics that never have values.
1. Select a metric from the **Metric** field.
Optionally, you can apply further aggregations or filter by dimensions.
1. Change the aggregation from the default average to show minimum, maximum, or total values.
1. Specify a custom time grain. By default, Grafana selects a time grain interval for you based on your selected time range.
1. For metrics with multiple dimensions, you can split and filter the returned metrics.
For example, the Application Insights dependency calls metric supports returning multiple time series for successful and unsuccessful calls.
{{< figure src="/static/img/docs/azure-monitor/query-editor-metrics-dimensions.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Monitor Metrics screenshot showing Dimensions" >}}
The available options change depending on what is relevant to the selected metric.
You can also augment queries by using [template variables]({{< relref "./template-variables/" >}}).
### Format legend aliases
You can change the legend label for Metrics by using aliases.
In the **Legend Format** field, you can combine the aliases defined below in any way you want.
For example:
- `Blob Type: {{ blobtype }}` becomes `Blob Type: PageBlob`, `Blob Type: BlockBlob`
- `{{ resourcegroup }} - {{ resourcename }}` becomes `production - web_server`
| Alias pattern | Description |
| ----------------------------- | ------------------------------------------------------------------------------------------------------ |
| `{{ resourcegroup }}` | Replaced with the resource group. |
| `{{ namespace }}` | Replaced with the resource type or namespace, such as `Microsoft.Compute/virtualMachines`. |
| `{{ resourcename }}` | Replaced with the resource name. |
| `{{ metric }}` | Replaced with the metric name, such as "Percentage CPU". |
| _`{{ arbitraryDimensionID }}`_ | Replaced with the value of the specified dimension. For example, `{{ blobtype }}` becomes `BlockBlob`. |
| `{{ dimensionname }}` | _(Legacy for backward compatibility)_ Replaced with the name of the first dimension. |
| `{{ dimensionvalue }}` | _(Legacy for backward compatibility)_ Replaced with the value of the first dimension. |
### Filter using dimensions
Some metrics also have dimensions, which associate additional metadata with them.
Dimensions are represented as key-value pairs assigned to each value of a metric.
Grafana can display and filter metrics based on dimension values.
The data source supports the `equals`, `not equals`, and `starts with` operators as detailed in the [Monitor Metrics API documentation](https://docs.microsoft.com/en-us/rest/api/monitor/metrics/list).
For more information on multi-dimensional metrics, refer to the [Azure Monitor data platform metrics documentation](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/data-platform-metrics#multi-dimensional-metrics) and [Azure Monitor filtering documentation](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/metrics-charts#filters).
## Query Azure Monitor Logs
Azure Monitor Logs collects and organizes log and performance data from [supported resources](https://docs.microsoft.com/en-us/azure/azure-monitor/monitor-reference), and makes many sources of data available to query together with the [Kusto Query Language (KQL)](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/).
While Azure Monitor Metrics stores only simplified numerical data, Logs can store different data types, each with their own structure.
You can also perform complex analysis of Logs data by using KQL.
{{< figure src="/static/img/docs/azure-monitor/query-editor-logs.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Monitor Logs sample query comparing successful requests to failed requests" >}}
### Create a Logs query
**To create a Logs query:**
1. In a Grafana panel, select the **Azure Monitor** data source.
1. Select the **Logs** service.
1. Select a resource to query.
Alternatively, you can dynamically query all resources under a single resource group or subscription.
1. Enter your KQL query.
You can also augment queries by using [template variables]({{< relref "./template-variables/" >}}).
### Logs query examples
Azure Monitor Logs queries are written using the Kusto Query Language (KQL), a rich language similar to SQL.
The Azure documentation includes resources to help you learn KQL:
- [Log queries in Azure Monitor](https://docs.microsoft.com/en-us/azure/azure-monitor/logs/log-query-overview)
- [Getting started with Kusto](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/concepts/)
- [Tutorial: Use Kusto queries in Azure Monitor](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/tutorial?pivots=azuremonitor)
- [SQL to Kusto cheat sheet](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/sqlcheatsheet)
This example query returns a virtual machine's CPU performance, averaged over 5m time grains:
```kusto
Perf
// $__timeFilter is a special Grafana macro that filters the results to the time span of the dashboard
| where $__timeFilter(TimeGenerated)
| where CounterName == "% Processor Time"
| summarize avg(CounterValue) by bin(TimeGenerated, 5m), Computer
| order by TimeGenerated asc
```
Use time series queries for values that change over time, usually for graph visualizations such as the Time series panel.
Each query should return at least a datetime column and a numeric value column.
The result must also be sorted in ascending order by the datetime column.
You can also create a query with at least one non-numeric, non-datetime column.
Azure Monitor considers those columns to be dimensions, and they become labels in the response.
For example, this query returns the aggregated count grouped by hour, Computer, and CounterName:
```kusto
Perf
| where $__timeFilter(TimeGenerated)
| summarize count() by bin(TimeGenerated, 1h), Computer, CounterName
| order by TimeGenerated asc
```
You can also select additional number value columns, with or without multiple dimensions.
For example, this query returns a count and average value by hour, Computer, CounterName, and InstanceName:
```kusto
Perf
| where $__timeFilter(TimeGenerated)
| summarize Samples=count(), ["Avg Value"]=avg(CounterValue)
by bin(TimeGenerated, $__interval), Computer, CounterName, InstanceName
| order by TimeGenerated asc
```
Use table queries with the Table panel to produce a list of columns and rows.
This query returns rows with the six specified columns:
```kusto
AzureActivity
| where $__timeFilter()
| project TimeGenerated, ResourceGroup, Category, OperationName, ActivityStatus, Caller
| order by TimeGenerated desc
```
### Use macros in Logs queries
To help you write queries, you can use several Grafana macros in the `where` clause:
| Macro | Description |
| ------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `$__timeFilter()` | Filters the results to the time range of the dashboard.<br/>Example: `TimeGenerated >= datetime(2018-06-05T18:09:58.907Z) and TimeGenerated <= datetime(2018-06-05T20:09:58.907Z)`. |
| `$__timeFilter(datetimeColumn)` | Like `$__timeFilter()`, but specifies a custom field to filter on. |
| `$__timeFrom()` | Expands to the start of the dashboard time range.<br/>Example: `datetime(2018-06-05T18:09:58.907Z)`. |
| `$__timeTo()` | Expands to the end of the dashboard time range.<br/>Example: `datetime(2018-06-05T20:09:58.907Z)`. |
| `$__escapeMulti($myVar)` | Escapes illegal characters in multi-value template variables.<br/>If `$myVar` has the values `'\\grafana-vm\Network(eth0)\Total','\\hello!'` as a string, use this to expand it to `@'\\grafana-vm\Network(eth0)\Total', @'\\hello!'`.<br/><br/>If using single-value variables, escape the variable inline instead: `@'\$myVar'`. |
| `$__contains(colName, $myVar)` | Expands multi-value template variables.<br/>If `$myVar` has the value `'value1','value2'`, use this to expand it to `colName in ('value1','value2')`.<br/><br/>If using the `All` option, check the `Include All Option` checkbox, and type the value `all` in the `Custom all value` field. If `$myVar` has the value `all`, the macro instead expands to `1 == 1`.<br/>For template variables with many options, this avoids building a large "where..in" clause, which improves performance. |
Additionally, Grafana has the built-in [`$__interval` macro]({{< relref "../../../panels-visualizations/query-transform-data#query-options" >}}), which calculates an interval in seconds.
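For example, here's a minimal sketch that combines `$__timeFilter()` and `$__interval`, assuming the standard Log Analytics `Heartbeat` table exists in your workspace:

```kusto
Heartbeat
// Restrict results to the dashboard's time range
| where $__timeFilter(TimeGenerated)
// Group into bins sized by Grafana's calculated interval
| summarize Heartbeats = count() by bin(TimeGenerated, $__interval), Computer
| order by TimeGenerated asc
```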
## Query Azure Resource Graph
Azure Resource Graph (ARG) is an Azure service designed to extend Azure Resource Management with efficient resource exploration and the ability to query at scale across a set of subscriptions, so that you can more effectively govern an environment.
By querying ARG, you can query resources with complex filtering, iteratively explore resources based on governance requirements, and assess the impact of applying policies in a vast cloud environment.
{{< figure src="/static/img/docs/azure-monitor/query-editor-arg.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Resource Graph sample query listing virtual machines on an account" >}}
### Create a Resource Graph query
ARG queries are written in a variant of the [Kusto Query Language (KQL)](https://docs.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language), but not all Kusto language features are available in ARG.
An Azure Resource Graph query is formatted as table data.
If your Azure credentials grant you access to multiple subscriptions, you can choose multiple subscriptions before entering queries.
### Resource Graph query examples
The Azure documentation also includes [sample queries](https://docs.microsoft.com/en-gb/azure/governance/resource-graph/samples/starter) to help you get started.
**Sort results by resource properties:**
This query returns all resources in the selected subscriptions, but only the name, type, and location properties:
```kusto
Resources
| project name, type, location
| order by name asc
```
This query uses `order by` to sort the properties by the `name` property in ascending (`asc`) order.
You can change which property to sort by and the order (`asc` or `desc`).
This query uses `project` to show only the listed properties in the results.
You can use this to add or remove properties in your queries.
**Query resources with complex filtering:**
You can filter for Azure resources with a tag name and value.
For example, this query returns a list of resources with an `environment` tag value of `Internal`:
```kusto
Resources
| where tags.environment=~'internal'
| project name
```
This query uses the `=~` operator to make the tag value match case-insensitive.
You can also use `project` to show other properties, or to add and remove properties.
**Group and aggregate the values by property:**
You can use `summarize` and `count` to define how to group and aggregate values by property.
For example, this query returns counts of healthy, unhealthy, and not applicable resources per recommendation:
```kusto
securityresources
| where type == 'microsoft.security/assessments'
| extend resourceId=id,
    recommendationId=name,
    resourceType=type,
    recommendationName=properties.displayName,
    source=properties.resourceDetails.Source,
    recommendationState=properties.status.code,
    description=properties.metadata.description,
    assessmentType=properties.metadata.assessmentType,
    remediationDescription=properties.metadata.remediationDescription,
    policyDefinitionId=properties.metadata.policyDefinitionId,
    implementationEffort=properties.metadata.implementationEffort,
    recommendationSeverity=properties.metadata.severity,
    category=properties.metadata.categories,
    userImpact=properties.metadata.userImpact,
    threats=properties.metadata.threats,
    portalLink=properties.links.azurePortal
| summarize numberOfResources=count(resourceId) by tostring(recommendationName), tostring(recommendationState)
```
In ARG, many nested properties (such as `properties.displayName`) are of a `dynamic` type and should be cast to a string with `tostring()` in order to operate on them.
### Use macros in Resource Graph queries
To help you write queries, you can use several Grafana macros in the `where` clause:
| Macro | Description |
| ------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `$__timeFilter()` | Expands to `timestamp ≥ datetime(2018-06-05T18:09:58.907Z) and timestamp ≤ datetime(2018-06-05T20:09:58.907Z)`, where the from and to datetimes are from the Grafana time picker. |
| `$__timeFilter(datetimeColumn)` | Expands to `datetimeColumn ≥ datetime(2018-06-05T18:09:58.907Z) and datetimeColumn ≤ datetime(2018-06-05T20:09:58.907Z)`, where the from and to datetimes are from the Grafana time picker. |
| `$__timeFrom()` | Returns the From datetime from the Grafana picker.<br/>Example: `datetime(2018-06-05T18:09:58.907Z)`. |
| `$__timeTo()` | Returns the To datetime from the Grafana picker.<br/>Example: `datetime(2018-06-05T20:09:58.907Z)`. |
| `$__escapeMulti($myVar)` | Escapes illegal characters from multi-value template variables.<br/>If `$myVar` has the values `'\\grafana-vm\Network(eth0)\Total','\\hello!'` as a string, this expands it to `@'\\grafana-vm\Network(eth0)\Total', @'\\hello!'`.<br>If you use single-value variables, escape the variable inline instead: `@'\$myVar'`. |
| `$__contains(colName, $myVar)` | Expands multi-value template variables.<br/>If `$myVar` has the value `'value1','value2'`, this expands it to `colName in ('value1','value2')`.<br/>If using the `All` option, then check the `Include All Option` checkbox and in the `Custom all value` field type in the following value: `all`.<br/>If `$myVar` has value `all`, this instead expands to `1 == 1`.<br/>For template variables with many options, this avoids building a large "where..in" clause, which improves performance. |
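Most ARG tables, such as `Resources`, don't include a `timestamp` column, so the time macros apply only to tables that expose one. As a sketch, assuming the `resourcechanges` table is available in your subscriptions, you can cast its nested change timestamp and filter it with the time macros:

```kusto
resourcechanges
// The change timestamp is a nested dynamic property; cast it to datetime first
| extend changeTime = todatetime(properties.changeAttributes.timestamp)
// Keep only changes within the dashboard's time range
| where changeTime >= $__timeFrom() and changeTime <= $__timeTo()
| project changeTime, changeType = tostring(properties.changeType), resourceId = tostring(properties.targetResourceId)
| order by changeTime asc
```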
## Working with large Azure resource data sets
If a request exceeds the [maximum allowed value of records](https://docs.microsoft.com/en-us/azure/governance/resource-graph/concepts/work-with-data#paging-results), the result is paginated and only the first page of results is returned.
You can use filters to reduce the number of records returned below that value.
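For example, narrowing a query to a single resource type, projecting only the columns you need, and applying an explicit `limit` all help keep results below the paging threshold. A sketch; adjust the resource type and limit to your environment:

```kusto
Resources
// Filter to one resource type to reduce the number of matching records
| where type == 'microsoft.compute/virtualmachines'
| project name, resourceGroup, location
| order by name asc
// Cap the result set explicitly
| limit 500
```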

@ -0,0 +1,67 @@
---
aliases:
- /docs/grafana/latest/datasources/azuremonitor/template-variables/
- /docs/grafana/latest/data-sources/azure-monitor/template-variables/
description: Using template variables with Azure Monitor in Grafana
keywords:
- grafana
- microsoft
- azure
- monitor
- application
- insights
- log
- analytics
- templates
- variables
- queries
menuTitle: Template variables
title: Azure Monitor template variables
weight: 400
---
# Azure Monitor template variables
Instead of hard-coding details such as resource group or resource name values in metric queries, you can use variables.
This helps you create more interactive, dynamic, and reusable dashboards.
Grafana refers to such variables as template variables.
For an introduction to templating and template variables, refer to the [Templating]({{< relref "../../../dashboards/variables" >}}) and [Add and manage variables]({{< relref "../../../dashboards/variables/add-template-variables" >}}) documentation.
## Use query variables
You can specify these Azure Monitor data source queries in the Variable edit view's **Query Type** field.
| Name | Description |
| ------------------- | -------------------------------------------------------------------------------------------- |
| **Subscriptions** | Returns subscriptions. |
| **Resource Groups** | Returns resource groups for a specified subscription. |
| **Namespaces** | Returns metric namespaces for the specified subscription and resource group. |
| **Resource Names** | Returns a list of resource names for a specified subscription, resource group and namespace. |
| **Metric Names** | Returns a list of metric names for a resource. |
| **Workspaces** | Returns a list of workspaces for the specified subscription. |
| **Logs** | Use a KQL query to return values. |
| **Resource Graph** | Use an ARG query to return values. |
Any Log Analytics Kusto Query Language (KQL) query that returns a single list of values can also be used in the Query field.
For example:
| Query | List of values returned |
| ----------------------------------------------------------------------------------------- | --------------------------------------- |
| `workspace("myWorkspace").Heartbeat \| distinct Computer` | Virtual machines |
| `workspace("$workspace").Heartbeat \| distinct Computer` | Virtual machines with template variable |
| `workspace("$workspace").Perf \| distinct ObjectName` | Objects from the Perf table |
| `workspace("$workspace").Perf \| where ObjectName == "$object"` `\| distinct CounterName` | Metric names from the Perf table |
### Query variable example
This time series query uses query variables:
```kusto
Perf
| where ObjectName == "$object" and CounterName == "$metric"
| where TimeGenerated >= $__timeFrom() and TimeGenerated <= $__timeTo()
| where $__contains(Computer, $computer)
| summarize avg(CounterValue) by bin(TimeGenerated, $__interval), Computer
| order by TimeGenerated asc
```
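A Resource Graph query can populate a variable in the same way. For example, this sketch returns a list of virtual machine names, assuming your selected subscriptions contain virtual machines:

```kusto
Resources
| where type == 'microsoft.compute/virtualmachines'
| project name
| order by name asc
```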

@ -1,302 +0,0 @@
---
aliases:
- /docs/grafana/latest/datasources/azuremonitor/
- /docs/grafana/latest/features/datasources/azuremonitor/
description: Guide for using Azure Monitor in Grafana
keywords:
- grafana
- microsoft
- azure
- monitor
- application
- insights
- log
- analytics
- guide
title: Azure Monitor
weight: 300
---
# Azure Monitor data source
Grafana includes built-in support for Azure Monitor, the Azure service to maximize the availability and performance of your applications and services in the Azure Cloud. The Azure Monitor data source supports visualizing data from three Azure services:
- **Azure Monitor Metrics** to collect numeric data from resources in your Azure account.
- **Azure Monitor Logs** to collect log and performance data from your Azure account, and query using the powerful Kusto Language.
- **Azure Resource Graph** to quickly query your Azure resources across subscriptions.
This topic explains configuring, querying, and other options specific to the Azure Monitor data source. Refer to [Add a data source]({{< relref "../add-a-data-source/" >}}) for instructions on how to add a data source to Grafana.
## Azure Monitor configuration
To access Azure Monitor configuration, hover your mouse over the **Configuration** (gear) icon, click **Data Sources**, and then select the Azure Monitor data source. If you haven't already, you'll need to [add the Azure Monitor data source]({{< relref "../add-a-data-source/" >}}).
You must create an app registration and service principal in Azure AD to authenticate the data source. For more information, refer to [Azure documentation](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#get-tenant-and-app-id-values-for-signing-in) for configuration details. The app registration you create will need to have the `Reader` role assigned on the subscription. For more information, refer to [Azure docs for role assignments](https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal?tabs=current).
Alternatively, if you host Grafana in Azure (such as in App Service or Azure Virtual Machines), you can configure the Azure Monitor data source to use Managed Identity to securely authenticate without entering credentials into Grafana. Refer to [Configuring using Managed Identity](#configuring-using-managed-identity) for details.
| Name | Description |
| ----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Authentication | Enables Managed Identity. Selecting Managed Identity will hide many of the fields below. See [Configuring using Managed Identity](#configuring-using-managed-identity) for more details. |
| Azure Cloud | The national cloud for your Azure account. For most users, this is the default "Azure". For more information, see [the Azure documentation.](https://docs.microsoft.com/en-us/azure/active-directory/develop/authentication-national-cloud) |
| Directory (tenant) ID | The directory/tenant ID for the Azure AD app registration to use for authentication. See [Get tenant and app ID values for signing in](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#get-tenant-and-app-id-values-for-signing-in) from the Azure documentation. |
| Application (client) ID | The application/client ID for the Azure AD app registration to use for authentication. |
| Client secret | The application client secret for the Azure AD app registration to use for authentication. See [Create a new application secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#option-2-create-a-new-application-secret) from the Azure documentation. |
| Default subscription | _(optional)_ Sets a default subscription for template variables to use |
| Default workspace | _(optional)_ Sets a default workspace for Log Analytics-based template variable queries to use |
## Azure Monitor query editor
The Azure Monitor data source has three different modes depending on which Azure service you wish to query:
- **Metrics** for [Azure Monitor Metrics](#querying-azure-monitor-metrics)
- **Logs** for [Azure Monitor Logs](#querying-azure-monitor-logs)
- [**Azure Resource Graph**](#querying-azure-resource-graph)
### Querying Azure Monitor Metrics
Azure Monitor Metrics collects numeric data from [supported resources](https://docs.microsoft.com/en-us/azure/azure-monitor/monitor-reference) and allows you to query them to investigate the health and utilization of your resources to maximise availability and performance.
Metrics are a lightweight format that stores only simple numeric data in a particular structure. Metrics supports near real-time scenarios, making it useful for fast detection of issues. Azure Monitor Logs can store a variety of different data types, each with its own structure.
{{< figure src="/static/img/docs/azure-monitor/query-editor-metrics.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Logs Metrics sample query visualizing CPU percentage over time" >}}
#### Your first Azure Monitor Metrics query
1. Select the Metrics service
1. Select a resource to pull metrics from using the subscription, resource group, resource type, and resource fields.
1. Some resources, such as storage accounts, organise metrics under multiple metric namespaces. Grafana will pick a default namespace, but change this to see which other metrics are available.
1. Select a metric from the Metric field.
Optionally, you can apply further aggregations or filter by dimensions for further analysis.
1. Change the aggregation from the default average to show minimum, maximum or total values.
1. Set a specific custom time grain. By default Grafana will automatically select a time grain interval based on your selected time range.
1. For metrics that have multiple dimensions, you can split and filter further the returned metrics. For example, the Application Insights dependency calls metric supports returning multiple time series for successful vs unsuccessful calls.
{{< figure src="/static/img/docs/azure-monitor/query-editor-metrics-dimensions.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Monitor Metrics screenshot showing Dimensions" >}}
The options available will change depending on what is most relevant to the selected metric.
#### Legend alias formatting
The legend label for Metrics can be changed using aliases. In the Legend Format field, you can combine the aliases defined below in any way you want, for example:
- `Blob Type: {{ blobtype }}` becomes `Blob Type: PageBlob`, `Blob Type: BlockBlob`
- `{{ resourcegroup }} - {{ resourcename }}` becomes `production - web_server`
| Alias pattern | Description |
| ----------------------------- | ------------------------------------------------------------------------------------------- |
| `{{ resourcegroup }}` | Replaced with the resource group |
| `{{ namespace }}` | Replaced with the resource type / namespace (e.g. Microsoft.Compute/virtualMachines) |
| `{{ resourcename }}` | Replaced with the resource name |
| `{{ metric }}` | Replaced with the metric name (e.g. Percentage CPU) |
| _`{{ arbitraryDimensionID }}`_ | Replaced with the value of the specified dimension. (e.g. {{ blobtype }} becomes BlockBlob) |
| `{{ dimensionname }}` | _(Legacy for backwards compatibility)_ Replaced with the name of the first dimension |
| `{{ dimensionvalue }}` | _(Legacy for backwards compatibility)_ Replaced with the value of the first dimension |
#### Dimensions
Some metrics have additional metadata associated - dimensions. Dimensions are represented as key-value pairs assigned to each value of a metric. Grafana allows for the display and filtering of metrics based on dimension values.
Multiple operators are supported (as detailed [here](https://docs.microsoft.com/en-us/rest/api/monitor/metrics/list)) - the `equals`, `not equals`, and `starts with` operators.
Further documentation on multi-dimensional metrics is available [here](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/data-platform-metrics#multi-dimensional-metrics), and documentation on filtering [here](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/metrics-charts#filters).
#### Supported Azure Monitor metrics
Not all metrics returned by the Azure Monitor Metrics API have values. To make it easier for you when building a query, the Grafana data source has a list of supported metrics and ignores metrics which will never have values. This list is updated regularly as new services and metrics are added to the Azure cloud. For more information about the list of metrics, refer to [current supported namespaces](https://github.com/grafana/grafana/blob/main/public/app/plugins/datasource/grafana-azure-monitor-datasource/azureMetadata/metricNamespaces.ts).
### Querying Azure Monitor Logs
Azure Monitor Logs collects and organises log and performance data from [supported resources](https://docs.microsoft.com/en-us/azure/azure-monitor/monitor-reference) and makes many sources of data available to query together with the sophisticated [Kusto Query Language (KQL)](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/).
While Azure Monitor Metrics only stores simplified numerical data, Logs can store different data types, each with its own structure, and can perform complex analysis of data using KQL.
{{< figure src="/static/img/docs/azure-monitor/query-editor-logs.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Monitor Logs sample query comparing successful requests to failed requests" >}}
#### Your first Azure Monitor Logs query
1. Select the Logs service
2. Select a resource to query. Alternatively, you can dynamically query all resources under a single resource group or subscription.
3. Enter in your KQL query. See below for examples.
##### Kusto Query Language
Azure Monitor Logs queries are written using the Kusto Query Language (KQL), a rich language designed to be easy to read and write, which should be familiar to those who know SQL. The Azure documentation has plenty of resources to help with learning KQL:
- [Log queries in Azure Monitor](https://docs.microsoft.com/en-us/azure/azure-monitor/logs/log-query-overview)
- [Getting started with Kusto](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/concepts/)
- [Tutorial: Use Kusto queries in Azure Monitor](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/tutorial?pivots=azuremonitor)
- [SQL to Kusto cheat sheet](https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/sqlcheatsheet)
Here is an example query that returns a virtual machine's CPU performance, averaged over 5m time grains
```kusto
Perf
// $__timeFilter is a special Grafana macro that filters the results to the time span of the dashboard
| where $__timeFilter(TimeGenerated)
| where CounterName == "% Processor Time"
| summarize avg(CounterValue) by bin(TimeGenerated, 5m), Computer
| order by TimeGenerated asc
```
Time series queries are for values that change over time, usually for graph visualisations such as the Time series panel. Each query should return at least a datetime column and a numeric value column. The result must also be sorted in ascending order by the datetime column.
A query can also have one or more non-numeric/non-datetime columns, and those columns are considered dimensions and become labels in the response. For example, a query that returns the aggregated count grouped by hour, Computer, and the CounterName:
```kusto
Perf
| where $__timeFilter(TimeGenerated)
| summarize count() by bin(TimeGenerated, 1h), Computer, CounterName
| order by TimeGenerated asc
```
You can also select additional number value columns (with or without multiple dimensions). For example, getting a count and average value by hour, Computer, CounterName, and InstanceName:
```kusto
Perf
| where $__timeFilter(TimeGenerated)
| summarize Samples=count(), ["Avg Value"]=avg(CounterValue)
by bin(TimeGenerated, $__interval), Computer, CounterName, InstanceName
| order by TimeGenerated asc
```
Table queries are mainly used in the Table panel and show a list of columns and rows. This example query returns rows with the six specified columns:
```kusto
AzureActivity
| where $__timeFilter()
| project TimeGenerated, ResourceGroup, Category, OperationName, ActivityStatus, Caller
| order by TimeGenerated desc
```
##### Logs macros
To make writing queries easier there are several Grafana macros that can be used in the where clause of a query:
| Macro | Description |
| ------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `$__timeFilter()` | Used to filter the results to the time range of the dashboard.<br/>Example: `TimeGenerated >= datetime(2018-06-05T18:09:58.907Z) and TimeGenerated <= datetime(2018-06-05T20:09:58.907Z)`. |
| `$__timeFilter(datetimeColumn)` | Like `$__timeFilter()`, but specifies a custom field to filter on. |
| `$__timeFrom()` | Expands to the start of the dashboard time range.<br/>Example: `datetime(2018-06-05T18:09:58.907Z)`. |
| `$__timeTo()` | Expands to the end of the dashboard time range.<br/>Example: `datetime(2018-06-05T20:09:58.907Z)`. |
| `$__escapeMulti($myVar)` | Used with multi-value template variables that contain illegal characters.<br/>If `$myVar` has the following two values as a string `'\\grafana-vm\Network(eth0)\Total','\\hello!'`, then it expands to `@'\\grafana-vm\Network(eth0)\Total', @'\\hello!'`.<br/><br/>If using single value variables there is no need for this macro, simply escape the variable inline instead - `@'\$myVar'`. |
| `$__contains(colName, $myVar)` | Used with multi-value template variables.<br/>If `$myVar` has the value `'value1','value2'`, it expands to: `colName in ('value1','value2')`.<br/><br/>If using the `All` option, then check the `Include All Option` checkbox and in the `Custom all value` field type in the value `all`. If `$myVar` has value `all` then the macro will instead expand to `1 == 1`. For template variables with a lot of options, this will increase the query performance by not building a large "where..in" clause. |
Additionally, Grafana has the built-in `$__interval` macro.
### Querying Azure Resource Graph
Azure Resource Graph (ARG) is a service in Azure that is designed to extend Azure Resource Management by providing efficient and performant resource exploration, with the ability to query at scale across a given set of subscriptions so that you can effectively govern your environment. By querying ARG, you can query resources with complex filtering, iteratively explore resources based on governance requirements, and assess the impact of applying policies in a vast cloud environment.
{{< figure src="/static/img/docs/azure-monitor/query-editor-arg.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Resource Graph sample query listing virtual machines on an account" >}}
### Your first Azure Resource Graph query
ARG queries are written in a variant of the [Kusto Query Language](https://docs.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language), but not all Kusto language features are available in ARG. An Azure Resource Graph query is formatted as table data.
If your credentials give you access to multiple subscriptions, then you can choose multiple subscriptions before entering queries.
#### Sort results by resource properties
Here is an example query that returns all resources in the selected subscriptions, but only the name, type, and location properties:
```kusto
Resources
| project name, type, location
| order by name asc
```
The query uses `order by` to sort the properties by the `name` property in ascending (`asc`) order. You can change what property to sort by and the order (`asc` or `desc`). The query uses `project` to show only the listed properties in the results. You can add or remove properties.
#### Query resources with complex filtering
This example filters for Azure resources with a tag name of `environment` that have a value of `Internal`. You can change these to any desired tag key and value. The `=~` operator tells Resource Graph to match case-insensitively. You can project by other properties or add/remove more.
For example, a query that returns a list of resources with an `environment` tag value of `Internal`:
```kusto
Resources
| where tags.environment=~'internal'
| project name
```
#### Group and aggregate the values by property
You can also use `summarize` and `count` to define how to group and aggregate the values by property. For example, returning the count of healthy, unhealthy, and not applicable resources per recommendation:
```kusto
securityresources
| where type == 'microsoft.security/assessments'
| extend resourceId=id,
    recommendationId=name,
    resourceType=type,
    recommendationName=properties.displayName,
    source=properties.resourceDetails.Source,
    recommendationState=properties.status.code,
    description=properties.metadata.description,
    assessmentType=properties.metadata.assessmentType,
    remediationDescription=properties.metadata.remediationDescription,
    policyDefinitionId=properties.metadata.policyDefinitionId,
    implementationEffort=properties.metadata.implementationEffort,
    recommendationSeverity=properties.metadata.severity,
    category=properties.metadata.categories,
    userImpact=properties.metadata.userImpact,
    threats=properties.metadata.threats,
    portalLink=properties.links.azurePortal
| summarize numberOfResources=count(resourceId) by tostring(recommendationName), tostring(recommendationState)
```
In Azure Resource Graph many nested properties (`properties.displayName`) are of a `dynamic` type, and should be cast to a string with `tostring()` to operate on them.
The Azure documentation also hosts [many sample queries](https://docs.microsoft.com/en-gb/azure/governance/resource-graph/samples/starter) to help you get started.
### Azure Resource Graph macros
You can use Grafana macros when constructing a query. Use the macros in the where clause of a query:
- `$__timeFilter()` - Expands to
`timestamp ≥ datetime(2018-06-05T18:09:58.907Z) and`
`timestamp ≤ datetime(2018-06-05T20:09:58.907Z)` where the from and to datetimes are from the Grafana time picker.
- `$__timeFilter(datetimeColumn)` - Expands to
`datetimeColumn ≥ datetime(2018-06-05T18:09:58.907Z) and`
`datetimeColumn ≤ datetime(2018-06-05T20:09:58.907Z)` where the from and to datetimes are from the Grafana time picker.
- `$__timeFrom()` - Returns the From datetime from the Grafana picker. Example: `datetime(2018-06-05T18:09:58.907Z)`.
- `$__timeTo()` - Returns the To datetime from the Grafana picker. Example: `datetime(2018-06-05T20:09:58.907Z)`.
- `$__escapeMulti($myVar)` - Use with multi-value template variables that contain illegal characters. If `$myVar` has the two values as a string `'\\grafana-vm\Network(eth0)\Total','\\hello!'`, then it expands to: `@'\\grafana-vm\Network(eth0)\Total', @'\\hello!'`. If you are using single value variables, then there is no need for this macro, simply escape the variable inline instead - `@'\$myVar'`.
- `$__contains(colName, $myVar)` - Use with multi-value template variables. If `$myVar` has the value `'value1','value2'`, then it expands to: `colName in ('value1','value2')`.
If using the `All` option, then check the `Include All Option` checkbox and in the `Custom all value` field type in the following value: `all`. If `$myVar` has value `all` then the macro will instead expand to `1 == 1`. For template variables with a lot of options, this will increase the query performance by not building a large "where..in" clause.
### Working with large Azure resource data sets
If a request exceeds the [maximum allowed value of records](https://docs.microsoft.com/en-us/azure/governance/resource-graph/concepts/work-with-data#paging-results), the result is paginated and only the first page of results is returned. You can use filters to reduce the number of records returned below that value.
## Going further with Azure Monitor
See the following topics to learn more about the Azure Monitor data source:
- [Azure Monitor template variables]({{< relref "template-variables/" >}}) for more interactive, dynamic, and reusable dashboards.
- [Provisioning Azure Monitor]({{< relref "provisioning/" >}}) for configuring the Azure Monitor data source using YAML files
- [Deprecating Application Insights]({{< relref "deprecated-application-insights/" >}}) and migrating to Metrics and Logs queries
### Configuring using Managed Identity
Customers who host Grafana in Azure (for example, App Service or Azure Virtual Machines) and have managed identity enabled on their VM can use managed identity to configure Azure Monitor in Grafana. This simplifies the data source configuration, letting the data source authenticate securely without manually configuring credentials via Azure AD App Registrations for each data source. For more details on Azure managed identities, refer to the [Azure documentation](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview).
To enable managed identity for Grafana, set the `managed_identity_enabled` flag in the `[azure]` section of the [Grafana server config](https://grafana.com/docs/grafana/latest/administration/configuration/#azure).
```ini
[azure]
managed_identity_enabled = true
```
Then, in the Azure Monitor data source configuration, set Authentication to Managed Identity. The directory ID, application ID, and client secret fields will be hidden, and the data source will use managed identity for authenticating to Azure Monitor Metrics, Logs, and Azure Resource Graph.
{{< figure src="/static/img/docs/azure-monitor/managed-identity.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Monitor Metrics screenshot showing Dimensions" >}}

@ -1,41 +0,0 @@
---
aliases:
- /docs/grafana/latest/datasources/azuremonitor/deprecated-application-insights/
description: Template to provision the Azure Monitor data source
keywords:
- grafana
- microsoft
- azure
- monitor
- application
- insights
- log
- analytics
- guide
title: Application Insights deprecation
weight: 999
---
# Deprecated Application Insights and Insights Analytics
Application Insights and Insights Analytics are two ways to query the same Azure Application Insights data, which can also be queried from Metrics and Logs. In Grafana 8.0, Application Insights and Insights Analytics were deprecated and made read-only in favor of querying this data through Metrics and Logs.
These query methods were completely removed in Grafana 9.0.
Azure Monitor Metrics and Azure Monitor Logs do not use Application Insights API keys, so make sure the data source is configured with an Azure AD app registration that has access to Application Insights.
## Application Insights
You can make new Application Insights queries with the Metrics service by selecting the "Application Insights" resource type. Application Insights has metrics available between two different metric namespaces.
{{< figure src="/static/img/docs/azure-monitor/app-insights-metrics.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Monitor Application Insights example" >}}
## Insights Analytics
New Insights Analytics queries can be written with Kusto in the Logs query type by selecting your Application Insights resource.
{{< figure src="/static/img/docs/azure-monitor/app-insights-logs.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Logs Application Insights example" >}}
The new resource picker for Logs shows all resources on your Azure subscription compatible with Logs.
{{< figure src="/static/img/docs/azure-monitor/app-insights-resource-picker.png" max-width="800px" class="docs-image--no-shadow" caption="Azure Logs Application Insights resource picker" >}}

@ -1,67 +0,0 @@
---
aliases:
- /docs/grafana/latest/datasources/azuremonitor/provisioning/
description: Template to provision the Azure Monitor data source
keywords:
- grafana
- microsoft
- azure
- monitor
- application
- insights
- log
- analytics
- guide
title: Provisioning Azure Monitor
weight: 2
---
# Configure the data source with provisioning
You can configure data sources using config files with Grafana's provisioning system. For more information on how it works and all the settings you can set for data sources, refer to the [provisioning documentation page]({{< relref "../../administration/provisioning/#datasources" >}}).
Here are some provisioning examples for this data source.
## Azure AD App Registration (client secret)
```yaml
apiVersion: 1 # config file version
datasources:
- name: Azure Monitor
type: grafana-azure-monitor-datasource
access: proxy
jsonData:
azureAuthType: clientsecret
cloudName: azuremonitor # See table below
tenantId: <tenant-id>
clientId: <client-id>
subscriptionId: <subscription-id> # Optional, default subscription
secureJsonData:
clientSecret: <client-secret>
version: 1
```
## Managed Identity
```yaml
apiVersion: 1 # config file version
datasources:
- name: Azure Monitor
type: grafana-azure-monitor-datasource
access: proxy
jsonData:
azureAuthType: msi
subscriptionId: <subscription-id> # Optional, default subscription
version: 1
```
## Supported cloud names
| Azure Cloud | Value |
| ------------------------------------------------ | -------------------------- |
| Microsoft Azure public cloud | `azuremonitor` (_default_) |
| Microsoft Chinese national cloud | `chinaazuremonitor` |
| US Government cloud | `govazuremonitor` |
| Microsoft German national cloud ("Black Forest") | `germanyazuremonitor` |

@ -1,57 +0,0 @@
---
aliases:
- /docs/grafana/latest/datasources/azuremonitor/template-variables/
description: Using template variables with Azure Monitor in Grafana
keywords:
- grafana
- microsoft
- azure
- monitor
- application
- insights
- log
- analytics
- guide
title: Azure Monitor template variables
weight: 2
---
# Template variables
Instead of hard-coding values for fields like resource group or resource name in your queries, you can use variables in their place to create more interactive, dynamic, and reusable dashboards.
Check out the [Templating]({{< relref "../../dashboards/variables" >}}) documentation for an introduction to the templating feature and the different
types of template variables.
The Azure Monitor data source provides the following queries you can specify in the Query field in the Variable edit view:
| Name | Description |
| --------------- | -------------------------------------------------------------------------------------------- |
| Subscriptions | Returns subscriptions. |
| Resource Groups | Returns resource groups for a specified subscription. |
| Namespaces | Returns metric namespaces for the specified subscription and resource group. |
| Resource Names | Returns a list of resource names for a specified subscription, resource group and namespace. |
| Metric Names | Returns a list of metric names for a resource. |
| Workspaces | Returns a list of workspaces for the specified subscription. |
| Logs | Use a KQL query to return values. |
| Resource Graph | Use an ARG query to return values. |
Any Log Analytics KQL query that returns a single list of values can also be used in the Query field. For example:
| Query | Description |
| ----------------------------------------------------------------------------------------- | --------------------------------------------------------- |
| `workspace("myWorkspace").Heartbeat \| distinct Computer` | Returns a list of Virtual Machines |
| `workspace("$workspace").Heartbeat \| distinct Computer` | Returns a list of Virtual Machines with template variable |
| `workspace("$workspace").Perf \| distinct ObjectName` | Returns a list of objects from the Perf table |
| `workspace("$workspace").Perf \| where ObjectName == "$object"` `\| distinct CounterName` | Returns a list of metric names from the Perf table |
Example of a time series query using variables:
```kusto
Perf
| where ObjectName == "$object" and CounterName == "$metric"
| where TimeGenerated >= $__timeFrom() and TimeGenerated <= $__timeTo()
| where $__contains(Computer, $computer)
| summarize avg(CounterValue) by bin(TimeGenerated, $__interval), Computer
| order by TimeGenerated asc
```

@ -1,74 +0,0 @@
---
aliases:
- /docs/grafana/latest/enterprise/datasource_permissions/
- /docs/sources/permissions/datasource_permissions/
description: Grafana Datasource Permissions Guide
keywords:
- grafana
- configuration
- documentation
- datasource
- permissions
- users
- teams
- enterprise
title: Data source permissions
weight: 500
---
# Data source permissions
Data source permissions allow you to restrict users' access to query a data source. Each data source has a permissions page where you can enable permissions and restrict query permissions to specific **Users** and **Teams**.
> **Note:** Available in [Grafana Enterprise]({{< relref "../enterprise/" >}}) and [Grafana Cloud Pro and Advanced]({{< ref "/docs/grafana-cloud" >}}).
## Enable data source permissions
{{< figure src="/static/img/docs/enterprise/datasource_permissions_enable_still.png" class="docs-image--no-shadow docs-image--right" max-width= "600px" animated-gif="/static/img/docs/enterprise/datasource_permissions_enable.gif" >}}
By default, data sources in an organization can be queried by any user in that organization. For example, a user with the `Viewer` role can issue any possible query to a data source, not just
queries that exist on dashboards they have access to.
When permissions are enabled for a data source in an organization, the user who created the data source can edit it and, in addition, viewers can query it.
**Enable permissions for a data source:**
1. Navigate to **Configuration > Data Sources**.
1. Select the data source you want to enable permissions for.
1. On the Permissions tab, click **Enable**.
<div class="clearfix"></div>
> **Caution:** Enabling permissions for the default data source makes users not listed in the permissions unable to invoke queries. Panels using default data source will return `Access denied to data source` error for those users.
## Allow users and teams to query a data source
{{< figure src="/static/img/docs/enterprise/datasource_permissions_add_still.png" class="docs-image--no-shadow docs-image--right" max-width= "600px" animated-gif="/static/img/docs/enterprise/datasource_permissions_add.gif" >}}
After you have enabled permissions for a data source, you can assign query permissions to users and teams, which allows them to query the data source.
**Assign query permission to users and teams:**
1. Navigate to **Configuration > Data Sources**.
1. Select the data source you want to assign query permissions for.
1. On the Permissions tab, click **Add Permission**.
1. Select **Team** or **User**.
1. Select the entity you want to allow query access and then click **Save**.
<div class="clearfix"></div>
## Disable data source permissions
{{< figure src="/static/img/docs/enterprise/datasource_permissions_disable_still.png" class="docs-image--no-shadow docs-image--right" max-width= "600px" animated-gif="/static/img/docs/enterprise/datasource_permissions_disable.gif" >}}
If you have enabled permissions for a data source and want to return data source permissions to the default, then you can disable permissions with a click of a button.
Note that _all_ existing permissions created for the data source will be deleted.
**Disable permissions for a data source:**
1. Navigate to **Configuration > Data Sources**.
1. Select the data source you want to disable permissions for.
1. On the Permissions tab, click **Disable Permissions**.
<div class="clearfix"></div>

@ -1,269 +0,0 @@
---
aliases:
- /docs/grafana/latest/datasources/elasticsearch/
- /docs/grafana/latest/features/datasources/elasticsearch/
description: Guide for using Elasticsearch in Grafana
keywords:
- grafana
- elasticsearch
- guide
title: Elasticsearch
weight: 325
---
# Using Elasticsearch in Grafana
Grafana ships with advanced support for Elasticsearch. You can do many types of simple or complex Elasticsearch queries to
visualize logs or metrics stored in Elasticsearch. You can also annotate your graphs with log events stored in Elasticsearch.
Supported Elasticsearch versions:
- v7.10+
- v8.0+ (experimental)
## Adding the data source
1. Open the side menu by clicking the Grafana icon in the top header.
1. In the side menu under the `Dashboards` link you should find a link named `Data Sources`.
1. Click the `+ Add data source` button in the top header.
1. Select **Elasticsearch** from the **Type** dropdown.
> **Note:** If you don't see the `Data Sources` link in your side menu, it means that your current user does not have the `Admin` role for the current organization.
| Name | Description |
| --------- | --------------------------------------------------------------------------------- |
| `Name` | Data source name. This is how you refer to the data source in panels and queries. |
| `Default` | Data source is pre-selected for new panels. |
| `Url` | The HTTP protocol, IP, and port of your Elasticsearch server. |
| `Access`  | Do not use Access. Use "Server (default)" or the data source won't function.       |
### Index settings
![Elasticsearch data source details](/static/img/docs/elasticsearch/elasticsearch-ds-details-7-4.png)
Here you can set a default for the `time field` and specify the name of your Elasticsearch index. You can use
a time pattern for the index name or a wildcard.
### Elasticsearch version
Select the version of your Elasticsearch data source from the version selection dropdown. Different query compositions and functionalities are available in the query editor for different versions.
Available Elasticsearch versions are `2.x`, `5.x`, `5.6+`, `6.0+`, `7.0+`, `7.7+` and `7.10+`. Select the option that best matches your data source version.
Grafana assumes that you are running the lowest possible version for a specified range. This ensures that new features or breaking changes in a future Elasticsearch release will not affect your configuration.
For example, suppose you are running Elasticsearch `7.6.1` and you selected `7.0+`. If a new feature is made available for Elasticsearch `7.5.0` or newer releases, then a `7.5+` option will be available. However, your configuration will not be affected until you explicitly select the new `7.5+` option in your settings.
### Min time interval
A lower limit for the auto group by time interval. We recommend setting it to your write frequency, for example `1m` if your data is written every minute.
This option can also be overridden/configured in a dashboard panel under data source options. It's important to note that this value **needs** to be formatted as a
number followed by a valid time identifier, e.g. `1m` (1 minute) or `30s` (30 seconds). The following time identifiers are supported:
| Identifier | Description |
| ---------- | ----------- |
| `y` | year |
| `M` | month |
| `w` | week |
| `d` | day |
| `h` | hour |
| `m` | minute |
| `s` | second |
| `ms` | millisecond |
### X-Pack enabled
Enables `X-Pack` specific features and options, providing the query editor with additional aggregations such as `Rate` and `Top Metrics`.
#### Include frozen indices
When `X-Pack enabled` is active and the configured Elasticsearch version is higher than `6.6.0`, you can configure Grafana to not ignore [frozen indices](https://www.elastic.co/guide/en/elasticsearch/reference/7.13/frozen-indices.html) when performing search requests.
### Logs
There are two parameters, `Message field name` and `Level field name`, that can optionally be configured from the data source settings page. They determine
which fields are used for log messages and log levels when visualizing logs in [Explore]({{< relref "../explore/" >}}).
For example, if you're using a default setup of Filebeat for shipping logs to Elasticsearch the following configuration should work:
- **Message field name:** message
- **Level field name:** fields.level
### Data links
Data links create a link from a specified field that can be accessed in logs view in Explore.
Each data link configuration consists of:
- **Field -** Name of the field used by the data link.
- **URL/query -** If the link is external, then enter the full link URL. If the link is internal, then this input serves as a query for the target data source. In both cases, you can interpolate the value from the field with the `${__value.raw}` macro.
- **URL Label -** (Optional) Set a custom display label for the link. The link label defaults to the full external URL or name of the linked internal data source and is overridden by this setting.
- **Internal link -** Select if the link is internal or external. In case of internal link, a data source selector allows you to select the target data source. Only tracing data sources are supported.
## Metric Query editor
![Elasticsearch Query Editor](/static/img/docs/elasticsearch/query-editor-7-4.png)
The Elasticsearch query editor allows you to select multiple metrics and group by multiple terms or filters. Use the plus and minus icons to the right to add or remove
metrics or group-by clauses. Some metrics and group-by clauses have options; click the option text to expand the row to view and edit them.
## Series naming and alias patterns
You can control the name for time series via the `Alias` input field.
| Pattern | Description |
| -------------------- | ------------------------------------------------- |
| `{{term fieldname}}` | replaced with value of a term group by |
| `{{metric}}` | replaced with metric name (ex. Average, Min, Max) |
| `{{field}}` | replaced with the metric field name |
## Pipeline metrics
Some metric aggregations are called Pipeline aggregations, for example, _Moving Average_ and _Derivative_. Elasticsearch pipeline metrics must be based on another metric. Use the eye icon next to the metric to hide metrics from appearing in the graph. This is useful for metrics you only have in the query for use in a pipeline metric.
![Pipeline aggregation editor](/static/img/docs/elasticsearch/pipeline-aggregation-editor-7-4.png)
## Templating
Instead of hard-coding things like server, application and sensor name in your metric queries you can use variables in their place.
Variables are shown as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the data
being displayed in your dashboard.
Check out the [Templating]({{< relref "../dashboards/variables/" >}}) documentation for an introduction to the templating feature and the different
types of template variables.
### Query variable
The Elasticsearch data source supports two types of queries you can use in the _Query_ field of _Query_ variables. The query is written using a custom JSON string. The field should be mapped as a [keyword](https://www.elastic.co/guide/en/elasticsearch/reference/current/keyword.html#keyword) in the Elasticsearch index mapping. If it is [multi-field](https://www.elastic.co/guide/en/elasticsearch/reference/current/multi-fields.html) with both a `text` and `keyword` type, then use `"field":"fieldname.keyword"` (sometimes `fieldname.raw`) to specify the keyword field in your query.
| Query | Description |
| ------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `{"find": "fields", "type": "keyword"}` | Returns a list of field names with the index type `keyword`. |
| `{"find": "terms", "field": "hostname.keyword", "size": 1000}` | Returns a list of values for a keyword using term aggregation. The query uses the current dashboard time range as its time range. |
| `{"find": "terms", "field": "hostname", "query": '<lucene query>'}` | Returns a list of values for a keyword field using term aggregation and a specified Lucene query filter. The query uses the current dashboard time range as its time range. |
There is a default size limit of 500 on terms queries. Set the size property in your query to set a custom limit.
You can use other variables inside the query. Example query definition for a variable named `$host`.
```
{"find": "terms", "field": "hostname", "query": "source:$source"}
```
In the above example, we use another variable named `$source` inside the query definition. Whenever you change the current value of the `$source` variable via the dropdown, it triggers an update of the `$host` variable so that it only contains hostnames filtered by, in this case, the
`source` document property.
These queries by default return results in term order (which can then be sorted alphabetically or numerically as for any variable).
To produce a list of terms sorted by doc count (a top-N values list), add an `orderBy` property of "doc_count".
This automatically selects a descending sort; using "asc" with doc_count (a bottom-N list) can be done by setting `order: "asc"` but [is discouraged](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html#search-aggregations-bucket-terms-aggregation-order) as it "increases the error on document counts".
To keep terms in the doc count order, set the variable's Sort dropdown to **Disabled**; you might alternatively still want to use e.g. **Alphabetical** to re-sort them.
```
{"find": "terms", "field": "hostname", "orderBy": "doc_count"}
```
### Using variables in queries
There are two syntaxes:
- `$<varname>` Example: hostname:$hostname
- `[[varname]]` Example: hostname:[[hostname]]
Why two ways? The first syntax is easier to read and write but does not allow you to use a variable in the middle of a word. When the _Multi-value_ or _Include all value_
options are enabled, Grafana converts the labels from plain text to a Lucene-compatible condition.
![Query with template variables](/static/img/docs/elasticsearch/elastic-templating-query-7-4.png)
In the above example, we have a Lucene query that filters documents based on the `hostname` property using a variable named `$hostname`. It is also using
a variable in the _Terms_ group by field input box. This allows you to use a variable to quickly change how the data is grouped.
Example dashboard:
[Elasticsearch Templated Dashboard](https://play.grafana.org/d/CknOEXDMk/elasticsearch-templated?orgId=1d)
## Annotations
[Annotations]({{< relref "../dashboards/build-dashboards/annotate-visualizations" >}}) allow you to overlay rich event information on top of graphs. You add annotation
queries via the Dashboard menu / Annotations view. Grafana can query any Elasticsearch index
for annotation events.
| Name | Description |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| `Query`    | You can leave the search query blank or specify a Lucene query. |
| `Time`     | The name of the time field; it needs to be a date field. |
| `Time End` | Optional name of the time end field; it needs to be a date field. If set, then annotations are marked as a region between time and time-end. |
| `Text` | Event description field. |
| `Tags` | Optional field name to use for event tags (can be an array or a CSV string). |
## Querying Logs
Querying and displaying log data from Elasticsearch is available in [Explore]({{< relref "../explore/" >}}), and in the [logs panel]({{< relref "../panels-visualizations/visualizations/logs/" >}}) in dashboards.
Select the Elasticsearch data source, and then optionally enter a Lucene query to display your logs.
When switching from a Prometheus or Loki data source in Explore, your query is translated to an Elasticsearch log query with a correct Lucene filter.
### Log Queries
Once the result is returned, the log panel shows a list of log rows and a bar chart where the x-axis shows the time and the y-axis shows the frequency/count.
Note that the fields used for log message and level are based on an [optional data source configuration](#logs).
### Filter Log Messages
Optionally enter a Lucene query into the query field to filter the log messages. For example, using a default Filebeat setup you should be able to use `fields.level:error` to only show error log messages.
## Configure the data source with provisioning
It's now possible to configure data sources using config files with Grafana's provisioning system. You can read more about how it works and all the settings you can set for data sources on the [provisioning docs page]({{< relref "../administration/provisioning/#datasources" >}}).
Here are some provisioning examples for this data source.
```yaml
apiVersion: 1
datasources:
- name: Elastic
type: elasticsearch
access: proxy
database: '[metrics-]YYYY.MM.DD'
url: http://localhost:9200
jsonData:
interval: Daily
timeField: '@timestamp'
```
or, for logs:
```yaml
apiVersion: 1
datasources:
- name: elasticsearch-v7-filebeat
type: elasticsearch
access: proxy
database: '[filebeat-]YYYY.MM.DD'
url: http://localhost:9200
jsonData:
interval: Daily
timeField: '@timestamp'
esVersion: '7.0.0'
logMessageField: message
logLevelField: fields.level
dataLinks:
- datasourceUid: my_jaeger_uid # Target UID needs to be known
field: traceID
url: '$${__value.raw}' # Careful about the double "$$" because of env var expansion
```
## Amazon Elasticsearch Service
AWS users running Amazon's Elasticsearch Service can use Grafana's Elasticsearch data source to visualize their Elasticsearch data.
If you are using an AWS Identity and Access Management (IAM) policy to control access to your Amazon Elasticsearch Service domain, then you must use AWS Signature Version 4 (AWS SigV4) to sign all requests to that domain.
For more details on AWS SigV4, refer to the [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html).
### AWS Signature Version 4 authentication
> **Note:** Only available in Grafana v7.3+.
In order to sign requests to your Amazon Elasticsearch Service domain, SigV4 can be enabled in the Grafana [configuration]({{< relref "../setup-grafana/configure-grafana/#sigv4_auth_enabled" >}}).
Once AWS SigV4 is enabled, it can be configured on the Elasticsearch data source configuration page. Refer to [Cloudwatch authentication]({{< relref "aws-cloudwatch/aws-authentication/" >}}) for more information about authentication options.
{{< figure src="/static/img/docs/v73/elasticsearch-sigv4-config-editor.png" max-width="500px" class="docs-image--no-shadow" caption="SigV4 configuration for AWS Elasticsearch Service" >}}
@ -0,0 +1,206 @@
---
aliases:
- /docs/grafana/latest/datasources/elasticsearch/
- /docs/grafana/latest/features/datasources/elasticsearch/
- /docs/grafana/latest/data-sources/elasticsearch/
description: Guide for using Elasticsearch in Grafana
keywords:
- grafana
- elasticsearch
- guide
menuTitle: Elasticsearch
title: Elasticsearch data source
weight: 325
---
# Elasticsearch data source
Grafana ships with built-in support for Elasticsearch.
You can make many types of queries to visualize logs or metrics stored in Elasticsearch, and annotate graphs with log events stored in Elasticsearch.
This topic explains configuration and querying specific to the Elasticsearch data source.
For general documentation on querying data sources in Grafana, see [Query and transform data]({{< relref "../../panels-visualizations/query-transform-data" >}}).
For instructions on how to add a data source to Grafana, refer to the [administration documentation]({{< relref "../../administration/data-source-management/" >}}).
Only users with the organization administrator role can add data sources.
Administrators can also [configure the data source via YAML]({{< relref "#provision-the-data-source" >}}) with Grafana's provisioning system.
Once you've added the Elasticsearch data source, you can [configure it]({{< relref "#configure-the-data-source" >}}) so that your Grafana instance's users can create queries in its [query editor]({{< relref "./query-editor/" >}}) when they [build dashboards]({{< relref "../../dashboards/build-dashboards/" >}}) and use [Explore]({{< relref "../../explore" >}}).
## Supported Elasticsearch versions
This data source supports these versions of Elasticsearch:
- v7.10+
- v8.0+ (experimental)
## Configure the data source
**To access the data source configuration page:**
1. Hover the cursor over the **Configuration** (gear) icon.
1. Select **Data Sources**.
1. Select the Elasticsearch data source.
Set the data source's basic configuration options carefully:
| Name | Description |
| ----------- | -------------------------------------------------------------------------- |
| **Name** | Sets the name you use to refer to the data source in panels and queries. |
| **Default** | Sets the data source that's pre-selected for new panels. |
| **Url** | Sets the HTTP protocol, IP, and port of your Elasticsearch server. |
| **Access** | Don't modify Access. Use "Server (default)" or the data source won't work. |
You must also configure settings specific to the Elasticsearch data source.
### Index settings
{{< figure src="/static/img/docs/elasticsearch/elasticsearch-ds-details-7-4.png" max-width="500px" class="docs-image--right" caption="Elasticsearch data source details" >}}
Use the index settings to specify a default for the `time field` and your Elasticsearch index's name.
You can use a time pattern, such as `YYYY.MM.DD`, or a wildcard for the index name.
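For example, assuming daily Logstash-style indices (the `logstash-` prefix is a hypothetical name), the bracketed part of the index name is treated literally and the rest is expanded according to the chosen interval:

```
[logstash-]YYYY.MM.DD
```

With the **Daily** interval, this matches indices such as `logstash-2022.11.01`. A plain wildcard such as `metricbeat-*` also works.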
### Elasticsearch version
Select the version of your Elasticsearch data source from the version selection dropdown.
Different versions provide different query compositions and functionalities in the [query editor]({{< relref "./query-editor/" >}}).
Available Elasticsearch versions are `2.x`, `5.x`, `5.6+`, `6.0+`, `7.0+`, `7.7+`, and `7.10+`.
Grafana assumes you're running the lowest possible version for a specified range.
This ensures that new features or breaking changes in a future Elasticsearch release don't affect your configuration.
For example, if you run Elasticsearch `7.6.1` and select `7.0+`, and a new feature is made available for Elasticsearch `7.5.0` or newer releases, then a `7.5+` option will be available.
However, your configuration won't be affected until you explicitly select the new `7.5+` option in your settings.
### Configure Min time interval
The **Min time interval** setting defines a lower limit for the auto group-by time interval.
This value _must_ be formatted as a number followed by a valid time identifier:
| Identifier | Description |
| ---------- | ----------- |
| `y` | year |
| `M` | month |
| `w` | week |
| `d` | day |
| `h` | hour |
| `m` | minute |
| `s` | second |
| `ms` | millisecond |
We recommend setting this value to match your Elasticsearch write frequency.
For example, set this to `1m` if Elasticsearch writes data every minute.
You can also override this setting in a dashboard panel under its data source options.
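If you provision the data source, a sketch of this setting in YAML, assuming the `timeInterval` key under `jsonData` (the key other Grafana data sources use for **Min time interval**; verify it against your Grafana version):

```yaml
jsonData:
  interval: Daily
  timeField: '@timestamp'
  timeInterval: '1m' # assumed key name; matches a write frequency of one minute
```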
### X-Pack enabled
Toggle this to enable `X-Pack`-specific features and options, which provide the [query editor]({{< relref "./query-editor/" >}}) with additional aggregations, such as `Rate` and `Top Metrics`.
#### Include frozen indices
When the "X-Pack enabled" setting is active and the configured Elasticsearch version is higher than `6.6.0`, you can configure Grafana to not ignore [frozen indices](https://www.elastic.co/guide/en/elasticsearch/reference/7.13/frozen-indices.html) when performing search requests.
> **Note:** Frozen indices are [deprecated in Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/7.17/frozen-indices.html) since v7.14.
### Logs
You can optionally configure the two Logs parameters **Message field name** and **Level field name** to determine which fields the data source uses for log messages and log levels when visualizing logs in [Explore]({{< relref "../../explore/" >}}).
For example, if you're using a default setup of Filebeat for shipping logs to Elasticsearch, set:
- **Message field name:** `message`
- **Level field name:** `fields.level`
### Data links
Data links create a link from a specified field that can be accessed in Explore's logs view.
Each data link configuration consists of:
| Parameter | Description |
| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Field** | Sets the name of the field used by the data link. |
| **URL/query**     | Sets the full link URL if the link is external. If the link is internal, this input serves as a query for the target data source.<br/>In both cases, you can interpolate the value from the field with the `${__value.raw}` macro.  |
| **URL Label** | (Optional) Sets a custom display label for the link. The link label defaults to the full external URL or name of the linked internal data source and is overridden by this setting. |
| **Internal link** | Sets whether the link is internal or external. For an internal link, you can select the target data source with a data source selector. This supports only tracing data sources. |
### Configure Amazon Elasticsearch Service
If you use Amazon Elasticsearch Service, you can use Grafana's Elasticsearch data source to visualize data from it.
If you use an AWS Identity and Access Management (IAM) policy to control access to your Amazon Elasticsearch Service domain, you must use AWS Signature Version 4 (AWS SigV4) to sign all requests to that domain.
For details on AWS SigV4, refer to the [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html).
#### AWS Signature Version 4 authentication
> **Note:** Available in Grafana v7.3 and higher.
To sign requests to your Amazon Elasticsearch Service domain, you can enable SigV4 in Grafana's [configuration]({{< relref "../../setup-grafana/configure-grafana/#sigv4_auth_enabled" >}}).
Once AWS SigV4 is enabled, you can configure it on the Elasticsearch data source configuration page.
For more information about AWS authentication options, refer to [AWS authentication]({{< relref "../aws-cloudwatch/aws-authentication/" >}}).
{{< figure src="/static/img/docs/v73/elasticsearch-sigv4-config-editor.png" max-width="500px" class="docs-image--no-shadow" caption="SigV4 configuration for AWS Elasticsearch Service" >}}
### Provision the data source
You can define and configure the data source in YAML files as part of Grafana's provisioning system.
For more information about provisioning, and for available configuration options, refer to [Provisioning Grafana]({{< relref "../../administration/provisioning/#data-sources" >}}).
#### Provisioning examples
**Basic provisioning:**
```yaml
apiVersion: 1
datasources:
- name: Elastic
type: elasticsearch
access: proxy
database: '[metrics-]YYYY.MM.DD'
url: http://localhost:9200
jsonData:
interval: Daily
timeField: '@timestamp'
```
**Provision for logs:**
```yaml
apiVersion: 1
datasources:
- name: elasticsearch-v7-filebeat
type: elasticsearch
access: proxy
database: '[filebeat-]YYYY.MM.DD'
url: http://localhost:9200
jsonData:
interval: Daily
timeField: '@timestamp'
esVersion: '7.0.0'
logMessageField: message
logLevelField: fields.level
dataLinks:
- datasourceUid: my_jaeger_uid # Target UID needs to be known
field: traceID
url: '$${__value.raw}' # Careful about the double "$$" because of env var expansion
```
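**Provision for Amazon Elasticsearch Service with SigV4 (sketch):**

For Amazon Elasticsearch Service with SigV4, the AWS settings live under `jsonData` and `secureJsonData`. This is a minimal sketch that assumes SigV4 is enabled in Grafana's configuration, static-key authentication, and a hypothetical domain URL; verify the `sigV4*` key names against your Grafana version:

```yaml
apiVersion: 1

datasources:
  - name: elasticsearch-aws
    type: elasticsearch
    access: proxy
    database: '[metrics-]YYYY.MM.DD'
    url: https://search-my-domain.us-east-1.es.amazonaws.com # hypothetical domain
    jsonData:
      interval: Daily
      timeField: '@timestamp'
      esVersion: '7.10.0'
      sigV4Auth: true # assumed keys; requires SigV4 enabled in Grafana's configuration
      sigV4AuthType: keys
      sigV4Region: us-east-1
    secureJsonData:
      sigV4AccessKey: '<AWS access key>'
      sigV4SecretKey: '<AWS secret key>'
```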
## Query the data source
You can select multiple metrics and group by multiple terms or filters when using the Elasticsearch query editor.
For details, see the [query editor documentation]({{< relref "./query-editor/" >}}).
## Use template variables
Instead of hard-coding details such as server, application, and sensor names in metric queries, you can use variables.
Grafana lists these variables in dropdown select boxes at the top of the dashboard to help you change the data displayed in your dashboard.
Grafana refers to such variables as template variables.
For details, see the [template variables documentation]({{< relref "./template-variables/" >}}).
@ -0,0 +1,69 @@
---
aliases:
- /docs/grafana/latest/data-sources/elasticsearch/query-editor/
- /docs/grafana/latest/data-sources/elasticsearch/template-variables/
description: Guide for using the Elasticsearch data source's query editor
keywords:
- grafana
- elasticsearch
- lucene
- metrics
- logs
- queries
menuTitle: Query editor
title: Elasticsearch query editor
weight: 300
---
# Elasticsearch query editor
{{< figure src="/static/img/docs/elasticsearch/query-editor-7-4.png" max-width="500px" class="docs-image--no-shadow" caption="Elasticsearch Query Editor" >}}
This topic explains querying specific to the Elasticsearch data source.
For general documentation on querying data sources in Grafana, see [Query and transform data]({{< relref "../../../panels-visualizations/query-transform-data" >}}).
## Select and edit metrics
You can select multiple metrics and group by multiple terms or filters when using the Elasticsearch query editor.
Use the plus and minus icons to the right to add and remove metrics or group by clauses.
To expand the row to view and edit any available metric or group-by options, click the option text.
## Use template variables
You can also augment queries by using [template variables]({{< relref "./template-variables/" >}}).
## Name a time series
You can control the name for time series via the `Alias` input field.
| Pattern | Replacement value |
| -------------------- | -------------------------------------- |
| `{{term fieldname}}` | Value of a term group-by |
| `{{metric}}` | Metric name, such as Average, Min, Max |
| `{{field}}` | Metric field name |
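For example, assuming a query that averages a field and groups by a `hostname` term (`hostname` and `server1` are placeholder names), an alias such as:

```
{{term hostname}} - {{metric}} {{field}}
```

would render a series name like `server1 - Average value`, where `value` is the metric field name.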
## Control pipeline metrics visibility
Some metric aggregations, such as _Moving Average_ and _Derivative_, are called **Pipeline** aggregations.
Elasticsearch pipeline metrics must be based on another metric.
Use the eye icon next to the metric to prevent metrics from appearing in the graph.
This is useful for metrics you only have in the query for use in a pipeline metric.
{{< figure src="/static/img/docs/elasticsearch/pipeline-aggregation-editor-7-4.png" max-width="500px" class="docs-image--no-shadow" caption="Pipeline aggregation editor" >}}
## Create a query
Write the query using a custom JSON string, with the field mapped as a [keyword](https://www.elastic.co/guide/en/elasticsearch/reference/current/keyword.html#keyword) in the Elasticsearch index mapping.
If the query is [multi-field](https://www.elastic.co/guide/en/elasticsearch/reference/current/multi-fields.html) with both a `text` and `keyword` type, use `"field":"fieldname.keyword"` (sometimes `fieldname.raw`) to specify the keyword field in your query.
| Query | Description |
| ------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `{"find": "fields", "type": "keyword"}` | Returns a list of field names with the index type `keyword`. |
| `{"find": "terms", "field": "hostname.keyword", "size": 1000}` | Returns a list of values for a keyword using term aggregation. The query uses the current dashboard time range as its time range. |
| `{"find": "terms", "field": "hostname", "query": '<Lucene query>'}` | Returns a list of values for a keyword field using term aggregation and a specified Lucene query filter. The query uses the current dashboard time range as its time range. |
Queries of `terms` have a 500-result limit by default.
To set a custom limit, set the `size` property in your query.
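For example, this hypothetical query raises the limit to 1,000 terms and also applies a Lucene filter (`service.keyword` and `environment` are placeholder field names):

```
{"find": "terms", "field": "service.keyword", "size": 1000, "query": "environment:production"}
```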
@ -0,0 +1,67 @@
---
aliases:
- /docs/grafana/latest/datasources/elasticsearch/template-variables/
- /docs/grafana/latest/data-sources/elasticsearch/template-variables/
description: Using template variables with Elasticsearch in Grafana
keywords:
- grafana
- elasticsearch
- templates
- variables
- queries
menuTitle: Template variables
title: Elasticsearch template variables
weight: 400
---
# Elasticsearch template variables
Instead of hard-coding details such as server, application, and sensor names in metric queries, you can use variables.
Grafana lists these variables in dropdown select boxes at the top of the dashboard to help you change the data displayed in your dashboard.
Grafana refers to such variables as template variables.
For an introduction to templating and template variables, refer to the [Templating]({{< relref "../../../dashboards/variables" >}}) and [Add and manage variables]({{< relref "../../../dashboards/variables/add-template-variables" >}}) documentation.
## Choose a variable syntax
The Elasticsearch data source supports two variable syntaxes for use in the **Query** field:
- `$varname`, such as `hostname:$hostname`, which is easy to read and write but doesn't let you use a variable in the middle of a word.
- `[[varname]]`, such as `hostname:[[hostname]]`
When the _Multi-value_ or _Include all value_ options are enabled, Grafana converts the labels from plain text to a Lucene-compatible condition.
For details, see the [Multi-value variables]({{< relref "../../../dashboards/variables/add-template-variables#multi-value-variables" >}}) documentation.
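For example, if _Multi-value_ is enabled and `$hostname` resolves to `server1` and `server2`, a query such as `hostname:$hostname` is expanded to a Lucene-compatible condition roughly like the following (the exact quoting can vary by Grafana version):

```
hostname:("server1" OR "server2")
```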
## Use variables in queries
You can use other variables inside the query.
For example, you can use this query definition for a variable named `$host`:
```
{"find": "terms", "field": "hostname", "query": "source:$source"}
```
This uses another variable named `$source` inside the query definition.
Whenever you change the value of the `$source` variable via the dropdown, Grafana triggers an update of the `$host` variable to contain only hostnames filtered by, in this case, the `source` document property.
These queries by default return results in term order (which can then be sorted alphabetically or numerically as for any variable).
To produce a list of terms sorted by doc count (a top-N values list), add an `orderBy` property of "doc_count".
This automatically selects a descending sort.
> **Note:** To use an ascending sort (`asc`) with doc_count (a bottom-N list), set `order: "asc"`. However, Elasticsearch [discourages this](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-terms-aggregation.html#search-aggregations-bucket-terms-aggregation-order) because sorting by ascending doc count can return inaccurate results.
To keep terms in the doc count order, set the variable's Sort dropdown to **Disabled**.
You can alternatively use other sorting criteria, such as **Alphabetical**, to re-sort them.
```
{"find": "terms", "field": "hostname", "orderBy": "doc_count"}
```
## Template variable examples
{{< figure src="/static/img/docs/elasticsearch/elastic-templating-query-7-4.png" max-width="500px" class="docs-image--no-shadow" caption="Query with template variables" >}}
In the above example, a Lucene query filters documents based on the `hostname` property using a variable named `$hostname`.
The example also uses a variable in the _Terms_ group by field input box, which you can use to quickly change how data is grouped.
To view an example dashboard on Grafana Play, see the [Elasticsearch Templated Dashboard](https://play.grafana.org/d/CknOEXDMk/elasticsearch-templated?orgId=1d).
@ -1,9 +1,12 @@
---
aliases:
- /docs/grafana/latest/datasources/google-cloud-monitoring/
- /docs/grafana/latest/features/datasources/stackdriver/
- /docs/grafana/next/datasources/cloudmonitoring/
- /docs/grafana/next/features/datasources/cloudmonitoring/
- /docs/grafana/latest/features/datasources/cloudmonitoring/
- /docs/grafana/latest/datasources/google-cloud-monitoring/
- /docs/grafana/latest/datasources/cloudmonitoring/
- /docs/grafana/latest/datasources/google-cloud-monitoring/preconfig-cloud-monitoring-dashboards/
- /docs/grafana/latest/data-sources/google-cloud-monitoring/
- /docs/grafana/latest/data-sources/google-cloud-monitoring/preconfig-cloud-monitoring-dashboards/
description: Guide for using Google Cloud Monitoring in Grafana
keywords:
- grafana
@ -12,299 +15,78 @@ keywords:
- guide
- cloud
- monitoring
title: Google Cloud Monitoring
menuTitle: Google Cloud Monitoring
title: Google Cloud Monitoring data source
weight: 350
---
# Using Google Cloud Monitoring in Grafana
# Google Cloud Monitoring data source
Grafana ships with built-in support for Google Cloud Monitoring. Add it as a data source to build dashboards for your Google Cloud Monitoring metrics. For instructions on how to add a data source, refer to [Add a data source]({{< relref "../add-a-data-source/" >}}). Only users with the organization admin role can add data sources.
Grafana ships with built-in support for Google Cloud Monitoring.
This topic describes queries, templates, variables, and other configuration specific to the Google Cloud Monitoring data source.
> **Note** Before Grafana v7.1, Google Cloud Monitoring was referred to as Google Stackdriver.
> **Note:** Before Grafana v7.1, Google Cloud Monitoring was referred to as Google Stackdriver.
## Configure the Google Cloud Monitoring data source
For instructions on how to add a data source to Grafana, refer to the [administration documentation]({{< relref "../../administration/data-source-management/" >}}).
Only users with the organization administrator role can add data sources.
To access Google Cloud Monitoring settings, hover your mouse over the **Configuration** (gear) icon, then click **Data Sources**, and click **Add data source**, then click the Google Cloud Monitoring data source.
Once you've added the Google Cloud Monitoring data source, you can [configure it]({{< relref "#configure-the-data-source" >}}) so that your Grafana instance's users can create queries in its [query editor]({{< relref "./query-editor/" >}}) and apply [annotations]({{< relref "#annotations" >}}) when they [build dashboards]({{< relref "../../dashboards/build-dashboards/" >}}) and use [Explore]({{< relref "../../explore/" >}}).
| Name | Description |
| --------- | ------------------------------------------------------------------------------------- |
| `Name` | The data source name. This is how you refer to the data source in panels and queries. |
| `Default` | Default data source means that it is pre-selected for new panels. |
## Configure the data source
For authentication options and configuration details, see the [Google authentication]({{< relref "google-authentication/" >}}) documentation.
**To access the data source configuration page:**
### Google Cloud Monitoring specific data source configuration
1. Hover the cursor over the **Configuration** (gear) icon.
1. Select **Data Sources**.
1. Select the Google Cloud Monitoring data source.
The following APIs need to be enabled first:
Set the data source's basic configuration options carefully:
- [Monitoring API](https://console.cloud.google.com/apis/library/monitoring.googleapis.com)
- [Cloud Resource Manager API](https://console.cloud.google.com/apis/library/cloudresourcemanager.googleapis.com)
| Name | Description |
| ----------- | ------------------------------------------------------------------------ |
| **Name** | Sets the name you use to refer to the data source in panels and queries. |
| **Default** | Sets whether the data source is pre-selected for new panels. |
Click on the links above and click the `Enable` button:
### Configure Google authentication
{{< figure src="/static/img/docs/v71/cloudmonitoring_enable_api.png" max-width="450px" class="docs-image--no-shadow" caption="Enable GCP APIs" >}}
Before you can request data from Google Cloud Monitoring, you must configure authentication.
All requests to Google APIs are performed on the server-side by the Grafana backend.
#### Using GCP Service Account Key File
For authentication options and configuration details, refer to [Google authentication]({{< relref "./google-authentication/" >}}).
The GCP Service Account must have the **Monitoring Viewer** role as shown in the image below:
When configuring Google authentication, note these additional Google Cloud Monitoring-specific steps:
#### Configure a GCP Service Account
When you [create a Google Cloud Platform (GCP) Service Account and key file]({{< relref "./google-authentication/#create-a-gcp-service-account-and-key-file" >}}), the Service Account must have the **Monitoring Viewer** role (**Role > Select a role > Monitoring > Monitoring Viewer**):
{{< figure src="/static/img/docs/v71/cloudmonitoring_service_account_choose_role.png" max-width="600px" class="docs-image--no-shadow" caption="Choose role" >}}
#### Using GCE Default Service Account
#### Grant the GCE Default Service Account scope
If Grafana is running on a Google Compute Engine (GCE) virtual machine, the service account in use must have access to the `Cloud Monitoring API` scope.
If Grafana is running on a Google Compute Engine (GCE) virtual machine, then when you [Configure a GCE Default Service Account]({{< relref "./google-authentication/#configure-a-gce-default-service-account" >}}), you must also grant that Service Account access to the "Cloud Monitoring API" scope.
## Using the Query Editor
### Enable necessary Google Cloud Platform APIs
The Google Cloud Monitoring query editor allows you to build two types of queries - **Metric** and **Service Level Objective (SLO)**. Both types return time series data.
Before you can request data from Google Cloud Monitoring, you must first enable necessary APIs on the Google end.
### Metric Queries
1. Open the Monitoring and Cloud Resource Manager API pages:
{{< figure src="/static/img/docs/google-cloud-monitoring/metric-query-builder-8-0.png" max-width= "400px" class="docs-image--right" >}}
- [Monitoring API](https://console.cloud.google.com/apis/library/monitoring.googleapis.com)
- [Cloud Resource Manager API](https://console.cloud.google.com/apis/library/cloudresourcemanager.googleapis.com)
The metric query editor allows you to select metrics, group/aggregate by labels and by time, and use filters to specify which time series you want in the results.
1. On each page, click the `Enable` button.
To create a metric query, follow these steps:
{{< figure src="/static/img/docs/v71/cloudmonitoring_enable_api.png" max-width="450px" class="docs-image--no-shadow" caption="Enable GCP APIs" >}}
1. Choose the option **Metrics** in the **Query Type** dropdown.
1. Choose a project from the **Project** dropdown.
1. Choose a Google Cloud Platform service from the **Service** dropdown.
1. Choose a metric from the **Metric** dropdown.
1. Use the plus and minus icons in the filter and group by sections to add/remove filters or group by clauses. This step is optional.
### Provision the data source
Google Cloud Monitoring supports different kinds of metrics, such as `GAUGE`, `DELTA`, and `CUMULATIVE`. They support different aggregation options, for example, reducers and aligners. The Grafana query editor displays the list of available aggregation methods for a selected metric and sets a default reducer and aligner when you select the metric.
You can define and configure the data source in YAML files as part of Grafana's provisioning system.
For more information about provisioning, and for available configuration options, refer to [Provisioning Grafana]({{< relref "../../administration/provisioning/#data-sources" >}}).
#### Filter
#### Provisioning examples
To add a filter, click the plus icon, choose a field to filter by, and enter a filter value, for example `instance_name = grafana-1`. To remove the filter, click the trash icon.
##### Simple wildcards
When the operator is set to `=` or `!=`, it is possible to add wildcards to the filter value field. For example, `us-*` captures all values that start with "us-", and `*central-a` captures all values that end with "central-a". `*-central-*` captures all values that contain the substring "-central-". Simple wildcards are less expensive than regular expressions.
##### Regular expressions
When the operator is set to `=~` or `!=~`, it is possible to add regular expressions to the filter value field. For example, `us-central[1-3]-[af]` matches all values that start with "us-central", followed by a number in the range of 1 to 3, a dash, and then either an "a" or an "f". Leading and trailing slashes are not needed when creating regular expressions.
#### Pre-processing
Pre-processing options are displayed in the UI when the selected metric has a metric kind of `delta` or `cumulative`. If the selected metric has a metric kind of `gauge`, no pre-processing option is displayed.
If you select `Rate`, data points are aligned and converted to a rate per time series. If you select `Delta`, data points are aligned by their delta (difference) per time series.
#### Grouping
You can reduce the amount of data returned for a metric by combining different time series. To combine multiple time series, specify a grouping and a function.
##### Group by
Group by resource or metric labels to reduce the number of time series and to aggregate the results by a group. For example, group by `instance_name` to view an aggregated metric for a Compute instance.
###### Metadata labels
Resource metadata labels contain information that can uniquely identify a resource in Google Cloud. Metadata labels are only returned in the time series response if they're part of the **Group By** segment in the time series request.
There's no API for retrieving metadata labels. As a result, you cannot populate the group by list with the metadata labels that are available for the selected service and metric. However, the **Group By** field list comes with a pre-defined set of common system labels.
User labels cannot be predefined, but you can enter them manually in the **Group By** field. If a metadata label, user label, or system label is included in the **Group By** segment, then you can create filters based on it and expand its value on the **Alias** field.
##### Group by function
Select a grouping function to combine the time series in the group into a single time series.
#### Alignment
The process of alignment consists of collecting all data points received in a fixed length of time, applying a function to combine those data points, and assigning a timestamp to the result.
##### Alignment function
During alignment, all data points are received in a fixed interval. Within each interval (determined by the alignment period) and for each time series, the data is aggregated into a single point. The value of that point is determined by the type of alignment function used. For more information on alignment functions, refer to [alignment metric selector](https://cloud.google.com/monitoring/charts/metrics-selector#alignment).
##### Alignment period
The `Alignment Period` groups a metric by time if an aggregation is chosen. The default is to use the GCP Google Cloud Monitoring default groupings (which allows you to compare graphs in Grafana with graphs in the Google Cloud Monitoring UI).
The option is called `cloud monitoring auto` and the defaults are:
- 1m for time ranges < 23 hours
- 5m for time ranges >= 23 hours and < 6 days
- 1h for time ranges >= 6 days
The other automatic option is `grafana auto`. This will automatically set the group by time depending on the time range chosen and the width of the time series panel. For more information about grafana auto, refer to the [interval variable]({{< relref "../../dashboards/variables/add-template-variables/#add-an-interval-variable" >}}).
You can also choose fixed time intervals to group by, like `1h` or `1d`.
#### Alias patterns
The Alias By field allows you to control the format of the legend keys. The default is to show the metric name and labels. This can be long and hard to read. Using the following patterns in the alias field, you can format the legend key the way you want it.
#### Metric type patterns
| Alias Pattern | Description | Example Result |
| -------------------- | ---------------------------- | ------------------------------------------------- |
| `{{metric.type}}` | returns the full Metric Type | `compute.googleapis.com/instance/cpu/utilization` |
| `{{metric.name}}` | returns the metric name part | `instance/cpu/utilization` |
| `{{metric.service}}` | returns the service part | `compute` |
#### Label patterns
In the Group By dropdown, you can see a list of metric and resource labels for a metric. These can be included in the legend key using alias patterns.
| Alias Pattern Format | Description | Alias Pattern Example | Example Result |
| -------------------------------- | ---------------------------------------- | --------------------------------- | ---------------- |
| `{{metric.label.xxx}}` | returns the metric label value | `{{metric.label.instance_name}}` | `grafana-1-prod` |
| `{{resource.label.xxx}}` | returns the resource label value | `{{resource.label.zone}}` | `us-east1-b` |
| `{{metadata.system_labels.xxx}}` | returns the meta data system label value | `{{metadata.system_labels.name}}` | `grafana` |
| `{{metadata.user_labels.xxx}}` | returns the meta data user label value | `{{metadata.user_labels.tag}}` | `production` |
Example Alias By: `{{metric.type}} - {{metric.label.instance_name}}`
Example Result: `compute.googleapis.com/instance/cpu/usage_time - server1-prod`
It is also possible to resolve the name of the Monitored Resource Type.
| Alias Pattern Format | Description | Example Result |
| -------------------- | ----------------------------------------------- | -------------- |
| `{{resource.type}}` | returns the name of the monitored resource type | `gce_instance` |
Example Alias By: `{{resource.type}} - {{metric.type}}`
Example Result: `gce_instance - compute.googleapis.com/instance/cpu/usage_time`
#### Deep linking from Grafana panels to the Metrics Explorer in Google Cloud Console
> **Note:** Available in Grafana v7.1 and later versions.
{{< figure src="/static/img/docs/v71/cloudmonitoring_deep_linking.png" max-width="500px" class="docs-image--right" caption="Google Cloud Monitoring deep linking" >}}
Click on a time series in the panel to see a context menu with a link to View in Metrics Explorer in Google Cloud Console. Clicking that link opens the Metrics Explorer in the Google Cloud Console and runs the query from the Grafana panel there.
The link navigates the user first to the Google Account Chooser and after successfully selecting an account, the user is redirected to the Metrics Explorer. The provided link is valid for any account, but it only displays the query if your account has access to the GCP project specified in the query.
#### Automatic unit detection
Grafana issues one query to the Cloud Monitoring API per query editor row, and each API response includes a unit. Grafana attempts to convert the returned unit into a unit understood by the Grafana time series panel. If the conversion succeeds, the unit is displayed on the panel's Y-axis. If the query editor rows return different units, the unit from the last query editor row is used in the time series panel.
### SLO (Service Level Objective) queries
> **Note:** Available in Grafana v7.0 and later versions.
{{< figure src="/static/img/docs/google-cloud-monitoring/slo-query-builder-8-0.png" max-width= "400px" class="docs-image--right" >}}
The SLO query builder in the Google Cloud Monitoring data source allows you to display SLO data in time series format. To get an understanding of the basic concepts in service monitoring, please refer to Google Cloud Monitoring's [official docs](https://cloud.google.com/monitoring/service-monitoring).
#### How to create an SLO query
To create an SLO query, follow these steps:
1. Choose the option **Service Level Objectives (SLO)** in the **Query Type** dropdown.
1. Choose a project from the **Project** dropdown.
1. Choose an [SLO service](https://cloud.google.com/monitoring/api/ref_v3/rest/v3/services) from the **Service** dropdown.
1. Choose an [SLO](https://cloud.google.com/monitoring/api/ref_v3/rest/v3/services.serviceLevelObjectives) from the **SLO** dropdown.
1. Choose a [time series selector](https://cloud.google.com/stackdriver/docs/solutions/slo-monitoring/api/timeseries-selectors#ts-selector-list) from the **Selector** dropdown.
The friendly names for the time series selectors are shown in Grafana. Here is the mapping from the friendly name to the system name that is used in the Service Monitoring documentation:
| Selector dropdown value | Corresponding time series selector used |
| -------------------------- | --------------------------------------- |
| SLI Value | select_slo_health |
| SLO Compliance | select_slo_compliance |
| SLO Error Budget Remaining | select_slo_budget_fraction |
| SLO Burn Rate | select_slo_burn_rate |
#### Alias patterns for SLO queries
The Alias By field allows you to control the format of the legend keys for SLO queries too.
| Alias Pattern | Description | Example Result |
| -------------- | ---------------------------- | ------------------- |
| `{{project}}` | returns the GCP project name | `myProject` |
| `{{service}}` | returns the service name | `myService` |
| `{{slo}}` | returns the SLO | `latency-slo` |
| `{{selector}}` | returns the selector | `select_slo_health` |
#### Alignment period/group by time for SLO queries
SLO queries use the same [alignment period functionality as metric queries]({{< relref "#metric-queries" >}}).
### MQL (Monitoring Query Language) queries
> **Note:** Available in Grafana v7.4 and later versions.
The MQL query builder in the Google Cloud Monitoring data source allows you to display MQL results in time series format. To get an understanding of the basic concepts in MQL, refer to [Introduction to Monitoring Query Language](https://cloud.google.com/monitoring/mql).
#### Create an MQL query
To create an MQL query, follow these steps:
1. In the **Query Type** list, select **Metrics**.
2. Click **<> Edit MQL** right next to the **Query Type** field. This will toggle the metric query builder mode so that raw MQL queries can be used.
3. Choose a project from the **Project** list.
4. Add the [MQL](https://cloud.google.com/monitoring/mql/query-language) query of your choice in the text area.
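For example, this short MQL query fetches the mean CPU utilization of Compute Engine instances aligned to one-minute windows; it's a sketch to adapt to your own metrics and grouping, not tied to any particular project:

```
fetch gce_instance
| metric 'compute.googleapis.com/instance/cpu/utilization'
| group_by 1m, [value_utilization_mean: mean(value.utilization)]
| every 1m
```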
#### Alias patterns for MQL queries
MQL queries use the same alias patterns as [metric queries]({{< relref "#metric-queries" >}}).
`{{metric.service}}` is not supported. `{{metric.type}}` and `{{metric.name}}` show the time series key in the response.
## Templating
Instead of hard-coding things like server, application and sensor name in your metric queries you can use variables in their place.
Variables are shown as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the data
being displayed in your dashboard.
Check out the [Templating]({{< relref "../../dashboards/variables/" >}}) documentation for an introduction to the templating feature and the different
types of template variables.
### Query Variable
A variable of the type _Query_ allows you to query Google Cloud Monitoring for various types of data. The Google Cloud Monitoring data source plugin provides the following `Query Types`:
| Name | Description |
| -------------------------------- | ------------------------------------------------------------------------------------------------------------- |
| `Metric Types` | Returns a list of metric type names that are available for the specified service. |
| `Labels Keys` | Returns a list of keys for `metric label` and `resource label` in the specified metric. |
| `Labels Values` | Returns a list of values for the label in the specified metric. |
| `Resource Types` | Returns a list of resource types for the specified metric. |
| `Aggregations` | Returns a list of aggregations (cross series reducers) for the specified metric. |
| `Aligners` | Returns a list of aligners (per series aligners) for the specified metric. |
| `Alignment periods`              | Returns a list of all alignment periods that are available in the Google Cloud Monitoring query editor in Grafana. |
| `Selectors`                      | Returns a list of selectors that can be used in SLO (Service Level Objective) queries.                             |
| `SLO Services`                   | Returns a list of Service Monitoring services that can be used in SLO queries.                                     |
| `Service Level Objectives (SLO)` | Returns a list of SLOs for the specified SLO service.                                                              |
### Using variables in queries
Refer to the [variable syntax documentation]({{< relref "../../dashboards/variables/variable-syntax" >}}).
## Annotations
{{< figure src="/static/img/docs/google-cloud-monitoring/annotations-8-0.png" max-width= "400px" class="docs-image--right" >}}
[Annotations]({{< relref "../../dashboards/build-dashboards/annotate-visualizations" >}}) allow you to overlay rich event information on top of graphs. You add annotation
queries via the Dashboard menu / Annotations view. Annotation rendering is expensive, so it is important to limit the number of rows returned. There is no support for showing Google Cloud Monitoring annotations and events yet, but annotations work well with [custom metrics](https://cloud.google.com/monitoring/custom-metrics/) in Google Cloud Monitoring.
With the query editor for annotations, you can select a metric and filters. The `Title` and `Text` fields support templating and can use data returned from the query. For example, the Title field could have the following text:
`{{metric.type}} has value: {{metric.value}}`
Example Result: `monitoring.googleapis.com/uptime_check/http_status has value: 502`
### Patterns for the Annotation Query Editor
| Alias Pattern Format | Description | Alias Pattern Example | Example Result |
| ------------------------ | -------------------------------- | -------------------------------- | ------------------------------------------------- |
| `{{metric.value}}` | value of the metric/point | `{{metric.value}}` | `555` |
| `{{metric.type}}` | returns the full Metric Type | `{{metric.type}}` | `compute.googleapis.com/instance/cpu/utilization` |
| `{{metric.name}}` | returns the metric name part | `{{metric.name}}` | `instance/cpu/utilization` |
| `{{metric.service}}` | returns the service part | `{{metric.service}}` | `compute` |
| `{{metric.label.xxx}}` | returns the metric label value | `{{metric.label.instance_name}}` | `grafana-1-prod` |
| `{{resource.label.xxx}}` | returns the resource label value | `{{resource.label.zone}}` | `us-east1-b` |
## Configure the data source with provisioning
You can provision the Google Cloud Monitoring data source by modifying Grafana's configuration files. To learn more about provisioning and all the settings you can set, see the [provisioning documentation]({{< relref "../../administration/provisioning/#data-sources" >}}).
Here is a provisioning example using the JWT (Service Account key file) authentication type.
**Using the JWT (Service Account key file) authentication type:**
```yaml
apiVersion: 1
@ -327,7 +109,7 @@ datasources:
-----END PRIVATE KEY-----
```
Here is a provisioning example using GCE Default Service Account authentication.
**Using GCE Default Service Account authentication:**
```yaml
apiVersion: 1
@ -339,3 +121,41 @@ datasources:
jsonData:
authenticationType: gce
```
## Import pre-configured dashboards
The Google Cloud Monitoring data source ships with pre-configured dashboards for some of the most popular GCP services.
These curated dashboards are based on similar dashboards in the GCP dashboard samples repository.
{{< figure src="/static/img/docs/google-cloud-monitoring/curated-dashboards-7-4.png" class="docs-image--no-shadow" max-width="650px" caption="Curated dashboards for Google Cloud Monitoring" >}}
**To import curated dashboards:**
1. Navigate to the data source's [configuration page]({{< relref "#configure-the-data-source" >}}).
1. Select the **Dashboards** tab.
This displays the curated selection of importable dashboards.
1. Select **Import** for the dashboard to import.
The dashboards include a [template variable]({{< relref "./template-variables/" >}}) populated with the projects accessible by the configured [Service Account]({{< relref "./google-authentication/" >}}) each time you load the dashboard.
After Grafana loads the dashboard, you can select a project from the dropdown list.
**To customize an imported dashboard:**
We recommend that you save the dashboard under a different name before customizing it.
If you don't, upgrading Grafana can overwrite the customized dashboard with the new version.
## Query the data source
The Google Cloud Monitoring query editor helps you build two types of queries: **Metric** and **Service Level Objective (SLO)**.
For details, refer to the [query editor documentation]({{< relref "./query-editor/" >}}).
## Use template variables
Instead of hard-coding details such as server, application, and sensor names in metric queries, you can use variables.
Grafana lists these variables in dropdown select boxes at the top of the dashboard to help you change the data displayed in your dashboard.
Grafana refers to such variables as template variables.
For details, see the [template variables documentation]({{< relref "./template-variables/" >}}).

View File

@ -11,29 +11,38 @@ title: Authentication
weight: 5
---
# Google authentication
# Configure Google authentication
Requests from a Grafana plugin to Google are made on behalf of an IAM role or an IAM user. The IAM user or IAM role must have the associated policies to perform certain API actions. Since these policies are specific to each data source, refer to the data source documentation for details. All requests to Google APIs are performed on the server-side by the Grafana backend.
Requests from a Grafana plugin to Google are made on behalf of an Identity and Access Management (IAM) role or IAM user.
The IAM user or IAM role must have the associated policies to perform certain API actions.
Since these policies are specific to each data source, refer to the data source documentation for details.
You can authenticate a Grafana plugin to Google by uploading a Google JWT file or by automatically retrieving credentials from the Google metadata server. The latter option is only available when running Grafana on GCE virtual machine.
All requests to Google APIs are performed on the server-side by the Grafana backend.
You can authenticate a Grafana plugin to Google by uploading a Google JSON Web Token (JWT) file, or by automatically retrieving credentials from the Google metadata server.
The latter option is available only when running Grafana on a GCE virtual machine.
## Using Google Service Account Key File
## Use a Google Service Account key file
To authenticate the Grafana plugin with the Google API, create a Google Cloud Platform (GCP) Service Account for the Project you want to show data. A Grafana data source integrates with one GCP Project. If you want to visualize data from multiple GCP Projects, then create one data source per GCP Project.
To authenticate the Grafana plugin with the Google API, create a Google Cloud Platform (GCP) Service Account for the Project you want to show data.
### Create a GCP Service Account for a Project
Each Grafana data source integrates with one GCP Project.
To visualize data from multiple GCP Projects, create one data source per GCP Project.
### Create a GCP Service Account and key file
1. Navigate to the [APIs and Services Credentials page](https://console.cloud.google.com/apis/credentials).
1. Click on the **Create credentials** dropdown and select the **Service account** option.
1. In **Service account name**, enter a name for the account.
1. From the **Role** dropdown, choose the roles required by the specific plugin.
1. Click **Done**.
1. Use the newly created account to [create a service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys#iam-service-account-keys-create-console). A JSON key file is created and downloaded to your computer.
1. Store this file in a secure place as it allows access to your Google data.
1. Upload the key to Grafana via the data source configuration page.
The file contents is encrypted and saved in the Grafana database. Don't forget to save the file after uploading!
1. Use the newly created account to [create a service account key](https://cloud.google.com/iam/docs/creating-managing-service-account-keys#iam-service-account-keys-create-console).
A JSON key file is created and downloaded to your computer.
1. Store the key file in a secure place, because it grants access to your Google data.
1. In the Grafana data source configuration page, upload the key file.
The file's contents are encrypted and saved in the Grafana database.
Remember to save the file after uploading.
## Using GCE Default Service Account
## Configure a GCE Default Service Account
When Grafana is running on a Google Compute Engine (GCE) virtual machine, Grafana can automatically retrieve default credentials from the metadata server. As a result, there is no need to generate a private key file for the service account. You also do not need to upload the file to Grafana. The following preconditions must be met before Grafana can retrieve default credentials.

View File

@ -1,34 +0,0 @@
---
aliases:
- /docs/grafana/latest/datasources/google-cloud-monitoring/preconfig-cloud-monitoring-dashboards/
- /docs/grafana/latest/features/datasources/stackdriver/
- /docs/grafana/next/features/datasources/cloudmonitoring/
description: Guide for using Google Cloud Monitoring in Grafana
keywords:
- grafana
- stackdriver
- google
- guide
- cloud
- monitoring
title: Preconfigured dashboards
weight: 10
---
# Preconfigured Cloud Monitoring dashboards
Google Cloud Monitoring data source ships with pre-configured dashboards for some of the most popular GCP services. These curated dashboards are based on similar dashboards in the GCP dashboard samples repository. See also, [Using Google Cloud Monitoring in Grafana]({{< relref "_index.md" >}}) for detailed instructions on how to add and configure the Google Cloud Monitoring data source.
## Curated dashboards
To import the curated dashboards:
1. On the configuration page of your Cloud Monitoring data source, click the **Dashboards** tab.
1. Click **Import** for the dashboard you would like to use.
The data source of the newly created dashboard panels will be the one selected above. The dashboards have a template variable that is populated with the projects accessible by the configured service account every time the dashboard is loaded. After the dashboard is loaded, you can select the project you prefer from the drop-down list.
In case you want to customize a dashboard, we recommend that you save it under a different name. Otherwise the dashboard will be overwritten when a new version of the dashboard is released.
{{< figure src="/static/img/docs/google-cloud-monitoring/curated-dashboards-7-4.png" max-width= "650px" >}}

View File

@ -0,0 +1,294 @@
---
aliases:
- /docs/grafana/latest/data-sources/google-cloud-monitoring/query-editor/
description: Guide for using the Google Cloud Monitoring data source's query editor
keywords:
- grafana
- google
- cloud
- monitoring
- queries
menuTitle: Query editor
title: Google Cloud Monitoring query editor
weight: 300
---
# Google Cloud Monitoring query editor
This topic explains querying specific to the Google Cloud Monitoring data source.
For general documentation on querying data sources in Grafana, see [Query and transform data]({{< relref "../../../panels-visualizations/query-transform-data" >}}).
## Choose a query editing mode
The Google Cloud Monitoring query editor helps you build queries for two types of data, which both return time series data:
- [Metrics]({{< relref "#query-metrics" >}})
You can also create [Monitoring Query Language (MQL)]({{< relref "#use-the-monitoring-query-language" >}}) queries.
- [Service Level Objectives (SLO)]({{< relref "#query-service-level-objectives" >}})
You also use the query editor when you [annotate]({{< relref "#apply-annotations" >}}) visualizations.
## Query metrics
{{< figure src="/static/img/docs/google-cloud-monitoring/metric-query-builder-8-0.png" max-width="400px" class="docs-image--no-shadow" caption="Google Cloud Monitoring metrics query builder" >}}
The metrics query editor helps you select metrics, group and aggregate by labels and time, and use filters to specify which time series you want to query.
### Create a metrics query
1. Select the **Metrics** option in the **Query Type** dropdown.
1. Select a project from the **Project** dropdown.
1. Select a Google Cloud Platform service from the **Service** dropdown.
1. Select a metric from the **Metric** dropdown.
1. _(Optional)_ Use the plus and minus icons in the filter and group-by sections to add and remove filters or group-by clauses.
Google Cloud Monitoring supports several metric types, such as `GAUGE`, `DELTA`, and `CUMULATIVE`.
Each supports different aggregation options, such as reducers and aligners.
The metrics query editor lists available aggregation methods for a selected metric, and sets a default reducer and aligner when you select a metric.
### Apply a filter
**To add and apply a filter:**
1. Click the plus icon in the filter section.
1. Select a field to filter by.
1. Enter a filter value, such as `instance_name = grafana-1`.
To remove the filter, click the trash icon.
#### Use simple wildcards
When you set the operator to `=` or `!=`, you can add wildcards to the filter value field.
For example, entering `us-*` captures all values that start with "us-", and entering `*central-a` captures all values that end with "central-a".
`*-central-*` captures all values containing the substring "-central-".
Simple wildcards perform better than regular expressions.
#### Use regular expressions
When you set the operator to `=~` or `!=~`, you can add regular expressions to the filter value field.
For example, entering `us-central[1-3]-[af]` matches all values that start with "us-central", then are followed by a number in the range of 1 to 3, a dash, and then either an "a" or an "f".
You don't need leading and trailing slashes when you create these regular expressions.
### Configure pre-processing options
The query editor displays pre-processing options when the selected metric has a metric type of `DELTA` or `CUMULATIVE`.
- The **Rate** option aligns and converts data points to a rate per time series.
- The **Delta** option aligns data points by their delta (difference) per time series.
### Group time series
To combine multiple time series and reduce the amount of data returned for a metric, specify a grouping and a function.
#### Group by
Use the **Group By** segment to group resource or metric labels, which reduces the number of time series and aggregates the results by group.
For example, if you group by `instance_name`, Grafana displays an aggregated metric for the specified Compute instance.
##### Metadata labels
Resource metadata labels contain information that can uniquely identify a resource in Google Cloud.
A time series response returns metadata labels only if they're part of the Group By segment in the time series request.
There's no API for retrieving metadata labels, so you can't populate the Group By list with the metadata labels that are available for the selected service and metric.
However, the **Group By** field list comes with a pre-defined set of common system labels.
You can't pre-define user labels, but you can enter them manually in the **Group By** field.
If you include a metadata label, user label, or system label in the **Group By** segment, you can create filters based on it and expand its value on the **Alias** field.
#### Group by function
To combine the time series in the group into a single time series, select a grouping function.
### Align data
You can align all data points received in a fixed length of time by applying an alignment function to combine those data points, and then assigning a timestamp to the result.
#### Select an alignment function
During alignment, all data points are received in a fixed interval.
Within each interval, as defined by the [alignment period]({{< relref "#define-the-alignment-period" >}}), and for each time series, the data is aggregated into a single point.
The value of that point is determined by the type of alignment function you use.
For more information on alignment functions, refer to [alignment metric selector](https://cloud.google.com/monitoring/charts/metrics-selector#alignment).
#### Define the alignment period
You can set the **Alignment Period** field to group a metric by time if you've chosen an aggregation.
The default, called "cloud monitoring auto", uses Google Cloud Monitoring's default groupings.
These enable you to compare graphs in Grafana with graphs in the Google Cloud Monitoring UI.
The default values for "cloud monitoring auto" are:
- 1m for time ranges < 23 hours
- 5m for time ranges >= 23 hours and < 6 days
- 1h for time ranges >= 6 days
The other automatic option is "grafana auto", which automatically sets the Group By time depending on the chosen time range and the width of the time series panel.
For more information about "grafana auto", refer to [Interval variable]({{< relref "../../../dashboards/variables/add-template-variables/#add-an-interval-variable" >}}).
You can also choose fixed time intervals to group by, like `1h` or `1d`.
### Set alias patterns
The **Alias By** field helps you control the format of legend keys.
By default, Grafana shows the metric name and labels, which can be long and difficult to read.
You can use patterns in the alias field to customize the legend key's format:
| Alias pattern | Description | Alias pattern example | Example result |
| -------------------------------- | ------------------------------------------------ | --------------------------------- | ------------------------------------------------- |
| `{{metric.label.xxx}}` | Returns the metric label value. | `{{metric.label.instance_name}}` | `grafana-1-prod` |
| `{{metric.type}}` | Returns the full Metric Type. | `{{metric.type}}` | `compute.googleapis.com/instance/cpu/utilization` |
| `{{metric.name}}` | Returns the metric name part. | `{{metric.name}}` | `instance/cpu/utilization` |
| `{{metric.service}}` | Returns the service part. | `{{metric.service}}` | `compute` |
| `{{resource.label.xxx}}` | Returns the resource label value. | `{{resource.label.zone}}` | `us-east1-b` |
| `{{resource.type}}` | Returns the name of the monitored resource type. | `{{resource.type}}` | `gce_instance` |
| `{{metadata.system_labels.xxx}}` | Returns the metadata system label value. | `{{metadata.system_labels.name}}` | `grafana` |
| `{{metadata.user_labels.xxx}}` | Returns the metadata user label value. | `{{metadata.user_labels.tag}}` | `production` |
#### Alias pattern examples
**Using metric labels:**
You can select from a list of metric and resource labels for a metric in the Group By dropdown.
You can then include them in the legend key using alias patterns.
For example, given this value of Alias By: `{{metric.type}} - {{metric.label.instance_name}}`
An expected result would look like: `compute.googleapis.com/instance/cpu/usage_time - server1-prod`
**Using the Monitored Resource Type:**
You can also resolve the name of the Monitored Resource Type with the `resource.type` alias.
For example, given this value of Alias By: `{{resource.type}} - {{metric.type}}`
An expected result would look like: `gce_instance - compute.googleapis.com/instance/cpu/usage_time`
### Deep-link from Grafana panels to the Google Cloud Console Metrics Explorer
> **Note:** Available in Grafana v7.1 and higher.
{{< figure src="/static/img/docs/v71/cloudmonitoring_deep_linking.png" max-width="500px" class="docs-image--no-shadow" caption="Google Cloud Monitoring deep linking" >}}
You can click on a time series in the panel to access a context menu, which contains a link to **View in Metrics Explorer in Google Cloud Console**.
The link points to the Google Cloud Console's Metrics Explorer and runs the Grafana panel's query there.
If you select the link, you first select an account in the Google Account Chooser.
Google then redirects you to the Metrics Explorer.
The provided link is valid for any account, but displays the query only if your account has access to the query's specified GCP project.
### Understand automatic unit detection
Grafana issues one query to the Cloud Monitoring API per query editor row, and each API response includes a unit.
Grafana attempts to convert the returned unit into a unit compatible with its time series panel.
If successful, Grafana displays the unit on the panel's Y-axis.
If the query editor rows return different units, Grafana uses the unit from the last query editor row in the time series panel.
### Use the Monitoring Query Language
> **Note:** Available in Grafana v7.4 and higher.
The Monitoring Query Language (MQL) query builder helps you query and display MQL results in time series format.
To understand basic MQL concepts, refer to [Introduction to Monitoring Query Language](https://cloud.google.com/monitoring/mql).
#### Create an MQL query
**To create an MQL query:**
1. Select the **Metrics** option in the **Query Type** dropdown.
1. Select **<> Edit MQL** next to the **Query Type** field.
This toggles the MQL query builder mode.
1. Select a project from the **Project** dropdown.
1. Enter your MQL query in the text area.
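For example, the following sketch fetches mean CPU utilization per Compute Engine instance; the metric is a standard GCP metric, and the one-minute windows are illustrative:

```text
# Mean CPU utilization per GCE instance, aligned to 1m windows
fetch gce_instance
| metric 'compute.googleapis.com/instance/cpu/utilization'
| group_by 1m, [value_utilization_mean: mean(value.utilization)]
| every 1m
```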
### Set alias patterns for MQL queries
MQL queries use the same alias patterns as [metric queries]({{< relref "#set-alias-patterns" >}}).
However, `{{metric.service}}` is not supported, and `{{metric.type}}` and `{{metric.name}}` show the time series key in the response.
## Query Service Level Objectives
> **Note:** Available in Grafana v7.0 and higher.
{{< figure src="/static/img/docs/google-cloud-monitoring/slo-query-builder-8-0.png" max-width="400px" class="docs-image--no-shadow" caption="Service Level Objectives (SLO) query editor" >}}
The SLO query builder helps you visualize SLO data in time series format.
To understand basic concepts in service monitoring, refer to the [Google Cloud Monitoring documentation](https://cloud.google.com/monitoring/service-monitoring).
### Create an SLO query
**To create an SLO query:**
1. Select the **Service Level Objectives (SLO)** option in the **Query Type** dropdown.
1. Select a project from the **Project** dropdown.
1. Select an [SLO service](https://cloud.google.com/monitoring/api/ref_v3/rest/v3/services) from the **Service** dropdown.
1. Select an [SLO](https://cloud.google.com/monitoring/api/ref_v3/rest/v3/services.serviceLevelObjectives) from the **SLO** dropdown.
1. Select a [time series selector](https://cloud.google.com/stackdriver/docs/solutions/slo-monitoring/api/timeseries-selectors#ts-selector-list) from the **Selector** dropdown.
Grafana's time series selectors use descriptive names that map to system names that Google uses in the Service Monitoring documentation:
| Selector dropdown value | Corresponding time series selector |
| ------------------------------ | ---------------------------------- |
| **SLI Value** | `select_slo_health` |
| **SLO Compliance** | `select_slo_compliance` |
| **SLO Error Budget Remaining** | `select_slo_budget_fraction` |
| **SLO Burn Rate** | `select_slo_burn_rate` |
### Alias patterns for SLO queries
The **Alias By** field helps you control the format of legend keys for SLO queries.
| Alias pattern | Description | Example result |
| -------------- | ----------------------------- | ------------------- |
| `{{project}}` | Returns the GCP project name. | `myProject` |
| `{{service}}` | Returns the service name. | `myService` |
| `{{slo}}` | Returns the SLO. | `latency-slo` |
| `{{selector}}` | Returns the selector. | `select_slo_health` |
### Alignment period and group-by time for SLO queries
SLO queries use the same alignment period functionality as [metric queries]({{< relref "#define-the-alignment-period" >}}).
## Apply annotations
{{< figure src="/static/img/docs/google-cloud-monitoring/annotations-8-0.png" max-width= "400px" class="docs-image--right" >}}
[Annotations]({{< relref "../../../dashboards/build-dashboards/annotate-visualizations" >}}) overlay rich event information on top of graphs.
You can add annotation queries in the Dashboard menu's Annotations view.
Rendering annotations is expensive, and it's important to limit the number of rows returned.
There's no support for displaying Google Cloud Monitoring's annotations and events, but annotations work well with [custom metrics](https://cloud.google.com/monitoring/custom-metrics/) in Google Cloud Monitoring.
With the query editor for annotations, you can select a metric and filters.
The `Title` and `Text` fields support templating and can use data returned from the query.
For example, the Title field could have the following text:
`{{metric.type}} has value: {{metric.value}}`
Example result: `monitoring.googleapis.com/uptime_check/http_status has value: 502`
### Patterns for the annotation query editor
| Alias pattern format | Description | Alias pattern example | Example result |
| ------------------------ | --------------------------------- | -------------------------------- | ------------------------------------------------- |
| `{{metric.value}}` | Value of the metric/point. | `{{metric.value}}` | `555` |
| `{{metric.type}}` | Returns the full Metric Type. | `{{metric.type}}` | `compute.googleapis.com/instance/cpu/utilization` |
| `{{metric.name}}` | Returns the metric name part. | `{{metric.name}}` | `instance/cpu/utilization` |
| `{{metric.service}}` | Returns the service part. | `{{metric.service}}` | `compute` |
| `{{metric.label.xxx}}` | Returns the metric label value. | `{{metric.label.instance_name}}` | `grafana-1-prod` |
| `{{resource.label.xxx}}` | Returns the resource label value. | `{{resource.label.zone}}` | `us-east1-b` |

View File

@ -0,0 +1,47 @@
---
aliases:
- /docs/grafana/latest/data-sources/google-cloud-monitoring/template-variables/
description: Guide for using template variables when querying the Google Cloud Monitoring data source
keywords:
- grafana
- google
- cloud
- monitoring
- queries
- template
- variable
menuTitle: Template variables
title: Google Cloud Monitoring template variables
weight: 300
---
# Google Cloud Monitoring template variables
Instead of hard-coding details such as server, application, and sensor names in metric queries, you can use variables.
Grafana lists these variables in dropdown select boxes at the top of the dashboard to help you change the data displayed in your dashboard.
Grafana refers to such variables as template variables.
For an introduction to templating and template variables, refer to the [Templating]({{< relref "../../../dashboards/variables" >}}) and [Add and manage variables]({{< relref "../../../dashboards/variables/add-template-variables" >}}) documentation.
## Use query variables
Variables of the type _Query_ help you query Google Cloud Monitoring for various types of data.
The Google Cloud Monitoring data source plugin provides the following **Query Types**:
| Name | List returned |
| ---------------------------------- | --------------------------------------------------------------------- |
| **Metric Types** | Metric type names available for the specified service. |
| **Labels Keys** | Keys for `metric label` and `resource label` in the specified metric. |
| **Labels Values** | Values for the label in the specified metric. |
| **Resource Types** | Resource types for the specified metric. |
| **Aggregations** | Aggregations (cross-series reducers) for the specified metric. |
| **Aligners** | Aligners (per-series aligners) for the specified metric. |
| **Alignment periods** | All alignment periods available in the query editor in Grafana. |
| **Selectors** | Selectors for SLO (Service Level Objectives) queries. |
| **SLO Services** | Service Monitoring services for SLO queries. |
| **Service Level Objectives (SLO)** | SLOs for the specified SLO service. |
## Use variables in queries
Use Grafana's variable syntax to include variables in queries.
For details, refer to the [variable syntax documentation]({{< relref "../../../dashboards/variables/variable-syntax" >}}).

View File

@ -1,251 +0,0 @@
---
aliases:
- /docs/grafana/latest/datasources/graphite/
- /docs/grafana/latest/features/datasources/graphite/
description: Guide for using graphite in Grafana
keywords:
- grafana
- graphite
- guide
title: Graphite
weight: 600
---
# Using Graphite in Grafana
Grafana has an advanced Graphite query editor that lets you quickly navigate the metric space, add functions,
change function parameters and much more. The editor can handle all types of graphite queries. It can even handle complex nested
queries through the use of query references.
Refer to [Add a data source]({{< relref "add-a-data-source/" >}}) for instructions on how to add a data source to Grafana. Only organization admins can add data sources. To learn more about the Graphite data source, refer to Graphite's [product documentation](https://graphite.readthedocs.io/en/stable/).
## Graphite settings
To access Graphite settings, hover your mouse over the **Configuration** (gear) icon, then click **Data Sources**, and then click the Graphite data source.
| Name | Description |
| --------------------- | ------------------------------------------------------------------------------------------------------------------------------- |
| `Name` | The data source name. This is how you refer to the data source in panels and queries. |
| `Default` | Default data source means that it will be pre-selected for new panels. |
| `URL` | The HTTP protocol, IP, and port of your graphite-web or graphite-api install. |
| `Auth` | Refer to [Authentication]({{< relref "../setup-grafana/configure-security/configure-authentication/" >}}) for more information. |
| `Basic Auth` | Enable basic authentication to the data source. |
| `User` | User name for basic authentication. |
| `Password` | Password for basic authentication. |
| `Custom HTTP Headers` | Click **Add header** to add a custom HTTP header. |
| `Header` | Enter the custom header name. |
| `Value` | Enter the custom header value. |
| `Graphite details` |
| `Version` | Select your version of Graphite. |
| `Type` | Select your type of Graphite. |
## Graphite query editor
Grafana includes a Graphite-specific query editor to help you build your queries.
To see the raw text of the query that is sent to Graphite, click the **Toggle text edit mode** (pencil) icon.
### Choose metrics to query
Click **Select metric** to start navigating the metric space. Once you start, you can continue using the mouse or keyboard arrow keys. You can select a wildcard and still continue.
{{< figure src="/static/img/docs/graphite/graphite-query-editor-still.png"
animated-gif="/static/img/docs/graphite/graphite-query-editor.gif" >}}
### Functions
Click the plus icon next to **Function** to add a function. You can search for the function or select it from the menu. Once
a function is selected, it will be added and your focus will be in the text box of the first parameter.
- To edit or change a parameter, click on it and it will turn into a text box.
- To delete a function, click the function name followed by the x icon.
{{< figure src="/static/img/docs/graphite/graphite-functions-still.png"
animated-gif="/static/img/docs/graphite/graphite-functions-demo.gif" >}}
Some functions like aliasByNode support an optional second argument. To add an argument, hover your mouse over the first argument and then click the `+` symbol that appears. To remove the second optional parameter, click on it and leave it blank and the editor will remove it.
To learn more, refer to [Graphite's documentation on functions](https://graphite.readthedocs.io/en/latest/functions.html).
### Sort labels
If you want consistent ordering, use sortByName. This can be particularly annoying when you have the same labels on multiple graphs, and they are both sorted differently and using different colors. To fix this, use `sortByName()`.
### Nested queries
You can reference queries by the row “letter” that they're on (similar to Microsoft Excel). If you add a second query to a graph, you can reference the first query simply by typing in #A. This provides an easy and convenient way to build compounded queries.
### Avoiding many queries by using wildcards
Occasionally one would like to see multiple time series plotted on the same graph. For example we might want to see how the CPU is being utilized on a machine. You might
initially create the graph by adding a query for each time series, such as `cpu.percent.user.g`,
`cpu.percent.system.g`, and so on. This results in _n_ queries made to the data source, which is inefficient.
To be more efficient one can use wildcards in your search, returning all the time series in one query. For example, `cpu.percent.*.g`.
### Modify the metric name in my tables or charts
Use `alias` functions to change metric names on Grafana tables or graphs For example `aliasByNode()` or `aliasSub()`.
## Point consolidation
All Graphite metrics are consolidated so that Graphite doesn't return more data points than there are pixels in the graph. By default,
this consolidation is done using `avg` function. You can control how Graphite consolidates metrics by adding the Graphite consolidateBy function.
> **Note:** This means that legend summary values (max, min, total) cannot all be correct at the same time. They are calculated
> client-side by Grafana. And depending on your consolidation function, only one or two can be correct at the same time.
## Combine time series
To combine time series, click **Combine** in the **Functions** list.
## Data exploration and tags
In Graphite, _everything_ is a tag.
When exploring data, previously-selected tags are used to filter the remaining result set. To select data, you use the
`seriesByTag` function, which takes tag expressions (`=`, `!=`, `=~`, `!=~`) to filter timeseries.
The Grafana query builder does this for you automatically when you select a tag.
> **Tip:** The regular expression search can be quite slow on high-cardinality tags, so try to use other tags to reduce the scope first.
> Starting off with a particular name/namespace can help reduce the results.
## Template variables
Instead of hard-coding things like server, application, and sensor name in your metric queries, you can use variables in their place.
Variables are shown as drop-down select boxes at the top of the dashboard. These dropdowns make it easy to change the data
being displayed in your dashboard.
For more information, refer to [Variables and templates]({{< relref "../dashboards/variables/" >}}).
Graphite 1.1 introduced tags and Grafana added support for Graphite queries with tags in version 5.0. To create a variable using tag values, use the Grafana functions `tags` and `tag_values`.
| Query | Description |
| ----------------------------------------------------------- | -------------------------------------------------------------------------------------------------- |
| `tags()` | Returns all tags. |
| `tags(server=~backend\*)` | Returns only tags that occur in series matching the filter expression. |
| `tag_values(server)` | Return tag values for the specified tag. |
| `tag_values(server, server=~backend\*)` | Returns filtered tag values that occur for the specified tag in series matching those expressions. |
| `tag_values(server, server=~backend\*, app=~${apps:regex})` | Multiple filter expressions and expressions can contain other variables. |
For more details, see the [Graphite docs on the autocomplete API for tags](http://graphite.readthedocs.io/en/latest/tags.html#auto-complete-support).
### Query variable
The query you specify in the query field should be a metric find type of query. For example, a query like `prod.servers.*` fills the
variable with all possible values that exist in the wildcard position.
The results contain all possible values occurring only at the last level of the query. To get full metric names matching the query
use expand function (`expand(*.servers.*)`).
#### Comparison between expanded and non-expanded metric search results
The expanded query returns the full names of matching metrics. In combination with regex, it can extract any part of the metric name. By contrast, a non-expanded query only returns the last part of the metric name. It does not allow you to extract other parts of metric names.
Here are some example metrics:
- `prod.servers.001.cpu`
- `prod.servers.002.cpu`
- `test.servers.001.cpu`
The following examples show how expanded and non-expanded queries can be used to fetch specific parts of the metrics name.
| non-expanded query | results | expanded query | expanded results |
| ------------------ | ---------- | ------------------------- | ---------------------------------------------------------------- |
| `*` | prod, test | `expand(*)` | prod, test |
| `*.servers` | servers | `expand(*.servers)` | prod.servers, test.servers |
| `test.servers` | servers | `expand(test.servers)` | test.servers |
| `*.servers.*` | 001,002 | `expand(*.servers.*)` | prod.servers.001, prod.servers.002, test.servers.001 |
| `test.servers.*` | 001 | `expand(test.servers.*)` | test.servers.001 |
| `*.servers.*.cpu` | cpu | `expand(*.servers.*.cpu)` | prod.servers.001.cpu, prod.servers.002.cpu, test.servers.001.cpu |
As you can see from the results, the non-expanded query is the same as an expanded query with a regex matching the last part of the name.
You can also create nested variables that use other variables in their definition. For example
`apps.$app.servers.*` uses the variable `$app` in its query definition.
#### Using `__searchFilter` to filter query variable results
> Available from Grafana 6.5 and above
Using `__searchFilter` in the query field will filter the query result based on what the user types in the dropdown select box.
When nothing has been entered by the user the default value for `__searchFilter` is `*` and `` when used as part of a regular expression.
The example below shows how to use `__searchFilter` as part of the query field to enable searching for `server` while the user types in the dropdown select box.
Query
```bash
apps.$app.servers.$__searchFilter
```
TagValues
```bash
tag_values(server, server=~${__searchFilter:regex})
```
### Variable usage
You can use a variable in a metric node path or as a parameter to a function.
![variable](/static/img/docs/v2/templated_variable_parameter.png)
There are two syntaxes:
- `$<varname>` Example: apps.frontend.$server.requests.count
- `${varname}` Example: apps.frontend.${server}.requests.count
Why two ways? The first syntax is easier to read and write but does not allow you to use a variable in the middle of a word. Use
the second syntax in expressions like `my.server${serverNumber}.count`.
Example:
[Graphite Templated Dashboard](https://play.grafana.org/dashboard/db/graphite-templated-nested)
### Variable usage in tag queries
Multi-value variables in tag queries use the advanced formatting syntax introduced in Grafana 5.0 for variables: `{var:regex}`. Non-tag queries will use the default glob formatting for multi-value variables.
Example of a tag expression with regex formatting and using the Equal Tilde operator, `=~`:
```text
server=~${servers:regex}
```
For more information, refer to [Advanced variable format options]({{< relref "../dashboards/variables/variable-syntax/#advanced-variable-format-options" >}}).
## Annotations
[Annotations]({{< relref "../dashboards/build-dashboards/annotate-visualizations" >}}) allow you to overlay rich event information on top of graphs. You add annotation
queries via the Dashboard menu / Annotations view.
Graphite supports two ways to query annotations. A regular metric query, for this you use the `Graphite query` textbox. A Graphite events query, use the `Graphite event tags` textbox,
specify a tag or wildcard (leave empty should also work)
## Get Grafana metrics into Graphite
Grafana exposes metrics for Graphite on the `/metrics` endpoint. For detailed instructions, refer to [Internal Grafana metrics]({{< relref "../setup-grafana/set-up-grafana-monitoring/" >}}).
## Configure the data source with provisioning
It's now possible to configure data sources using config files with Grafana's provisioning system. You can read more about how it works and all the settings you can set for data sources on the [provisioning docs page]({{< relref "../administration/provisioning/#datasources" >}})
Here are some provisioning examples for this data source.
```yaml
apiVersion: 1
datasources:
- name: Graphite
type: graphite
access: proxy
url: http://localhost:8080
jsonData:
graphiteVersion: '1.1'
```
## Integration with Loki
Graphite queries get converted to Loki queries when the data source selection changes in Explore. Loki label names and values are extracted from the Graphite queries according to mappings information provided in Graphite data source configuration. Queries using tags with `seriesByTags()` are also transformed without any additional setup.
Refer to the Graphite data source settings for more details.

View File

@ -0,0 +1,100 @@
---
aliases:
- /docs/grafana/latest/features/datasources/graphite/
- /docs/grafana/latest/datasources/graphite/
- /docs/grafana/latest/data-sources/graphite/
description: Guide for using Graphite in Grafana
keywords:
- grafana
- graphite
- guide
menuTitle: Graphite
title: Graphite data source
weight: 600
---
# Graphite data source
Grafana includes built-in support for Graphite.
This topic explains options, variables, querying, and other features specific to the Graphite data source, which include its feature-rich query editor.
For instructions on how to add a data source to Grafana, refer to the [administration documentation]({{< relref "../../administration/data-source-management/" >}}).
Only users with the organization administrator role can add data sources.
Once you've added the Graphite data source, you can [configure it]({{< relref "#configure-the-data-source" >}}) so that your Grafana instance's users can create queries in its [query editor]({{< relref "./query-editor/" >}}) when they [build dashboards]({{< relref "../../dashboards/build-dashboards/" >}}) and use [Explore]({{< relref "../../explore/" >}}).
## Configure the data source
**To access the data source configuration page:**
1. Hover the cursor over the **Configuration** (gear) icon.
1. Select **Data Sources**.
1. Select the Graphite data source.
Set the data source's basic configuration options carefully:
| Name | Description |
| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| **Name** | Sets the name you use to refer to the data source in panels and queries. |
| **Default** | Sets whether the data source is pre-selected for new panels. You can set only one default data source per organization. |
| **URL** | Sets the HTTP protocol, IP, and port of your graphite-web or graphite-api installation. |
| **Auth** | For details, refer to [Configure Authentication]({{< relref "../../setup-grafana/configure-security/configure-authentication/" >}}). |
| **Basic Auth** | Enables basic authentication to the data source. |
| **User** | Sets the user name for basic authentication. |
| **Password** | Sets the password for basic authentication. |
| **Custom HTTP Headers** | Click **Add header** to add a custom HTTP header. |
| **Header** | Defines the custom header name. |
| **Value** | Defines the custom header value. |
You can also configure settings specific to the Graphite data source:
| Name | Description |
| ----------- | -------------------------------- |
| **Version** | Select your version of Graphite. |
| **Type** | Select your type of Graphite. |
### Integrate with Loki
When you change the data source selection in [Explore]({{< relref "../../explore/" >}}), Graphite queries are converted to Loki queries.
Grafana extracts Loki label names and values from the Graphite queries according to mappings provided in the Graphite data source configuration.
Queries using tags with `seriesByTags()` are also transformed without any additional setup.
### Provision the data source
You can define and configure the data source in YAML files as part of Grafana's provisioning system.
For more information about provisioning, and for lists of common configuration options and JSON data options, refer to [Provisioning data sources]({{< relref "../../administration/provisioning/#data-sources" >}}).
#### Provisioning example
```yaml
apiVersion: 1
datasources:
- name: Graphite
type: graphite
access: proxy
url: http://localhost:8080
jsonData:
graphiteVersion: '1.1'
```
## Query the data source
Grafana includes a Graphite-specific query editor to help you build queries.
The query editor helps you quickly navigate the metric space, add functions, and change function parameters.
It can handle all types of Graphite queries, including complex nested queries through the use of query references.
For details, refer to the [query editor documentation]({{< relref "./query-editor/" >}}).
## Use template variables
Instead of hard-coding details such as server, application, and sensor names in metric queries, you can use variables.
Grafana lists these variables in dropdown select boxes at the top of the dashboard to help you change the data displayed in your dashboard.
Grafana refers to such variables as template variables.
For details, see the [template variables documentation]({{< relref "./template-variables/" >}}).
## Get Grafana metrics into Graphite
Grafana exposes metrics for Graphite on the `/metrics` endpoint.
For detailed instructions, refer to [Internal Grafana metrics]({{< relref "../../setup-grafana/set-up-grafana-monitoring" >}}).

View File

@ -0,0 +1,121 @@
---
aliases:
- /docs/grafana/latest/data-sources/graphite/query-editor/
description: Guide for using the Graphite data source's query editor
keywords:
- grafana
- microsoft
- graphite
- monitor
- metrics
- logs
- resources
- queries
menuTitle: Query editor
title: Graphite query editor
weight: 300
---
# Graphite query editor
Grafana includes a Graphite-specific query editor to help you build queries.
The query editor helps you quickly navigate the metric space, add functions, and change function parameters.
It can handle all types of Graphite queries, including complex nested queries through the use of query references.
For general documentation on querying data sources in Grafana, see [Query and transform data]({{< relref "../../../panels-visualizations/query-transform-data" >}}).
## View the raw query
To see the raw text of the query that Grafana sends to Graphite, click the **Toggle text edit mode** (pencil) icon.
## Choose metrics to query
Click **Select metric** to navigate the metric space.
Once you begin, you can use the mouse or keyboard arrow keys.
You can also select a wildcard and still continue.
{{< figure src="/static/img/docs/graphite/graphite-query-editor-still.png" animated-gif="/static/img/docs/graphite/graphite-query-editor.gif" >}}
## Functions
Click the plus icon next to **Function** to add a function. You can search for the function or select it from the menu.
Once you select a function, Grafana adds it and moves your focus to the text box of the first parameter.

- To edit or change a parameter, click it and it becomes a text box.
- To delete a function, click the function name followed by the x icon.
{{< figure src="/static/img/docs/graphite/graphite-functions-still.png" animated-gif="/static/img/docs/graphite/graphite-functions-demo.gif" >}}
Some functions, such as `aliasByNode`, support an optional second argument. To add an argument, hover your cursor over the first argument and then click the `+` symbol that appears. To remove the optional second parameter, click it, leave it blank, and the editor removes it.
To learn more, refer to [Graphite's documentation on functions](https://graphite.readthedocs.io/en/latest/functions.html).
### Sort labels
If you have the same labels on multiple graphs, they can be sorted differently and use different colors.
To avoid this, use the `sortByName()` function to order labels consistently by name.
### Modify the metric name in tables or charts
Use `alias` functions, such as `aliasByNode()` or `aliasSub()`, to change metric names on Grafana tables or graphs.
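For example, given a hypothetical metric path `apps.frontend.server01.requests.count`, `aliasByNode` with node index 2 shortens the legend to just the server segment:

```text
aliasByNode(apps.frontend.server01.requests.count, 2)
```

This renders the legend key as `server01` instead of the full metric path.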
### Consolidate data points
Grafana consolidates all Graphite metrics so that Graphite doesn't return more data points than there are pixels in the graph.
By default, Grafana consolidates data points using the `avg` function.
To control how Graphite consolidates metrics, use the Graphite `consolidateBy()` function.
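For example, to consolidate using the maximum value so that peaks survive when the graph has fewer pixels than data points (the metric path is hypothetical):

```text
consolidateBy(apps.frontend.*.requests.count, 'max')
```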
> **Note:** Legend summary values (max, min, total) can't all be correct at the same time because they are calculated client-side by Grafana.
> Depending on your consolidation function, only one or two can be correct at the same time.
### Combine time series
To combine time series, click **Combine** in the **Functions** list.
### Select and explore data with tags
In Graphite, _everything_ is a tag.
When exploring data, previously selected tags filter the remaining result set.
To select data, use the `seriesByTag` function, which takes tag expressions (`=`, `!=`, `=~`, `!=~`) to filter time series.
The Grafana query builder does this for you automatically when you select a tag.
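For example, a query that selects all backend series in a single datacenter might look like the following sketch, where both tag names are hypothetical:

```text
seriesByTag('server=~backend.*', 'datacenter=dc1')
```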
> **Tip:** The regular expression search can be slow on high-cardinality tags, so try to use other tags to reduce the scope first.
> To help reduce the results, start by filtering on a particular name or namespace.
## Nest queries
You can reference a query by the "letter" of its row, similar to a spreadsheet.
If you add a second query to a graph, you can reference the first query by entering `#A`.
This helps you build compounded queries.
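For example, if query A is `apps.backend.*.requests.count` (a hypothetical path), a second query can reuse it without repeating the metric path:

```text
sumSeries(#A)
```

Grafana expands `#A` into the referenced query's text before sending the request to Graphite.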
## Use wildcards to make fewer queries
To view multiple time series plotted on the same graph, use wildcards in your search to return all of the matching time series in one query.
For example, to see how the CPU is being utilized on a machine, you can create a graph and use the single query `cpu.percent.*.g` to retrieve all time series that match that pattern.
This is more efficient than adding a query for each time series, such as `cpu.percent.user.g`, `cpu.percent.system.g`, and so on, which results in many queries to the data source.
## Apply annotations
[Annotations]({{< relref "../../../dashboards/build-dashboards/annotate-visualizations" >}}) overlay rich event information on top of graphs.
You can add annotation queries in the Dashboard menu's Annotations view.
Graphite supports two ways to query annotations:
- A regular metric query, using the `Graphite query` textbox.
- A Graphite events query, using the `Graphite event tags` textbox with a tag, wildcard, or empty value.
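For example, to annotate deployments, either style works; the metric path and tag name below are hypothetical:

Graphite query

```text
events.deployments.count
```

Graphite event tags

```text
deployment
```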
## Get Grafana metrics into Graphite
Grafana exposes metrics for Graphite on the `/metrics` endpoint.
For detailed instructions, refer to [Internal Grafana metrics]({{< relref "../../../setup-grafana/set-up-grafana-monitoring" >}}).
## Integration with Loki
Grafana converts Graphite queries to Loki queries when the data source selection changes in Explore. Loki label names and values are extracted from the Graphite queries according to the mappings provided in the Graphite data source configuration. Queries using tags with `seriesByTags()` are also transformed without any additional setup.

Refer to the Graphite data source settings for more details.

View File

@ -0,0 +1,129 @@
---
aliases:
- /docs/grafana/latest/data-sources/graphite/template-variables/
description: Guide for using template variables when querying the Graphite data source
keywords:
- grafana
- graphite
- queries
- template
- variable
menuTitle: Template variables
title: Graphite template variables
weight: 300
---
# Graphite template variables
Instead of hard-coding details such as server, application, and sensor names in metric queries, you can use variables.
Grafana lists these variables in dropdown select boxes at the top of the dashboard to help you change the data displayed in your dashboard.
Grafana refers to such variables as template variables.
For an introduction to templating and template variables, refer to the [Templating]({{< relref "../../../dashboards/variables" >}}) and [Add and manage variables]({{< relref "../../../dashboards/variables/add-template-variables" >}}) documentation.
## Use tag variables
To create a variable using tag values, use the Grafana functions `tags` and `tag_values`.
| Query | Description |
| --------------------------------------- | -------------------------------------------------------------------------------------------------- |
| `tags()` | Returns all tags. |
| `tags(server=~backend\*)` | Returns only tags that occur in series matching the filter expression. |
| `tag_values(server)` | Returns tag values for the specified tag. |
| `tag_values(server, server=~backend\*)` | Returns filtered tag values that occur for the specified tag in series matching those expressions. |
You can use multiple filter expressions, and expressions can contain other variables. For example:
```
tag_values(server, server=~backend\*, app=~${apps:regex})
```
For details, refer to the [Graphite docs on the autocomplete API for tags](http://graphite.readthedocs.io/en/latest/tags.html#auto-complete-support).
### Use multi-value variables in tag queries
Multi-value variables in tag queries use the advanced formatting syntax for variables introduced in Grafana v5.0: `${var:regex}`.
Non-tag queries use the default glob formatting for multi-value variables.
#### Tag expression example
**Using regex formatting and the Equal Tilde operator, `=~`:**
```text
server=~${servers:regex}
```
For more information, refer to [Advanced variable format options]({{< relref "../../../dashboards/variables/variable-syntax#advanced-variable-format-options" >}}).
## Use other query variables
When you write the variable's query, use a metric find type of query.
For example, a query like `prod.servers.*` fills the variable with all possible values that exist in the wildcard position.
The results contain all possible values occurring only at the last level of the query.
To get full metric names matching the query, use the `expand` function: `expand(*.servers.*)`.
### Compare expanded and non-expanded metric search results
The expanded query returns the full names of matching metrics.
In combination with regular expressions, you can use it to extract any part of the metric name.
By contrast, a non-expanded query returns only the last part of the metric name, and doesn't let you extract other parts of metric names.
Given these example metrics:
- `prod.servers.001.cpu`
- `prod.servers.002.cpu`
- `test.servers.001.cpu`
These examples demonstrate how expanded and non-expanded queries can fetch specific parts of the metrics name:
| Non-expanded query | Results | Expanded query | Expanded results |
| ------------------ | ---------- | ------------------------- | ---------------------------------------------------------------- |
| `*` | prod, test | `expand(*)` | prod, test |
| `*.servers` | servers | `expand(*.servers)` | prod.servers, test.servers |
| `test.servers` | servers | `expand(test.servers)` | test.servers |
| `*.servers.*` | 001,002 | `expand(*.servers.*)` | prod.servers.001, prod.servers.002, test.servers.001 |
| `test.servers.*` | 001 | `expand(test.servers.*)` | test.servers.001 |
| `*.servers.*.cpu` | cpu | `expand(*.servers.*.cpu)` | prod.servers.001.cpu, prod.servers.002.cpu, test.servers.001.cpu |
The non-expanded query is the same as an expanded query, with a regex matching the last part of the name.
You can also create nested variables that use other variables in their definition.
For example, `apps.$app.servers.*` uses the variable `$app` in its query definition.
### Use `__searchFilter` to filter query variable results
> **Note:** Available in Grafana v6.5 and higher.
You can use `__searchFilter` in the query field to filter the query result based on what the user types in the dropdown select box.
The default value for `__searchFilter` is `*` if you haven't entered anything, and an empty string when used as part of a regular expression.
#### Search filter example
To use `__searchFilter` as part of the query field to enable searching for `server` while the user types in the dropdown select box:
Query
```bash
apps.$app.servers.$__searchFilter
```
TagValues
```bash
tag_values(server, server=~${__searchFilter:regex})
```
## Choose a variable syntax
![variable](/static/img/docs/v2/templated_variable_parameter.png)
The Graphite data source supports two variable syntaxes for use in the **Query** field:
- `$<varname>`, for example `apps.frontend.$server.requests.count`, which is easier to read and write but does not allow you to use a variable in the middle of a word.
- `${varname}`, for example `apps.frontend.${server}.requests.count`, to use in expressions like `my.server${serverNumber}.count`.
### Templated dashboard example
To view an example templated dashboard, refer to [Graphite Templated Dashboard](https://play.grafana.org/dashboard/db/graphite-templated-nested).

View File

@ -1,13 +1,17 @@
---
aliases:
- /docs/grafana/latest/datasources/influxdb/
- /docs/grafana/latest/features/datasources/influxdb/
- /docs/grafana/latest/datasources/influxdb/
- /docs/grafana/latest/datasources/influxdb/provision-influxdb/
- /docs/grafana/latest/data-sources/influxdb/
- /docs/grafana/latest/data-sources/influxdb/provision-influxdb/
description: Guide for using InfluxDB in Grafana
keywords:
- grafana
- influxdb
- guide
- flux
menuTitle: InfluxDB
title: InfluxDB data source
weight: 700
---
@ -16,45 +20,40 @@ weight: 700
{{< docs/shared "influxdb/intro.md" >}}
This topic explains options, variables, querying, and other options specific to this data source. Refer to [Add a data source]({{< relref "../add-a-data-source/" >}}) for instructions on how to add a data source to Grafana. Only users with the organization admin role can add data sources.
Grafana includes built-in support for InfluxDB.
This topic explains options, variables, querying, and other features specific to the InfluxDB data source, which include its [feature-rich code editor for queries and visual query builder]({{< relref "./query-editor/" >}}).
## Data source options
For instructions on how to add a data source to Grafana, refer to the [administration documentation]({{< relref "../../administration/data-source-management/" >}}).
Only users with the organization administrator role can add data sources.
Administrators can also [configure the data source via YAML]({{< relref "#provision-the-data-source" >}}) with Grafana's provisioning system.
To access data source settings, hover your mouse over the **Configuration** (gear) icon, then click **Data sources**, and then click the data source.
Once you've added the InfluxDB data source, you can [configure it]({{< relref "#configure-the-data-source" >}}) so that your Grafana instance's users can create queries in its [query editor]({{< relref "./query-editor/" >}}) when they [build dashboards]({{< relref "../../dashboards/build-dashboards/" >}}) and use [Explore]({{< relref "../../explore/" >}}).
InfluxDB data source options differ depending on which [query language](#query-languages) you select: InfluxQL or Flux.
## Configure the data source
> **Note:** Though not required, it's a good practice to append the language choice to the data source name. For example:
**To access the data source configuration page:**
- InfluxDB-InfluxQL
- InfluxDB-Flux
1. Hover the cursor over the **Configuration** (gear) icon.
1. Select **Data Sources**.
1. Select the InfluxDB data source.
### InfluxQL (classic InfluxDB query)
Set the data source's basic configuration options carefully:
These options apply if you are using the InfluxQL query language. If you are using Flux, refer to [Flux support in Grafana]({{< relref "influxdb-flux/" >}}).
| Name | Description |
| --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Name** | Sets the name you use to refer to the data source in panels and queries. We recommend something like `InfluxDB-InfluxQL`. |
| **Default** | Sets whether the data source is pre-selected for new panels. |
| **URL** | The HTTP protocol, IP address, and port of your InfluxDB API. InfluxDB's default API port is 8086. |
| **Min time interval** | _(Optional)_ Refer to [Min time interval]({{< relref "#min-time-interval" >}}). |
| **Max series** | _(Optional)_ Limits the number of series and tables that Grafana processes. Lower this number to prevent abuse, and increase it if you have many small time series and not all are shown. Defaults to 1,000. |
| Name | Description |
| ------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Name` | The data source name. This is how you refer to the data source in panels and queries. We recommend something like `InfluxDB-InfluxQL`. |
| `Default` | Default data source means that it will be pre-selected for new panels. |
| `URL` | The HTTP protocol, IP address and port of your InfluxDB API. InfluxDB API port is by default 8086. |
| `Access` | Direct browser access has been deprecated. Use “Server (default)” or the data source won't function. |
| `Allowed cookies` | Cookies that will be forwarded to the data source. All other cookies will be deleted. |
| `Database` | The ID of the bucket you want to query from, copied from the [Buckets page](https://docs.influxdata.com/influxdb/v2.0/organizations/buckets/view-buckets/) of the InfluxDB UI. |
| `User` | The username you use to sign into InfluxDB. |
| `Password` | The token you use to query the bucket above, copied from the [Tokens page](https://docs.influxdata.com/influxdb/v2.0/security/tokens/view-tokens/) of the InfluxDB UI. |
| `HTTP mode` | How to query the database (`GET` or `POST` HTTP verb). The `POST` verb allows heavy queries that would return an error using the `GET` verb. Default is `GET`. |
| `Min time interval` | (Optional) Refer to [Min time interval]({{< relref "#min-time-interval" >}}). |
| `Max series` | (Optional) Limits the number of series/tables that Grafana processes. Lower this number to prevent abuse, and increase it if you have lots of small time series and not all are shown. Defaults to 1000. |
You can also configure settings specific to the InfluxDB data source:
### Flux
### Min time interval
For information on data source settings and using Flux in Grafana, refer to [Flux support in Grafana]({{< relref "influxdb-flux/" >}}).
The **Min time interval** setting defines a lower limit for the auto group-by time interval.
#### Min time interval
A lower limit for the auto group by time interval. Recommended to be set to write frequency, for example `1m` if your data is written every minute.
This option can also be overridden/configured in a dashboard panel under data source options. It's important to note that this value _must_ be formatted as a number followed by a valid time identifier, e.g. `1m` (1 minute) or `30s` (30 seconds). The following time identifiers are supported:
This value _must_ be formatted as a number followed by a valid time identifier:
| Identifier | Description |
| ---------- | ----------- |
@ -67,91 +66,123 @@ This option can also be overridden/configured in a dashboard panel under data so
| `s` | second |
| `ms` | millisecond |
## Query languages
We recommend setting this value to match your InfluxDB write frequency.
For example, use `1m` if InfluxDB writes data every minute.
You can query InfluxDB using InfluxQL or Flux:
You can also override this setting in a dashboard panel under its data source options.
- [InfluxQL](https://docs.influxdata.com/influxdb/v1.8/query_language/explore-data/) is a SQL-like language for querying InfluxDB, with statements such as SELECT, FROM, WHERE, and GROUP BY that are familiar to SQL users. InfluxQL is available in InfluxDB 1.0 onwards.
- [Flux](https://docs.influxdata.com/influxdb/v2.0/query-data/get-started/) provides significantly broader functionality than InfluxQL, supporting not only queries, but built-in functions for data shaping, string manipulation, joining to non-InfluxDB data sources and more, but also processing time-series data. Its more similar to JavaScript with a functional style.
### Select a query language
To help you choose the best language for your needs, heres a comparison of [Flux vs InfluxQL](https://docs.influxdata.com/influxdb/v1.8/flux/flux-vs-influxql/), and [why InfluxData created Flux](https://www.influxdata.com/blog/why-were-building-flux-a-new-data-scripting-and-query-language/).
InfluxDB data source options differ depending on which query language you select:
## InfluxQL query editor
- [InfluxQL](https://docs.influxdata.com/influxdb/v1.8/query_language/explore-data/), a SQL-like language for querying InfluxDB, with statements such as SELECT, FROM, WHERE, and GROUP BY that are familiar to SQL users.
InfluxQL is available in InfluxDB 1.0 onwards.
- [Flux](https://docs.influxdata.com/influxdb/v2.0/query-data/get-started/), which provides significantly broader functionality than InfluxQL. It supports not only queries but also built-in functions for data shaping, string manipulation, and joining to non-InfluxDB data sources, but also processing time-series data.
It's similar to JavaScript with a functional style.
Enter edit mode by clicking the panel title and clicking **Edit**. The editor allows you to select metrics and tags.
To help choose the best language for your needs, refer to a [comparison of Flux vs InfluxQL](https://docs.influxdata.com/influxdb/v1.8/flux/flux-vs-influxql/) and [why InfluxData created Flux](https://www.influxdata.com/blog/why-were-building-flux-a-new-data-scripting-and-query-language/).
![InfluxQL query editor](/static/img/docs/influxdb/influxql-query-editor-8-0.png)
> **Note:** Though not required, we recommend that you append your query language choice to the data source's **Name** setting:
>
> - InfluxDB-InfluxQL
> - InfluxDB-Flux
### Filter data (WHERE)
### Configure InfluxQL
To add a tag filter, click the plus icon to the right of the `WHERE` condition. You can remove tag filters by clicking on the tag key and then selecting `--remove tag filter--`.
Configure these options if you select the InfluxQL (classic InfluxDB) query language:
**Regex matching**
| Name | Description |
| ------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Access** | Only Server access mode is functional. Direct browser access is deprecated. |
| **Allowed cookies** | Defines which cookies are forwarded to the data source. All other cookies are deleted. |
| **Database** | Sets the ID of the bucket to query. Copy this from the [Buckets page](https://docs.influxdata.com/influxdb/v2.0/organizations/buckets/view-buckets/) of the InfluxDB UI. |
| **User** | Sets the username to sign into InfluxDB. |
| **Password** | Defines the token you use to query the bucket defined in **Database**. Copy this from the [Tokens page](https://docs.influxdata.com/influxdb/v2.0/security/tokens/view-tokens/) of the InfluxDB UI. |
| **HTTP mode** | Sets the HTTP method used to query your data source. The POST verb allows for larger queries that would return an error using the GET verb. Defaults to GET. |
You can type in regex patterns for metric names or tag filter values. Be sure to wrap the regex pattern in forward slashes (`/`). Grafana automatically adjusts the filter tag condition to use the InfluxDB regex match condition operator (`=~`).
### Configure Flux
### Field and Aggregation functions
Configure these options if you select the Flux query language:
In the `SELECT` row you can specify what fields and functions you want to use. If you have a
group by time you need an aggregation function. Some functions like derivative require an aggregation function. The editor tries to simplify and unify this part of the query. For example:
| Name | Description |
| ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Organization** | The [Influx organization](https://v2.docs.influxdata.com/v2.0/organizations/) that will be used for Flux queries. This is also used to for the `v.organization` query macro. |
| **Token** | The authentication token used for Flux queries. With Influx 2.0, use the [influx authentication token to function](https://v2.docs.influxdata.com/v2.0/security/tokens/create-token/). For influx 1.8, the token is `username:password`. |
| **Default bucket** | _(Optional)_ The [Influx bucket](https://v2.docs.influxdata.com/v2.0/organizations/buckets/) that will be used for the `v.defaultBucket` macro in Flux queries. |
![](/static/img/docs/influxdb/select_editor.png)
### Provision the data source
The above generates the following InfluxDB `SELECT` clause:
You can define and configure the data source in YAML files as part of Grafana's provisioning system.
For more information about provisioning, and for available configuration options, refer to [Provisioning Grafana]({{< relref "../../administration/provisioning/#data-sources" >}}).
```sql
SELECT derivative(mean("value"), 10s) /10 AS "REQ/s" FROM ....
#### Provisioning examples
**InfluxDB 1.x example:**
```yaml
apiVersion: 1
datasources:
- name: InfluxDB_v1
type: influxdb
access: proxy
database: site
user: grafana
url: http://localhost:8086
jsonData:
httpMode: GET
secureJsonData:
password: grafana
```
**InfluxDB 2.x for Flux example:**
```yaml
apiVersion: 1
datasources:
- name: InfluxDB_v2_Flux
type: influxdb
access: proxy
url: http://localhost:8086
jsonData:
version: Flux
organization: organization
defaultBucket: bucket
tlsSkipVerify: true
secureJsonData:
token: token
```
**InfluxDB 2.x for InfluxQL example:**
```yaml
apiVersion: 1
datasources:
- name: InfluxDB_v2_InfluxQL
type: influxdb
access: proxy
url: http://localhost:8086
# This database should be mapped to a bucket
database: site
jsonData:
httpMode: GET
httpHeaderName1: 'Authorization'
secureJsonData:
httpHeaderValue1: 'Token <token>'
```
## Query the data source
The InfluxDB data source's query editor has two modes, InfluxQL and Flux, depending on your choice of query language in the [data source configuration]({{< relref "#configure-the-data-source" >}}):
For details, refer to the [query editor documentation]({{< relref "./query-editor/" >}}).
## Use template variables
Instead of hard-coding details such as server, application, and sensor names in metric queries, you can use variables.
Grafana lists these variables in dropdown select boxes at the top of the dashboard to help you change the data displayed in your dashboard.
Grafana refers to such variables as template variables.
For details, see the [template variables documentation]({{< relref "./template-variables/" >}}).
---
aliases:
- /docs/grafana/latest/datasources/influxdb/influxdb-flux/
description: Guide for Flux in Grafana
title: Flux support in Grafana
weight: 200
---
# Flux query language in Grafana
Grafana supports Flux running on InfluxDB 1.8+. See [1.8 compatibility](https://github.com/influxdata/influxdb-client-go/#influxdb-18-api-compatibility) for more information and connection details.
| Name | Description |
| ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Name` | The data source name. This is how you refer to the data source in panels and queries. We recommend something like `InfluxDB-Flux`. |
| `Default` | Default data source means that it will be pre-selected for new panels. |
| `URL` | The HTTP protocol, IP address and port of your InfluxDB API. InfluxDB 2.0 API port is by default 8086. |
| `Organization` | The [Influx organization](https://v2.docs.influxdata.com/v2.0/organizations/) that will be used for Flux queries. This is also used for the `v.organization` query macro. |
| `Token` | The authentication token used for Flux queries. With Influx 2.0, use the [influx authentication token to function](https://v2.docs.influxdata.com/v2.0/security/tokens/create-token/). For influx 1.8, the token is `username:password`. |
| `Default bucket` | (Optional) The [Influx bucket](https://v2.docs.influxdata.com/v2.0/organizations/buckets/) that will be used for the `v.defaultBucket` macro in Flux queries. |
| `Min time interval` | (Optional) Refer to [Min time interval]({{< relref "#min-time-interval" >}}). |
| `Max series` | (Optional) Limits the number of series/tables that Grafana processes. Lower this number to prevent abuse, and increase it if you have lots of small time series and not all are shown. Defaults to 1000. |
## Min time interval
A lower limit for the auto group by time interval. Recommended to be set to write frequency, for example `1m` if your data is written every minute.
This option can also be overridden/configured in a dashboard panel under data source options. It's important to note that this value **needs** to be formatted as a
number followed by a valid time identifier, e.g. `1m` (1 minute) or `30s` (30 seconds). The following time identifiers are supported:
| Identifier | Description |
| ---------- | ----------- |
| `y` | year |
| `M` | month |
| `w` | week |
| `d` | day |
| `h` | hour |
| `m` | minute |
| `s` | second |
| `ms` | millisecond |
You can use the [Flux query and scripting language](https://www.influxdata.com/products/flux/). Grafana's Flux query editor is a text editor for raw Flux queries with Macro support.
## Supported macros
The macros support copying and pasting from [Chronograf](https://www.influxdata.com/time-series-platform/chronograf/).
| Macro example | Description |
| ------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `v.timeRangeStart` | Will be replaced by the start of the currently active time selection. For example, _2020-06-11T13:31:00Z_ |
| `v.timeRangeStop` | Will be replaced by the end of the currently active time selection. For example, _2020-06-11T14:31:00Z_ |
| `v.windowPeriod` | Will be replaced with an interval string compatible with Flux that corresponds to Grafana's calculated interval based on the time range of the active time selection. For example, _5s_ |
| `v.defaultBucket` | Will be replaced with the data source configuration's "Default Bucket" setting |
| `v.organization` | Will be replaced with the data source configuration's "Organization" setting |
For example, the following query will be interpolated as the query that follows it, with interval and time period values changing according to the active time selection:
Grafana Flux query:
```flux
from(bucket: v.defaultBucket)
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "cpu" or r["_measurement"] == "swap")
|> filter(fn: (r) => r["_field"] == "usage_system" or r["_field"] == "free")
|> aggregateWindow(every: v.windowPeriod, fn: mean)
|> yield(name: "mean")
```
Interpolated query sent to InfluxDB:
```flux
from(bucket: "grafana")
|> range(start: 2020-06-11T13:59:07Z, stop: 2020-06-11T14:59:07Z)
|> filter(fn: (r) => r["_measurement"] == "cpu" or r["_measurement"] == "swap")
|> filter(fn: (r) => r["_field"] == "usage_system" or r["_field"] == "free")
|> aggregateWindow(every: 2s, fn: mean)
|> yield(name: "mean")
```
You can view the interpolated version of a query with the query inspector. For more information, refer to [Panel Inspector]({{< relref "../../panels-visualizations/panel-inspector/" >}}).
---
aliases:
- /docs/grafana/latest/datasources/influxdb/influxdb-templates/
description: Guide for templates in InfluxDB
title: InfluxDB templates
weight: 300
---
# InfluxDB templates
Instead of hard-coding things like server, application and sensor name in your metric queries you can use variables in their place.
For more information, refer to [Templates and variables]({{< relref "../../dashboards/variables/" >}}).
## Using variables in InfluxDB queries
There are two syntaxes:
`$<varname>` Example:
```sql
SELECT mean("value") FROM "logins" WHERE "hostname" =~ /^$host$/ AND $timeFilter GROUP BY time($__interval), "hostname"
```
`[[varname]]` Example:
```sql
SELECT mean("value") FROM "logins" WHERE "hostname" =~ /^[[host]]$/ AND $timeFilter GROUP BY time($__interval), "hostname"
```
Why two ways? The first syntax is easier to read and write but does not allow you to use a variable in the middle of a word. When the **Multi-value** or **Include all value** options are enabled, Grafana converts the labels from plain text to a regex-compatible string, which means you have to use `=~` instead of `=`.
Example dashboard:
[InfluxDB Templated Dashboard](https://play.grafana.org/dashboard/db/influxdb-templated)
## Query variables
If you add a query template variable, then you can write an InfluxDB exploration (metadata) query. These queries can return things like measurement names, key names or key values. For more information, refer to [Add query variable]({{< relref "../../dashboards/variables/add-template-variables/#add-a-query-variable" >}}).
For example, you can have a variable that contains all values for tag `hostname` if you specify a query like this in the query variable **Query**.
```sql
SHOW TAG VALUES WITH KEY = "hostname"
```
## Chained or nested variables
You can also create nested variables, sometimes called [chained variables]({{< relref "../../dashboards/variables/add-template-variables/#chained-variables" >}}).
For example, if you had another variable such as `region`, you could have the hosts variable show only hosts from the currently selected region with a query like this:
```sql
SHOW TAG VALUES WITH KEY = "hostname" WHERE region = '$region'
```
You can fetch key names for a given measurement.
```sql
SHOW TAG KEYS [FROM <measurement_name>]
```
If you have a variable with key names you can use this variable in a group by clause. This will allow you to change group by using the variable list at the top of the dashboard.
### Ad hoc filters variable
InfluxDB supports the special `Ad hoc filters` variable type. This variable allows you to specify any number of key/value filters on the fly. These filters are automatically applied to all your InfluxDB queries.
For more information, refer to [Add ad hoc filters]({{< relref "../../dashboards/variables/add-template-variables/#add-ad-hoc-filters" >}}).
---
aliases:
- /docs/grafana/latest/datasources/influxdb/provision-influxdb/
description: Guide for provisioning InfluxDB
title: Provision InfluxDB
weight: 400
---
# Provision InfluxDB
You can configure data sources using config files with Grafana's provisioning system. You can read more about how it works and all the settings you can set for data sources on the [provisioning docs page]({{< relref "../../administration/provisioning/#datasources" >}}).
Here are some provisioning examples for this data source.
## InfluxDB 1.x example
```yaml
apiVersion: 1
datasources:
- name: InfluxDB_v1
type: influxdb
access: proxy
database: site
user: grafana
url: http://localhost:8086
jsonData:
httpMode: GET
secureJsonData:
password: grafana
```
## InfluxDB 2.x for Flux example
```yaml
apiVersion: 1
datasources:
- name: InfluxDB_v2_Flux
type: influxdb
access: proxy
url: http://localhost:8086
secureJsonData:
token: token
jsonData:
version: Flux
organization: organization
defaultBucket: bucket
tlsSkipVerify: true
```
## InfluxDB 2.x for InfluxQL example
```yaml
apiVersion: 1
datasources:
- name: InfluxDB_v2_InfluxQL
type: influxdb
access: proxy
url: http://localhost:8086
# This database should be mapped to a bucket
database: site
jsonData:
httpMode: GET
httpHeaderName1: 'Authorization'
secureJsonData:
httpHeaderValue1: 'Token <token>'
```
---
aliases:
- /docs/grafana/latest/datasources/influxdb/influxdb-flux/
- /docs/grafana/latest/datasources/influxdb/query-editor/
- /docs/grafana/latest/data-sources/influxdb/query-editor/
description: Guide for using the InfluxDB data source's query editor in Grafana
title: InfluxDB query editor
weight: 200
---
# InfluxDB query editor
This topic explains querying specific to the InfluxDB data source.
For general documentation on querying data sources in Grafana, see [Query and transform data]({{< relref "../../../panels-visualizations/query-transform-data" >}}).
## Choose a query editing mode
The InfluxDB data source's query editor has two modes depending on your choice of query language in the [data source configuration]({{< relref "../#configure-the-data-source" >}}):
- [InfluxQL]({{< relref "#influxql-query-editor" >}})
- [Flux]({{< relref "#flux-query-editor" >}})
You also use the query editor to retrieve [log data]({{< relref "#query-logs" >}}) and [annotate]({{< relref "#apply-annotations" >}}) visualizations.
## InfluxQL query editor
The InfluxQL query editor helps you select metrics and tags to create InfluxQL queries.
**To enter edit mode:**
1. Click the panel title.
1. Click **Edit**.
![InfluxQL query editor](/static/img/docs/influxdb/influxql-query-editor-8-0.png)
### Filter data (WHERE)
To add a tag filter, click the plus icon to the right of the `WHERE` condition.
To remove tag filters, click the tag key, then select **--remove tag filter--**.
#### Match by regular expressions
You can enter regular expressions for metric names or tag filter values.
Wrap the regex pattern in forward slashes (`/`).
Grafana automatically adjusts the filter tag condition to use the InfluxDB regex match condition operator (`=~`).
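For example, here's a sketch of a query this can produce, assuming measurements named like `server_1` and `server_2`, and hosts whose names start with `web-`; both names are assumptions for the example:

```sql
SELECT mean("value") FROM /^server_[0-9]+$/ WHERE "host" =~ /^web-/ AND $timeFilter GROUP BY time($__interval)
```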
### Field and aggregation functions
In the `SELECT` row, you can specify which fields and functions to use.
If you have a group by time, you must have an aggregation function.
Some functions like `derivative` also require an aggregation function.
The editor helps you build this part of the query.
For example:
![](/static/img/docs/influxdb/select_editor.png)
This query editor input generates an InfluxDB `SELECT` clause:
```sql
SELECT derivative(mean("value"), 10s) /10 AS "REQ/s" FROM ....
```
**To select multiple fields:**
1. Click the plus button.
1. Select **Field > field** to add another `SELECT` clause.
You can also `SELECT` an asterisk (`*`) to select all fields.
### Group query results
To group results by a tag, define it in a "Group By".
**To group by a tag:**
1. Click the plus icon at the end of the GROUP BY row.
1. Select a tag from the dropdown that appears.
**To remove the "Group By":**
1. Click the tag.
1. Click the "x" icon.
### Text editor mode (RAW)
You can write raw InfluxQL queries by switching the editor mode.
However, be careful when writing queries manually.
If you use raw query mode, your query _must_ include at least `WHERE $timeFilter`.
Also, _always_ provide a group by time and an aggregation function.
Otherwise, InfluxDB can easily return hundreds of thousands of data points that can hang your browser.
**To switch to raw query mode:**
1. Click the hamburger icon.
1. Toggle **Switch editor mode**.
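For example, here's a minimal raw query that satisfies both rules; the `cpu` measurement and `usage_idle` field are assumptions based on a typical Telegraf setup:

```sql
SELECT mean("usage_idle") FROM "cpu" WHERE $timeFilter GROUP BY time($__interval) fill(null)
```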
### Alias patterns
| Alias pattern | Replaced with |
| ----------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `$m` | Measurement name. |
| `$measurement` | Measurement name. |
| `$1` - `$9` | Part of measurement name (if you separate your measurement name with dots). |
| `$col` | Column name. |
| `$tag_exampletag` | The value of the `exampletag` tag. The syntax is `$tag_yourTagName` and must start with `$tag_`. To use your tag as an alias in the ALIAS BY field, you must use the tag to group by in the query. |
You can also use `[[tag_hostname]]` pattern replacement syntax.
For example, entering the value `Host: [[tag_hostname]]` in the ALIAS BY field replaces it with the `hostname` tag value for each legend value.
An example legend value would be `Host: server1`.
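For instance, assuming a hypothetical `app.logins` measurement, entering `$2: $col` in the ALIAS BY field with the following query produces the legend value `logins: mean`:

```sql
SELECT mean("value") FROM "app.logins" WHERE $timeFilter GROUP BY time($__interval)
```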
## Flux query editor
Grafana supports Flux when running InfluxDB v1.8 and higher.
If your data source is [configured for Flux]({{< relref "../#configure-the-data-source" >}}), you can use the [Flux query and scripting language](https://www.influxdata.com/products/flux/) in the query editor, which serves as a text editor for raw Flux queries with macro support.
For more information and connection details, refer to [1.8 compatibility](https://github.com/influxdata/influxdb-client-go/#influxdb-18-api-compatibility).
### Use macros
You can enter macros in the query to replace them with values from Grafana's context.
Macros support copying and pasting from [Chronograf](https://www.influxdata.com/time-series-platform/chronograf/).
| Macro example | Replaced with |
| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `v.timeRangeStart` | The start of the currently active time selection, such as `2020-06-11T13:31:00Z`. |
| `v.timeRangeStop` | The end of the currently active time selection, such as `2020-06-11T14:31:00Z`. |
| `v.windowPeriod` | An interval string compatible with Flux that corresponds to Grafana's calculated interval based on the time range of the active time selection, such as `5s`. |
| `v.defaultBucket` | The data source configuration's "Default Bucket" setting. |
| `v.organization` | The data source configuration's "Organization" setting. |
For example, the query editor interpolates this query:
```flux
from(bucket: v.defaultBucket)
|> range(start: v.timeRangeStart, stop: v.timeRangeStop)
|> filter(fn: (r) => r["_measurement"] == "cpu" or r["_measurement"] == "swap")
|> filter(fn: (r) => r["_field"] == "usage_system" or r["_field"] == "free")
|> aggregateWindow(every: v.windowPeriod, fn: mean)
|> yield(name: "mean")
```
Into this query, which it sends to InfluxDB, with interval and time period values set according to the active time selection:
```flux
from(bucket: "grafana")
|> range(start: 2020-06-11T13:59:07Z, stop: 2020-06-11T14:59:07Z)
|> filter(fn: (r) => r["_measurement"] == "cpu" or r["_measurement"] == "swap")
|> filter(fn: (r) => r["_field"] == "usage_system" or r["_field"] == "free")
|> aggregateWindow(every: 2s, fn: mean)
|> yield(name: "mean")
```
To view the interpolated version of a query with the query inspector, refer to [Panel Inspector]({{< relref "../../../panels-visualizations/panel-inspector" >}}).
## Query logs
You can query and display log data from InfluxDB in [Explore]({{< relref "../../../explore" >}}) and with the [Logs panel]({{< relref "../../../panels-visualizations/visualizations/logs" >}}) for dashboards.
Select the InfluxDB data source, then enter a query to display your logs.
### Create log queries
The Logs Explorer next to the query field, accessed by the **Measurements/Fields** button, lists measurements and fields.
Choose the desired measurement that contains your log data, then choose which field to use to display the log message.
Once InfluxDB returns the result, the log panel lists log rows and displays a bar chart, where the x axis represents the time and the y axis represents the frequency/count.
### Filter search
To add a filter, click the plus icon to the right of the **Measurements/Fields** button, or next to a condition.
To remove tag filters, click the first select, then choose **--remove filter--**.
## Apply annotations
[Annotations]({{< relref "../../../dashboards/build-dashboards/annotate-visualizations" >}}) overlay rich event information on top of graphs.
You can add annotation queries in the Dashboard menu's Annotations view.
For InfluxDB, your query **must** include `WHERE $timeFilter`.
If you select only one column, you don't need to enter anything in the column mapping fields.
The **Tags** field's value can be a comma-separated string.
### Annotation query example
```sql
SELECT title, description from events WHERE $timeFilter ORDER BY time ASC
```
---
aliases:
- /docs/grafana/latest/datasources/influxdb/influxdb-templates/
- /docs/grafana/latest/data-sources/influxdb/influxdb-templates/
- /docs/grafana/latest/data-sources/influxdb/template-variables/
description: Guide for template variables in InfluxDB
keywords:
- grafana
- influxdb
- queries
- template
- variable
menuTitle: Template variables
title: InfluxDB template variables
weight: 300
---
# InfluxDB template variables
Instead of hard-coding details such as server, application, and sensor names in metric queries, you can use variables.
Grafana lists these variables in dropdown select boxes at the top of the dashboard to help you change the data displayed in your dashboard.
Grafana refers to such variables as template variables.
For an introduction to templating and template variables, refer to the [Templating]({{< relref "../../../dashboards/variables" >}}) and [Add and manage variables]({{< relref "../../../dashboards/variables/add-template-variables" >}}) documentation.
## Use query variables
If you add a query template variable, you can write an InfluxDB exploration (metadata) query.
These queries can return results like measurement names, key names, or key values.
For more information, refer to [Add query variable]({{< relref "../../../dashboards/variables/add-template-variables#add-a-query-variable" >}}).
For example, to create a variable that contains all values for tag `hostname`, specify a query like this in the query variable **Query**:
```sql
SHOW TAG VALUES WITH KEY = "hostname"
```
### Chain or nest variables
You can also create nested variables, sometimes called [chained variables]({{< relref "../../../dashboards/variables/add-template-variables#chained-variables" >}}).
For example, if you had a variable called `region`, you could have the `hosts` variable show only hosts from the selected region with a query like:
```sql
SHOW TAG VALUES WITH KEY = "hostname" WHERE region = '$region'
```
You can fetch key names for a given measurement:
```sql
SHOW TAG KEYS [FROM <measurement_name>]
```
If you have a variable with key names, you can use this variable in a group-by clause.
This helps you change group-by using the variable list at the top of the dashboard.
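For example, here's a sketch that assumes a key-name variable called `groupby`; selecting a different key from the dashboard's variable dropdown changes the grouping:

```sql
SELECT mean("value") FROM "logins" WHERE $timeFilter GROUP BY time($__interval), "$groupby"
```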
### Use ad hoc filters
InfluxDB supports the special **Ad hoc filters** variable type.
You can use this variable type to specify any number of key/value filters, and Grafana applies them automatically to all of your InfluxDB queries.
For more information, refer to [Add ad hoc filters]({{< relref "../../../dashboards/variables/add-template-variables#add-ad-hoc-filters" >}}).
## Choose a variable syntax
The InfluxDB data source supports two variable syntaxes for use in the **Query** field:
- `$<varname>`, which is easier to read and write but does not allow you to use a variable in the middle of a word.
```sql
SELECT mean("value") FROM "logins" WHERE "hostname" =~ /^$host$/ AND $timeFilter GROUP BY time($__interval), "hostname"
```
- `[[varname]]`
```sql
SELECT mean("value") FROM "logins" WHERE "hostname" =~ /^[[host]]$/ AND $timeFilter GROUP BY time($__interval), "hostname"
```
When you enable the **Multi-value** or **Include all value** options, Grafana converts the labels from plain text to a regex-compatible string, so you must use `=~` instead of `=`.
### Templated dashboard example
To view an example templated dashboard, refer to [InfluxDB Templated Dashboard](https://play.grafana.org/dashboard/db/influxdb-templated).
---
aliases:
- /docs/grafana/latest/datasources/jaeger/
- /docs/grafana/latest/features/datasources/jaeger/
description: Guide for using Jaeger in Grafana
keywords:
- grafana
- jaeger
- guide
- tracing
title: Jaeger
weight: 800
---
# Jaeger data source
Grafana ships with built-in support for Jaeger, which provides open source, end-to-end distributed tracing.
Just add it as a data source and you are ready to query your traces in [Explore]({{< relref "../explore/" >}}).
## Add data source
To access Jaeger settings, click the **Configuration** (gear) icon, then click **Data Sources** > **Jaeger**.
| Name | Description |
| ------------ | ---------------------------------------------------------------------- |
| `Name` | The data source name in panels, queries, and Explore. |
| `Default` | The pre-selected data source for a new panel. |
| `URL` | The URL of the Jaeger instance. For example, `http://localhost:16686`. |
| `Basic Auth` | Enable basic authentication for the Jaeger data source. |
| `User` | Specify a user name for basic authentication. |
| `Password` | Specify a password for basic authentication. |
### Trace to logs
> **Note:** This feature is available in Grafana 7.4+.
This is a configuration for the [trace to logs feature]({{< relref "../explore/trace-integration/" >}}). Select target data source (at this moment limited to Loki and Splunk \[logs\] data sources) and select which tags will be used in the logs query.
- **Data source -** Target data source.
- **Tags -** The tags that will be used in the logs query. Default is `'cluster', 'hostname', 'namespace', 'pod'`.
- **Map tag names -** When enabled, allows configuring how Jaeger tag names map to logs label names. For example, map `service.name` to `service`.
- **Span start time shift -** Shift in the start time for the logs query based on the span start time. In order to extend to the past, you need to use a negative value. Use time interval units like 5s, 1m, 3h. The default is 0.
- **Span end time shift -** Shift in the end time for the logs query based on the span end time. Time units can be used here, for example, 5s, 1m, 3h. The default is 0.
- **Filter by Trace ID -** Toggle to append the trace ID to the logs query.
- **Filter by Span ID -** Toggle to append the span ID to the logs query.
![Trace to logs settings](/static/img/docs/explore/trace-to-log-7-4.png 'Screenshot of the trace to logs settings')
### Trace to metrics
> **Note:** This feature is behind the `traceToMetrics` feature toggle.
To configure trace to metrics, select the target Prometheus data source and create any desired linked queries.
- **Data source -** Target data source.
- **Tags -** You can use tags in the linked queries. The key is the span attribute name. The optional value is the corresponding metric label name (for example, map `k8s.pod` to `pod`). You may interpolate these tags into your queries using the `$__tags` keyword.
Each linked query consists of:
- **Link Label -** (Optional) Descriptive label for the linked query.
- **Query -** Query that runs when navigating from a trace to the metrics data source. Interpolate tags using the `$__tags` keyword. For example, when you configure the query `requests_total{$__tags}` with the tags `k8s.pod=pod` and `cluster`, it results in `requests_total{pod="nginx-554b9", cluster="us-east-1"}`.
### Node Graph
This is a configuration for the beta Node Graph visualization. The Node Graph is shown after the trace view is loaded and is disabled by default.
- **Enable Node Graph -** Enables the Node Graph visualization.
### Span bar label
You can configure the span bar label. The span bar label allows you add additional information to the span bar row.
Select one of the following three options. The default selection is Duration.
- **None -** Do not show any additional information on the span bar row.
- **Duration -** Show the span duration on the span bar row.
- **Tag -** Show the span tag on the span bar row. Note: You will also need to specify the tag key to use to get the tag value. For example, `span.kind`.
## Query traces
You can query and display traces from Jaeger via [Explore]({{< relref "../explore/" >}}).
{{< figure src="/static/img/docs/explore/jaeger-search-form.png" class="docs-image--no-shadow" caption="Screenshot of the Jaeger query editor" >}}
You can query by trace ID or use the search form to find traces. To query by trace ID, select the TraceID from the Query type selector and insert the ID into the text input.
{{< figure src="/static/img/docs/explore/jaeger-trace-id.png" class="docs-image--no-shadow" caption="Screenshot of the Jaeger query editor with trace ID selected" >}}
To perform a search, set the query type selector to Search, then use the following fields to find traces:
- Service - Returns a list of services.
- Operation - Field gets populated once you select a service. It then lists the operations related to the selected service. Select `All` option to query all operations.
- Tags - Use values in the [logfmt](https://brandur.org/logfmt) format. For example `error=true db.statement="select * from User"`.
- Min Duration - Filter all traces with a duration higher than the set value. Possible values are `1.2s, 100ms, 500us`.
- Max Duration - Filter all traces with a duration lower than the set value. Possible values are `1.2s, 100ms, 500us`.
- Limit - Limits the number of traces returned.
## Upload JSON trace file
You can upload a JSON file that contains a single trace to visualize it. If the file has multiple traces then the first trace is used for visualization.
{{< figure src="/static/img/docs/explore/jaeger-upload-json.png" class="docs-image--no-shadow" caption="Screenshot of the Jaeger data source in explore with upload selected" >}}
Here is an example JSON:
```json
{
"data": [
{
"traceID": "2ee9739529395e31",
"spans": [
{
"traceID": "2ee9739529395e31",
"spanID": "2ee9739529395e31",
"flags": 1,
"operationName": "CAS",
"references": [],
"startTime": 1616095319593196,
"duration": 1004,
"tags": [
{
"key": "sampler.type",
"type": "string",
"value": "const"
}
],
"logs": [],
"processID": "p1",
"warnings": null
}
],
"processes": {
"p1": {
"serviceName": "loki-all",
"tags": [
{
"key": "jaeger.version",
"type": "string",
"value": "Go-2.25.0"
}
]
}
},
"warnings": null
}
],
"total": 0,
"limit": 0,
"offset": 0,
"errors": null
}
```
## Linking Trace ID from logs
You can link to Jaeger trace from logs in Loki by configuring a derived field with internal link. See the [Derived fields]({{< relref "loki/#derived-fields" >}}) section in the [Loki data source]({{< relref "loki/" >}}) documentation for details.
## Configure the data source with provisioning
You can set up the data source via configuration files with Grafana's provisioning system. Refer to [provisioning docs page]({{< relref "../administration/provisioning/#datasources" >}}) for more information on configuring various settings.
Here is an example with basic auth and trace-to-logs field.
```yaml
apiVersion: 1
datasources:
- name: Jaeger
type: jaeger
uid: jaeger-spectra
access: proxy
url: http://localhost:16686/
basicAuth: true
basicAuthUser: my_user
editable: true
isDefault: false
jsonData:
tracesToLogs:
# Field with internal link pointing to a logs data source in Grafana.
# datasourceUid value must match the `datasourceUid` value of the logs data source.
datasourceUid: 'loki'
tags: ['job', 'instance', 'pod', 'namespace']
mappedTags: [{ key: 'service.name', value: 'service' }]
mapTagNamesEnabled: false
spanStartTimeShift: '1h'
spanEndTimeShift: '1h'
filterByTraceID: false
filterBySpanID: false
tracesToMetrics:
datasourceUid: 'prom'
tags: [{ key: 'service.name', value: 'service' }, { key: 'job' }]
queries:
- name: 'Sample query'
query: 'sum(rate(tempo_spanmetrics_latency_bucket{$__tags}[5m]))'
secureJsonData:
basicAuthPassword: my_password
```
---
aliases:
- /docs/grafana/latest/features/datasources/jaeger/
- /docs/grafana/latest/datasources/jaeger/
- /docs/grafana/latest/data-sources/jaeger/
description: Guide for using Jaeger in Grafana
keywords:
- grafana
- jaeger
- guide
- tracing
menuTitle: Jaeger
title: Jaeger data source
weight: 800
---
# Jaeger data source
Grafana ships with built-in support for Jaeger, which provides open source, end-to-end distributed tracing.
This topic explains configuration and queries specific to the Jaeger data source.
For instructions on how to add a data source to Grafana, refer to the [administration documentation]({{< relref "../../administration/data-source-management" >}}).
Only users with the organization administrator role can add data sources.
Administrators can also [configure the data source via YAML]({{< relref "#provision-the-data-source" >}}) with Grafana's provisioning system.
You can also [upload a JSON trace file]({{< relref "#upload-a-json-trace-file" >}}), and [link a trace ID from logs]({{< relref "#link-a-trace-id-from-logs" >}}) in Loki.
## Configure the data source
**To access the data source configuration page:**
1. Hover the cursor over the **Configuration** (gear) icon.
1. Select **Data Sources**.
1. Select the Jaeger data source.
Set the data source's basic configuration options carefully:
| Name | Description |
| -------------- | ------------------------------------------------------------------------ |
| **Name** | Sets the name you use to refer to the data source in panels and queries. |
| **Default** | Defines whether this data source is pre-selected for new panels. |
| **URL** | Sets the URL of the Jaeger instance, such as `http://localhost:16686`. |
| **Basic Auth** | Enables basic authentication for the Jaeger data source. |
| **User** | Defines the user name for basic authentication. |
| **Password** | Defines the password for basic authentication. |
You can also configure settings specific to the Jaeger data source:
### Configure trace to logs
> **Note:** Available in Grafana v7.4 and higher.
The **Trace to logs** section configures the [trace to logs feature]({{< relref "../../explore/trace-integration/" >}}).
Select a target data source, limited to Loki and Splunk \[logs\] data sources, and which tags to use in the logs query.
{{< figure src="/static/img/docs/explore/trace-to-log-8-2.png" class="docs-image--no-shadow" caption="Screenshot of the trace to logs settings" >}}
| Name | Description |
| ------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Data source** | Sets the target data source. |
| **Tags** | Defines the tags to use in the logs query. Default is `'cluster', 'hostname', 'namespace', 'pod'`. |
| **Map tag names** | Enables configuring how Jaeger tag names map to logs label names. For example, map `service.name` to `service`. |
| **Span start time shift** | Shifts the start time for the logs query based on the span start time. To extend to the past, use a negative value. Use time interval units like `5s`, `1m`, `3h`. Default is `0`. |
| **Span end time shift** | Shifts the end time for the logs query based on the span end time. Use time interval units. Default is `0`. |
| **Filter by Trace ID** | Toggles whether to append the trace ID to the logs query. |
| **Filter by Span ID** | Toggles whether to append the span ID to the logs query. |
### Configure trace to metrics
> **Note:** This feature is behind the `traceToMetrics` [feature toggle]({{< relref "../../setup-grafana/configure-grafana#feature_toggles" >}}).
The **Trace to metrics** section configures the [trace to metrics feature](/blog/2022/08/18/new-in-grafana-9.1-trace-to-metrics-allows-users-to-navigate-from-a-trace-span-to-a-selected-data-source/).
Use the settings to select the target Prometheus data source, and create any desired linked queries.
| Setting name | Description |
| --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Data source** | Defines the target data source. |
| **Tags** | Defines the tags used in linked queries. The key sets the span attribute name, and the optional value sets the corresponding metric label name. For example, you can map `k8s.pod` to `pod`. To interpolate these tags into queries, use the `$__tags` keyword. |
Each linked query consists of:
- **Link Label:** _(Optional)_ Descriptive label for the linked query.
- **Query:** The query ran when navigating from a trace to the metrics data source.
Interpolate tags using the `$__tags` keyword.
For example, when you configure the query `requests_total{$__tags}` with the tags `k8s.pod=pod` and `cluster`, the result looks like `requests_total{pod="nginx-554b9", cluster="us-east-1"}`.
### Enable Node Graph
The **Node Graph** setting enables the beta [Node Graph visualization]({{< relref "../../panels-visualizations/visualizations/node-graph/" >}}), which is disabled by default.
Once enabled, Grafana displays the Node Graph after loading the trace view.
### Configure the span bar label
The **Span bar label** section helps you display additional information in the span bar row.
You can choose one of three options:
| Name | Description |
| ------------ | -------------------------------------------------------------------------------------------------------------------------------- |
| **None** | Adds nothing to the span bar row. |
| **Duration** | _(Default)_ Displays the span duration on the span bar row. |
| **Tag** | Displays the span tag on the span bar row. You must also specify which tag key to use to get the tag value, such as `span.kind`. |
### Provision the data source
You can define and configure the data source in YAML files as part of Grafana's provisioning system.
For more information about provisioning, and for available configuration options, refer to [Provisioning Grafana]({{< relref "../../administration/provisioning#data-sources" >}}).
#### Provisioning examples
**Using basic auth and the trace-to-logs field:**
```yaml
apiVersion: 1
datasources:
- name: Jaeger
type: jaeger
uid: jaeger-spectra
access: proxy
url: http://localhost:16686/
basicAuth: true
basicAuthUser: my_user
editable: true
isDefault: false
jsonData:
tracesToLogs:
# Field with internal link pointing to a logs data source in Grafana.
# datasourceUid value must match the datasourceUid value of the logs data source.
datasourceUid: 'loki'
tags: ['job', 'instance', 'pod', 'namespace']
mappedTags: [{ key: 'service.name', value: 'service' }]
mapTagNamesEnabled: false
spanStartTimeShift: '1h'
spanEndTimeShift: '1h'
filterByTraceID: false
filterBySpanID: false
tracesToMetrics:
datasourceUid: 'prom'
tags: [{ key: 'service.name', value: 'service' }, { key: 'job' }]
queries:
- name: 'Sample query'
query: 'sum(rate(tempo_spanmetrics_latency_bucket{$__tags}[5m]))'
secureJsonData:
basicAuthPassword: my_password
```
## Query traces
You can query and display traces from Jaeger via [Explore]({{< relref "../../explore" >}}).
{{< figure src="/static/img/docs/explore/jaeger-search-form.png" class="docs-image--no-shadow" caption="Screenshot of the Jaeger query editor" >}}
You can query by trace ID, or use the search form to find traces.
### Query by trace ID
{{< figure src="/static/img/docs/explore/jaeger-trace-id.png" class="docs-image--no-shadow" caption="Screenshot of the Jaeger query editor with trace ID selected" >}}
**To query by trace ID:**
1. Select **TraceID** from the **Query** type selector.
1. Insert the ID into the text input.
### Query by search
**To search for traces:**
1. Select **Search** from the **Query** type selector.
1. Fill out the search form:
| Name | Description |
| ---------------- | --------------------------------------------------------------------------------------------------------------------------------- |
| **Service** | Returns a list of services. |
| **Operation** | Populated when you select a service with related operations. Select `All` to query all operations. |
| **Tags** | Sets tags with values in the [logfmt](https://brandur.org/logfmt) format, such as `error=true db.statement="select * from User"`. |
| **Min Duration** | Filters all traces with a duration higher than the set value. Valid values are `1.2s, 100ms, 500us`. |
| **Max Duration** | Filters all traces with a duration lower than the set value. Valid values are `1.2s, 100ms, 500us`. |
| **Limit** | Limits the number of traces returned. |
## Upload a JSON trace file
You can upload a JSON file that contains a single trace and visualize it.
If the file has multiple traces, Grafana visualizes its first trace.
{{< figure src="/static/img/docs/explore/jaeger-upload-json.png" class="docs-image--no-shadow" caption="Screenshot of the Jaeger data source in explore with upload selected" >}}
### Trace JSON example
```json
{
"data": [
{
"traceID": "2ee9739529395e31",
"spans": [
{
"traceID": "2ee9739529395e31",
"spanID": "2ee9739529395e31",
"flags": 1,
"operationName": "CAS",
"references": [],
"startTime": 1616095319593196,
"duration": 1004,
"tags": [
{
"key": "sampler.type",
"type": "string",
"value": "const"
}
],
"logs": [],
"processID": "p1",
"warnings": null
}
],
"processes": {
"p1": {
"serviceName": "loki-all",
"tags": [
{
"key": "jaeger.version",
"type": "string",
"value": "Go-2.25.0"
}
]
}
},
"warnings": null
}
],
"total": 0,
"limit": 0,
"offset": 0,
"errors": null
}
```
## Link a trace ID from logs
You can link to a Jaeger trace from logs in [Loki](/docs/loki/latest/) by configuring a derived field with an internal link.
For details, refer to [Derived fields]({{< relref "../loki/#configure-derived-fields" >}}) section of the [Loki data source]({{< relref "../loki/" >}}) documentation.
---
aliases:
- /docs/grafana/latest/datasources/loki/
- /docs/grafana/latest/features/datasources/loki/
description: Guide for using Loki in Grafana
keywords:
- grafana
- loki
- logging
- guide
title: Loki
weight: 800
---
# Using Loki in Grafana
Grafana ships with built-in support for Loki, an open source log aggregation system by Grafana Labs. This topic explains options, variables, querying, and other options specific to this data source.
Add it as a data source and you are ready to build dashboards or query your log data in [Explore]({{< relref "../explore/" >}}). Refer to [Add a data source]({{< relref "add-a-data-source/" >}}) for instructions on how to add a data source to Grafana. Only users with the organization admin role can add data sources.
## Hosted Loki
You can run Loki on your own hardware or use [Grafana Cloud](https://grafana.com/products/cloud/features/#cloud-logs). The free forever plan includes Grafana, 50 GB of Loki logs, 10K Prometheus series, and more. [Create a free account to get started](https://grafana.com/auth/sign-up/create-user?pg=docs-grafana-loki&plcmt=in-text).
## Loki settings
To access Loki settings, click the **Configuration** (gear) icon, then click **Data Sources**, and then click the Loki data source.
| Name | Description |
| ----------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Name` | The data source name. This is how you refer to the data source in panels, queries, and Explore. |
| `Default` | Default data source that is pre-selected for new panels. |
| `URL` | URL of the Loki instance, e.g., `http://localhost:3100`. |
| `Allowed cookies` | Grafana Proxy deletes forwarded cookies by default. Specify cookies by name that should be forwarded to the data source. |
| `Maximum lines` | Upper limit for the number of log lines returned by Loki (default is 1000). Lower this limit if your browser is sluggish when displaying logs in Explore. |
> **Note:** To troubleshoot configuration and other issues, check the log file located at `/var/log/grafana/grafana.log` on Unix systems, or in `<grafana_install_dir>/data/log` on other platforms and manual installations.
### Derived fields
The Derived Fields configuration allows you to:
- Add fields parsed from the log message.
- Add a link that uses the value of the field.
For example, you can use this functionality to link to your tracing backend directly from your logs, or link to a user profile page if a userId is present in the log line. These links appear in the [log details]({{< relref "../explore/logs-integration/#labels-and-detected-fields" >}}).
> **Note:** Grafana Cloud users can request modifications to this feature by [opening a support ticket in the Cloud Portal](https://grafana.com/profile/org#support).
Each derived field consists of:
- **Name -** Shown in the log details as a label.
- **Regex -** A Regex pattern that runs on the log message and captures part of it as the value of the new field. Can only contain a single capture group.
- **URL/query -** If the link is external, then enter the full link URL. If the link is internal, then this input serves as a query for the target data source. In both cases, you can interpolate the value from the field with the `${__value.raw}` macro.
- **URL Label -** (Optional) Set a custom display label for the link. The link label defaults to the full external URL or name of the linked internal data source and is overridden by this setting.
- **Internal link -** Select if the link is internal or external. In case of internal link, a data source selector allows you to select the target data source. Only tracing data sources are supported.
You can use a debug section to see what your fields extract and how the URL is interpolated. Click **Show example log message** to show the text area where you can enter a log message.
{{< figure src="/static/img/docs/v75/loki_derived_fields_settings.png" class="docs-image--no-shadow" max-width="800px" caption="Screenshot of the derived fields debugging" >}}
The new field with the link shown in log details:
{{< figure src="/static/img/docs/explore/detected-fields-link-7-4.png" max-width="800px" caption="Detected fields link in Explore" >}}
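As an illustration, here's a minimal provisioning sketch that defines one derived field; the regex, URL, and `datasourceUid` values are assumptions for the example:

```yaml
apiVersion: 1

datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://localhost:3100
    jsonData:
      maxLines: 1000
      derivedFields:
        # Extracts a trace ID from log lines such as "traceID=abc123" and
        # links it to a tracing data source with the UID below (hypothetical).
        - name: TraceID
          matcherRegex: 'traceID=(\w+)'
          url: '$${__value.raw}'
          datasourceUid: my-jaeger-uid
```

The doubled `$$` escapes the `$` so that Grafana's provisioning file interpolation leaves the `${__value.raw}` macro intact.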
## Loki query editor
The Loki query editor has two distinct modes that you can switch between. See the docs for each mode below.

At the top of the editor, select **Run queries** to run a query. Select the **Builder | Code** tabs to switch between the editor modes. If the query editor is in Builder mode, there are additional elements, explained in the Builder section.

> **Note:** In Explore, to run Loki queries, select **Run query**.

Each mode is synchronized with the other, so you can switch between them without losing your work, although there are some limitations. Builder mode doesn't yet support some more complex queries. If you try to switch from Code to Builder with such a query, the editor shows a popup explaining that you can lose parts of the query, and you can decide whether you still want to continue to Builder mode.
### Code mode
Code mode allows you to write raw queries in a textual editor. It implements autocomplete features and syntax highlighting to help with writing complex queries. In addition, it also contains a **Log browser** to further help you write queries (see below).
For more information about Loki query language, refer to the [Loki documentation](https://grafana.com/docs/loki/latest/logql/).
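For example, here are a logs query and a metrics variant of it; both are sketches that assume your logs carry a `job` label with the value `grafana`:

```logql
# Logs query: lines from the grafana job that contain "error", parsed with logfmt
{job="grafana"} |= "error" | logfmt

# Metrics query: per-second rate of matching lines over five-minute windows
rate({job="grafana"} |= "error" [5m])
```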
#### Autocomplete
Autocomplete kicks in automatically while you type. It can suggest static items such as functions, aggregations, and parsers, as well as dynamic items like labels. The autocomplete dropdown also shows documentation for the suggested items, either static documentation or dynamic metric documentation where available.
#### Log browser
With the Loki log browser you can easily navigate through your list of labels and values and construct the query of your choice. The log browser uses multi-step selection:
1. Choose the labels you would like to consider for your search.
2. Search for the values of the selected labels. The search field supports fuzzy search. The log browser also supports faceting, so it shows only possible label combinations.
3. Choose the type of query: a logs query or a rate metrics query. You can also validate the selector.
{{< figure src="/static/img/docs/v75/loki_log_browser.png" class="docs-image--no-shadow" max-width="800px" caption="Screenshot of the log browser for Loki" >}}
#### Options
| Name | Description |
| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Type`       | Choose the type of query to run. The `instant` type queries against a single point in time; we use the "To" time from the time range. The `range` type queries over the selected range of time. |
| `Line limit` | Upper limit for the number of log lines returned by a query. The default is the Maximum lines limit set in Loki settings. |
| `Legend` | Available only in Dashboard. Controls the name of the time series, using name or pattern. For example `{{hostname}}` is replaced with the label value for the label `hostname`. |
| `Resolution` | Resolution 1/1 sets step parameter of Loki metrics range queries such that each pixel corresponds to one data point. For better performance, lower resolutions can be picked. 1/2 only retrieves a data point for every other pixel, and 1/10 retrieves one data point per 10 pixels. |
### Builder mode
#### Toolbar
In addition to the `Run query` button and mode switcher, Builder mode provides these additional elements:
| Name | Description |
| --------------------- | --------------------------------------------------------------------------------------------------------------------------------- |
| Kick start your query | A list of useful operation patterns that can be used to quickly add multiple operations to your query to achieve a specific goal. |
| Explain | Toggle to show a step by step explanation of all query parts and the operations. |
| Raw query | Toggle to show raw query generated by the builder that will be sent to Loki instance. |
#### Labels selector
Select desired labels and their values from the dropdown list. When a label is selected, the available values are fetched from the server. Use the `+` button to add more labels. Use the `x` button to remove a label.
#### Operations
Use the `+ Operations` button to add an operation to your query. Operations are grouped into sections for easier navigation. While the operations dropdown is open, type into the search input to search and filter the operations list.
Operations in a query are shown as boxes in the operations section. Each has a header with a name and additional action buttons. Hover over the operation header to show the action buttons. Click the `v` button to quickly replace the operation with a different one of the same type. Click the `info` button to open the operation's description tooltip. Click the `x` button to remove the operation.
An operation can have additional parameters under the operation header. See the operation description or the Loki docs for more details about each operation.
Some operations make sense only in a specific order; if adding an operation would result in a nonsensical query, the operation is added to the correct place. To order operations manually, drag the operation box by the operation name and drop it in the appropriate place.
##### Hints
In some cases the query editor can detect which operations would be most appropriate for a selected log stream. In such cases it shows a hint next to the `+ Operations` button. Click the hint to add the operations to your query.
#### Explain mode
Explain mode helps with understanding the query. It shows a step-by-step explanation of all query parts and operations.
#### Raw query
This section is shown only if the `Raw query` switch in the query editor's top toolbar is set to `on`. It shows the raw query that the query editor creates and executes.
## Querying with Loki
There are two types of LogQL queries:
- Log queries
- Metric queries
### Log queries
Loki log queries return the contents of the log lines. Querying and displaying log data from Loki is available via [Explore]({{< relref "../explore/" >}}), and with the [logs panel]({{< relref "../panels-visualizations/visualizations/logs/" >}}) in dashboards. Select the Loki data source, and then enter a LogQL query to display your logs. For more information about log queries and LogQL, refer to the [Loki log queries documentation](https://grafana.com/docs/loki/latest/logql/log_queries/).
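For example, a log query usually combines a log stream selector with one or more line filter expressions. The following sketch, with an illustrative `app` label, returns only the log lines that contain the string "error":

```
{app="nginx"} |= "error"
```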
#### Log context
When using a search expression as detailed above, you can retrieve the context surrounding your filtered results.
Click the `Show Context` link on a filtered row to investigate the log messages that came before and after the log message you're interested in.
#### Live tailing
Loki supports live tailing, which displays logs in real time. This feature is supported in [Explore]({{< relref "../explore/#loki-specific-features" >}}).
Note that live tailing relies on two WebSocket connections: one between the browser and the Grafana server, and another between the Grafana server and the Loki server. If you run any reverse proxies, configure them accordingly. The following example for Apache2 can be used for proxying between the browser and the Grafana server:
```
ProxyPassMatch "^/(api/datasources/proxy/\d+/loki/api/v1/tail)" "ws://127.0.0.1:3000/$1"
```
The following example shows a basic NGINX proxy configuration. It assumes that the Grafana server is available at `http://localhost:3000/`, the Loki server is running locally without a proxy, and your external site uses HTTPS. If you also host Loki behind an NGINX proxy, then you might want to repeat the following configuration for Loki as well.
In the `http` section of NGINX configuration, add the following map definition:
```
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
```
In your `server` section, add the following configuration:
```
location ~ /(api/datasources/proxy/\d+/loki/api/v1/tail) {
proxy_pass http://localhost:3000$request_uri;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto "https";
proxy_set_header Connection $connection_upgrade;
proxy_set_header Upgrade $http_upgrade;
}
location / {
proxy_pass http://localhost:3000/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto "https";
}
```
> **Note:** This feature is only available in Grafana v6.3+.
### Metric queries
LogQL supports wrapping a log query with functions that allow for creating metrics out of the logs. For more information about metric queries, refer to the [Loki metric queries documentation](https://grafana.com/docs/loki/latest/logql/metric_queries/).
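For example, the following sketch wraps a filtered log query in LogQL's `rate()` function to compute the per-second rate of matching log lines over five-minute windows. The `app` label is illustrative:

```
rate({app="nginx"} |= "error" [5m])
```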
## Templating
Instead of hard-coding things like server, application and sensor name in your metric queries, you can use variables in their place. Variables are shown as drop-down select boxes at the top of the dashboard. These drop-down boxes make it easy to change the data being displayed in your dashboard.
Check out the [Templating]({{< relref "../dashboards/variables" >}}) documentation for an introduction to the templating feature and the different types of template variables.
## Query variable
A variable of the type _Query_ allows you to query Loki for a list of labels or label values. The Loki data source plugin
provides a form to select the type of values that can be expected for a given variable.
The form has these options:
| Query Type | Label | Stream Selector | Description |
| ------------ | ------------ | --------------------- | -------------------------------------------------------------------------------------- |
| Label names | Not required | Not required | Returns a list of label names. |
| Label values | `label` | | Returns a list of label values for the `label`. |
| Label values | `label` | `log stream selector` | Returns a list of label values for the `label` in the specified `log stream selector`. |
### Ad hoc filters variable
Loki supports the special ad hoc filters variable type. It allows you to specify any number of label/value filters on the fly. These filters are automatically applied to all your Loki queries.
### Using interval and range variables
You can use some global built-in variables in query variables: `$__interval`, `$__interval_ms`, `$__range`, `$__range_s`, and `$__range_ms`. For more information, refer to [Global built-in variables]({{< relref "../dashboards/variables/add-template-variables/#global-variables/" >}}).
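For example, a metric query can use `$__interval` as its range window so that the aggregation adapts to the dashboard's time range and resolution. The `app` label here is illustrative:

```
sum(rate({app="nginx"} [$__interval]))
```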
## Annotations
You can use any non-metric Loki query as a source for [annotations]({{< relref "../dashboards/build-dashboards/annotate-visualizations" >}}). Log content will be used as annotation text and your log stream labels as tags, so there is no need for additional mapping.
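For example, an annotation query that marks deployment events on a graph might look like the following sketch; the label and filter string are illustrative:

```
{app="deployer"} |= "deployment finished"
```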
## Configure the data source with provisioning
You can set up the data source via config files with Grafana's provisioning system.
You can read more about how it works and all the settings you can set for data sources on the [provisioning docs page]({{< relref "../administration/provisioning/#datasources" >}}).
Here is an example:
```yaml
apiVersion: 1
datasources:
- name: Loki
type: loki
access: proxy
url: http://localhost:3100
jsonData:
maxLines: 1000
```
Here's another example with basic auth and a derived field. Keep in mind that the `$` character needs to be escaped in YAML values because it's used to interpolate environment variables:
```yaml
apiVersion: 1
datasources:
- name: Loki
type: loki
access: proxy
url: http://localhost:3100
basicAuth: true
basicAuthUser: my_user
jsonData:
maxLines: 1000
derivedFields:
# Field with internal link pointing to data source in Grafana.
# Right now, Grafana supports only Jaeger and Zipkin data sources as link targets.
# datasourceUid value can be anything, but it should be unique across all defined data source uids.
- datasourceUid: my_jaeger_uid
matcherRegex: "traceID=(\\w+)"
name: TraceID
# url will be interpreted as query for the datasource
url: '$${__value.raw}'
# Field with external link.
- matcherRegex: "traceID=(\\w+)"
name: TraceID
url: 'http://localhost:16686/trace/$${__value.raw}'
secureJsonData:
basicAuthPassword: test_password
```
Here's an example of a Jaeger data source corresponding to the above example. Note that the Jaeger `uid` value must match the Loki `datasourceUid` value.
```yaml
datasources:
- name: Jaeger
type: jaeger
url: http://jaeger-tracing-query:16686/
access: proxy
# UID should match the datasourceUid in derivedFields.
uid: my_jaeger_uid
```

@ -0,0 +1,160 @@
---
aliases:
- /docs/grafana/latest/features/datasources/loki/
- /docs/grafana/latest/datasources/loki/
- /docs/grafana/latest/data-sources/loki/
description: Guide for using Loki in Grafana
keywords:
- grafana
- loki
- logging
- guide
menuTitle: Loki
title: Loki data source
weight: 800
---
# Loki data source
Grafana ships with built-in support for [Loki](/docs/loki/latest/), an open-source log aggregation system by Grafana Labs.
This topic explains configuration and querying specific to the Loki data source.
For instructions on how to add a data source to Grafana, refer to the [administration documentation]({{< relref "../../administration/data-source-management/" >}}).
Only users with the organization administrator role can add data sources.
Administrators can also [configure the data source via YAML]({{< relref "#provision-the-data-source" >}}) with Grafana's provisioning system.
Once you've added the Loki data source, you can [configure it]({{< relref "#configure-the-data-source" >}}) so that your Grafana instance's users can create queries in its [query editor]({{< relref "./query-editor/" >}}) when they [build dashboards]({{< relref "../../dashboards/build-dashboards/" >}}), use [Explore]({{< relref "../../explore/" >}}), and [annotate visualizations]({{< relref "./query-editor/#apply-annotations" >}}).
## Configure the data source
**To access the data source configuration page:**
1. Hover the cursor over the **Configuration** (gear) icon.
1. Select **Data Sources**.
1. Select the Loki data source.
Set the data source's basic configuration options carefully:
| Name | Description |
| ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Name** | Sets the name you use to refer to the data source in panels and queries. |
| **Default** | Sets the data source that's pre-selected for new panels. |
| **URL** | Sets the HTTP protocol, IP, and port of your Loki instance, such as `http://localhost:3100`. |
| **Allowed cookies** | Defines which cookies are forwarded to the data source. Grafana Proxy deletes all other cookies. |
| **Maximum lines** | Sets the upper limit for the number of log lines returned by Loki. Defaults to 1,000. Lower this limit if your browser is sluggish when displaying logs in Explore. |
> **Note:** To troubleshoot configuration and other issues, check the log file located at `/var/log/grafana/grafana.log` on Unix systems, or in `<grafana_install_dir>/data/log` on other platforms and manual installations.
### Configure derived fields
The **Derived Fields** configuration helps you:
- Add fields parsed from the log message
- Add a link that uses the value of the field
For example, you can link to your tracing backend directly from your logs, or link to a user profile page if the log line contains a corresponding userId.
These links appear in the [log details]({{< relref "../../explore/logs-integration/#labels-and-detected-fields" >}}).
> **Note:** If you use Grafana Cloud, you can request modifications to this feature by [opening a support ticket in the Cloud Portal](/profile/org#support).
Each derived field consists of:
| Field name | Description |
| ----------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Name** | Sets the field name. Displayed as a label in the log details. |
| **Regex** | Defines a regular expression to evaluate on the log message and capture part of it as the value of the new field. Can contain only one capture group. |
| **URL/query** | Sets the full link URL if the link is external, or a query for the target data source if the link is internal. You can interpolate the value from the field with the `${__value.raw}` macro. |
| **URL Label** | _(Optional)_ Sets a custom display label for the link. This setting overrides the link label, which defaults to the full external URL or name of the linked internal data source. |
| **Internal link** | Defines whether the link is internal or external. For internal links, you can select the target data source from a selector. This supports only tracing data sources. |
#### Troubleshoot interpolation
You can use a debug section to see what your fields extract and how the URL is interpolated.
Select **Show example log message** to display a text area where you can enter a log message.
{{< figure src="/static/img/docs/v75/loki_derived_fields_settings.png" class="docs-image--no-shadow" max-width="800px" caption="Screenshot of the derived fields debugging" >}}
The new field with the link shown in log details:
{{< figure src="/static/img/docs/explore/detected-fields-link-7-4.png" max-width="800px" caption="Detected fields link in Explore" >}}
### Provision the data source
You can define and configure the data source in YAML files as part of Grafana's provisioning system.
For more information about provisioning, and for available configuration options, refer to [Provisioning Grafana]({{< relref "../../administration/provisioning/#data-sources" >}}).
#### Provisioning examples
```yaml
apiVersion: 1
datasources:
- name: Loki
type: loki
access: proxy
url: http://localhost:3100
jsonData:
maxLines: 1000
```
**Using basic authorization and a derived field:**
You must escape the dollar (`$`) character in YAML values because it can be used to interpolate environment variables:
```yaml
apiVersion: 1
datasources:
- name: Loki
type: loki
access: proxy
url: http://localhost:3100
basicAuth: true
basicAuthUser: my_user
jsonData:
maxLines: 1000
derivedFields:
# Field with internal link pointing to data source in Grafana.
# Right now, Grafana supports only Jaeger and Zipkin data sources as link targets.
# datasourceUid value can be anything, but it should be unique across all defined data source uids.
- datasourceUid: my_jaeger_uid
matcherRegex: "traceID=(\\w+)"
name: TraceID
# url will be interpreted as query for the datasource
url: '$${__value.raw}'
# Field with external link.
- matcherRegex: "traceID=(\\w+)"
name: TraceID
url: 'http://localhost:16686/trace/$${__value.raw}'
secureJsonData:
basicAuthPassword: test_password
```
**Using a Jaeger data source:**
In this example, the Jaeger data source's `uid` value should match the Loki data source's `datasourceUid` value.
```yaml
datasources:
- name: Jaeger
type: jaeger
url: http://jaeger-tracing-query:16686/
access: proxy
# UID should match the datasourceUid in derivedFields.
uid: my_jaeger_uid
```
## Query the data source
The Loki data source's query editor helps you create log and metric queries that use Loki's query language, [LogQL](/docs/loki/latest/logql/).
For details, refer to the [query editor documentation]({{< relref "./query-editor/" >}}).
## Use template variables
Instead of hard-coding details such as server, application, and sensor names in metric queries, you can use variables.
Grafana lists these variables in dropdown select boxes at the top of the dashboard to help you change the data displayed in your dashboard.
Grafana refers to such variables as template variables.
For details, see the [template variables documentation]({{< relref "./template-variables/" >}}).

@ -0,0 +1,224 @@
---
aliases:
- /docs/grafana/latest/data-sources/loki/query-editor/
description: Guide for using the Loki data source's query editor
keywords:
- grafana
- loki
- logs
- queries
menuTitle: Query editor
title: Loki query editor
weight: 300
---
# Loki query editor
The Loki data source's query editor helps you create [log]({{< relref "#create-a-logs-query" >}}) and [metric]({{< relref "#create-a-metrics-query" >}}) queries that use Loki's query language, [LogQL](/docs/loki/latest/logql/).
This topic explains querying specific to the Loki data source.
For general documentation on querying data sources in Grafana, see [Query and transform data]({{< relref "../../../panels-visualizations/query-transform-data" >}}).
## Choose a query editing mode
You can switch the Loki query editor between two modes:
- [Code mode]({{< relref "#code-mode" >}}), which provides a feature-rich editor for writing queries
- [Builder mode]({{< relref "#builder-mode" >}}), which provides a visual query designer
To switch between the editor modes, select the corresponding **Builder** and **Code** tabs.
To run a query, select **Run queries** located at the top of the editor.
> **Note:** To run Loki queries in [Explore]({{< relref "../../../explore/" >}}), select **Run query**.
Each mode is synchronized with the other modes, so you can switch between them without losing your work, although there are some limitations.
Builder mode doesn't yet support some complex queries.
When you switch from Code mode to Builder mode with such a query, the editor displays a popup that explains how you might lose parts of the query if you continue.
You can then decide whether you still want to switch to Builder mode.
You can also augment queries by using [template variables]({{< relref "./template-variables/" >}}).
## Code mode
In **Code mode**, you can write complex queries using a text editor with autocompletion features and syntax highlighting.
It also contains a [log browser]({{< relref "#log-browser" >}}) to further help you write queries.
For more information about Loki's query language, refer to the [Loki documentation](/docs/loki/latest/logql/).
### Use autocompletion
Code mode's autocompletion feature works automatically while typing.
The query editor can autocomplete static functions, aggregations, and keywords, and also dynamic items like labels.
The autocompletion dropdown includes documentation for the suggested items where available.
### Log browser
You can use the Loki log browser to navigate through your labels and values, and build queries.
**To navigate Loki and build a query:**
1. Choose labels to locate.
1. Search for the values of your selected labels.
The search field supports fuzzy search, and the log browser also supports faceting to list only possible label combinations.
1. Choose a query type between [**logs query**]({{< relref "#create-a-log-query" >}}) and [**rate metrics query**]({{< relref "#create-a-metric-query" >}}).
You can also validate the selector.
{{< figure src="/static/img/docs/v75/loki_log_browser.png" class="docs-image--no-shadow" max-width="800px" caption="The Loki log browser" >}}
### Configure query settings
| Name | Description |
| -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Type** | Selects the query type to run. The `instant` type queries against a single point in time. We use the "To" time from the time range. The `range` type queries over the selected range of time. |
| **Line limit** | Defines the upper limit for the number of log lines returned by a query. The default is Loki's configured maximum lines limit. |
| **Legend** | _(Available only in a dashboard)_ Controls the time series name, using a name or pattern. For example, `{{hostname}}` is replaced with the label value for the label `hostname`. |
| **Resolution** | Sets the step parameter of Loki metrics range queries. With a resolution of `1/1`, each pixel corresponds to one data point. `1/2` retrieves one data point for every other pixel, `1/10` retrieves one data point per 10 pixels, and so on. Lower resolutions perform better. |
## Builder mode
Use Builder mode to visually construct queries, without needing to manually enter LogQL.
### Review toolbar features
In addition to the **Run query** button and mode switcher, Builder mode provides additional elements:
| Name | Description |
| ------------------------- | ----------------------------------------------------------------------------------------- |
| **Kick start your query** | A list of useful operation patterns you can use to add multiple operations to your query. |
| **Explain** | Displays a step-by-step explanation of all query components and operations. |
| **Raw query** | Displays the raw LogQL query that the Builder would send to Loki. |
### Use the Labels selector
Select labels and their values from the dropdown list.
When you select a label, Grafana retrieves available values from the server.
Use the `+` button to add a label and the `x` button to remove a label.
### Operations
Select the `+ Operations` button to add operations to your query.
The query editor groups operations into related sections, and you can type while the operations dropdown is open to search and filter the list.
The query editor displays a query's operations as boxes in the operations section.
Each operation's header displays its name, and additional action buttons appear when you hover your cursor over the header:
| Button | Action |
| ------ | ----------------------------------------------------------------- |
| `v`    | Replaces the operation with a different operation of the same type. |
| `info` | Opens the operation's description tooltip. |
| `x` | Removes the operation. |
Some operations have additional parameters under the operation header.
For details about each operation, use the `info` button to view the operation's description, or refer to the [Loki documentation](/docs/loki/latest/operations/).
Some operations make sense only when used in a specific order.
If adding an operation would result in a nonsensical query, the query editor adds the operation to the correct place.
To re-order operations manually, drag the operation box by its name and drop it into the desired place.
#### Hints
In some cases the query editor can detect which operations would be most appropriate for a selected log stream. In such cases it shows a hint next to the `+ Operations` button. Click the hint to add the operations to your query.
### Explain mode
Explain mode helps with understanding the query. It shows a step-by-step explanation of all query parts and operations.
### Raw query
This section is shown only if the `Raw query` switch in the query editor's top toolbar is set to `on`. It shows the raw query that the query editor creates and executes.
There are two types of LogQL queries:
- Log queries
- Metric queries
## Create a log query
Loki log queries return the contents of the log lines.
You can query and display log data from Loki via [Explore]({{< relref "../../../explore" >}}), and with the [Logs panel]({{< relref "../../../panels-visualizations/visualizations/logs" >}}) in dashboards.
To display the results of a log query, select the Loki data source, then enter a LogQL query.
For more information about log queries and LogQL, refer to the [Loki log queries documentation](/docs/loki/latest/logql/log_queries/).
### Show log context
When using a search expression as detailed above, you can retrieve the context surrounding your filtered results.
Click the `Show Context` link on a filtered row to investigate the log messages that came before and after the log message you're interested in.
### Tail live logs
Loki supports live tailing of logs in real-time in [Explore]({{< relref "../../../explore#loki-specific-features" >}}).
Live tailing relies on two WebSocket connections: one between the browser and the Grafana server, and another between the Grafana server and the Loki server.
#### Proxying examples
If you use reverse proxies, configure them accordingly to use live tailing:
**Using Apache2 for proxying between the browser and the Grafana server:**
```
ProxyPassMatch "^/(api/datasources/proxy/\d+/loki/api/v1/tail)" "ws://127.0.0.1:3000/$1"
```
**Using NGINX:**
This example provides a basic NGINX proxy configuration.
It assumes that the Grafana server is available at `http://localhost:3000/`, the Loki server is running locally without proxy, and your external site uses HTTPS.
If you also host Loki behind an NGINX proxy, repeat the following configuration for Loki.
In the `http` section of NGINX configuration, add the following map definition:
```
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
```
In your `server` section, add the following configuration:
```
location ~ /(api/datasources/proxy/\d+/loki/api/v1/tail) {
proxy_pass http://localhost:3000$request_uri;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto "https";
proxy_set_header Connection $connection_upgrade;
proxy_set_header Upgrade $http_upgrade;
}
location / {
proxy_pass http://localhost:3000/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-for $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto "https";
}
```
> **Note:** Available in Grafana v6.3 and higher.
## Create a metric query
You can use LogQL to wrap a log query with functions that create metrics from your logs.
For more information about metric queries, refer to the [Loki metric queries documentation](/docs/loki/latest/logql/metric_queries/).
## Apply annotations
[Annotations]({{< relref "../../../dashboards/build-dashboards/annotate-visualizations" >}}) overlay rich event information on top of graphs.
You can add annotation queries in the Dashboard menu's Annotations view.
You can use any non-metric Loki query as a source for annotations.
Grafana automatically uses log content as annotation text and your log stream labels as tags.
You don't need to create any additional mapping.

@ -0,0 +1,49 @@
---
aliases:
- /docs/grafana/latest/data-sources/loki/template-variables/
description: Guide for using template variables when querying the Loki data source
keywords:
- grafana
- loki
- logs
- queries
- template
- variable
menuTitle: Template variables
title: Loki template variables
weight: 300
---
# Loki template variables
Instead of hard-coding details such as server, application, and sensor names in metric queries, you can use variables.
Grafana lists these variables in dropdown select boxes at the top of the dashboard to help you change the data displayed in your dashboard.
Grafana refers to such variables as template variables.
For an introduction to templating and template variables, refer to the [Templating]({{< relref "../../../dashboards/variables" >}}) and [Add and manage variables]({{< relref "../../../dashboards/variables/add-template-variables" >}}) documentation.
## Use query variables
Variables of the type _Query_ help you query Loki for lists of labels or label values.
The Loki data source provides a form to select the type of values expected for a given variable.
The form has these options:
| Query type | Example label | Example stream selector | List returned |
| ------------ | ------------- | ----------------------- | ---------------------------------------------------------------- |
| Label names | Not required | Not required | Label names. |
| Label values | `label` | | Label values for `label`. |
| Label values | `label` | `log stream selector` | Label values for `label` in the specified `log stream selector`. |
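For example, after you define a `job` variable with the **Label values** query type, you can reference it in a log stream selector. The variable name is illustrative:

```
{job="$job"}
```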
## Use ad hoc filters
Loki supports the special **Ad hoc filters** variable type.
You can use this variable type to specify any number of key/value filters, and Grafana applies them automatically to all of your Loki queries.
For more information, refer to [Add ad hoc filters]({{< relref "../../../dashboards/variables/add-template-variables#add-ad-hoc-filters" >}}).
## Use interval and range variables
You can use some global built-in variables in query variables: `$__interval`, `$__interval_ms`, `$__range`, `$__range_s`, and `$__range_ms`.
For more information, refer to [Global built-in variables]({{< relref "../../../dashboards/variables/add-template-variables#global-variables" >}}).

@ -1,624 +0,0 @@
---
aliases:
- /docs/grafana/latest/datasources/mssql/
- /docs/grafana/latest/features/datasources/mssql/
description: Guide for using Microsoft SQL Server in Grafana
keywords:
- grafana
- MSSQL
- Microsoft
- SQL
- guide
- Azure SQL Database
title: Microsoft SQL Server
weight: 900
---
# Using Microsoft SQL Server in Grafana
Grafana ships with a built-in Microsoft SQL Server (MS SQL) data source plugin that allows you to query and visualize data from any Microsoft SQL Server 2005 or newer, including Microsoft Azure SQL Database. This topic explains options, variables, querying, and other features specific to the MS SQL data source. Refer to [Add a data source]({{< relref "add-a-data-source/" >}}) for instructions on how to add a data source to Grafana. Only users with the organization admin role can add data sources.
## Data source options
To access data source settings, hover your mouse over the **Configuration** (gear) icon, then click **Data Sources**, and then click the data source.
| Name | Description |
| ---------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Name` | The data source name. This is how you refer to the data source in panels and queries. |
| `Default` | Default data source means that it will be pre-selected for new panels. |
| `Host`           | The IP address/hostname and optional port of your MS SQL instance. If you omit the port, then the driver default is used (0). You can specify multiple connection properties, such as `ApplicationIntent`, by separating each property with a semicolon (`;`). |
| `Database` | Name of your MS SQL database. |
| `Authentication` | Authentication mode. Either using SQL Server Authentication or Windows Authentication (single sign on for Windows users). |
| `User`           | Database user's login/username. |
| `Password`       | Database user's password. |
| `Encrypt` | This option determines whether or to which extent a secure SSL TCP/IP connection will be negotiated with the server, default `false`. |
| `Max open` | The maximum number of open connections to the database, default `unlimited`. |
| `Max idle` | The maximum number of connections in the idle connection pool, default `2`. |
| `Max lifetime` | The maximum amount of time in seconds a connection may be reused, default `14400`/4 hours. |
### Min time interval
A lower limit for the [$__interval]({{< relref "../dashboards/variables/add-template-variables/#__interval" >}}) and [$__interval_ms]({{< relref "../dashboards/variables/add-template-variables/#__interval_ms" >}}) variables.
We recommend setting this to the write frequency, for example `1m` if your data is written every minute.
This option can also be overridden/configured in a dashboard panel under data source options. It's important to note that this value **needs** to be formatted as a
number followed by a valid time identifier, e.g. `1m` (1 minute) or `30s` (30 seconds). The following time identifiers are supported:
| Identifier | Description |
| ---------- | ----------- |
| `y` | year |
| `M` | month |
| `w` | week |
| `d` | day |
| `h` | hour |
| `m` | minute |
| `s` | second |
| `ms` | millisecond |
### Database user permissions
The database user you specify when you add the data source should only be granted SELECT permissions on
the specified database and tables you want to query. Grafana does not validate that the query is safe. The query
could include any SQL statement. For example, statements like `DELETE FROM user;` and `DROP TABLE user;` would be
executed. To protect against this we _highly_ recommend you create a specific MS SQL user with restricted permissions.
Example:
```sql
CREATE USER grafanareader WITH PASSWORD 'password'
GRANT SELECT ON dbo.YourTable3 TO grafanareader
```
Make sure the user does not get any unwanted privileges from the public role.
### Known issues
If you're using an older version of Microsoft SQL Server, such as 2008 or 2008 R2, you may need to disable encryption to be able to connect.
If possible, we recommend using the latest service pack available for optimal compatibility.
## Query builder
{{< figure src="/static/img/docs/v92/mssql_query_builder.png" class="docs-image--no-shadow" >}}
The MS SQL query builder is available when editing a panel using an MS SQL data source. The built query can be run by pressing the `Run query` button in the top right corner of the editor.
### Format
The response from MS SQL can be formatted as either a table or as a time series. To use the time series format, one of the columns must be named `time`.
### Dataset and Table selection
In the dataset dropdown, choose the MS SQL database to query. The dropdown is populated with the databases that the user has access to.
When a dataset is selected, the table dropdown is populated with the tables that are available.
### Columns and Aggregation functions (SELECT)
Using the dropdown, select a column to include in the data. You can also specify an optional aggregation function.
Add further value columns by clicking the plus button; another column dropdown appears.
### Filter data (WHERE)
To add a filter, flip the switch at the top of the editor.
Using the first dropdown, select if all the filters need to match (AND) or if only one of the filters needs to match (OR).
To filter on more columns, use the plus button.
### Group By
To group the results by column, flip the group switch at the top of the editor. You can then choose which column to group the results by. The group by clause can be removed by pressing the X button.
### Preview
By flipping the preview switch at the top of the editor, you can get a preview of the SQL query generated by the query builder.
## Code editor
{{< figure src="/static/img/docs/v92/sql_code_editor.png" class="docs-image--no-shadow" >}}
To make advanced queries, switch to the code editor by clicking `code` in the top right corner of the editor. The code editor supports autocompletion of tables, columns, SQL keywords, standard SQL functions, Grafana template variables, and Grafana macros. Columns cannot be completed before a table has been specified.
You can expand the code editor by pressing the `chevron` pointing downwards in the lower right corner of the code editor.
`CTRL/CMD + Return` works as a keyboard shortcut to run the query.
<div class="clearfix"></div>
## Macros
To simplify syntax and to allow for dynamic parts, like date range filters, the query can contain macros.
| Macro example | Description |
| ----------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `$__time(dateColumn)` | Will be replaced by an expression to rename the column to _time_. For example, _dateColumn as time_ |
| `$__timeEpoch(dateColumn)` | Will be replaced by an expression to convert a DATETIME column type to Unix timestamp and rename it to _time_. <br/>For example, _DATEDIFF(second, '1970-01-01', dateColumn) AS time_ |
| `$__timeFilter(dateColumn)` | Will be replaced by a time range filter using the specified column name. <br/>For example, _dateColumn BETWEEN '2017-04-21T05:01:17Z' AND '2017-04-21T05:06:17Z'_ |
| `$__timeFrom()` | Will be replaced by the start of the currently active time selection. For example, _'2017-04-21T05:01:17Z'_ |
| `$__timeTo()` | Will be replaced by the end of the currently active time selection. For example, _'2017-04-21T05:06:17Z'_ |
| `$__timeGroup(dateColumn,'5m'[, fillvalue])` | Will be replaced by an expression usable in GROUP BY clause. Providing a _fillValue_ of _NULL_ or _floating value_ will automatically fill empty series in timerange with that value. <br/>For example, _CAST(ROUND(DATEDIFF(second, '1970-01-01', time_column)/300.0, 0) as bigint)\*300_. |
| `$__timeGroup(dateColumn,'5m', 0)` | Same as above but with a fill parameter so missing points in that series will be added by grafana and 0 will be used as value. |
| `$__timeGroup(dateColumn,'5m', NULL)` | Same as above but NULL will be used as value for missing points. |
| `$__timeGroup(dateColumn,'5m', previous)` | Same as above but the previous value in that series will be used as fill value if no value has been seen yet NULL will be used (only available in Grafana 5.3+). |
| `$__timeGroupAlias(dateColumn,'5m')` | Will be replaced identical to \$\_\_timeGroup but with an added column alias (only available in Grafana 5.3+). |
| `$__unixEpochFilter(dateColumn)` | Will be replaced by a time range filter using the specified column name with times represented as Unix timestamp. For example, _dateColumn > 1494410783 AND dateColumn < 1494497183_ |
| `$__unixEpochFrom()` | Will be replaced by the start of the currently active time selection as Unix timestamp. For example, _1494410783_ |
| `$__unixEpochTo()` | Will be replaced by the end of the currently active time selection as Unix timestamp. For example, _1494497183_ |
| `$__unixEpochNanoFilter(dateColumn)` | Will be replaced by a time range filter using the specified column name with times represented as nanosecond timestamp. For example, _dateColumn > 1494410783152415214 AND dateColumn < 1494497183142514872_ |
| `$__unixEpochNanoFrom()` | Will be replaced by the start of the currently active time selection as nanosecond timestamp. For example, _1494410783152415214_ |
| `$__unixEpochNanoTo()` | Will be replaced by the end of the currently active time selection as nanosecond timestamp. For example, _1494497183142514872_ |
| `$__unixEpochGroup(dateColumn,'5m', [fillmode])` | Same as \$\_\_timeGroup but for times stored as Unix timestamp (only available in Grafana 5.3+). |
| `$__unixEpochGroupAlias(dateColumn,'5m', [fillmode])` | Same as above but also adds a column alias (only available in Grafana 5.3+). |
We plan to add many more macros. If you have suggestions for what macros you would like to see, please [open an issue](https://github.com/grafana/grafana) in our GitHub repo.
The query editor has a link named `Generated SQL` that shows up after a query has been executed, while in panel edit mode. Click on it and it will expand and show the raw interpolated SQL string that was executed.
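For example, here's a sketch of how a macro is interpolated. The table name, column name, and time range are illustrative:

```sql
-- Query as written in the editor:
SELECT * FROM [events] WHERE $__timeFilter(time)

-- Interpolated SQL sent to the server, for an illustrative dashboard time range:
SELECT * FROM [events] WHERE time BETWEEN '2017-04-21T05:01:17Z' AND '2017-04-21T05:06:17Z'
```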
## Table queries
If the `Format as` query option is set to `Table`, then you can write essentially any type of SQL query. The table panel will automatically show the results of whatever columns and rows your query returns.
**Example database table:**
```sql
CREATE TABLE [event] (
  time_sec bigint,
  description nvarchar(100),
  tags nvarchar(100)
)
```
```sql
CREATE TABLE [mssql_types] (
c_bit bit, c_tinyint tinyint, c_smallint smallint, c_int int, c_bigint bigint, c_money money, c_smallmoney smallmoney, c_numeric numeric(10,5),
c_real real, c_decimal decimal(10,2), c_float float,
c_char char(10), c_varchar varchar(10), c_text text,
c_nchar nchar(12), c_nvarchar nvarchar(12), c_ntext ntext,
c_datetime datetime, c_datetime2 datetime2, c_smalldatetime smalldatetime, c_date date, c_time time, c_datetimeoffset datetimeoffset
)
INSERT INTO [mssql_types]
SELECT
1, 5, 20020, 980300, 1420070400, '$20000.15', '£2.15', 12345.12,
1.11, 2.22, 3.33,
'char10', 'varchar10', 'text',
N'☺nchar12☺', N'☺nvarchar12☺', N'☺text☺',
GETDATE(), CAST(GETDATE() AS DATETIME2), CAST(GETDATE() AS SMALLDATETIME), CAST(GETDATE() AS DATE), CAST(GETDATE() AS TIME), SWITCHOFFSET(CAST(GETDATE() AS DATETIMEOFFSET), '-07:00')
```
Query editor with example query:
{{< figure src="/static/img/docs/v51/mssql_table_query.png" max-width="500px" class="docs-image--no-shadow" >}}
The query:
```sql
SELECT * FROM [mssql_types]
```
You can control the name of the Table panel columns by using regular `AS` SQL column selection syntax. Example:
```sql
SELECT
c_bit as [column1], c_tinyint as [column2]
FROM
[mssql_types]
```
The resulting table panel:
{{< figure src="/static/img/docs/v51/mssql_table_result.png" max-width="1489px" class="docs-image--no-shadow" >}}
## Time series queries
If you set Format as to _Time series_, then the query must have a column named `time` that returns either a SQL datetime or any numeric datatype representing Unix epoch in seconds. In addition, result sets of time series queries must be sorted by time for panels to properly visualize the result.
A time series query result is returned in a [wide data frame format]({{< relref "../developers/plugins/data-frames/#wide-format" >}}). Any column except time or of type string transforms into value fields in the data frame query result. Any string column transforms into field labels in the data frame query result.
> For backward compatibility, there's an exception to the above rule for queries that return three columns including a string column named metric. Instead of transforming the metric column into field labels, it becomes the field name, and then the series name is formatted as the value of the metric column. See the example with the metric column below.
To optionally customize the default series name formatting, refer to [Standard options definitions]({{< relref "../panels-visualizations/configure-standard-options/#display-name" >}}).
**Example with `metric` column:**
```sql
SELECT
$__timeGroup(time_date_time, '5m') as time,
min("value_double"),
'min' as metric
FROM test_data
WHERE $__timeFilter(time_date_time)
GROUP BY $__timeGroup(time_date_time, '5m')
ORDER BY 1
```
Data frame result:
```text
+---------------------+-----------------+
| Name: time | Name: min |
| Labels: | Labels: |
| Type: []time.Time | Type: []float64 |
+---------------------+-----------------+
| 2020-01-02 03:05:00 | 3 |
| 2020-01-02 03:10:00 | 6 |
+---------------------+-----------------+
```
**Example using the fill parameter in the $\_\_timeGroup macro to convert null values to zero instead:**
```sql
SELECT
$__timeGroup(createdAt, '5m', 0) as time,
sum(value) as value,
hostname
FROM test_data
WHERE
$__timeFilter(createdAt)
GROUP BY
$__timeGroup(createdAt, '5m', 0),
hostname
ORDER BY 1
```
Given the data frame result in the following example and using the graph panel, you will get two series named _value 10.0.1.1_ and _value 10.0.1.2_. To render the series with a name of _10.0.1.1_ and _10.0.1.2_ , use a [Standard options definitions]({{< relref "../panels-visualizations/configure-standard-options/#display-name" >}}) display name value of `${__field.labels.hostname}`.
Data frame result:
```text
+---------------------+---------------------------+---------------------------+
| Name: time | Name: value | Name: value |
| Labels: | Labels: hostname=10.0.1.1 | Labels: hostname=10.0.1.2 |
| Type: []time.Time | Type: []float64 | Type: []float64 |
+---------------------+---------------------------+---------------------------+
| 2020-01-02 03:05:00 | 3 | 4 |
| 2020-01-02 03:10:00 | 6 | 7 |
+---------------------+---------------------------+---------------------------+
```
**Example with multiple columns:**
```sql
SELECT
$__timeGroup(time_date_time, '5m'),
min(value_double) as min_value,
max(value_double) as max_value
FROM test_data
WHERE $__timeFilter(time_date_time)
GROUP BY $__timeGroup(time_date_time, '5m')
ORDER BY 1
```
Data frame result:
```text
+---------------------+-----------------+-----------------+
| Name: time | Name: min_value | Name: max_value |
| Labels: | Labels: | Labels: |
| Type: []time.Time | Type: []float64 | Type: []float64 |
+---------------------+-----------------+-----------------+
| 2020-01-02 03:04:00 | 3 | 4 |
| 2020-01-02 03:05:00 | 6 | 7 |
+---------------------+-----------------+-----------------+
```
## Templating
Instead of hard-coding things like server, application and sensor name in your metric queries you can use variables in their place. Variables are shown as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the data being displayed in your dashboard.
Check out the [Templating]({{< relref "../dashboards/variables" >}}) documentation for an introduction to the templating feature and the different types of template variables.
### Query variable
If you add a template variable of the type `Query`, you can write a MS SQL query that can
return things like measurement names, key names or key values that are shown as a dropdown select box.
For example, you can have a variable that contains all values for the `hostname` column in a table if you specify a query like this in the templating variable **Query** setting.
```sql
SELECT hostname FROM host
```
A query can return multiple columns and Grafana will automatically create a list from them. For example, the query below will return a list with values from `hostname` and `hostname2`.
```sql
SELECT [host].[hostname], [other_host].[hostname2] FROM host JOIN other_host ON [host].[city] = [other_host].[city]
```
Another option is a query that can create a key/value variable. The query should return two columns that are named `__text` and `__value`. The `__text` column value should be unique (if it is not unique then the first value is used). The options in the dropdown will have a text and value that allow you to have a friendly name as text and an id as the value. An example query with `hostname` as the text and `id` as the value:
```sql
SELECT hostname __text, id __value FROM host
```
You can also create nested variables. For example, if you had another variable named `region`, you could have
the hosts variable show only hosts from the currently selected region with a query like this (if `region` is a multi-value variable, then use the `IN` comparison operator rather than `=` to match against multiple values):
```sql
SELECT hostname FROM host WHERE region IN ($region)
```
### Using Variables in Queries
> From Grafana 4.3.0 to 4.6.0, template variables are always quoted automatically so if it is a string value do not wrap them in quotes in where clauses.
>
> From Grafana 5.0.0, template variable values are only quoted when the template variable is a `multi-value`.
If the variable is a multi-value variable then use the `IN` comparison operator rather than `=` to match against multiple values.
There are two syntaxes:
`$<varname>` Example with a template variable named `hostname`:
```sql
SELECT
atimestamp time,
aint value
FROM table
WHERE $__timeFilter(atimestamp) and hostname in($hostname)
ORDER BY atimestamp
```
`[[varname]]` Example with a template variable named `hostname`:
```sql
SELECT
atimestamp as time,
aint as value
FROM table
WHERE $__timeFilter(atimestamp) and hostname in([[hostname]])
ORDER BY atimestamp
```
#### Disabling Quoting for Multi-value Variables
Grafana automatically creates a quoted, comma-separated string for multi-value variables. For example: if `server01` and `server02` are selected, then it's formatted as `'server01', 'server02'`. To disable quoting, use the CSV formatting option for variables:
`${servers:csv}`
Read more about variable formatting options in the [Variables]({{< relref "../dashboards/variables/variable-syntax/#advanced-variable-format-options" >}}) documentation.
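For example, a query that filters on numeric IDs, where the quoted format would be invalid; the table, column, and variable names are illustrative:

```sql
SELECT hostname FROM host WHERE id IN (${host_ids:csv})
```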
## Annotations
[Annotations]({{< relref "../dashboards/build-dashboards/annotate-visualizations" >}}) allow you to overlay rich event information on top of graphs. You add annotation queries via the Dashboard menu / Annotations view.
**Columns:**
| Name | Description |
| --------- | --------------------------------------------------------------------------------------------------------------------------------- |
| `time` | The name of the date/time field. Could be a column with a native SQL date/time data type or epoch value. |
| `timeend` | Optional name of the end date/time field. Could be a column with a native SQL date/time data type or epoch value. (Grafana v6.6+) |
| `text` | Event description field. |
| `tags` | Optional field name to use for event tags as a comma separated string. |
**Example database tables:**
```sql
CREATE TABLE [events] (
  time_sec bigint,
  description nvarchar(100),
  tags nvarchar(100)
)
```
We also use the database table defined in [Time series queries](#time-series-queries).
**Example query using time column with epoch values:**
```sql
SELECT
time_sec as time,
description as [text],
tags
FROM
[events]
WHERE
$__unixEpochFilter(time_sec)
ORDER BY 1
```
**Example region query using time and timeend columns with epoch values:**
> Only available in Grafana v6.6+.
```sql
SELECT
time_sec as time,
time_end_sec as timeend,
description as [text],
tags
FROM
[events]
WHERE
$__unixEpochFilter(time_sec)
ORDER BY 1
```
**Example query using time column of native SQL date/time data type:**
```sql
SELECT
time,
measurement as text,
convert(varchar, valueOne) + ',' + convert(varchar, valueTwo) as tags
FROM
metric_values
WHERE
  $__timeFilter(time)
ORDER BY 1
```
## Stored procedure support
Stored procedures have been verified to work. However, note that we haven't done anything special to support them, so there might be edge cases where they don't work as you would expect.
Stored procedures should be supported in table, time series, and annotation queries as long as you use the same column naming and return data in the same format as described above under the respective sections.
Note that macro functions do not work inside stored procedures.
### Examples
{{< figure src="/static/img/docs/v51/mssql_metrics_graph.png" class="docs-image--no-shadow docs-image--right" >}}
For the following examples, the database table is defined in [Time series queries](#time-series-queries). Let's say that we want to visualize four series in a graph panel, such as all combinations of the columns `valueOne`, `valueTwo`, and `measurement`. The graph panel to the right visualizes what we want to achieve. To solve this, we need to use two queries:
**First query:**
```sql
SELECT
$__timeGroup(time, '5m') as time,
measurement + ' - value one' as metric,
avg(valueOne) as valueOne
FROM
metric_values
WHERE
$__timeFilter(time)
GROUP BY
$__timeGroup(time, '5m'),
measurement
ORDER BY 1
```
**Second query:**
```sql
SELECT
  $__timeGroup(time, '5m') as time,
  measurement + ' - value two' as metric,
  avg(valueTwo) as valueTwo
FROM
  metric_values
WHERE
  $__timeFilter(time)
GROUP BY
  $__timeGroup(time, '5m'),
  measurement
ORDER BY 1
```
#### Stored procedure using time in epoch format
We can define a stored procedure that will return all data we need to render 4 series in a graph panel like above.
In this case the stored procedure accepts two parameters, `@from` and `@to`, of `int` data types, which should be a time range (from-to) in epoch format
that is used to filter the data returned from the stored procedure.
We're mimicking the `$__timeGroup(time, '5m')` in the select and group by expressions, and that's why there are a lot of lengthy expressions needed -
these could be extracted to MS SQL functions, if wanted.
```sql
CREATE PROCEDURE sp_test_epoch(
@from int,
@to int
) AS
BEGIN
SELECT
cast(cast(DATEDIFF(second, {d '1970-01-01'}, DATEADD(second, DATEDIFF(second,GETDATE(),GETUTCDATE()), time))/600 as int)*600 as int) as time,
measurement + ' - value one' as metric,
avg(valueOne) as value
FROM
metric_values
WHERE
time >= DATEADD(s, @from, '1970-01-01') AND time <= DATEADD(s, @to, '1970-01-01')
GROUP BY
cast(cast(DATEDIFF(second, {d '1970-01-01'}, DATEADD(second, DATEDIFF(second,GETDATE(),GETUTCDATE()), time))/600 as int)*600 as int),
measurement
UNION ALL
SELECT
cast(cast(DATEDIFF(second, {d '1970-01-01'}, DATEADD(second, DATEDIFF(second,GETDATE(),GETUTCDATE()), time))/600 as int)*600 as int) as time,
measurement + ' - value two' as metric,
avg(valueTwo) as value
FROM
metric_values
WHERE
time >= DATEADD(s, @from, '1970-01-01') AND time <= DATEADD(s, @to, '1970-01-01')
GROUP BY
cast(cast(DATEDIFF(second, {d '1970-01-01'}, DATEADD(second, DATEDIFF(second,GETDATE(),GETUTCDATE()), time))/600 as int)*600 as int),
measurement
ORDER BY 1
END
```
Then we can use the following query for our graph panel.
```sql
DECLARE
@from int = $__unixEpochFrom(),
@to int = $__unixEpochTo()
EXEC dbo.sp_test_epoch @from, @to
```
#### Stored procedure using time in datetime format
We can define a stored procedure that will return all data we need to render 4 series in a graph panel like above.
In this case the stored procedure accepts two parameters `@from` and `@to` of `datetime` data types which should be a timerange (from-to)
which will be used to filter the data to return from the stored procedure.
We're mimicking the `$__timeGroup(time, '5m')` in the select and group by expressions and that's why there's a lot of lengthy expressions needed -
these could be extracted to MS SQL functions, if wanted.
```sql
CREATE PROCEDURE sp_test_datetime(
@from datetime,
@to datetime
) AS
BEGIN
SELECT
cast(cast(DATEDIFF(second, {d '1970-01-01'}, time)/600 as int)*600 as int) as time,
measurement + ' - value one' as metric,
avg(valueOne) as value
FROM
metric_values
WHERE
time >= @from AND time <= @to
GROUP BY
cast(cast(DATEDIFF(second, {d '1970-01-01'}, time)/600 as int)*600 as int),
measurement
UNION ALL
SELECT
cast(cast(DATEDIFF(second, {d '1970-01-01'}, time)/600 as int)*600 as int) as time,
measurement + ' - value two' as metric,
avg(valueTwo) as value
FROM
metric_values
WHERE
time >= @from AND time <= @to
GROUP BY
cast(cast(DATEDIFF(second, {d '1970-01-01'}, time)/600 as int)*600 as int),
measurement
ORDER BY 1
END
```
Then we can use the following query for our graph panel.
```sql
DECLARE
@from datetime = $__timeFrom(),
@to datetime = $__timeTo()
EXEC dbo.sp_test_datetime @from, @to
```
## Alerting
Time series queries should work in alerting conditions. Table formatted queries are not yet supported in alert rule
conditions.
## Configure the data source with provisioning
You can configure data sources using config files with Grafana's provisioning system. For more information about how it works, and for all the settings that you can set for data sources, refer to the [provisioning docs page]({{< relref "../administration/provisioning/#datasources" >}}).
Here are some provisioning examples for this data source.
```yaml
apiVersion: 1
datasources:
- name: MSSQL
type: mssql
url: localhost:1433
database: grafana
user: grafana
jsonData:
maxOpenConns: 0 # Grafana v5.4+
maxIdleConns: 2 # Grafana v5.4+
connMaxLifetime: 14400 # Grafana v5.4+
secureJsonData:
password: 'Password!'
```

@ -0,0 +1,138 @@
---
aliases:
- /docs/grafana/latest/features/datasources/mssql/
- /docs/grafana/latest/datasources/mssql/
- /docs/grafana/latest/data-sources/mssql/
description: Guide for using Microsoft SQL Server in Grafana
keywords:
- grafana
- MSSQL
- Microsoft
- SQL
- guide
- Azure SQL Database
menuTitle: Microsoft SQL Server
title: Microsoft SQL Server data source
weight: 900
---
# Microsoft SQL Server data source
Grafana ships with built-in support for Microsoft SQL Server (MS SQL).
You can query and visualize data from any Microsoft SQL Server 2005 or newer, including Microsoft Azure SQL Database.
This topic explains configuration specific to the Microsoft SQL Server data source.
For instructions on how to add a data source to Grafana, refer to the [administration documentation]({{< relref "../../administration/data-source-management/" >}}).
Only users with the organization administrator role can add data sources.
Administrators can also [configure the data source via YAML]({{< relref "#provision-the-data-source" >}}) with Grafana's provisioning system.
Once you've added the Microsoft SQL Server data source, you can [configure it]({{< relref "#configure-the-data-source" >}}) so that your Grafana instance's users can create queries in its [query editor]({{< relref "./query-editor/" >}}) when they [build dashboards]({{< relref "../../dashboards/build-dashboards/" >}}) and use [Explore]({{< relref "../../explore/" >}}).
## Configure the data source
**To access the data source configuration page:**
1. Hover the cursor over the **Configuration** (gear) icon.
1. Select **Data Sources**.
1. Select the Microsoft SQL Server data source.
Set the data source's basic configuration options carefully:
| Name | Description |
| ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Name** | Sets the name you use to refer to the data source in panels and queries. |
| **Default** | Sets the data source that's pre-selected for new panels. |
| **Host** | Sets the IP address/hostname and optional port of your MS SQL instance. Default port is 0, the driver default. You can specify multiple connection properties, such as `ApplicationIntent`, by separating each property with a semicolon (`;`). |
| **Database** | Sets the name of your MS SQL database. |
| **Authentication** | Sets the authentication mode, either using SQL Server Authentication or Windows Authentication (single sign-on for Windows users). |
| **User** | Defines the database user's username. |
| **Password** | Defines the database user's password. |
| **Encrypt** | Determines whether to negotiate a secure SSL TCP/IP connection with the server, or to which extent. Default is `false`. |
| **Max open** | Sets the maximum number of open connections to the database. Default is `unlimited`. |
| **Max idle** | Sets the maximum number of connections in the idle connection pool. Default is `2`. |
| **Max lifetime** | Sets the maximum number of seconds that the data source can reuse a connection. Default is `14400` (4 hours). |
You can also configure settings specific to the Microsoft SQL Server data source:
### Min time interval
The **Min time interval** setting defines a lower limit for the [`$__interval`]({{< relref "../../dashboards/variables/add-template-variables#__interval" >}}) and [`$__interval_ms`]({{< relref "../../dashboards/variables/add-template-variables#__interval_ms" >}}) variables.
This value _must_ be formatted as a number followed by a valid time identifier:
| Identifier | Description |
| ---------- | ----------- |
| `y` | year |
| `M` | month |
| `w` | week |
| `d` | day |
| `h` | hour |
| `m` | minute |
| `s` | second |
| `ms` | millisecond |
We recommend setting this value to match your Microsoft SQL Server write frequency.
For example, use `1m` if Microsoft SQL Server writes data every minute.
You can also override this setting in a dashboard panel under its data source options.
### Database user permissions
Grafana doesn't validate that a query is safe, so queries can contain any SQL statement.
For example, Microsoft SQL Server would execute destructive queries like `DELETE FROM user;` and `DROP TABLE user;` if the querying user has permission to do so.
To protect against this, we strongly recommend that you create a specific MS SQL user with restricted permissions.
Grant only `SELECT` permissions on the specified database and tables that you want to query to the database user you specified when you added the data source:
```sql
CREATE USER grafanareader WITH PASSWORD = 'password'
GRANT SELECT ON dbo.YourTable3 TO grafanareader
```
Also, ensure that the user doesn't have any unwanted privileges from the public role.
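For example, a minimal sketch of removing such a privilege, assuming `dbo.YourTable3` had previously been granted to the `public` role:

```sql
-- Revoke a SELECT grant that every database user would otherwise
-- inherit through the public role.
REVOKE SELECT ON dbo.YourTable3 FROM public;
```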
### Diagnose connection issues
If you use older versions of Microsoft SQL Server, such as 2008 and 2008R2, you might need to disable encryption before you can connect the data source.
We recommend that you use the latest available service pack for optimal compatibility.
### Provision the data source
You can define and configure the data source in YAML files as part of Grafana's provisioning system.
For more information about provisioning, and for available configuration options, refer to [Provisioning Grafana]({{< relref "../../administration/provisioning/#data-sources" >}}).
#### Provisioning example
```yaml
apiVersion: 1
datasources:
- name: MSSQL
type: mssql
url: localhost:1433
database: grafana
user: grafana
jsonData:
maxOpenConns: 0 # Grafana v5.4+
maxIdleConns: 2 # Grafana v5.4+
connMaxLifetime: 14400 # Grafana v5.4+
secureJsonData:
password: 'Password!'
```
## Query the data source
You can create queries with the Microsoft SQL Server data source's query editor when editing a panel that uses a MS SQL data source.
For details, refer to the [query editor documentation]({{< relref "./query-editor/" >}}).
## Use template variables
Instead of hard-coding details such as server, application, and sensor names in metric queries, you can use variables.
Grafana lists these variables in dropdown select boxes at the top of the dashboard to help you change the data displayed in your dashboard.
Grafana refers to such variables as template variables.
For details, see the [template variables documentation]({{< relref "./template-variables/" >}}).

@ -0,0 +1,539 @@
---
aliases:
- /docs/grafana/latest/data-sources/mssql/query-editor/
description: Guide for using the Microsoft SQL Server data source's query editor
keywords:
- grafana
- MSSQL
- Microsoft
- SQL
- guide
- Azure SQL Database
- queries
menuTitle: Query editor
title: Microsoft SQL Server query editor
weight: 300
---
# Microsoft SQL Server query editor
You can create queries with the Microsoft SQL Server data source's query editor when editing a panel that uses a MS SQL data source.
This topic explains querying specific to the MS SQL data source.
For general documentation on querying data sources in Grafana, see [Query and transform data]({{< relref "../../../panels-visualizations/query-transform-data" >}}).
## Choose a query editing mode
You can switch the query editor between two modes:
- [Code mode]({{< relref "#code-mode" >}}), which provides a feature-rich editor for writing queries
- [Builder mode]({{< relref "#builder-mode" >}}), which provides a visual query designer
To switch between the editor modes, select the corresponding **Builder** and **Code** tabs above the editor.
To run a query, select **Run query** located at the top right corner of the editor.
The query editor also provides:
- [Macros]({{< relref "#use-macros" >}})
- [Annotations]({{< relref "#apply-annotations" >}})
- [Stored procedures]({{< relref "#use-stored-procedures" >}})
## Configure common options
You can configure a MS SQL-specific response format in the query editor regardless of its mode.
### Choose a response format
Grafana can format the response from MS SQL as either a table or as a time series.
To choose a response format, select either the **Table** or **Time series** formats from the **Format** dropdown.
To use the time series format, you must name one of the MS SQL columns `time`.
You can use time series queries, but not table queries, in alerting conditions.
For details about using these formats, refer to [Use table queries]({{< relref "#use-table-queries" >}}) and [Use time series queries]({{< relref "#use-time-series-queries" >}}).
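For example, a minimal time series query might look like the following sketch, which assumes a hypothetical `test_data` table with a `createdAt` datetime column; the column is aliased to `time` to satisfy the format's requirement:

```sql
SELECT
  createdAt AS time, -- the time series format requires a column named "time"
  value
FROM test_data
WHERE $__timeFilter(createdAt)
ORDER BY 1 -- time series results must be sorted by time
```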
## Code mode
{{< figure src="/static/img/docs/v92/sql_code_editor.png" class="docs-image--no-shadow" >}}
In **Code mode**, you can write complex queries using a text editor with autocompletion features and syntax highlighting.
For more information about Transact-SQL (T-SQL), the query language used by Microsoft SQL Server, refer to the [Transact-SQL tutorial](https://learn.microsoft.com/en-us/sql/t-sql/tutorial-writing-transact-sql-statements).
### Use toolbar features
Code mode has several features in a toolbar located in the editor's lower-right corner.
To reformat the query, click the brackets button (`{}`).
To expand the code editor, click the chevron button pointing downward.
To run the query, click the **Run query** button or use the keyboard shortcut <key>Ctrl</key>/<key>Cmd</key> + <key>Enter</key>/<key>Return</key>.
### Use autocompletion
Code mode's autocompletion feature works automatically while typing.
To manually trigger autocompletion, use the keyboard shortcut <key>Ctrl</key>/<key>Cmd</key> + <key>Space</key>.
Code mode supports autocompletion of tables, columns, SQL keywords, standard SQL functions, Grafana template variables, and Grafana macros.
> **Note:** You can't autocomplete columns until you've specified a table.
## Builder mode
{{< figure src="/static/img/docs/v92/mssql_query_builder.png" class="docs-image--no-shadow" >}}
In **Builder mode**, you can build queries using a visual interface.
### Select a dataset and table
In the **Dataset** dropdown, select the MS SQL database to query.
Grafana populates the dropdown with the databases that the configured user can access.
When you select a dataset, Grafana populates the **Table** dropdown with available tables.
### Select columns and aggregation functions (SELECT)
Select a column from the **Column** dropdown to include it in the data.
You can select an optional aggregation function for the column in the **Aggregation** dropdown.
To add more value columns, click the plus (`+`) button to the right of the column's row.
### Filter data (WHERE)
To add a filter, toggle the **Filter** switch at the top of the editor.
This reveals a **Filter by column value** section with two dropdown selectors.
Use the first dropdown to choose whether all of the filters need to match (`AND`), or if only one of the filters needs to match (`OR`).
Use the second dropdown to choose a filter.
To filter on more columns, click the plus (`+`) button to the right of the condition dropdown.
To remove a filter, click the `x` button next to that filter's dropdown.
### Group results
To group results by column, toggle the **Group** switch at the top of the editor.
This reveals a **Group by column** dropdown where you can select which column to group the results by.
To remove the group-by clause, click the `x` button.
### Preview the query
To preview the SQL query generated by Builder mode, toggle the **Preview** switch at the top of the editor.
This reveals a preview pane containing the query, and a copy icon at the top right that copies the query to your clipboard.
## Use macros
To simplify syntax and to allow for dynamic components, such as date range filters, you can add macros to your query.
| Macro example | Replaced by |
| ----------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `$__time(dateColumn)` | An expression to rename the column to _time_. For example, _dateColumn as time_ |
| `$__timeEpoch(dateColumn)` | An expression to convert a DATETIME column type to Unix timestamp and rename it to _time_.<br/>For example, _DATEDIFF(second, '1970-01-01', dateColumn) AS time_ |
| `$__timeFilter(dateColumn)` | A time range filter using the specified column name.<br/>For example, _dateColumn BETWEEN '2017-04-21T05:01:17Z' AND '2017-04-21T05:06:17Z'_ |
| `$__timeFrom()` | The start of the currently active time selection. For example, _'2017-04-21T05:01:17Z'_ |
| `$__timeTo()` | The end of the currently active time selection. For example, _'2017-04-21T05:06:17Z'_ |
| `$__timeGroup(dateColumn,'5m'[, fillvalue])` | An expression usable in a GROUP BY clause. Providing a _fillValue_ of _NULL_ or a floating-point value automatically fills empty series in the time range with that value.<br/>For example, _CAST(ROUND(DATEDIFF(second, '1970-01-01', time_column)/300.0, 0) as bigint)\*300_. |
| `$__timeGroup(dateColumn,'5m', 0)` | Same as above, but with a fill parameter, so missing points in that series are added by Grafana and 0 is used as the value. |
| `$__timeGroup(dateColumn,'5m', NULL)` | Same as above, but NULL is used as the value for missing points. |
| `$__timeGroup(dateColumn,'5m', previous)` | Same as above, but the previous value in that series is used as the fill value; if no value has been seen yet, NULL is used (only available in Grafana 5.3+). |
| `$__timeGroupAlias(dateColumn,'5m')` | Same as `$__timeGroup` but with an added column alias (only available in Grafana 5.3+). |
| `$__unixEpochFilter(dateColumn)` | A time range filter using the specified column name with times represented as Unix timestamp. For example, _dateColumn > 1494410783 AND dateColumn < 1494497183_ |
| `$__unixEpochFrom()` | The start of the currently active time selection as Unix timestamp. For example, _1494410783_ |
| `$__unixEpochTo()` | The end of the currently active time selection as Unix timestamp. For example, _1494497183_ |
| `$__unixEpochNanoFilter(dateColumn)` | A time range filter using the specified column name with times represented as nanosecond timestamp. For example, _dateColumn > 1494410783152415214 AND dateColumn < 1494497183142514872_ |
| `$__unixEpochNanoFrom()` | The start of the currently active time selection as nanosecond timestamp. For example, _1494410783152415214_ |
| `$__unixEpochNanoTo()` | The end of the currently active time selection as nanosecond timestamp. For example, _1494497183142514872_ |
| `$__unixEpochGroup(dateColumn,'5m', [fillmode])` | Same as `$__timeGroup` but for times stored as Unix timestamp (only available in Grafana 5.3+). |
| `$__unixEpochGroupAlias(dateColumn,'5m', [fillmode])` | Same as above but also adds a column alias (only available in Grafana 5.3+). |
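As an illustration of how interpolation works, the following sketch shows how two of these macros expand, assuming a hypothetical `test_data` table with a `createdAt` datetime column and the dashboard time range used in the examples above:

```sql
-- Query as written in the editor:
SELECT
  $__timeEpoch(createdAt),
  value
FROM test_data
WHERE $__timeFilter(createdAt)
ORDER BY 1

-- Roughly what the data source executes after interpolation:
--
-- SELECT DATEDIFF(second, '1970-01-01', createdAt) AS time, value
-- FROM test_data
-- WHERE createdAt BETWEEN '2017-04-21T05:01:17Z' AND '2017-04-21T05:06:17Z'
-- ORDER BY 1
```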
To suggest more macros, please [open an issue](https://github.com/grafana/grafana) in our GitHub repo.
### View the interpolated query
The query editor also includes a link named **Generated SQL** that appears after running a query while in panel edit mode.
To display the raw interpolated SQL string that the data source executed, click on this link.
## Use table queries
If the **Format** query option is set to **Table** for a [Table panel]({{< relref "../../../panels-visualizations/visualizations/table/" >}}), you can enter any type of SQL query.
The Table panel then displays the query results with whatever columns and rows are returned.
**Example database table:**
```sql
CREATE TABLE [event] (
time_sec bigint,
description nvarchar(100),
  tags nvarchar(100)
)
```
```sql
CREATE TABLE [mssql_types] (
c_bit bit, c_tinyint tinyint, c_smallint smallint, c_int int, c_bigint bigint, c_money money, c_smallmoney smallmoney, c_numeric numeric(10,5),
c_real real, c_decimal decimal(10,2), c_float float,
c_char char(10), c_varchar varchar(10), c_text text,
c_nchar nchar(12), c_nvarchar nvarchar(12), c_ntext ntext,
c_datetime datetime, c_datetime2 datetime2, c_smalldatetime smalldatetime, c_date date, c_time time, c_datetimeoffset datetimeoffset
)
INSERT INTO [mssql_types]
SELECT
1, 5, 20020, 980300, 1420070400, '$20000.15', '£2.15', 12345.12,
1.11, 2.22, 3.33,
'char10', 'varchar10', 'text',
N'☺nchar12☺', N'☺nvarchar12☺', N'☺text☺',
GETDATE(), CAST(GETDATE() AS DATETIME2), CAST(GETDATE() AS SMALLDATETIME), CAST(GETDATE() AS DATE), CAST(GETDATE() AS TIME), SWITCHOFFSET(CAST(GETDATE() AS DATETIMEOFFSET), '-07:00')
```
Query editor with example query:
{{< figure src="/static/img/docs/v51/mssql_table_query.png" max-width="500px" class="docs-image--no-shadow" >}}
The query:
```sql
SELECT * FROM [mssql_types]
```
To control the name of the Table panel columns, use the standard `AS` SQL column selection syntax.
For example:
```sql
SELECT
c_bit as [column1], c_tinyint as [column2]
FROM
[mssql_types]
```
The resulting table panel:
{{< figure src="/static/img/docs/v51/mssql_table_result.png" max-width="1489px" class="docs-image--no-shadow" >}}
## Use time series queries
If you set the **Format** setting in the query editor to **Time series**, then the query must have a column named `time` that returns either a SQL datetime or any numeric datatype representing Unix epoch in seconds.
Result sets of time series queries must also be sorted by time for panels to properly visualize the result.
A time series query result is returned in a [wide data frame format]({{< relref "../../../developers/plugins/data-frames#wide-format" >}}).
Any column other than `time` that isn't of type string transforms into a value field in the data frame query result.
Any string column transforms into field labels in the data frame query result.
### Create a metric query
For backward compatibility, there's an exception to the above rule for queries that return three columns and include a string column named `metric`.
Instead of transforming the `metric` column into field labels, it becomes the field name, and then the series name is formatted as the value of the `metric` column.
See the example with the `metric` column below.
To optionally customize the default series name formatting, refer to [Standard options definitions]({{< relref "../../../panels-visualizations/configure-standard-options#display-name" >}}).
**Example with `metric` column:**
```sql
SELECT
$__timeGroup(time_date_time, '5m') as time,
min("value_double"),
'min' as metric
FROM test_data
WHERE $__timeFilter(time_date_time)
GROUP BY $__timeGroup(time_date_time, '5m')
ORDER BY 1
```
Data frame result:
```text
+---------------------+-----------------+
| Name: time | Name: min |
| Labels: | Labels: |
| Type: []time.Time | Type: []float64 |
+---------------------+-----------------+
| 2020-01-02 03:05:00 | 3 |
| 2020-01-02 03:10:00 | 6 |
+---------------------+-----------------+
```
### Time series query examples
**Using the fill parameter in the $\_\_timeGroup macro to convert null values to zero instead:**
```sql
SELECT
$__timeGroup(createdAt, '5m', 0) as time,
sum(value) as value,
hostname
FROM test_data
WHERE
$__timeFilter(createdAt)
GROUP BY
$__timeGroup(createdAt, '5m', 0),
hostname
ORDER BY 1
```
Given the data frame result in the following example and using the graph panel, you will get two series named _value 10.0.1.1_ and _value 10.0.1.2_. To render the series with names of _10.0.1.1_ and _10.0.1.2_, use a [Standard options definitions]({{< relref "../../../panels-visualizations/configure-standard-options#display-name" >}}) display name value of `${__field.labels.hostname}`.
Data frame result:
```text
+---------------------+---------------------------+---------------------------+
| Name: time | Name: value | Name: value |
| Labels: | Labels: hostname=10.0.1.1 | Labels: hostname=10.0.1.2 |
| Type: []time.Time | Type: []float64 | Type: []float64 |
+---------------------+---------------------------+---------------------------+
| 2020-01-02 03:05:00 | 3 | 4 |
| 2020-01-02 03:10:00 | 6 | 7 |
+---------------------+---------------------------+---------------------------+
```
**Using multiple columns:**
```sql
SELECT
$__timeGroup(time_date_time, '5m'),
min(value_double) as min_value,
max(value_double) as max_value
FROM test_data
WHERE $__timeFilter(time_date_time)
GROUP BY $__timeGroup(time_date_time, '5m')
ORDER BY 1
```
Data frame result:
```text
+---------------------+-----------------+-----------------+
| Name: time | Name: min_value | Name: max_value |
| Labels: | Labels: | Labels: |
| Type: []time.Time | Type: []float64 | Type: []float64 |
+---------------------+-----------------+-----------------+
| 2020-01-02 03:04:00 | 3 | 4 |
| 2020-01-02 03:05:00 | 6 | 7 |
+---------------------+-----------------+-----------------+
```
## Apply annotations
[Annotations]({{< relref "../../../dashboards/build-dashboards/annotate-visualizations" >}}) overlay rich event information on top of graphs.
You can add annotation queries in the Dashboard menu's Annotations view.
**Columns:**
| Name | Description |
| --------- | --------------------------------------------------------------------------------------------------------------------------------- |
| `time` | The name of the date/time field. Could be a column with a native SQL date/time data type or epoch value. |
| `timeend` | Optional name of the end date/time field. Could be a column with a native SQL date/time data type or epoch value. (Grafana v6.6+) |
| `text` | Event description field. |
| `tags`    | Optional field name to use for event tags as a comma-separated string.                                                             |
**Example database tables:**
```sql
CREATE TABLE [events] (
time_sec bigint,
description nvarchar(100),
  tags nvarchar(100)
)
```
We also use the database table defined in [Use time series queries]({{< relref "#use-time-series-queries" >}}).
**Example query using time column with epoch values:**
```sql
SELECT
time_sec as time,
description as [text],
tags
FROM
[events]
WHERE
$__unixEpochFilter(time_sec)
ORDER BY 1
```
**Example region query using time and timeend columns with epoch values:**
> Only available in Grafana v6.6+.
```sql
SELECT
time_sec as time,
time_end_sec as timeend,
description as [text],
tags
FROM
[events]
WHERE
$__unixEpochFilter(time_sec)
ORDER BY 1
```
**Example query using time column of native SQL date/time data type:**
```sql
SELECT
time,
  measurement as [text],
convert(varchar, valueOne) + ',' + convert(varchar, valueTwo) as tags
FROM
metric_values
WHERE
  $__timeFilter(time)
ORDER BY 1
```
## Use stored procedures
Stored procedures have been verified to work.
However, we haven't done anything special to support them, so there might be edge cases where they don't work as you'd expect.
Stored procedures should be supported in table, time series, and annotation queries, as long as you use the same column naming and return data in the same format as described in the respective sections above.
Note that macro functions don't work inside stored procedures.
### Examples
{{< figure src="/static/img/docs/v51/mssql_metrics_graph.png" class="docs-image--no-shadow docs-image--right" >}}
For the following examples, the database table is defined in [Use time series queries](#use-time-series-queries). Let's say that we want to visualize four series in a graph panel, such as all combinations of the columns `valueOne`, `valueTwo`, and `measurement`. The graph panel to the right shows what we want to achieve. To solve this, we need two queries:
**First query:**
```sql
SELECT
$__timeGroup(time, '5m') as time,
measurement + ' - value one' as metric,
avg(valueOne) as valueOne
FROM
metric_values
WHERE
$__timeFilter(time)
GROUP BY
$__timeGroup(time, '5m'),
measurement
ORDER BY 1
```
**Second query:**
```sql
SELECT
$__timeGroup(time, '5m') as time,
measurement + ' - value two' as metric,
avg(valueTwo) as valueTwo
FROM
  metric_values
WHERE
  $__timeFilter(time)
GROUP BY
$__timeGroup(time, '5m'),
measurement
ORDER BY 1
```
#### Stored procedure using time in epoch format
We can define a stored procedure that returns all the data needed to render the four series in the graph panel above.
In this case, the stored procedure accepts two parameters, `@from` and `@to`, of the `int` data type. These define a time range (from-to) in epoch format that is used to filter the data returned by the stored procedure.
We mimic `$__timeGroup(time, '5m')` in the select and group by expressions, which is why several lengthy expressions are needed.
These could be extracted into MS SQL functions, if desired.
```sql
CREATE PROCEDURE sp_test_epoch(
@from int,
@to int
) AS
BEGIN
SELECT
cast(cast(DATEDIFF(second, {d '1970-01-01'}, DATEADD(second, DATEDIFF(second,GETDATE(),GETUTCDATE()), time))/600 as int)*600 as int) as time,
measurement + ' - value one' as metric,
avg(valueOne) as value
FROM
metric_values
WHERE
time >= DATEADD(s, @from, '1970-01-01') AND time <= DATEADD(s, @to, '1970-01-01')
GROUP BY
cast(cast(DATEDIFF(second, {d '1970-01-01'}, DATEADD(second, DATEDIFF(second,GETDATE(),GETUTCDATE()), time))/600 as int)*600 as int),
measurement
UNION ALL
SELECT
cast(cast(DATEDIFF(second, {d '1970-01-01'}, DATEADD(second, DATEDIFF(second,GETDATE(),GETUTCDATE()), time))/600 as int)*600 as int) as time,
measurement + ' - value two' as metric,
avg(valueTwo) as value
FROM
metric_values
WHERE
time >= DATEADD(s, @from, '1970-01-01') AND time <= DATEADD(s, @to, '1970-01-01')
GROUP BY
cast(cast(DATEDIFF(second, {d '1970-01-01'}, DATEADD(second, DATEDIFF(second,GETDATE(),GETUTCDATE()), time))/600 as int)*600 as int),
measurement
ORDER BY 1
END
```
Then we can use the following query for our graph panel.
```sql
DECLARE
@from int = $__unixEpochFrom(),
@to int = $__unixEpochTo()
EXEC dbo.sp_test_epoch @from, @to
```
#### Stored procedure using time in datetime format
We can define a stored procedure that returns all the data needed to render the four series in the graph panel above.
In this case, the stored procedure accepts two parameters, `@from` and `@to`, of the `datetime` data type. These define a time range (from-to) that is used to filter the data returned by the stored procedure.
We mimic `$__timeGroup(time, '5m')` in the select and group by expressions, which is why several lengthy expressions are needed.
These could be extracted into MS SQL functions, if desired.
```sql
CREATE PROCEDURE sp_test_datetime(
@from datetime,
@to datetime
) AS
BEGIN
SELECT
cast(cast(DATEDIFF(second, {d '1970-01-01'}, time)/600 as int)*600 as int) as time,
measurement + ' - value one' as metric,
avg(valueOne) as value
FROM
metric_values
WHERE
time >= @from AND time <= @to
GROUP BY
cast(cast(DATEDIFF(second, {d '1970-01-01'}, time)/600 as int)*600 as int),
measurement
UNION ALL
SELECT
cast(cast(DATEDIFF(second, {d '1970-01-01'}, time)/600 as int)*600 as int) as time,
measurement + ' - value two' as metric,
avg(valueTwo) as value
FROM
metric_values
WHERE
time >= @from AND time <= @to
GROUP BY
cast(cast(DATEDIFF(second, {d '1970-01-01'}, time)/600 as int)*600 as int),
measurement
ORDER BY 1
END
```
Then we can use the following query for our graph panel.
```sql
DECLARE
@from datetime = $__timeFrom(),
@to datetime = $__timeTo()
EXEC dbo.sp_test_datetime @from, @to
```

@ -0,0 +1,95 @@
---
aliases:
- /docs/grafana/latest/data-sources/mssql/template-variables/
description: Using template variables with Microsoft SQL Server in Grafana
keywords:
- grafana
- MSSQL
- Microsoft
- SQL
- Azure SQL Database
- templates
- variables
- queries
menuTitle: Template variables
title: Microsoft SQL Server template variables
weight: 400
---
# Microsoft SQL Server template variables
Instead of hard-coding details such as server, application, and sensor names in metric queries, you can use variables.
Grafana lists these variables in dropdown select boxes at the top of the dashboard to help you change the data displayed in your dashboard.
Grafana refers to such variables as template variables.
For an introduction to templating and template variables, refer to the [Templating]({{< relref "../../../dashboards/variables" >}}) and [Add and manage variables]({{< relref "../../../dashboards/variables/add-template-variables" >}}) documentation.
## Query variable
If you add a template variable of the type `Query`, you can write a MS SQL query that can
return things like measurement names, key names or key values that are shown as a dropdown select box.
For example, you can have a variable that contains all values for the `hostname` column in a table if you specify a query like this in the templating variable **Query** setting.
```sql
SELECT hostname FROM host
```
A query can return multiple columns and Grafana will automatically create a list from them. For example, the query below will return a list with values from `hostname` and `hostname2`.
```sql
SELECT [host].[hostname], [other_host].[hostname2] FROM host JOIN other_host ON [host].[city] = [other_host].[city]
```
Another option is a query that can create a key/value variable. The query should return two columns that are named `__text` and `__value`. The `__text` column value should be unique (if it is not unique then the first value is used). The options in the dropdown will have a text and value that allow you to have a friendly name as text and an id as the value. An example query with `hostname` as the text and `id` as the value:
```sql
SELECT hostname __text, id __value FROM host
```
You can also create nested variables. For example, if you had another variable named `region`, you could have the hosts variable show only hosts from the currently selected region with a query like this (if `region` is a multi-value variable, use the `IN` comparison operator rather than `=` to match against multiple values):
```sql
SELECT hostname FROM host WHERE region IN ($region)
```
## Using variables in queries
> From Grafana 4.3.0 to 4.6.0, template variables are always quoted automatically, so if they are string values, don't wrap them in quotes in WHERE clauses.
>
> From Grafana 5.0.0, template variable values are only quoted when the template variable is a `multi-value`.
If the variable is a multi-value variable then use the `IN` comparison operator rather than `=` to match against multiple values.
There are two syntaxes:
`$<varname>` Example with a template variable named `hostname`:
```sql
SELECT
atimestamp time,
aint value
FROM table
WHERE $__timeFilter(atimestamp) and hostname in($hostname)
ORDER BY atimestamp
```
`[[varname]]` Example with a template variable named `hostname`:
```sql
SELECT
atimestamp as time,
aint as value
FROM table
WHERE $__timeFilter(atimestamp) and hostname in([[hostname]])
ORDER BY atimestamp
```
### Disabling Quoting for Multi-value Variables
Grafana automatically creates a quoted, comma-separated string for multi-value variables. For example, if `server01` and `server02` are selected, they are formatted as `'server01', 'server02'`. To disable quoting, use the CSV formatting option for variables:
`${servers:csv}`
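For example, the following sketch uses a hypothetical multi-value variable `ids` holding numeric values and a hypothetical `host_id` column, where automatic quoting would otherwise break the numeric comparison:

```sql
-- If ids resolves to 1,2,3 the clause becomes: WHERE host_id IN (1,2,3)
SELECT hostname FROM host WHERE host_id IN (${ids:csv})
```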
Read more about variable formatting options in the [Variables]({{< relref "../../../dashboards/variables/variable-syntax#advanced-variable-format-options" >}}) documentation.

@ -1,28 +1,29 @@
---
aliases:
- /docs/grafana/latest/datasources/mysql/
- /docs/grafana/latest/features/datasources/mysql/
- /docs/grafana/latest/datasources/mysql/
- /docs/grafana/latest/data-sources/mysql/
description: Guide for using MySQL in Grafana
keywords:
- grafana
- mysql
- guide
title: MySQL
menuTitle: MySQL
title: MySQL data source
weight: 1000
---
# Using MySQL in Grafana
# MySQL data source
> Starting with Grafana v5.1, you can name the time column _time_ in addition to the previously supported _time_sec_. Usage of _time_sec_ will eventually be deprecated.
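For example, a minimal sketch, assuming a hypothetical `metrics` table with a `created_at` datetime column:

```sql
SELECT
  created_at AS time, -- Grafana v5.1+ accepts "time" in place of "time_sec"
  value
FROM metrics
ORDER BY time
```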
Grafana ships with a built-in MySQL data source plugin that allows you to query and visualize data from a MySQL compatible database.
## Adding the data source
For instructions on how to add a data source to Grafana, refer to the [administration documentation]({{< relref "../../administration/data-source-management/" >}}).
Only users with the organization administrator role can add data sources.
Administrators can also [configure the data source via YAML]({{< relref "#provision-the-data-source" >}}) with Grafana's provisioning system.
1. Open the side menu by clicking the Grafana icon in the top header.
1. In the side menu under the `Dashboards` link you should find a link named `Data Sources`.
1. Click the `+ Add data source` button in the top header.
1. Select _MySQL_ from the _Type_ dropdown.
## Configure the data source
### Data source options
@ -41,10 +42,9 @@ Grafana ships with a built-in MySQL data source plugin that allows you to query
### Min time interval
A lower limit for the [$__interval]({{< relref "../dashboards/variables/add-template-variables/#__interval" >}}) and [$__interval_ms]({{< relref "../dashboards/variables/add-template-variables/#__interval_ms" >}}) variables.
Recommended to be set to write frequency, for example `1m` if your data is written every minute.
This option can also be overridden/configured in a dashboard panel under data source options. It's important to note that this value **needs** to be formatted as a
number followed by a valid time identifier, e.g. `1m` (1 minute) or `30s` (30 seconds). The following time identifiers are supported:
The **Min time interval** setting defines a lower limit for the [`$__interval`]({{< relref "../../dashboards/variables/add-template-variables#__interval" >}}) and [`$__interval_ms`]({{< relref "../../dashboards/variables/add-template-variables#__interval_ms" >}}) variables.
This value _must_ be formatted as a number followed by a valid time identifier:
| Identifier | Description |
| ---------- | ----------- |
@ -57,6 +57,11 @@ number followed by a valid time identifier, e.g. `1m` (1 minute) or `30s` (30 se
| `s` | second |
| `ms` | millisecond |
We recommend setting this value to match your MySQL write frequency.
For example, use `1m` if MySQL writes data every minute.
You can also override this setting in a dashboard panel under its data source options.
### Database User Permissions (Important!)
The database user you specify when you add the data source should only be granted SELECT permissions on
@ -73,11 +78,40 @@ Example:
You can use wildcards (`*`) in place of database or table if you want to grant access to more databases and tables.
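For example, a hedged sketch of such a wildcard grant, assuming a read-only user named `grafanareader` and a database named `grafana`:

```sql
-- Grant SELECT on every table in the grafana database
GRANT SELECT ON grafana.* TO 'grafanareader'@'%';
```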
### Provision the data source
You can define and configure the data source in YAML files as part of Grafana's provisioning system.
For more information about provisioning, and for available configuration options, refer to [Provisioning Grafana]({{< relref "../../administration/provisioning/#data-sources" >}}).
#### Provisioning example
```yaml
apiVersion: 1
datasources:
- name: MySQL
type: mysql
url: localhost:3306
database: grafana
user: grafana
jsonData:
maxOpenConns: 0 # Grafana v5.4+
maxIdleConns: 2 # Grafana v5.4+
connMaxLifetime: 14400 # Grafana v5.4+
secureJsonData:
password: ${GRAFANA_MYSQL_PASSWORD}
```
## Query builder
{{< figure src="/static/img/docs/v92/mysql_query_builder.png" class="docs-image--no-shadow" >}}
The MySQL query builder is available when editing a panel using a MySQL data source. The built query can be run by pressing the `Run query` button in the top right corner of the editor.
The MySQL query builder is available when editing a panel using a MySQL data source.
This topic explains querying specific to the MySQL data source.
For general documentation on querying data sources in Grafana, see [Query and transform data]({{< relref "../../panels-visualizations/query-transform-data/" >}}).
You can run the built query by pressing the `Run query` button in the top right corner of the editor.
### Format
@ -178,11 +212,11 @@ The resulting table panel:
If you set **Format as** to _Time series_, then the query must have a column named `time` that returns either a SQL datetime or any numeric datatype representing Unix epoch in seconds. In addition, result sets of time series queries must be sorted by time for panels to properly visualize the result.
A time series query result is returned in a [wide data frame format]({{< relref "../developers/plugins/data-frames/#wide-format" >}}). Any column except time or of type string transforms into value fields in the data frame query result. Any string column transforms into field labels in the data frame query result.
A time series query result is returned in a [wide data frame format]({{< relref "../../developers/plugins/data-frames#wide-format" >}}). Any column except time or of type string transforms into value fields in the data frame query result. Any string column transforms into field labels in the data frame query result.
> For backward compatibility, there's an exception to the above rule for queries that return three columns including a string column named metric. Instead of transforming the metric column into field labels, it becomes the field name, and then the series name is formatted as the value of the metric column. See the example with the metric column below.
To optionally customize the default series name formatting, refer to [Standard options definitions]({{< relref "../panels-visualizations/configure-standard-options/#display-name" >}}).
To optionally customize the default series name formatting, refer to [Standard options definitions]({{< relref "../../panels-visualizations/configure-standard-options#display-name" >}}).
**Example with `metric` column:**
@ -224,7 +258,7 @@ GROUP BY time, hostname
ORDER BY time
```
Given the data frame result in the following example and using the graph panel, you will get two series named _value 10.0.1.1_ and _value 10.0.1.2_. To render the series with names of _10.0.1.1_ and _10.0.1.2_, use a [Standard options definitions]({{< relref "../panels-visualizations/configure-standard-options/#display-name" >}}) display value of `${__field.labels.hostname}`.
Given the data frame result in the following example and using the graph panel, you will get two series named _value 10.0.1.1_ and _value 10.0.1.2_. To render the series with names of _10.0.1.1_ and _10.0.1.2_, use a [Standard options definitions]({{< relref "../../panels-visualizations/configure-standard-options#display-name" >}}) display value of `${__field.labels.hostname}`.
Data frame result:
@ -274,7 +308,7 @@ This feature is currently available in the nightly builds and will be included i
Instead of hard-coding things like server, application and sensor name in your metric queries you can use variables in their place. Variables are shown as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the data being displayed in your dashboard.
Check out the [Templating]({{< relref "../dashboards/variables/" >}}) documentation for an introduction to the templating feature and the different types of template variables.
Check out the [Templating]({{< relref "../../dashboards/variables" >}}) documentation for an introduction to the templating feature and the different types of template variables.
### Query Variable
@ -369,11 +403,11 @@ Grafana automatically creates a quoted, comma-separated string for multi-value v
`${servers:csv}`
Read more about variable formatting options in the [Variables]({{< relref "../dashboards/variables/variable-syntax/#advanced-variable-format-options" >}}) documentation.
Read more about variable formatting options in the [Variables]({{< relref "../../dashboards/variables/variable-syntax#advanced-variable-format-options" >}}) documentation.
## Annotations
[Annotations]({{< relref "../dashboards/build-dashboards/annotate-visualizations" >}}) allow you to overlay rich event information on top of graphs. You add annotation queries via the Dashboard menu / Annotations view.
[Annotations]({{< relref "../../dashboards/build-dashboards/annotate-visualizations" >}}) allow you to overlay rich event information on top of graphs. You add annotation queries via the Dashboard menu / Annotations view.
**Example query using time column with epoch values:**
@ -427,26 +461,3 @@ WHERE
## Alerting
Time series queries should work in alerting conditions. Table formatted queries are not yet supported in alert rule conditions.
## Configure the data source with provisioning
It's now possible to configure data sources using config files with Grafana's provisioning system. You can read more about how it works and all the settings you can set for data sources on the [provisioning docs page]({{< relref "../administration/provisioning/#datasources" >}})
Here are some provisioning examples for this data source.
```yaml
apiVersion: 1
datasources:
- name: MySQL
type: mysql
url: localhost:3306
database: grafana
user: grafana
jsonData:
maxOpenConns: 0 # Grafana v5.4+
maxIdleConns: 2 # Grafana v5.4+
connMaxLifetime: 14400 # Grafana v5.4+
secureJsonData:
password: ${GRAFANA_MYSQL_PASSWORD}
```

@ -1,34 +1,61 @@
---
aliases:
- /docs/grafana/latest/datasources/opentsdb/
- /docs/grafana/latest/features/datasources/opentsdb/
- /docs/grafana/latest/features/opentsdb/
- /docs/grafana/latest/features/datasources/opentsdb/
- /docs/grafana/latest/datasources/opentsdb/
- /docs/grafana/latest/data-sources/opentsdb/
description: Guide for using OpenTSDB in Grafana
keywords:
- grafana
- opentsdb
- guide
title: OpenTSDB
menuTitle: OpenTSDB
title: OpenTSDB data source
weight: 1100
---
# Using OpenTSDB in Grafana
# OpenTSDB data source
Grafana ships with advanced support for OpenTSDB. This topic explains options, variables, querying, and other options specific to the OpenTSDB data source. Refer to [Add a data source]({{< relref "add-a-data-source/" >}}) for instructions on how to add a data source to Grafana. Only users with the organization admin role can add data sources.
Grafana ships with advanced support for OpenTSDB.
This topic explains configuration, variables, querying, and other features specific to the OpenTSDB data source.
For instructions on how to add a data source to Grafana, refer to the [administration documentation]({{< relref "../../administration/data-source-management/" >}}).
Only users with the organization administrator role can add data sources.
Administrators can also [configure the data source via YAML]({{< relref "#provision-the-data-source" >}}) with Grafana's provisioning system.
## OpenTSDB settings
To access OpenTSDB settings, hover your mouse over the **Configuration** (gear) icon, then click **Data Sources**, and then click the OpenTSDB data source.
| Name | Description |
| ----------------- | --------------------------------------------------------------------------------------- |
| `Name` | The data source name. This is how you refer to the data source in panels and queries. |
| `Default` | Default data source means that it will be pre-selected for new panels. |
| `URL` | The HTTP protocol, IP, and port of your OpenTSDB server (default port is usually 4242) |
| `Allowed cookies` | List the names of cookies to forward to the data source. |
| `Version` | Version = opentsdb version, either <=2.1 or 2.2 |
| `Resolution` | Metrics from opentsdb may have datapoints with either second or millisecond resolution. |
| `Lookup limit` | Default is 1000. |
| Name | Description |
| ------------------- | --------------------------------------------------------------------------------------- |
| **Name** | The data source name. This is how you refer to the data source in panels and queries. |
| **Default** | Default data source means that it will be pre-selected for new panels. |
| **URL** | The HTTP protocol, IP, and port of your OpenTSDB server (default port is usually 4242) |
| **Allowed cookies** | List the names of cookies to forward to the data source. |
| **Version**         | The OpenTSDB version, either <=2.1 or 2.2.                                                |
| **Resolution**      | Metrics from OpenTSDB may have data points with either second or millisecond resolution. |
| **Lookup limit** | Default is 1000. |
### Provision the data source
You can define and configure the data source in YAML files as part of Grafana's provisioning system.
For more information about provisioning, and for available configuration options, refer to [Provisioning Grafana]({{< relref "../../administration/provisioning/#data-sources" >}}).
#### Provisioning example
```yaml
apiVersion: 1
datasources:
- name: OpenTsdb
type: opentsdb
access: proxy
url: http://localhost:4242
jsonData:
tsdbResolution: 1
tsdbVersion: 1
```
## Query editor
@ -51,7 +78,7 @@ Instead of hard-coding things like server, application and sensor name in your m
Variables are shown as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the data
being displayed in your dashboard.
Check out the [Templating]({{< relref "../dashboards/variables/" >}}) documentation for an introduction to the templating feature and the different
Check out the [Templating]({{< relref "../../dashboards/variables/" >}}) documentation for an introduction to the templating feature and the different
types of template variables.
### Query variable
@ -85,22 +112,3 @@ Some examples are mentioned below to make nested template queries work successfu
| `tag_values(cpu, hostname, env=$env, region=$region)` | Return tag values for cpu metric, selected env tag value, selected region tag value and tag key hostname |
For details on OpenTSDB metric queries, check out the official [OpenTSDB documentation](http://opentsdb.net/docs/build/html/index.html)
## Configure the data source with provisioning
It's now possible to configure data sources using config files with Grafana's provisioning system. You can read more about how it works and all the settings you can set for data sources on the [provisioning docs page]({{< relref "../administration/provisioning/#datasources" >}})
Here are some provisioning examples for this data source.
```yaml
apiVersion: 1
datasources:
- name: OpenTsdb
type: opentsdb
access: proxy
url: http://localhost:4242
jsonData:
tsdbResolution: 1
tsdbVersion: 1
```

@ -1,44 +1,50 @@
---
aliases:
- /docs/grafana/latest/datasources/postgres/
- /docs/grafana/latest/features/datasources/postgres/
- /docs/grafana/latest/datasources/postgres/
- /docs/grafana/latest/data-sources/postgres/
description: Guide for using PostgreSQL in Grafana
keywords:
- grafana
- postgresql
- guide
title: PostgreSQL
menuTitle: PostgreSQL
title: PostgreSQL data source
weight: 1200
---
# PostgreSQL data source
Grafana ships with a built-in PostgreSQL data source plugin that allows you to query and visualize data from a PostgreSQL compatible database. This topic explains options, variables, querying, and other options specific to this data source. For instructions about how to add a data source to Grafana, refer to [Add a data source]({{< relref "add-a-data-source/" >}}). Only users with the organization admin role can add data sources.
Grafana ships with a built-in PostgreSQL data source plugin that allows you to query and visualize data from a PostgreSQL compatible database.
For instructions on how to add a data source to Grafana, refer to the [administration documentation]({{< relref "../../administration/data-source-management/" >}}).
Only users with the organization administrator role can add data sources.
Administrators can also [configure the data source via YAML]({{< relref "#provision-the-data-source" >}}) with Grafana's provisioning system.
## PostgreSQL settings
To access PostgreSQL settings, hover your mouse over the **Configuration** (gear) icon, then click **Data Sources**, and then click the PostgreSQL data source.
| Name | Description |
| ------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Name` | The data source name. This is how you refer to the data source in panels and queries. |
| `Default` | Default data source means that it will be pre-selected for new panels. |
| `Host` | The IP address/hostname and optional port of your PostgreSQL instance. _Do not_ include the database name. The connection string for connecting to Postgres will not be correct and it may cause errors. |
| `Database` | Name of your PostgreSQL database. |
| `User` | Database user's login/username |
| `Password` | Database user's password |
| `SSL Mode` | Determines whether or with what priority a secure SSL TCP/IP connection will be negotiated with the server. When SSL Mode is disabled, SSL Method and Auth Details would not be visible. |
| `SSL Auth Details Method` | Determines whether the SSL Auth details will be configured as a file path or file content. Grafana v7.5+ |
| `SSL Auth Details Value` | File path or file content of SSL root certificate, client certificate and client key |
| `Max open` | The maximum number of open connections to the database, default `unlimited` (Grafana v5.4+). |
| `Max idle` | The maximum number of connections in the idle connection pool, default `2` (Grafana v5.4+). |
| `Max lifetime` | The maximum amount of time in seconds a connection may be reused, default `14400`/4 hours (Grafana v5.4+). |
| `Version` | Determines which functions are available in the query builder (only available in Grafana 5.3+). |
| `TimescaleDB` | A time-series database built as a PostgreSQL extension. When enabled, Grafana uses `time_bucket` in the `$__timeGroup` macro to display TimescaleDB specific aggregate functions in the query builder (only available in Grafana 5.3+). For more information, see [TimescaleDB documentation](https://docs.timescale.com/timescaledb/latest/tutorials/grafana/grafana-timescalecloud/#connect-timescaledb-and-grafana). |
| Name | Description |
| --------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Name** | The data source name. This is how you refer to the data source in panels and queries. |
| **Default** | Default data source means that it will be pre-selected for new panels. |
| **Host**                    | The IP address/hostname and optional port of your PostgreSQL instance. _Do not_ include the database name; the connection string would then be invalid and may cause errors.                                                                                                                                                                                                                                              |
| **Database** | Name of your PostgreSQL database. |
| **User** | Database user's login/username |
| **Password** | Database user's password |
| **SSL Mode**                | Determines whether or with what priority a secure SSL TCP/IP connection will be negotiated with the server. When SSL Mode is disabled, SSL Method and Auth Details are not visible.                                                                                                                                                                                                                                       |
| **SSL Auth Details Method** | Determines whether the SSL Auth details will be configured as a file path or file content. Grafana v7.5+ |
| **SSL Auth Details Value** | File path or file content of SSL root certificate, client certificate and client key |
| **Max open** | The maximum number of open connections to the database, default `unlimited` (Grafana v5.4+). |
| **Max idle** | The maximum number of connections in the idle connection pool, default `2` (Grafana v5.4+). |
| **Max lifetime** | The maximum amount of time in seconds a connection may be reused, default `14400`/4 hours (Grafana v5.4+). |
| **Version** | Determines which functions are available in the query builder (only available in Grafana 5.3+). |
| **TimescaleDB** | A time-series database built as a PostgreSQL extension. When enabled, Grafana uses `time_bucket` in the `$__timeGroup` macro to display TimescaleDB specific aggregate functions in the query builder (only available in Grafana 5.3+). For more information, see [TimescaleDB documentation](https://docs.timescale.com/timescaledb/latest/tutorials/grafana/grafana-timescalecloud/#connect-timescaledb-and-grafana). |
### Min time interval
A lower limit for the [$__interval]({{< relref "../dashboards/variables/add-template-variables/#__interval" >}}) and [$__interval_ms]({{< relref "../dashboards/variables/add-template-variables/#__interval_ms" >}}) variables.
A lower limit for the [$__interval]({{< relref "../../dashboards/variables/add-template-variables/#__interval" >}}) and [$__interval_ms]({{< relref "../../dashboards/variables/add-template-variables/#__interval_ms" >}}) variables.
Recommended to be set to write frequency, for example `1m` if your data is written every minute.
This option can also be overridden/configured in a dashboard panel under data source options. It's important to note that this value **needs** to be formatted as a
number followed by a valid time identifier, e.g. `1m` (1 minute) or `30s` (30 seconds). The following time identifiers are supported:
@ -73,7 +79,7 @@ Make sure the user does not get any unwanted privileges from the public role.
## Query builder
{{< figure src="/static/img/docs/v92/postgresql_query_builder.png" class="docs-image--no-shadow" >}}
{{< figure src="/static/img/docs/v92/postgresql_query_builder.png" class="docs-image--no-shadow" caption="PostgreSQL query builder" >}}
The PostgreSQL query builder is available when editing a panel using a PostgreSQL data source. The built query can be run by pressing the `Run query` button in the top right corner of the editor.
@ -107,6 +113,41 @@ To group the results by column, flip the group switch at the top of the editor.
By flipping the preview switch at the top of the editor, you can get a preview of the SQL query generated by the query builder.
### Provision the data source
You can define and configure the data source in YAML files as part of Grafana's provisioning system. For more information about provisioning, and for available configuration options, refer to [Provisioning Grafana]({{< relref "../../administration/provisioning#datasources" >}}).
#### Provisioning example
```yaml
apiVersion: 1
datasources:
- name: Postgres
type: postgres
url: localhost:5432
database: grafana
user: grafana
secureJsonData:
password: 'Password!'
jsonData:
sslmode: 'disable' # disable/require/verify-ca/verify-full
maxOpenConns: 0 # Grafana v5.4+
maxIdleConns: 2 # Grafana v5.4+
connMaxLifetime: 14400 # Grafana v5.4+
postgresVersion: 903 # 903=9.3, 904=9.4, 905=9.5, 906=9.6, 1000=10
timescaledb: false
```
> **Note:** In the above code, the `postgresVersion` value `1000` refers to PostgreSQL version 10 and later.
#### Troubleshoot provisioning
If you encounter metric request errors or other issues:
- Make sure your data source YAML file parameters exactly match the example. This includes parameter names and use of quotation marks.
- Make sure the `database` name is not included in the `url`.
## Code editor
{{< figure src="/static/img/docs/v92/sql_code_editor.png" class="docs-image--no-shadow" >}}
@ -174,11 +215,11 @@ The resulting table panel:
If you set **Format as** to _Time series_, then the query must have a column named `time` that returns either a SQL datetime or any numeric datatype representing Unix epoch in seconds. In addition, result sets of time series queries must be sorted by time for panels to properly visualize the result.
A time series query result is returned in a [wide data frame format]({{< relref "../developers/plugins/data-frames/#wide-format" >}}). Any column except time or of type string transforms into value fields in the data frame query result. Any string column transforms into field labels in the data frame query result.
A time series query result is returned in a [wide data frame format]({{< relref "../../developers/plugins/data-frames#wide-format" >}}). Any column except time or of type string transforms into value fields in the data frame query result. Any string column transforms into field labels in the data frame query result.
> For backward compatibility, there's an exception to the above rule for queries that return three columns including a string column named metric. Instead of transforming the metric column into field labels, it becomes the field name, and then the series name is formatted as the value of the metric column. See the example with the metric column below.
To optionally customize the default series name formatting, refer to [Standard options definitions]({{< relref "../panels-visualizations/configure-standard-options/#display-name" >}}).
To optionally customize the default series name formatting, refer to [Standard options definitions]({{< relref "../../panels-visualizations/configure-standard-options#display-name" >}}).
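For instance, a minimal time series query might look like the following sketch; the `metric_values` table and its columns are illustrative examples, not part of any Grafana schema:

```sql
SELECT
  created_at AS time, -- must be a SQL datetime or Unix epoch in seconds
  hostname,           -- string column: becomes a field label
  value               -- numeric column: becomes a value field
FROM metric_values
ORDER BY created_at   -- time series results must be sorted by time
```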
**Example with `metric` column:**
@ -220,7 +261,7 @@ GROUP BY time, hostname
ORDER BY time
```
Given the data frame result in the following example and using the graph panel, you will get two series named _value 10.0.1.1_ and _value 10.0.1.2_. To render the series with a name of _10.0.1.1_ and _10.0.1.2_, use a [Standard options definitions]({{< relref "../panels-visualizations/configure-standard-options/#display-name" >}}) display value of `${__field.labels.hostname}`.
Given the data frame result in the following example and using the graph panel, you will get two series named _value 10.0.1.1_ and _value 10.0.1.2_. To render the series with a name of _10.0.1.1_ and _10.0.1.2_, use a [Standard options definitions]({{< relref "../../panels-visualizations/configure-standard-options#display-name" >}}) display value of `${__field.labels.hostname}`.
Data frame result:
@ -265,7 +306,7 @@ Data frame result:
Instead of hard-coding things like server, application and sensor name in your metric queries you can use variables in their place. Variables are shown as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the data being displayed in your dashboard.
Refer to [Templates and variables]({{< relref "../dashboards/variables" >}}) for an introduction to the templating feature and the different types of template variables.
Refer to [Templates and variables]({{< relref "../../dashboards/variables" >}}) for an introduction to the templating feature and the different types of template variables.
### Query variable
@ -358,11 +399,11 @@ Grafana automatically creates a quoted, comma-separated string for multi-value v
`${servers:csv}`
Read more about variable formatting options in the [Variables]({{< relref "../dashboards/variables/variable-syntax/#advanced-variable-format-options" >}}) documentation.
Read more about variable formatting options in the [Variables]({{< relref "../../dashboards/variables/variable-syntax#advanced-variable-format-options" >}}) documentation.
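As a sketch of how a multi-value variable fits into a PostgreSQL query, assuming a hypothetical `metric_values` table and a multi-value `$hostname` variable:

```sql
SELECT
  time,
  hostname,
  value
FROM metric_values
WHERE
  $__timeFilter(time)          -- Grafana macro that expands to the dashboard time range
  AND hostname IN ($hostname)  -- multi-value variable expands to a quoted, comma-separated list
ORDER BY time
```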
## Annotations
[Annotations]({{< relref "../dashboards/build-dashboards/annotate-visualizations" >}}) allow you to overlay rich event information on top of graphs. You add annotation queries via the Dashboard menu / Annotations view.
[Annotations]({{< relref "../../dashboards/build-dashboards/annotate-visualizations" >}}) allow you to overlay rich event information on top of graphs. You add annotation queries via the Dashboard menu / Annotations view.
**Example query using time column with epoch values:**
@ -417,36 +458,3 @@ WHERE
Time series queries should work in alerting conditions. Table formatted queries are not yet supported in alert rule
conditions.
## Configure the data source with provisioning
It's now possible to configure data sources using config files with Grafana's provisioning system. You can read more about how it works and all the settings you can set for data sources on the [provisioning docs page]({{< relref "../administration/provisioning/#datasources" >}}).
Here are some provisioning examples for this data source.
```yaml
apiVersion: 1
datasources:
- name: Postgres
type: postgres
url: localhost:5432
database: grafana
user: grafana
secureJsonData:
password: 'Password!'
jsonData:
sslmode: 'disable' # disable/require/verify-ca/verify-full
maxOpenConns: 0 # Grafana v5.4+
maxIdleConns: 2 # Grafana v5.4+
connMaxLifetime: 14400 # Grafana v5.4+
postgresVersion: 903 # 903=9.3, 904=9.4, 905=9.5, 906=9.6, 1000=10
timescaledb: false
```
> **Note:** In the above code, the `postgresVersion` value of `1000` refers to PostgreSQL version 10 and higher.
If you encounter metric request errors or other issues:
- Make sure your data source YAML file parameters exactly match the example. This includes parameter names and use of quotation marks.
- Make sure the `database` name is not included in the `url`.
@ -1,319 +0,0 @@
---
aliases:
- /docs/grafana/latest/datasources/prometheus/
- /docs/grafana/latest/features/datasources/prometheus/
description: Guide for using Prometheus in Grafana
keywords:
- grafana
- prometheus
- guide
title: Prometheus
weight: 1300
---
# Prometheus data source
Grafana includes built-in support for Prometheus. This topic explains options, variables, querying, and other options specific to the Prometheus data source. Refer to [Add a data source]({{< relref "add-a-data-source/" >}}) for instructions on how to add a data source to Grafana. Only users with the organization admin role can add data sources.
> **Note:** You can use [Grafana Cloud](https://grafana.com/products/cloud/features/#cloud-logs) to avoid the overhead of installing, maintaining, and scaling your observability stack. The free forever plan includes Grafana, 10K Prometheus series, 50 GB logs, and more. [Create a free account to get started](https://grafana.com/auth/sign-up/create-user?pg=docs-grafana-install&plcmt=in-text).
## Prometheus settings
To access Prometheus settings, hover your mouse over the **Configuration** (gear) icon, then click **Data Sources**, and then click the Prometheus data source.
| Name | Description |
| --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Name` | The data source name. This is how you refer to the data source in panels and queries. |
| `Default` | Default data source that is pre-selected for new panels. |
| `Url` | The URL of your Prometheus server, for example, `http://prometheus.example.org:9090`. |
| `Access` | Only Server access mode is functional. If Server mode is already selected, this option is hidden. Otherwise, change to Server mode to prevent errors. |
| `Basic Auth` | Enable basic authentication to the Prometheus data source. |
| `User` | User name for basic authentication. |
| `Password` | Password for basic authentication. |
| `Scrape interval` | Set this to the typical scrape and evaluation interval configured in Prometheus. Defaults to 15s. |
| `HTTP method` | Use either POST or GET HTTP method to query your data source. POST is the recommended and pre-selected method as it allows bigger queries. Change this to GET if you have a Prometheus version older than 2.1 or if POST requests are restricted in your network. |
| `Type` | The type of your Prometheus server, such as `Prometheus`, `Cortex`, `Thanos`, or `Mimir`. When this value is selected in the configuration UI, the Prometheus version field attempts to detect the version automatically using the Prometheus [buildinfo](https://semver.org/) API. Some Prometheus types don't support this API, and you must manually populate the version in those cases (for example, Cortex). |
| `Version` | The version of your Prometheus server. This field is not visible until the Prometheus type is selected. |
| `Disable metrics lookup` | Checking this option will disable the metrics chooser and metric/label support in the query field's autocomplete. This helps if you have performance issues with bigger Prometheus instances. |
| `Custom Query Parameters` | Add custom parameters to the Prometheus query URL. For example `timeout`, `partial_response`, `dedup`, or `max_source_resolution`. Multiple parameters should be concatenated together with an '&amp;'. |
| **Exemplars configuration** | |
| `Internal link` | Enable this option if you have an internal link. When you enable this option, you will see a data source selector. Select the backend tracing data store for your exemplar data. |
| `Data source` | You will see this option only if you enable `Internal link` option. Select the backend tracing data store for your exemplar data. |
| `URL` | You will see this option only if the `Internal link` option is disabled. Enter the full URL of the external link. You can interpolate the value from the field with `${__value.raw }` macro. |
| `URL Label` | (Optional) add a custom display label to override the value of the `Label name` field. |
| `Label name` | Add a name for the exemplar traceID property. |
## Prometheus query editor
The Prometheus query editor has two distinct modes that you can switch between. See the docs for each mode in the sections below.
![Editor toolbar](/static/img/docs/prometheus/header-9-1.png 'Editor toolbar')
At the top of the editor, select `Run queries` to run a query. Select `Builder | Code` tabs to switch between the editor modes. If the query editor is in Builder mode, there are additional elements explained in the Builder section.
> **Note:** In Explore, to run Prometheus queries, select `Run query`.
Each mode is synchronized with the other modes, so you can switch between them without losing your work, although there are some limitations. Some more complex queries are not yet supported in Builder mode. If you try to switch from `Code` to `Builder` with such a query, the editor shows a popup explaining that you can lose some parts of the query, and you can decide whether you still want to continue to `Builder` mode.
### Code mode
![Code mode](/static/img/docs/prometheus/code-mode-9-1.png 'Code mode')
Code mode allows you to write raw queries in a textual editor. It implements advanced autocomplete features and syntax highlighting to help with writing complex queries. It also contains a `Metrics browser` to further aid with writing queries (see more below).
For more information about Prometheus query language, refer to the [Prometheus documentation](http://prometheus.io/docs/querying/basics/).
#### Autocomplete
![Autocomplete](/static/img/docs/prometheus/autocomplete-9-1.png 'Autocomplete')
Autocomplete kicks in automatically at appropriate times while you type. Use `ctrl/cmd + space` to trigger autocomplete manually when needed. Autocomplete can suggest static items such as functions, aggregations, and keywords, as well as dynamic items such as metrics and labels. The autocomplete dropdown also shows documentation for the suggested items, either static documentation or dynamic metric documentation where available.
In [Explore]({{< relref "../explore/" >}}) use `shift + enter` to run the query.
#### Metrics browser
The metrics browser allows you to quickly find metrics and select relevant labels to build basic queries.
When you open the browser you will see all available metrics and labels.
If supported by your Prometheus instance, each metric will show its HELP and TYPE as a tooltip.
![Metrics browser](/static/img/docs/prometheus/metric-browser-9-1.png 'Metrics browser')
When you select a metric, the browser narrows down the available labels to show only the ones applicable to the metric.
You can then select one or more labels for which the available label values are shown in lists in the bottom section.
Select one or more values for each label to tighten your query scope.
> **Note:** If you do not remember a metric name to start with, you can also select a few labels first, to narrow down the list and then find relevant label values.
All lists in the metrics browser have a search field above them to quickly filter for metrics or labels that match a certain string. The values section has only one search field, and its filtering applies to all labels to help you find values across labels once they have been selected; for example, among your labels `app`, `job`, and `job_name`, only one might have the value you are looking for.
Once you are satisfied with your query, click "Use query" to run the query. The "Use as rate query" button adds a `rate(...)[$__interval]` around your query to help write queries for counter metrics.
The "Validate selector" button checks with Prometheus how many time series are available for that selector.
#### Options
![Options](/static/img/docs/prometheus/options-8-5.png 'Options')
| Name | Description |
| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Legend` | Controls the name of the time series. Use a predefined format or a custom format.<br/>`Auto` - if there is a single label, shows just the value of that label for each series. If there are multiple labels, works the same as `Verbose`.<br/>`Verbose` - includes all labels.<br/>`Custom` - the select changes to a text input. Use templating to select which labels are included. For example, `{{hostname}}` is replaced by the label value for the label `hostname`. Clear the input and click outside the input to go back to select mode. |
| `Min step` | Sets the lower bound on the interval between data points. For example, set "1h" to hint that measurements are not frequent (taken hourly). `$__interval` and `$__rate_interval` are supported. |
| `Format` | You can switch between `Table` `Time series` or `Heatmap` options. The `Table` option works only in the Table panel. `Heatmap` displays metrics of the Histogram type on a Heatmap panel. Under the hood, it converts cumulative histograms to regular ones and sorts series by the bucket bound. |
| `Type` | `Range` - Query returning a Range vector, a set of time series containing a range of data points over time for each time series.<br/>`Instant` - Perform an "instant" query to return only the latest value that Prometheus has scraped for the requested time series. Instant queries can return results much faster than normal range queries. Use them to look up label sets. Instant query results are made up only of one data point per series and can be shown in the time series panel by adding a field override, adding a property to the override named `Transform`, and selecting `Constant` from the **Transform** dropdown. For more information, refer to [Transform]({{< relref "../panels-visualizations/visualizations/time-series/#transform" >}}). |
| `Exemplars` | If on, run exemplars query with the regular query and show exemplars in the graph. |
> **Note:** Grafana modifies the request dates for queries to align them with the dynamically calculated step. This ensures consistent display of metrics data, but it can result in a small gap of data at the right edge of a graph.
### Builder mode
The following video demonstrates how to use the visual Prometheus query builder available in Grafana version 9.0.
{{< vimeo 720004179 >}}
</br>
#### Toolbar
In addition to the `Run query` button and mode switcher, the following additional elements are available in builder mode:
| Name | Description |
| -------------- | --------------------------------------------------------------------------------------------------------------------------------- |
| Query patterns | A list of useful operation patterns that can be used to quickly add multiple operations to your query to achieve a specific goal. |
| Explain | Toggle to show a step by step explanation of all query parts and the operations. |
| Raw query | Toggle to show the raw query generated by the builder that will be sent to the Prometheus instance. |
#### Metric and labels
![Metric and labels](/static/img/docs/prometheus/metric-select-8-5.png 'Metric and labels')
Select a specific metric name from the dropdown list. The list of available metrics is fetched from the Prometheus server based on the selected time range. Type into the select when the dropdown is open to search and filter the list.
Select desired labels and their values from the dropdown list. When a metric is selected, available labels and their values are fetched from the server. Use the `+` button to add more labels. Use the `x` button to remove a label.
#### Operations
![Operations](/static/img/docs/prometheus/operations-9-1.gif 'Operations')
Use the `+ Operations` button to add operations to your query. Operations are grouped into sections for easier navigation. When the operations dropdown is open, type into the search input to search and filter the operations list.
Operations in a query are shown as boxes in the operations section. Each has a header with a name and additional action buttons. Hover over the operation header to show the action buttons. Click the `v` button to quickly replace the operation with a different one of the same type. Click the `info` button to open the operation's description tooltip. Click the `x` button to remove the operation.
Operations can have additional parameters under the operation header. See the operation description or the Prometheus docs for more details about each operation.
Some operations make sense only in a specific order. If adding an operation would result in a nonsensical query, the operation is added to the correct place. To order operations manually, drag the operation box by the operation name and drop it in the appropriate place.
##### Hints
![Hint](/static/img/docs/prometheus/hint-8-5.gif 'Hint')
In some cases the query editor can detect which operations would be most appropriate for a selected metric. In such cases it shows a hint next to the `+ Operations` button. Click the hint to add the operations to your query.
### Explain
![Explain mode](/static/img/docs/prometheus/explain-9-1.gif 'Explain mode')
Explain mode helps with understanding the query. It shows a step by step explanation of all query parts and the operations.
#### Raw query
![Raw query](/static/img/docs/prometheus/raw-query-9-1.gif 'Raw query')
This section is shown only if the `Raw query` switch from the query editor top toolbar is set to `on`. It shows the raw query that will be created and executed by the query editor.
#### Options
The same set of options is available as in `Code` mode. See the [Code mode options]({{< relref "#options" >}}) for details.
## Templating
Instead of hard-coding things like server, application and sensor name in your metric queries, you can use variables in their place.
Variables are shown as dropdown select boxes at the top of the dashboard. These dropdowns make it easy to change the data
being displayed in your dashboard.
Check out the [Templating]({{< relref "../dashboards/variables" >}}) documentation for an introduction to the templating feature and the different
types of template variables.
### Query variable
Variable of the type _Query_ allows you to query Prometheus for a list of metrics, labels or label values. The Prometheus data source plugin
provides the following functions you can use in the `Query` input field.
| Name | Description | Used API endpoints |
| ----------------------------- | ----------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------- |
| `label_names()` | Returns a list of label names. | /api/v1/labels |
| `label_values(label)` | Returns a list of label values for the `label` in every metric. | /api/v1/label/`label`/values |
| `label_values(metric, label)` | Returns a list of label values for the `label` in the specified metric. | /api/v1/series or /api/v1/label/`label`/values, depending on prometheus type and version in datasource configuration |
| `metrics(metric)` | Returns a list of metrics matching the specified `metric` regex. | /api/v1/label/\_\_name\_\_/values |
| `query_result(query)` | Returns a list of Prometheus query results for the `query`. | /api/v1/query |
For details of what _metric names_, _label names_, and _label values_ are, refer to the [Prometheus documentation](http://prometheus.io/docs/concepts/data_model/#metric-names-and-labels).
#### Using interval and range variables
> Support for `$__range`, `$__range_s` and `$__range_ms` only available from Grafana v5.3
You can use some global built-in variables in query variables, for example, `$__interval`, `$__interval_ms`, `$__range`, `$__range_s` and `$__range_ms`. See [Global built-in variables]({{< relref "../dashboards/variables/add-template-variables/#global-variables" >}}) for more information. They are convenient to use in conjunction with the `query_result` function when you need to filter variable queries since the `label_values` function doesn't support queries.
Make sure to set the variable's `refresh` trigger to be `On Time Range Change` to get the correct instances when changing the time range on the dashboard.
**Example usage:**
Populate a variable with the busiest 5 request instances based on average QPS over the time range shown in the dashboard:
```
Query: query_result(topk(5, sum(rate(http_requests_total[$__range])) by (instance)))
Regex: /"([^"]+)"/
```
Populate a variable with the instances having a certain state over the time range shown in the dashboard, using `$__range_s`:
```
Query: query_result(max_over_time(<metric>[${__range_s}s]) != <state>)
Regex:
```
### Using `$__rate_interval`
> **Note:** Available in Grafana 7.2 and above
`$__rate_interval` is the recommended interval to use in the `rate` and `increase` functions. It will "just work" in most cases, avoiding most of the pitfalls that can occur when using a fixed interval or `$__interval`.
```
OK: rate(http_requests_total[5m])
Better: rate(http_requests_total[$__rate_interval])
```
Details: `$__rate_interval` is defined as max(`$__interval` + _Scrape interval_, 4 \* _Scrape interval_), where _Scrape interval_ is the Min step setting (also known as `query_interval`, a per-PromQL-query setting) if any is set. Otherwise, the Scrape interval setting in the Prometheus data source is used. (The Min interval setting in the panel is modified by the resolution setting and therefore doesn't have any effect on _Scrape interval_.) [This article](https://grafana.com/blog/2020/09/28/new-in-grafana-7.2-__rate_interval-for-prometheus-rate-queries-that-just-work/) contains additional details.
### Using variables in queries
There are two syntaxes:
- `$<varname>` Example: `rate(http_requests_total{job=~"\$job"}[5m])`
- `[[varname]]` Example: `rate(http_requests_total{job=~"[[job]]"}[5m])`
Why two ways? The first syntax is easier to read and write but does not allow you to use a variable in the middle of a word. When the _Multi-value_ or _Include all value_
options are enabled, Grafana converts the labels from plain text to a regex-compatible string, which means you have to use `=~` instead of `=`.
### Ad hoc filters variable
Prometheus supports the special [ad hoc filters]({{< relref "../dashboards/variables/add-template-variables/#add-ad-hoc-filters" >}}) variable type. It allows you to specify any number of label/value filters on the fly. These filters are automatically
applied to all your Prometheus queries.
## Annotations
[Annotations]({{< relref "../dashboards/build-dashboards/annotate-visualizations" >}}) allow you to overlay rich event information on top of graphs. You add annotation
queries via the Dashboard menu / Annotations view.
Prometheus supports two ways to query annotations.
- A regular metric query
- A Prometheus query for pending and firing alerts (for details see [Inspecting alerts during runtime](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/#inspecting-alerts-during-runtime))
The step option is useful to limit the number of events returned from your query.
## Get Grafana metrics into Prometheus
Grafana exposes metrics for Prometheus on the `/metrics` endpoint. We also bundle a dashboard within Grafana so you can get started viewing your metrics faster. You can import the bundled dashboard by going to the data source edit page and clicking the dashboard tab. There you can find a dashboard for Grafana and one for Prometheus. Import them and start viewing all the metrics!
For detailed instructions, refer to [Internal Grafana metrics]({{< relref "../setup-grafana/set-up-grafana-monitoring/" >}}).
## Prometheus API
The Prometheus data source works with other projects that implement the [Prometheus query API](https://prometheus.io/docs/prometheus/latest/querying/api/) including:
- [Grafana Mimir](https://grafana.com/docs/mimir/latest/)
- [Thanos](https://thanos.io/v0.17/components/query.md/)
For more information on how to query other Prometheus-compatible projects from Grafana, refer to the specific project documentation.
## Provision the Prometheus data source
You can configure data sources using config files with Grafana's provisioning system. Read more about how it works and all the settings you can set for data sources on the [provisioning docs page]({{< relref "../administration/provisioning/#datasources" >}}).
Here are some provisioning examples for this data source:
```yaml
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
# Access mode - proxy (server in the UI) or direct (browser in the UI).
access: proxy
url: http://localhost:9090
jsonData:
httpMethod: POST
prometheusType: Prometheus
prometheusVersion: 2.37.0
exemplarTraceIdDestinations:
# Field with internal link pointing to data source in Grafana.
# datasourceUid value can be anything, but it should be unique across all defined data source uids.
- datasourceUid: my_jaeger_uid
name: traceID
# Field with external link.
- name: traceID
url: 'http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Jaeger%22,%7B%22query%22:%22$${__value.raw}%22%7D%5D'
```
## Amazon Managed Service for Prometheus
The Prometheus data source works with Amazon Managed Service for Prometheus. If you are using an AWS Identity and Access Management (IAM) policy to control access to your Amazon Managed Service for Prometheus domain, then you must use AWS Signature Version 4 (AWS SigV4) to sign all requests to that domain. For more details on AWS SigV4, refer to the [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html).
> **Note:** Grafana version 7.3.5 or higher is required to use SigV4 authentication.
To connect the Prometheus data source to Amazon Managed Service for Prometheus using SigV4 authentication, refer to the AWS guide to [Set up Grafana open source or Grafana Enterprise for use with AMP](https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-onboard-query-standalone-grafana.html).
If you are running Grafana in an Amazon EKS cluster, follow the AWS guide to [Query using Grafana running in an Amazon EKS cluster](https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-onboard-query-grafana-7.3.html).
## Configuring exemplars
> **Note:** This feature is available in Prometheus 2.26+ and Grafana 7.4+.
Grafana 7.4 and later versions can show exemplars data alongside a metric both in Explore and Dashboards.
Exemplars are a way to associate higher-cardinality metadata from a specific event with traditional time series data.
{{< figure src="/static/img/docs/v74/exemplars.png" class="docs-image--no-shadow" caption="Screenshot showing the detail window of an Exemplar" >}}
Configure Exemplars in the data source settings by adding external or internal links.
{{< figure src="/static/img/docs/v74/exemplars-setting.png" class="docs-image--no-shadow" caption="Screenshot of the Exemplars configuration" >}}
@ -0,0 +1,164 @@
---
aliases:
- /docs/grafana/latest/features/datasources/prometheus/
- /docs/grafana/latest/datasources/prometheus/
- /docs/grafana/latest/data-sources/prometheus/
description: Guide for using Prometheus in Grafana
keywords:
- grafana
- prometheus
- guide
menuTitle: Prometheus
title: Prometheus data source
weight: 1300
---
# Prometheus data source
Grafana ships with built-in support for Prometheus.
This topic explains options, variables, querying, and other features specific to the Prometheus data source, which include its [feature-rich code editor for queries and visual query builder]({{< relref "./query-editor/" >}}).
For instructions on how to add a data source to Grafana, refer to the [administration documentation]({{< relref "../../administration/data-source-management/" >}}).
Only users with the organization administrator role can add data sources.
Administrators can also [configure the data source via YAML]({{< relref "#provision-the-data-source" >}}) with Grafana's provisioning system.
Once you've added the data source, you can [configure it]({{< relref "#configure-the-data-source" >}}) so that your Grafana instance's users can create queries in its [query editor]({{< relref "./query-editor/" >}}) when they [build dashboards]({{< relref "../../dashboards/build-dashboards/" >}}), use [Explore]({{< relref "../../explore/" >}}), and [annotate visualizations]({{< relref "./query-editor/#apply-annotations" >}}).
## Prometheus API
The Prometheus data source also works with other projects that implement the [Prometheus querying API](https://prometheus.io/docs/prometheus/latest/querying/api/).
For more information on how to query other Prometheus-compatible projects from Grafana, refer to the specific project's documentation:
- [Grafana Mimir](/docs/mimir/latest/)
- [Thanos](https://thanos.io/tip/components/query.md/)
## Configure the data source
**To access the data source configuration page:**
1. Hover the cursor over the **Configuration** (gear) icon.
1. Select **Data Sources**.
1. Select the Prometheus data source.
Set the data source's basic configuration options carefully:
| Name | Description |
| --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Name** | Sets the name you use to refer to the data source in panels and queries. |
| **Default** | Sets whether the data source is pre-selected for new panels. |
| **Url** | Sets the URL of your Prometheus server, such as `http://prometheus.example.org:9090`. |
| **Access** | Only Server access mode is functional. If Server mode is already selected, this option is hidden. Otherwise, change this to Server mode to prevent errors. |
| **Basic Auth** | Enables basic authentication to the Prometheus data source. |
| **User** | Sets the user name for basic authentication. |
| **Password** | Sets the password for basic authentication. |
| **Scrape interval** | Sets the scrape and evaluation interval. We recommend using the same value as the typical scrape and evaluation interval configured in Prometheus. Defaults to 15s. |
| **Type** | Defines the type of your Prometheus server. Valid values are `Prometheus`, `Cortex`, `Thanos`, `Mimir`. When selected, the Prometheus version field attempts to detect the version automatically using the Prometheus [buildinfo](https://semver.org/) API. Some Prometheus types, such as Cortex, don't support this API, and you must provide their version. |
| **Version** | Defines the version of your Prometheus server. This field is visible only after the **Type** field is defined. |
| **HTTP method** | Sets the HTTP method used to query your data source. We recommend POST, which is pre-selected, because it allows for larger queries. Use GET if the Prometheus version is older than 2.1, or if POST requests are restricted in your network. |
| **Disable metrics lookup** | Disables the metrics chooser and metric/label support in the query field's autocompletion. This can prevent performance issues with larger Prometheus instances. |
| **Custom query parameters** | Adds custom parameters to the Prometheus query URL, such as `timeout`, `partial_response`, `dedup`, or `max_source_resolution`. Concatenate multiple parameters with '&amp;'. |
**Exemplars configuration:**
| Name | Description |
| ----------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Internal link** | Enable this option if you have an internal link. When enabled, this reveals the data source selector. Select the backend tracing data store for your exemplar data. |
| **Data source** | _(Visible only if you enable `Internal link`)_ Selects the backend tracing data store for your exemplar data. |
| **URL** | _(Visible only if you disable `Internal link`)_ Defines the external link's full URL. You can interpolate the value from the field by using the [`${__value.raw}` macro]({{< relref "../../panels-visualizations/configure-data-links/#value-variables" >}}). |
| **URL label** | _(Optional)_ Adds a custom display label to override the value of the `Label name` field. |
| **Label name** | Adds a name for the exemplar traceID property. |
### Provision the data source
You can define and configure the data source in YAML files as part of Grafana's provisioning system.
For more information about provisioning, and for available configuration options, refer to [Provisioning Grafana]({{< relref "../../administration/provisioning/#data-sources" >}}).
#### Provisioning example
```yaml
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
# Access mode - proxy (server in the UI) or direct (browser in the UI).
access: proxy
url: http://localhost:9090
jsonData:
httpMethod: POST
exemplarTraceIdDestinations:
# Field with internal link pointing to data source in Grafana.
# datasourceUid value can be anything, but it should be unique across all defined data source uids.
- datasourceUid: my_jaeger_uid
name: traceID
# Field with external link.
- name: traceID
url: 'http://localhost:3000/explore?orgId=1&left=%5B%22now-1h%22,%22now%22,%22Jaeger%22,%7B%22query%22:%22$${__value.raw}%22%7D%5D'
```
## Query the data source
The Prometheus query editor includes a code editor and visual query builder.
For details, see the [query editor documentation]({{< relref "./query-editor" >}}).
## View Grafana metrics with Prometheus
Grafana exposes metrics for Prometheus on the `/metrics` endpoint.
We also bundle a dashboard within Grafana so you can start viewing your metrics faster.
**To import the bundled dashboard:**
1. Navigate to the data source's [configuration page]({{< relref "#configure-the-data-source" >}}).
1. Select the **Dashboards** tab.
This displays dashboards for Grafana and Prometheus.
1. Select **Import** for the dashboard to import.
For details about these metrics, refer to [Internal Grafana metrics]({{< relref "../../setup-grafana/set-up-grafana-monitoring/" >}}).
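To have Prometheus collect these metrics, add a scrape job for Grafana to your Prometheus configuration. The following is a minimal sketch that assumes Grafana listens on its default port 3000 on the same host:

```yaml
scrape_configs:
  - job_name: grafana
    static_configs:
      # Grafana serves its metrics at <host>:3000/metrics by default.
      - targets: ['localhost:3000']
```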
### Amazon Managed Service for Prometheus
The Prometheus data source works with Amazon Managed Service for Prometheus.
If you use an AWS Identity and Access Management (IAM) policy to control access to your Amazon Managed Service for Prometheus domain, you must use AWS Signature Version 4 (AWS SigV4) to sign all requests to that domain.
For details on AWS SigV4, refer to the [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html).
#### AWS Signature Version 4 authentication
> **Note:** Available in Grafana v7.3.5 and higher.
To connect the Prometheus data source to Amazon Managed Service for Prometheus using SigV4 authentication, refer to the AWS guide to [Set up Grafana open source or Grafana Enterprise for use with AMP](https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-onboard-query-standalone-grafana.html).
If you run Grafana in an Amazon EKS cluster, follow the AWS guide to [Query using Grafana running in an Amazon EKS cluster](https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-onboard-query-grafana-7.3.html).
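If you provision the data source, you can also enable SigV4 in its `jsonData`. This is a minimal sketch that assumes the default AWS credentials provider chain; the workspace URL and region are placeholders:

```yaml
apiVersion: 1

datasources:
  - name: Amazon Managed Prometheus
    type: prometheus
    access: proxy
    # Placeholder: use your workspace's query endpoint.
    url: https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-example
    jsonData:
      httpMethod: POST
      sigV4Auth: true
      sigV4AuthType: default
      sigV4Region: us-east-1
```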
### Configure exemplars
> **Note:** Available in Prometheus v2.26 and higher with Grafana v7.4 and higher.
Grafana 7.4 and higher can show exemplars data alongside a metric both in Explore and in Dashboards.
Exemplars associate higher-cardinality metadata from a specific event with traditional time series data.
{{< figure src="/static/img/docs/v74/exemplars.png" class="docs-image--no-shadow" caption="Screenshot showing the detail window of an Exemplar" >}}
Configure Exemplars in the data source settings by adding external or internal links.
{{< figure src="/static/img/docs/v74/exemplars-setting.png" class="docs-image--no-shadow" caption="Screenshot of the Exemplars configuration" >}}
## Use template variables
Instead of hard-coding details such as server, application, and sensor names in metric queries, you can use variables.
Grafana lists these variables in dropdown select boxes at the top of the dashboard to help you change the data displayed in your dashboard.
Grafana refers to such variables as template variables.
For details, see the [template variables documentation]({{< relref "./template-variables/" >}}).
@ -0,0 +1,228 @@
---
aliases:
- /docs/grafana/latest/data-sources/prometheus/query-editor/
description: Guide for using the Prometheus data source's query editor
keywords:
- grafana
- prometheus
- logs
- queries
menuTitle: Query editor
title: Prometheus query editor
weight: 300
---
# Prometheus query editor
This topic explains querying specific to the Prometheus data source.
For general documentation on querying data sources in Grafana, see [Query and transform data]({{< relref "../../../panels-visualizations/query-transform-data" >}}).
## Choose a query editing mode
You can switch the Prometheus query editor between two modes:
- [Code mode](#code-mode), which provides a feature-rich editor for writing queries
- [Builder mode](#builder-mode), which provides a visual query designer
{{< figure src="/static/img/docs/prometheus/header-9-1.png" max-width="500px" class="docs-image--no-shadow" caption="Editor toolbar" >}}
To switch between the editor modes, select the corresponding **Builder** and **Code** tabs.
To run a query, select **Run queries** located at the top of the editor.
> **Note:** To run Prometheus queries in [Explore]({{< relref "../../../explore" >}}), select **Run query**.
Each mode is synchronized with the other modes, so you can switch between them without losing your work, although there are some limitations.
Builder mode doesn't yet support some complex queries.
When you switch from Code mode to Builder mode with such a query, the editor displays a popup that explains how you might lose parts of the query if you continue.
You can then decide whether you still want to switch to Builder mode.
You can also use the [Explain feature]({{< relref "#use-explain-mode-to-understand-queries" >}}) to help understand how a query works, and augment queries by using [template variables]({{< relref "./template-variables/" >}}).
For options and functions common to all query editors, refer to [Query and transform data]({{< relref "../../../panels-visualizations/query-transform-data" >}}).
## Configure common options
You can configure several Prometheus-specific options in the query editor, regardless of its mode.
{{< figure src="/static/img/docs/prometheus/options-9-1.png" max-width="500px" class="docs-image--no-shadow" caption="Options" >}}
### Legend
The **Legend** setting defines the time series's name. You can use a predefined or custom format.
| Option | Description |
| ----------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Auto** | Shows the value of a single label for each series with only one label, or displays all labels if a series has multiple labels. |
| **Verbose** | Displays all labels. |
| **Custom** | Uses templating to select which labels will be included.<br/>For example, `{{hostname}}` is replaced by the label value for the label `hostname`.<br/>Clear the input and click outside of it to select another mode. |
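For example, assuming your series carry `instance` and `job` labels, a custom legend format such as the following might render a series name like `localhost:9090 (prometheus)`:

```
{{instance}} ({{job}})
```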
### Min step
The **Min step** setting defines the lower bounds on the interval between data points.
For example, set this to `1h` to hint that measurements are taken hourly.
This setting supports the `$__interval` and `$__rate_interval` macros.
### Format
You can switch between **Table**, **Time series**, and **Heatmap** options by configuring the query's **Format**.
| Option | Description |
| --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Table** | This works only in a [Table panel]({{< relref "../../../panels-visualizations/visualizations/table" >}}). |
| **Time series** | Uses the default time series format. |
| **Heatmap** | Displays metrics of the Histogram type on a [Heatmap panel]({{< relref "../../../panels-visualizations/visualizations/heatmap" >}}) by converting cumulative histograms to regular ones and sorting the series by the bucket bound. |
### Type
The **Type** setting selects the query type.
- A **Range** query returns a Range vector, consisting of a set of time series containing a range of data points over time for each time series.
- An **Instant** query returns only the latest value that Prometheus has scraped for the requested time series. Instant queries can return results much faster than normal range queries and are well suited to looking up label sets.
  Instant query results consist of only one data point per series and can be shown in the time series panel by adding a field override, adding a property to the override named `Transform`, and selecting `Constant` from the **Transform** dropdown.
For more information, refer to the [Time Series Transform option documentation]({{< relref "../../../panels-visualizations/visualizations/time-series#transform" >}}).
- An **Exemplars** query runs with the regular query and shows exemplars in the graph.
> **Note:** Grafana modifies the request dates for queries to align them with the dynamically calculated step.
> This ensures a consistent display of metrics data, but it can result in a small gap of data at the right edge of a graph.
## Code mode
{{< figure src="/static/img/docs/prometheus/code-mode-9-1.png" max-width="500px" class="docs-image--no-shadow" caption="Code mode" >}}
In **Code mode**, you can write complex queries using a text editor with autocompletion features and syntax highlighting.
It also contains a [Metrics browser]({{< relref "#metrics-browser" >}}) to further help you write queries.
For more information about Prometheus's query language, refer to the [Prometheus documentation](http://prometheus.io/docs/querying/basics/).
### Use autocompletion
{{< figure src="/static/img/docs/prometheus/autocomplete-9-1.png" max-width="500px" class="docs-image--no-shadow" caption="Autocomplete" >}}
Code mode's autocompletion feature works automatically while typing.
To manually trigger autocompletion, use the keyboard shortcut <key>Ctrl</key>/<key>Cmd</key> + <key>Space</key>.
The query editor can autocomplete static functions, aggregations, and keywords, and also dynamic items like metrics and labels.
The autocompletion dropdown includes documentation for the suggested items where available.
To run a query in [Explore]({{< relref "../../../explore/" >}}), use the keyboard shortcut <key>Shift</key> + <key>Enter</key>.
### Metrics browser
The metrics browser locates metrics and selects relevant labels to help you build basic queries.
When you open the browser, it displays all available metrics and labels.
If supported by your Prometheus instance, each metric also displays its HELP and TYPE as a tooltip.
{{< figure src="/static/img/docs/prometheus/metric-browser-9-1.png" max-width="500px" class="docs-image--no-shadow" caption="Metrics browser" >}}
When you select a metric, the browser narrows down the available labels to show only the ones applicable to the metric.
You can then select one or more labels for which the available label values are shown in lists in the bottom section.
Select one or more values for each label to tighten your query scope.
> **Note:** If you do not remember a metric name to start with, you can also select a few labels to narrow down the list, then find relevant label values.
All lists in the metrics browser have a search field above them to quickly filter for metrics or labels that match a certain string.
The values section has only one search field, and its filtering applies to all labels to help you find values across labels once selected.
For example, among your labels `app`, `job`, and `job_name`, only one might have the value you are looking for.
Once you are satisfied with your query, click **Use query** to run the query. The **Use as rate query** button adds a `rate(...)[$__interval]` around your query to help write queries for counter metrics.
The **Validate selector** button checks with Prometheus how many time series are available for that selector.
## Builder mode
In **Builder mode**, you can build queries using a visual interface.
This video demonstrates how to use the visual Prometheus query builder available since Grafana v9.0:
{{< vimeo 720004179 >}}
</br>
### Toolbar
In addition to the **Run query** button and mode switcher, Builder mode includes additional elements:
| Name | Description |
| ------------------ | ----------------------------------------------------------------------------------------- |
| **Query patterns** | A list of operation patterns that help you quickly add multiple operations to your query. |
| **Explain** | Displays a step-by-step explanation of all query parts and its operations. |
| **Raw query** | Displays the raw query generated by the Builder that will be sent to the Prometheus instance. |
### Metric and labels
{{< figure src="/static/img/docs/prometheus/metric-select-9-1.png" max-width="500px" class="docs-image--no-shadow" caption="Metric and labels" >}}
Select a specific metric name from the dropdown list.
The data source requests the list of available metrics from the Prometheus server based on the selected time range.
You can also enter text into the selector when the dropdown is open to search and filter the list.
Select desired labels and their values from the dropdown list.
When a metric is selected, the data source requests available labels and their values from the server.
Use the `+` button to add a label, and the `x` button to remove a label.
### Operations
{{< figure src="/static/img/docs/prometheus/operations-9-1.png" max-width="500px" class="docs-image--no-shadow" caption="Operations" >}}
Select the `+ Operations` button to add operations to your query.
The query editor groups operations into related sections, and you can type while the operations dropdown is open to search and filter the list.
The query editor displays a query's operations as boxes in the operations section.
Each operation's header displays its name, and additional action buttons appear when you hover your cursor over the header:
| Button | Action |
| ------ | ----------------------------------------------------------------- |
| `v` | Replaces the operation with a different operation of the same type. |
| `info` | Opens the operation's description tooltip. |
| `x` | Removes the operation. |
Some operations have additional parameters under the operation header.
For details about each operation, use the `info` button to view the operation's description, or refer to the Prometheus documentation on [query functions](https://prometheus.io/docs/prometheus/latest/querying/functions/).
Some operations make sense only when used in a specific order.
If adding an operation would result in a nonsensical query, the query editor adds the operation to the correct place.
To re-order operations manually, drag the operation box by its name and drop it into the desired place.
#### Hints
{{< figure src="/static/img/docs/prometheus/hint-8-5.png" max-width="500px" class="docs-image--no-shadow" caption="Hint" >}}
The query editor can detect which operations are most appropriate for some selected metrics.
If it does, it displays a hint next to the `+ Operations` button.
To add the operations to your query, click the hint.
## Use Explain mode to understand queries
{{< figure src="/static/img/docs/prometheus/explain-8-5.png" max-width="500px" class="docs-image--no-shadow" caption="Explain mode" >}}
Explain mode helps you understand a query by displaying a step-by-step explanation of all query components and operations.
### Raw query
{{< figure src="/static/img/docs/prometheus/raw-query-8-5.png" max-width="500px" class="docs-image--no-shadow" caption="Raw query" >}}
The query editor displays the raw query only if the **Raw query** switch from the query editor toolbar is enabled.
If visible, it displays the raw query that the query editor has created.
### Additional options
In addition to these Builder mode-specific options, the query editor also displays the options it shares with Code mode.
For details, refer to the [common options]({{< relref "#configure-common-options" >}}).
## Apply annotations
[Annotations]({{< relref "../../../dashboards/build-dashboards/annotate-visualizations" >}}) overlay rich event information on top of graphs.
You can add annotation queries in the Dashboard menu's Annotations view.
Prometheus supports two ways to query annotations.
- A regular metric query
- A Prometheus query for pending and firing alerts (for details see [Inspecting alerts during runtime](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/#inspecting-alerts-during-runtime))
The step option is useful to limit the number of events returned from your query.
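As a sketch of the second approach, the following query uses the built-in `ALERTS` series that Prometheus exposes for pending and firing alerts; the `alertname` value is a hypothetical example:

```
ALERTS{alertstate="firing", alertname="HighRequestLatency"}
```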
@ -0,0 +1,111 @@
---
aliases:
- /docs/grafana/latest/datasources/prometheus/template-variables/
- /docs/grafana/latest/data-sources/prometheus/template-variables/
description: Using template variables with Prometheus in Grafana
keywords:
- grafana
- prometheus
- templates
- variables
- queries
menuTitle: Template variables
title: Prometheus template variables
weight: 400
---
# Prometheus template variables
Instead of hard-coding details such as server, application, and sensor names in metric queries, you can use variables.
Grafana lists these variables in dropdown select boxes at the top of the dashboard to help you change the data displayed in your dashboard.
Grafana refers to such variables as template variables.
For an introduction to templating and template variables, refer to the [Templating]({{< relref "../../../dashboards/variables" >}}) and [Add and manage variables]({{< relref "../../../dashboards/variables/add-template-variables" >}}) documentation.
## Use query variables
You can use variables of the type _Query_ to query Prometheus for a list of metrics, labels, or label values.
You can use these Prometheus data source functions in the **Query** input field:
| Name | Description | Used API endpoints |
| ----------------------------- | ----------------------------------------------------------------------- | --------------------------------- |
| `label_names()` | Returns a list of label names. | /api/v1/labels |
| `label_values(label)` | Returns a list of label values for the `label` in every metric. | /api/v1/label/`label`/values |
| `label_values(metric, label)` | Returns a list of label values for the `label` in the specified metric. | /api/v1/series |
| `metrics(metric)` | Returns a list of metrics matching the specified `metric` regex. | /api/v1/label/\_\_name\_\_/values |
| `query_result(query)` | Returns a list of Prometheus query result for the `query`. | /api/v1/query |
For details on _metric names_, _label names_, and _label values_, refer to the [Prometheus documentation](http://prometheus.io/docs/concepts/data_model/#metric-names-and-labels).
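For example, assuming the standard `up` metric is present, the following query populates a variable with every scraped instance:

```
label_values(up, instance)
```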
### Use interval and range variables
You can use some global built-in variables in query variables, for example, `$__interval`, `$__interval_ms`, `$__range`, `$__range_s` and `$__range_ms`.
For details, see [Global built-in variables]({{< relref "../../../dashboards/variables/add-template-variables#global-variables" >}}).
The `label_values` function doesn't support queries, so you can use these variables in conjunction with the `query_result` function to filter variable queries.
Make sure to set the variable's `refresh` trigger to be `On Time Range Change` to get the correct instances when changing the time range on the dashboard.
**Example:**
Populate a variable with the busiest 5 request instances based on average QPS over the time range shown in the dashboard:
```
Query: query_result(topk(5, sum(rate(http_requests_total[$__range])) by (instance)))
Regex: /"([^"]+)"/
```
Populate a variable with the instances having a certain state over the time range shown in the dashboard, using `$__range_s`:
```
Query: query_result(max_over_time(<metric>[${__range_s}s]) != <state>)
Regex:
```
## Use `$__rate_interval`
> **Note:** Available in Grafana v7.2 and higher.
We recommend using `$__rate_interval` in the `rate` and `increase` functions instead of `$__interval` or a fixed interval value.
Because `$__rate_interval` is always at least four times the value of the Scrape interval, it avoids problems specific to Prometheus.
For example, instead of using:
```
rate(http_requests_total[5m])
```
or:
```
rate(http_requests_total[$__interval])
```
We recommend that you use:
```
rate(http_requests_total[$__rate_interval])
```
The value of `$__rate_interval` is defined as
max(`$__interval` + _Scrape interval_, 4 \* _Scrape interval_),
where _Scrape interval_ is the "Min step" setting (also known as `query_interval`, a per-PromQL-query setting) if any is set.
Otherwise, Grafana uses the Prometheus data source's "Scrape interval" setting.
(The "Min interval" setting in the panel is modified by the resolution setting, and therefore doesn't have any effect on _Scrape interval_.)
For details, refer to the [Grafana blog](/blog/2020/09/28/new-in-grafana-7.2-__rate_interval-for-prometheus-rate-queries-that-just-work/).
## Choose a variable syntax
The Prometheus data source supports two variable syntaxes for use in the **Query** field:
- `$<varname>`, for example `rate(http_requests_total{job=~"\$job"}[$__rate_interval])`, which is easier to read and write but does not allow you to use a variable in the middle of a word.
- `[[varname]]`, for example `rate(http_requests_total{job=~"[[job]]"}[$__rate_interval])`
If you've enabled the _Multi-value_ or _Include all value_ options, Grafana converts the labels from plain text to a regex-compatible string, which requires you to use `=~` instead of `=`.
## Use the ad hoc filters variable type
Prometheus supports the special [ad hoc filters]({{< relref "../../../dashboards/variables/add-template-variables#add-ad-hoc-filters" >}}) variable type, which you can use to specify any number of label/value filters on the fly.
These filters are automatically applied to all your Prometheus queries.
@ -1,273 +0,0 @@
---
aliases:
- /docs/grafana/latest/datasources/tempo/
- /docs/grafana/latest/features/datasources/tempo/
description: High volume, minimal dependency trace storage. OSS tracing solution from
Grafana Labs.
keywords:
- grafana
- tempo
- guide
- tracing
title: Tempo
weight: 1400
---
# Tempo data source
Grafana ships with built-in support for Tempo, a high-volume, minimal-dependency trace storage and OSS tracing solution from Grafana Labs. Add it as a data source, and you are ready to query your traces in [Explore]({{< relref "../explore" >}}).
## Add data source
To access Tempo settings, click the **Configuration** (gear) icon, then click **Data Sources** > **Tempo**.
| Name | Description |
| ------------ | --------------------------------------------------------------------------------------- |
| `Name` | The name you use to refer to the data source in panels, queries, and Explore. |
| `Default` | Whether the data source is pre-selected for new panels. |
| `URL` | The URL of the Tempo instance, e.g., `http://tempo` |
| `Basic Auth` | Enable basic authentication to the Tempo data source. |
| `User` | User name for basic authentication. |
| `Password` | Password for basic authentication. |
### Trace to logs
> **Note:** This feature is available in Grafana 7.4+.
> Grafana Cloud users can access this feature by [opening a support ticket in the Cloud Portal](https://grafana.com/profile/org#support).
This is a configuration for the [trace to logs feature]({{< relref "../explore/trace-integration/" >}}). Select the target data source (currently limited to Loki or Splunk \[logs\] data sources) and select which tags to use in the logs query.
- **Data source -** Target data source.
- **Tags -** The tags that will be used in the logs query. Default is `'cluster', 'hostname', 'namespace', 'pod'`.
- **Map tag names -** When enabled, allows configuring how Tempo tag names map to logs label names. For example, map `service.name` to `service`.
- **Span start time shift -** A shift in the start time for the logs query based on the start time for the span. To extend the time to the past, use a negative value. You can use time units, for example, 5s, 1m, 3h. The default is 0.
- **Span end time shift -** Shift in the end time for the logs query based on the span end time. Time units can be used here, for example, 5s, 1m, 3h. The default is 0.
- **Filter by Trace ID -** Toggle to append the trace ID to the logs query.
- **Filter by Span ID -** Toggle to append the span ID to the logs query.
{{< figure src="/static/img/docs/explore/traces-to-logs-settings-8-2.png" class="docs-image--no-shadow" caption="Screenshot of the trace to logs settings" >}}
### Trace to metrics
> **Note:** This feature is behind the `traceToMetrics` feature toggle.
> Grafana Cloud users can access this feature by [opening a support ticket in the Cloud Portal](https://grafana.com/profile/org#support).
To configure trace to metrics, select the target Prometheus data source and create any desired linked queries.
- **Data source -** Target data source.
- **Tags -** You can use tags in the linked queries. The key is the span attribute name. The optional value is the corresponding metric label name (for example, map `k8s.pod` to `pod`). You may interpolate these tags into your queries using the `$__tags` keyword.
Each linked query consists of:
- **Link Label -** (Optional) Descriptive label for the linked query.
- **Query -** Query that runs when navigating from a trace to the metrics data source. Interpolate tags using the `$__tags` keyword. For example, when you configure the query `requests_total{$__tags}` with the tags `k8s.pod=pod` and `cluster`, it results in `requests_total{pod="nginx-554b9", cluster="us-east-1"}`.
### Service Graph
This is a configuration for the Service Graph feature.
- **Data source -** Prometheus instance where the Service Graph data is stored.
### Search
This is a configuration for Tempo search.
- **Hide search -** Optionally, hide the search query option in Explore in cases where search is not configured in the Tempo instance.
### Node Graph
This is a configuration for the beta Node Graph visualization. The Node Graph is shown after the trace view is loaded and is disabled by default.
- **Enable Node Graph -** Enables the Node Graph visualization.
### Loki search
This is a configuration for the Loki search query type.
- **Data source -** The Loki instance in which you want to search traces. You must configure derived fields in the Loki instance.
### Span bar label
You can configure the span bar label. The span bar label allows you to add additional information to the span bar row.
Select one of the following options. The default selection is Duration.
- **None -** Do not show any additional information on the span bar row.
- **Duration -** Show the span duration on the span bar row.
- **Tag -** Show the span tag on the span bar row. Note: You will also need to specify the tag key to use to get the tag value. For example, `span.kind`.
## Query traces
You can query and display traces from Tempo via [Explore]({{< relref "../explore/" >}}).
### Tempo search
Tempo search is an experimental feature behind a feature toggle. Use this to search for traces by service name, span name, duration range, or process-level attributes that are included in your application's instrumentation, such as HTTP status code and customer ID.
{{< figure src="/static/img/docs/explore/tempo-search.png" class="docs-image--no-shadow" max-width="750px" caption="Screenshot of the Tempo search feature with a trace rendered in the right panel" >}}
#### Search recent traces
Tempo allows you to search recent traces held in the ingesters. By default, ingesters store the last 15 minutes of tracing data. You must configure your Tempo data source to use this feature. Refer to the [Tempo documentation](https://grafana.com/docs/tempo/latest/getting-started/tempo-in-grafana/#search-of-recent-traces).
#### Search backend datastore
Tempo includes the ability to search the entire backend datastore. You must configure your Tempo data source to use this feature. Refer to the [Tempo documentation](https://grafana.com/docs/tempo/latest/getting-started/tempo-in-grafana/#search-of-the-backend-datastore).
### Loki search
To find traces to visualize, use the [Loki query editor]({{< relref "loki/#loki-query-editor" >}}). To get search results, you must have [derived fields]({{< relref "loki/#derived-fields" >}}) configured, which point to this data source.
{{< figure src="/static/img/docs/tempo/query-editor-search.png" class="docs-image--no-shadow" max-width="750px" caption="Screenshot of the Tempo query editor showing the search tab" >}}
### Trace ID search
To query a particular trace, select the **TraceID** query type, and then put the ID into the Trace ID field.
{{< figure src="/static/img/docs/tempo/query-editor-traceid.png" class="docs-image--no-shadow" max-width="750px" caption="Screenshot of the Tempo TraceID query type" >}}
## Upload JSON trace file
You can upload a JSON file that contains a single trace or service graph to visualize it. If the file has multiple traces, the first trace is used for visualization.
You can download a trace or service graph through the inspector. Open the inspector, navigate to the 'Data' tab, and click 'Download traces' or 'Download service graph'.
Here is an example JSON:
```json
{
"batches": [
{
"resource": {
"attributes": [
{ "key": "service.name", "value": { "stringValue": "db" } },
{ "key": "job", "value": { "stringValue": "tns/db" } },
{ "key": "opencensus.exporterversion", "value": { "stringValue": "Jaeger-Go-2.22.1" } },
{ "key": "host.name", "value": { "stringValue": "63d16772b4a2" } },
{ "key": "ip", "value": { "stringValue": "0.0.0.0" } },
{ "key": "client-uuid", "value": { "stringValue": "39fb01637a579639" } }
]
},
"instrumentationLibrarySpans": [
{
"instrumentationLibrary": {},
"spans": [
{
"traceId": "AAAAAAAAAABguiq7RPE+rg==",
"spanId": "cmteMBAvwNA=",
"parentSpanId": "OY8PIaPbma4=",
"name": "HTTP GET - root",
"kind": "SPAN_KIND_SERVER",
"startTimeUnixNano": "1627471657255809000",
"endTimeUnixNano": "1627471657256268000",
"attributes": [
{ "key": "http.status_code", "value": { "intValue": "200" } },
{ "key": "http.method", "value": { "stringValue": "GET" } },
{ "key": "http.url", "value": { "stringValue": "/" } },
{ "key": "component", "value": { "stringValue": "net/http" } }
],
"status": {}
}
]
}
]
}
]
}
```
## Service graph
A service graph is a visual representation of the relationships between services. Each node on the graph represents a service such as an API or database. With this graph, you can easily detect performance issues; track increases in error, fault, or throttle rates in any of your services; and dive deep into corresponding traces and root causes.
![Node graph panel](/static/img/docs/node-graph/node-graph-8-0.png 'Node graph')
To display the service graph:
- [Configure Grafana Agent](https://grafana.com/docs/tempo/latest/grafana-agent/service-graphs/#quickstart), or [Tempo or GET](https://grafana.com/docs/tempo/latest/metrics-generator/service_graphs/#tempo) to generate service graph data
- Link a Prometheus data source in the Tempo data source settings.
- Navigate to [Explore]({{< relref "../explore/" >}}).
- Select the Tempo data source.
- Select the **Service Graph** query type and run the query.
- (Optional): Filter by service name.
You can pan and zoom the view with buttons or your mouse. For details about the visualization, refer to [Node graph panel](https://grafana.com/docs/grafana/latest/panels/visualizations/node-graph/).
Each service in the graph is represented as a circle. Numbers inside the circle show the average time per request and requests per second.
The color of each circle represents the percentage of requests in each of the following states:
- green = success
- red = fault
- yellow = errors
- purple = throttled responses
Click on the service to see a context menu with additional links for quick navigation to other relevant information.
## APM table
The Application Performance Management (APM) table lets you view several APM metrics out of the box.
The APM table is part of the APM dashboard.
For more information, refer to the [APM dashboard documentation](https://grafana.com/docs/tempo/latest/metrics-generator/app-performance-mgmt/).
To display the APM table:
1. Activate the `tempoApmTable` feature flag in your `grafana.ini` file.
1. Link a Prometheus data source in the Tempo data source settings.
1. Navigate to [Explore]({{< relref "../explore/_index.md" >}}).
1. Select the Tempo data source.
1. Select the **Service Graph** query type and run the query.
1. (Optional): Filter your results.
> **Note:** The metric `traces_spanmetrics_calls_total` is used to display the name, rate, and error rate columns and `traces_spanmetrics_latency_bucket` is used to display the duration column. These metrics need to exist in your Prometheus data source.
Click a row in the rate, error rate, or duration columns to open a query in Prometheus with the span name of that row automatically set in the query.
Click a row in the links column to open a query in Tempo with the span name of that row automatically set in the query.
{{< figure src="/static/img/docs/tempo/apm-table.png" class="docs-image--no-shadow" max-width="500px" caption="Screenshot of the Tempo APM table" >}}
## Linking Trace ID from logs
You can link to a Tempo trace from logs in Loki or Elasticsearch by configuring an internal link. See the [Derived fields]({{< relref "loki/#derived-fields" >}}) section in the [Loki data source]({{< relref "loki/" >}}) or [Data links]({{< relref "elasticsearch/#data-links" >}}) section in the [Elasticsearch data source]({{< relref "elasticsearch/" >}}) for configuration instructions.
## Provision the Tempo data source
You can modify the Grafana configuration files to provision the Tempo data source. Read more about how it works and all the settings you can set for data sources on the [provisioning]({{< relref "../administration/provisioning/#datasources" >}}) topic.
Here is an example config:
```yaml
apiVersion: 1
datasources:
- name: Tempo
type: tempo
# Access mode - proxy (server in the UI) or direct (browser in the UI).
access: proxy
url: http://localhost:3200
jsonData:
httpMethod: GET
tracesToLogs:
datasourceUid: 'loki'
tags: ['job', 'instance', 'pod', 'namespace']
mappedTags: [{ key: 'service.name', value: 'service' }]
mapTagNamesEnabled: false
spanStartTimeShift: '1h'
spanEndTimeShift: '1h'
filterByTraceID: false
filterBySpanID: false
tracesToMetrics:
datasourceUid: 'prom'
tags: [{ key: 'service.name', value: 'service' }, { key: 'job' }]
queries:
- name: 'Sample query'
query: 'sum(rate(tempo_spanmetrics_latency_bucket{$__tags}[5m]))'
serviceMap:
datasourceUid: 'prometheus'
search:
hide: false
nodeGraph:
enabled: true
lokiSearch:
datasourceUid: 'loki'
```
@@ -0,0 +1,305 @@
---
aliases:
- /docs/grafana/latest/features/datasources/tempo/
- /docs/grafana/latest/datasources/tempo/
- /docs/grafana/latest/data-sources/tempo/
description: Guide for using Tempo in Grafana
keywords:
- grafana
- tempo
- guide
- tracing
menuTitle: Tempo
title: Tempo data source
weight: 1400
---
# Tempo data source
Grafana ships with built-in support for Tempo, a high-volume, minimal-dependency trace storage, open-source tracing solution from Grafana Labs.
For instructions on how to add a data source to Grafana, refer to the [administration documentation]({{< relref "../../administration/data-source-management/" >}}).
Only users with the organization administrator role can add data sources.
Administrators can also [configure the data source via YAML]({{< relref "#provision-the-data-source" >}}) with Grafana's provisioning system.
Once you've added the data source, you can [configure it]({{< relref "#configure-the-data-source" >}}) so that your Grafana instance's users can create queries in its [query editor]({{< relref "./query-editor/" >}}) when they [build dashboards]({{< relref "../../dashboards/build-dashboards/" >}}) and use [Explore]({{< relref "../../explore/" >}}).
You can also [use the service graph]({{< relref "#use-the-service-graph" >}}) to view service relationships, [track Application Performance Management metrics]({{< relref "#view-the-apm-table" >}}), [upload a JSON trace file]({{< relref "#upload-a-json-trace-file" >}}), and [link a trace ID from logs]({{< relref "#link-a-trace-id-from-logs" >}}) in Loki or Elasticsearch.
## Configure the data source
**To access the data source configuration page:**
1. Hover the cursor over the **Configuration** (gear) icon.
1. Select **Data Sources**.
1. Select the Tempo data source.
Set the data source's basic configuration options carefully:
| Name | Description |
| -------------- | ------------------------------------------------------------------------ |
| **Name** | Sets the name you use to refer to the data source in panels and queries. |
| **Default** | Sets the data source that's pre-selected for new panels. |
| **URL** | Sets the URL of the Tempo instance, such as `http://tempo`. |
| **Basic Auth** | Enables basic authentication to the Tempo data source. |
| **User** | Sets the user name for basic authentication. |
| **Password** | Sets the password for basic authentication. |
You can also configure settings specific to the Tempo data source:
### Configure trace to logs
{{< figure src="/static/img/docs/explore/traces-to-logs-settings-8-2.png" class="docs-image--no-shadow" caption="Screenshot of the trace to logs settings" >}}
> **Note:** Available in Grafana v7.4 and higher.
> If you use Grafana Cloud, open a [support ticket in the Cloud Portal](/profile/org#support) to access this feature.
The **Trace to logs** setting configures the [trace to logs feature]({{< relref "../../explore/trace-integration" >}}) available when integrating Grafana with Tempo.
**To configure trace to logs:**
1. Select the target data source.
1. Select which tags to use in the logs query.
| Setting name | Description |
| ------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Data source** | Defines the target data source. You can select only Loki or Splunk \[logs\] data sources. |
| **Tags** | Defines the tags to use in the logs query. Default is `'cluster', 'hostname', 'namespace', 'pod'`. |
| **Map tag names** | Enables you to configure how Tempo tag names map to logs label names. For example, you can map `service.name` to `service`. |
| **Span start time shift** | Shifts the start time for the logs query, based on the span's start time. You can use time units, such as `5s`, `1m`, `3h`. To extend the time to the past, use a negative value. Default is 0. |
| **Span end time shift** | Shifts the end time for the logs query, based on the span's end time. You can use time units. Default is 0. |
| **Filter by Trace ID** | Toggles whether to append the trace ID to the logs query. |
| **Filter by Span ID** | Toggles whether to append the span ID to the logs query. |
### Configure trace to metrics
> **Note:** This feature is behind the `traceToMetrics` [feature toggle]({{< relref "../../setup-grafana/configure-grafana#feature_toggles" >}}).
> If you use Grafana Cloud, open a [support ticket in the Cloud Portal](/profile/org#support) to access this feature.
The **Trace to metrics** setting configures the [trace to metrics feature](/blog/2022/08/18/new-in-grafana-9.1-trace-to-metrics-allows-users-to-navigate-from-a-trace-span-to-a-selected-data-source/) available when integrating Grafana with Tempo.
**To configure trace to metrics:**
1. Select the target data source.
1. Create any desired linked queries.
| Setting name | Description |
| --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Data source** | Defines the target data source. |
| **Tags** | Defines the tags used in linked queries. The key sets the span attribute name, and the optional value sets the corresponding metric label name. For example, you can map `k8s.pod` to `pod`. To interpolate these tags into queries, use the `$__tags` keyword. |
Each linked query consists of:
- **Link Label:** _(Optional)_ Descriptive label for the linked query.
- **Query:** The query ran when navigating from a trace to the metrics data source.
Interpolate tags using the `$__tags` keyword.
For example, when you configure the query `requests_total{$__tags}` with the tags `k8s.pod=pod` and `cluster`, the result looks like `requests_total{pod="nginx-554b9", cluster="us-east-1"}`.
### Configure service graph
The **Service Graph** section configures the [Service Graph](/docs/tempo/latest/grafana-agent/service-graphs/) feature.
Configure the **Data source** setting to define in which Prometheus instance the Service Graph data is stored.
To use the service graph, refer to the [Service graph section]({{< relref "#use-the-service-graph" >}}).
### Configure Tempo search integration
The **Search** section configures [Tempo search](/docs/tempo/latest/configuration/#search).
Optionally configure the **Hide search** setting to hide the search query option in Explore if search is not configured in the Tempo instance.
### Enable Node Graph
The **Node Graph** setting enables the beta [Node Graph visualization]({{< relref "../../panels-visualizations/visualizations/node-graph/" >}}), which is disabled by default.
Once enabled, Grafana displays the Node Graph after loading the trace view.
### Configure Loki search
The **Loki search** section configures the Loki search query type.
Configure the **Data source** setting to define which Loki instance you want to use to search traces.
You must configure [derived fields]({{< relref "../loki#configure-derived-fields" >}}) in the Loki instance.
### Span bar label
The **Span bar label** section helps you display additional information in the span bar row.
You can choose one of three options:
| Name | Description |
| ------------ | -------------------------------------------------------------------------------------------------------------------------------- |
| **None** | Adds nothing to the span bar row. |
| **Duration** | _(Default)_ Displays the span duration on the span bar row. |
| **Tag** | Displays the span tag on the span bar row. You must also specify which tag key to use to get the tag value, such as `span.kind`. |
### Provision the data source
You can define and configure the Tempo data source in YAML files as part of Grafana's provisioning system.
For more information about provisioning, and for available configuration options, refer to [Provisioning Grafana]({{< relref "../../administration/provisioning/#data-sources" >}}).
#### Provisioning examples
```yaml
apiVersion: 1
datasources:
- name: Tempo
type: tempo
# Access mode - proxy (server in the UI) or direct (browser in the UI).
access: proxy
url: http://localhost:3200
jsonData:
httpMethod: GET
tracesToLogs:
datasourceUid: 'loki'
tags: ['job', 'instance', 'pod', 'namespace']
mappedTags: [{ key: 'service.name', value: 'service' }]
mapTagNamesEnabled: false
spanStartTimeShift: '1h'
spanEndTimeShift: '1h'
filterByTraceID: false
filterBySpanID: false
tracesToMetrics:
datasourceUid: 'prom'
tags: [{ key: 'service.name', value: 'service' }, { key: 'job' }]
queries:
- name: 'Sample query'
query: 'sum(rate(tempo_spanmetrics_latency_bucket{$__tags}[5m]))'
serviceMap:
datasourceUid: 'prometheus'
search:
hide: false
nodeGraph:
enabled: true
lokiSearch:
datasourceUid: 'loki'
```
## Query the data source
The Tempo data source's query editor helps you query and display traces from Tempo in [Explore]({{< relref "../../explore" >}}).
For details, refer to the [query editor documentation]({{< relref "./query-editor" >}}).
## Upload a JSON trace file
You can upload a JSON file that contains a single trace and visualize it.
If the file has multiple traces, Grafana visualizes its first trace.
**To download a trace or service graph through the inspector:**
1. Open the inspector.
1. Navigate to the **Data** tab.
1. Click **Download traces** or **Download service graph**.
### Trace JSON example
```json
{
"batches": [
{
"resource": {
"attributes": [
{ "key": "service.name", "value": { "stringValue": "db" } },
{ "key": "job", "value": { "stringValue": "tns/db" } },
{ "key": "opencensus.exporterversion", "value": { "stringValue": "Jaeger-Go-2.22.1" } },
{ "key": "host.name", "value": { "stringValue": "63d16772b4a2" } },
{ "key": "ip", "value": { "stringValue": "0.0.0.0" } },
{ "key": "client-uuid", "value": { "stringValue": "39fb01637a579639" } }
]
},
"instrumentationLibrarySpans": [
{
"instrumentationLibrary": {},
"spans": [
{
"traceId": "AAAAAAAAAABguiq7RPE+rg==",
"spanId": "cmteMBAvwNA=",
"parentSpanId": "OY8PIaPbma4=",
"name": "HTTP GET - root",
"kind": "SPAN_KIND_SERVER",
"startTimeUnixNano": "1627471657255809000",
"endTimeUnixNano": "1627471657256268000",
"attributes": [
{ "key": "http.status_code", "value": { "intValue": "200" } },
{ "key": "http.method", "value": { "stringValue": "GET" } },
{ "key": "http.url", "value": { "stringValue": "/" } },
{ "key": "component", "value": { "stringValue": "net/http" } }
],
"status": {}
}
]
}
]
}
]
}
```
## Use the service graph
A service graph is a visual representation of the relationships between services.
Each node on the graph represents a service such as an API or database.
You use a service graph to detect performance issues; track increases in error, fault, or throttle rates in services; and investigate root causes by viewing corresponding traces.
{{< figure src="/static/img/docs/node-graph/node-graph-8-0.png" class="docs-image--no-shadow" max-width="500px" caption="Screenshot of a Node Graph" >}}
**To display the service graph:**
1. [Configure Grafana Agent](/docs/tempo/latest/grafana-agent/service-graphs/#quickstart) or [Tempo or GET](/docs/tempo/latest/metrics-generator/service_graphs/#tempo) to generate service graph data.
1. Link a Prometheus data source in the Tempo data source's [Service Graph](#configure-service-graph) settings.
1. Navigate to [Explore]({{< relref "../../explore/" >}}).
1. Select the Tempo data source.
1. Select the **Service Graph** query type.
1. Run the query.
1. _(Optional)_ Filter by service name.
For details, refer to [Node Graph panel]({{< relref "../../panels-visualizations/visualizations/node-graph/" >}}).
Each circle in the graph represents a service.
To open a context menu with additional links for quick navigation to other relevant information, click a service.
Numbers inside the circles indicate the average time per request and requests per second.
Each circle's color represents the percentage of requests in each state:
| Color | State |
| ---------- | ------------------- |
| **Green** | Success |
| **Red** | Fault |
| **Yellow** | Errors |
| **Purple** | Throttled responses |
## View the APM table
The Application Performance Management (APM) table lets you view several APM metrics out of the box.
The APM table is part of the APM dashboard.
{{< figure src="/static/img/docs/tempo/apm-table.png" class="docs-image--no-shadow" max-width="500px" caption="Screenshot of the Tempo APM table" >}}
For details, refer to the [APM dashboard documentation](/docs/tempo/latest/metrics-generator/app-performance-mgmt/).
**To display the APM table:**
1. Activate the `tempoApmTable` [feature toggle]({{< relref "../../setup-grafana/configure-grafana#feature_toggles" >}}) in your `grafana.ini` file, as shown in the example below this list.
1. Link a Prometheus data source in the Tempo data source settings.
1. Navigate to [Explore]({{< relref "../../explore/" >}}).
1. Select the Tempo data source.
1. Select the **Service Graph** query type and run the query.
1. _(Optional)_ Filter your results.
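A minimal sketch of enabling the feature toggle in `grafana.ini`; any other settings in your configuration file are unaffected:
```ini
# grafana.ini
[feature_toggles]
# Space-separated list of feature toggles to enable.
enable = tempoApmTable
```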
> **Note:** Grafana uses the `traces_spanmetrics_calls_total` metric to display the name, rate, and error rate columns, and `traces_spanmetrics_latency_bucket` to display the duration column.
> These metrics must exist in your Prometheus data source.
To open a query in Prometheus with the span name of that row automatically set in the query, click a row in the **rate**, **error rate**, or **duration** columns.
To open a query in Tempo with the span name of that row automatically set in the query, click a row in the **links** column.
## Link a trace ID from logs
You can link to a Tempo trace from logs in [Loki](/docs/loki/latest) or Elasticsearch by configuring an internal link.
To configure this feature, see the [Derived fields]({{< relref "../loki#configure-derived-fields" >}}) section of the [Loki data source docs]({{< relref "../loki/" >}}), or the [Data links]({{< relref "../elasticsearch#data-links" >}}) section of the [Elasticsearch data source docs]({{< relref "../elasticsearch" >}}).
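A minimal provisioning sketch of a Loki data source whose derived field links matched trace IDs to this Tempo data source; the `matcherRegex` log format and the Tempo UID `tempo` are illustrative assumptions:
```yaml
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://localhost:3100
    jsonData:
      derivedFields:
        # Assumes log lines contain "traceID=<id>".
        - name: TraceID
          matcherRegex: 'traceID=(\w+)'
          # '$$' escapes '$' in provisioning files.
          url: '$${__value.raw}'
          # UID of the Tempo data source to link to (assumed).
          datasourceUid: tempo
```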
@@ -0,0 +1,58 @@
---
aliases:
- /docs/grafana/latest/data-sources/tempo/query-editor/
description: Guide for using the Tempo data source's query editor
keywords:
- grafana
- tempo
- traces
- queries
menuTitle: Query editor
title: Tempo query editor
weight: 300
---
# Tempo query editor
The Tempo data source's query editor helps you query and display traces from Tempo in [Explore]({{< relref "../../../explore" >}}).
This topic explains configuration and queries specific to the Tempo data source.
For general documentation on querying data sources in Grafana, see [Query and transform data]({{< relref "../../../panels-visualizations/query-transform-data" >}}).
## Query Tempo search
Tempo search is an experimental feature behind a feature toggle.
Use this to search for traces by service name, span name, duration range, or process-level attributes that are included in your application's instrumentation, such as HTTP status code and customer ID.
To configure Tempo and the Tempo data source for search, refer to [Configure the data source]({{< relref "../#configure-the-data-source" >}}).
{{< figure src="/static/img/docs/explore/tempo-search.png" class="docs-image--no-shadow" max-width="750px" caption="Screenshot of the Tempo search feature with a trace rendered in the right panel" >}}
### Search recent traces
You can search recent traces held in Tempo's ingesters.
By default, ingesters store the last 15 minutes of tracing data.
To configure your Tempo data source to use this feature, refer to the [Tempo documentation](/docs/tempo/latest/getting-started/tempo-in-grafana/#search-of-recent-traces).
### Search the backend datastore
Tempo includes the ability to search the entire backend datastore.
To configure your Tempo data source to use this feature, refer to the [Tempo documentation](/docs/tempo/latest/getting-started/tempo-in-grafana/#search-of-the-backend-datastore).
## Query Loki for traces
To find traces to visualize, you can use the [Loki query editor]({{< relref "../../loki#loki-query-editor" >}}).
For results, you must configure [derived fields]({{< relref "../../loki#configure-derived-fields" >}}) in the Loki data source that point to this data source.
{{< figure src="/static/img/docs/tempo/query-editor-search.png" class="docs-image--no-shadow" max-width="750px" caption="Screenshot of the Tempo query editor showing the search tab" >}}
## Trace ID search
**To query a particular trace:**
1. Select the **TraceID** query type.
1. Enter the trace's ID into the **Trace ID** field.
{{< figure src="/static/img/docs/tempo/query-editor-traceid.png" class="docs-image--no-shadow" max-width="750px" caption="Screenshot of the Tempo TraceID query type" >}}
@@ -1,58 +0,0 @@
---
aliases:
- /docs/grafana/latest/datasources/testdata/
- /docs/grafana/latest/features/datasources/testdata/
keywords:
- grafana
- dashboard
- documentation
- panels
- testdata
title: TestData
weight: 1500
---
# Grafana TestData DB
The purpose of this data source is to make it easier to create fake data for any panel.
Using `TestData DB` you can build your own time series and have any panel render it.
This makes it much easier to verify functionality since the data can be shared very easily.
## Enable
The `TestData DB` data source is not enabled by default. To enable it:
1. In the **Configuration** menu (small gear on the left side of the screen), click **Data Sources**.
1. Click **Add Data Source**.
1. Search and click `TestData DB`.
1. Click **Save & Test** to enable it.
## Create mock data
Once `TestData DB` is enabled, you can use it as a data source in any metric panel.
![](/static/img/docs/v41/test_data_add.png)
## CSV
The comma separated values scenario is the most powerful one since it lets you create any kind of graph you like.
Once you provide the numbers, `TestData DB` distributes them evenly based on the time range of your query.
![](/static/img/docs/v41/test_data_csv_example.png)
## Dashboards
`TestData DB` also contains some dashboards with examples.
1. Click **Configuration** > **Data Sources** > **TestData DB** > **Dashboards**.
1. **Import** the **Simple Streaming Example** dashboard.
### Commit updates to the dashboards
To submit a change to one of the current dashboards bundled with `TestData DB`, update the revision property.
Otherwise the dashboard will not be updated automatically for other Grafana users.
## Using test data in issues
If you post an issue on GitHub regarding time series data or rendering of time series data, we strongly advise you to use this data source to replicate the data.
That makes it much easier for the developers to replicate and solve the issue you have.
@@ -0,0 +1,101 @@
---
aliases:
- /docs/grafana/latest/features/datasources/testdata/
- /docs/grafana/latest/datasources/testdata/
- /docs/grafana/latest/data-sources/testdata/
keywords:
- grafana
- dashboard
- documentation
- troubleshooting
- panels
- testdata
menuTitle: TestData DB
title: TestData DB data source
weight: 1500
---
# TestData DB data source
Grafana ships with a TestData DB data source, which creates simulated time series data for any [panel]({{< relref "../../panels-visualizations/" >}}).
You can use it to build your own fake and random time series data and render it in any panel, which helps you verify dashboard functionality since you can safely and easily share the data.
For instructions on how to add a data source to Grafana, refer to the [administration documentation]({{< relref "../../administration/data-source-management/" >}}).
Only users with the organization administrator role can add data sources.
## Configure the data source
**To access the data source configuration page:**
1. Hover the cursor over the **Configuration** (gear) icon.
1. Select **Data Sources**.
1. Select the TestData DB data source.
The data source doesn't provide any settings beyond the most basic options common to all data sources:
| Name | Description |
| ----------- | ------------------------------------------------------------------------ |
| **Name** | Sets the name you use to refer to the data source in panels and queries. |
| **Default** | Defines whether this data source is pre-selected for new panels. |
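Like the other data sources, you can also provision TestData DB in YAML. A minimal sketch, assuming the data source's core plugin ID is `testdata`:
```yaml
apiVersion: 1
datasources:
  - name: TestData DB
    # 'testdata' is assumed to be the core plugin ID for this data source.
    type: testdata
```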
## Create mock data
{{< figure src="/static/img/docs/v41/test_data_add.png" class="docs-image--no-shadow" caption="Adding test data" >}}
Once you've added the TestData DB data source, your Grafana instance's users can use it as a data source in any metric panel.
### Choose a scenario
Instead of providing a query editor, the TestData DB data source helps you select a **Scenario** that generates simulated data for panels.
You can assign an **Alias** to each scenario, and many have their own options that appear when selected.
{{< figure src="/static/img/docs/v41/test_data_csv_example.png" class="docs-image--no-shadow" caption="Using CSV Metric Values" >}}
**Available scenarios:**
- **Annotations**
- **Conditional Error**
- **CSV Content**
- **CSV File**
- **CSV Metric Values**
- **Datapoints Outside Range**
- **Exponential heatmap bucket data**
- **Grafana API**
- **Grafana Live**
- **Linear heatmap bucket data**
- **Load Apache Arrow Data**
- **Logs**
- **No Data Points**
- **Node Graph**
- **Predictable CSV Wave**
- **Predictable Pulse**
- **Random Walk**
- **Random Walk (with error)**
- **Random Walk Table**
- **Raw Frames**
- **Simulation**
- **Slow Query**
- **Streaming Client**
- **Table Static**
- **USA generated data**
## Import a pre-configured dashboard
TestData DB also provides an example dashboard.
**To import the example dashboard:**
1. Navigate to the data source's [configuration page]({{< relref "#configure-the-data-source" >}}).
1. Select the **Dashboards** tab.
1. Select **Import** for the **Simple Streaming Example** dashboard.
To customize an imported dashboard, we recommend that you save it under a different name.
If you don't, upgrading Grafana can overwrite the customized dashboard with the new version.
## Use test data to report issues
If you report an issue on GitHub involving the use or rendering of time series data, we strongly recommend that you use this data source to replicate the issue.
That makes it much easier for the developers to replicate and solve your issue.
@@ -1,130 +0,0 @@
---
aliases:
- /docs/grafana/latest/datasources/zipkin/
description: Guide for using Zipkin in Grafana
keywords:
- grafana
- zipkin
- guide
- tracing
title: Zipkin
weight: 1600
---
# Zipkin data source
Grafana ships with built-in support for Zipkin, an open source, distributed tracing system.
Just add it as a data source and you are ready to query your traces in [Explore]({{< relref "../explore/" >}}).
## Adding the data source
To access Zipkin settings, click the **Configuration** (gear) icon, then click **Data Sources** > **Zipkin**.
| Name | Description |
| ------------ | --------------------------------------------------------------------- |
| `Name` | The data source name in panels, queries, and Explore. |
| `Default` | The pre-selected data source for a new panel. |
| `URL` | The URL of the Zipkin instance. For example, `http://localhost:9411`. |
| `Basic Auth` | Enable basic authentication for the Zipkin data source. |
| `User` | Specify a user name for basic authentication. |
| `Password` | Specify a password for basic authentication. |
### Trace to logs
> **Note:** This feature is available in Grafana 7.4+.
This is a configuration for the [trace to logs feature]({{< relref "../explore/trace-integration/" >}}). Select the target data source (currently limited to Loki or Splunk \[logs\] data sources) and select which tags to use in the logs query.
- **Data source -** Target data source.
- **Tags -** The tags that will be used in the logs query. Default is `'cluster', 'hostname', 'namespace', 'pod'`.
- **Map tag names -** When enabled, allows configuring how Zipkin tag names map to logs label names. For example, map `service.name` to `service`.
- **Span start time shift -** Shift in the start time for the logs query based on the span start time. In order to extend to the past, you need to use a negative value. Use time interval units like 5s, 1m, 3h. The default is 0.
- **Span end time shift -** Shift in the end time for the logs query based on the span end time. Time units can be used here, for example, 5s, 1m, 3h. The default is 0.
- **Filter by Trace ID -** Toggle to append the trace ID to the logs query.
- **Filter by Span ID -** Toggle to append the span ID to the logs query.
![Trace to logs settings](/static/img/docs/explore/trace-to-logs-settings-8-2.png 'Screenshot of the trace to logs settings')
### Trace to metrics
> **Note:** This feature is behind the `traceToMetrics` feature toggle.
To configure trace to metrics, select the target Prometheus data source and create any desired linked queries.
- **Data source -** Target data source.
- **Tags -** You can use tags in the linked queries. The key is the span attribute name. The optional value is the corresponding metric label name (for example, map `k8s.pod` to `pod`). You may interpolate these tags into your queries using the `$__tags` keyword.
Each linked query consists of:
- **Link Label -** (Optional) Descriptive label for the linked query.
- **Query -** Query that runs when navigating from a trace to the metrics data source. Interpolate tags using the `$__tags` keyword. For example, when you configure the query `requests_total{$__tags}` with the tags `k8s.pod=pod` and `cluster`, it results in `requests_total{pod="nginx-554b9", cluster="us-east-1"}`.
### Node Graph
This is a configuration for the beta Node Graph visualization. The Node Graph is shown after the trace view is loaded and is disabled by default.
- **Enable Node Graph -** Enables the Node Graph visualization.
### Span bar label
You can configure the span bar label. The span bar label allows you to add additional information to the span bar row.
Select one of the following options. The default selection is Duration.
- **None -** Do not show any additional information on the span bar row.
- **Duration -** Show the span duration on the span bar row.
- **Tag -** Show the span tag on the span bar row. Note: You will also need to specify the tag key to use to get the tag value. For example, `span.kind`.
## Query traces
Querying and displaying traces from Zipkin is available via [Explore]({{< relref "../explore/" >}}).
{{< figure src="/static/img/docs/v70/zipkin-query-editor.png" class="docs-image--no-shadow" caption="Screenshot of the Zipkin query editor" >}}
The Zipkin query editor allows you to query by trace ID directly or to select a trace from the trace selector. To query by trace ID, insert the ID into the text input.
{{< figure src="/static/img/docs/v70/zipkin-query-editor-open.png" class="docs-image--no-shadow" caption="Screenshot of the Zipkin query editor with trace selector expanded" >}}
Use the trace selector to pick a particular trace from all traces logged in the time range you have selected in Explore. The trace selector has three levels of nesting:
1. The service you are interested in.
1. A particular operation that is part of the selected service.
1. The specific trace in which the selected operation occurred, represented by the root operation name and trace duration.
## Data mapping in the trace UI
Zipkin annotations are shown in the trace view as logs, with the annotation value displayed under the annotation key.
## Upload JSON trace file
You can upload a JSON file that contains a single trace to visualize it.
{{< figure src="/static/img/docs/explore/zipkin-upload-json.png" class="docs-image--no-shadow" caption="Screenshot of the Zipkin data source in explore with upload selected" >}}
Here is an example JSON:
```json
[
{
"traceId": "efe9cb8857f68c8f",
"parentId": "efe9cb8857f68c8f",
"id": "8608dc6ce5cafe8e",
"kind": "SERVER",
"name": "get /api",
"timestamp": 1627975249601797,
"duration": 23457,
"localEndpoint": { "serviceName": "backend", "ipv4": "127.0.0.1", "port": 9000 },
"tags": {
"http.method": "GET",
"http.path": "/api",
"jaxrs.resource.class": "Resource",
"jaxrs.resource.method": "printDate"
},
"shared": true
}
]
```
## Linking Trace ID from logs
You can link to a Zipkin trace from logs in Loki or Splunk by configuring a derived field with an internal link. See the [Loki documentation]({{< relref "loki/#derived-fields" >}}) for details.
@@ -0,0 +1,162 @@
---
aliases:
- /docs/grafana/latest/datasources/zipkin/
- /docs/grafana/latest/data-sources/zipkin/
- /docs/grafana/latest/data-sources/zipkin/query-editor/
description: Guide for using Zipkin in Grafana
keywords:
- grafana
- zipkin
- tracing
- querying
menuTitle: Zipkin
title: Zipkin data source
weight: 1600
---
# Zipkin data source
Grafana ships with built-in support for Zipkin, an open source, distributed tracing system.
For instructions on how to add a data source to Grafana, refer to the [administration documentation]({{< relref "../../administration/data-source-management/" >}}).
Only users with the organization administrator role can add data sources.
Administrators can also [configure the data source via YAML]({{< relref "#provision-the-data-source" >}}) with Grafana's provisioning system.
Once you've added the Zipkin data source, you can [configure it]({{< relref "#configure-the-data-source" >}}) so that your Grafana instance's users can create queries in its [query editor]({{< relref "#query-traces" >}}) when they [build dashboards]({{< relref "../../dashboards/build-dashboards/" >}}) and use [Explore]({{< relref "../../explore/" >}}).
## Configure the data source
**To access the data source configuration page:**
1. Hover the cursor over the **Configuration** (gear) icon.
1. Select **Data Sources**.
1. Select the Zipkin data source.
Set the data source's basic configuration options carefully:
| Name | Description |
| -------------- | ------------------------------------------------------------------------ |
| **Name** | Sets the name you use to refer to the data source in panels and queries. |
| **Default** | Defines whether this data source is pre-selected for new panels. |
| **URL** | Sets the URL of the Zipkin instance, such as `http://localhost:9411`. |
| **Basic Auth** | Enables basic authentication for the Zipkin data source. |
| **User** | Defines the user name for basic authentication. |
| **Password** | Defines the password for basic authentication. |
### Configure trace to logs
> **Note:** Available in Grafana v7.4 and higher.
The **Trace to logs** section configures the [trace to logs feature]({{< relref "../../explore/trace-integration/" >}}).
Select a target data source, limited to Loki and Splunk \[logs\] data sources, and which tags to use in the logs query.
{{< figure src="/static/img/docs/explore/trace-to-log-8-2.png" class="docs-image--no-shadow" caption="Screenshot of the trace to logs settings" >}}
| Name | Description |
| ------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Data source** | Sets the target data source. |
| **Tags** | Defines the tags to use in the logs query. Default is `'cluster', 'hostname', 'namespace', 'pod'`. |
| **Map tag names** | Enables configuring how Zipkin tag names map to logs label names. For example, map `service.name` to `service`. |
| **Span start time shift** | Shifts the start time for the logs query based on the span start time. To extend to the past, use a negative value. Use time interval units like `5s`, `1m`, `3h`. Default is `0`. |
| **Span end time shift** | Shifts the end time for the logs query based on the span end time. Use time interval units. Default is `0`. |
| **Filter by Trace ID** | Toggles whether to append the trace ID to the logs query. |
| **Filter by Span ID** | Toggles whether to append the span ID to the logs query. |
### Configure trace to metrics
> **Note:** This feature is behind the `traceToMetrics` [feature toggle]({{< relref "../../setup-grafana/configure-grafana#feature_toggles" >}}).
The **Trace to metrics** section configures the [trace to metrics feature](/blog/2022/08/18/new-in-grafana-9.1-trace-to-metrics-allows-users-to-navigate-from-a-trace-span-to-a-selected-data-source/).
Use the settings to select the target Prometheus data source, and create any desired linked queries.
| Setting name | Description |
| --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Data source** | Defines the target data source. |
| **Tags** | Defines the tags used in linked queries. The key sets the span attribute name, and the optional value sets the corresponding metric label name. For example, you can map `k8s.pod` to `pod`. To interpolate these tags into queries, use the `$__tags` keyword. |
Each linked query consists of:
- **Link Label:** _(Optional)_ Descriptive label for the linked query.
- **Query:** The query ran when navigating from a trace to the metrics data source.
Interpolate tags using the `$__tags` keyword.
For example, when you configure the query `requests_total{$__tags}` with the tags `k8s.pod=pod` and `cluster`, the result looks like `requests_total{pod="nginx-554b9", cluster="us-east-1"}`.
### Enable Node Graph
The **Node Graph** setting enables the beta [Node Graph visualization]({{< relref "../../panels-visualizations/visualizations/node-graph/" >}}), which is disabled by default.
Once enabled, Grafana displays the Node Graph after loading the trace view.
### Configure the span bar label
The **Span bar label** section helps you display additional information in the span bar row.
You can choose one of three options:
| Name | Description |
| ------------ | -------------------------------------------------------------------------------------------------------------------------------- |
| **None** | Adds nothing to the span bar row. |
| **Duration** | _(Default)_ Displays the span duration on the span bar row. |
| **Tag** | Displays the span tag on the span bar row. You must also specify which tag key to use to get the tag value, such as `span.kind`. |
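### Provision the data source
You can define and configure the Zipkin data source in YAML files as part of Grafana's provisioning system.
For more information about provisioning, and for available configuration options, refer to [Provisioning Grafana]({{< relref "../../administration/provisioning/#data-sources" >}}).
A minimal provisioning sketch; the URL is an assumption based on Zipkin's default port:
```yaml
apiVersion: 1
datasources:
  - name: Zipkin
    type: zipkin
    # Access mode - proxy (server in the UI) or direct (browser in the UI).
    access: proxy
    # Assumes a local Zipkin instance on its default port.
    url: http://localhost:9411
```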
## Query traces
You can query and display traces from Zipkin via [Explore]({{< relref "../../explore/" >}}).
This topic explains configuration and queries specific to the Zipkin data source.
For general documentation on querying data sources in Grafana, see [Query and transform data]({{< relref "../../panels-visualizations/query-transform-data" >}}).
{{< figure src="/static/img/docs/v70/zipkin-query-editor.png" class="docs-image--no-shadow" caption="Screenshot of the Zipkin query editor" >}}
To query by trace ID, enter the trace ID into the query field.
{{< figure src="/static/img/docs/v70/zipkin-query-editor-open.png" class="docs-image--no-shadow" caption="Screenshot of the Zipkin query editor with trace selector expanded" >}}
To select a particular trace from all traces logged in the time range you have selected in Explore, you can also use the trace selector.
The trace selector has three levels of nesting:
- The service you're interested in.
- A particular operation that is part of the selected service.
- The specific trace in which the selected operation occurred, represented by the root operation name and trace duration.
## View data mapping in the trace UI
You can view Zipkin annotations in the trace view as logs with annotation value displayed under the annotation key.
## Upload a JSON trace file
You can upload a JSON file that contains a single trace and visualize it.
If the file has multiple traces, Grafana visualizes its first trace.
{{< figure src="/static/img/docs/explore/zipkin-upload-json.png" class="docs-image--no-shadow" caption="Screenshot of the Zipkin data source in explore with upload selected" >}}
### Trace JSON example
```json
[
{
"traceId": "efe9cb8857f68c8f",
"parentId": "efe9cb8857f68c8f",
"id": "8608dc6ce5cafe8e",
"kind": "SERVER",
"name": "get /api",
"timestamp": 1627975249601797,
"duration": 23457,
"localEndpoint": { "serviceName": "backend", "ipv4": "127.0.0.1", "port": 9000 },
"tags": {
"http.method": "GET",
"http.path": "/api",
"jaxrs.resource.class": "Resource",
"jaxrs.resource.method": "printDate"
},
"shared": true
}
]
```
## Link a trace ID from logs
You can link to a Zipkin trace from logs in [Loki](/docs/loki/latest/) or Splunk by configuring a derived field with an internal link.
For details, refer to the [Derived fields]({{< relref "../loki/#configure-derived-fields" >}}) section of the [Loki data source]({{< relref "../loki/" >}}) documentation.
@@ -54,7 +54,7 @@ Congratulations, you have created your first dashboard and it is displaying resu
#### Next steps
Continue to experiment with what you have built, try the [explore workflow]({{< relref "../explore/" >}}) or another visualization feature. Refer to [Data sources]({{< relref "../datasources/" >}}) for a list of supported data sources and instructions on how to [add a data source]({{< relref "../datasources/add-a-data-source/" >}}). The following topics will be of interest to you:
Continue to experiment with what you have built, try the [explore workflow]({{< relref "../explore/" >}}) or another visualization feature. Refer to [Data sources]({{< relref "../datasources/" >}}) for a list of supported data sources and instructions on how to [add a data source]({{< relref "../administration/data-source-management#add-a-data-source" >}}). The following topics will be of interest to you:
- [Panels and visualizations]({{< relref "../panels-visualizations/" >}})
- [Dashboards]({{< relref "../dashboards/" >}})
@@ -37,10 +37,11 @@ Windows users might need to make additional adjustments. Look for special instru
You can have more than one InfluxDB data source defined in Grafana.
1. Follow the general instructions to [add a data source]({{< relref "../datasources/add-a-data-source/" >}}).
1. Follow the general instructions to [add a data source]({{< relref "../administration/data-source-management#add-a-data-source/" >}}).
1. Decide if you will use InfluxQL or Flux as your query language.
- For InfluxQL, refer to [InfluxDB data source]({{< relref "../datasources/influxdb/" >}}) for information about specific data source fields.
- For Flux, refer to [Flux query language in Grafana]({{< relref "../datasources/influxdb/influxdb-flux/" >}}) for information about specific data source fields.
- [Configure the data source]({{< relref "../datasources/influxdb#configure-the-data-source" >}}) for your chosen query language.
Each query language has its own unique data source settings.
- For querying features specific to each language, see the data source's [query editor documentation]({{< relref "../datasources/influxdb/query-editor" >}}).
##### InfluxDB guides
@@ -16,11 +16,17 @@ keywords:
# Panels and visualizations
The _panel_ is the basic visualization building block in Grafana. Each panel has a query editor specific to the data source selected in the panel. The query editor allows you to extract the perfect visualization to display on the panel.
The _panel_ is the basic visualization building block in Grafana.
Each panel has a query editor specific to the data source selected in the panel.
The query editor allows you to build a query that returns the data you want to visualize.
There are a wide variety of styling and formatting options for each panel. Panels can be dragged and dropped and rearranged on the dashboard. They can also be resized.
There are a wide variety of styling and formatting options for each panel.
Panels can be dragged, dropped, and resized to rearrange them on the dashboard.
Before you begin, ensure that you have configured a data source. For more information about data sources, refer to [Data Sources]({{< relref "../administration/data-source-management/" >}}).
Before you add a panel, ensure that you have configured a data source.
- For more information about adding and managing data sources as an administrator, refer to [Data source management]({{< relref "../administration/data-source-management/" >}}).
- For details about using specific data sources, refer to [Data sources]({{< relref "../datasources/" >}}).
This section includes the following sub topics:
@@ -19,77 +19,86 @@ weight: 200
# Query and transform data
Data source queries return data that can then be transformed via transformations and then visualized by different types of visualizations. The query language and query builder UI depends on the data source type. Grafana supports many different types of data sources.
Grafana supports many types of [data sources]({{< relref "../../datasources/" >}}).
Data source **queries** return data that Grafana can **transform** and visualize.
Each data source uses its own query language, and data source plugins each implement a query-building user interface called a query editor.
## About queries
_Queries_ are how Grafana panels communicate with data sources to get data for the visualization. A query is a question written in the query language used by the data source. How often the query is sent to the data source and how many data points are collected can be adjusted in the panel data source options.
Grafana panels communicate with data sources via queries, which retrieve data for the visualization.
A query is a question written in the query language used by the data source.
You use a query editor to write a query. Each data source has its own query editor that we have customized to include the features and capabilities of the data source. Grafana supports up to 26 queries per panel.
You can configure query frequency and data collection limits in the panel's data source options.
Grafana supports up to 26 queries per panel.
> Important! You must be familiar with the query language of the data source. For more information about data sources, refer to [Data sources]({{< relref "../../datasources/" >}}).
> **Important:** You **must** be familiar with a data source's query language.
> For more information, refer to [Data sources]({{< relref "../../datasources/" >}}).
### Query editors
Depending on your data source, the query editor might provide auto-completion, metric names, or variable suggestions.
{{< figure src="/static/img/docs/queries/influxdb-query-editor-7-2.png" class="docs-image--no-shadow" max-width="1000px" caption="The InfluxDB query editor" >}}
Because of the difference between query languages, data sources have query editors that look different. Here are two examples of query editors.
Each data source's **query editor** provides a customized user interface that helps you write queries that take advantage of its unique capabilities.
**InfluxDB query editor**
Because of the differences between query languages, each data source query editor looks and functions differently.
Depending on your data source, the query editor might provide auto-completion features, metric names, variable suggestions, or a visual query-building interface.
{{< figure src="/static/img/docs/queries/influxdb-query-editor-7-2.png" class="docs-image--no-shadow" max-width="1000px" >}}
For example, this video demonstrates the visual Prometheus query builder:
**Prometheus (PromQL) query editor**
{{< vimeo 720004179 >}}
{{< figure src="/static/img/docs/queries/prometheus-query-editor-7-4.png" class="docs-image--no-shadow" max-width="1000px" >}}
For details on a specific data source's unique query editor features, refer to its documentation:
- For data sources included with Grafana, refer to [Built-in core data sources]({{< relref "../../datasources/#data-source-plugins" >}}), which links to each core data source's documentation.
- For data sources installed as plugins, refer to its own documentation.
- Data source plugins in Grafana's [plugin catalog](/grafana/plugins/) link to or include their documentation in their catalog listings.
For details about the plugin catalog, refer to [Plugin management]({{< relref "../../administration/plugin-management/" >}}).
- For links to [Grafana Enterprise]({{< relref "../../enterprise/" >}}) data source plugin documentation, refer to the [Enterprise plugins index](/docs/plugins/).
### Query syntax
Data sources use different query languages to return data. Here are two query examples:
Each data source uses a different query language to request data.
For details on a specific data source's unique query language, refer to its documentation.
**PostgreSQL**
**PostgreSQL example:**
```
SELECT hostname FROM host WHERE region IN($region)
```
**PromQL**
**PromQL example:**
```
query_result(max_over_time(<metric>[${__range_s}s]) != <state>)
```
### Data sources used in queries
### Special data sources
In addition to the data sources that you have configured in Grafana, there are three special data sources available:
Grafana also includes three special data sources: **Grafana**, **Mixed**, and **Dashboard**.
For details, refer to [Data sources]({{< relref "../../datasources/#special-data-sources" >}}).
## Navigate the query tab
A panel's Query tab consists of the following elements:
- **Data source selector:** Selects the data source to query.
For more information about data sources, refer to [Data sources]({{< relref "../../datasources/" >}}).
- **Query options:** Sets maximum data retrieval parameters and query execution time intervals.
- **Query inspector button:** Opens the query inspector panel, where you can view and optimize your query.
- **Query editor list:** Lists the queries you've written.
- **Expressions:** Uses the expression builder to create alert expressions.
For more information about expressions, refer to [Use expressions to manipulate data]({{< relref "expression-queries/" >}}).
{{< figure src="/static/img/docs/queries/query-editor-7-2.png" class="docs-image--no-shadow" max-width="1000px" >}}
## Add a query
A query returns data that Grafana visualizes in dashboard panels.
When you create a panel, Grafana automatically selects the default data source.
**To add a query:**
1. Edit the panel to which you're adding a query.
1. Click the **Query** tab.
1. Click the **Data source** drop-down menu and select a data source.
1. Click **Query options** to configure the maximum number of data points you need.
1. Write the query using the query editor.
1. Click **Apply**.
Grafana queries the data source and visualizes the data.
## Manage queries
Grafana organizes queries in collapsible query rows.
Each query row contains a query editor and is identified with a letter (A, B, C, and so on).
You can:
| Icon | Description |
| :-----------------------------------------------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| {{< figure src="/static/img/docs/queries/query-editor-help-7-4.png" class="docs-image--no-shadow" max-width="30px" max-height="30px" >}} | Toggles query editor help. If supported by the data source, click this icon to display information on how to use the query editor or provide quick access to common queries. |
| {{< figure src="/static/img/docs/queries/duplicate-query-icon-7-0.png" class="docs-image--no-shadow" max-width="30px" max-height="30px" >}} | Copies a query. Duplicating queries is useful when working with multiple complex queries that are similar and you want to either experiment with different variants or do minor alterations. |
| {{< figure src="/static/img/docs/queries/hide-query-icon-7-0.png" class="docs-image--no-shadow" max-width="30px" max-height="30px" >}} | Hides a query. Grafana does not send hidden queries to the data source. |
| {{< figure src="/static/img/docs/queries/remove-query-icon-7-0.png" class="docs-image--no-shadow" max-width="30px" max-height="30px" >}} | Removes a query. Removing a query permanently deletes it, but sometimes you can recover deleted queries by reverting to previously saved versions of the panel. |
| {{< figure src="/static/img/docs/queries/query-drag-icon-7-2.png" class="docs-image--no-shadow" max-width="30px" max-height="30px" >}} | Reorders queries. Change the order of queries by clicking and holding the drag icon, then drag queries where desired. The order of results reflects the order of the queries, so you can often adjust your visual results based on query order. |
## Query options
Click **Query options** next to the data source selector to see settings for the selected data source.
Changes you make here affect only queries made in this panel.
{{< figure src="/static/img/docs/queries/data-source-options-7-0.png" class="docs-image--no-shadow" max-width="1000px" >}}
Grafana sets defaults that are shown in dark gray text.
Changes are displayed in white text.
To return a field to the default setting, delete the white text from the field.
Panel data source query options include:
- **Max data points:** If the data source supports it, this sets the maximum number of data points for each series returned.
If the query returns more data points than the max data points setting, then the data source reduces the number of points returned by aggregating them together by average, max, or another function.
You can limit the number of points to improve query performance or smooth the visualized line.
The default value is the width (or number of pixels) of the graph, because you can only visualize as many data points as the graph panel has room to display.
With streaming data, Grafana uses the max data points value for the rolling buffer.
Streaming is a continuous flow of data, and buffering divides the stream into chunks.
For example, Loki streams data in its live tailing mode.
- **Min interval:** Sets a minimum limit for the automatically calculated interval, which is typically the minimum scrape interval.
If a data point is saved every 15 seconds, you don't benefit from having an interval lower than that.
You can also set this to a higher minimum than the scrape interval to produce coarser-grained, better-performing queries.
- **Interval:** Sets a time span that you can use when aggregating or grouping data points by time.
Grafana automatically calculates an appropriate interval that you can use as a variable in templated queries.
The variable is measured in either seconds (`$__interval`) or milliseconds (`$__interval_ms`).
Intervals are typically used in aggregation functions like sum or average.
For example, this is a Prometheus query that uses the interval variable: `rate(http_requests_total[$__interval])`.
This automatic interval is calculated based on the width of the graph.
As the user zooms out on a visualization, the interval grows, resulting in a more coarse-grained aggregation.
Likewise, if the user zooms in, the interval decreases, resulting in a more fine-grained aggregation.
For more information, refer to [Global variables]({{< relref "../../dashboards/variables/add-template-variables/#global-variables" >}}).
- **Relative time:** Overrides the relative time range for individual panels, which causes them to be different than what is selected in the dashboard time picker in the top-right corner of the dashboard.
You can use this to show metrics from different time periods or days on the same dashboard.
- **Time shift:** Overrides the time range for individual panels by shifting its start and end relative to the time picker.
For example, you can shift the time range for the panel to be two hours earlier than the dashboard time picker.
> **Note:** Panel time overrides have no effect when the dashboard's time range is absolute.
For more information, refer to [Time range controls]({{< relref "../../dashboards/manage-dashboards/#configure-dashboard-time-range-controls" >}}).
- **Cache timeout:** _(Visible only if available in the data source)_ Overrides the default cache timeout if your time series store has a query cache.
Specify this value as a numeric value in seconds.
### Examples
- **Relative time:**
| Example          | Relative time field |
| ---------------- | ------------------- |
| Last 5 minutes   | `now-5m`            |
| The day so far   | `now/d`             |
| Last 5 days      | `now-5d/d`          |
| This week so far | `now/w`             |
| Last 2 years     | `now-2y/y`          |
- **Time shift:**
| Example              | Time shift field |
| -------------------- | ---------------- |
| Last entire week     | `1w/w`           |
| Two entire weeks ago | `2w/w`           |
| Last entire month    | `1M/M`           |
| This entire year     | `1d/y`           |
| Last entire year     | `1y/y`           |
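- **Interval variable:** As a sketch of the **Interval** option in use, the following PromQL query builds on the earlier `rate(http_requests_total[$__interval])` example; the `sum by (instance)` aggregation and the `instance` label are illustrative assumptions, not a prescribed schema:

```
sum by (instance) (rate(http_requests_total[$__interval]))
```

On a narrow time range, Grafana might interpolate `$__interval` to a few seconds; zoomed out to weeks, it can grow to minutes, so each rendered point aggregates a correspondingly coarser window.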


### Re-encrypt secrets
You can re-encrypt secrets in order to:
- Move already existing secrets' encryption forward from legacy to envelope encryption.
- Re-encrypt secrets after a [data keys rotation](#rotate-data-keys).
To re-encrypt secrets, use the [Grafana CLI]({{< ref "/cli" >}}) by running the `grafana-cli admin secrets-migration re-encrypt` command, or call the `/encryption/reencrypt-secrets` endpoint of the Grafana [Admin API]({{< relref "../../../developers/http_api/admin/#re-encrypt-secrets" >}}). It's safe to run this operation more than once; running it under maintenance mode is recommended.
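For example, here's a minimal shell sketch of both approaches. The server URL and the `admin:admin` basic-auth credentials are placeholder assumptions, and the full path assumes the endpoint sits under the Admin API's usual `/api/admin` prefix:

```
# Re-encrypt secrets with the Grafana CLI (safe to run more than once):
grafana-cli admin secrets-migration re-encrypt

# Or trigger the same migration through the Admin API
# (placeholder host and credentials; adjust for your deployment):
curl -X POST -u admin:admin http://localhost:3000/api/admin/encryption/reencrypt-secrets
```

The `rollback` and `re-encrypt-data-keys` commands described below follow the same pattern.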
### Roll back secrets
You can roll back secrets encrypted with envelope encryption to legacy encryption. This might be necessary to downgrade to Grafana versions prior to v9.0 after an unsuccessful upgrade.
To roll back secrets, use the [Grafana CLI]({{< ref "/cli" >}}) by running the `grafana-cli admin secrets-migration rollback` command, or call the `/encryption/rollback-secrets` endpoint of the Grafana [Admin API]({{< relref "../../../developers/http_api/admin/#roll-back-secrets" >}}). It's safe to run this operation more than once; running it under maintenance mode is recommended.
### Re-encrypt data keys
You can re-encrypt data keys encrypted with a specific key encryption key (KEK). This allows you to either re-encrypt existing data keys with a new KEK version (see [KMS integration](#kms-integration) rotation) or to re-encrypt them with a completely different KEK.
To re-encrypt data keys, use the [Grafana CLI]({{< ref "/cli" >}}) by running the `grafana-cli admin secrets-migration re-encrypt-data-keys` command, or call the `/encryption/reencrypt-data-keys` endpoint of the Grafana [Admin API]({{< relref "../../../developers/http_api/admin/#re-encrypt-data-encryption-keys" >}}). It's safe to run this operation more than once; running it under maintenance mode is recommended.
### Rotate data keys


The Azure Monitor data source supports multiple Azure services. Log Analytics queries in the data source now have alerting support too (Azure Monitor and Application Insights already had alerting support).
A new feature is [deep linking from the graph panel to the Log Analytics query editor in the Azure Portal]({{< relref "../datasources/azure-monitor#deep-link-from-grafana-panels-to-the-log-analytics-query-editor-in-azure-portal" >}}). Click on a time series in the panel to see a context menu with a link to View in Azure Portal. Clicking that link opens the Azure Log Analytics query editor in the Azure Portal and runs the query from the Grafana panel.
## Grafana Enterprise


You can now navigate from a span in a trace view directly to logs relevant for that span.
The following topics were updated as a result of this feature:
- [Explore]({{< relref "../explore/trace-integration/" >}})
- [Jaeger]({{< relref "../datasources/jaeger#configure-trace-to-logs" >}})
- [Tempo]({{< relref "../datasources/tempo#configure-trace-to-logs" >}})
- [Zipkin]({{< relref "../datasources/zipkin#configure-trace-to-logs" >}})
### Server-side expressions
Grafana 7.4 includes the following enhancements:
> **Note:** We have deprecated browser access mode. It will be removed in a future release.
For more information, refer to the [Elasticsearch docs]({{< relref "../datasources/elasticsearch" >}}).
### Azure Monitor updates
The Azure Monitor query type was renamed to Metrics and Azure Logs Analytics was renamed to Logs to match the service names in Azure and align the concepts with the rest of Grafana.
[Azure Monitor]({{< relref "../datasources/azure-monitor" >}}) was updated to reflect this change.
### MQL support added for Google Cloud Monitoring
Unlike the visual query builder, MQL allows you to control the time range and period of the output data.
MQL uses a set of operations and functions. Operations are linked together using the common pipe mechanism, where the output of one operation becomes the input to the next. Linking operations makes it possible to build up complex queries incrementally.
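For example, here's a short MQL sketch of this pipe composition; the specific metric, grouping window, and aggregator are illustrative, borrowed from common Cloud Monitoring examples rather than from these release notes:

```
fetch gce_instance
| metric 'compute.googleapis.com/instance/cpu/utilization'
| group_by 1m, [value_utilization_mean: mean(value.utilization)]
| every 1m
```

Each `|` stage consumes the output of the previous one, which is how complex queries are built up incrementally.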
Once query type Metrics is selected in the Cloud Monitoring query editor, you can toggle between the editor modes for visual query builder and MQL. For more information, refer to the [Google Cloud Monitoring docs]({{< relref "../datasources/google-cloud-monitoring#import-pre-configured-dashboards" >}}).
Many thanks to [mtanda](https://github.com/mtanda) for this contribution!
Google Cloud Monitoring data source ships with pre-configured dashboards for some of the most popular GCP services.
If you want to customize a dashboard, we recommend that you save it under a different name. Otherwise the dashboard will be overwritten when a new version of the dashboard is released.
For more information, refer to the [Google Cloud Monitoring docs]({{< relref "../datasources/google-cloud-monitoring#import-pre-configured-dashboards" >}}).
### Query Editor Help


In the upcoming Grafana 8.0 release, Application Insights and Insights Analytics queries will be deprecated and made read-only.
Grafana 7.5 includes a deprecation notice for these queries, and some documentation to help users prepare for the upcoming changes.
For more information, refer to [Deprecating Application Insights and Insights Analytics]({{< relref "../datasources/azure-monitor#application-insights-and-insights-analytics--removed-" >}}).
### Cloudwatch data source enhancements
- You can now enable or disable authentication providers and assume a role other than default by changing the [allowed_auth_providers]({{< relref "../setup-grafana/configure-grafana/#allowed-auth-providers" >}}) and [assume_role_enabled]({{< relref "../setup-grafana/configure-grafana/#assume-role-enabled" >}}) options in the Grafana configuration file. By default, the allowed authentication providers are _AWS SDK Default_, _Access and secret key_, and _Credentials File_, and role is _Assume role (ARN)_.
- You can now specify a custom endpoint in the CloudWatch data source configuration page. This field is optional, and if it is left empty, then the default endpoint for CloudWatch is used. By specifying a regional endpoint, you can reduce request latency.
[AWS Cloudwatch data source]({{< relref "../datasources/aws-cloudwatch" >}}) was updated as a result of this change.
### Increased API limit for CloudMonitoring Services
```
server:
  http_listen_port: 3101
```
[Azure Monitor data source]({{< relref "../datasources/azure-monitor/" >}}) was updated as a result of this change.
## Enterprise features


The Azure Monitor data source now supports Managed Identity for users hosting Grafana in Azure.
Also, in addition to querying Log Analytics Workspaces, you can now query the logs for any individual [supported resource](https://docs.microsoft.com/en-us/azure/azure-monitor/essentials/metrics-supported), or for all resources in a subscription or resource group.
> **Note:** In Grafana 7.5, we began deprecating separate Application Insights queries in favor of querying Application Insights resources through Metrics and Logs. In Grafana 8.0, new Application Insights and Insights Analytics queries can no longer be created, and existing queries have been made read-only. For more details, refer to [Deprecating Application Insights]({{< relref "../datasources/azure-monitor#application-insights-and-insights-analytics--removed-" >}}).
[Azure Monitor data source]({{< relref "../datasources/azure-monitor" >}}) was updated as a result of these changes.
#### Elasticsearch data source