* Issue Loki queries in parallel so total query time is only that of the slowest query rather than the sum of all query times (sketch below)
* Fix lint
* Add running of queries in parallel behind a feature toggle to test the functionality before release
* Add span end
* Move shared logic to separate function
* Add logging and tracing around running of all queries
---------
Co-authored-by: Ivana Huckova <ivana.huckova@gmail.com>
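A minimal sketch of the parallel-execution idea behind this PR, using `errgroup`; the names (`runQuery`, `parallelEnabled`) are hypothetical, not Grafana's actual identifiers:

```go
package loki

import (
	"context"

	"golang.org/x/sync/errgroup"
)

type lokiQuery struct{ Expr string }
type queryRes struct{ /* data frames */ }

// runQuery issues a single Loki query (stubbed here).
func runQuery(ctx context.Context, q lokiQuery) (queryRes, error) {
	return queryRes{}, nil
}

// runQueries runs all queries concurrently when the toggle is on, so total
// wall time is bounded by the slowest query instead of the sum of all of them.
func runQueries(ctx context.Context, queries []lokiQuery, parallelEnabled bool) ([]queryRes, error) {
	results := make([]queryRes, len(queries))
	if !parallelEnabled { // toggle off: keep the sequential path
		for i, q := range queries {
			r, err := runQuery(ctx, q)
			if err != nil {
				return nil, err
			}
			results[i] = r
		}
		return results, nil
	}
	g, gctx := errgroup.WithContext(ctx)
	for i, q := range queries {
		i, q := i, q // capture loop variables (needed before Go 1.22)
		g.Go(func() error {
			r, err := runQuery(gctx, q)
			if err == nil {
				results[i] = r // each goroutine writes a distinct index
			}
			return err
		})
	}
	return results, g.Wait()
}
```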
* Elasticsearch: Add tracing to data source (sketch below)
* Fix tests
* Address feedback
* Update pkg/tsdb/elasticsearch/response_parser.go
Co-authored-by: Sven Grossmann <sven.grossmann@grafana.com>
* Update pkg/tsdb/elasticsearch/response_parser.go
Co-authored-by: Sven Grossmann <sven.grossmann@grafana.com>
* Track error across both spans
* Add span for decoding of response
* Fix test
* Update setting of errors + fix test
---------
Co-authored-by: Sven Grossmann <sven.grossmann@grafana.com>
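A minimal sketch of the span layout this PR describes (an outer query span, a child span for decoding, and the error recorded on both), written against the OpenTelemetry API directly; Grafana's own tracing wrapper differs, and `doRequest`/`parseResponse` are stubs:

```go
package elasticsearch

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/codes"
)

func doRequest(ctx context.Context) ([]byte, error) { return nil, nil } // stub
func parseResponse(body []byte) error               { return nil }      // stub

// executeQuery wraps the request in a span and decodes the response in a
// separate span, recording an error on both spans if one occurs.
func executeQuery(ctx context.Context) (err error) {
	ctx, span := otel.Tracer("elasticsearch").Start(ctx, "datasource.elasticsearch.executeQuery")
	defer func() {
		if err != nil {
			// track the error on the outer span too
			span.RecordError(err)
			span.SetStatus(codes.Error, err.Error())
		}
		span.End()
	}()

	body, err := doRequest(ctx)
	if err != nil {
		return err
	}

	// dedicated span for decoding of the response
	_, decodeSpan := otel.Tracer("elasticsearch").Start(ctx, "datasource.elasticsearch.decodeResponse")
	defer decodeSpan.End()
	if err = parseResponse(body); err != nil {
		decodeSpan.RecordError(err)
		decodeSpan.SetStatus(codes.Error, err.Error())
		return err
	}
	return nil
}
```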
* Handle the response with different field key order (sketch below)
* More unit tests to cover edge cases
* Cover more edge cases
* make it simpler
* Better test inputs
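A minimal sketch of order-independent parsing with json-iterator: switch on the field name for each key rather than assuming a fixed key order. The `hit` type and field names are illustrative:

```go
package elasticsearch

import jsoniter "github.com/json-iterator/go"

type hit struct {
	Index  string
	ID     string
	Source []byte
}

// parseHit produces the same result no matter which order
// Elasticsearch emits the keys in.
func parseHit(iter *jsoniter.Iterator) hit {
	var h hit
	for field := iter.ReadObject(); field != ""; field = iter.ReadObject() {
		switch field {
		case "_index":
			h.Index = iter.ReadString()
		case "_id":
			h.ID = iter.ReadString()
		case "_source":
			h.Source = iter.SkipAndReturnBytes()
		default:
			iter.Skip() // unknown keys are fine in any position
		}
	}
	return h
}
```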
* Elasticsearch: Adjust naming in logging according to convention (sketch below)
* Log response parsing per response
* Update
* Fix logging of errors when no response
* Add path to error logging
* Update pkg/tsdb/elasticsearch/response_parser.go
* adjust Loki to logging convention
* Fix call resource logging
* Update dataquery
* Update
* Remove redundant logging
* Fix TODO
* Rename action to stage and use variables
* `resp` might be `nil`
* `resp` might be `nil` here as well
* change to `statusCode`
* use correct logger
* also here
* add query information to logging
---------
Co-authored-by: Ivana Huckova <ivana.huckova@gmail.com>
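A minimal sketch of the logging convention described above, using the standard library's `slog` rather than Grafana's logger; the stage names and keys are illustrative. It covers the `resp` might be `nil` guard, the `statusCode` key, and "stage" variables:

```go
package elasticsearch

import (
	"log/slog"
	"net/http"
)

const (
	stageDatabaseRequest = "databaseRequest"
	stageParseResponse   = "parseResponse"
)

// logResponse guards against a nil *http.Response before reading the
// status code, and logs with consistent "stage" and "statusCode" keys.
func logResponse(logger *slog.Logger, resp *http.Response, err error) {
	if err != nil {
		if resp == nil { // resp might be nil when the request itself failed
			logger.Error("Error received from Elasticsearch", "error", err, "stage", stageDatabaseRequest)
			return
		}
		logger.Error("Error received from Elasticsearch", "error", err, "statusCode", resp.StatusCode, "stage", stageDatabaseRequest)
		return
	}
	logger.Info("Response received from Elasticsearch", "statusCode", resp.StatusCode, "stage", stageDatabaseRequest)
}
```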
* Add support for MI authentication to MSSQL
This adds support for managed identity authentication for MSSQL managed instances running in Azure (sketch below).
Co-authored-by: baldm0mma <jev.forsberg@grafana.com>
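A minimal sketch of managed identity auth with the go-mssqldb driver's `azuread` package; the Grafana integration wires this up differently, and the helper name is hypothetical:

```go
package mssql

import (
	"database/sql"

	"github.com/microsoft/go-mssqldb/azuread"
)

// openWithManagedIdentity connects using the Azure managed identity of the
// host instead of a username/password.
func openWithManagedIdentity(server, database string) (*sql.DB, error) {
	// For a user-assigned identity, also append "user id=<client-id>;".
	dsn := "server=" + server + ";database=" + database +
		";fedauth=ActiveDirectoryManagedIdentity;"
	return sql.Open(azuread.DriverName, dsn)
}
```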
* removed infra logs
* improve health check
* remove debug and error logs
* feedback
* Update pkg/tsdb/azuremonitor/azuremonitor-resource-handler.go
* Update pkg/tsdb/azuremonitor/loganalytics/azure-log-analytics-datasource.go
* fix close body error (sketch below)
* update test
* resource request should return errors
* go linter
* go linter
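A minimal sketch of the body-close fix: surface a failure from `Body.Close` instead of silently discarding it, without masking an earlier read error. The helper name is illustrative:

```go
package azuremonitor

import (
	"fmt"
	"io"
	"net/http"
)

// readBody reads the full response body and reports a Close failure
// through the named error return when no earlier error occurred.
func readBody(resp *http.Response) (body []byte, err error) {
	defer func() {
		if cerr := resp.Body.Close(); cerr != nil && err == nil {
			err = fmt.Errorf("closing response body: %w", cerr)
		}
	}()
	return io.ReadAll(resp.Body)
}
```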
* Unify default value
* Use variable to keep default precisions in sync (sketch below)
* Use default precision variable
* Update precision description
* Update defaultPrecisionString and move
* Be more specific in naming of variable
* Revert "Merge remote-tracking branch 'origin' into ivana/es-precision-default-value"
This reverts commit 599f236a77, reversing
changes made to 6742be0c6d.
* Revert wrong merge
* Revert wrong merge with turned off lefthook
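A minimal sketch of the single-source-of-truth pattern behind these commits: keep the default in one variable so the applied default and its user-facing description cannot drift apart. The identifiers are illustrative, not Grafana's actual ones:

```go
package elasticsearch

// Single source of truth for the default precision.
const defaultPrecisionString = "3"

var precisionDescription = "Precision to use. Default is " + defaultPrecisionString + "."

func precisionOrDefault(p string) string {
	if p == "" {
		return defaultPrecisionString
	}
	return p
}
```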
* Metrics summary
* Update query
* Remove colors
* Update states
* Add group by into its own component
* Add group by to search and traceql tabs
* Add spacing for group by
* Update span kind values
* Update span status code values
* Update query based on target + group by
* Cleanup
* Only add targetQuery if not empty
* Add kind=server to table
* Update groupBy query logic (sketch below)
* Add feature toggle
* Use feature toggle
* Self review
* Update target query
* Make gen-cue
* Tweak query
* Update states
* useRef for onChange
* Fix for streaming in search tab
* Add loading state tests
* metricsSummary tests
* Datasource tests
* Review updates
* Update aria-label
* Update test
* Simplify response state
* More manual testing and feedback from sync call
* Prettier and fix test
* Remove group by component from traceql tab
* Cleanup, tests, error messages
* Add feature tracking
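A minimal sketch of the query assembly these commits describe, i.e. only adding the target filter when it is non-empty, pinning `kind=server`, and appending the group-by clause. The function and the exact TraceQL-shaped string are illustrative assumptions, not Tempo's actual API:

```go
package tempo

import "strings"

// buildMetricsSummaryQuery adds targetQuery only when non-empty and
// appends the group-by clause when one is selected.
func buildMetricsSummaryQuery(targetQuery string, groupBy []string) string {
	filters := []string{"kind=server"}
	if targetQuery != "" {
		filters = append(filters, targetQuery)
	}
	q := "{" + strings.Join(filters, " && ") + "}"
	if len(groupBy) > 0 {
		q += " | by(" + strings.Join(groupBy, ", ") + ")"
	}
	return q
}
```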
- The util/converter Prometheus response JSON parser was not checking for errors while parsing. It now does. In particular, if `[dataproxy]/response_limit` is set in Grafana's config, it will now recognize the limit error.
- Fixes #73747
- Adds the `jsonitere` package, which wraps json-iterator/go's Iterator methods with variants that return errors, so errcheck linting can be relied upon (sketch below)
- Impact:
- If something was sending malformed JSON to the prometheus or loki datasources, the previous code might have accepted it and partially processed the data
- Before, there may have been partial data with no error; now there may be an error with no partial results, just the error
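A minimal sketch of the `jsonitere` wrapper idea (a small illustrative subset of the method set, not the package's full API):

```go
package jsonitere

import j "github.com/json-iterator/go"

// Iterator wraps json-iterator's Iterator so every read also returns the
// iterator's error, which errcheck can then enforce at call sites.
type Iterator struct{ i *j.Iterator }

func NewIterator(i *j.Iterator) *Iterator { return &Iterator{i: i} }

func (it *Iterator) ReadString() (string, error) {
	return it.i.ReadString(), it.i.Error
}

func (it *Iterator) ReadObject() (string, error) {
	return it.i.ReadObject(), it.i.Error
}

func (it *Iterator) Skip() error {
	it.i.Skip()
	return it.i.Error
}
```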
* add `id` field to elasticsearch
* add comment
* slightly better perf
* only add `id` to logs frames
* only add `id` for logs responses
* concat `index` and `id` (sketch below)
* change snapshot generation to false
* use better loop
* fix tests
* moved up
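A minimal sketch of the unique-id construction for logs frames: since `_id` is only unique within a single index, `_index` is concatenated in front of it. The types and helper name are illustrative:

```go
package elasticsearch

type docRef struct{ Index, ID string }

// uniqueLogRowIDs builds the id column for a logs frame from each
// document's _index and _id.
func uniqueLogRowIDs(hits []docRef) []string {
	ids := make([]string, len(hits))
	for i, h := range hits {
		ids[i] = h.Index + h.ID
	}
	return ids
}
```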