* Initial schema
- Add types based off of current frontend
* Rename and field-level comments
* Update report and regenerate files
* Rename frontend Azure folder
- Doing this for consistency and to ensure code-generation works
- Update betterer results due to file renames
* Remove default and add back enum vals that I deleted
* Set workspace prop as optional
* Replace template variable types
* Connect frontend query types
- Keep properties optional for now to avoid major changes
- Rename AzureMetricResource
- Correctly use ResultFormat
* Add TSVeneer decorator
* Update schema
* Update type
* Update CODEOWNERS
* Fix gen-cue issue
* Fix backend test
* Fix e2e test
* Update code coverage
* Remove references to old Azure Monitor path
* Review
* Regen files
* Elasticsearch: Implement schema for query
* Comment out types I am not sure how to do
* Manually fix typing for PipelineMetricAggregationWithMultipleBucketPaths and BasePipelineMetricAggregation
* Import types to types.ts to have single source of truth
* Cleanup, reorder
* Remove unnecessary Schema.
* Fix test
* Refactor
* Create cue file and gen ts/go types
* Use generated schema in ts/go
* Run make gen-cue to update report
* Manually extend Phlare query
* Updates
* Add default queryType
* Run make gen-cue to update report.json
* Use dbName in jsonData instead of database
* Use dbName instead of database
* Remove database fields and define dbName instead
* Fix tests
* set database field as empty string
* Tempo data query wip
* Replace TempoQuery with new type from schema
* Added some documentation for each DataQuery field
* Change limit type from number to int64
* Use TempoDataQuery instead of local model in the backend
* Update report.json
* feat: make api request to /loki/api/v1/index/stats
* fix: add /index/stats to callResource valid urls
* feat: make call to getQueryStats when the query changes
* feat: render user tooltip displaying the estimated value for processed data
* fix: add new props to component tests
* test: add tests for query size estimation
* fix: disable error message on request failure
* refactor: add suggestions from code review
* refactor: only pass required query string
* schematize data query
* add the stuff you dingus
* feat(testdatasource): add scenario to generated types
* use generated testdata query in frontend
* update code owners
* Add path exception for testdata datasource
* use specific numeric data types
* fix test
* fix e2e smoketest
* add test data query type
* use test data query type
* fix betterer
* Fix typo
* move to experimental
Co-authored-by: Alex Khomenko <Clarity-89@users.noreply.github.com>
Co-authored-by: Jack Westbrook <jack.westbrook@gmail.com>
Co-authored-by: Marcus Efraimsson <marcus.efraimsson@gmail.com>
Co-authored-by: sam boyer <sdboyer@grafana.com>
Co-authored-by: Alex Khomenko <Clarity-89@users.noreply.github.com>
Look for 'caused_by.reason' in ES error response
When the ES response does not contain `reason`, or `root_cause[0].reason`
is empty, the user has no information about what is going wrong.
An example of the error message after this change:
```
Failed to evaluate queries and expressions: failed to execute query A: Trying to create too many buckets. Must be less than or equal to: [65536] but this number of buckets was exceeded. This limit can be set by changing the [search.max_buckets] cluster level setting.
```
Related to https://github.com/grafana/grafana/issues/61246
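To make the fallback concrete, here is a minimal Go sketch (struct and function names are illustrative, not the actual backend code, and the precedence in Grafana's backend may differ) of preferring the top-level reasons and falling back to `caused_by.reason` when they are empty:
```go
package main

import (
	"encoding/json"
	"fmt"
)

// esError mirrors the relevant parts of an Elasticsearch error response.
type esError struct {
	Error struct {
		Reason    string `json:"reason"`
		RootCause []struct {
			Reason string `json:"reason"`
		} `json:"root_cause"`
		CausedBy struct {
			Reason string `json:"reason"`
		} `json:"caused_by"`
	} `json:"error"`
}

// errorReason returns the most specific human-readable reason available,
// falling back to caused_by.reason when the top-level reasons are empty.
func errorReason(body []byte) (string, error) {
	var e esError
	if err := json.Unmarshal(body, &e); err != nil {
		return "", err
	}
	if len(e.Error.RootCause) > 0 && e.Error.RootCause[0].Reason != "" {
		return e.Error.RootCause[0].Reason, nil
	}
	if e.Error.Reason != "" {
		return e.Error.Reason, nil
	}
	return e.Error.CausedBy.Reason, nil
}

func main() {
	body := []byte(`{"error":{"root_cause":[],"caused_by":{"reason":"Trying to create too many buckets."}}}`)
	reason, _ := errorReason(body)
	fmt.Println(reason) // Trying to create too many buckets.
}
```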
* Return errors from data parsing
* Better error handling
* Fix the tests
* When there is no frame, add an empty frame so metadata gets attached to it
* Fix tests
* Update testdata
* use new log group picker also for non cross-account queries
* cleanup and add comment
* remove unused code
* remove unused test
* add error message when trying to set log groups before saving
* fix bugs from pr feedback
* add more tests
* fix broken test
* Correctly set filter values in portal URL
* Refactor to include dimensions as a part of AzureMonitor query
* Correctly set splitting value in URL
- Add type for dimension filters object
* Update tests
* Don't test dimensions
Automatically forward core plugin request HTTP headers in outgoing HTTP requests.
Core datasource plugin authors don't have to specifically handle forwarding of HTTP
headers, e.g. they do not have to "hardcode" the header names in the datasource plugin,
unless they have custom needs.
Fixes #57065
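A minimal sketch of what the automatic forwarding replaces in plugin code, using the plugin SDK's request headers (the helper name is illustrative):
```go
package example

import (
	"net/http"

	"github.com/grafana/grafana-plugin-sdk-go/backend"
)

// copyForwardedHeaders copies the headers Grafana attached to the plugin
// request (e.g. Authorization or Cookie, when forwarding is enabled) onto an
// outgoing HTTP request. Core datasources get this behaviour automatically;
// this helper only illustrates what no longer has to be hand-written.
func copyForwardedHeaders(req *backend.QueryDataRequest, out *http.Request) {
	for name, value := range req.Headers {
		out.Header.Set(name, value)
	}
}
```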
* GoogleCloudMonitoring: Migrate queries to the new format
* Refactor Aligment and AligmentFunction components (#60235)
* Adapt CloudMonitoringDatasource and CloudMonitoringAnnotationSupport (#60177)
* Fix: avoid migration for new queries (#60375)
* Move preprocessor handling to the backend (#60383)
* Other fixes and new function (#60411)
* Adapt components to the new API (#60451)
* Split metrics query type in time series list and query (#60475)
* Clean up metricQuery references (#60478)
* More bug fixes (#60525)
* SQL Datasources: Use health check for config test
* Remove unnecessary test
* Fix test errors
* Revert mysql go driver update
* Use transform query error
* Use TransformQueryError from sql_engine
* Refactor parse query to functions
* Move parsing to new file
* Create empty result variable and use it when returning early
* Fix linting
* Revert "Create empty result variable and use it when returning early"
This reverts commit 36a503f66e.
* Elasticsearch: Fix ordering in raw_document and add logic for raw_data
* Add comments
* Fix raw data request to use correct timefield
* Fix linting
* Add raw data as metric type
* Fix linting
* Elasticsearch: Add defaults for log query
* Add highlight
* Fix lint
* Add snapshot test
* Implement correct query for logs
* Update
* Adjust naming and comments
* Fix lint
* Remove ifs
* Elasticsearch: Fix ordering in raw_document and add logic for raw_data
* Add comments
* Fix raw data request to use correct timefield
* Fix linting
* Add raw data as metric type
* Fix linting
* Hopefully fix lint
* Datasource settings: Add deprecation notice for database field
* SQL Datasources: Migrate from settings.database to settings.jsonData.database
* Check jsonData first
* Remove comment from docs
Removes request/response connection (hop-by-hop) headers for call resource in a similar
manner to Go's reverse proxy functions. Also removes the Prometheus datasource's
custom call resource header manipulation of hop-by-hop headers.
Fixes #60076
Ref #58646
Co-authored-by: Will Browne <wbrowne@users.noreply.github.com>
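For reference, a sketch of the hop-by-hop stripping approach used by Go's reverse proxy, which this change mirrors (the helper is illustrative, not Grafana's actual implementation):
```go
package example

import (
	"net/http"
	"strings"
)

// hopHeaders lists hop-by-hop headers that must not be forwarded, mirroring
// the set handled by Go's net/http/httputil reverse proxy.
var hopHeaders = []string{
	"Connection",
	"Proxy-Connection",
	"Keep-Alive",
	"Proxy-Authenticate",
	"Proxy-Authorization",
	"Te",
	"Trailer",
	"Transfer-Encoding",
	"Upgrade",
}

// removeHopByHopHeaders deletes connection-specific headers, including any
// header names listed in the Connection header itself, before a request or
// response is forwarded.
func removeHopByHopHeaders(h http.Header) {
	for _, value := range h.Values("Connection") {
		for _, name := range strings.Split(value, ",") {
			if name = strings.TrimSpace(name); name != "" {
				h.Del(name)
			}
		}
	}
	for _, name := range hopHeaders {
		h.Del(name)
	}
}
```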
* Elasticsearch: Fix removing of empty settings from query in backend implementation
* Update
* Update
* Update pkg/tsdb/elasticsearch/time_series_query.go
Co-authored-by: Gábor Farkas <gabor.farkas@gmail.com>
Co-authored-by: Gábor Farkas <gabor.farkas@gmail.com>
* cleanup cloudwatch.go
* streamline interface naming
* use utility func
* rename test utils file
* move util function to where they are used
* move dtos to models
* split integration tests from the rest
* Update pkg/tsdb/cloudwatch/cloudwatch.go
Co-authored-by: Isabella Siu <Isabella.siu@grafana.com>
* refactor error codes aggregation
* move error messages to models
Co-authored-by: Isabella Siu <Isabella.siu@grafana.com>
* make create call consistent with update and delete
* send multiple targets to graphite and correlate the responses with the requests
* make create call consistent with update and delete
* send multiple targets to graphite and correlate the responses with the requests
* Revert "make create call consistent with update and delete"
This reverts commit 26b6463bd6.
* refactor query -> target parsing and fix unit tests
* add additional validations and more unit tests
* change error statement to warn
Adding support for backend plugin client middlewares. This allows headers in outgoing
backend plugin and HTTP requests to be modified using client middlewares.
The following client middlewares added:
Forward cookies: Will forward incoming HTTP request Cookies to outgoing plugins.Client
and HTTP requests if the datasource has enabled forwarding of cookies (keepCookies).
Forward OAuth token: Will set OAuth token headers on outgoing plugins.Client and HTTP
requests if the datasource has enabled Forward OAuth Identity (oauthPassThru).
Clear auth headers: Will clear any outgoing HTTP headers that were part of the incoming
HTTP request and were used when authenticating to Grafana.
The currently suggested way to register client middlewares is to have a separate package,
pluginsintegration, responsible for bootstrapping/instantiating the backend plugin client with
middlewares and/or, longer term, bootstrapping/instantiating plugin management.
Fixes #54135
Related to #47734
Related to #57870
Related to #41623
Related to #57065
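The middleware shape looks roughly like the Go sketch below; QueryDataHandler and ForwardCookiesMiddleware are simplified, hypothetical stand-ins for the real plugins.Client middleware chain:
```go
package example

import (
	"context"

	"github.com/grafana/grafana-plugin-sdk-go/backend"
)

// QueryDataHandler is a simplified, hypothetical stand-in for the part of the
// plugin client that a middleware wraps; the real plugins.Client interface in
// Grafana is larger.
type QueryDataHandler interface {
	QueryData(ctx context.Context, req *backend.QueryDataRequest) (*backend.QueryDataResponse, error)
}

// Middleware decorates a QueryDataHandler with extra behaviour.
type Middleware func(next QueryDataHandler) QueryDataHandler

type forwardCookies struct {
	next    QueryDataHandler
	cookies string
}

// QueryData sets the Cookie header on the outgoing plugin request before
// delegating to the wrapped client, roughly what the "forward cookies"
// middleware does when the datasource has keepCookies enabled.
func (m *forwardCookies) QueryData(ctx context.Context, req *backend.QueryDataRequest) (*backend.QueryDataResponse, error) {
	if m.cookies != "" {
		if req.Headers == nil {
			req.Headers = map[string]string{}
		}
		req.Headers["Cookie"] = m.cookies
	}
	return m.next.QueryData(ctx, req)
}

// ForwardCookiesMiddleware returns a middleware forwarding the given cookie
// string; in Grafana the string comes from the incoming HTTP request and is
// filtered against the datasource's keepCookies setting.
func ForwardCookiesMiddleware(cookies string) Middleware {
	return func(next QueryDataHandler) QueryDataHandler {
		return &forwardCookies{next: next, cookies: cookies}
	}
}
```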
* Lattice: Point to private prerelease of aws-sdk-go (#515)
* point to private prerelease of aws-sdk-go
* fix build issue
* Lattice: Adding a feature toggle (#549)
* Adding a feature toggle for lattice
* Change name of feature toggle
* Lattice: List accounts (#543)
* Separate layers
* Introduce testify/mock library
Co-authored-by: Shirley Leu <4163034+fridgepoet@users.noreply.github.com>
* point to version that includes metric api changes (#574)
* add accounts component (#575)
* Test refactor: remove unneeded clientFactoryMock (#581)
* Lattice: Add monitoring badge (#576)
* add monitoring badge
* fix tests
* solve conflict
* Lattice: Add dynamic label for account display name (#579)
* Build: Automatically sync lattice-main with OSS
* Lattice: Point to private prerelease of aws-sdk-go (#515)
* point to private prerelease of aws-sdk-go
* fix build issue
* Lattice: Adding a feature toggle (#549)
* Adding a feature toggle for lattice
* Change name of feature toggle
* Lattice: List accounts (#543)
* Separate layers
* Introduce testify/mock library
Co-authored-by: Shirley Leu <4163034+fridgepoet@users.noreply.github.com>
* point to version that includes metric api changes (#574)
* add accounts component (#575)
* Test refactor: remove unneeded clientFactoryMock (#581)
* Lattice: Add monitoring badge (#576)
* add monitoring badge
* fix tests
* solve conflict
* add account label
Co-authored-by: Shirley Leu <4163034+fridgepoet@users.noreply.github.com>
Co-authored-by: Sarah Zinger <sarah.zinger@grafana.com>
* fix import
* solve merge related problem
* add account info (#608)
* add back namespaces handler
* Lattice: Parse account id and return it to frontend (#609)
* parse account id and return to frontend
* fix route test
* only show badge when feature toggle is enabled (#615)
* Lattice: Refactor resource response type and return account (#613)
* refactor resource response type
* remove unused file.
* go lint
* fix tests
* remove commented code
* Lattice: Use account as input when listing metric names and dimensions (#611)
* use account in resource requests
* add account to response
* revert accountInfo to accountId
* PR feedback
* unit test account in list metrics response
* remove unused asserts
* don't assert on response that is not relevant to the test
* removed dupe test
* pr feedback
* rename request package (#626)
* Lattice: Move account component and add tooltip (#630)
* move accounts component to the top of metric stat editor
* add tooltip
* CloudWatch: add account to GetMetricData queries (#627)
* Add AccountId to metric stat query
* Lattice: Account variable support (#625)
* add variable support in accounts component
* add account variable query type
* update variables
* interpolate variable before its sent to backend
* handle variable change in hooks
* remove unused import
* Update public/app/plugins/datasource/cloudwatch/components/Account.tsx
Co-authored-by: Sarah Zinger <sarah.zinger@grafana.com>
* Update public/app/plugins/datasource/cloudwatch/hooks.ts
Co-authored-by: Sarah Zinger <sarah.zinger@grafana.com>
* add one more unit test
Co-authored-by: Sarah Zinger <sarah.zinger@grafana.com>
* cleanup (#629)
* Set account Id according to crossAccountQuerying feature flag in backend (#632)
* CloudWatch: Change spelling of feature-toggle (#634)
* Lattice Logs (#631)
* Lattice Logs
* Fixes after CR
* Lattice: Bug: fix dimension keys request (#644)
* fix dimension keys
* fix lint
* more lint
* CloudWatch: Add tests for QueryData with AccountId (#637)
* Update from breaking change (#645)
* Update from breaking change
* Remove extra interface and methods
Co-authored-by: Shirley Leu <4163034+fridgepoet@users.noreply.github.com>
* CloudWatch: Add business logic layer for getting log groups (#642)
Co-authored-by: Sarah Zinger <sarah.zinger@grafana.com>
* Lattice: Fix - unset account id in region change handler (#646)
* move reset of account to region change handler
* fix broken test
* Lattice: Add account id to metric stat query deep link (#656)
add account id to metric stat link
* CloudWatch: Add new log groups handler for cross-account querying (#643)
* Lattice: Add feature tracking (#660)
* add tracking for account id presence in metrics query
* also check feature toggle
* fix broken test
* CloudWatch: Add route for DescribeLogGroups for cross-account querying (#647)
Co-authored-by: Erik Sundell <erik.sundell87@gmail.com>
* Lattice: Handle account id default value (#662)
* make sure right type is returned
* set right default values
* Suggestions to lattice changes (#663)
* Change ListMetricsWithPageLimit response to slice of non-pointers
* Change GetAccountsForCurrentUserOrRole response to be not pointer
* Clean up Cleanup calls in tests
* Remove CloudWatchAPI as part of mock
* Resolve conflicts
* Add Latest SDK (#672)
* add tooltip (#674)
* Docs: Add documentation for CloudWatch cross account querying (#676)
* wip docs
* change wordings
* add sections about metrics and logs
* change from monitoring to observability
* Update docs/sources/datasources/aws-cloudwatch/_index.md
Co-authored-by: Sarah Zinger <sarah.zinger@grafana.com>
* Update docs/sources/datasources/aws-cloudwatch/query-editor/index.md
Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com>
* Update docs/sources/datasources/aws-cloudwatch/query-editor/index.md
Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com>
* Update docs/sources/datasources/aws-cloudwatch/query-editor/index.md
Co-authored-by: Sarah Zinger <sarah.zinger@grafana.com>
* Update docs/sources/datasources/aws-cloudwatch/query-editor/index.md
Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com>
* apply pr feedback
* fix file name
* more pr feedback
* pr feedback
Co-authored-by: Sarah Zinger <sarah.zinger@grafana.com>
Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com>
* use latest version of the aws-sdk-go
* Fix tests' mock response type
* Remove change in Azure Monitor
Co-authored-by: Sarah Zinger <sarah.zinger@grafana.com>
Co-authored-by: Shirley Leu <4163034+fridgepoet@users.noreply.github.com>
Co-authored-by: Fiona Artiaga <89225282+GrafanaWriter@users.noreply.github.com>
This change preallocates slices and maps where the size of the data is known before the object is created.
Co-authored-by: Joe Blubaugh <joe.blubaugh@grafana.com>
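As a small illustration of the pattern (hypothetical helper, not code from this change):
```go
package example

// buildLabels illustrates the preallocation pattern: when the number of
// elements is known before the object is created, give the slice its capacity
// and the map a size hint so repeated appends and inserts don't reallocate.
func buildLabels(keys, values []string) ([]string, map[string]string) {
	names := make([]string, 0, len(keys))        // slice capacity known up front
	byName := make(map[string]string, len(keys)) // map size hint
	for i, k := range keys {
		names = append(names, k)
		byName[k] = values[i]
	}
	return names, byName
}
```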
* Introduce a new feature flag for prometheus buffered client
* Use querydata client as default and put buffered client behind the feature flag
* Remove prometheusStreamingJSONParser feature flag as it is not needed anymore
* Update tests
* Fix unit tests
* Update feature flag description
* clean up and document integration test convention
* clarify integration test conventions
* clean up integration tests that don't follow convention
* mark testIntegration* functions as helpers to avoid confusion
* Chore: Update grafana-plugin-sdk-go to v0.142.0
* Update tests and golden files for 207 status code
* Chore: Move update flag definition at the top in response_parser_test.go
* retrigger
Co-authored-by: Will Browne <will.browne@grafana.com>
* make sql engine use log context for logs
* update tempo to get log context
* update opentsdb to use log context
* update es client to use log context
* Add phlare datasource
* Rename
* Add parca
* Add self field to parca
* Make sure phlare works with add to dashboard flow
* Add profiling category and hide behind feature flag
* Update description and logos
* Update phlare icon
* Cleanup logging
* Clean up logging
* Fix for shift+enter
* onRunQuery to set label
* Update type naming
* Fix lint
* Fix test and quality issues
Co-authored-by: Joey Tawadrous <joey.tawadrous@grafana.com>
* refactor metrics request
* Update pkg/tsdb/cloudwatch/routes/dimension_keys_test.go
Co-authored-by: Shirley <4163034+fridgepoet@users.noreply.github.com>
* return metric struct value instead of pointer
* make it possible to test hard coded metrics service
* test all paths in route
* fix broken test
* fix one more broken test
* add integration test
Co-authored-by: Shirley <4163034+fridgepoet@users.noreply.github.com>
* Revert "Revert "Prometheus: Type and flavor configuration (#56496)" (#57552)"
This reverts commit 2432ce619a.
* Adds new fields and documentation for Prometheus datasource configuration: Prometheus type and version
* Adding two new fields to the data JSON in the Prometheus datasource configuration: prometheusType and prometheusVersion.
* The version field will attempt to auto-detect via the buildinfo API when the Prometheus type is selected
* Add start time and end time parameters while querying tempo traces
* Added configurable time shift to query by trace id
* Test that the URL is formatted correctly
* Added test to check for time shift
* Improved label and tooltip of new time shift settings
Co-authored-by: André Pereira <adrapereira@gmail.com>
* use new layered architecture in get dimension keys request
* go lint fixes
* pr feedback
* more pr feedback
* remove unused code
* refactor route middleware
* change signature
* add integration tests for the dimension keys route
* use request suffix instead of query
* use typed args also in frontend
* remove unused import
* harmonize naming
* fix merge conflict
* chore: add alias for InitTestDB and Session
Adds an alias for the sqlstore InitTestDB and Session, and updates tests using these to reduce dependencies on the sqlstore.Store.
* next pass of removing sqlstore imports
* last little bit
* remove mockstore where possible
* Elasticsearch: Fix calculation of trimEdges
When trimEdges is set to a value greater than 1, we need to drop both the
first and last samples of the data from the response.
* Elasticsearch: Fix reading trimEdges from the query settings
Currently the trimEdges property in the panel JSON is stored as a string
and not directly as a number.
This caused reading of the value to fail in the Go backend,
because the simplejson.Int() method doesn't properly handle this case.
The decoding failure went unnoticed because of the early
return, causing the trimEdges configuration to be ignored.
* Refactor castToInt to also return an error
Add a new test case that sets the `trimEdges` property as a quoted
number.
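A sketch of the refactored helper described above, assuming the value arrives via the bitly simplejson package (Grafana carries its own simplejson component, so the real function may differ) and may be either a number or a quoted number:
```go
package example

import (
	"fmt"
	"strconv"

	simplejson "github.com/bitly/go-simplejson"
)

// castToInt reads a value that may be stored either as a JSON number or as a
// quoted number (e.g. "1"), which is how trimEdges can arrive from the panel
// JSON, and returns an error instead of silently falling back to zero.
func castToInt(j *simplejson.Json) (int, error) {
	// Try the numeric representation first.
	if i, err := j.Int(); err == nil {
		return i, nil
	}
	// Fall back to a string containing a number.
	s, err := j.String()
	if err != nil {
		return 0, fmt.Errorf("value is neither a number nor a string: %w", err)
	}
	i, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("cannot convert %q to int: %w", s, err)
	}
	return i, nil
}
```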
* Added nextPageToken prop
* Adding first and pageToken condition to while loop
* clean up
* revert gitignore
* fix go lint
* Added logic to builder too
* Removed pageSize - was for local testing
* gofmt
* extracted doRequest function
* extracted doRequest in query too
* Addressed filter comments
* Addressed query comments
* go fmt
* removed pageSize added for testing
* go fmt again
* Flamegraph
* Updated flame graph width/height values
* Fix top table rendering issue
* Add feature toggle for flamegraph in explore
* Update tests
* Hide flamegraph from dash panel viz list if feature toggle not enabled
* Show table if no flameGraphFrames
* Add flame graph to testdata ds
* Minor improvement
This commit extends graphite QueryData() instrumentation to include information
about possible errors in the traces.
I've added an attribute for the graphite response code, as well as
error records when there are any.
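In OpenTelemetry terms (Grafana wraps tracing in its own package, so treat the exact calls as illustrative), the instrumentation looks roughly like this:
```go
package example

import (
	"context"
	"net/http"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/codes"
)

// doGraphiteRequest issues the request inside a span, records the response
// status code as an attribute, and records the error (if any) on the span.
func doGraphiteRequest(ctx context.Context, client *http.Client, req *http.Request) (*http.Response, error) {
	ctx, span := otel.Tracer("graphite").Start(ctx, "graphite.QueryData")
	defer span.End()

	res, err := client.Do(req.WithContext(ctx))
	if err != nil {
		span.RecordError(err)
		span.SetStatus(codes.Error, err.Error())
		return nil, err
	}
	span.SetAttributes(attribute.Int("graphite.response.code", res.StatusCode))
	return res, nil
}
```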
This restores the FromAlert header to Prometheus for Grafana-managed alert queries.
It does this by reverting "Prometheus: Remove middleware for custom headers (#51518)", but changes the middleware so that it only handles the FromAlert header.
This reverts commit 2372501368.
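The restored behaviour amounts to copying a single header; a minimal, hypothetical sketch:
```go
package example

import "net/http"

// fromAlertHeader is the header Grafana sets on alerting-originated queries.
const fromAlertHeader = "FromAlert"

// copyFromAlertHeader forwards only the FromAlert header from the incoming
// request to the outgoing one, leaving all other custom headers alone.
// Hypothetical helper for illustration.
func copyFromAlertHeader(incoming, outgoing http.Header) {
	if v := incoming.Get(fromAlertHeader); v != "" {
		outgoing.Set(fromAlertHeader, v)
	}
}
```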
* WIP
* Set public_suffix to a pre Ruby 2.6 version
* we don't need to install python
* Stretch->Buster
* Bump versions in lib.star
* Manually update linter
Sort of messy, but the .mod file needs to contain all dependencies that
use 1.16+ features; otherwise they're assumed to be compiled with
-lang=go1.16 and cannot access generics et al.
Bingo doesn't seem to understand that, but it's possible to manually
update things to keep Bingo happy.
* undo reformatting
* Various lint improvements
* More from the linter
* goimports -w ./pkg/
* Disable gocritic
* Add/modify linter exceptions
* lint + flatten nested list
Go 1.19 doesn't support nested lists, and there wasn't an obvious workaround.
https://go.dev/doc/comment#lists
Removes various custom header logic sprinkled around in the backend.
It should automatically be applied to outgoing HTTP requests via the
CustomHeadersMiddleware.
This also removes decryption of SecureJSONData to populate custom
headers in ngalert, which seemed to have caused a ton of CPU usage.
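The general shape of such a middleware is a wrapped transport; a sketch (the real CustomHeadersMiddleware is wired through the plugin SDK's httpclient middleware chain, so the names here are illustrative):
```go
package example

import "net/http"

// customHeadersRoundTripper applies configured header name/value pairs to
// every outgoing request, so individual datasources no longer need their own
// custom-header code.
type customHeadersRoundTripper struct {
	next    http.RoundTripper
	headers map[string]string
}

func (rt *customHeadersRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	// Clone before mutating, per the http.RoundTripper contract.
	req = req.Clone(req.Context())
	for name, value := range rt.headers {
		req.Header.Set(name, value)
	}
	return rt.next.RoundTrip(req)
}

// WithCustomHeaders wraps a transport with the custom-headers behaviour.
func WithCustomHeaders(next http.RoundTripper, headers map[string]string) http.RoundTripper {
	if next == nil {
		next = http.DefaultTransport
	}
	return &customHeadersRoundTripper{next: next, headers: headers}
}
```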
* Search: use SQL search as a fallback when bluge indexing is ongoing
* Search: lint
* Search: feedback fixes - return an empty frame with a special name
* Search: revert readiness check query type
* Search: remove println
* remove sleep, get coffee
* Refactor migrations and tests for secrets kvstore
* Use fake secrets store as a shortcut on tests
* Update wire
* Use global migration logger
* Fix ds proxy tests
* Fix linting issues
* Rename data source test setup function
* Move SignedInUser to user service and RoleType and Roles to org
* Use go naming convention for roles
* Fix some imports and leftovers
* Fix ldap debug test
* Fix lint
* Fix lint 2
* Fix lint 3
* Fix type and not needed conversion
* Clean up messages in api tests
* Clean up api tests 2
* Add check health functions for each datasource and generic checkHealth function
* Log backend errors
* Update testDatasource function
- Remove unused testDatasource functions from pseudo datasources
* Switch datasource to extend DataSourceWithBackend
* Improve errors and responses from health endpoint
* Fix backend lint issues
* Remove unneeded frontend tests
* Remove unused/unnecessary datasource methods
* Update types
* Improve message construction
* Stubbing out checkHealth tests
* Update tests
- Remove comments
- Simplify structure
* Update log analytics health check to query data rather than retrieve workspace metadata
* Fix lint issue
* Fix frontend lint issues
* Update pkg/tsdb/azuremonitor/azuremonitor.go
Co-authored-by: Andres Martinez Gotor <andres.martinez@grafana.com>
* Updates based on PR comments
- Don't use deprecated default workspace field
- Handle situation if no workspace is found by notifying user
- Correctly handle health responses
* Remove debug line
* Make use of defined api versions
* Remove field validation functions
* Expose errors in frontend
* Update errors and tests
* Remove instanceSettings
* Update error handling
* Improve error handling and presentation
* Update tests and correctly check error type
* Refactor AzureHealthCheckError and update tests
* Fix lint errors
Co-authored-by: Andres Martinez Gotor <andres.martinez@grafana.com>
Adds tags to the opentsdb response. This means the tags propagate to alert messages, making it quick to understand the source of the alert.
Fixes: https://github.com/grafana/grafana/issues/47092
Co-authored-by: SLAMA <36870081+xy-man@users.noreply.github.com>
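A sketch of attaching the tags as frame labels so they travel with the series (assuming the OpenTSDB response has already been decoded; the field layout is illustrative, not the datasource's exact frame shape):
```go
package example

import (
	"time"

	"github.com/grafana/grafana-plugin-sdk-go/data"
)

// frameWithTags attaches the tags returned by OpenTSDB as labels on the value
// field so downstream consumers, including alert messages, can see where the
// series came from.
func frameWithTags(metric string, tags map[string]string, timestamps []time.Time, values []float64) *data.Frame {
	return data.NewFrame(metric,
		data.NewField("time", nil, timestamps),
		data.NewField("value", data.Labels(tags), values),
	)
}
```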
* Chore: Exclude integration tests from running on test-backend step
* Remove -v from go test command
* Add check to skip integration tests before each integration test
* Try to restart pipeline
* Retrying to make pipeline run
* pass on all headers except for accept headers
* touch up and testing
* add custom header values to resource queries
* remove my picture. oops
* handle gzip responses as well
* fix linting issues
* add my space
* no lint
* removed cookies from being proxied
* clean up and handle errors from io.reader.Close() calls
* Sent resource calls for metadata to the backend
* moved resource calls to the backend
* code review feedback
* fixed post with body
* statuscode >= 300
* cleanup
* fixed tests
* fixed datasource tests
* code review feedback
* force some other endpoints to only GET
* fix linting errors
* fixed tests
* was able to remove section of redundant code
* cleanup and code review feedback
* moved query_exemplars to get request
* fixed return on error
* went back to resource calls, but using the backendsrv directly
* moved to a resource call with fallback
* fixed tests
* check for proper messages
* proper check for invalid calls
* code review changes
* Correctly encode default project response
* Make getGCEDefaultProject a method of Service and add test
* Handle error appropriately
* Update test and function definition