* Move scope type vars to testutil package
* Expose parts of state historian for use in annotation backend
* Implement Loki ASH Annotation store
This store only implements the `Get` method of a RepositoryImpl, since alert state history
writes to Loki elsewhere.
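A minimal sketch of that read-only shape, with illustrative store, item, and query types rather than the actual Grafana ones:

```go
// Sketch of a read-only annotation store backed by Loki. Item, Query, and
// the store name are assumptions for illustration; only Get is implemented
// because alert state history is written to Loki elsewhere.
package annotationsimpl

import (
	"context"
	"errors"
)

type Item struct {
	RuleUID string
	Text    string
	Time    int64
}

type Query struct {
	RuleUID  string
	From, To int64
}

type lokiReader interface {
	Query(ctx context.Context, q Query) ([]Item, error)
}

// LokiHistorianStore satisfies only the read side of the repository.
type LokiHistorianStore struct {
	client lokiReader
}

func (s *LokiHistorianStore) Get(ctx context.Context, q Query) ([]Item, error) {
	return s.client.Query(ctx, q)
}

// Add is unsupported: writes happen in the state historian, not here.
func (s *LokiHistorianStore) Add(ctx context.Context, _ Item) error {
	return errors.New("not implemented: alert state history writes to Loki elsewhere")
}
```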
* Use interface for Loki HTTP Client
* Add tests for Loki ASH Annotation store
* Add missing test
* Fix lint
* Organize tests
* Add filter tests
* Improve tests
* Move filter logic into outer function
* Fix lint
* Add comment
* Fix tests
* Fix lint
* Rename historian store + refactor
* Cleanup historian store
* Fix tests
* Minor cleanup
* Use new `ShouldRecordAnnotation` filter
* Fix logic and add tests for this check
* Fix typos, remove unused variables, `< 1` -> `== 0`
* More closely mimic RBAC filter from xorm to ensure correct logic
* Move off weaveworks client
* Address PR comments
Backend:
* Update the Grafana Alerting engine to provide feedback to HysteresisCommand. The feedback information is stored in state.Manager as a fingerprint of each state and persisted to the database. Only fingerprints that belong to Pending and Alerting states are considered "loaded" and provided back to the command (a rough sketch follows this list).
- add ResultFingerprint to state.State. It's different from other fingerprints we store in the state because it is calculated from the result labels.
- add rule_fingerprint column to alert_instance
- update alerting evaluator to accept AlertingResultsReader via context, and update scheduler to provide it.
- add AlertingResultsFromRuleState that implements the new interface in eval package
- update getExprRequest to patch the hysteresis command.
* Only one "Recovery Threshold" query is allowed per alert rule, and it must be the rule's Condition.
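A rough sketch of that feedback loop; `AlertingResultsReader` is named above, while the fingerprint representation and context plumbing here are assumptions:

```go
package eval

import "context"

// Fingerprint identifies an evaluation result by its labels (assumed
// representation; the real code hashes the result labels).
type Fingerprint uint64

// AlertingResultsReader exposes fingerprints of states that are Pending or
// Alerting, so the hysteresis command knows which series are "loaded".
type AlertingResultsReader interface {
	Read() map[Fingerprint]struct{}
}

type readerKey struct{}

// WithReader attaches the reader to the evaluation context; the scheduler
// would call this per rule before evaluating (assumed plumbing).
func WithReader(ctx context.Context, r AlertingResultsReader) context.Context {
	return context.WithValue(ctx, readerKey{}, r)
}

// ReaderFromContext retrieves the reader inside the evaluator, which then
// patches the hysteresis command with the loaded fingerprints.
func ReaderFromContext(ctx context.Context) (AlertingResultsReader, bool) {
	r, ok := ctx.Value(readerKey{}).(AlertingResultsReader)
	return r, ok
}
```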
Frontend:
* Add hysteresis option to Threshold in UI. It's called "Recovery Threshold"
* Add test for getUnloadEvaluatorTypeFromCondition
* Hide hysteresis in panel expressions
* Refactor isInvalid and add test for it
* Remove unnecessary React.memo
* Add tests for updateEvaluatorConditions
---------
Co-authored-by: Sonia Aguilar <soniaaguilarpeiron@gmail.com>
* Alerting: Attempt to retry retryable errors
Retrying has been broken for a good while now (at least since version 9.4); this change attempts to re-introduce retries in their simplest and safest form possible.
I first introduced #79095 to make sure we don't disrupt or put additional load on our customers' data sources with this change in a patch release. Paired with this change, retries can now work as expected.
There are two small differences between how retries work now and how they used to work in legacy alerting.
Retries only occur for valid alert definitions: if we suspect the error comes from a malformed alert definition, we skip retrying.
We have added a constant backoff of 1s between retries (sketched below).
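A simplified sketch of that retry shape (constant 1s backoff, no retries for malformed definitions); the sentinel error and helper are illustrative, not the actual scheduler code:

```go
package schedule

import (
	"context"
	"errors"
	"time"
)

// ErrInvalidRule marks errors caused by a malformed alert definition;
// retrying those would never succeed (assumed sentinel for illustration).
var ErrInvalidRule = errors.New("invalid alert definition")

const retryBackoff = 1 * time.Second

func evaluateWithRetries(ctx context.Context, attempts int, eval func(context.Context) error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = eval(ctx); err == nil {
			return nil
		}
		// Skip retrying when the definition itself is at fault.
		if errors.Is(err, ErrInvalidRule) {
			return err
		}
		// Constant backoff between attempts, but respect cancellation.
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(retryBackoff):
		}
	}
	return err
}
```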
---------
Signed-off-by: gotjosh <josue.abreu@gmail.com>
* Alerting: Attempt to retry retryable errors
Currently in a draft state, but this was the minimal diff I could put together to exemplify how we could achieve this.
Signed-off-by: gotjosh <josue.abreu@gmail.com>
---------
Signed-off-by: gotjosh <josue.abreu@gmail.com>
Changes SSE to no longer fail all queries when one fails. Now only the failed query itself, and the nodes that depend on it, will error.
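A sketch of the propagation rule with illustrative node types: only the failed node and anything that transitively depends on it error; unrelated queries keep their results:

```go
package expr

import "fmt"

// node is a minimal stand-in for an SSE query/expression node.
type node struct {
	refID     string
	dependsOn []string
}

// markErrored returns every refID that must error once `failed` fails:
// the node itself plus anything that (transitively) depends on it.
// Queries outside this set keep their results.
func markErrored(nodes []node, failed string) map[string]error {
	errored := map[string]error{failed: fmt.Errorf("query %s failed", failed)}
	for changed := true; changed; {
		changed = false
		for _, n := range nodes {
			if _, done := errored[n.refID]; done {
				continue
			}
			for _, dep := range n.dependsOn {
				if cause, ok := errored[dep]; ok {
					errored[n.refID] = fmt.Errorf("depends on failed node: %w", cause)
					changed = true
					break
				}
			}
		}
	}
	return errored
}
```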
---------
Co-authored-by: Gilles De Mey <gilles.de.mey@gmail.com>
* calculate cacheID instead of literals
* use mocked clocks (see the sketch after this list)
* advance clocks with the eval results
* use clearer timestamp aliases
* make expected state labels be more clear to read
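For illustration, a tiny test in the spirit of those cleanups, assuming the benbjohnson/clock mock (the library choice is an assumption of this sketch):

```go
package state

import (
	"testing"
	"time"

	"github.com/benbjohnson/clock"
)

// Using a mocked clock lets the test advance time in lockstep with each
// eval result instead of sleeping, with clear timestamp aliases.
func TestAdvanceWithResults(t *testing.T) {
	mock := clock.NewMock()
	t1 := mock.Now() // evaluation 1 timestamp
	mock.Add(10 * time.Second)
	t2 := mock.Now() // evaluation 2 timestamp

	if !t2.Equal(t1.Add(10 * time.Second)) {
		t.Fatalf("expected t2 = t1 + 10s, got %v", t2)
	}
}
```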
Co-authored-by: Matthew Jacobson <matthew.jacobson@grafana.com>
This commit updates eval.go to improve the performance of matching
captures in the general case. In some cases we have reduced the
runtime of the function from tens of minutes to a few hundred milliseconds.
In the case where no capture matches the exact labels, we revert to
the current subset/superset match, but with a reduced search space
due to grouping captures.
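A sketch of that fast path: try an O(1) exact-label lookup first and fall back to the subset/superset scan only on a miss. The fingerprint helper and types are illustrative:

```go
package eval

import (
	"fmt"
	"sort"
	"strings"
)

type capture struct {
	labels map[string]string
	value  float64
}

// fingerprint builds a deterministic key from a label set (illustrative;
// the real code would hash the labels).
func fingerprint(labels map[string]string) string {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var b strings.Builder
	for _, k := range keys {
		fmt.Fprintf(&b, "%s=%s;", k, labels[k])
	}
	return b.String()
}

// matchCapture tries the O(1) exact match before any subset/superset scan.
func matchCapture(byFingerprint map[string]capture, labels map[string]string,
	slowMatch func(map[string]string) (capture, bool)) (capture, bool) {
	if c, ok := byFingerprint[fingerprint(labels)]; ok {
		return c, true
	}
	// Fall back to the (reduced) subset/superset search.
	return slowMatch(labels)
}
```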
This commit changes extractEvalString to sort NumberCaptureValues
in ascending order of Var before building the output string. This
means that users will see EvaluationString in a consistent order,
and it also makes it possible to assert its output in tests.
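The sort itself is essentially the following; the capture type is a trimmed-down stand-in:

```go
package eval

import "sort"

// NumberValueCapture is a trimmed-down stand-in for the real capture type.
type NumberValueCapture struct {
	Var   string
	Value float64
}

// sortCaptures orders captures by Var ascending so extractEvalString
// produces the same EvaluationString on every run.
func sortCaptures(captures []NumberValueCapture) {
	sort.Slice(captures, func(i, j int) bool {
		return captures[i].Var < captures[j].Var
	})
}
```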
* introduce a new node type, ML, and implement an outlier command that uses the ML plugin as a source of data (see the sketch after this list).
* add feature flag mlExpressions that guards the feature
* add NodeTypeFromDatasourceUID and DataSourceModelFromNodeType()
* deprecate expr.DataSourceModel
* replace usages of IsDataSource with NodeTypeFromDatasourceUID
* replace usages of DataSourceModel with DataSourceModelFromNodeType()
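A rough sketch of the UID-to-node-type mapping; `__expr__` is the expression pseudo-datasource UID, while the ML UID constant and the node-type values are assumptions:

```go
package expr

// NodeType classifies server-side expression nodes (values illustrative).
type NodeType int

const (
	TypeDatasourceNode NodeType = iota
	TypeCMDNode                 // expression command
	TypeMLNode                  // new ML node type
)

const (
	DatasourceUID   = "__expr__" // expression pseudo-datasource
	mlDatasourceUID = "__ml__"   // assumed UID for the ML plugin
)

// NodeTypeFromDatasourceUID replaces scattered IsDataSource checks with a
// single mapping from datasource UID to node type.
func NodeTypeFromDatasourceUID(uid string) NodeType {
	switch uid {
	case DatasourceUID:
		return TypeCMDNode
	case mlDatasourceUID:
		return TypeMLNode
	default:
		return TypeDatasourceNode
	}
}
```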
This commit fixes a bug where DatasourceUID and RefID annotations are
missing for DatasourceNoData alerts in Grafana 9.5. This bug affects
datasource plugins that have moved to using the data plane contract.
Takes a specific code path for data that identifies itself as dataplane instead of "guessing" what the data is.
The data identifies itself as dataplane by having both of the following frame metadata properties:
- a 'TypeVersion' property that is greater than 0.0
- a 'Type' property
The disableSSEDataplane feature flag disables this functionality and uses the old code path for all queries regardless.
See https://github.com/grafana/grafana-plugin-sdk-go/blob/main/data/contract_docs/contract.md for dataplane details.
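That self-identification check looks roughly like this, using the frame metadata types from grafana-plugin-sdk-go (the surrounding handling is simplified):

```go
package expr

import "github.com/grafana/grafana-plugin-sdk-go/data"

// isDataplaneFrame reports whether a frame identifies itself as dataplane:
// it must declare a Type and a TypeVersion greater than 0.0.
func isDataplaneFrame(frame *data.Frame) bool {
	if frame == nil || frame.Meta == nil {
		return false
	}
	return frame.Meta.Type != "" && frame.Meta.TypeVersion != (data.FrameTypeVersion{})
}
```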
* Alerting: Tiny refactor on the eval and schedule packages
two very small things:
- We had a constructor on something called a `Context` which is not a `context.Context` so let's just name that constructor `NewContext`
- The user that we use to run query evaluations is the same (with some variation), so abstract it into a function that can be re-used when necessary.
* Update pkg/services/ngalert/schedule/schedule.go
Co-authored-by: Alexander Weaver <weaver.alex.d@gmail.com>
* Update pkg/services/ngalert/schedule/schedule.go
Co-authored-by: Alexander Weaver <weaver.alex.d@gmail.com>
---------
Co-authored-by: Alexander Weaver <weaver.alex.d@gmail.com>
This commit fixes an incorrect comment in the Result struct in eval.go
that I had written some time ago. The comment now documents the
actual behaviour and content of this field.
Automatically forward core plugin request HTTP headers in outgoing HTTP requests.
Core datasource plugin authors don't have to handle forwarding of HTTP
headers themselves, e.g. they do not have to "hardcode" the header names in the
datasource plugin, unless they have custom needs.
Fixes #57065
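A rough sketch of the forwarding idea; the allow-list and helper are illustrative, since the real middleware derives the header set from the incoming plugin request:

```go
package httpclientprovider

import "net/http"

// forwardedHeaders is an assumed allow-list for this sketch; the real
// middleware works from the headers present on the plugin request.
var forwardedHeaders = []string{"Authorization", "Cookie", "X-ID-Token"}

// ForwardHeaders copies allow-listed headers from the incoming request to
// the outgoing one, so datasource plugins don't hardcode header names.
func ForwardHeaders(incoming http.Header, outgoing *http.Request) {
	for _, name := range forwardedHeaders {
		if v := incoming.Get(name); v != "" && outgoing.Header.Get(name) == "" {
			outgoing.Header.Set(name, v)
		}
	}
}
```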
* create contextual log context provider (see the sketch after this list)
* use contextual provider in scheduler
* init logger in the package
* use context for log context
* use context in state manager
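A minimal sketch of such a contextual provider, with hypothetical `WithLogContext`/`FromContext` helper names:

```go
package logctx

import "context"

type ctxKey struct{}

// WithLogContext appends key/value pairs to the context so that every log
// line emitted further down the call chain carries them (e.g. rule UID and
// org ID in the scheduler and state manager).
func WithLogContext(ctx context.Context, kv ...any) context.Context {
	existing, _ := ctx.Value(ctxKey{}).([]any)
	// Full-slice expression prevents aliasing with the caller's slice.
	return context.WithValue(ctx, ctxKey{}, append(existing[:len(existing):len(existing)], kv...))
}

// FromContext returns the accumulated pairs for a logger to prepend.
func FromContext(ctx context.Context) []any {
	kv, _ := ctx.Value(ctxKey{}).([]any)
	return kv
}
```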
* make TimeRange interface and add relative range (see the sketch after this list)
* make Execute methods support the current time
* update resample to support relative time range
* update DSNode to support relative time range
* update query service to create queries with absolute time
* make alerting evaluator create relative time ranges
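A sketch of the interface this list describes; the type names mirror the bullets, but the exact shapes are assumptions:

```go
package models

import "time"

// TimeRange can resolve itself to concrete instants. Absolute ranges
// return fixed times; relative ranges are resolved against "now" at
// execution time, which is why Execute accepts the current time.
type TimeRange interface {
	AbsoluteTime(now time.Time) (from, to time.Time)
}

type AbsoluteTimeRange struct {
	From, To time.Time
}

func (r AbsoluteTimeRange) AbsoluteTime(_ time.Time) (time.Time, time.Time) {
	return r.From, r.To
}

// RelativeTimeRange is a window expressed as offsets back from now,
// e.g. {From: 5 * time.Minute} means "the last five minutes".
type RelativeTimeRange struct {
	From, To time.Duration
}

func (r RelativeTimeRange) AbsoluteTime(now time.Time) (time.Time, time.Time) {
	return now.Add(-r.From), now.Add(-r.To)
}
```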
* Revert "Revert "Prometheus: Type and flavor configuration (#56496)" (#57552)"
This reverts commit 2432ce619a.
* Adds new fields and documentation for Prometheus datasource configuration: prometheus type and version
* Adds two new fields to the JSON data in the Prometheus datasource configuration: prometheusType and prometheusVersion (assumed shape sketched below)
* The version field will attempt to auto-detect the version via the buildinfo API when a prometheus type is selected
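A guess at the shape of those jsonData fields as a Go struct; the struct and field names are assumptions for illustration:

```go
package promsettings

// JSONData sketches the two new fields persisted in the datasource's
// jsonData (names follow the bullets above; the struct itself is assumed).
type JSONData struct {
	// PrometheusType distinguishes flavors such as Prometheus, Mimir, or Thanos.
	PrometheusType string `json:"prometheusType,omitempty"`
	// PrometheusVersion is auto-detected via the buildinfo API when possible.
	PrometheusVersion string `json:"prometheusVersion,omitempty"`
}
```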
* Define EvaluationContext (see the sketch after this list)
* Refactor ConditionEval to use new context struct
* Refactor QueriesAndExpressionsEval to use EvaluationContext
* Remove dead field from AlertExecCtx
* Refactor Validate to use EvaluationContext
* Get rid of privately used AlertExecCtx
* Move EvaluationContext to new file and add helper
* Add builder pattern and bind rule info to context
* Extract header logic and add rule UID header
* Fix missing call
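A condensed sketch of what those steps add up to; the field set and builder methods here are illustrative:

```go
package eval

import "context"

// EvaluationContext bundles everything an evaluation needs, replacing the
// former AlertExecCtx (field set assumed for illustration).
type EvaluationContext struct {
	Ctx     context.Context
	OrgID   int64
	RuleUID string
}

// NewContext starts the builder with the required parts.
func NewContext(ctx context.Context, orgID int64) EvaluationContext {
	return EvaluationContext{Ctx: ctx, OrgID: orgID}
}

// WithRule binds rule information, e.g. so the rule UID can be attached
// as a header on datasource queries.
func (e EvaluationContext) WithRule(uid string) EvaluationContext {
	e.RuleUID = uid
	return e
}
```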
This commit is one of two commits to make the data frames for all queries and expressions in an alert rule available to the state package for rendering a graph. It renames Result to Condition and creates an additional field called Results that is a map of RefID to data.Frames.
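The resulting shape is roughly the following, using the frame types from grafana-plugin-sdk-go (the struct name is assumed):

```go
package eval

import "github.com/grafana/grafana-plugin-sdk-go/data"

// ExecutionResults is shaped as the commit describes: Condition holds the
// frames for the condition (formerly Result), while Results maps every
// RefID to its frames so the state package can render a graph.
type ExecutionResults struct {
	Condition data.Frames
	Results   map[string]data.Frames
}
```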
* PluginDetails: Make plugin details page look good in topnav
* Minor style tweak aligning things
* minor refactoring: moved the logic that decides the default tab into its own hook.
* refactor(plugindetails): first pass at using navmodel for usePluginDetailsTabs hook
* refactor(plugindetails): move "reset page when uninstalling plugin" to installcontrols
this prevents a user from seeing a blank page if they uninstall an app plugin whilst viewing a
config page
* refactor(plugindetails): remove usage of toIconName and reduce nested if
* Trying to fix tests
* minor fix
* test(plugindetails): update selectors causing failing tests
* chore(plugindetails): remove commented out test code
* test(plugindetails): clean up - remove unnecessary usage of waitFor
Co-authored-by: Marcus Andersson <marcus.andersson@grafana.com>
Co-authored-by: Jack Westbrook <jack.westbrook@gmail.com>
* access control: log the user name if the user does not have permissions
* update ngalert Evaluator to accept user instead of creating a pseudo one
* update the alerting eval (rule/query testing) API to provide the real user to the Evaluator
* update scheduler to create a pseudo user with proper permissions
* WIP
* Set public_suffix to a pre Ruby 2.6 version
* we don't need to install python
* Stretch->Buster
* Bump versions in lib.star
* Manually update linter
Sort of messy, but the .mod file needs to contain all dependencies that
use 1.16+ features; otherwise they're assumed to be compiled with
-lang=go1.16 and cannot access generics et al.
Bingo doesn't seem to understand that, but it's possible to manually
update things to keep Bingo happy.
* undo reformatting
* Various lint improvements
* More from the linter
* goimports -w ./pkg/
* Disable gocritic
* Add/modify linter exceptions
* lint + flatten nested list
Go 1.19 doc comments don't support nested lists, and there wasn't an obvious workaround.
https://go.dev/doc/comment#lists
Removes various custom headers logic sprinkled around in the backend.
Custom headers are now automatically applied to outgoing HTTP requests
via the CustomHeadersMiddleware.
This also removes the decryption of SecureJSONData to populate custom
headers in ngalert, which seemed to have caused a ton of CPU usage.
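A hedged sketch of what such a middleware amounts to: a RoundTripper that sets the configured headers on every outgoing request (decryption and wiring omitted; types are illustrative):

```go
package httpclientprovider

import "net/http"

// customHeadersRoundTripper applies headers taken from the datasource's
// configuration (e.g. httpHeaderName*/httpHeaderValue* settings; the
// decryption step is omitted in this sketch).
type customHeadersRoundTripper struct {
	headers map[string]string
	next    http.RoundTripper
}

// RoundTrip sets every configured header on the outgoing request, so no
// per-service code (ngalert included) needs to decrypt and apply them.
func (rt customHeadersRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	req = req.Clone(req.Context()) // RoundTrippers must not mutate the original request
	for k, v := range rt.headers {
		req.Header.Set(k, v)
	}
	return rt.next.RoundTrip(req)
}
```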