* Alerting: Expose metrics for Alertmanager Alerts
In Grafana, alert evaluation and alert delivery are combined. We have always used a metric named `grafana_alerting_alerts` to get a sense of which alerts are currently firing (these come from the evaluation side) and opted not to map the Alertmanager alerts metric directly.
I think it's important that we make a distinction between alerts produced at evaluation time and alerts received for delivery by the internal Alertmanager, as we have options to skip delivering these alerts to the internal Alertmanager altogether.
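A minimal sketch of what a separate metric on the delivery side could look like, using prometheus/client_golang; the metric name, labels, and registration helper below are illustrative assumptions, not Grafana's actual code:

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

// AlertmanagerAlertsReceived counts alerts handed to the internal
// Alertmanager for delivery, as opposed to grafana_alerting_alerts, which
// reflects the evaluation side. The metric name and labels are illustrative.
var AlertmanagerAlertsReceived = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "grafana",
		Subsystem: "alerting",
		Name:      "alertmanager_alerts_received_total",
		Help:      "Alerts received for delivery by the internal Alertmanager.",
	},
	[]string{"org", "state"},
)

// Register wires the metric into whichever registerer the caller provides.
func Register(r prometheus.Registerer) {
	r.MustRegister(AlertmanagerAlertsReceived)
}
```

Incrementing such a metric where alerts are handed to the internal Alertmanager, rather than at evaluation time, keeps the two sides distinguishable even when delivery to the internal Alertmanager is skipped.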
* Migrate old alerting templates to use $labels
* Fix imports
* Add test coverage and separate rewriting to Go templates
* Fix lint
* Check for additional closing braces
* Add logging of invalid message templates
* Fix tests
* Small fixes
* Update comments
* Panic on empty token
* Use logtest.Fake
* Fix lint
* Allow for spaces in variable names by not tokenizing spaces
* Add template function to deduplicate Labels in a Value map (see the sketch after this list)
* Fix behavior of mapLookupString
* Reference deduplicated labels in migrated message template
* Fix behavior of deduplicateLabelsFunc
* Don't create variable for parent logger
* Add more tests for deduplicateLabelsFunc
* Remove unused function
* Apply suggestions from code review
Co-authored-by: Yuri Tseretyan <yuriy.tseretyan@grafana.com>
* Give label val merge function better name
* Extract template migration and escape literal tokens
* Consolidate + simplify template migration
---------
Co-authored-by: William Wernert <william.wernert@grafana.com>
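As a rough illustration of the label-deduplication template helper mentioned in the list above, the idea is to merge the label sets of every value in the map so a migrated template can reference each label once; the Value type and function name here are assumptions, not the actual implementation:

```go
package template

// Value is a simplified stand-in for the per-query values exposed to message
// templates; only the Labels field matters for this sketch.
type Value struct {
	Labels map[string]string
}

// mergeLabelValues returns the union of labels across all values, so a
// migrated template can reference each label once instead of repeating it per
// query. When two values disagree on a key, one of them wins arbitrarily;
// the real deduplication behaviour may differ.
func mergeLabelValues(values map[string]Value) map[string]string {
	merged := make(map[string]string, len(values))
	for _, v := range values {
		for k, val := range v.Labels {
			merged[k] = val
		}
	}
	return merged
}
```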
* Alerting: Don't use a separate collection system for metrics
The state package had a metric collection system that ran every 15s, updating the values of the metrics; there is a common pattern for this in the Prometheus ecosystem called "collectors".
I have removed the time-based interval used to "set" the metrics in favour of value functions that are called at scrape time.
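This "value function" pattern corresponds to client_golang's GaugeFunc, which evaluates a callback at scrape time instead of needing a background ticker; a minimal sketch, with the metric name and the state-counting callback as assumptions:

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

// registerStateGauge exposes the size of the state cache through a GaugeFunc,
// so the value is computed when Prometheus scrapes rather than on a 15s loop.
// The metric name is illustrative; countStates stands in for however the
// state manager reports its current cache size.
func registerStateGauge(r prometheus.Registerer, countStates func() float64) {
	r.MustRegister(prometheus.NewGaugeFunc(
		prometheus.GaugeOpts{
			Namespace: "grafana",
			Subsystem: "alerting",
			Name:      "state_cache_size",
			Help:      "Number of alert instance states currently tracked.",
		},
		countStates,
	))
}
```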
* add metrics and tracing to state manager
* propagate tracer to state manager
* add scheduler metrics
* fix backtesting
* add test for state metrics
* remove StateUpdateCount
* update docs
* metrics can be null
* add tracer to new tests
* calculate cacheID instead of literals
* use mocked clocks
* advance clocks with the eval results
* use clearer timestamp aliases
* make expected state labels clearer to read
Co-authored-by: Matthew Jacobson <matthew.jacobson@grafana.com>
* Add limit query parameter
* Drop copy paste comment
* Extend history query limit to 30 days and 250 entries (see the sketch after this list)
* Fix history log entries ordering
* Update no history message, add empty history test
---------
Co-authored-by: Konrad Lalik <konrad.lalik@grafana.com>
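A hedged sketch of how the limits mentioned above could be enforced when handling the limit query parameter; the constants mirror the 30-day / 250-entry figures from the commits, while the function and parameter names are illustrative:

```go
package historian

import "time"

const (
	maxHistoryRange   = 30 * 24 * time.Hour // look back at most 30 days
	maxHistoryEntries = 250                 // return at most 250 entries
)

// clampHistoryQuery applies the limit query parameter and the time window,
// falling back to the maximums when the caller asks for zero, a negative
// number, or more than is allowed.
func clampHistoryQuery(from time.Time, limit int) (time.Time, int) {
	if oldest := time.Now().Add(-maxHistoryRange); from.Before(oldest) {
		from = oldest
	}
	if limit <= 0 || limit > maxHistoryEntries {
		limit = maxHistoryEntries
	}
	return from, limit
}
```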
This commit adds support for concurrent queries when saving alert
instances to the database. This is an experimental feature in
response to some customers experiencing delays between rule evaluation
and sending alerts to Alertmanager, resulting in flapping. It is
disabled by default.
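A rough sketch, under assumed names and types, of what saving instances concurrently can look like; the batch size, the AlertInstance type, and the save callback are placeholders rather than the actual store code:

```go
package store

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// AlertInstance is a stand-in for the stored alert instance record; only its
// existence matters for this sketch.
type AlertInstance struct {
	RuleUID string
	Labels  map[string]string
}

// saveConcurrently splits the pending alert instances into fixed-size batches
// and writes each batch in its own goroutine, so one slow insert does not
// hold back the rest of the evaluation's results.
func saveConcurrently(ctx context.Context, instances []AlertInstance, batchSize int,
	save func(context.Context, []AlertInstance) error) error {
	if batchSize <= 0 {
		batchSize = 1
	}
	g, ctx := errgroup.WithContext(ctx)
	for start := 0; start < len(instances); start += batchSize {
		end := start + batchSize
		if end > len(instances) {
			end = len(instances)
		}
		batch := instances[start:end]
		g.Go(func() error { return save(ctx, batch) })
	}
	return g.Wait()
}
```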
This commit adds debug logs for previous_ends_at and next_ends_at
to state.go to help us debug issues where alerts are resolved in
Alertmanager due to expiration. This change is in response to a
support escalation where this information was needed but unavailable.
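The added logging amounts to emitting the two timestamps as structured fields; a schematic stand-in follows, where only the field names come from the commit message and the rest is illustrative:

```go
package state

import "time"

// logger matches the Debug(msg, keyvals...) shape of a key/value logger;
// defined locally so the sketch stands alone.
type logger interface {
	Debug(msg string, keyvals ...interface{})
}

// debugEndsAt is a schematic stand-in for the logging added to state.go; the
// field names come from the commit message, everything else is illustrative.
func debugEndsAt(l logger, previousEndsAt, nextEndsAt time.Time) {
	l.Debug("Updating alert EndsAt",
		"previous_ends_at", previousEndsAt,
		"next_ends_at", nextEndsAt,
	)
}
```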
* Alerting: Repurpose rule testing endpoint to return potential alerts
This feature replaces the existing, no-longer-used Grafana ruler testing API endpoint /api/v1/rule/test/grafana. The new endpoint returns a list of potential alerts created by the given alert rule, including built-in and interpolated labels and annotations.
The key priority of this endpoint is to be as true as possible to what the ruler would generate, except that the resulting alerts are not filtered down to only those that are Resolved / Firing and ready to be sent.
This means that the endpoint will, among other things:
- Attach static annotations and labels from the rule configuration to the alert instances.
- Attach dynamic annotations from the datasource to the alert instances.
- Attach built-in labels and annotations created by the Grafana Ruler (such as alertname and grafana_folder) to the alert instances.
- Interpolate templated annotations / labels and accept allowed template functions.
* use tokens or urls in image annotations
* improve tests, fix some comments
* fix empty tokens
* code review changes, check for url before checking for token (support old token formats)
* Alerting: Remove and revert flag alertingBigTransactions
This is a partial revert of #56575 and a removal of the `alertingBigTransactions` flag.
Real-world use has shown no clear performance incentive to maintain this flag. The lowered DB connection count
came at the cost of a significant increase in CPU usage and query latency.
* Fix lint backend
* Removed last bits of alertingBigTransactions
---------
Co-authored-by: Armand Grillet <2117580+armandgrillet@users.noreply.github.com>
* Add fresh context with timeout and same log properties, re-derive logger
* Unify timeout constants
* Move ctx after shortcut that got added through rebasing
* Unify timeouts
* Port opentracing's SpanFromContext and ContextFromSpan to the grafana tracing package (see the sketch after this list)
* Support both opentracing and otel variants
* Better document why we're creating a new ctx
* Add new func to FakeSpan which was added after rebase
* Support grafana-specific traceID key in both tracer implementations
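A rough sketch of helpers that work against both tracer implementations; the wrapper types and function names below are assumptions, while opentracing.SpanFromContext/ContextWithSpan and the OpenTelemetry trace.SpanFromContext/ContextWithSpan calls are the libraries' real APIs:

```go
package tracing

import (
	"context"

	opentracing "github.com/opentracing/opentracing-go"
	"go.opentelemetry.io/otel/trace"
)

// Span abstracts over the two tracer implementations; the wrapper types are
// illustrative, not Grafana's actual ones.
type Span interface{ End() }

type opentracingSpan struct{ span opentracing.Span }

func (s opentracingSpan) End() { s.span.Finish() }

type otelSpan struct{ span trace.Span }

func (s otelSpan) End() { s.span.End() }

// SpanFromContext returns the opentracing span stored in the context if one
// is present, and falls back to the OpenTelemetry span otherwise.
func SpanFromContext(ctx context.Context) Span {
	if s := opentracing.SpanFromContext(ctx); s != nil {
		return opentracingSpan{span: s}
	}
	return otelSpan{span: trace.SpanFromContext(ctx)}
}

// ContextWithSpan re-attaches a span to a fresh context (for example one
// created with a new timeout) so trace information survives the swap.
func ContextWithSpan(ctx context.Context, span Span) context.Context {
	switch s := span.(type) {
	case opentracingSpan:
		return opentracing.ContextWithSpan(ctx, s.span)
	case otelSpan:
		return trace.ContextWithSpan(ctx, s.span)
	}
	return ctx
}
```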
* Alerting: Respect "For" Duration for NoData alerts
This change modifies `resultNoData` to be more in line with the logic of the other state handlers.
The main effects of this are:
1) NoData states with NoDataState config set to Alerting will respect the "For" duration (a simplified illustration follows this list).
2) Prevents zero values in StartsAt and EndsAt for alerts that have only ever been in the Normal state. This includes state transitions from NoDataState=OK and ExecErrState=OK.
3) Better state transition logging.
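A simplified illustration of the "For" check referenced in item 1; this is not the actual resultNoData code, just the shape of the gating logic:

```go
package state

import "time"

// nextState illustrates how a "For" duration gates the transition to
// Alerting; the real state handlers track more fields than this.
func nextState(now, startsAt time.Time, forDuration time.Duration) string {
	if forDuration > 0 && now.Sub(startsAt) < forDuration {
		// The condition is failing, but not yet for long enough.
		return "Pending"
	}
	return "Alerting"
}
```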
* Remove private labels
* No longer index by instance labels
* Labels are now invariant, only build them once
* Remove bucketing since everything is in a single stream
* Refactor statesToStreams to only return a single unified log stream
* Don't query on labels that no longer exist
* Move selector logic to loki layer, genericize client to work in terms of straight logQL
* Add support for line-level label filters in query
* Combine existing selector tests for better parallelism
* Tests for logQL construction
* Underscore instead of dot for unwrapping labels in logql
* Encode with snappy, always
* JSON encoder type
* Headers
* Copy labels formatter from promtail
* Implement snappy-proto encoding
* Create encoder interface, test both encoders, choose snappy-proto by default (a sketch of the interface follows this list)
* Make encoder configurable at the LokiCfg level
* Export both encoders
* Touch up comment and tests
* Drop unnecessary conversions after move to plain strings to appease linter
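A sketch of the encoder abstraction described in the list above; the type and function names are illustrative, and the protobuf marshalling of Loki's PushRequest is deliberately left as a stub:

```go
package historian

import (
	"encoding/json"
	"errors"

	"github.com/golang/snappy"
)

// stream mirrors the shape of a Loki push stream: a label set plus
// timestamped lines. Types and names in this sketch are illustrative.
type stream struct {
	Stream map[string]string `json:"stream"`
	Values [][2]string       `json:"values"`
}

// encoder turns a batch of streams into a request body plus its content type.
type encoder interface {
	encode(streams []stream) (body []byte, contentType string, err error)
}

// jsonEncoder produces the plain JSON push payload.
type jsonEncoder struct{}

func (jsonEncoder) encode(streams []stream) ([]byte, string, error) {
	body, err := json.Marshal(map[string]interface{}{"streams": streams})
	return body, "application/json", err
}

// snappyProtoEncoder produces the snappy-compressed protobuf payload that the
// commits above make the default.
type snappyProtoEncoder struct{}

func (snappyProtoEncoder) encode(streams []stream) ([]byte, string, error) {
	raw, err := marshalPushRequest(streams)
	if err != nil {
		return nil, "", err
	}
	return snappy.Encode(nil, raw), "application/x-protobuf", nil
}

// marshalPushRequest stands in for marshalling Loki's protobuf PushRequest;
// it is left unimplemented in this sketch.
func marshalPushRequest(streams []stream) ([]byte, error) {
	return nil, errors.New("protobuf marshalling elided in this sketch")
}
```

Making the chosen encoder a field of the Loki client configuration, defaulting to the snappy-protobuf variant, lines up with the configurability described in these commits.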