* Alerting: Attempt to retry retryable errors
Retrying has been broken for a good while now (at least since version 9.4) - this change attempts to re-introduce retries in their simplest and safest form possible.
I first introduced #79095 to make sure we don't disrupt or put additional load on our customers' data sources with this change in a patch release. Paired with this change, retries can now work as expected.
There are two small differences between how retries work now and how they used to work in legacy alerting (a minimal sketch follows below):
Retries only occur for valid alert definitions - if we suspect the error comes from a malformed alert definition, we skip retrying.
We have added a constant backoff of 1s in between retries.
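A minimal sketch of the retry behaviour described above, assuming a hypothetical `evaluateWithRetries` helper and a stand-in error for malformed definitions; this is illustrative, not the actual scheduler code:
```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errMalformedDefinition stands in for errors caused by an invalid alert
// definition; these are never retried.
var errMalformedDefinition = errors.New("malformed alert definition")

func evaluateWithRetries(eval func() error, maxAttempts int) error {
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = eval(); err == nil {
			return nil
		}
		if errors.Is(err, errMalformedDefinition) {
			// Not retryable: the definition itself is wrong.
			return err
		}
		// Constant backoff between retries.
		time.Sleep(1 * time.Second)
	}
	return fmt.Errorf("evaluation failed after %d attempts: %w", maxAttempts, err)
}

func main() {
	calls := 0
	err := evaluateWithRetries(func() error {
		calls++
		if calls < 3 {
			return errors.New("transient datasource error")
		}
		return nil
	}, 3)
	fmt.Println(calls, err) // 3 <nil>
}
```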
---------
Signed-off-by: gotjosh <josue.abreu@gmail.com>
* Alerting: Attempt to retry retryable errors
Currently in a draft state, but this was the minimal diff I could put together to exemplify how we could achieve this.
Signed-off-by: gotjosh <josue.abreu@gmail.com>
---------
Signed-off-by: gotjosh <josue.abreu@gmail.com>
* Folders: Show folders user has access to at the root level
* Refactor
* Refactor
* Hide parent folders user has no access to
* Skip expensive computation if possible
* Fix tests
* Fix potential nil access
* Fix duplicated folders
* Fix linter error
* Fix querying folders if no managed permissions set
* Update benchmark
* Add special shared with me folder and fetch available non-root folders on demand
* Fix parents query
* Improve db query for folders
* Reset benchmark changes
* Fix permissions for shared with me folder
* Simplify dedup
* Add option to include shared folder permission to user's permissions
* Fix nil UID
* Remove duplicated folders from shared list
* Folders: Fix fetching empty folder
* Nested folders: Show dashboards with directly assigned permissions
* Fix slow dashboards fetch
* Refactor
* Fix cycle dependencies
* Move shared folder to models
* Fix shared folder links
* Refactor
* Use feature flag for permissions
* Use feature flag
* Review comments
* Expose shared folder UID through frontend settings
* Add frontend type for sharedWithMeFolderUID option
* Refactor: apply review suggestions
* Fix parent uid for shared folder
* Fix listing shared dashboards for users with access to all folders
* Prevent creating folder with "shared" UID
* Add tests for shared folders
* Add test for shared dashboards
* Fix linter
* Add metrics for shared with me folder
* Add metrics for shared with me dashboards
* Fix tests
* Tests: add metrics as a dependency
* Fix access control metadata for shared with me folder
* Use constant for shared with me
* Optimize parent folders access check, fetch all folders in one query.
* Use labels for metrics
* Export Notification Policy correctly (#78020)
The JSON version of an exported Notification Policy now
inlines the policy correctly, in the same way the YAML version
does.
Co-authored-by: Yuri Tseretyan <yuriy.tseretyan@grafana.com>
* ngalert `make`: Support GNU install on Darwin
Currently, the Makefile assumes that Darwin is using the Mac version of `sed`.
I have the GNU version, so it failed. With this PR, it checks which version is installed.
I also ran `make`, and some changes came out of it.
* swagger-gen
* Alerting: Only warm alert state cache if execute_alerts=true.
If the Grafana instance is not executing alerts, then Warm()-ing the state
manager is wasteful and could lead to misleading rule status queries, as the
status returned will always be based on the state loaded from the database at
startup, and not on the most recent evaluation state.
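A minimal sketch of guarding the warm-up behind the setting, assuming a simplified state manager; this is not the actual Grafana wiring:
```go
package main

import "fmt"

type stateManager struct{ warmed bool }

// Warm loads the last known alert states from the database into the cache.
func (m *stateManager) Warm() { m.warmed = true }

func start(executeAlerts bool, m *stateManager) {
	// Only warm the state cache when this instance actually executes alerts;
	// otherwise the cached states would never be refreshed by evaluations.
	if executeAlerts {
		m.Warm()
	}
}

func main() {
	m := &stateManager{}
	start(false, m)
	fmt.Println(m.warmed) // false: nothing to warm when not executing alerts
}
```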
* Move Warm() down to shared conditional.
* Alerting: Add clean_upgrade config and deprecate force_migration
Upgrading to UA and rolling back will no longer delete any data by default.
Instead, each set of tables will remain unchanged when switching between
legacy and UA. As such, the force_migration config has been deprecated
and no extra configuration is required to roll back to legacy anymore.
If clean_upgrade is set to true when upgrading from legacy alerting to Unified
Alerting, Grafana will first delete all existing Unified Alerting resources,
thus re-upgrading all organizations from scratch. If false or unset,
organizations that have previously upgraded will not lose their existing Unified
Alerting data when switching between legacy and Unified Alerting.
Similar to force_migration, it should be kept false when not needed, as it may
cause unintended data loss if left enabled.
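A rough sketch of the decision described above, with assumed names (`migrationService`, `upgradeOrg`); illustrative only:
```go
package main

import "fmt"

type migrationService struct {
	cleanUpgrade bool
}

// upgradeOrg is a hypothetical helper illustrating the described behaviour.
func (s *migrationService) upgradeOrg(orgID int64, alreadyUpgraded bool) string {
	if s.cleanUpgrade {
		// Delete existing Unified Alerting resources and re-upgrade from scratch.
		return fmt.Sprintf("org %d: deleting UA resources and re-running upgrade", orgID)
	}
	if alreadyUpgraded {
		// Keep previously migrated data untouched.
		return fmt.Sprintf("org %d: keeping existing UA data", orgID)
	}
	return fmt.Sprintf("org %d: running upgrade for the first time", orgID)
}

func main() {
	s := &migrationService{cleanUpgrade: false}
	fmt.Println(s.upgradeOrg(1, true))
	fmt.Println(s.upgradeOrg(2, false))
}
```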
---------
Co-authored-by: Christopher Moyer <35463610+chri2547@users.noreply.github.com>
* Alerting: Keep track of individual org migration status
Save migration status per migrated org.
Change the meaning (and key/value) of the org_id=0 entry
to store the current (previously used) config value for alerting.
This lets us know when to upgrade or downgrade by
comparing it with the new config value in
UnifiedAlerting.IsEnabled.
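An illustrative sketch of that comparison, with assumed names; the previously used value from the org_id=0 entry is compared with the current config to pick an action:
```go
package main

import "fmt"

func migrationAction(previouslyEnabled, currentlyEnabled bool) string {
	switch {
	case !previouslyEnabled && currentlyEnabled:
		return "upgrade: legacy alerting -> Unified Alerting"
	case previouslyEnabled && !currentlyEnabled:
		return "downgrade: Unified Alerting -> legacy alerting"
	default:
		return "no change"
	}
}

func main() {
	fmt.Println(migrationAction(false, true)) // upgrade
	fmt.Println(migrationAction(true, true))  // no change
}
```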
* Alerting: Add a sync interval for ApplyConfig in remote secondary mode
* remove out of scope code
* remove parentheses after CleanUp for consistency in test comments
* Add comment to ApplyConfig
* Alerting: In migration improve deduplication of title and group
This change improves the alert titles generated by the legacy migration
when we need to deduplicate titles. Now, when duplicate
titles are detected, we first attempt to append a sequential index,
falling back to a random UID if none are unique within 10 attempts.
This should result in shorter and more easily readable deduplicated
titles in most cases.
In addition, groups are no longer deduplicated. Instead we set them
to a combination of truncated dashboard name and humanized alert
frequency. This way, alerts from the same dashboard share a group
if they have the same evaluation interval. In the event that truncation
causes overlap, it won't be a big issue as all alerts will still be in a
group with the correct evaluation interval.
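A hypothetical sketch of both strategies; `deduplicateTitle` and `groupName` are made-up names for illustration, not the actual migration code:
```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func deduplicateTitle(title string, existing map[string]struct{}) string {
	if _, ok := existing[title]; !ok {
		existing[title] = struct{}{}
		return title
	}
	// Try a sequential suffix first.
	for i := 1; i <= 10; i++ {
		candidate := fmt.Sprintf("%s #%d", title, i)
		if _, ok := existing[candidate]; !ok {
			existing[candidate] = struct{}{}
			return candidate
		}
	}
	// Fall back to a random suffix.
	candidate := fmt.Sprintf("%s_%08x", title, rand.Uint32())
	existing[candidate] = struct{}{}
	return candidate
}

func groupName(dashboard string, interval time.Duration) string {
	const maxLen = 30
	if len(dashboard) > maxLen {
		dashboard = dashboard[:maxLen]
	}
	// Alerts from the same dashboard with the same interval share a group.
	return fmt.Sprintf("%s - %s", dashboard, interval)
}

func main() {
	seen := map[string]struct{}{}
	fmt.Println(deduplicateTitle("High CPU", seen)) // High CPU
	fmt.Println(deduplicateTitle("High CPU", seen)) // High CPU #1
	fmt.Println(groupName("Production Cluster Overview Dashboard", time.Minute))
}
```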
* Alerting: Introduce a Mimir client as part of the Remote Alertmanager
A Mimir client that understands the new APIs developed for Mimir. Very much a WIP still.
* more wip
* appease the linter
* more linting
* add more code
* get state from kvstore, encode, send
* send state to the remote Alertmanager, extract fullstate logic into its own function
* pass kvstore to remote.NewAlertmanager()
* refactor
* add fake kvstore to tests
* tests
* use FileStore to get state
* always log 'completed state upload'
* refactor compareRemoteConfig
* base64-encode the state in the file store
* export silences and nflog filenames, refactor
* log 'completed state/config upload...' regardless of outcome
* add values to the state store in tests
* address code review comments
* log error from filestore
---------
Co-authored-by: gotjosh <josue.abreu@gmail.com>
* Alerting: Apply query optimization to eval endpoints
Previously, query optimization was applied to alert queries when scheduled but
not when run through `api/v1/eval` or `/api/v1/rule/test/grafana`. This could
lead to discrepancies between preview and scheduled alert results.
* Alerting: Add GetFullState method to FileStore
* make tests compile, create stateStore in NewAlertmanager
* return errors instead of logging, accept an arbitrary number of strings
* make NewAlertmanager() accept a stateStore
* Alerting: In migration, fallback to '1s' for malformed min interval
During legacy migration, when we encounter an alert datasource query
with a min interval (the interval field in the query model) that is not
parseable, instead of failing the migration we fall back to a min interval
of 1s and continue.
The reason for this is a bug in legacy alerting (existing for a few major
versions) which allows arbitrary dashboard variables to be used as the
min interval, even though those variables do not work and will cause
the legacy alert to fail with `interval calculation failed: time: invalid
duration`.
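A sketch of the fallback with assumed names; the real migration uses Grafana's own duration parsing, so this is only illustrative:
```go
package main

import (
	"fmt"
	"time"
)

func migrateMinInterval(raw string) time.Duration {
	d, err := time.ParseDuration(raw)
	if err != nil {
		// Legacy alerting allowed unparseable values here (e.g. dashboard
		// variables such as "$interval"); fall back to 1s and let the
		// migration continue instead of failing.
		return time.Second
	}
	return d
}

func main() {
	fmt.Println(migrateMinInterval("30s"))       // 30s
	fmt.Println(migrateMinInterval("$interval")) // 1s (fallback)
}
```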
* Remote Alertmanager(refactor): Only parse the URL once
Exactly what it says on the tin.
Signed-off-by: gotjosh <josue.abreu@gmail.com>
* use the existing tests
Signed-off-by: gotjosh <josue.abreu@gmail.com>
---------
Signed-off-by: gotjosh <josue.abreu@gmail.com>
* Alerting: Introduce a Mimir client as part of the Remote Alertmanager
This is our first attempt at making Grafana use Mimir as a backend - it uses a new set of APIs that we've developed on the Mimir side to upload the Grafana configuration and Alertmanager state so that it can then be ported over.
Codewise, we've introduced a couple of things:
- A client that isolates in its own package all the communication that happens with Mimir
- A few changes to the remote Alertmanager to include uploading the configuration and state when it starts
- A few refactors that align a bit better with the design approach we're thinking of
- Integration tests against these newly developed APIs using a custom image
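A very rough sketch of the shape of such a client; the interface, the method names, and the `remoteAlertmanager` type here are assumptions for illustration, not the actual API:
```go
package main

import (
	"context"
	"fmt"
)

// mimirClient isolates all communication with Mimir in its own package.
type mimirClient interface {
	// Upload the Grafana Alertmanager configuration for this tenant.
	CreateGrafanaAlertmanagerConfig(ctx context.Context, rawConfig string) error
	// Upload the Alertmanager state (silences, notification log) for this tenant.
	CreateGrafanaAlertmanagerState(ctx context.Context, state string) error
}

// remoteAlertmanager uploads the configuration and state when it starts.
type remoteAlertmanager struct {
	client mimirClient
}

func (am *remoteAlertmanager) ApplyConfig(ctx context.Context, rawConfig, state string) error {
	if err := am.client.CreateGrafanaAlertmanagerConfig(ctx, rawConfig); err != nil {
		return fmt.Errorf("uploading configuration: %w", err)
	}
	if err := am.client.CreateGrafanaAlertmanagerState(ctx, state); err != nil {
		return fmt.Errorf("uploading state: %w", err)
	}
	return nil
}

// fakeClient is a stand-in used only to make this sketch runnable.
type fakeClient struct{}

func (fakeClient) CreateGrafanaAlertmanagerConfig(ctx context.Context, rawConfig string) error {
	fmt.Println("uploaded config")
	return nil
}

func (fakeClient) CreateGrafanaAlertmanagerState(ctx context.Context, state string) error {
	fmt.Println("uploaded state")
	return nil
}

func main() {
	am := &remoteAlertmanager{client: fakeClient{}}
	_ = am.ApplyConfig(context.Background(), "{...}", "base64-state")
}
```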
---------
Signed-off-by: gotjosh <josue.abreu@gmail.com>
Co-authored-by: Santiago <santiagohernandez.1997@gmail.com>
* remove use of SignedInUserCopies
* add extra safety to not cross assign permissions
unwind circular dependency
dashboardacl->dashboardaccess
fix missing import
* correctly set teams for permissions
* fix missing inits
* nit: check err
* exit early for api keys
* Alerting: Add an empty Forked Alertmanager
* Alerting: Add methods for silences to the forked Alertmanager
* check for errors in tests
* make linter happy
* Alerting: Add methods for alerts to the forked Alertmanager
* Alerting: Add methods for receivers to the forked Alertmanager
* Alerting: Add TestTemplate method to the forked Alertmanager
* make linter happy
* separate into both forked AMs
* fix tests
* Alerting: Add lifecycle methods to the forked Alertmanager
* rename testErr -> expErr
* Alerting: Add methods for silences to the forked Alertmanager
When running in dev mode, error messages would contain an additional "error" property alongside "message". Since this caused confusion, it has been removed and error messages are now the same in both modes (using "message").
* Alerting: Rename remote.ExternalAlertmanager to remote.Alertmanager
* Alerting: Send alerts to the remote Alertmanager
* add ticker to readiness check, add tests
* use options when creating a new sender.ExternalAlertmanager
* unexport defaultMaxQueueCapacity
* delete unused defaultConfig field
* add debug log line when sending alerts to the remote alertmanager
* move and refactor readiness check
* update tests to not include defaultConfig
* Alerting: Move `ExternalAlertmanager` to its own package
We'll avoid import cycles when using components from other packages. In addition to that, I've created an `Options` approach for the multi-org Alertmanager to allow us to override how per-tenant Alertmanagers are created.
* switch things around
* address review comments
* fix references and warnings
* Alerting: Move migration from background service run to ngalert init
SQLite database write contention between the migration's single transaction and
dashboard provisioning's frequent commits was causing the migration to
fail with SQLITE_BUSY/SQLITE_BUSY_SNAPSHOT on all retries.
This is not a new issue for SQLite+Grafana, but the discrepancy between the
lengths of the transactions was causing it to happen very consistently. In addition,
since a failed migration has implications for the assumed correctness of the
Alertmanager and alert rule definition state, we cause a server shutdown on
error. This can make e2e tests, as well as some high-load provisioned
SQLite installations, flaky on startup.
The correct fix for this is better transaction management across various
services and is out of scope for this change, as we're primarily interested in
mitigating the current bout of server failures in e2e tests when using SQLite.
* Alerting: post alerts to the remote Alertmanager and fetch them
* fix broken tests
* Alerting: Add Mimir Backend image to devenv (blocks)
* add alerting as code owner for mimir_backend block
* Alerting: Use Mimir image to run integration tests for the remote Alertmanager
* skip integration test when running all tests
* skipping integration test when no Alertmanager URL is provided
* fix bad host for mimir_backend
* remove basic auth testing until we have an nginx image in our CI
* add integration tests for alerts
* fix tests
* change SendCtx -> Send, add context.Context to Send, fix CI
* add recover() for functions from the Prometheus Alertmanager HTTP client that could panic
* add TODO to implement PutAlerts in a way that mimics what Prometheus does
* fix log format
* Alerting: Use Mimir image to run integration tests for the remote Alertmanager
* skip integration test when running all tests
* skipping integration test when no Alertmanager URL is provided
* fix bad host for mimir_backend
* remove basic auth testing until we have an nginx image in our CI
* Fix migration of custom dashboard permissions
Dashboard alert permissions were determined by both their dashboard- and
folder-scoped permissions, while UA alert rules only have folder-scoped
permissions.
This means that, when migrating an alert, we need to decide whether the parent folder
is a correct location for the newly created alert rule, so that users, teams,
and org roles have the same access to it as they did in legacy.
To do this, we translate both the folder and dashboard resource
permissions to two sets of SetResourcePermissionCommands. Each of these
encapsulates a mapping of all:
OrgRoles -> Viewer/Editor/Admin
Teams -> Viewer/Editor/Admin
Users -> Viewer/Editor/Admin
When the dashboard permissions (including those inherited from the parent
folder) differ from the parent folder permissions alone, we need to create a
new folder to represent the access-level of the legacy dashboard.
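An illustrative sketch of that comparison; the types and names here are simplified stand-ins, not the actual migration code:
```go
package main

import "fmt"

// permissionSet maps a grantee (org role, team, or user) to Viewer/Editor/Admin.
type permissionSet map[string]string

func equal(a, b permissionSet) bool {
	if len(a) != len(b) {
		return false
	}
	for grantee, level := range a {
		if b[grantee] != level {
			return false
		}
	}
	return true
}

func main() {
	folderPerms := permissionSet{"role:Editor": "Editor", "team:ops": "Viewer"}
	dashboardPerms := permissionSet{"role:Editor": "Editor", "team:ops": "Admin"}

	if equal(folderPerms, dashboardPerms) {
		fmt.Println("migrate the alert rule into the existing parent folder")
	} else {
		// The dashboard grants different access than its folder, so a new
		// folder is created to preserve that access for the migrated rule.
		fmt.Println("create a dedicated folder mirroring the dashboard permissions")
	}
}
```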
Compromises:
When determining the SetResourcePermissionCommands, we only take into account
managed and basic roles. Fixed and custom roles introduce significant complexity
and synchronicity hurdles. Instead, we log a warning that they had the potential to
override the newly created folder permissions.
Also, we don't attempt to reconcile datasource permissions that were
not necessary in legacy alerting. Users without access to the datasources
necessary to edit an alert rule will need to obtain said access separately from
the migration.
This PR replaces the vendored models in the migration with their equivalent ngalert models. It also replaces the raw SQL selects and inserts with service calls.
It also fills in some gaps in the testing suite around:
- Migration of alert rules: verifying that the actual data model (queries, conditions) are correct 9a7cfa9
- Secure settings migration: verifying that secure fields remain encrypted for all available notifiers and certain fields migrate from plain text to encrypted secure settings correctly e7d3993
The checks for custom dashboard ACLs will be replaced in a separate, targeted PR, as that change is complex enough on its own.
* update storage's method InsertRules to return the IDs of added rules as a slice, keeping the same order as the rules in the argument
* schematize the response of the update rule group endpoint; add created, updated, and deleted fields that contain the UIDs of affected rules.
* update integration tests to use the new fields
* extend RuleStore interface to get namespace by UID
* add new export API endpoints
* implement request handlers
* update authorization and wire handlers to paths
* add folder error matchers to errorToResponse
* add tests for export methods
* Alerting: Expose metrics for Alertmanager Alerts
In Grafana, alert evaluation and alert delivery are combined. We've always used a metric named `grafana_alerting_alerts` to get a sense of which alerts are currently firing (these come from the evaluation side) and opted not to map the Alertmanager alerts metric directly.
I think it's important that we make a distinction between alerts that happen at evaluation vs alerts that are received for delivery by the internal Alertmanager, as we have options to skip the delivery of these alerts to the internal Alertmanager altogether.
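A sketch of exposing such a delivery-side metric; the metric name and labels here are assumptions for illustration, not the actual Grafana metric names:
```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	reg := prometheus.NewRegistry()

	// Alerts received by the embedded Alertmanager for delivery, by status.
	alertsReceived := prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: "grafana",
		Subsystem: "alerting",
		Name:      "alertmanager_alerts_received_total",
		Help:      "The total number of alerts received by the internal Alertmanager.",
	}, []string{"status"})
	reg.MustRegister(alertsReceived)

	alertsReceived.WithLabelValues("firing").Inc()

	mfs, _ := reg.Gather()
	for _, mf := range mfs {
		fmt.Println(mf.GetName())
	}
}
```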
* Migrate old alerting templates to use $labels
* Fix imports
* Add test coverage and separate rewriting to Go templates
* Fix lint
* Check for additional closing braces
* Add logging of invalid message templates
* Fix tests
* Small fixes
* Update comments
* Panic on empty token
* Use logtest.Fake
* Fix lint
* Allow for spaces in variable names by not tokenizing spaces
* Add template function to deduplicate Labels in a Value map
* Fix behavior of mapLookupString
* Reference deduplicated labels in migrated message template
* Fix behavior of deduplicateLabelsFunc
* Don't create variable for parent logger
* Add more tests for deduplicateLabelsFunc
* Remove unused function
* Apply suggestions from code review
Co-authored-by: Yuri Tseretyan <yuriy.tseretyan@grafana.com>
* Give label val merge function better name
* Extract template migration and escape literal tokens
* Consolidate + simplify template migration
---------
Co-authored-by: William Wernert <william.wernert@grafana.com>
* Alerting: Manage remote Alertmanager silences
* fix typo
* check errors when encoding json in fake external AM
* take path from configured URL, check for nil responses
* Alerting: Don't use a separate collection system for metrics
The state package had a metric collection system that ran every 15s, updating the values of the metrics - there is a common pattern for this in the Prometheus ecosystem called "collectors".
I have removed the behaviour of using a time-based interval to "set" the metrics in favour of a set of value functions that are called at scrape time.
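A minimal example of the value-function approach using `prometheus.NewGaugeFunc`; the metric name is illustrative, not the actual Grafana metric:
```go
package main

import (
	"fmt"
	"sync/atomic"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	var firingAlerts atomic.Int64
	firingAlerts.Store(3)

	reg := prometheus.NewRegistry()
	reg.MustRegister(prometheus.NewGaugeFunc(prometheus.GaugeOpts{
		Name: "grafana_alerting_alerts_firing",
		Help: "Number of currently firing alert instances.",
	}, func() float64 {
		// Called on every scrape; no 15s update loop needed.
		return float64(firingAlerts.Load())
	}))

	mfs, _ := reg.Gather()
	fmt.Println(mfs[0].GetName(), mfs[0].GetMetric()[0].GetGauge().GetValue())
}
```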
* Add support for `keep_firing_for` in ruler proxy
* Don't delete `keep_firing_for` when editing a rule with the field set
Co-Authored-By: Sonia Aguilar <33540275+soniaAguilarPeiron@users.noreply.github.com>
---------
Co-authored-by: Sonia Aguilar <33540275+soniaAguilarPeiron@users.noreply.github.com>
Changes SSE to not always fail all queries when one fails. Now only the failing query and the nodes that depend on it will error.
---------
Co-authored-by: Gilles De Mey <gilles.de.mey@gmail.com>
* Make identity.Requester available at Context
* Clean pkg/services/guardian/guardian.go
* Clean guardian provider and guardian AC
* Clean pkg/api/team.go
* Clean ctxhandler, datasources, plugin and live
* Question: what to do with the UserDisplayDTO?
* Clean dashboards and guardian
* Remove identity.Requester from ReqContext
* Implement NewUserDisplayDTOFromRequester
* Fix tests
* Change status code numbers for http constants
* Upgrade signature of ngalert services
* log parsing errors instead of throwing error
* Fix tests and add logs
* linting
* add metrics and tracing to state manager
* propagate tracer to state manager
* add scheduler metrics
* fix backtesting
* add test for state metrics
* remove StateUpdateCount
* update docs
* metrics can be null
* add tracer to new tests
* make discord url secure
* support migrating non-secure settings to secure settings
* Update public/app/features/alerting/unified/utils/receiver-form.ts
Co-authored-by: William Wernert <william.wernert@grafana.com>
---------
Co-authored-by: Gilles De Mey <gilles.de.mey@gmail.com>
Co-authored-by: William Wernert <william.wernert@grafana.com>
* introduce a new action "alert.provisioning.secrets:read" and role "fixed:alerting.provisioning.secrets:reader"
* update alerting API authorization layer to let the user read provisioning with the new action
* let new action use decrypt flag
* add action and role to docs
* calculate cacheID instead of literals
* use mocked clocks
* advance clocks with the eval results
* use clearer timestamp aliases
* make expected state labels be more clear to read
Co-authored-by: Matthew Jacobson <matthew.jacobson@grafana.com>
* add folder data migration, fix unique index
* fix unique index
* pass a fake store in tests
* pass store into other providers in tests
* and now with alerting!
* Alerting: Fix contact point testing with secure settings
Fixes double encryption of secure settings during contact point testing and removes the code duplication
that helped cause the drift between the Alertmanager and the test endpoint. Also adds integration tests to cover
the regression.
Note: provisioningStore is created to remove a cycle and an unnecessary dependency.
* Expose library element service's folder service
* Register library panels, add count implementation
* Expand folder counts test
* Update registry deletion method interface
* Allow getting library elements from any folder
* Add test for library panel deletion
* Add test for library panel counting
* introduce a function checkIfSeriesNeedToBeFixed that scans all value fields in the response and provides a function that updates Series so they can be uniquely identified. Only Graphite and TestData are checked.
* update `convertDataFramesToResults` to run this function and provide it to WideToMany
* update WideToMany to run the fix function if it is not nil (see the sketch below)
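An illustrative sketch of that pattern; the types here are simplified stand-ins, not the actual SSE types:
```go
package main

import "fmt"

type series struct {
	name   string
	labels map[string]string
}

// checkIfSeriesNeedToBeFixed returns nil when no fixing is needed
// (i.e. for datasources other than Graphite and TestData).
func checkIfSeriesNeedToBeFixed(datasourceType string) func(*series) {
	if datasourceType != "graphite" && datasourceType != "testdata" {
		return nil
	}
	return func(s *series) {
		// Make the series uniquely identifiable, e.g. by copying its name
		// into a label.
		if s.labels == nil {
			s.labels = map[string]string{}
		}
		s.labels["name"] = s.name
	}
}

func wideToMany(all []series, fix func(*series)) []series {
	for i := range all {
		if fix != nil {
			fix(&all[i])
		}
	}
	return all
}

func main() {
	fix := checkIfSeriesNeedToBeFixed("graphite")
	out := wideToMany([]series{{name: "a"}}, fix)
	fmt.Println(out[0].labels["name"]) // a
}
```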
This commit updates eval.go to improve the performance of matching
captures in the general case. In some cases we have reduced the
runtime of the function from tens of minutes to a few hundred milliseconds.
In the case where no capture matches the exact labels, we revert to
the current subset/superset match, but with a reduced search space
due to grouping captures.
This commit changes extractEvalString to sort NumberCaptureValues
in ascending order of Var before building the output string. This
means that users will see EvaluationString in a consistent order,
and it also makes it possible to assert its output in tests.
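A small sketch of that sorting; the capture type and output format here are simplified stand-ins:
```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

type numberValueCapture struct {
	Var   string
	Value float64
}

func extractEvalString(captures []numberValueCapture) string {
	// Sort by Var so the evaluation string is deterministic.
	sort.Slice(captures, func(i, j int) bool { return captures[i].Var < captures[j].Var })
	parts := make([]string, 0, len(captures))
	for _, c := range captures {
		parts = append(parts, fmt.Sprintf("[ var='%s' value=%v ]", c.Var, c.Value))
	}
	return strings.Join(parts, ", ")
}

func main() {
	fmt.Println(extractEvalString([]numberValueCapture{
		{Var: "B", Value: 2},
		{Var: "A", Value: 1},
	}))
	// [ var='A' value=1 ], [ var='B' value=2 ]
}
```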
* introduce a new node type, ML, and implement a command, outlier, that uses the ML plugin as a source of data.
* add a feature flag, mlExpressions, that guards the feature