* Alerting: Consistently return Prometheus-style responses from rules APIs.
This commit is part refactor and part fix. The /rules API occasionally returns
error responses that are inconsistent with other error responses. This fixes
that, and adds a function to map from Prometheus error types to HTTP codes.
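To illustrate the idea, here is a minimal sketch of mapping Prometheus error types to HTTP status codes and rendering a consistent response body (the names and the exact set of error types are illustrative, not the actual Grafana helpers):

```go
package main

import (
	"encoding/json"
	"net/http"
)

// errorType mirrors the Prometheus API error types (illustrative subset).
type errorType string

const (
	errBadData     errorType = "bad_data"
	errNotFound    errorType = "not_found"
	errUnavailable errorType = "unavailable"
	errInternal    errorType = "internal"
)

// apiError is a Prometheus-style error response body.
type apiError struct {
	Status    string    `json:"status"`
	ErrorType errorType `json:"errorType"`
	Error     string    `json:"error"`
}

// statusForErrorType maps a Prometheus error type to an HTTP status code.
func statusForErrorType(t errorType) int {
	switch t {
	case errBadData:
		return http.StatusBadRequest
	case errNotFound:
		return http.StatusNotFound
	case errUnavailable:
		return http.StatusServiceUnavailable
	default:
		return http.StatusInternalServerError
	}
}

// writeError renders every error in the same Prometheus-style shape.
func writeError(w http.ResponseWriter, t errorType, msg string) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(statusForErrorType(t))
	_ = json.NewEncoder(w).Encode(apiError{Status: "error", ErrorType: t, Error: msg})
}

func main() {
	http.HandleFunc("/rules", func(w http.ResponseWriter, r *http.Request) {
		writeError(w, errBadData, "invalid limit_alerts parameter")
	})
	_ = http.ListenAndServe(":8080", nil)
}
```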
* Fix integration tests
* Linter happiness
* Make linter more happy
* Fix up one more place returning non-Prometheus responses
* Alerting: Implement SaveAndApplyDefaultConfig in the remote Alertmanager struct
* send the hash of the encrypted configuration
* tests, default config hash in AM struct
* add missing default config to test
* restore build directory
* go work file...
* fix broken test
* remove unnecessary conversion to []byte
* go work again...
* make things work again with latest main branch changes
* update error messages in tests for decrypting config
Preparing these functions to be used by some other part of the codebase,
which does not have a `contextmodel.ReqContext`, only the normal request
structure (`url.Values`, etc.). This is slightly messy because of how
Grafana allows URL parameters to be either in the URL or in the request body,
so we need to make sure to invoke the form parsing logic in `ReqContext`.
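As a rough illustration of that parsing concern, using only the standard library (the real code goes through `ReqContext`'s form handling; the parameter name is just an example):

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// extractLimit reads a parameter that may arrive either in the URL query
// string or in a form-encoded request body, mirroring what the ReqContext
// form parsing takes care of in Grafana.
func extractLimit(r *http.Request) (string, error) {
	// ParseForm merges the URL query and any application/x-www-form-urlencoded
	// body into r.Form, so both places are covered by a single lookup.
	if err := r.ParseForm(); err != nil {
		return "", err
	}
	return r.Form.Get("limit_alerts"), nil
}

func main() {
	body := strings.NewReader(url.Values{"limit_alerts": {"16"}}.Encode())
	r, _ := http.NewRequest(http.MethodPost, "/rules", body)
	r.Header.Set("Content-Type", "application/x-www-form-urlencoded")

	limit, err := extractLimit(r)
	fmt.Println(limit, err) // 16 <nil>
}
```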
* Alerting: Optimize rule status gathering APIs when a limit is applied.
The frontend very commonly calls the `/rules` API with `limit_alerts=16`. When
there are a very large number of alert instances present, this API is quite
slow to respond, and profiling suggests that a big part of the problem is
sorting the alerts by importance, in order to select the first 16.
This changes the application of the limit to use a more efficient heap-based
top-k algorithm. This maintains a slice of only the highest ranked items whilst
iterating the full set of alert instances, which substantially reduces the
number of comparisons needed. This is particularly effective, as the
`AlertsByImportance` comparison is quite complex.
I've included a benchmark to compare the new TopK function to the existing
Sort/limit strategy. It shows that for small limits, the new approach is
much faster, especially at high numbers of alerts, e.g.
100K alerts / limit 16: 1.91s vs 0.02s (-99%)
For situations where there is no effective limit, sorting is marginally faster;
therefore, in the API implementation, if there is either a) no limit or b) no
effective limit, we just sort the alerts as before. There is also a space
overhead to using a heap, which would matter for large limits.
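A simplified sketch of the heap-based top-k approach (stand-in types; the real code ranks alert instances with `AlertsByImportance`):

```go
package main

import (
	"container/heap"
	"fmt"
	"sort"
)

type alert struct{ importance int } // stand-in for a real alert instance

// less reports whether a ranks higher (is more important) than b.
func less(a, b alert) bool { return a.importance > b.importance }

// minHeap keeps the *worst* of the current top-k at the root so it can be
// evicted cheaply when a better candidate arrives.
type minHeap []alert

func (h minHeap) Len() int            { return len(h) }
func (h minHeap) Less(i, j int) bool  { return less(h[j], h[i]) }
func (h minHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *minHeap) Push(x interface{}) { *h = append(*h, x.(alert)) }
func (h *minHeap) Pop() interface{} {
	old := *h
	x := old[len(old)-1]
	*h = old[:len(old)-1]
	return x
}

// topK returns the k highest-ranked alerts, ordered best first, while only
// ever comparing candidates against the k items currently kept.
func topK(alerts []alert, k int) []alert {
	h := make(minHeap, 0, k)
	for _, a := range alerts {
		if len(h) < k {
			heap.Push(&h, a)
		} else if less(a, h[0]) { // better than the worst item kept so far
			h[0] = a
			heap.Fix(&h, 0)
		}
	}
	out := []alert(h)
	sort.Slice(out, func(i, j int) bool { return less(out[i], out[j]) })
	return out
}

func main() {
	alerts := []alert{{3}, {9}, {1}, {7}, {5}}
	fmt.Println(topK(alerts, 2)) // [{9} {7}]
}
```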
* Remove commented test cases
* Make linter happy
* Alerting: Fix simplified routing custom group by override
Custom group by overrides for simplified routing were missing the required fields
GroupBy and GroupByAll, which are normally set during upstream Route validation.
This fix ensures those missing fields are applied to the generated routes.
* Inline GroupBy and GroupByAll initialization instead of normalizing afterwards
* Alerting: Fix simplified routes '...' groupBy creating invalid routes
There were a few ways to go about this fix:
1. Modifying our copy of upstream validation to allow this
2. Modify our notification settings validation to prevent this
3. Normalize group by on save
4. Normalize group by on generate
Option 4 was chosen, as the others have a mix of the following cons:
- Generated routes risk being incompatible with upstream/remote AM
- Awkward FE UX when using '...'
- Rule definition changing after save and potential pitfalls with TF
With option 4, generated routes stay compatible with external/remote AMs, the FE
doesn't need to change as we allow mixing '...' and custom label groupBys, and
the settings we save to the db are the same ones requested.
In addition, it has the slight benefit of allowing us to hide the internal
implementation details of `alertname, grafana_folder` from the user in the
future, since we don't need to send them with every FE or TF request.
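Roughly, option 4 means something like the following at route-generation time (a hedged sketch with stand-in types; the required-label handling reflects the `alertname, grafana_folder` detail mentioned above, and the real rules differ):

```go
package main

import "fmt"

// requiredGroupBy mirrors the labels Grafana needs for grouping generated
// routes, per the commit description above (treated as an assumption here).
var requiredGroupBy = []string{"alertname", "grafana_folder"}

// normalizeGroupBy is an illustrative version of "normalize group by on
// generate": if the user asked for '...', group by all labels and drop
// everything else (mixing '...' with named labels is invalid upstream);
// otherwise make sure the required labels are present exactly once.
func normalizeGroupBy(groupBy []string) []string {
	seen := map[string]bool{}
	for _, g := range groupBy {
		if g == "..." {
			return []string{"..."}
		}
		seen[g] = true
	}
	out := append([]string{}, groupBy...)
	for _, r := range requiredGroupBy {
		if !seen[r] {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	fmt.Println(normalizeGroupBy([]string{"...", "team"}))       // [...]
	fmt.Println(normalizeGroupBy([]string{"team", "alertname"})) // [team alertname grafana_folder]
}
```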
* Safer use of DefaultNotificationSettingsGroupBy
* Fix missed API tests
* Alerting: Persist silence state immediately on Create/Delete
Persists the silence state to the kvstore immediately instead of waiting for the
next maintenance run. This is done after Create/Delete to prevent silences from
being lost when a new Alertmanager is started before the state has been persisted.
This can happen, for example, in a rolling deployment scenario.
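Conceptually the change looks like this (placeholder types, not the real silence service or kvstore API); note that a persist failure is only logged, since the next maintenance run will correct the stored state:

```go
package main

import (
	"context"
	"fmt"
)

// kvStore is a stand-in for the kvstore namespace holding Alertmanager state.
type kvStore interface {
	Set(ctx context.Context, key, value string) error
}

type memKV struct{ data map[string]string }

func (m *memKV) Set(_ context.Context, key, value string) error {
	m.data[key] = value
	return nil
}

// silenceService is a placeholder for the service wrapping silence handling.
type silenceService struct {
	kv           kvStore
	marshalState func() string // would serialize the in-memory silence state
}

// CreateSilence creates the silence and then persists the state right away
// instead of waiting for the next maintenance run. Persist failures are only
// logged; periodic maintenance will correct the stored state later.
func (s *silenceService) CreateSilence(ctx context.Context) error {
	// ... create the silence in the in-memory state ...
	if err := s.kv.Set(ctx, "silences", s.marshalState()); err != nil {
		fmt.Println("failed to persist silence state, maintenance will retry:", err)
	}
	return nil
}

func main() {
	kv := &memKV{data: map[string]string{}}
	svc := &silenceService{kv: kv, marshalState: func() string { return "<state>" }}
	_ = svc.CreateSilence(context.Background())
	fmt.Println(kv.data["silences"]) // <state>
}
```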
* Fix test that requires real data
* Don't error if silence state persist fails, maintenance will correct
* extract genericService from RuleService just to reuse it later
* implement silence service
---------
Co-authored-by: William Wernert <william.wernert@grafana.com>
Co-authored-by: Matthew Jacobson <matthew.jacobson@grafana.com>
* Alerting: Make retention period configurable for the notification log
* update sample.ini
* fix outdated comment (on disk -> kvstore)
* skip checking cyclomatic complexity for ReadUnifiedAlertingSettings
* Feature Flags: use FeatureToggles interface where possible
Signed-off-by: Dave Henderson <dave.henderson@grafana.com>
* Replace TestFeatureToggles with existing WithFeatures
Signed-off-by: Dave Henderson <dave.henderson@grafana.com>
---------
Signed-off-by: Dave Henderson <dave.henderson@grafana.com>
* replace sqlstore with db interface in a few packages
* remove from stats
* remove sqlstore in admin test
* remove sqlstore from api plugin tests
* fix another createUser
* remove sqlstore in publicdashboards
* remove sqlstore from orgs
* clean up orguser test
* more clean up in sso
* clean up service accounts
* further cleanup
* more cleanup in accesscontrol
* last cleanup in accesscontrol
* clean up teams
* more removals
* split cfg from db in testenv
* few remaining fixes
* fix test with bus
* pass cfg for testing inside db as an option
* set query retries when no opts provided
* revert golden test data
* rebase and rollback
Terraform Issue: grafana/terraform-provider-grafana#1007
Nested routes should be allowed to inherit the contact point from the root (or direct parent) route, but this fails in the provisioning API (it works in the UI).
* allow users with regular actions access provisioning API paths
* update methods that read rules
skip the new authorization logic if the user CanReadAllRules, to avoid a performance impact on file provisioning (see the short-circuit sketch after this list)
update all methods to accept `identity.Requester`, which contains all permissions and is required by access control.
* create deltas for a single rule
* update modify methods
skip the new authorization logic if the user CanWriteAllRules, to avoid a performance impact on file provisioning
update all methods to accept `identity.Requester`, which contains all permissions and is required by access control.
* implement RuleAccessControlService in provisioning
* update file provisioning user to have all permissions to bypass authz
* update provisioning API to return errutil errors correctly
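A hedged sketch of the CanReadAllRules short-circuit mentioned above (interfaces and names are simplified stand-ins for the real access control service):

```go
package main

import "fmt"

// requester stands in for identity.Requester: it carries the caller's
// permissions so access control can be evaluated without a request context.
type requester interface {
	CanReadAllRules() bool
	HasFolderAccess(folderUID string) bool
}

// authorizeRead skips the per-folder authorization entirely when the caller
// can already read all rules (e.g. the file-provisioning user), avoiding the
// per-rule checks that would otherwise hurt file-provisioning performance.
func authorizeRead(user requester, folderUIDs []string) error {
	if user.CanReadAllRules() {
		return nil
	}
	for _, uid := range folderUIDs {
		if !user.HasFolderAccess(uid) {
			return fmt.Errorf("access denied to folder %q", uid)
		}
	}
	return nil
}

// provisioningUser models the file-provisioning identity that is granted all
// permissions so it bypasses the new authorization logic.
type provisioningUser struct{}

func (provisioningUser) CanReadAllRules() bool       { return true }
func (provisioningUser) HasFolderAccess(string) bool { return true }

func main() {
	fmt.Println(authorizeRead(provisioningUser{}, []string{"folder-1"})) // <nil>
}
```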
---------
Co-authored-by: Alexander Weaver <weaver.alex.d@gmail.com>
* Alerting: Implement ApplyConfig for remote primary mode (forked AM)
* add TODO for saving the config hash in other config-related methods
* fix bad method receiver name (m -> am)
* tests
* add mutex
* remove sync loop
* require "folders:read" and "alert.rules:read" in all rules API requests (write and read).
* add check for permissions "folders:read" and "alert.rules:read" to AuthorizeAccessToRuleGroup and HasAccessToRuleGroup
* check only access to datasource in rule testing API
---------
Co-authored-by: William Wernert <william.wernert@grafana.com>
* (WIP) Alerting: Decrypt secrets before sending configuration to the remote Alertmanager
* refactor, fix tests
* test decrypting secrets
* tidy up
* test SendConfiguration, quote keys, refactor tests
* make linter happy
* decrypt configuration before comparing
* copy configuration struct before decrypting
* reduce diff in TestCompareAndSendConfiguration
* clean up remote/alertmanager.go
* make linter happy
* avoid serializing into JSON to copy struct
* codeowners
Removes legacy alerting, so long and thanks for all the fish! 🐟
---------
Co-authored-by: Matthew Jacobson <matthew.jacobson@grafana.com>
Co-authored-by: Sonia Aguilar <soniaAguilarPeiron@users.noreply.github.com>
Co-authored-by: Armand Grillet <armandgrillet@users.noreply.github.com>
Co-authored-by: William Wernert <rwwiv@users.noreply.github.com>
Co-authored-by: Yuri Tseretyan <yuriy.tseretyan@grafana.com>
* Implement keep last state for state transitions
* Respect For duration when keeping state
* Only keep transition from recording an annotation
* Add keep last state option for nodata/error in UI
* export Evaluation
* Export Evaluation
* Export RuleVersionAndPauseStatus
* export Eval, create interface
* Export update and add to interface
* Export Stop and Run and add to interface
* Registry and scheduler use rule by interface and not concrete type
* Update factory to use interface, update tests to work over public API rather than writing to channels directly
* Rename map in registry
* Rename getOrCreateInfo to not reference a specific implementation
* Genericize alertRuleInfoRegistry into ruleRegistry
* Rename alertRuleInfo to alertRule
* Comments on interface
* Update pkg/services/ngalert/schedule/schedule.go
Co-authored-by: Jean-Philippe Quéméner <JohnnyQQQQ@users.noreply.github.com>
---------
Co-authored-by: Jean-Philippe Quéméner <JohnnyQQQQ@users.noreply.github.com>
* Regenerate openapidocs at 1.21.8 to match CI
* Adjust trigger to work on the actual output files
* Also put go.mod and go.sum in the triggers
* manually fix
* Make an arbitrary change rather than touching the trigger to force a run
* Drop all triggers - run all the time
* Print diff - taken from @papagian's PR
* Manual fixes to swagger doc
---------
Co-authored-by: Ryan McKinley <ryantxu@gmail.com>
* Alerting: Use Alertmanager types extracted into grafana/alerting
We're in the process of exporting all Alertmanager types into grafana/alerting so that they can be imported by the Mimir Alertmanager, without a need to import Grafana directly.
This change introduces type aliasing for all Alertmanager types, based on their 1:1 copies that now live in grafana/alerting.
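The aliasing mechanism itself is plain Go type aliases; a self-contained illustration (the type names are made up, in the real change the aliases point at the copies in grafana/alerting):

```go
package main

import "fmt"

// upstreamSilence pretends to be a type that was moved into grafana/alerting.
type upstreamSilence struct{ ID string }

// Silence is a type alias (note the '='): it is the very same type as the
// upstream one, so existing Grafana code compiles unchanged and no
// conversions are needed at call sites.
type Silence = upstreamSilence

func main() {
	var s Silence = upstreamSilence{ID: "abc"}
	// Printing the dynamic type shows both names refer to one identical type.
	fmt.Printf("%T\n", s) // main.upstreamSilence
}
```

A plain type definition (without the '=') would instead create a distinct type and force conversions everywhere, which is why aliases are the natural fit here.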
Signed-off-by: gotjosh <josue.abreu@gmail.com>
---------
Signed-off-by: gotjosh <josue.abreu@gmail.com>
* create rule factory for more complicated dep injection into rules
* Rules get direct access to metrics, logs, traces utilities, use factory in tests
* Use clock internal to rule
* Use sender, statemanager, evalfactory directly
* evalApplied and stopApplied
* use schedulableAlertRules behind interface
* loaded metrics reader
* 3 relevant config options
* Drop unused scheduler parameter
* Rename ruleRoutine to run
* Update README
* Handle long parameter lists
* remove dead branch
Updates the Grafana Alertmanager to work with the new interface from grafana/alerting#161. This change stops persisting user-defined templates to disk in order to pass them to the Grafana Alertmanager, and instead passes them as strings.
* ValidateInterval doesn't need the entire config
* Validation no longer depends on the entire folder now that we've dropped foldertitle from the API
* Don't depend on entire config struct
* Export validate group
* Alerting: feat: support deleting rule groups in the provisioning API
Adds support for DELETE to the provisioning API's alert rule groups route, which allows deleting the rule group with a
single API call. Previously, groups were deleted by deleting rules one-by-one.
Fixes #81860
This change doesn't add any new paths to the API, only new methods.
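For illustration, deleting a whole group now takes a single request; a rough client-side sketch (the URL path and placeholders shown are assumptions about the existing provisioning route, not taken from the change itself):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Assumed shape of the existing provisioning rule-group route; the commit
	// only adds the DELETE method, not a new path.
	url := "http://localhost:3000/api/v1/provisioning/folder/my-folder-uid/rule-groups/my-group"

	req, err := http.NewRequest(http.MethodDelete, url, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer my-service-account-token")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// A 2xx status means the whole group was removed in one call, instead of
	// deleting each rule one-by-one as before.
	fmt.Println(resp.Status)
}
```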
---------
Co-authored-by: Yuri Tseretyan <yuriy.tseretyan@grafana.com>
* Chore: Replace response status with const var
* Apply suggestions from code review
Co-authored-by: Sofia Papagiannaki <1632407+papagian@users.noreply.github.com>
* Add net/http import
---------
Co-authored-by: Sofia Papagiannaki <1632407+papagian@users.noreply.github.com>
This commit adds basic support for time_intervals, as mute_time_intervals
is deprecated in Alertmanager and scheduled to be removed before 1.0.
It does not add support for time_intervals in API or file provisioning,
nor does it support exporting time intervals. This will be added in
later commits to keep the changes as simple as possible.
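For context, the two fields coexist in the Alertmanager configuration model for now; a trimmed, illustrative view (not the actual struct definitions):

```go
package main

// timeInterval is a trimmed placeholder for an Alertmanager time interval
// definition (the actual time range fields are elided).
type timeInterval struct {
	Name string `yaml:"name" json:"name"`
}

// alertmanagerConfig shows only the two relevant top-level fields:
// time_intervals is the new name; mute_time_intervals remains for backwards
// compatibility until it is removed upstream.
type alertmanagerConfig struct {
	// Deprecated: superseded by TimeIntervals.
	MuteTimeIntervals []timeInterval `yaml:"mute_time_intervals,omitempty" json:"mute_time_intervals,omitempty"`
	TimeIntervals     []timeInterval `yaml:"time_intervals,omitempty" json:"time_intervals,omitempty"`
}

func main() {
	_ = alertmanagerConfig{TimeIntervals: []timeInterval{{Name: "weekends"}}}
}
```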
Previously receivers were only validated before saving the alertmanager
configuration. This is a suboptimal experience for those upgrading with the preview
tool, as a failed channel upgrade will return an API error instead of being
summarized in the table.
Adds a feature flag (alertingUpgradeDryrunOnStart) that will dry-run the legacy
alert upgrade on startup. It is enabled by default.
When on legacy alerting, this feature flag will log the results of the legacy
alerting upgrade on startup and draw attention to anything in the current legacy
alerting configuration that will cause issues when the upgrade is eventually
performed. It acts as a log warning for anyone who needs to take action before
upgrading to Grafana v11, where legacy alerting will be removed.
If the db already has an entry in the kvstore for the silences of an
alertmanager before the migration has taken place, then it's possible that the
active alertmanager will overwrite the silence file created by the migration
before it has a chance to load it into memory.
This should not happen normally but is possible in edge-cases.
This change opts to bypass the unnecessary step of writing the silences to disk
during the migration and instead write them directly to the kvstore. This avoids
the race condition entirely and is more correct as we treat the database as the
source of truth for AM state.
Alerting: Remove start page of upgrade preview
Alerting upgrade page will now always show the summary table even before
upgrading any alerts or notification channels. There are a few reasons for this:
- The information on the start page is redundant as it's now contained in the
documentation.
- Previously, if some unexpected issue prevented performing a full upgrade, a
user would have limited or no means of using the preview tool to help fix the
problem. This is because you could not see the summary table until the full
upgrade was performed at least once. Now, you can upgrade individual alerts and
notification channels from the beginning.
* Add notification settings to storage/domain and API models. Settings are a slice to work around XORM mapping
* Support validation of notification settings when rules are updated
* Implement route generator for Alertmanager configuration that fetches all notification settings (see the sketch after this list).
* Update multi-tenant Alertmanager to run the generator before applying the configuration.
* Add notification settings labels to state calculation
* update the Multi-tenant Alertmanager to provide validation for notification settings
* update GET API so only admins can see auto-gen
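A rough sketch of what the route generator does (trimmed, illustrative types; the matcher label name is a placeholder, and the real generator keys off the full notification settings rather than this simplified fingerprint):

```go
package main

import "fmt"

// notificationSettings is a trimmed stand-in for the per-rule settings.
type notificationSettings struct {
	Receiver string
	GroupBy  []string
}

// route is a trimmed stand-in for an Alertmanager route.
type route struct {
	Receiver string
	Match    map[string]string
	GroupBy  []string
	Routes   []*route
}

// addAutogenRoutes appends one generated sub-route per distinct notification
// setting. Each generated route matches on a label that the state calculation
// also attaches to alert instances (label name below is a placeholder).
func addAutogenRoutes(root *route, settings []notificationSettings) {
	seen := map[string]bool{}
	for _, s := range settings {
		key := fmt.Sprintf("%v", s) // dedupe identical settings
		if seen[key] {
			continue
		}
		seen[key] = true
		root.Routes = append(root.Routes, &route{
			Receiver: s.Receiver,
			Match:    map[string]string{"__autogen_receiver__": s.Receiver},
			GroupBy:  s.GroupBy,
		})
	}
}

func main() {
	root := &route{Receiver: "default"}
	addAutogenRoutes(root, []notificationSettings{
		{Receiver: "oncall", GroupBy: []string{"alertname", "grafana_folder"}},
		{Receiver: "oncall", GroupBy: []string{"alertname", "grafana_folder"}}, // duplicate, ignored
	})
	fmt.Println(len(root.Routes), root.Routes[0].Receiver) // 1 oncall
}
```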
* Alerting: fix race condition in (*ngalert/sender.ExternalAlertmanager).Run
* Chore: Fix data races when accessing members of *ngalert/state.FakeInstanceStore
* Chore: Fix data races in tests in ngalert/schedule and enable some parallel tests
* Chore: fix linters
* Chore: add TODO comment to remove loopvar once we move to Go 1.22
* Add config for limit of rules per rule group
* Warn when editing big groups through normal API
* Warn on prov api writes for groups
* Wire up comp root, tests
* Also add warning to state manager warm
* Drop unnecessary conversion
* Add user uid migration to run on every startup to protect against empty values in an upgrade/downgrade scenario
* Add team uid migration to run on every startup to protect against empty values in an upgrade/downgrade scenario
* Run team uid migration
* streamline initialization of test databases, support on-disk sqlite test db
* clean up test databases
* introduce testsuite helper
* use testsuite everywhere we use a test db
* update documentation
* improve error handling
* disable entity integration test until we can figure out locking error
* update GetUserVisibleNamespaces to use FolderService
* update GetNamespaceByUID to use FolderService.GetFolders
* update GetAlertRulesForScheduling to use FolderService.GetFolders
* Update API and GetAlertRulesForScheduling to use the folder's full path
* get full path of folder in RouteTestGrafanaRuleConfig
* fix escaping of titles for MySQL
This pull request updates our fork of Alertmanager to commit 65bdab0, which is based on commit 5658f8c in Prometheus Alertmanager.
It applies the changes from grafana/alerting#155 which removes the overrides for validation of alerts, labels and silences that we had put in place to allow alerts and silences to work for non-Prometheus datasources. However, as this is now supported in Alertmanager with the UTF-8 work, we can use the new upstream functions and remove these overrides.
The compat package is a package in Alertmanager that takes care of backwards compatibility when parsing matchers and validating alerts, labels and silences. It has three modes: classic mode, UTF-8 strict mode, and fallback mode. These modes are controlled via compat.InitFromFlags. Grafana initializes the compat package without any feature flags, which is the equivalent of fallback mode. Classic and UTF-8 strict modes are used in Mimir.
While Grafana Managed Alerts have no need for fallback mode, Grafana can still be used as an interface to manage the configurations of Mimir Alertmanagers and view configurations of Prometheus Alertmanager, and those installations might not have migrated or might be running on older versions. Such installations behave as if in classic mode, and Grafana must be able to parse their configurations to interact with them for some period of time. As such, Grafana uses fallback mode until we are ready to drop support for outdated installations of Mimir and the Prometheus Alertmanager.
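Conceptually, fallback mode means trying one parser and falling back to the other; a toy sketch with placeholder parsers (not the real compat API, the real parsers are far stricter, and the ordering here is illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// matcher is a trimmed stand-in for an Alertmanager label matcher.
type matcher struct{ name, value string }

// parseUTF8 and parseClassic are placeholders for the two real parsers.
func parseUTF8(s string) (matcher, error) {
	name, value, ok := strings.Cut(s, "=")
	if !ok {
		return matcher{}, errors.New("not a matcher")
	}
	return matcher{strings.TrimSpace(name), strings.Trim(strings.TrimSpace(value), `"`)}, nil
}

func parseClassic(s string) (matcher, error) {
	// The classic grammar is stricter about label names; elided here.
	return parseUTF8(s)
}

// parseFallback shows what fallback mode does conceptually: prefer one parser
// and fall back to the other, so configurations from older Mimir / Prometheus
// Alertmanager installations still parse.
func parseFallback(s string) (matcher, error) {
	if m, err := parseUTF8(s); err == nil {
		return m, nil
	}
	return parseClassic(s)
}

func main() {
	m, err := parseFallback(`foo="bar"`)
	fmt.Println(m, err) // {foo bar} <nil>
}
```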
* Add single receiver method
* Add receiver permissions
* Add single/multi GET endpoints for receivers
* Remove stable tag from time intervals
See end of PR description here: https://github.com/grafana/grafana/pull/81672
* declare new API and models GettableTimeIntervals, PostableTimeIntervals
* add new actions alert.notifications.time-intervals:read and alert.notifications.time-intervals:write.
* update existing alerting roles with the read action. Add to all alerting roles.
* add integration tests