* Implementation of optimistic lock pattern
Try to insert the remote cache key and handle integrity error
* Remove transaction
Integrity error inside a transaction results in deadlock
* Remove check for existing remote cache key
It is no longer needed since integrity constraint violations are handled
* Add check for integrity constraint violation
Do not update the row if the insert statement fails
for anything other than an integrity constraint violation
* Handle failing inserts because of deadlocks
If the insert statement fails because of a deadlock
try to update the row
* Add utility function for returning SQL error code
Useful for debugging
* Add logging for failing expired cache key deletion
Do not swallow it completely
* Revert "Add utility function for returning SQL error code"
This reverts commit 8e0b82c79633e7d8bc350823cbbab2ac7a58c0a5.
* Better log for failing deletion of expired cache key
* Add some comments
* Remove check for existing cache key
Attempt to insert the key without checking if it's already there
and handle the error situations
* Do not propagate deadlocks created during update
Most probably somebody else is trying to insert/update
the key at the same time so it is safe enough to ignore it
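
The commits above converge on a small insert-or-update routine. Below is a minimal Go sketch of that pattern, assuming a plain database/sql store, a cache_data table, and placeholder error classifiers (the real checks are dialect-specific); it is not Grafana's actual code.

```go
// Minimal sketch of the insert-then-update pattern described above; the table
// name and the error classification helpers are assumptions for this example.
package cache

import (
	"database/sql"
	"log"
	"strings"
	"time"
)

// setCacheKey optimistically inserts the key and only falls back to UPDATE
// when the insert fails because the key already exists (or a concurrent
// writer caused a deadlock).
func setCacheKey(db *sql.DB, key string, data []byte, expires time.Duration) error {
	now := time.Now().Unix()

	_, err := db.Exec(
		"INSERT INTO cache_data (cache_key, data, expires, created_at) VALUES (?, ?, ?, ?)",
		key, data, int64(expires.Seconds()), now,
	)
	if err == nil {
		return nil
	}

	// Any failure other than an integrity constraint violation or a deadlock
	// is a real error and must not be hidden behind an UPDATE.
	if !isIntegrityConstraintViolation(err) && !isDeadlock(err) {
		return err
	}

	_, err = db.Exec(
		"UPDATE cache_data SET data = ?, expires = ?, created_at = ? WHERE cache_key = ?",
		data, int64(expires.Seconds()), now, key,
	)
	if err != nil && isDeadlock(err) {
		// Most probably another writer is inserting/updating the same key
		// right now, so it is safe enough to ignore the deadlock here.
		return nil
	}
	return err
}

// deleteExpired logs a failing delete instead of swallowing it completely.
func deleteExpired(db *sql.DB, key string) {
	if _, err := db.Exec("DELETE FROM cache_data WHERE cache_key = ?", key); err != nil {
		log.Printf("failed to delete expired cache key %q: %v", key, err)
	}
}

// Placeholder classifiers; real code would inspect driver-specific error codes.
func isIntegrityConstraintViolation(err error) bool {
	msg := err.Error()
	return strings.Contains(msg, "UNIQUE constraint failed") || // sqlite
		strings.Contains(msg, "Duplicate entry") || // mysql
		strings.Contains(msg, "duplicate key value") // postgres
}

func isDeadlock(err error) bool {
	msg := err.Error()
	return strings.Contains(msg, "Deadlock found") || // mysql
		strings.Contains(msg, "deadlock detected") // postgres
}
```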
* x_xss_protection
* strict_transport_security (HSTS)
* x_content_type_options
These are currently defaulted to false (off) until the next minor release.
Fixes #17509
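
For context, the three settings above toggle standard HTTP response security headers. An illustrative Go middleware (not Grafana's implementation) that sets them:

```go
// Illustrative net/http middleware showing the headers the settings above
// control; the header values are common defaults and the real ones are
// configurable.
package middleware

import "net/http"

func securityHeaders(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// x_content_type_options
		w.Header().Set("X-Content-Type-Options", "nosniff")
		// x_xss_protection
		w.Header().Set("X-XSS-Protection", "1; mode=block")
		// strict_transport_security (HSTS); only meaningful over HTTPS
		if r.TLS != nil {
			w.Header().Set("Strict-Transport-Security", "max-age=86400")
		}
		next.ServeHTTP(w, r)
	})
}
```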
* Feature: Parse user agent string in user auth token api response (#16222)
* Adding UA Parser Go modules attempt (#16222)
* Bring user agent vals up per req
* fix tests
* doc update
* update to flatten, no maps
* update doc
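
A small sketch of what parsing a User-Agent string looks like with the ua-parser Go module (github.com/ua-parser/uap-go); whether this is the exact module the PR settled on is an assumption here, the point is the shape of the parsed data an auth token API response could surface.

```go
// Sketch of User-Agent parsing with the uap-go module; library choice is an
// assumption for illustration, not confirmed by the PR.
package main

import (
	"fmt"

	"github.com/ua-parser/uap-go/uaparser"
)

func main() {
	parser := uaparser.NewFromSaved() // uses the regexes compiled into the module

	ua := "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14) AppleWebKit/537.36 " +
		"(KHTML, like Gecko) Chrome/72.0.3626.119 Safari/537.36"
	client := parser.Parse(ua)

	// Fields like these can be returned instead of the raw User-Agent string.
	fmt.Println(client.UserAgent.Family) // e.g. "Chrome"
	fmt.Println(client.Os.Family)        // e.g. "Mac OS X"
	fmt.Println(client.Device.Family)    // e.g. "Mac" (or "Other", depending on regex data)
}
```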
* wip: fix remote cache for redis
connstr parsing and non-negative expires for #17377
TODO: finish parse, check zero case, find out why negative duration in the first place
* finish parse.
Still TODO: find out where the negative value comes from, and decide if it would be better to make database-specific entries in the .ini file
* update ini files
* remove accidental uncomment in defaults.ini
* auth_proxy: make expiration non-negative so it is not in the past
* fix test, revert neg in redis
* review: use errutil
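
A rough sketch of the two fixes described above, not the actual implementation: splitting a comma-separated key=value connection string (the addr/db/pool_size/password keys are an assumed format for this sketch) and clamping the expiration so a negative value never reaches the cache backend.

```go
// Sketch of connstr parsing and non-negative expiration handling; the key
// names and the clamping helper are assumptions for illustration.
package remotecache

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

type redisOptions struct {
	Addr     string
	Password string
	DB       int
	PoolSize int
}

func parseRedisConnStr(connStr string) (redisOptions, error) {
	opts := redisOptions{}
	for _, pair := range strings.Split(connStr, ",") {
		parts := strings.SplitN(pair, "=", 2)
		if len(parts) != 2 {
			return opts, fmt.Errorf("incorrect redis connection string format: %q", pair)
		}
		key, value := parts[0], parts[1]
		switch key {
		case "addr":
			opts.Addr = value
		case "password":
			opts.Password = value
		case "db":
			n, err := strconv.Atoi(value)
			if err != nil {
				return opts, fmt.Errorf("invalid db value %q: %v", value, err)
			}
			opts.DB = n
		case "pool_size":
			n, err := strconv.Atoi(value)
			if err != nil {
				return opts, fmt.Errorf("invalid pool_size value %q: %v", value, err)
			}
			opts.PoolSize = n
		default:
			return opts, fmt.Errorf("unrecognized redis option: %q", key)
		}
	}
	return opts, nil
}

// clampExpiration guards against handing the backend an expiration that is
// already in the past (the negative-duration symptom mentioned above).
func clampExpiration(expires time.Duration) time.Duration {
	if expires < 0 {
		return 0
	}
	return expires
}
```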
xorm introduced some changes in
https://github.com/go-xorm/xorm/pull/824 and
https://github.com/go-xorm/xorm/pull/876 which by default will use
public as the postgres schema and this was a breaking change compared
to before. Grafana has implemented a custom postgres dialect, so the above
changes weren't a problem here. However, Grafana's custom database
migration was using the xorm dialect to check whether the migration table
exists or not.
For those using a custom search_path (schema) in postgres, configured at the
server, database or user level, the migration table check would not find the
migration table since it was looking in the public schema due to the xorm
changes above. As a consequence, Grafana's database migration failed the
second time it ran, since the migrations had already been run in another
schema.
This change makes xorm use an empty default schema for postgres and
thereby mimics how it functioned before the xorm changes above.
Fixes #16720
Co-Authored-By: Carl Bergquist <carl@grafana.com>
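
An illustration of the underlying Postgres behaviour (not xorm's or Grafana's code): a table-existence check pinned to the public schema misses a table created under a custom search_path, while a search_path-aware check finds it. The migration_log table name and the use of a standard database/sql postgres driver are assumptions for the example.

```go
// Demonstrates why a public-schema-only existence check breaks with a custom
// search_path; table name and driver are assumed for illustration.
package migrations

import "database/sql"

// existsInPublic is pinned to the public schema: it returns false when the
// migration table was created in a schema selected via a custom search_path.
func existsInPublic(db *sql.DB, table string) (bool, error) {
	var exists bool
	err := db.QueryRow(
		`SELECT EXISTS (
		   SELECT 1 FROM information_schema.tables
		   WHERE table_schema = 'public' AND table_name = $1)`, table).Scan(&exists)
	return exists, err
}

// existsOnSearchPath resolves the name the same way ordinary queries do, so it
// finds the table wherever the active search_path points.
func existsOnSearchPath(db *sql.DB, table string) (bool, error) {
	var oid sql.NullString
	err := db.QueryRow(`SELECT to_regclass($1)`, table).Scan(&oid)
	return oid.Valid, err
}
```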
* Refactor: Removes replaceUrl from actions
* Refactor: Moves saveState thunk to epic
* Refactor: Moves thunks to epics
* Wip: removes resulttype and queries once
* Refactor: LiveTailing uses observer in query
* Refactor: Creates epics folder for epics and move back actioncreators
* Tests: Adds tests for epics and reducer
* Fix: Checks for undefined as well
* Refactor: Cleans up previous live tailing implementation
* Chore: merge with master
* Fix: Fixes url issues and prom graph in Panels
* Refactor: Removes supportsStreaming and adds sockets to DataSourcePluginMeta instead
* Refactor: Changes the way we create TimeSeries
* Refactor: Renames sockets to streaming
* Refactor: Changes the way Explore does incremental updates
* Refactor: Removes unused method
* Refactor: Adds back Loading indication
Adds a new [server] setting `serve_from_sub_path`. By enabling
this setting and using a subpath in `root_url` setting, e.g.
`root_url = http://localhost:3000/grafana`, Grafana will be accessible
on `http://localhost:3000/grafana`. By default it is set to `false`
for compatibility reasons.
Closes #16623
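
A generic net/http illustration (not Grafana's server code) of what serving from a sub path means, using the /grafana prefix from the root_url example above:

```go
// Mounting the same handlers under a sub path with http.StripPrefix; the
// routes and port are invented for the example.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/api/health", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})

	root := http.NewServeMux()
	root.Handle("/", mux) // plain paths keep working
	// With serve_from_sub_path enabled, also mount everything under the sub path
	// taken from root_url, here /grafana.
	root.Handle("/grafana/", http.StripPrefix("/grafana", mux))

	log.Fatal(http.ListenAndServe(":3000", root))
}
```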
* Removes Add/Remove methods
* Publicise necessary fields and methods so we could extend it
* Publicise mock API
* More comments and additional simplifications
* Sync with master
Still having low coverage :/ - should be addressed in #17208
Adds an additional sqlite error code 5 (SQLITE_BUSY) to the
transaction retry handler so that it retries when sqlite
returns a "database is locked" error.
More info: https://www.sqlite.org/rescode.html#busy
Ref #17247 #16638
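
A hedged sketch of such a retry wrapper using the error codes exposed by github.com/mattn/go-sqlite3; this is not the actual Grafana retry handler, and the retry count and backoff are invented for the example.

```go
// Retries a transaction when sqlite reports SQLITE_BUSY or SQLITE_LOCKED;
// the wrapper shape, retry limit and sleep are assumptions for the sketch.
package sqlstore

import (
	"errors"
	"time"

	"github.com/mattn/go-sqlite3"
)

func retryableSqliteErr(err error) bool {
	var sqliteErr sqlite3.Error
	if errors.As(err, &sqliteErr) {
		// ErrBusy == 5 (SQLITE_BUSY, "database is locked" across connections),
		// ErrLocked == 6 (SQLITE_LOCKED, a conflict within the same connection).
		return sqliteErr.Code == sqlite3.ErrBusy || sqliteErr.Code == sqlite3.ErrLocked
	}
	return false
}

func inTransactionWithRetry(fn func() error, retries int) error {
	err := fn()
	if err != nil && retryableSqliteErr(err) && retries < 5 {
		time.Sleep(10 * time.Millisecond)
		return inTransactionWithRetry(fn, retries+1)
	}
	return err
}
```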
* Users: add is_disabled column
* Users: disable users removed from LDAP
* Auth: return ErrInvalidCredentials for failed LDAP auth
* User: return isDisabled flag in user search api
* User: mark disabled users at the server admin page
* Chore: refactor according to review
* Auth: prevent disabled user from login
* Auth: re-enable user when found in LDAP
* User: add api endpoint for disabling user
* User: use separate endpoints to disable/enable user
* User: disallow disabling external users
* User: able to disable users from admin UI
* Chore: refactor based on review
* Chore: use more clear error check when disabling user
* Fix login tests
* Tests for disabling user during the LDAP login
* Tests for disable user API
* Tests for login with disabled user
* Remove disable user UI stub
* Sync with latest LDAP refactoring
* fix: azuremonitor adds multi-sub support to alerting
* fix: AzureMonitor missing parameter in metadata func
The getMetricMetadata function, when called in the query ctrl,
was missing a parameter for the Subscription Id.
Also, made some tweaks to what happens when a chained
dropdown is changed to not reset all the fields that
are dependent on it.
Adds logs scenario which is quite basic and not that smart
to begin with. This will hopefully ease development of
Explore and support for logs in Grafana.
This makes sure the scenarios returned from the API are sorted in a consistent
way, so the values in the scenario drop down are always presented in order
instead of jumping around.
* encapsulates multipleldap logic under one module
* abstracts users upsert and get logic
* changes some of the text error messages and import sort sequence
* heavily refactors the LDAP module: it now only deals with LDAP-related behaviour
* integrates affected auth_proxy module and their tests
* refactors the auth_proxy logic