* Add support for `is_disabled` to `CreateUser()`
* Add support for `is_disabled` to `SearchUsers()`
Had to add it as a `string` type rather than a `bool`: if a `bool` property
is omitted it is indistinguishable from an explicit `false`, so we would always
have to add the filter to the SQL request, which might be dangerous (see the sketch below)
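As a minimal illustration of that tri-state problem (names here are illustrative, not the actual Grafana handler): with an empty `string` the query builder can skip the filter entirely, which a `bool` cannot express.

```go
// Minimal sketch of the tri-state problem. With a bool, "omitted" and
// "false" look identical, so the filter would always end up in the SQL
// request; an empty string can mean "no filter at all".
package main

import "fmt"

type SearchUsersQuery struct {
	IsDisabled string // "", "true" or "false"
}

func buildWhere(q SearchUsersQuery) (string, []interface{}) {
	where, params := "1=1", []interface{}{}
	if q.IsDisabled != "" { // only filter when explicitly requested
		where += " AND u.is_disabled = ?"
		params = append(params, q.IsDisabled == "true")
	}
	return where, params
}

func main() {
	fmt.Println(buildWhere(SearchUsersQuery{}))                    // no filter added
	fmt.Println(buildWhere(SearchUsersQuery{IsDisabled: "false"})) // explicit filter
}
```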
* Restructure destructive tests and add more
* API: Handle Duplicate API Key Name With Useful HTTP Code
* 17447: make changes requested during review
- use dialect.IsUniqueConstraintViolation
- change if statement to match others
- return error properly
* Revert "17447: make changes requested during review"
This reverts commit a4a674ea83.
* API: useful http code on duplicate api key error w/ tests
* API: API Key Duplicate Handling
Fixed a small typo associated with the error
* LDAP:Docs: `active_sync_enabled` setting
Mention `active_sync_enabled` setting and enable it by default
* LDAP: move "disableExternalUser" method
The idea behind the new design of the LDAP module is to minimise conflation
with other parts of the system, so it is decoupled as much as
possible from things like the database, the HTTP transport, etc.
Following the "Do One Thing and Do It Well" Unix philosophy principle, such concerns
are better handled on the consumer side of things,
which is what this commit tries to achieve (see the sketch below)
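As a rough illustration of that consumer-side idea (all names here are hypothetical, not the actual Grafana types): the LDAP package only answers directory questions, while the caller owns database side effects such as disabling a user.

```go
// Illustrative sketch of the decoupling: the LDAP client interface knows
// nothing about the database; the consumer decides what to do with users
// that are no longer found upstream.
package login

import "context"

type LDAPClient interface {
	// Users returns the logins found in LDAP; no DB access here.
	Users(ctx context.Context, logins []string) ([]string, error)
}

type UserStore interface {
	DisableUser(ctx context.Context, login string) error
}

// SyncDisabled lives on the consumer side: it owns the database concern,
// keeping the LDAP module focused on the directory protocol alone.
func SyncDisabled(ctx context.Context, ldap LDAPClient, store UserStore, known []string) error {
	found, err := ldap.Users(ctx, known)
	if err != nil {
		return err
	}
	present := map[string]bool{}
	for _, l := range found {
		present[l] = true
	}
	for _, l := range known {
		if !present[l] {
			if err := store.DisableUser(ctx, l); err != nil {
				return err
			}
		}
	}
	return nil
}
```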
* LDAP: correct user/admin binding
The second binding was not happening, so as long as the admin login/password
in the LDAP configuration was correct, anyone could have logged in as anyone
using an incorrect password (see the sketch below)
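For context, here is roughly what the correct flow looks like, sketched against the `go-ldap/v3` API (the DN layout and names are made up): the service-account bind only locates the user, and a second bind with the user's own credentials is what actually verifies the password.

```go
package ldapauth

import (
	"fmt"

	"github.com/go-ldap/ldap/v3"
)

func authenticate(conn *ldap.Conn, adminDN, adminPass, username, userPass string) error {
	// First bind: service account, used only to search for the user's DN.
	if err := conn.Bind(adminDN, adminPass); err != nil {
		return fmt.Errorf("admin bind failed: %w", err)
	}

	req := ldap.NewSearchRequest(
		"dc=example,dc=org", ldap.ScopeWholeSubtree, ldap.NeverDerefAliases, 1, 0, false,
		fmt.Sprintf("(cn=%s)", ldap.EscapeFilter(username)), []string{"dn"}, nil,
	)
	res, err := conn.Search(req)
	if err != nil || len(res.Entries) != 1 {
		return fmt.Errorf("user lookup failed: %v", err)
	}

	// Second bind: the user's own DN and password. This is the actual
	// credential check; without it any password would be accepted.
	return conn.Bind(res.Entries[0].DN, userPass)
}
```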
* LDAP: Divide the requests
Active Directory does indeed have a limitation of 1000 results
per search (by default, of course).
However, that limitation can be worked around with the paged search feature:
the client keeps issuing requests against the LDAP-compatible server, each
returning a page of the specified size (like 1000), until nothing is left.
That feature is already embedded in LDAP-compatible clients (including our `go-ldap`).
But the slapd server has stricter settings by default. First, the limitation is not 1000
but 500; second, that limit presumably (information about it is a bit
scarce and I am still not sure on some of the details from my own testing)
cannot be worked around with the pagination feature.
See:
- https://www.openldap.org/doc/admin24/limits.html
- https://serverfault.com/questions/328671/paging-using-ldapsearch
- hashicorp/vault#4162 (not sure why they were hitting the limit in
the first place, since `go-ldap` doesn't impose one by default)
But, given all that, for me the `ldapsearch` command with the same request
as with `go-ldap` still returns more than 500 results; it can even return
as many as 10500 items (probably more).
So either there is some difference in the implementation of the LDAP search
between the `go-ldap` module and `ldapsearch`, or I am missing a step :/.
In the wild (see the serverfault link), people apparently still hit that
limitation even with `ldapsearch`, so it still seems to be an issue.
Nevertheless, I'm still confused by this incoherence.
To work around it, I divide the request into pages of no more than
500 items per search (see the sketch below)
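A sketch of that workaround using `go-ldap`'s built-in paged search (the RFC 2696 control); the base DN and attributes are placeholders:

```go
package ldapsearch

import "github.com/go-ldap/ldap/v3"

func searchAllUsers(conn *ldap.Conn, baseDN, filter string) ([]*ldap.Entry, error) {
	req := ldap.NewSearchRequest(
		baseDN, ldap.ScopeWholeSubtree, ldap.NeverDerefAliases,
		0, 0, false, filter, []string{"cn", "mail"}, nil,
	)
	// go-ldap keeps issuing requests of up to 500 entries per page until
	// the server reports there are no more pages, matching slapd's
	// stricter default limit mentioned above.
	res, err := conn.SearchWithPaging(req, 500)
	if err != nil {
		return nil, err
	}
	return res.Entries, nil
}
```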
* Teams: show proper label for each auth provider
Teams: don't store AuthModule in team_member table, use JOIN to get it instead
* Teams: fix AddTeamMember after last changes
* Teams: add more auth provider labels
* Teams: show external sync badge if LDAP is not enabled
* Teams: tests for getting auth module
* Build: use golangci-lint as a make command
* Since gometalinter was deprecated in favor of golangci-lint, it was
replaced by it. The responsibilities held by gometalinter were moved to
golangci-lint
* There were some changes in implementation between the tools (also
mentioned in a code comment), which uncovered a couple of errors
in the code. Those issues were either solved or disabled via
inline comments
* Introduce the golangci-lint config, to make their
configuration more manageable
* Build: replace backend-lint.sh script with make
* Add LDAP config instead of using sed
* Add container name
* Add SizeLimit option to client and to server.
Probably useless at this point, but it's better to have it than not
* Modify backend to allow expiration of API Keys
* Add middleware test for expired api keys
* Modify frontend to enable expiration of API Keys
* Fix frontend tests
* Fix migration and add index for `expires` field
* Add api key tests for database access
* Substitute time.Now() with a mock for test usage (see the sketch below)
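The substitution is the usual package-level clock variable pattern; a minimal sketch with illustrative names:

```go
// Production code reads the clock through a package variable so tests can
// pin it instead of sleeping.
package apikey

import "time"

var timeNow = time.Now // replaced in tests

func isExpired(expires *time.Time) bool {
	return expires != nil && expires.Before(timeNow())
}
```

A test can then set `timeNow = func() time.Time { return fixed }` and assert on exact expiry boundaries.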
* Front-end modifications
* Change input label to `Time to live`
* Change input behavior to comply with other similar inputs
* Add tooltip
* Modify AddApiKey api call response
Expiration should be *time.Time instead of string
* Present expiration date in the selected timezone
* Use kbn for transforming intervals to seconds
* Use `assert` library for tests
* Frontend fixes
Add checks for empty/undefined/null values
* Change expires column from datetime to integer
* Restrict api key duration input
It should be an interval, not a number
* AddApiKey must complain if SecondsToLive is negative
* Declare ErrInvalidApiKeyExpiration
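Putting the last two bullets together, the validation presumably looks something like this (the command type and helper are illustrative; only the field and error names come from the commits):

```go
package apikey

import (
	"errors"
	"time"
)

var ErrInvalidApiKeyExpiration = errors.New("negative value for SecondsToLive")

type AddApiKeyCommand struct {
	Name          string
	SecondsToLive int64
	Expires       *time.Time
}

func (cmd *AddApiKeyCommand) setExpiration(now time.Time) error {
	if cmd.SecondsToLive < 0 {
		return ErrInvalidApiKeyExpiration
	}
	if cmd.SecondsToLive > 0 {
		t := now.Add(time.Duration(cmd.SecondsToLive) * time.Second)
		cmd.Expires = &t
	}
	// SecondsToLive == 0 means the key never expires; Expires stays nil
	// and is omitted from the API response.
	return nil
}
```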
* Move configuration to auth section
* Update docs
* Eliminate alias for models in modified files
* Omit expiration from api response if empty
* Eliminate Goconvey from test file
* Fix test
Do not sleep, use mocked timeNow() instead
* Remove index for expires from api_key table
The index should anyway be on both the org_id and expires fields.
However, this commit eliminates the index completely for now,
since not many rows are expected in this table.
* Use getTimeZone function
* Minor change in api key listing
The frontend should display a message instead of an empty string
if the key does not expire.
* batch disable users
* batch revoke users tokens
* split batch disable user and revoke token
* API: get users with auth info and isExternal flag
* fix tests for batch disable users
* Users: refactor /api/users/search endpoint
* Users: use alias for "user" table
* Chore: add BatchDisableUsers() to the bus
* Users: order user list by id explicitly
* Users: return AuthModule from /api/users/:id endpoint
* Users: do not return unused fields
* Users: fix SearchUsers method after last changes
* User: return auth module as array for future purposes
* User: tests for SearchUsers()
* User: return only latest auth module in SearchUsers()
* User: fix JOIN, get only most recent auth module
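The shape of the fixed query, roughly (illustrative SQL; table names follow the commits, with the `user` table quoted and aliased because it is a reserved word): a correlated subquery keeps only the newest `user_auth` row per user.

```go
package sqlstore

// Only the most recent auth module per user is joined in, instead of one
// row per historical auth entry.
const searchUsersWithAuthSQL = `
SELECT u.id, u.login, u.email, u.is_disabled, ua.auth_module
  FROM "user" AS u
  LEFT JOIN user_auth AS ua
    ON ua.user_id = u.id
   AND ua.created = (SELECT MAX(ua2.created)
                       FROM user_auth AS ua2
                      WHERE ua2.user_id = u.id)
 ORDER BY u.id`
```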
* tsdb: add support for setting debug flag of tsdb query
* alerting: adds debug flag in eval context
The debug flag is set when testing an alert rule, and it is used to return
more debug information in the test alert rule response. The debug flag is
also provided to tsdb queries so datasources can optionally add support for
returning additional debug data (see the sketch below)
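Roughly how the flag travels, as a hedged sketch (type and field names are assumptions, not Grafana's exact models):

```go
package tsdb

type Query struct {
	RefID string
	Debug bool // set when testing an alert rule
}

type QueryResult struct {
	RefID string
	Meta  map[string]interface{}
}

func executeQuery(q Query, rawRequest, rawResponse string) QueryResult {
	res := QueryResult{RefID: q.RefID, Meta: map[string]interface{}{}}
	if q.Debug {
		// Only pay the cost of shipping raw payloads in debug mode.
		res.Meta["rawRequest"] = rawRequest
		res.Meta["rawResponse"] = rawResponse
	}
	return res
}
```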
* alerting: improve test alert rule ui
Adds buttons for expand/collapse json and copy json to clipboard,
very similar to how the query inspector works.
* elasticsearch: implement support for tsdb query debug flag
* elasticsearch: embedding client response in struct
* alerting: return proper query model when testing rule
* LDAP: use only one struct
* Use only models.ExternalUserInfo
* Add additional helper method :/
* Move all the helpers to one module
* LDAP: refactoring
* Rename some of the public methods and change their behaviour
* Remove outdated methods
* Simplify logic
* More tests
There are no, and never were, tests for settings.go; added tests for the helper
methods (coverage is now about 100% for them). Added tests for the main
LDAP logic, but there is still some stuff to add. Dial() is not tested and not
decoupled; it might be a challenge to do that properly.
* Restructure tests:
* no dependency on external modules
* more consistent naming
* logical division
* More guards for erroneous paths
* Login: make login service an explicit dependency
* LDAP: remove no longer needed test helper fns
* LDAP: remove useless import
* LDAP: Use new interface in multildap module
* LDAP: corrections for the groups of multiple users
* In case there were several users, their groups weren't detected correctly
* Simplify helpers module
* Implementation of optimistic lock pattern
Try to insert the remote cache key and handle integrity error
* Remove transaction
Integrity error inside a transaction results in deadlock
* Remove check for existing remote cache key
It is no longer needed since integrity constraint violations are handled
* Add check for integrity constraint violation
Do not update the row if the insert statement fails
for anything other than an integrity constraint violation
* Handle failing inserts because of deadlocks
If the insert statement fails because of a deadlock
try to update the row
* Add utility function for returning SQL error code
Useful for debugging
* Add logging for failing expired cache key deletion
Do not swallow it completely
* Revert "Add utility function for returning SQL error code"
This reverts commit 8e0b82c79633e7d8bc350823cbbab2ac7a58c0a5.
* Better log for failing deletion of expired cache key
* Add some comments
* Remove check for existing cache key
Attempt to insert the key without checking if it's already there
and handle the error situations
* Do not propagate deadlocks created during update
Most probably somebody else is trying to insert/update
the key at the same time so it is safe enough to ignore it
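Taken together, the commits above amount to the following insert-first pattern; this is a hedged sketch with an assumed storage interface, not the actual remote cache code:

```go
package remotecache

import "time"

type sqlStore interface {
	Insert(key string, value []byte, expires time.Time) error
	Update(key string, value []byte, expires time.Time) error
	IsUniqueConstraintViolation(err error) bool
	IsDeadlock(err error) bool
}

// set tries the optimistic insert first and only falls back to an update
// when another session beat it to the row.
func set(db sqlStore, key string, value []byte, expires time.Time) error {
	err := db.Insert(key, value, expires)
	if err == nil {
		return nil
	}
	if db.IsUniqueConstraintViolation(err) || db.IsDeadlock(err) {
		if uerr := db.Update(key, value, expires); uerr != nil && !db.IsDeadlock(uerr) {
			return uerr
		}
		// A deadlock during the update means somebody else is writing the
		// same key right now, so it is safe enough to ignore.
		return nil
	}
	return err
}
```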
xorm introduced some changes in
https://github.com/go-xorm/xorm/pull/824 and
https://github.com/go-xorm/xorm/pull/876 which by default use
public as the postgres schema, and this was a breaking change compared
to before. Grafana has implemented a custom postgres dialect, so the above
changes weren't a problem here. However, Grafana's custom database
migration was using the xorm dialect to check whether the migration table
exists or not.
For those using a custom search_path (schema) in postgres, configured at the
server, database or user level, the migration table check would not find
the migration table, since it was looking in the public schema due to the xorm
changes above. This had the consequence that Grafana's database
migration failed the second time it ran, since the migrations had already been
applied in another schema.
This change makes xorm use an empty default schema for postgres and
thereby mimics how it functioned before
xorm's changes above (see the sketch below).
Fixes #16720
Co-Authored-By: Carl Bergquist <carl@grafana.com>
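As a hedged illustration of why the check broke: a schema-agnostic existence probe has to resolve the connection's search_path instead of hard-coding `public`. The table name matches Grafana's migration log; the rest is illustrative:

```go
package migrator

import "database/sql"

func migrationTableExists(db *sql.DB) (bool, error) {
	// current_schema() resolves to the first schema on the search_path,
	// which is effectively what xorm's earlier behaviour relied on.
	row := db.QueryRow(
		`SELECT EXISTS (SELECT 1 FROM information_schema.tables
		 WHERE table_schema = current_schema() AND table_name = 'migration_log')`)
	var exists bool
	err := row.Scan(&exists)
	return exists, err
}
```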
* Removes Add/Remove methods
* Publicise necessary fields and methods so we can extend it
* Publicise mock API
* More comments and additional simplifications
* Sync with master
Still having low coverage :/ - should be addressed in #17208
Adds an additional sqlite error code, 5 (SQLITE_BUSY), to the
transaction retry handler, so that retries also happen when sqlite
returns a "database is locked" error (see the sketch below).
More info: https://www.sqlite.org/rescode.html#busy
Ref #17247 #16638
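A sketch of the widened retry condition, assuming the `mattn/go-sqlite3` driver that Grafana uses:

```go
package sqlstore

import (
	"errors"

	"github.com/mattn/go-sqlite3"
)

func shouldRetry(err error) bool {
	var sqlErr sqlite3.Error
	if errors.As(err, &sqlErr) {
		// ErrBusy (5): another connection holds the lock ("database is locked").
		// ErrLocked (6): a conflict within the same connection.
		return sqlErr.Code == sqlite3.ErrBusy || sqlErr.Code == sqlite3.ErrLocked
	}
	return false
}
```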
* Users: add is_disabled column
* Users: disable users removed from LDAP
* Auth: return ErrInvalidCredentials for failed LDAP auth
* User: return isDisabled flag in user search api
* User: mark disabled users at the server admin page
* Chore: refactor according to review
* Auth: prevent disabled user from login
* Auth: re-enable user when found in LDAP again
* User: add api endpoint for disabling user
* User: use separate endpoints to disable/enable user
* User: disallow disabling external users
* User: able to disable users from admin UI
* Chore: refactor based on review
* Chore: use more clear error check when disabling user
* Fix login tests
* Tests for disabling user during the LDAP login
* Tests for disable user API
* Tests for login with disabled user
* Remove disable user UI stub
* Sync with latest LDAP refactoring
* encapsulates multildap logic under one module
* abstracts users upsert and get logic
* changes some of the error message texts and the import sort order
* heavily refactors the LDAP module – it now only deals with LDAP-related behaviour
* integrates affected auth_proxy module and their tests
* refactoring of the auth_proxy logic
* Chore: explore possibilities of using makefile
This is an exploratory commit - I wanted to see how the
revive/gosec linters could be integrated with the makefile and our build scripts.
Looks better than I expected :)
* Chore: make revive happy
The revive execution was not supplied with a path; if you restore that, a couple of
errors pop up - so I fixed them
* Chore: make revive happy