* LDAP: Show all LDAP groups
* Use the returned LDAP groups as the reference when debugging LDAP
We need to use the LDAP groups returned by the server as the main reference for
determining what we were able to match and what we weren't. Previously, we were
using the groups configured in the LDAP TOML configuration file.
* s/User name/Username
* Add a title for the LDAP mapping results
* LDAP: UI Updates to debug view
* LDAP: Make it explicit when we weren't able to match teams
* Add new query mode picker with different states for each query. Also add a really simple migration script
* Populate cross resource dropdowns
* Cleanup. Handle change events
* Add multi select picker for subscriptions
* Fix markup issue
* Prepare for new query mode
* More cleanup
* Handle multiple queries both in ds and backend
* Refactoring
* Improve migration
* Add support for multiselect display name
* Use multiselect also for locations and resources
* Add more typings
* Fix migrations
* Custom multiselect built for array of options instead of variables
* Add url builder test
* fix datasource tests
* UI fixes
* Improve query editor init
* Fix broken tests
* Cleanup
* Fix tslint issue
* Change query mode display name
* Make sure alerting works for single queries
* Friendly error for multi resources
* Add temporary typings
* API: Add `updatedAt` to api/users/:id
This adds the timestamp of when a particular user was last updated to
the `api/users/:id` endpoint.
This helps our administrators understand when the user information was last
updated, particularly when it comes from external systems, e.g. LDAP
* LDAP: Add API endpoint to query the LDAP server(s) status
This endpoint returns the current status(es) of the configured LDAP server(s).
The status of each server is verified by dialling it; if no error is returned, we assume the server is operational.
This is the last API piece I'll produce before moving on to #18759 and seeing the view come to life.
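A minimal sketch of the health-check idea, using a plain TCP dial; the host/port values and the result shape are illustrative, not the actual endpoint contract:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// serverStatus mirrors the idea of the endpoint's per-server result:
// a configured LDAP server and whether dialling it succeeded.
type serverStatus struct {
	Host      string `json:"host"`
	Port      int    `json:"port"`
	Available bool   `json:"available"`
}

// checkServers dials each configured server; if the dial returns no error,
// the server is assumed to be operational.
func checkServers(servers []serverStatus) []serverStatus {
	for i, s := range servers {
		addr := fmt.Sprintf("%s:%d", s.Host, s.Port)
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			servers[i].Available = true
			conn.Close()
		}
	}
	return servers
}

func main() {
	statuses := checkServers([]serverStatus{{Host: "ldap.example.org", Port: 389}})
	fmt.Printf("%+v\n", statuses)
}
```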
* Move the ReloadLDAPCfg function to the debug file
Appears to be a better suited place for it.
* LDAP: Return the server information when we find a specific user
We allow you to specify multiple LDAP servers as part of the LDAP authentication integration. When searching for specific users, we need to understand which server they come from. Returning the server configuration as part of the search will help us do two things:
- Understand in which server we found the user
- Have access to the groups specified as part of the server configuration
* LDAP: Adds the /api/admin/ldap/:username endpoint
This endpoint returns a user found within the configured LDAP server(s). Moreover, it provides the mapping information for the user to help administrators understand how the user would be created within Grafana based on the current configuration.
No changes are executed or saved to the database; this is all an in-memory representation of what the final result would look like.
* Emails: resurrect template notification
* PhantomJS (oh yeah, there is another phantom dev dependency :-) was failing during
the generation of the HTML templates, so I had to update the dependencies
in order to fix it. While doing that I updated the scripts field and docs
for it as well. yarn.lock is included
* Move splitting of the emails to a separate helper function, since more services
are coming up that will need this functionality
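A rough sketch of what such a helper could look like; the function name and the accepted separators are assumptions, not the actual Grafana code:

```go
package main

import (
	"fmt"
	"strings"
)

// splitEmails is a hypothetical helper: it takes a single address string and
// returns the individual addresses, accepting both ";" and "," as separators
// and trimming surrounding whitespace.
func splitEmails(emails string) []string {
	out := []string{}
	for _, e := range strings.FieldsFunc(emails, func(r rune) bool {
		return r == ';' || r == ','
	}) {
		if addr := strings.TrimSpace(e); addr != "" {
			out = append(out, addr)
		}
	}
	return out
}

func main() {
	fmt.Println(splitEmails("ops@example.org; dev@example.org,"))
	// Output: [ops@example.org dev@example.org]
}
```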
* Add support for enterprise-specific email letters. This could probably
be done in a better way, but it's not a priority right now
It seems the `ldap` module introduced a new error type that the
`multildap` module didn't know about,
which broke the multildap login logic.
Fixes #18491
Ref #18587
* SQLite migrations
* cleanup
* migrate end times
* switch to update with a query
* real migration
* anno migrations
* remove old docs
* set isRegion from time changes
* use <> for is not
* add comment and fix index declaration
* single validation place
* add test
* fix test
* add upgrading docs
* use AnnotationEvent
* fix import
* remove regionId from typescript
Existing /api/alert-notifications now requires at least editor access.
Existing /api/alert-notifiers now requires at least editor access.
New /api/alert-notifications/lookup returns less information than
/api/alert-notifications and can be accessed by any authenticated user.
Existing /api/org/users now requires the org admin role.
New /api/org/users/lookup returns less information than
/api/org/users and can be accessed by users that are org admins,
admins in any folder, or admins of any team.
UserPicker component now uses /api/org/users/lookup instead
of /api/org/users.
Fixes #17318
* added alert rule tags in webhook notifications
* fix: don't include the whole list of Tag objects, only key/value pairs, in the webhook JSON
* marked webhook alerts to support alert rule tags
* LDAP: nitpicks
* Add more tests
* Correct and clarify comment for Login() method
* Rename methods (hail consistency!)
* Uppercases first letter of the logs everywhere
* Moves method definitions around to more appropriate places
Fixes #18295
* LDAP: improve POSIX support
* Correctly obtain the DN attributes result
* Allow more flexibility with comparison mapping between POSIX group & user
* Add devenv for POSIX LDAP server
* Correct the docs
Fixes #18140
* Metrics: remove unused metrics
Metric `M_Grafana_Version` is not used anywhere, nor is the mentioned
`M_Grafana_Build_Version`. Seems to be an artefact?
* Metrics: make the naming consistent
* Metrics: add comments to exported vars
* Metrics: use proper naming
Fixes #18110
* Add support for `is_disabled` to `CreateUser()`
* Add support for `is_disabled` to `SearchUsers()`
Had to add it as a `string` type, not as a `bool`: if the property were a `bool`
and was omitted, we would still have to add it to the SQL request, which might be dangerous
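A simplified sketch of why a string works better here: the empty value can mean "no filter", whereas a bool's zero value would silently filter on false. The query struct and helper below are illustrative stand-ins, not the real ones:

```go
package main

import "fmt"

// SearchUsersQuery is a simplified stand-in for the real query struct.
// IsDisabled is a string ("", "true" or "false") rather than a bool, so an
// empty value can mean "no filter" instead of filtering on false by default.
type SearchUsersQuery struct {
	Query      string
	IsDisabled string
}

func buildWhere(q SearchUsersQuery) (string, []interface{}) {
	where := "1 = 1"
	args := []interface{}{}
	if q.Query != "" {
		where += " AND login LIKE ?"
		args = append(args, "%"+q.Query+"%")
	}
	// Only add the condition when the caller explicitly asked for it.
	if q.IsDisabled != "" {
		where += " AND is_disabled = ?"
		args = append(args, q.IsDisabled == "true")
	}
	return where, args
}

func main() {
	where, args := buildWhere(SearchUsersQuery{IsDisabled: "false"})
	fmt.Println(where, args)
}
```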
* Restructure destructive tests and add more
* API: Duplicate API Key Name Handle With Useful HTTP Code
* 17447: make changes requested during review
- use dialect.IsUniqueContraintViolation
- change if statement to match others
- return error properly
* Revert "17447: make changes requested during review"
This reverts commit a4a674ea83.
* API: useful http code on duplicate api key error w/ tests
* API: API Key Duplicate Handling
fixed small typo associated with error
* LDAP:Docs: `active_sync_enabled` setting
Mention `active_sync_enabled` setting and enable it by default
* LDAP: move "disableExternalUser" method
The idea behind the new design of the LDAP module is to minimise conflation
with other parts of the system, so it is decoupled as much as
possible from things like the database, HTTP transport, etc.
Following the "Do One Thing and Do It Well" Unix philosophy principle, other concerns
are better handled on the consumer side of things,
which is what this commit tries to achieve
* LDAP: correct user/admin binding
The second binding was not happening, so if the admin login/password
in the LDAP configuration was correct, anyone could have logged in as anyone using
an incorrect password
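A minimal sketch of the intended two-step flow using `go-ldap`; the server address, base DN and attribute names are placeholders. Skipping the second bind is exactly the bug described above: any password would be accepted once the admin bind succeeded.

```go
package main

import (
	"fmt"
	"log"

	"gopkg.in/ldap.v3"
)

// authenticate binds as the admin/search account to locate the user's DN,
// then binds *again* with the user's DN and the password the user supplied.
func authenticate(addr, adminDN, adminPass, username, userPass string) error {
	conn, err := ldap.Dial("tcp", addr)
	if err != nil {
		return err
	}
	defer conn.Close()

	// First bind: the configured admin/search account.
	if err := conn.Bind(adminDN, adminPass); err != nil {
		return err
	}

	req := ldap.NewSearchRequest(
		"dc=example,dc=org", ldap.ScopeWholeSubtree, ldap.NeverDerefAliases, 1, 0, false,
		fmt.Sprintf("(cn=%s)", ldap.EscapeFilter(username)), []string{"dn"}, nil,
	)
	res, err := conn.Search(req)
	if err != nil || len(res.Entries) == 0 {
		return fmt.Errorf("user not found: %v", err)
	}

	// Second bind: verify the user's own credentials.
	return conn.Bind(res.Entries[0].DN, userPass)
}

func main() {
	err := authenticate("ldap.example.org:389", "cn=admin,dc=example,dc=org", "secret", "alice", "password")
	if err != nil {
		log.Fatal(err)
	}
}
```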
* LDAP: Divide the requests
Active Directory does indeed have a limitation of 1000 results
per search (the default, of course).
However, that limitation can be worked around with the paginated search feature,
meaning the `pagination` number is how many times the LDAP-compatible server will be
requested by the client, with a specified amount of users per request (like 1000). That feature is
already embedded in LDAP-compatible clients (including our `go-ldap`).
But the slapd server has stricter settings by default. First, the limit is not 1000
but 500; second, that limit presumably (information about it is a bit
scarce and I'm still not sure about some of the details from my own testing)
cannot be worked around with the pagination feature.
See:
- https://www.openldap.org/doc/admin24/limits.html
- https://serverfault.com/questions/328671/paging-using-ldapsearch
- hashicorp/vault#4162 - not sure why they were hitting the limit in
the first place, since `go-ldap` doesn't have one by default.
But, given all that, for me the `ldapsearch` command with the same request
as with `go-ldap` still returns more than 500 results; it can even return
as many as 10500 items (probably more).
So either there are some differences in the implementation of the LDAP search
between the `go-ldap` module and `ldapsearch`, or I am missing a step :/.
In the wild (see the serverfault link), people apparently still hit that
limitation even with `ldapsearch`, so it still seems to be an issue.
But, nevertheless, I'm still confused by this incoherence.
To work around it, I divide the request into batches of no more than
500 items per search
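A minimal sketch of the workaround, assuming we already have the list of usernames to look up; the helper names and the OR-filter shape are illustrative, not the actual Grafana code:

```go
package main

import (
	"fmt"
	"strings"
)

const maxPerSearch = 500 // slapd's default size limit

// chunk splits users into slices of at most size entries, so every
// LDAP search stays below the server-side size limit.
func chunk(users []string, size int) [][]string {
	var out [][]string
	for size < len(users) {
		out = append(out, users[:size])
		users = users[size:]
	}
	return append(out, users)
}

// filterFor builds an OR filter for one chunk, e.g. (|(cn=a)(cn=b)...).
func filterFor(users []string) string {
	var b strings.Builder
	b.WriteString("(|")
	for _, u := range users {
		fmt.Fprintf(&b, "(cn=%s)", u)
	}
	b.WriteString(")")
	return b.String()
}

func main() {
	users := make([]string, 1200)
	for i := range users {
		users[i] = fmt.Sprintf("user%d", i)
	}
	for i, c := range chunk(users, maxPerSearch) {
		// One search request per chunk; results are merged afterwards.
		fmt.Printf("request %d: %d users, filter starts with %s...\n", i+1, len(c), filterFor(c)[:20])
	}
}
```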
* Teams: show proper label for each auth provider
Teams: don't store AuthModule in the team_member table; use a JOIN to get it instead
* Teams: fix AddTeamMember after last changes
* Teams: add more auth provider labels
* Teams: show external sync badge if LDAP is not enabled
* Teams: tests for getting auth module
* Build: use golangci-lint as a make command
* Since gometalinter was deprecated in favor of golangci-lint, it was
replaced by it. Responsibilities held by gometalinter were moved to
golangci-lint
* There were some changes in implementation between the tools (also mentioned in
a code comment), which uncovered a couple of errors
in the code. Those issues were either fixed or suppressed with
inline comments
* Introduce the golangci-lint config to make its
configuration more manageable
* Build: replace backend-lint.sh script with make
* Add LDAP config instead of using sed
* Add container name
* Add SizeLimit option to the client and to the server.
Probably useless at this point, but it's better to have it than not
* Modify backend to allow expiration of API Keys
* Add middleware test for expired api keys
* Modify frontend to enable expiration of API Keys
* Fix frontend tests
* Fix migration and add index for `expires` field
* Add api key tests for database access
* Substitute time.Now() with a mock for test usage
* Front-end modifications
* Change input label to `Time to live`
* Change input behavior to comply with other similar inputs
* Add tooltip
* Modify AddApiKey api call response
Expiration should be *time.Time instead of string
* Present expiration date in the selected timezone
* Use kbn for transforming intervals to seconds
* Use `assert` library for tests
* Frontend fixes
Add checks for empty/undefined/null values
* Change expires column from datetime to integer
* Restrict api key duration input
It should be an interval, not a number
* AddApiKey must complain if SecondsToLive is negative
* Declare ErrInvalidApiKeyExpiration
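A minimal sketch of the validation and expiration calculation, assuming the expiry is stored as a Unix timestamp in the integer `expires` column; the command struct here is a simplified stand-in for the real one:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

var ErrInvalidApiKeyExpiration = errors.New("negative value for SecondsToLive")

// timeNow is a package-level variable so tests can substitute a fixed clock
// instead of sleeping.
var timeNow = time.Now

// AddApiKeyCommand is a simplified stand-in for the real command struct.
type AddApiKeyCommand struct {
	Name          string
	SecondsToLive int64
	Expires       *int64 // Unix timestamp; nil means the key never expires
}

func addApiKey(cmd *AddApiKeyCommand) error {
	if cmd.SecondsToLive < 0 {
		return ErrInvalidApiKeyExpiration
	}
	if cmd.SecondsToLive > 0 {
		exp := timeNow().Add(time.Duration(cmd.SecondsToLive) * time.Second).Unix()
		cmd.Expires = &exp
	}
	return nil
}

func main() {
	cmd := &AddApiKeyCommand{Name: "ci", SecondsToLive: 3600}
	if err := addApiKey(cmd); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires at", *cmd.Expires)
}
```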
* Move configuration to auth section
* Update docs
* Eliminate alias for models in modified files
* Omit expiration from api response if empty
* Eliminate Goconvey from test file
* Fix test
Do not sleep, use mocked timeNow() instead
* Remove index for expires from api_key table
The index should anyway be on both the org_id and expires fields.
However, this commit eliminates the index completely for now,
since not many rows are expected to be in this table.
* Use getTimeZone function
* Minor change in api key listing
The frontend should display a message instead of an empty string
if the key does not expire.
* batch disable users
* batch revoke users tokens
* split batch disable user and revoke token
* API: get users with auth info and isExternal flag
* fix tests for batch disable users
* Users: refactor /api/users/search endpoint
* Users: use alias for "user" table
* Chore: add BatchDisableUsers() to the bus
* Users: order user list by id explicitly
* Users: return AuthModule from /api/users/:id endpoint
* Users: do not return unused fields
* Users: fix SearchUsers method after last changes
* User: return auth module as array for future purposes
* User: tests for SearchUsers()
* User: return only latest auth module in SearchUsers()
* User: fix JOIN, get only most recent auth module
* tsdb: add support for setting debug flag of tsdb query
* alerting: adds debug flag in eval context
Debug flag is set when testing an alert rule and this debug
flag is used to return more debug information in the test alert rule
response. This debug flag is also provided to tsdb queries so
datasources can optionally add support for returning additional
debug data
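A rough sketch of the idea; the structs and field names below are simplified stand-ins for the real alerting/tsdb types, not the actual Grafana code:

```go
package main

import "fmt"

// EvalContext and TsdbQuery are illustrative stand-ins.
type EvalContext struct {
	IsTestRun bool
	IsDebug   bool
}

type TsdbQuery struct {
	Debug   bool
	Queries []string
}

type QueryResult struct {
	Series []string
	// Meta carries extra debug data (e.g. the raw datasource request/response)
	// only when the query was issued with the debug flag set.
	Meta map[string]interface{}
}

func executeQuery(ctx *EvalContext, q TsdbQuery) QueryResult {
	q.Debug = ctx.IsDebug // propagate the flag set by "test rule"
	res := QueryResult{Series: []string{"A-series"}}
	if q.Debug {
		res.Meta = map[string]interface{}{"rawQuery": q.Queries}
	}
	return res
}

func main() {
	ctx := &EvalContext{IsTestRun: true, IsDebug: true}
	fmt.Printf("%+v\n", executeQuery(ctx, TsdbQuery{Queries: []string{"avg(cpu)"}}))
}
```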
* alerting: improve test alert rule ui
Adds buttons for expand/collapse json and copy json to clipboard,
very similar to how the query inspector works.
* elasticsearch: implement support for tsdb query debug flag
* elasticsearch: embedding client response in struct
* alerting: return proper query model when testing rule
* LDAP: use only one struct
* Use only models.ExternalUserInfo
* Add additional helper method :/
* Move all the helpers to one module
* LDAP: refactoring
* Rename some of the public methods and change their behaviour
* Remove outdated methods
* Simplify logic
* More tests
There were no tests (and never have been) for settings.go; added tests for the helper
methods (coverage is now about 100% for them). Added tests for the main
LDAP logic, but there is still some stuff to add. Dial() is not tested and not
decoupled. It might be a challenge to do it properly
* Restructure tests:
* so they don't depend on external modules
* more consistent naming
* logical division
* More guards for erroneous paths
* Login: make login service an explicit dependency
* LDAP: remove no longer needed test helper fns
* LDAP: remove useless import
* LDAP: Use new interface in multildap module
* LDAP: corrections for the groups of multiple users
* In case there were several users, their groups weren't detected correctly
* Simplify helpers module
* Implementation of optimistic lock pattern
Try to insert the remote cache key and handle integrity error
* Remove transaction
Integrity error inside a transaction results in deadlock
* Remove check for existing remote cache key
It is no longer needed since integrity constraint violations are handled
* Add check for integrity constraint violation
Do not update the row if the insert statement fails
for anything other than an integrity constraint violation
* Handle failing inserts because of deadlocks
If the insert statement fails because of a deadlock
try to update the row
* Add utility function for returning SQL error code
Useful for debugging
* Add logging for failing expired cache key deletion
Do not swallow it completely
* Revert "Add utility function for returning SQL error code"
This reverts commit 8e0b82c79633e7d8bc350823cbbab2ac7a58c0a5.
* Better log for failing deletion of expired cache key
* Add some comments
* Remove check for existing cache key
Attempt to insert the key without checking if it's already there
and handle the error situations
* Do not propagate deadlocks created during update
Most probably somebody else is trying to insert/update
the key at the same time, so it is safe enough to ignore it
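A minimal sketch of the overall pattern; the dialect checks and function names are stand-ins for the real `dialect.IsUniqueContraintViolation`-style helpers, not the actual Grafana code:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// cacheItem is a simplified stand-in for the remote cache row.
type cacheItem struct {
	Key       string
	Data      []byte
	ExpiresAt time.Time
}

// Assumed sentinel errors standing in for driver-specific error checks.
var errUniqueViolation = errors.New("unique constraint violation")
var errDeadlock = errors.New("deadlock detected")

func isUniqueConstraintViolation(err error) bool { return errors.Is(err, errUniqueViolation) }
func isDeadlock(err error) bool                  { return errors.Is(err, errDeadlock) }

// set implements the optimistic pattern: try the INSERT without checking for
// an existing row first; if it fails because the key is already there, fall
// back to an UPDATE. Deadlocks during the UPDATE are ignored, since another
// writer is most probably touching the same key concurrently anyway.
func set(insert, update func(cacheItem) error, item cacheItem) error {
	err := insert(item)
	if err == nil {
		return nil
	}
	if !isUniqueConstraintViolation(err) {
		return err
	}
	if err := update(item); err != nil && !isDeadlock(err) {
		return err
	}
	return nil
}

func main() {
	insert := func(cacheItem) error { return errUniqueViolation }
	update := func(cacheItem) error { return nil }
	fmt.Println(set(insert, update, cacheItem{Key: "session:42"}))
}
```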