* Pass device ID
* don't use device ID as a way of detecting
* fix spelling mistake
* update layers
* fix test
* fix linting
* save schema
* put columns in correct place
* fix linting
* update
* upgrade go change
* use props
* fix stuff
* update session tests
* address PR comments
* address PR comments
* [MM-26397] Take query size and order into account for Bleve
* Add a test to check post search pagination
* Add tests for checking limit when searching users
* Make pagination an independent test to discriminate DB engines
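For context on how query size and order enter a Bleve search, a minimal sketch (assuming the bleve library; the sort field name is an illustration, not the actual change):

```go
package bleveengine

import "github.com/blevesearch/bleve"

// searchPostsPage applies pagination and ordering to a bleve search.
func searchPostsPage(index bleve.Index, term string, page, perPage int) (*bleve.SearchResult, error) {
	req := bleve.NewSearchRequest(bleve.NewMatchQuery(term))
	req.From = page * perPage         // skip the earlier pages
	req.Size = perPage                // honor the requested page size
	req.SortBy([]string{"-CreateAt"}) // newest first; the field name is an assumption
	return index.Search(req)
}
```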
This avoids the race of assigning the sema field from the start method, which runs in a goroutine.
The race that happens is:
==================
WARNING: DATA RACE
Write at 0x00c003a9d1b0 by goroutine 67:
github.com/mattermost/mattermost-server/v5/app.(*PushNotificationsHub).start()
/home/agniva/mattermost/mattermost-server/app/notification_push.go:264 +0x90
Previous read at 0x00c003a9d1b0 by goroutine 69:
github.com/mattermost/mattermost-server/v5/app.(*Server).createPushNotificationsHub()
/home/agniva/mattermost/mattermost-server/app/notification_push.go:260 +0x1d5
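A minimal sketch of the shape of the fix, with hypothetical names and capacity: assign the semaphore before the goroutine is launched, so the write is ordered before any read inside start().

```go
package app

// hypothetical shape for illustration; the real hub has more fields
type PushNotificationsHub struct {
	sema chan struct{}
}

func (hub *PushNotificationsHub) start() {
	// only reads hub.sema; the write happened before this goroutine existed
	for range hub.sema {
	}
}

func createPushNotificationsHub() *PushNotificationsHub {
	hub := &PushNotificationsHub{
		sema: make(chan struct{}, 8), // assigned here instead of inside start()
	}
	go hub.start() // the assignment above happens-before the goroutine starts
	return hub
}
```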
We just skip the test instead of failing, because waiting for more than 5 seconds
to get a response does not make sense, and it would unnecessarily slow down
the tests further in an already congested CI environment.
* MM-26206: Add GroupMentions permissions in default channel admin
This permission was missing from the default permissions function,
which led to a bug where a permissions reset would not restore
this permission to the different roles.
* Added tests
Co-authored-by: Mattermod <mattermod@users.noreply.github.com>
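A hedged sketch of the shape of the fix (the defaults list is illustrative and the permission constant is an assumption):

```go
package app

import "github.com/mattermost/mattermost-server/v5/model"

// illustrative defaults: the group-mentions permission is now part of
// the channel_admin defaults so a permissions reset restores it.
func defaultChannelAdminPermissions() []string {
	return []string{
		model.PERMISSION_CREATE_POST.Id,
		model.PERMISSION_USE_GROUP_MENTIONS.Id, // previously missing
	}
}
```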
* Don't return an error if createDefaultChannelMemberships fails due to an email domain restriction on the team
* Add test ensuring sync works as intended and only skips over failed users
* Remove unneeded line
* Update wording
* Trigger CI
* [MM-25646] Adds the permanent delete all users endpoint to the local API
* Add a check to ensure that teams and channels are not deleted
* Fix linter
* Fix audit record name for consistency with method name
Co-authored-by: Mattermod <mattermod@users.noreply.github.com>
* Refactor of getListOfAllowedChannelsForTeam
Also, I've fixed some problematic scenarios:
- The quick search doesn't provide a team ID, so it was always failing
- When both the teamId and the view restrictions were empty, we always
returned all the channels, because `strings.Contains("foo", "")`
always returns true
- There was a case, in quick search with a guest account, where you
got an empty result because the teamId was not provided
* Error if the team ID is not passed when searching for channels
If we search for users passing the channel ID, we must pass the team ID
too, so that we avoid returning all the channels once the empty-team-ID
restriction in getListOfAllowedChannelsForTeam is removed.
There is no known reason to search for a channel without filtering
by team ID; even guest accounts belong to a team
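The strings.Contains pitfall above is easy to demonstrate, and the error-if-missing guard follows from it (a sketch; the real code returns store types, not plain errors):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// a sketch of the guard added to the channel search path
func requireTeamId(teamId string) error {
	if teamId == "" {
		return errors.New("team id is required when searching channels")
	}
	return nil
}

func main() {
	// the pitfall: an empty substring always matches, so an empty
	// teamId made every channel pass the restriction check
	fmt.Println(strings.Contains("foo", "")) // true
	fmt.Println(requireTeamId(""))           // team id is required when searching channels
}
```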
* MM-26514: Fix ALTER PRIMARY KEY migration for Postgres <9.3
We work around the lack of the LATERAL keyword by running the query separately.
* Complete full WHERE clause just to be sure
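A hedged sketch of the two-step shape, with a hypothetical table name: fetch the constraint name first (which works before Postgres 9.3), then run the ALTER with the result, instead of doing both in a single LATERAL query.

```go
package sqlstore

import (
	"database/sql"
	"fmt"
)

// dropPrimaryKey replaces what a LATERAL query could do in one
// statement on newer servers; the table name is illustrative.
func dropPrimaryKey(db *sql.DB) error {
	var pkName string
	err := db.QueryRow(`SELECT conname FROM pg_constraint
		WHERE conrelid = 'channelmembers'::regclass AND contype = 'p'`).Scan(&pkName)
	if err != nil {
		return err
	}
	_, err = db.Exec(fmt.Sprintf(`ALTER TABLE channelmembers DROP CONSTRAINT %q`, pkName))
	return err
}
```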
* api4/user: add verify user by id method
* Update api4/user.go
Co-Authored-By: Miguel de la Cruz <miguel@mcrx.me>
* Update model/client4.go
Co-Authored-By: Miguel de la Cruz <miguel@mcrx.me>
* api4/user: reflect review comments
* Update api4/user_test.go
Co-authored-by: Miguel de la Cruz <miguel@mcrx.me>
Co-authored-by: Miguel de la Cruz <miguel@mcrx.me>
Co-authored-by: mattermod <mattermod@users.noreply.github.com>
* Add two missing methods to the channel layer
* Added delete user/channel posts methods
- Created in both search engines, but only implemented in ES
- Added those methods in the search layer
- Included the PermanentDeleteByUser/Channel methods
* Two new delete-document functions are included in the bleve code with this
change:
- DeleteChannelPosts
- DeleteUserPosts
These two new functions delete post documents from the index based
on the field value provided
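A minimal sketch of the delete-by-field-value shape in bleve (paging and the exact field names used by DeleteChannelPosts/DeleteUserPosts are assumptions):

```go
package bleveengine

import "github.com/blevesearch/bleve"

// deletePostsByField removes every post document whose field matches
// value; a real implementation would page through the results.
func deletePostsByField(index bleve.Index, field, value string) error {
	q := bleve.NewTermQuery(value)
	q.SetField(field)
	req := bleve.NewSearchRequest(q)
	req.Size = 10000 // assumption: large enough for a sketch
	res, err := index.Search(req)
	if err != nil {
		return err
	}
	batch := index.NewBatch()
	for _, hit := range res.Hits {
		batch.Delete(hit.ID)
	}
	return index.Batch(batch)
}
```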
While resetting permissions, we were not removing old migration state residue
for the EMOJIS_PERMISSIONS_MIGRATION_KEY and GUEST_ROLES_CREATION_MIGRATION_KEY.
Therefore, these migrations were skipped, because the code checks
whether a migration key is already present.
To fix this, we remove the migration key just as we do for ADVANCED_PERMISSIONS_MIGRATION_KEY.
Co-authored-by: Mattermod <mattermod@users.noreply.github.com>
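A hedged sketch of the reset path; the key constants are named in the description above, and the exact store call is an assumption:

```go
package app

import (
	"github.com/mattermost/mattermost-server/v5/mlog"
	"github.com/mattermost/mattermost-server/v5/model"
)

// clearMigrationMarkers removes the migration markers so that each
// migration can run again after a permissions reset.
func (a *App) clearMigrationMarkers() {
	for _, key := range []string{
		model.ADVANCED_PERMISSIONS_MIGRATION_KEY,
		model.EMOJIS_PERMISSIONS_MIGRATION_KEY,
		model.GUEST_ROLES_CREATION_MIGRATION_KEY,
	} {
		// assumption: the system store exposes PermanentDeleteByName
		if _, err := a.Srv().Store.System().PermanentDeleteByName(key); err != nil {
			mlog.Warn("Failed to remove migration key", mlog.String("key", key), mlog.Err(err))
		}
	}
}
```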
* MM-25404: Add GetHealthScore metric for the cluster
This PR adds the necessary function to the cluster interface
so that it can be called from App code.
* Add mocks
* Remove fakeapp
Co-authored-by: Mattermod <mattermod@users.noreply.github.com>
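An illustrative sketch of the interface addition (the real cluster interface has many more methods; the method name follows the PR title, and its return semantics are an assumption):

```go
package einterfaces

// Only the new method is shown; everything else is elided.
type ClusterInterface interface {
	// GetHealthScore reports an aggregate health score for the cluster.
	GetHealthScore() int
}
```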
* MM-19548: Add a deadlock retry function for SaveChannel
A deadlock has been seen to occur in the upsertPublicChannelT method
during bulk import.
Here is a brief excerpt:
*** (1) TRANSACTION:
TRANSACTION 3141, ACTIVE 1 sec inserting
INSERT INTO
PublicChannels(Id, DeleteAt, TeamId, DisplayName, Name, Header, Purpose)
VALUES
(?, ?, ?, ?, ?, ?, ?)
ON DUPLICATE KEY UPDATE
DeleteAt = ?,
TeamId = ?,
DisplayName = ?,
Name = ?,
Header = ?,
Purpose = ?
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 76 page no 4 n bits 104 index Name of table `mydb`.`PublicChannels` trx id 3141 lock_mode X locks gap before rec insert intention waiting
*** (2) TRANSACTION:
TRANSACTION 3140, ACTIVE 1 sec inserting
mysql tables in use 1, locked 1
5 lock struct(s), heap size 1136, 3 row lock(s), undo log entries 2
MySQL thread id 50, OS thread handle 140641523848960, query id 3226 172.17.0.1 mmuser update
INSERT INTO
PublicChannels(Id, DeleteAt, TeamId, DisplayName, Name, Header, Purpose)
VALUES
(?, ?, ?, ?, ?, ?, ?)
ON DUPLICATE KEY UPDATE
DeleteAt = ?,
TeamId = ?,
DisplayName = ?,
Name = ?,
Header = ?,
Purpose = ?
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 76 page no 4 n bits 104 index Name of table `mydb`.`PublicChannels` trx id 3140 lock_mode X locks gap before rec
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 76 page no 4 n bits 104 index Name of table `mydb`.`PublicChannels` trx id 3140 lock_mode X locks gap before rec insert intention waiting
*** WE ROLL BACK TRANSACTION (1)
Following is my analysis:
From the deadlock output, it can be seen that it's due to a gap lock.
And that's clear because the index is Name, which is a multi-column index over Name and TeamId.
But interestingly, both transactions seem to be inserting the same data, which is what is puzzling me.
The multi-column index on Name and TeamId will guarantee that they are always unique. And from looking at the code,
it does not seem possible to me that it will try to insert the same data from 2 different transactions.
But even if they do, why does tx 2 try to acquire the same lock again when it already holds it?
Here is what I think is the order of events:
Tx 2 gets a gap lock.
Tx 1 tries to get the same gap lock.
Tx 2 tries to get the same gap lock again?
The last step is what is puzzling me. Why does an UPSERT statement acquire 2 gap locks? From my reading of https://dev.mysql.com/doc/refman/8.0/en/innodb-locks-set.html:
> INSERT ... ON DUPLICATE KEY UPDATE differs from a simple INSERT in that an exclusive lock rather than a shared lock is placed on the row to be updated when a duplicate-key error occurs. An exclusive index-record lock is taken for a duplicate primary key value. An exclusive next-key lock is taken for a duplicate unique key value.
From what I understand, the expectation is that one X lock and one gap lock are taken.
But that's not what the deadlock output seems to say.
The general advice on the internet seems to be that deadlocks will happen and not all of them can be understood.
For now, we add a generic deadlock retry function at the store package which can be reused by other queries too.
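A minimal sketch of such a helper, assuming the go-sql-driver error type; the names are illustrative, not the committed implementation:

```go
package sqlstore

import (
	"errors"

	"github.com/go-sql-driver/mysql"
)

const mysqlDeadlockCode = uint16(1213) // ER_LOCK_DEADLOCK

// isDeadlockError reports whether err is a MySQL deadlock (error 1213).
func isDeadlockError(err error) bool {
	var me *mysql.MySQLError
	return errors.As(err, &me) && me.Number == mysqlDeadlockCode
}

// withDeadlockRetry runs f, retrying up to maxRetries extra times when
// the returned error is a MySQL deadlock; any other outcome is
// returned immediately.
func withDeadlockRetry(maxRetries int, f func() error) error {
	var err error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		if err = f(); err == nil || !isDeadlockError(err) {
			return err
		}
	}
	return err // still deadlocked after all retries
}
```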
P.S.: This is a verbatim copy of my investigation posted at https://dba.stackexchange.com/questions/268652/mysql-deadlock-upsert-query-acquiring-gap-lock-twice
Testing:
This is of course hard to test because it is impossible to reproduce reliably. I have tested it by manually returning an error
and confirming that it indeed retries.
WARN[2020-06-19T11:18:24.9585676+05:30] A deadlock happened. Retrying. caller="sqlstore/channel_store.go:568" error="Error 1213: mydeadlock"
WARN[2020-06-19T11:18:24.959158+05:30] A deadlock happened. Retrying. caller="sqlstore/channel_store.go:568" error="Error 1213: mydeadlock"
WARN[2020-06-19T11:18:24.9595072+05:30] A deadlock happened. Retrying. caller="sqlstore/channel_store.go:568" error="Error 1213: mydeadlock"
WARN[2020-06-19T11:18:24.9595451+05:30] Deadlock happened 3 times. Giving up caller="sqlstore/channel_store.go:579"
ERRO[2020-06-19T11:18:24.9596426+05:30] Unable to save channel. caller="mlog/log.go:175" err_details="Error 1213: mydeadlock" err_where=CreateChannel http_code=500 ip_addr="::1" method=POST path=/api/v4/channels request_id=745bsj13b7f6mnmsbn3t97grbw user_id=xcof1ipipbrfxpfjf6x4p6kx9e
* Fix tests
* MM-25890: Fix deadlock on deleting emoji reactions
A deadlock happens because `UPDATE_POST_HAS_REACTIONS_ON_DELETE_QUERY` is being called from 2 separate places.
1. From `DeleteAllWithEmojiName` where it's called as an independent query.
2. From `deleteReactionAndUpdatePost` where it's called as part of a transaction along with another DELETE query.
The deadlock occurs in such a scenario:
- tx #2 acquires an X lock from the DELETE query.
- tx #1 tries to acquire an S lock with the SELECT query, but it has to wait for the X lock to be released by tx #2.
- tx #2 now tries to acquire an S lock, but it can't because it is blocked on tx #1.
Deadlock.
I have tested this and it does indeed deadlock. The root of the problem is that the primary key is a multi-column index,
which means that a next-key lock has to be acquired to get a lock on the gap before the index record.
Both queries delete some reactions and then select the new number of reactions, but they select different rows,
which is why this happens.
This might just be an unavoidable deadlock given the way the indexes are set up and how next-key locks work.
Unless we change the primary key to a single-column index, it will be very hard to avoid this.
Therefore we just go with a simple retry.
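Against the withDeadlockRetry helper sketched for MM-19548 above, the reaction path would look roughly like this (illustrative only; deleteReactionAndUpdatePost is named in the description above):

```go
err := withDeadlockRetry(3, func() error {
	return s.deleteReactionAndUpdatePost(reaction)
})
```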
* fix i18n
* address review comments
* address code review
* Fix scopelint
Co-authored-by: Mattermod <mattermod@users.noreply.github.com>
* MM-19548: Add a deadlock retry function for SaveChannel
* Fix tests
* Address review comments
* Address review comments
* Add forgotten test
Co-authored-by: Mattermod <mattermod@users.noreply.github.com>
* Ensure time is different when second update operation occurs
* Change all sleeps to one ms
Co-authored-by: Mattermod <mattermod@users.noreply.github.com>
* add ServiceProviderIdentifier to config
* Update config, add unit test
* fix unit test, update i18n
* add English translation for error
Co-authored-by: mattermod <mattermod@users.noreply.github.com>
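For the ServiceProviderIdentifier change, a hedged sketch of the config shape (the surrounding struct is heavily elided; tags, defaults, and validation are not shown):

```go
package model

// SamlSettings is elided to the new field; its exact placement is an
// assumption.
type SamlSettings struct {
	ServiceProviderIdentifier *string // the SP entity ID sent in SAML requests
}
```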