43ddf60cdf introduced a new method for dismissing new topics in topic-tracking-state, which works on a per-category basis.
This commit removes the old mechanism, which was to delete all 'new' topics from the local tracking state, regardless of category.
6e1fe22 introduced the possibility for category_users to have a NULL notification_level, so that we can store `last_seen_at` dates without locking the notification level. At the time, this did not affect the topic-tracking-state query. However, the query changes in f434de2 introduced a slight change in behavior.
Previously, a subquery would look for a category_user with notification_level=mute. f434de2 refactored this to remove the subquery, and inverted some of the logic to suit.
The new query checked for `notification_level <> :muted`. If `notification_level` is NULL, this comparison returns NULL. In this scenario, a NULL notification_level means we should fall back to the default tracking level (regular), so we want the expression to resolve as true, not false. There was already a check for the existence of the category_users row, but it did not check that the notification_level was non-NULL.
This commit amends the expression so that the notification_level will only be compared if it is non-null.
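Roughly, the amended predicate looks like the following sketch (identifiers are illustrative, not the literal production query):

```ruby
# Hedged sketch; `cu` stands in for the category_users join alias.
non_muted = <<~SQL
  (cu.notification_level IS NULL OR cu.notification_level <> :muted)
SQL
# With a bare `cu.notification_level <> :muted`, a NULL notification_level
# makes the whole comparison NULL (treated as false); the explicit IS NULL
# branch lets the expression resolve as true and fall back to the default
# tracking level.
```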
* FIX: bulk insert to create application requests (see the sketch after this list)
* FIX: bulk insert to create topics
* FIX: no need to create a separate user for each topic, post, etc.
* FIX: Another bulk_insert of ApplicationRequests
* FIX: don't create user and topic instances when not necessary
* FIX: merge examples with expensive setup into one example
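For illustration, a hedged sketch of the bulk-insert approach, assuming Rails 6's `insert_all`; the column and enum names are close to, but not guaranteed to match, the real schema:

```ruby
# Build all rows in memory, then write them with a single INSERT
# instead of dozens of individual creates.
rows = (1..50).map do |i|
  {
    req_type: ApplicationRequest.req_types[:page_view_crawler],
    date: i.days.ago.to_date,
    count: 1
  }
end
ApplicationRequest.insert_all(rows)
```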
This reverts commit 3193b0f6e6.
This is a temporary revert; we are seeing some CI failures due to this change, so I am reverting until we sort out all the problems.
This ensures that the user object is created fresh for each example.
This is required for this particular spec, as we cannot risk having a stale object, which can lead to a flaky spec.
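A sketch of the kind of change involved, assuming the spec previously used Discourse's `fab!` helper (which shares one fabricated record across the example group):

```ruby
# Before (shared across examples, can go stale):
# fab!(:user) { Fabricate(:user) }

# After (rebuilt fresh for each example):
let(:user) { Fabricate(:user) }
```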
Previously we had the ability to download a simple .gz file. The new changes mean we have a tar.gz file that needs some fiddling to get extracted correctly.
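The fiddling boils down to un-gzipping and then walking the tar archive; a minimal sketch with illustrative file names:

```ruby
require "rubygems/package"
require "zlib"

# Un-gzip the download, then scan the tar for the file we actually want.
Zlib::GzipReader.open("GeoLite2-City.tar.gz") do |gz|
  Gem::Package::TarReader.new(gz) do |tar|
    tar.each do |entry|
      next unless entry.full_name.end_with?(".mmdb")
      File.binwrite("GeoLite2-City.mmdb", entry.read)
    end
  end
end
```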
Providing invalid dates as the end_date or start_date param causes a 500 error and creates noise in the logs. This handles the error and returns a proper 400 response to the client with a message that explains what the problem is.
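A hedged sketch of the handling; `Discourse::InvalidParameters` is the app's standard way to surface a 400, and the helper name here is hypothetical:

```ruby
# Parse a date param defensively; bad input becomes a 400, not a 500.
def parse_date_param(name)
  Date.parse(params[name]) if params[name].present?
rescue ArgumentError
  raise Discourse::InvalidParameters.new(name)
end

start_date = parse_date_param(:start_date)
end_date = parse_date_param(:end_date)
```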
MaxMind now requires an account with a license key to download files.
Discourse admins can register for such an account at:
https://www.maxmind.com/en/geolite2/signup
License key generation is available in the profile section.
Once registered, you can set the license key using `DISCOURSE_MAXMIND_LICENSE_KEY`.
This amends it so we unconditionally skip MaxMind DB downloads if no license key exists.
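Sketch of the guard, assuming the env var surfaces as a `GlobalSetting` (the helper name and URL handling are illustrative):

```ruby
def mmdb_download(name)
  license_key = GlobalSetting.maxmind_license_key
  return if license_key.blank? # no key: skip the download unconditionally

  url = "https://download.maxmind.com/app/geoip_download" \
        "?license_key=#{license_key}&edition_id=#{name}&suffix=tar.gz"
  # ... download and extract ...
end
```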
Page used to jitter when oneboxes loaded images lazily.
Previously we inserted the "shadow" loading image before the "real" image.
This meant that certain styling with `firstChild` CSS selectors would apply
incorrectly to the shadow image.
Additionally, we had special-case code for onebox and quoted images that is no longer needed thanks to this fix.
We had an old fix that used the computed style for image height and width in specific scenarios; we now run it all the time.
On slow devices there was a possibility that the cache fetch, after amending src at the end of the process, would cause a flash; this is avoided using a new onload handler.
The env var `RAILS_ENABLE_TEST_LOG` didn't seem to do anything when enabled. This now sets the logger to STDOUT if `RAILS_ENABLE_TEST_LOG` is enabled, and also introduces `RAILS_TEST_LOG_LEVEL` so the log level used in the console can be provided (default: info).
Note: I am not sure if the original behaviour is expected. I can add an additional env var to enable the STDOUT logging if required.
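Roughly what this looks like in `config/environments/test.rb` (the exact wiring may differ):

```ruby
if ENV["RAILS_ENABLE_TEST_LOG"]
  # Send test logs to the console instead of log/test.log.
  config.logger = ActiveSupport::Logger.new(STDOUT)
  config.log_level = ENV["RAILS_TEST_LOG_LEVEL"].presence || :info
end
```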
This is required for people using apps with custom protocols. We still verify the entire URL (including protocol) against the site setting value.
Refactored wildcard_url_checker so that it always returns a boolean, rather than sometimes returning a regex match.
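A sketch of the refactor; `wildcard_regex` stands in for however the site setting value is compiled:

```ruby
def wildcard_url_checker(url)
  # `=~` returns a match index or nil; `!!` coerces it to true/false.
  !!(url =~ wildcard_regex)
end
```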
Many security scanners ship invalid mime types; this ensures we return a very cheap response to those clients and do not log anything. The previous attempt still re-dispatched the request to get a proper error page, but in this specific case we want no error page at all.
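A hedged sketch as a Rack middleware (the class name is hypothetical); `Mime::Type::InvalidMimeType` is what Rails raises when parsing a malformed Accept header:

```ruby
class BlockInvalidMimeTypes
  def initialize(app)
    @app = app
  end

  def call(env)
    @app.call(env)
  rescue Mime::Type::InvalidMimeType
    # Very cheap response: no error-page re-dispatch, no logging.
    [406, { "Content-Type" => "text/plain" }, ["Invalid mime type"]]
  end
end
```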
Under rare conditions, due to bad HTTP timing and so on, a draft could be set at the exact same time from two Unicorn workers.
When this happened and all the stars aligned, one of the sets would win
and the other would raise an error.
This transparently handles the situation without adding any cost to the
draft system.
The alternatives are a distributed mutex, a tricky DB transaction, or handling the error in the controller. However, this seems like a reasonable way to work around a pretty big edge case.
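A minimal sketch of the transparent handling; the real `Draft.set` logic has sequence checks that are elided here:

```ruby
begin
  Draft.create!(user_id: user.id, draft_key: key, sequence: seq, data: data)
rescue ActiveRecord::RecordNotUnique
  # The other worker's insert won the race; fall back to an update.
  Draft.where(user_id: user.id, draft_key: key, sequence: seq)
       .update_all(data: data)
end
```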
Added a fix to gracefully error with a Webauthn::SecurityKeyError if somehow a user provides an unknown COSE algorithm when logging in with a security key.
If `COSE::Algorithm.find` returns nil, we now fail gracefully and log the algorithm used along with the user ID and the security key params for debugging, as this will help us find other common algorithms to implement for WebAuthn.
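A sketch of the guard; the variable names and error message are illustrative:

```ruby
cose_algorithm = COSE::Algorithm.find(credential_cose_algorithm)
if cose_algorithm.nil?
  Rails.logger.error(
    "Unknown COSE algorithm #{credential_cose_algorithm} " \
    "(user #{user.id}, params: #{security_key_params})"
  )
  raise Webauthn::SecurityKeyError, "Unknown COSE algorithm"
end
```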
This used to work due to side effects.
`rake parallel:migrate` used to work very inconsistently and would only migrate
some of the databases.
This introduces the recommended change to db.yml so the correct database is found based on TEST_ENV_NUMBER if, for some reason, we did not set it using RAILS_DB.
This also avoids a bunch of schema dumping, which is not needed when migrating parallel specs.
DB number 1 is very odd because, for whatever reason, parallel spec does not set it.
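The recommended db.yml change is along these lines (the database name and fallback are illustrative); parallel_tests leaves TEST_ENV_NUMBER blank for the first process, which is the DB number 1 oddity above:

```yaml
test:
  # Blank TEST_ENV_NUMBER (process 1) falls back to "1".
  database: discourse_test_<%= ENV["TEST_ENV_NUMBER"].presence || "1" %>
```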
Plugins like https://github.com/discourse/discourse-calendar add extra HTML (e.g. icons) to user/group mentions. Clicking on those extra elements used to only flash a blank card. Now, the card opens properly.