* FIX: 'false' value was treated as a truthy value
For example, latest.json?no_subcategories=false used to set
no_subcategories to the string value 'false', which is truthy rather than false.
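In Ruby any non-nil, non-false object is truthy, so the raw query-string value "false" still enables the option. A minimal illustration of the bug class and one possible fix using Rails' boolean casting (the exact coercion used in Discourse may differ):
```ruby
no_subcategories = params[:no_subcategories]      # => "false" (a String)
puts "hiding subcategories" if no_subcategories   # prints, because the string "false" is truthy

# Coerce the param to a real boolean before using it.
no_subcategories = ActiveModel::Type::Boolean.new.cast(params[:no_subcategories])
puts "hiding subcategories" if no_subcategories   # no longer prints for "false"
```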
* DEV: Remove dead code
* FIX: Redirect to /none under the right conditions
These conditions are:
- neither /all nor /none is present
- only for default filter
* FIX: Build correct topic list filter
/none was never added to the topic list filter
* FIX: Do not show count for subcategories if 'none' category
* FIX: preload_key must contain /none if no_subcategories
* DEV: More robust processing of URLs
The previous `UrlHelper.encode_component(CGI.unescapeHTML(UrlHelper.unencode(uri)))` chain would naively process URLs, which could result in a badly formed response.
`Addressable::URI.normalized_encode(uri)` appears to deal with these edge-cases in a more robust way.
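A usage sketch of the two approaches (illustrative only; the surrounding helper code is omitted and the sample URL is an assumption):
```ruby
require "addressable/uri"

uri = "https://example.com/some path?q=caf%C3%A9"

# Old approach: decode, unescape HTML entities, then re-encode each component.
# UrlHelper.encode_component(CGI.unescapeHTML(UrlHelper.unencode(uri)))

# New approach: normalize and percent-encode the whole URI in one pass.
Addressable::URI.normalized_encode(uri)
```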
* DEV: onebox should use UrlHelper
* DEV: fix spec
* DEV: Escape output when rendering local links
The notification is created by a job. If the job runs before the changes are committed to the database, the notification will have an incorrect URL.
Therefore, the job should be enqueued in the enqueue_jobs method, which is triggered after the transaction:
```ruby
Topic.transaction do
  move_posts_to topic
end
add_allowed_users(participants) if participants.present? && @move_to_pm
enqueue_jobs(topic)
```
I improved the specs a little to ensure that the destination topic_id is set. However, those tests pass even without the code changes; I couldn't find an easy way to "delay" the database transaction.
Meta: https://meta.discourse.org/t/bug-with-notifications-for-moved-posts/168937
When an invite was being redeemed and the ReviewableUser record status
for the invited user was not pending, an error was raised.
This commit makes sure that we only look for a ReviewableUser
record with pending status and update it to approved.
Force pushing a commit to a theme repository used to break the updater,
because the system was not able to count the commits behind the old and
new version. This operation failed because a force push deleted the old
commits.
The user was prompted with a simple "500 server error" message.
* FIX: Store Reviewable's force_review as a boolean.
Using the `force_review` flag raises the score to hit the minimum visibility threshold. This strategy turned out to be ineffective on sites with a high number of flags, where these values could rapidly fluctuate.
This change adds a `force_review` column on the reviewables table and modifies the `Reviewable#list_for` method to show these items when passing the `status: :pending` option, even if the score is not high enough. ReviewableQueuedPosts and ReviewableUsers are always created using this option.
This commit adds an additional find_user_by_email hook to ManagedAuthenticator so that GitHub login can continue to support secondary email addresses
The github_user_infos table will be dropped in a follow-up commit.
This is the last core authenticator to be migrated to ManagedAuthenticator 🎉
It used to simply say "not allowed" without giving any hint as to what
the problem could be. This commit refactors the code and tries to improve
readability.
There is a site setting reply_by_email_enabled which when combined with reply_by_email_address creates a Reply-To header in emails in the format "test+%{reply_key}@test.com" along with a PostReplyKey record, so when replying Discourse knows where to route the reply.
However this conflicts with the IMAP implementation. Since we are sending the email for a group via SMTP and from their actual email account, we want all replies to go to that email account as well so the IMAP sync job can pick them up and put them in the correct place. So if the group has IMAP enabled and configured, then the reply-to header will be correct.
This PR also makes a further fix to 64b0b50 by using the correct recipient user for the PostReplyKey record. If the post user is used we encounter this error:
```ruby
if destination.user_id != user.id && !forwarded_reply_key?(destination, user)
  raise ReplyUserNotMatchingError, "post_reply_key.user_id => #{destination.user_id.inspect}, user.id => #{user.id.inspect}"
end
```
This is because the user above is found from the from_address, but the destination, which is the PostReplyKey, is created from post.user, and these can be different people.
* PERF: we don't need to use a huge image to test thumbnails
Generating images with 5000x5000 dimensions is an expensive operation.
Using smaller images reduces the model spec time from 11s to 3s and the integration spec time from 6s to 2s.
Resending an invite moved the expiry date into the future, but did not
revalidate it. For example, if an invite was sent to an email,
invalidated and then resent, it would still be left invalidated.
This consolidates logic used to match routes in ApiKey, UserApiKey and DefaultCurrentUserProvider. This reduces duplicated logic, and will allow UserApiKeysScope to easily re-use the parameter matching logic from ApiKeyScope
When posts or topics are deleted we don't want to immediately delete associated bookmarks, so we have a grace period to recover them and their reminders if the post or topic is un-deleted. This PR adds a task to the Weekly scheduled job to go and delete bookmarks attached to posts or topics deleted > 3 days ago.
Creates a BabelHelper builder using a default list of plugins, to ensure the transpiled code always uses the same plugins instead of different plugins in different cases.
Previously, any errors in those files would, for example, blow up the update process in docker_manager.
Now it prints out an error and proceeds as if there were no compatibility file.
Includes:
* DEV: Extract setup_git_repo
* DEV: Use `Dir.mktmpdir`
* DEV: Default to `main` branch (The latest versions of git already do this, so to avoid problems do this by default)
DEV: Replace instances of Discourse.base_uri with Discourse.base_path
This is clearer because the base_uri is actually just a path prefix. This continues the work started in 555f467.
Previously the `embed unlisted` site setting defaulted to false, and
empty topics would be generated for embeds, but those topics tend to take
up a lot of room on the topic lists.
With the new default, embedded topics are created unlisted until they
receive their first reply.
Upload.secure_media_url? raised an exception when the URL was invalid,
which was an issue in some situations where secure media URLs must be
removed.
For example, sending digests used PrettyText.strip_secure_media,
which used Upload.secure_media_url? to replace secure media with
placeholders. If the URL was invalid, then an exception would be raised
and left unhandled.
Now, UrlHelper.rails_route_from_url returns nil if there is something wrong with the URL.
Co-authored-by: Bianca Nenciu <nenciu.bianca@gmail.com>
Extracted commonly used spec helpers into spec/support/uploads_helpers.rb, removed unused stubs and let definitions. Makes it easier to write new S3-related specs without copying and pasting setup steps from other specs.
This is intended for use by plugins which are building their own topic lists, and want to include PMs alongside regular topics (e.g. discourse-assign). It does not get used directly in core.
This PR removes the user reminder topic timers, because that system has been supplanted and improved by bookmark reminders. The option is removed from the UI and all existing user reminder topic timers are migrated to bookmark reminders.
Migration does this (a rough sketch follows the list):
* Gets all topic_timers with status_type 5 (reminders)
* Gets all bookmarks where the user ID and topic ID match
* Loops through the found topic timers
* If there is no bookmark for the OP of the topic, then we just create a bookmark with a reminder
* If there is a bookmark for the OP of the topic and it does **not** have a reminder set, then just
update it with the topic timer reminder
* If there is a bookmark for the OP of the topic with a reminder then just discard the topic timer
* Cancels all outstanding user reminder topic timers
* **Trashes (not deletes) all user reminder topic timers**
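A rough sketch of those steps, with simplified model calls and assumed column handling (the real migration also deals with reminder types and job cancellation, which are not shown here):
```ruby
reminder_status_type = 5 # topic timer status_type for user reminders

TopicTimer.where(status_type: reminder_status_type, deleted_at: nil).find_each do |timer|
  op = timer.topic&.first_post
  next if op.nil?

  bookmark = Bookmark.find_by(user_id: timer.user_id, post_id: op.id)

  if bookmark.nil?
    Bookmark.create!(user_id: timer.user_id, post_id: op.id, reminder_at: timer.execute_at)
  elsif bookmark.reminder_at.blank?
    bookmark.update!(reminder_at: timer.execute_at)
  end
  # A bookmark that already has a reminder keeps it; the timer is simply discarded.

  timer.trash!
end
```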
Notes:
* For now I have left the user reminder topic timer job class in place; this is so the jobs can be cancelled in the migration. It and the specs will be deleted in the next PR.
* At a later date I will write a migration to delete all trashed user topic timers. They are not deleted here in case there are data issues and they need to be recovered.
* A future PR will change the UI of the topic timer modal to make it look more like the bookmark modal.
Currently, if a group's visibility is set to "Group owners, members" then the mods can't view those group pages. The same rule is applied for members visibility setting too.
This reverts commit 7fc7090 and fixes the failing spec tests.
We must guarantee that "rel=noopener" is set if "target=_blank" is present, which is not always the case for trusted users. Also, if the link contains the "nofollow" attribute, it has to have the "ugc" attribute as well.
If a user has always read all group messages, we will never update the
`first_pm_unread_at` column, since the previous query will not return the
group_user. Instead, we should update `first_pm_unread_at` to the
current timestamp if the user has read everything.
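A simplified sketch of that idea (not the exact query; `all_group_messages_read?` is a hypothetical helper standing in for the real check):
```ruby
group_user = GroupUser.find_by(group_id: group.id, user_id: user.id)

# If everything has been read, stamp the column instead of leaving it stale.
if group_user && all_group_messages_read?(group_user)
  group_user.update!(first_pm_unread_at: Time.zone.now)
end
```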
Follow-up to 9b75d95fc6
This commit addresses an issue where multiple topic timer jobs could be
running to close a topic, or a race condition could cause a topic that
was just closed to be re-opened.
By moving the logic from the Topic Timer model into the Topic Timer
controller endpoint, we isolate the code used for setting an
auto-open or an auto-close timer, making the
topic timer background jobs safer if multiple are running.
If, in the future, we would like this logic back in the model, a
refactor will be needed where we actually pass in the auto-close and
auto-open action instead of mixing it with the close and open
action that is currently being passed to the controller.
Like "default watching" and "default tracking" categories option now the "regular" categories support is added. It will be useful for sites that are muted by default. The user option will be displayed only if `mute_all_categories_by_default` site setting is enabled.
Themes can now declare custom colors that get compiled in core's color definitions stylesheet, thus allowing themes to better support dark/light color schemes.
For example, if you need your theme to use tertiary for an element in a light color scheme and quaternary in a dark scheme, you can add the following SCSS to your theme's `color_definitions.scss` file:
```
:root {
--mytheme-tertiary-or-quaternary: #{dark-light-choose($tertiary, $quaternary)};
}
```
And then use the `--mytheme-tertiary-or-quaternary` variable as the color property of that element. You can also use this file to add color variables that use SCSS color transformation functions (lighten, darken, saturate, etc.) without compromising your theme's compatibility with different color schemes.
There is a foreign key to user_profile that can make destroying uploads fail
if they happen to be used in a user profile.
This ensures we clear this information when destroying uploads.
There are more relationships, but this makes some more progress.
It's possible that the original topic image is broken in some form, so
we shouldn't try to generate a topic thumbnail for it. The fix
prevents the generate_topic_thumbnails job from being enqueued every time the
topic is viewed.
For sites that are configured to mute some or all categories and tags
for users by default, groups can now be configured to set members'
notification level to normal from the group manage UI.
Sometimes a parallel spec is failing with this error:
```
NoMethodError:
undefined method `data' for nil:NilClass
# ./spec/models/topic_tracking_state_spec.rb:339:in `block (4 levels) in <main>'
```
I have a theory that it might be related to instance variables in the before block.
The UI prevents users from trying to create tags on topics when they
don't have permission, but before this change, if you tried to add tags
to a topic via the API without permission, the request would
silently succeed in creating the topic without any tags.
Now a 422 error will be returned with an error message when trying to
create a topic with tags while tagging is disabled or you don't have
a high enough trust level to add tags to a topic.
Bug report: https://meta.discourse.org/t/-/70525/14
Normally, secure media urls are linked like `/secure-media-uploads/...`. In this case, uploads were already being linked correctly.
But sometimes (e.g. when pulling hotlinked onebox images) secure media is referenced with a full domain name (`//example.com/secure-media-uploads`). This commit ensures that those uploads are also linked correctly.
* FEATURE: set notification levels when added to a group
This feature allows admins and group owners to define default
category and tag tracking levels that will be applied to user
preferences automatically at the time when users are added to the
group. Users are free to change those preferences afterwards.
When removed from a group, the user's notification preferences aren't
changed.
* Fixed an issue I introduced in the last PR where everything was being archived regardless of whether it was actually archived in Discourse :man_facepalming:
* Refactor group list_mailboxes IMAP code to use providers, add specs, and add provider code to get the correct provider
* DEV: Show message when cannot invite user to PM
When inviting a user to a PM, return a message that says, "Sorry, this
user can't be invited." if they have been muted or are not in a user's
allowed PM users list.
* Minor refactor & improved some text
This changes PG text search to only match the given title against
lexemes that are formed from the title. Likewise, the given raw will
only be matched against lexemes that are formed from the post's raw.
* Added scopes UI
* Create scopes when creating a new API key
* Show scopes on the API key show route
* Apply scopes on API requests
* Extend scopes from plugins
* Add missing scopes. A mapping can be associated with multiple controller actions
* Only send scopes if the use global key option is disabled. Use the discourse plugin registry to add new scopes
* Add not null validations and index for api_key_id
* Annotate model
* DEV: Move default mappings to ApiKeyScope
* Remove unused attribute and improve UI for existing keys
* Support multiple parameters separated by a comma
It's possible through an import or other means to have images larger
than the current max allowed image size in the db.
If this happens, the thumbnail generation job will keep running
indefinitely, trying to download a new copy of the original but
discarding it because it is larger than max_file_size, eventually
causing this error, because the newly downloaded image is nil:
`Job exception: undefined method `path' for nil:NilClass`
This fix stops the enqueuing of the `GenerateTopicThumbnails` job for
all images that happen to be larger than the max image size.
If any value, including nil, is passed in as an argument, the default
won't be set, so we need to handle the case where a non-Array value is
passed in to the `generate_thumbnails!` method.
`OptimizedImage#filesize` calls `Discourse.store.download` with an OptimizedImage as an argument. It would in turn attempt to call `#original_filename` and `#secure?` on that object. Both would fail as these methods do not exist on OptimizedImage, only on Upload. We didn't know about these issues because:
1. `#calculate_filesize` is not called often, because the filesize is saved on OptimizedImage creation, so it's used mostly for manual filesize recalculation
2. we were using `rescue nil` which swallows all errors
When rebaking a post we were invalidating _regular_ oneboxes but not inline oneboxes.
DEV: also renamed 'InlineOneboxer.purge' to 'InlineOneboxer.invalidate' to keep
the API consistent with 'Oneboxer.invalidate'
#b19dcac2 improved the serializer so it sends default notification
levels to users, to work around cases where a category edit
would result in clients being left with invalid notification state.
Unfortunately this did not address the root issue.
When we edit categories we publish state to multiple users, which
means that the serializer is executed unscoped, with no user.
The client already handles this case per:
dcad720a4c/app/assets/javascripts/discourse/app/models/site.js (L119-L119)
If a property is not shipped to it, it will leave it alone on the
existing category.
This fix ensures that these wide category info updates do not
include notification state to avoid corruption of local state.
If `default email digest frequency` was set to "Never", users would get
a `digest_after_minutes` set to `nil` which triggered this error
in the logs if/when the site eventually changed that setting and
enabled digests:
```
NoMethodError (undefined method `>=' for nil:NilClass)
/var/www/discourse/app/mailers/user_notifications.rb:227:in `digest'
```
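A minimal nil-guard sketch of the kind of fix involved (not the exact Discourse code; the skip-on-nil behaviour shown here is an assumption):
```ruby
minutes = user.user_option&.digest_after_minutes
return if minutes.nil? || minutes <= 0                     # digests effectively disabled
return if user.last_emailed_at && user.last_emailed_at >= minutes.minutes.ago
```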
Hostname can vary per-site on a multisite cluster, so this change requires converting the compiler_version from a constant into a class method which is evaluated at runtime. The value is stored in the theme DistributedCache, so performance impact should be negligible.
We previously did not account for completely untagged topics when
looking at muted tags; this caused new/unread counts to be off if:
1. You had muted tags
2. You had an unread/new topic
3. This topic had no tags
This feature allows certain plugins to output tag information
to topic tracking state, enabling per-tag stats which can be
used by sidebars and other plugins that need them.
It is not enabled by default because it would add cost to a critical
query.
Previously we had a partial fix in place where non-human users
were not allowed draft sequences; this left edge cases where non-human
users asked for drafts yet had none.
For example, the system user could already have a few drafts in place.
This also removes an extensibility point we added that is not in use.
This reverts commit 20780a1eee.
* SECURITY: re-adds accidentally reverted commit:
03d26cd6: ensure embed_url contains valid http(s) uri
* when the merge commit e62a85cf was reverted, git chose the 2660c2e2 parent to land on
instead of the 03d26cd6 parent (which contains security fixes)
* We now have a site setting "topic_excerpt_maxlength" that is used when the OP is created or revised to generate a topic excerpt.
* However, posts created before this setting was introduced cannot benefit from this change unless they are revised, and if the topic excerpt length setting is changed that situation is also not covered.
* This PR makes a change to rebake! to update the topic excerpt IF the post is the OP.
Adds a new topic_excerpt_maxlength site setting.
* When topic excerpt is requested for a post, use the new topic_excerpt_maxlength site setting to limit the size of the excerpt
* Remove code for getting/setting Post.excerpt_size as it is not used anywhere
* FIX: randomly failing user_spec
When we evaluate `update_last_seen!` we rely on Redis to not run that code too often
https://github.com/discourse/discourse/blob/master/app/models/user.rb#L753
The problem is that not all specs which run `update_last_seen!` are cleaning up after themselves
For example, the specs in this block https://github.com/discourse/discourse/blob/master/spec/models/user_spec.rb#L901
So it can be replicated when you run this a few times:
`bundle exec rspec ./spec/models/user_spec.rb -e "should not update the first seen value if it doesn't exist" -e "should have 0 for days_visited"`
We should delete the Redis key after each spec that evaluates `update_last_seen!`
Previously the code was very prone to race conditions, leading to
odd failures in production.
It was re-written in raw SQL to avoid conditions where rows
conflict on inserts.
There is no clean way in ActiveRecord to do:
insert, on conflict do nothing, and return the existing id.
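A hedged sketch of that pattern, assuming Discourse's MiniSql `DB` helper and a hypothetical set of columns; the real query differs:
```ruby
sql = <<~SQL
  INSERT INTO topic_links (topic_id, post_id, url, created_at, updated_at)
  VALUES (:topic_id, :post_id, :url, now(), now())
  ON CONFLICT DO NOTHING
  RETURNING id
SQL

id = DB.query_single(sql, topic_id: topic.id, post_id: post.id, url: url).first

# RETURNING yields nothing when the row already existed, so fall back to a lookup.
id ||= DB.query_single(
  "SELECT id FROM topic_links WHERE post_id = :post_id AND url = :url",
  post_id: post.id, url: url
).first
```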
This also increases test coverage, we were previously not testing
the code responsible for crawling external sites directly
We were getting errors like this in Reviewables in some cases:
```
ActiveRecord::StatementInvalid (PG::AmbiguousColumn: ERROR: column reference "category_id" is ambiguous
LINE 4: ...TRUE) OR (reviewable_by_group_id IN (NULL))) AND (category_i...
```
The problem that was making everything go boom is that plugins can add their own custom filters for Reviewables. If one is doing an INNER JOIN on topics, which has its own category_id column, we would get the above AmbiguousColumn error. The solution here is to prefix all references to the reviewable columns in the list_for and viewable_by code with the table name, e.g. reviewables.category_id.
Previously we only changed the sequence on ownership change; this
caused a race condition between tabs where a user could type for a
long time without being warned of an out-of-date draft.
This change is a radical change and we should watch closely.
Code was already in place to track sequence on the client so no
changes are needed there.
Locale files get precompiled after deployment and they contained translations from the `default_locale`. That's especially bad in multisites, because the initial `default_locale` is `en_US`. Sites where the `default_locale` isn't `en_US` could see missing translations. The same thing could happen when users are allowed to choose a different locale.
This change simplifies the logic by not using the `default_locale` in the locale chain. It always falls back to `en` in case of missing translations.
We were sharing `Discourse` both as an application object and a
namespace which complicated things for Ember CLI. This patch
moves raw templates into `__DISCOURSE_RAW_TEMPLATES` and adds
a couple helper methods to create/remove them.
This introduces new APIs for obtaining optimized thumbnails for topics. There are a few building blocks required for this:
- Introduces new `image_upload_id` columns on the `posts` and `topics` table. This replaces the old `image_url` column, which means that thumbnails are now restricted to uploads. Hotlinked thumbnails are no longer possible. In normal use (with pull_hotlinked_images enabled), this has no noticeable impact
- A migration attempts to match existing urls to upload records. If a match cannot be found then the posts will be queued for rebake
- Optimized thumbnails are generated during post_process_cooked. If thumbnails are missing when serializing a topic list, then a sidekiq job is queued
- Topic lists and topics now include a `thumbnails` key, which includes all the available images:
```
"thumbnails": [
{
"max_width": null,
"max_height": null,
"url": "//example.com/original-image.png",
"width": 1380,
"height": 1840
},
{
"max_width": 1024,
"max_height": 1024,
"url": "//example.com/optimized-image.png",
"width": 768,
"height": 1024
}
]
```
- Themes can request additional thumbnail sizes by using a modifier in their `about.json` file:
```
"modifiers": {
"topic_thumbnail_sizes": [
[200, 200],
[800, 800]
],
...
```
Remember that these are generated asynchronously, so your theme should include logic to fall back to other available thumbnails if your requested size has not yet been generated
- Two new raw plugin outlets are introduced, to improve the customisability of the topic list. `topic-list-before-columns` and `topic-list-before-link`
Recently, we added a feature that sends `/muted` to users who muted a specific topic just before `/latest`, so the client knows to ignore those messages - https://github.com/discourse/discourse/pull/9482
The same `/muted` message should be included when the post is edited
Previously we relied entirely on levenshtein_distance_spammer_emails site
setting to handle "similar looking" emails.
This commit improves the situation by always preferring to block (and check)
canonical emails.
This means that if
`samevil+test@domain.com` is blocked, the system will block `samevil@domain.com`,
which in turn means that `samevil+2@domain.com` (and so on, ad infinitum) will be blocked.
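A rough illustration of what "canonical email" means here (the actual normalization rules in core may differ):
```ruby
def canonical_email(email)
  local, domain = email.downcase.split("@", 2)
  local = local.split("+", 2).first                 # drop plus-addressing: samevil+test -> samevil
  local = local.delete(".") if %w[gmail.com googlemail.com].include?(domain)
  "#{local}@#{domain}"
end

canonical_email("samevil+test@domain.com") # => "samevil@domain.com"
canonical_email("samevil+2@domain.com")    # => "samevil@domain.com" (blocked as well)
```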
* When copying the markdown for an image between posts, we were not adding the srcset and data-small-image attributes which are done by calling optimize_image! in cooked post processor
* Refactored the code, which was confusing in its current state (the consider_for_reuse method was especially confusing), and fixed the issue
* FEATURE: don't display new/unread notification for muted topics
Currently, even if a user mutes a topic, when a new reply to that topic arrives the user will get the "See 1 new or updated topic" message. After clicking that link, nothing is visible (because the topic is muted).
To solve that problem, we send a background message to all users who recently muted that topic, so they know an update is coming and can ignore the next message about that topic.
Within 24 hours of signing up, new users were losing their
default trust level of 3. With this fix, demotions from
trust level 3 won't happen when the "default trust level"
setting is 3 or 4.
If the feature is enabled, staff members can construct a URL and publish a
topic for others to browse without the regular Discourse chrome.
This is useful if you want to use Discourse like a CMS and publish
topics as articles, which can then be embedded into other systems.
It wasn't intended that people should be able to earn trust level
3 without participating in public topics. When counting topic
views and likes given/received, don't count private topics.
Users noticed that sometimes an avatar from Gravatar is not correctly updated - https://meta.discourse.org/t/updated-image-on-gravatar-not-seeing-it-update-on-site/54357
A potential reason is that even if you update your avatar on Gravatar, the URL stays the same, and if a cache is involved, the service is still receiving the old photo.
For example, in my case, when I click the button to refresh the avatar, a
new Upload record is created with its `origin` URL pointing to the new avatar and `url` pointing to the
old one.
I ran some tests in the rails console; adding a random param to the Gravatar URL bypasses the cache and the correct, newest avatar is downloaded.
Previously all topic posters would be added which could lead to major performance issues. Now if there are too many posters, only the acting user will be added as a participant.
Introduce the concept of "high priority notifications" which include PM and bookmark reminder notifications. Now bookmark reminder notifications act in the same way as PM notifications (float to top of recent list, show in the green bubble) and most instances of unread_private_messages in the UI have been replaced with unread_high_priority_notifications.
The user email digest is changed to just have a section about unread high priority notifications, the unread PM section has been removed.
A high_priority boolean column has been added to the Notification table and relevant indices added to account for it.
unread_private_messages has been kept on the User model purely for backwards compat, but now just returns unread_high_priority_notifications count so this may cause some inconsistencies in the UI.
* FEATURE: add setting `auto_approve_email_domains` to auto approve users
This commit adds a new site setting `auto_approve_email_domains` to
auto approve users based on their email address domain.
Note that if a domain already exists in `email_domains_whitelist` then
`auto_approve_email_domains` needs to be duplicated there as well,
since users won't be able to register with an email address that is
not allowed in `email_domains_whitelist`.
* Update config/locales/server.en.yml
Co-Authored-By: Robin Ward <robin.ward@gmail.com>
This adds rubocop-rspec, and enables some cops that were either already passing or are passing now, after fixing them in this commit.
Some new cops are disabled for now, with annotation: "TODO" or "To be decided". Those either need to be discussed first, or require manual changes, or the number of found and fixed offenses is too large to bundle them up in a single PR.
Includes:
* DEV: Update rubocop's `TargetRubyVersion` to 2.6
* DEV: Enable RSpec/VoidExpect
* DEV: Enable RSpec/SharedContext
* DEV: Enable RSpec/EmptyExampleGroup (Removed an obsolete empty spec file)
* DEV: Enable RSpec/ItBehavesLike
* DEV: Remove RSpec/ScatteredLet (It's too strict, as it doesn't recognize fab! as a let-like)
* DEV: Remove RSpec/MultipleExpectations
If the “secure media” site setting is enabled then ALL files uploaded to Discourse (images, video, audio, pdf, txt, zip etc. etc.) will follow the secure media rules. The “prevent anons from downloading files” setting will no longer have any bearing on upload security. Basically, the feature will more appropriately be called “secure uploads” instead of “secure media”.
This is being done because there are communities out there that would like all attachments and media to be secure based on category rules but still allow anonymous users to download attachments in public places, which is not possible in the current arrangement.
A new `duration` attribute is introduced for the `set_or_create_timer` method in commit aad12822b7 for "based on last post" and "auto delete replies" topic timers.
* DEV: Replace User.unstage and User#unstage API with User#unstage!
Quoting @SamSaffron:
> User.unstage mixes concerns of both unstaging users and updating params which is fragile/surprising.
> u.unstage destroys notifications and raises a user_unstaged event prior to the user becoming unstaged and the user object being saved.
User#unstage! no longer updates user attributes and saves the object before triggering the `user_unstaged` event.
* Update one more spec
* Assign attributes after unstaging
Having a tag be a member of a tag group and the group's parent tag at
the same time causes some unexpected behavior. When a tag is assigned
as the parent, remove it from the group.
Adds 3 config values that allow setting a custom provider of a Gravatar-like API, accessible from gravatar_base_url. The gravatar_name is purely cosmetic, but helps associate a name with the service that actually provides the avatars. gravatar_login_url is a link relative to gravatar_base_url, which provides the user with a login to the Gravatar-like service.
* This PR implements the scheduling and notification system for bookmark reminders. Every 5 minutes a schedule runs to check any reminders that need to be sent before now, limited to **300** reminders at a time. Any leftover reminders will be sent in the next run. This is to avoid having to deal with fickle sidekiq and reminders in the far-flung future, which would necessitate having a background job anyway to clean up any missing `enqueue_at` reminders.
* If a reminder is sent its `reminder_at` time is cleared and the `reminder_last_sent_at` time is filled in. Notifications are only user-level notifications for now.
* All JavaScript and frontend code related to displaying the bookmark reminder notification is contained here. The reminder functionality is now re-enabled in the bookmark modal as well.
* This PR also implements the "Remind me next time I am at my desktop" bookmark reminder functionality. When the user is on a mobile device they are able to select this option. When they choose this option we set a key in Redis saying they have a pending at desktop reminder. The next time they change devices we check if the new device is desktop, and if it is we send reminders using a DistributedMutex. There is also a job to ensure consistency of these reminders in Redis (in case Redis drops the ball) and the at desktop reminders expire after 20 days.
* Also in this PR is a fix to delete all Bookmarks for a user via `UserDestroyer`
There are three modifiers:
- serialize_topic_excerpts (boolean)
- csp_extensions (array of strings)
- svg_icons (array of strings)
When multiple themes are active, the values will be combined. The combination method varies based on the setting. CSP/SVG arrays will be combined. serialize_topic_excerpts will use `Enumerable#any?`.
Introduces a new site setting `max_notifications_per_user`.
Out-of-the-box this is set to 10,000. If a user exceeds this number of
notifications, we will delete the oldest notifications keeping only 10,000.
To disable this safeguard set the setting to 0.
Enforcement happens weekly.
This is in place to protect the system from pathological states where a
single user has enormous amounts of notifications causing various queries
to time out. In practice nobody looks back more than a few hundred notifications.
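A simplified sketch of the weekly enforcement (not the exact job; the query in core is more careful about batching):
```ruby
limit = SiteSetting.max_notifications_per_user
if limit > 0
  # Keep the newest `limit` notifications for the user and delete the rest.
  keep = Notification.where(user_id: user.id).order(created_at: :desc).limit(limit).select(:id)
  Notification.where(user_id: user.id).where.not(id: keep).delete_all
end
```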
* Attachments (non-media files) were being marked as secure if just
SiteSetting.prevent_anons_from_downloading_files was enabled. This
was not correct, as nothing should be marked as actually "secure" in
the DB unless the secure media site setting is enabled
* Also add a proper standalone spec file for the upload security class
If a group mention could be notified on preview it was given an `<a>`
tag with the `.notify` class. When cooked it would display differently.
This patch makes the server side cooking match the client preview.
Following on from my earlier PR #8973, this also rejects an upload as secure if its origin URL contains images/emoji. We still check Emoji.all first to try to be canonical.
This may be a little heavy handed (e.g. if an external URL followed this same path it would be a false positive), but there are a lot of emoji aliases where the actual emoji URL is one thing and another image that should not be secure is an alias for it. For example slight_smile.png does not show up in Emoji.all BUT slightly_smiling_face does, and it aliases slight_smile, e.g. /images/emoji/twitter/slight_smile.png?v=9 and /images/emoji/twitter/slightly_smiling_face.png?v=9 are equivalent.
Sometimes PullHotlinkedImages pulls down a site emoji and creates a new upload record for it. In the cases where this happens, the upload is not created via the normal path that custom emoji follow, so we need to check in UploadSecurity whether the origin of the upload is based on a regular site emoji. If it is, we never want to mark it as secure (we don't want emoji inaccessible from other posts because of secure media).
This only became apparent because the uploads:ensure_correct_acl rake task uses UploadSecurity to check whether an upload should be secure, which would have marked a whole bunch of regular-old-emojis as secure.
* Because custom emoji count as post "uploads" we were
marking them as secure when updating the secure status for post uploads.
* We were also giving them an access control post id, which meant
broken image previews from 403 errors in the admin custom emoji list.
* We now check if an upload is used as a custom emoji and do not
assign the access control post + never mark as secure.
This is not used in core or official plugins, and has been printing a deprecation notice since v2.3.0beta4. All OpenID 2.0 code and dependencies have been dropped. The user_open_ids table remains for now, in case anyone has missed the deprecation notice, and needs to migrate their data.
Context at https://meta.discourse.org/t/-/113249
Previously you'd get a server side generic error due to a password check
failing. Now the input element has a maxlength attribute and the server
side will respond with a nicer error message if the value is too long.
If our reply tree somehow ends up with cycles or other odd
structures, we only want to consider a reply once, at the first
level in the tree that it appears.
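A minimal sketch of the de-duplication idea (the data shape is an assumption, not the actual Discourse structure): a breadth-first walk means the first, shallowest occurrence of a reply wins.
```ruby
require "set"

# tree: an array of [post_id, children] pairs, where children has the same shape.
def first_occurrence_only(tree)
  seen = Set.new
  queue = tree.dup
  result = []
  until queue.empty?
    post_id, children = queue.shift
    next if seen.include?(post_id)   # already counted at a shallower level
    seen << post_id
    result << post_id
    queue.concat(Array(children))
  end
  result
end
```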
* DEV: Add data-notification-level attribute to category UI
* Show muted categories on the category page by default
This reverts commit ed9c21e42c.
* Remove redundant spec - muted categories are now visible by default
It seems in some situations replies have been moved to other topics but
the `PostReply` table has not been updated. I will try and fix this in a
follow up PR, but for now this fix ensures that every time we ask a post
for its replies that we restrict it to the same topic.
* the spec checking whether the upload security status changes when a
topic switches between PM and public
is already covered extensively in topic_upload_security_manager_spec.rb
Basically, say you had already downloaded a certain image from a certain URL
using pull_hotlinked_images and the onebox. The upload would be stored
by its sha1 as an upload record. Whenever you linked to the same URL again
in a post (e.g. in our case an og:image on review.discourse) we would
reuse the original upload record because of the sha1.
However when you turned on secure media this could cause problems as
the first post that uses that upload after secure media is enabled
will set the access control post for the upload to the new post.
Then if the post is deleted every single onebox/link to that same image
URL will fail forever with 403 as the secure-media-uploads URL fails
if the access control post has been deleted.
To fix this, when cooking posts and pulling hotlinked images, we only
allow using an original upload by URL if its access control post
matches the current post, and if the original_sha1 is filled in,
meaning it was uploaded AFTER secure media was enabled. Otherwise
we just redownload the media again to be safe, as the URL will always
be new then.
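A condensed sketch of that rule (method and attribute names mirror the description above, but this is not the literal implementation):
```ruby
def can_reuse_existing_upload?(upload, post)
  return true unless SiteSetting.secure_media
  # Only reuse when the upload is controlled by this post and was created
  # after secure media was enabled (original_sha1 is present).
  upload.access_control_post_id == post.id && upload.original_sha1.present?
end
```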
Regression was created here:
https://github.com/discourse/discourse/pull/8750
When a tag or category is added and the user is watching that category/tag,
we changed the notification type to `edited` instead of `new post`.
However, the logic here should be a little bit more sophisticated.
If the user has already seen the post, the notification should be `edited`.
However, when the user hasn't yet seen the post, the notification should be "new
reply". A case for that is when, for example, a topic is in a private
category and set for publishing later. In that case, we modify an
existing topic, but for the user it is like a new post.
Discussion on meta:
https://meta.discourse.org/t/publication-of-timed-topics-dont-trigger-new-topic-notifications/139335/13
Add TopicUploadSecurityManager to handle post moves. When a post moves around or a topic changes between categories and public/private message status the uploads connected to posts in the topic need to have their secure status updated, depending on the security context the topic now lives in.
There is a feature that when a tag or category is added to a topic,
users who are watching that category or tag are notified.
The problem is that it uses the default notification type "new post".
It would be better to use "new post" only when there really is a new
post and "edited" when categories or tags were modified.
### General Changes and Duplication
* We now consider a post `with_secure_media?` if it is in a read-restricted category.
* When uploading we now set an upload's secure status straight away.
* When uploading if `SiteSetting.secure_media` is enabled, we do not check to see if the upload already exists using the `sha1` digest of the upload. The `sha1` column of the upload is filled with a `SecureRandom.hex(20)` value which is the same length as `Upload::SHA1_LENGTH`. The `original_sha1` column is filled with the _real_ sha1 digest of the file.
* Whether an upload `should_be_secure?` is now determined by whether the `access_control_post` is `with_secure_media?` (if there is no access control post then we leave the secure status as is).
* When serializing the upload, we now cook the URL if the upload is secure. This is so it shows up correctly in the composer preview, because we set secure status on upload.
### Viewing Secure Media
* The secure-media-upload URL will take the post that the upload is attached to into account via `Guardian.can_see?` for access permissions
* If there is no `access_control_post` then we just deliver the media. This should be a rare occurrence and shouldn't cause issues as the `access_control_post` is set when `link_post_uploads` is called via `CookedPostProcessor`
### Removed
We no longer do any of these because we do not reuse uploads by sha1 if secure media is enabled.
* We no longer have a way to prevent cross-posting of a secure upload from a private context to a public context.
* We no longer have to set `secure: false` for uploads when uploading for a theme component.
6e1fe22 introduced the possibility for category_users to have a NULL notification_level, so that we can store `last_seen_at` dates without locking the notification level. At the time, this did not affect the topic-tracking-state query. However, the query changes in f434de2 introduced a slight change in behavior.
Previously, a subquery would look for a category_user with notification_level=mute. f434de2 refactored this to remove the subquery, and inverted some of the logic to suit.
The new query checked for `notification_level <> :muted`. If `notification_level` is NULL, this comparison will return NULL. In this scenario, notification_level=NULL means that we should fall back to the default tracking level (regular), and so we want the expression to resolve as true, not false. There was already a check for the existence of the category_users row, but it did not check for the existence of a NOT NULL notification_level.
This commit amends the expression so that the notification_level will only be compared if it is non-null.
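A sketch of the fixed predicate as a SQL fragment (simplified from the real tracking-state query; the enum lookup is an assumption):
```ruby
muted = CategoryUser.notification_levels[:muted]

# No row, or a NULL level, means "fall back to the default (regular)", so it must not filter the topic out.
not_muted_sql = <<~SQL
  category_users.id IS NULL
  OR category_users.notification_level IS NULL
  OR category_users.notification_level <> #{muted}
SQL
```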
* FIX: bulk insert to create application requests
* FIX: bulk insert to create topics
* FIX: no need to create separate user for each topic, post etc.
* FIX: Another bulk_insert of ApplicationRequests
* FIX: don't create user and topic instances when not necessary
* FIX: merge examples with expensive setup into one example
Previously, categories without any topics were being excluded from the UPDATE query. This means the counter gets stuck, and the category cannot be deleted. This change ensures that the counters get correctly set to zero.
Restore tl3 reachability by calculating penalty counts vs unsuspends/unsilences.
Only count unsuspends or unsilences that have been made by a user other than
the Discourse system user.
API keys are now only visible when first created. After that, only the first four characters are stored in the database for identification, along with an sha256 hash of the full key. This makes key usage easier to audit, and ensures attackers would not have access to the live site in the event of a database leak.
This makes the merge lower risk, because we have some time to revert if needed. Once the change is confirmed to be working, we will add a second commit to drop the `key` column.
We like to stay as close as possible to the latest rubocop because the cops
keep getting better.
This update required some code changes; specifically, the default is to avoid
explicit returns where an implicit return will do.
This also renames a few rules.
When a tag is muted and a topic contains that tag, we should not mark that topic as NEW.
There are 3 possible values which the site admin can set (a simplified sketch of the logic follows the list):
remove_muted_tags_from_latest - always
If the topic has at least one muted tag, we should not mark that topic as NEW.
remove_muted_tags_from_latest - only muted
Similar to the above; however, if at least one tag is not muted, the topic is marked as NEW.
remove_muted_tags_from_latest - never
The muted tag setting is ignored and all such topics are marked as NEW.
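A simplified sketch of that decision, with the setting values paraphrased from the descriptions above (not the exact core implementation):
```ruby
def mark_as_new?(topic_tag_ids, muted_tag_ids)
  case SiteSetting.remove_muted_tags_from_latest
  when "always"     # any muted tag keeps the topic out of NEW
    (topic_tag_ids & muted_tag_ids).empty?
  when "only_muted" # NEW unless every tag on the topic is muted
    topic_tag_ids.empty? || (topic_tag_ids - muted_tag_ids).any?
  else              # "never": muted tags are ignored
    true
  end
end
```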
Previously it was unclear why certain gems were being held back, because the Gemfile
had no comment explaining it.
I tried to add some explanation from memory and remove some exceptions that
seemed to be superfluous.
This upgrades shoulda to the latest version; it appears to work once a couple of assertions
are removed.
Also updates http_accept_language, used to auto-detect language from the HTTP header;
this is tested.
The small Zeitwerk update seems fine.
This feature adds the ability to define synonyms for tags, and the ability to merge one tag into another while keeping it as a synonym. For example, tags named "js" and "java-script" can be synonyms of "javascript". When searching and creating topics using synonyms, they will be mapped to the base tag.
Along with this change is a new UI found on each tag's page (for example, `/tags/javascript`) where more information about the tag can be shown. It will list the synonyms, which categories it's restricted to (if any), and which tag groups it belongs to (if tag group names are public on the `/tags` page by enabling the "tags listed by group" setting). Staff users will be able to manage tags in this UI, merge tags, and add/remove synonyms.
* FEATURE: Ability to add components to all themes
This is the first, functional step from this topic: https://dev.discourse.org/t/adding-a-theme-component-is-too-much-work/15398/16
The idea here is that when a new component is added, the user can easily assign it to all themes (parents).
To achieve that, I needed to change the site-setting component to accept a `setDefaultValues` action and a `setDefaultValuesLabel` translated label.
I also needed to add an `allowAny` option to disable that for the theme selector.
I also refactored the backend to accept both parent and child ids with one method to avoid duplication (renamed `add_child_theme!` to the more general `add_relative_theme!`)
* FIX: Improvement after code review
* FIX: Improvement after code review2
* FIX: use mapBy and filterBy directly
When uploading a file to a theme component, if that file already exists and has already been marked as secure, we now automatically mark the file as secure: false, change the ACL, and log the action as the user (and also rebake the posts for the upload)
PG 12 changes internals in a subtle way: time jitter is noticed in a few new
spots (which is normal), and default ordering is a bit different, which is meant
to be random anyway.
* Add timezone to user_options table
* Also migrate existing timezone values from UserCustomField,
which is where the discourse-calendar plugin is storing them
* Allow user to change their core timezone from Profile
* Auto guess & set timezone on login & invite accept & signup
* Serialize user_options.timezone for group members. This is so discourse-group-timezones can access the core user timezone, as it is being removed in discourse-calendar.
* Annotate user_option with timezone
* Validate timezone values
This PR introduces a new secure media setting. When enabled, it prevents unauthorized access to media uploads (files of type image, video and audio). When the `login_required` setting is enabled, all media uploads will be protected from unauthorized (anonymous) access. When `login_required` is disabled, only media in private messages will be protected from unauthorized access.
A few notes:
- the `prevent_anons_from_downloading_files` setting no longer applies to audio and video uploads
- the `secure_media` setting can only be enabled if S3 uploads are already enabled and configured
- upload records have a new column, `secure`, which is a boolean `true/false` of the upload's secure status
- when creating a public post with an upload that has already been uploaded and is marked as secure, the post creator will raise an error
- when enabling or disabling the setting on a site with existing uploads, the rake task `uploads:ensure_correct_acl` should be used to update all uploads' secure status and their ACL on S3
Previously people were not consistent about mocking which left internals in
a fragile state when running subfolder specs.
This introduces a simple helper `set_subfolder` which you can use to set
the subfolder for the spec. It takes care of proper configuration of subfolder
and teardown.
```
# usage
set_subfolder "/my_amazing_subfolder"
```
You should no longer stub base_uri or global_settings