This patch removes two freedom patches:
- `mail_disable_starttls.rb`: this has been fixed in the 2.8 release of
the mail gem, so we don’t need it anymore.
- `rails4.rb`: those methods have been deprecated for a while now and
should have been dropped with Discourse v3.2.
In development mode, when a developer's email is configured in
`Rails.configuration.developer_emails`, the user can be trusted and
should not be required to be an admin user.
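A minimal sketch of the kind of check this implies, assuming a guardian-style helper (the method name is illustrative; only `Rails.configuration.developer_emails` comes from the commit):
```ruby
# Hypothetical helper: trust a configured developer in development mode.
def trusted_developer?(user)
  Rails.env.development? &&
    Rails.configuration.developer_emails.include?(user.email)
end
```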
* System user attachment size WIP
* spec check
* controller update
* add max to system_user_max_attachment_size_kb
* DEV: update to use static method for `max_attachment_size_for_user` (see the sketch after this list)
add test using a large image.
add check for failure.
* DEV: update `system_user_max_attachment_size_kb` default value to 0
remove unnecessary test.
update tests to reflect the new default value of `system_user_max_attachment_size_kb`
* DEV: update maximum_file_size to check when an attachment is made by a system user
Add tests for when `system_user_max_attachment_size_kb` is over and under the limit
Add test for checking interaction with `max_attachment_size_kb`
* DEV: move `max_attachment_size_for_user` to private methods
* DEV: turn `max_attachment_size_for_user` into a static method
* DEV: typo in test case
* DEV: move max_attachment_size_for_user to private class method
* Revert "DEV: move max_attachment_size_for_user to private class method"
This reverts commit 5d5ae0b715.
---------
Co-authored-by: Gabriel Grubba <gabriel@discourse.org>
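A rough sketch of the static method described in these commits, assuming the 0 default disables the override and the `max` interaction mentioned above (everything except the two site settings is illustrative):
```ruby
# Sketch only; semantics inferred from the commit messages above.
def self.max_attachment_size_for_user(user)
  system_limit = SiteSetting.system_user_max_attachment_size_kb
  if user.id == Discourse::SYSTEM_USER_ID && system_limit > 0
    [system_limit, SiteSetting.max_attachment_size_kb].max
  else
    SiteSetting.max_attachment_size_kb
  end
end
```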
* DEV: Removal of create_post_for_category_and_tag_changes setting
reverting commit: #65f35e1
and adding a migration to remove the setting
ref: t/132320
* DEV: change checks for zeros to check for nils
* DEV: remove create_post_for_category_and_tag_changes migration file
If anything goes wrong, we can always revert back to the previous state.
This commit updates `Demon::Base#start` to call `Discourse.before_fork`
before forking. According to the docs in `mini_racer`, we need to
"Dispose manually of all MiniRacer::Context objects prior to forking".
This commit is motivated by a segmentation fault which we are seeing in
production when killing a daemon process. Backtrace of the core dump
includes traces of `mini_racer` so we think this is the cause. Note that
we are not 100% sure if this will fix the issue.
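A minimal sketch of the ordering this implies (`Demon::Base#start` and `Discourse.before_fork` are named above; the body is illustrative):
```ruby
# Dispose of MiniRacer::Context objects before forking, per the mini_racer docs.
def start
  Discourse.before_fork
  @pid = fork { run }
end
```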
### Why?
Before, all flags were static and were therefore stored in class variables and serialized by SiteSerializer. Recently, we added an option for admins to add their own flags or disable existing flags, so the class variable had to be dropped because it was unsafe for a multisite environment. However, this started causing performance problems.
### Solution
When a new Flag system is used, instead of using PostActionType, we can serialize Flags and use fragment cache for performance reasons.
At the same time, we are still supporting the deprecated `replace_flags` API call. When it is used, we fall back to the old solution and the admin cannot add custom flags. In a couple of months, we will be able to drop that API function and clean up that code properly. However, because it may still be used, a redis cache was introduced to improve performance.
To test backward compatibility, you can add this code to any plugin:
```ruby
replace_flags do |flag_settings|
flag_settings.add(
4,
:inappropriate,
topic_type: true,
notify_type: true,
auto_action_type: true,
)
flag_settings.add(1001, :trolling, topic_type: true, notify_type: true, auto_action_type: true)
end
```
Adds a new statistic (hidden from the UI, but available via the API) that tracks daily participating users.
A user is considered "participating" if they have:
- Reacted to a post
- Replied to a topic
- Created a new topic
- Created a new PM
- Sent a chat message
- Reacted to a chat message
Internal ref - t/131013
This commit adds a `MiniSchedulerLongRunningJobLogger` class which will
poll every 60 seconds for mini_scheduler jobs which are stuck. When it
detects that a job is stuck, it will log a warning message with the
current backtrace of the thread that is executing the job.
Note that for scheduled jobs executed at a frequency of less than 30
minutes, we will log when the job has been executing for 30 minutes.
For scheduled jobs executed at a frequency between 30 minutes and 2
hours, we will log when the job has been executing for a duration
greater than its specified frequency.
For scheduled jobs executed at a frequency greater than 2 hours, we will
log once the job has been executing for more than 2 hours.
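The three cases can be summarized as a threshold function; this is a sketch, with the helper name invented for illustration:
```ruby
# Returns how long a job may run before we log a warning (ActiveSupport durations).
def stuck_threshold(frequency)
  if frequency < 30.minutes
    30.minutes
  elsif frequency <= 2.hours
    frequency
  else
    2.hours
  end
end
```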
This commit continues the work laid out by 6039b513fe to redesign the /about page. In this commit, we add the site age and a section on the right-hand side showing site activities/statistics such as topics, posts, sign-ups, likes, etc.
Admins can create up to 50 custom flags; the limit exists for performance reasons.
When the limit is reached, the "Add" button is disabled and the backend is protected by a guardian.
This commit patches `Net::HTTP` to reduce the default timeouts of 60
seconds when we are processing a request. There are certain routes in
Discourse which makes external requests and if the proper timeouts are
not set, we risk having the Unicorn master process force-restart the
Unicorn workers once the 30-second timeout is reached. This can
potentially become a vector for DoS attacks and this commit is aimed at
reducing the risk here.
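For illustration, this is what tightening `Net::HTTP` timeouts looks like; the host and values are placeholders, not the ones used by the patch:
```ruby
require "net/http"

http = Net::HTTP.new("example.com", 443)
http.use_ssl = true
http.open_timeout = 5  # default is 60 seconds
http.read_timeout = 5  # default is 60 seconds
http.write_timeout = 5 # default is 60 seconds
```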
The Safari 15 bugfix has been rolled into @babel/preset-env in the most recent version, so we no longer need to carry our vendored copy.
This commit updates @babel/preset-env, runs `npx yarn-deduplicate yarn.lock`, and removes the vendored transform.
This commit also refactors our theme transpiler to use @babel/preset-env, with the same list of target browsers as our ember-cli build uses. This means we no longer need to maintain a separate list of babel transforms for themes.
We were writing theme-transpiler JS files to the filesystem on a per-process basis, and then immediately reading them back in. Plus, there was no cleanup mechanism, so the tmp directory would grow indefinitely.
This commit refactors things so that the `build.js` script outputs the theme-transpiler source to stdout. That way, we can read it directly into the process, and then into mini-racer, without needing to go via the filesystem. No cleanup required!
In production, the theme-transpiler is still cached in a file during `assets:precompile`
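A sketch of the stdout-based flow, assuming `build.js` is invoked roughly as below (the command and wiring are assumptions):
```ruby
# Read the theme-transpiler source straight from the build script's stdout,
# then evaluate it in mini_racer without touching the filesystem.
source = `node build.js`
context = MiniRacer::Context.new
context.eval(source)
```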
When `SiteSetting.review_every_post` is true and the category requires topic approval (`require_topic_approval`), the system creates two reviewable items.
1. First, because the category needs approval, a `ReviewableQueuePost` record is created; at this stage, no topic is created.
2. The admin approves the review, and the topic and first post are created.
3. Because `review_every_post` is true, the `queue_for_review_if_possible` callback is evaluated and a `ReviewablePost` is created.
4. Then the `ReviewableQueuePost` is linked to the newly generated topic and post.
Initially, we were thinking about hooking into these guards:
```
def self.queue_for_review_if_possible(post, created_or_edited_by)
return unless SiteSetting.review_every_post
return if post.post_type != Post.types[:regular] || post.topic.private_message?
return if Reviewable.pending.where(target: post).exists?
...
```
And add something like
```
return if Reviewable.approved.where(target: post).exists?
```
However, because the callback happens in step 3, before the `ReviewableQueuePost` is linked to the `Topic`, this was not possible.
Therefore, when a `ReviewableQueuePost` creates a `Topic`, a new option called `:reviewed_queued_post` is passed to `PostCreator` to avoid creating a second `Reviewable`.
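A sketch of the call site this implies; only the `:reviewed_queued_post` option name comes from the commit, the rest is illustrative:
```ruby
# Hypothetical: when approving a queued post, create the post while telling
# PostCreator not to queue it for review a second time.
PostCreator.new(
  performed_by,
  raw: payload["raw"],
  reviewed_queued_post: true,
).create
```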
We have been seeing `Zlib::BufError` when running the `assets:precompile` rake
task.
```
I, [2024-07-30T05:19:58.807019 #1059] INFO -- : Writing /var/www/discourse/public/assets/scripts/discourse-test-listen-boot-9b14a0fc65c689577e6a428dcfd680205516fe211700a71c7adb5cbcf4df2cc5.js
rake aborted!
Zlib::BufError: buffer error (Zlib::BufError)
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sprockets-3.7.3/lib/sprockets/cache/file_store.rb:100:in `<<'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sprockets-3.7.3/lib/sprockets/cache/file_store.rb:100:in `set'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/sprockets-3.7.3/lib/sprockets/cache.rb:212:in `set'
```
The hypothesis here is that a thread-safety issue is causing the
problem, since we download the MaxMind databases in a thread and run
decompression operations once the gzip file is downloaded.
In the near term, we plan to move downloading of MaxMind databases out
of the Rake task into a scheduled job, so this patch should be considered
a temporary solution.
The trade-off here is that build time will slightly increase since we
are no longer downloading MaxMind databases while precompiling assets
at the same time.
When using `Discourse.cache.fetch` with an expiry, there's a potential for a race condition due to how we read the data from redis.
The code used to be
```ruby
raw = redis.get(key) if !force
entry = read_entry(key) if raw
return entry if raw && !(entry == :__corrupt_cache__)
```
with `read_entry` defined as follow
```ruby
def read_entry(key)
if data = redis.get(key)
Marshal.load(data)
end
rescue => e
:__corrupt_cache__
end
```
If the value at "key" expired in redis between `raw = redis.get` and `entry = read_entry`, the `entry` variable would be `nil` despite `raw` having a value.
We would then proceed to return `entry` (which is `nil`) thinking it had a value, when it didn't.
The first `redis.get` can be skipped altogether; we can rely solely on `read_entry` to read the data from redis, thus avoiding the race condition and removing the double read operation.
Internal ref - t/132507
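A sketch of the single-read version, based on the snippets above:
```ruby
# Read once via read_entry; no separate existence check that can race with expiry.
if !force
  entry = read_entry(key)
  return entry if !entry.nil? && entry != :__corrupt_cache__
end
```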
* SECURITY: Update default allowed iframes list
Change the default allowed iframe URL list so that every entry includes 3 slashes.
* SECURITY: limit group tag's name length
Limit the size of a group tag's name to 100 characters.
Internal ref - t/130059
* SECURITY: Improve sanitization of SVGs in Onebox
---------
Co-authored-by: Blake Erickson <o.blakeerickson@gmail.com>
Co-authored-by: Régis Hanol <regis@hanol.fr>
Co-authored-by: David Taylor <david@taylorhq.com>
Since switching to MaxMind permalinks to download the databases in
7079698cdf, we have received multiple
reports of rebuilds failing: `maxminddb:refresh` runs during the
rebuilds, and failing to download the databases causes the rebuilds to
fail.
Downloading MaxMind databases should not sit in the critical rebuild
path, but since we are close to the Discourse 3.3 release, we have opted
to just rescue all errors encountered when downloading the databases.
In the near future, after the Discourse 3.3 release, we will look at
moving the downloading of MaxMind databases out of the rebuild path.
We have a dedicated admin page (`/admin/customize/email_templates`) that lets admins customize all emails that Discourse sends to users. This page lists all translation strings that are used for emails, but the list is currently hardcoded and hasn't been updated in years. Discourse now sends a number of new emails, so we should add those templates to the list to let admins easily customize them.
Meta topic: https://meta.discourse.org/t/3-2-x-still-ignores-some-custom-email-templates/308203.
* FIX: Ensure JsLocaleHelper only outputs up-to-date translations
The old implementation forgot to filter out deprecated
translations, causing those translations to incorrectly override the new
locale strings in the frontend.
This commit fills in the missing where clause, selecting only the
up-to-date translations.
Related meta topic: https://meta.discourse.org/t/outdated-translation-replacement-causing-missing-translation/314352
This patch fixes the `i18n:check` rake task which has been broken by
the `MessageFormat` upgrade.
It also adds a spec to ensure we generate valid MF code for all our
available locales.
We can get translations with invalid plural keys from Crowdin
or from custom overrides. Currently, this will raise an error and the
locales won’t be outputted at all.
This patch addresses this issue by using the new `strict: false` option
of our `messageformat-wrapper` gem, allowing locales to be generated even
if invalid plural keys are present.
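Only the option name is confirmed here; as a purely hypothetical call shape (the actual `messageformat-wrapper` entry point may differ):
```ruby
# Assumed API shape; only the strict: false option comes from this commit.
MessageFormat.compile(locale, messages, strict: false)
```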
* FIX: Add post id to the anchor to prevent two identical anchors
We generate anchors for headings in posts. This works fine if there is
only one post in a topic with anchors. The problem comes when you have
two or more posts with the same heading. PrettyText generates anchors
based on the heading text using the raw content of each post, so it is
entirely possible to generate the same anchor for two posts in the same
topic, especially for topics with template replies:
```
Post1:
# heading
context

Post2:
# heading
context
```
When both posts are on the page at the same time, the anchor will only
work for the first post, according to the [HTML specification](https://html.spec.whatwg.org/multipage/browsing-the-web.html#scroll-to-the-fragment-identifier).
> If there is an a element in the document tree whose root is document
> that has a name attribute whose value is equal to fragment, then
> return the *first* such element in tree order.
This bug is particularly serious in forums with non-Latin languages,
such as Chinese. We do not generate slugs for Chinese, which results in
the heading anchors being completely dependent on their order.
```ruby
[2] pry(main)> PrettyText.cook("# 中文")
=> "<h1><a name=\"h-1\" class=\"anchor\" href=\"#h-1\"></a>中文</h1>"
```
Therefore, the anchors in the two posts end up exactly the same and in
the same order, causing almost all of the anchors in the second post to
be invalid.
This commit solves this problem by adding the `post_id` to the anchor.
The new anchor generation method will add `p-{post_id}` as a prefix when
post_id is available:
```ruby
[3] pry(main)> PrettyText.cook("# 中文", post_id: 1234)
=> "<h1><a name=\"p-1234-h-1\" class=\"anchor\" href=\"#p-1234-h-1\"></a>中文</h1>"
```
This way we can ensure that each anchor name only appears once on the
same topic. Using post id also prevents the potential possibility of the
same anchor name when splitting/merging topics.
Previously, in these 2 PRs, we introduced a new site setting, `SiteSetting.enforce_second_factor_on_external_auth`:
https://github.com/discourse/discourse/pull/27547
https://github.com/discourse/discourse/pull/27674
When disabled, it should enforce 2FA for local login with username and password, and skip the requirement when authenticating with OAuth2.
We stored information about the login method in a secure session, but that is not reliable. Therefore, the information about the login method has been moved to the database.
Previously, we couldn't change the user agent name dynamically for onebox requests. In this commit, a new hidden site setting `onebox_user_agent` is created to override the default user agent value specified in the [initializer](c333e9d6e6/config/initializers/100-onebox_options.rb (L15)).
Co-authored-by: Régis Hanol <regis@hanol.fr>
* FEATURE: Clean up previously logged information after permanently deleting posts
When soft deleting a topic or post, we log some details in the
staff log, including the raw content of the post. Before this commit, we
did not clear the information in these records. Therefore, after
permanently deleting a post, `UserHistory` still retained a copy of the
permanently deleted post. This is unexpected behaviour and may raise
some potential legal issues.
This commit adds a behavior that when a post is permanently deleted, the
details column of the `UserHistory` associated with the post will be
overwritten to "(permanently deleted)". At the same time, for permanent
deletion, a new `action_id` is introduced to distinguish it from soft
deletion.
Related meta topic: https://meta.discourse.org/t/introduce-a-way-to-also-permanently-delete-the-sensitive-info-from-the-staff-logs/292546
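A minimal sketch of the cleanup, assuming the `UserHistory` rows are looked up by post (the scope is illustrative):
```ruby
# Overwrite the logged details once the post is permanently deleted.
UserHistory
  .where(post_id: post.id)
  .update_all(details: "(permanently deleted)")
```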
This ensures that Elasticsearch doesn't parse it as an object. There are
too many combinations of job opts, so we don't want Elasticsearch
parsing and indexing this field as an object.
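In other words, the field is shipped as a single string rather than a nested object; a sketch (field and variable names assumed):
```ruby
# Stringify so Elasticsearch indexes one text field instead of an object.
log_payload["job_opts"] = job_opts.to_json
```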
This improves the `TextSentinel` so that CJK text is not considered uppercase, which would fail the validator.
It also optimizes the entropy computation by using native Ruby `.bytes` to get all the bytes from the text.
It also tweaks the `seems_pronounceable?` and `seems_unpretentious?` checks to use the `\p{Alnum}` Unicode regexp group to account for non-Latin languages.
Reference - https://meta.discourse.org/t/body-seems-unclear-error-when-users-are-typing-in-chinese/88715
Inspired by https://github.com/discourse/discourse/pull/27900
Co-authored-by: Paulo Magalhaes <mentalstring@gmail.com>
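A quick illustration of why `\p{Alnum}` matters here (these lines are a demonstration, not the actual implementation):
```ruby
"中文".match?(/[a-z0-9]/i) # => false: a Latin-only check rejects CJK text
"中文".match?(/\p{Alnum}/) # => true: the Unicode group accepts it
```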
We are investigating reports of errors with messageformat compilation following 301713ef. This commit does not fix the issue, but it introduces some basic error handling to avoid completely breaking affected sites.
We will have a fix for the root cause soon.
When tags contain an underscore, we should allow filtering in the same way; previously, due to the regex, tags with underscores were not found when filtering.
This commit ensures that we reset the `missing_s3_uploads` status count
if there are no inventory files which are at least 2 days older than the
site's restored date.
Otherwise, a site with missing uploads that was subsequently restored
would continue to report missing uploads for 2 days.
Followup 560e8aff75
The linked commit allowed oneboxing private GitHub PRs,
issues, commits, and so on, but it didn't actually allow
oneboxing the root repo, e.g. https://github.com/discourse/discourse-reactions
We didn't have an engine for this; we were relying on OpenGraph
tags on the HTML rendering of the page, as we do with other
oneboxes.
To fix this, we needed a new GitHub engine for repos specifically.
Also, this commit adds a `data-github-private-repo` attribute to
PR, issue, and repo onebox HTML so we have an indicator of
whether the repo was private, which can be used for theme components
and so on.
Our old group SMTP SSL option was a checkbox,
but this was not ideal because there are actually
3 different ways SSL can be used when sending
SMTP:
* None
* SSL/TLS
* STARTTLS
We got around this before with specific overrides
for Gmail, but that's not flexible enough now that people
want to use other providers. It's best to be clear,
even though it is a technical detail. We provide a way
to test the SMTP settings before saving them, so there
should be little chance of messing this up.
This commit also converts GroupEmailSettings to a glimmer
component.
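A hypothetical mapping of the three modes onto the Mail gem's SMTP options (`ssl` and `enable_starttls_auto` are real Mail settings; the mode names and method are illustrative):
```ruby
def smtp_ssl_options(mode)
  case mode
  when "none"     then { ssl: false, enable_starttls_auto: false }
  when "ssl_tls"  then { ssl: true }
  when "starttls" then { enable_starttls_auto: true }
  end
end
```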
Allow admins to create custom flags which require an additional message.
I decided to rename the old `custom_flag` to `require_message` as it is more descriptive.
# Context
Currently there is no way to add a custom filter to the experimental `/filter` endpoint. While you can implement a custom `status:`, there is no way to include the user's input in a custom query.
# PR
This PR adds the ability to implement a custom filter. eg. `CUSTOM_FILTER:foo`
- Add `add_filter_custom_filter` for extension (see the sketch after this list)
- Add specs
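A sketch of what a plugin might register, based on the extension point named above (the block signature and filter body are assumptions):
```ruby
# Hypothetical plugin.rb usage: support a `min_words:100` style filter.
add_filter_custom_filter("min_words") do |values, scope|
  scope.where("topics.word_count >= ?", values.first.to_i)
end
```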
Followup 7b627dc14b
In that commit, I changed the email settings validator
to always use the `login` authentication method for
office365 and outlook, but I didn't change the actual
group SMTP mailer to do this.
This commit fixes that issue and does some minor refactoring.
This commit updates `TopicQuery.validators` to cover all of the
public options listed in `TopicQuery.public_valid_options`. This is done
to fix the app returning a 500 response code when an invalid value, such
as a hash, is passed as a query param when accessing the various topic
list routes.
Followup 560e8aff75
GitHub auth tokens cannot be made with permissions to
access multiple organisations. This is quite limiting.
This commit changes the site setting to be a "secret list"
type, which allows for a key/value mapping where the value
is treated like a password in the UI.
Now when a GitHub URL is requested for oneboxing, the
org name from the URL is used to determine which token
to use for the request.
Just in case anyone used the old site setting already,
there is a migration to create a `default` entry
with that token in the new list setting, and for
a period of time we will consider that token valid to
use for all GitHub oneboxes as well.
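A sketch of the per-org lookup this describes, assuming the list setting parses into a hash (the parsing details are invented):
```ruby
# tokens: e.g. { "discourse" => "ghp_...", "default" => "ghp_..." } (hypothetical shape)
org = URI.parse(url).path.split("/")[1]
token = tokens[org] || tokens["default"] # fall back to the migrated entry
```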
Currently, when a plugin registers a new reviewable type or extends a
list method (through `register_reviewable_type` and `extend_list_method`
respectively), the new array is statically computed and always returns
the same value. It will continue to return the same value even if the
plugin is disabled (this can be a problem in a multisite environment too).
To address this issue, this patch changes how `extend_list_method`
works. It’s now using `DiscoursePluginRegistry.define_filtered_register`
to create a register on the fly and store the extra values from various
plugins. It then combines the original values with the ones from the
registry. The registry is already aware of disabled plugins, so when a
plugin is disabled, its registered values won’t be returned.
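A sketch of the registry-based pattern (`DiscoursePluginRegistry.define_filtered_register` is the method named above; the register name and base list are illustrative):
```ruby
DiscoursePluginRegistry.define_filtered_register :reviewable_types

# Combine the static values with plugin-registered ones; entries from
# disabled plugins are filtered out by the registry itself.
def self.types
  BASE_TYPES | DiscoursePluginRegistry.reviewable_types.to_a
end
```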
This commit improves the logging of Sidekiq errors when
`ENABLE_LOGSTASH_LOGGER` is set to 1. Prior to this change, we would
only log the message and the backtrace. After this change, useful
information like `job.class`, `job.opts`, `job.problem_db`,
`exception.class` and `exception.message` are included in the log line
as well.
This changes how we present attachments from incoming emails: they are now "hidden" in a "[details]" block so they don't "hang" at the end of the post.
This is especially useful when using Discourse as a support tool where email is the main communication channel. For various reasons, images are often duplicated by email user agents, and hiding them behind the details block helps keep the conversation focused on the issue at hand.
Internal ref t/122333
This patch upgrades the MessageFormat library to version 3.3.0 from
0.1.5.
Our `I18n.messageFormat` method signature is unchanged, and now uses the
new API under the hood.
We don’t need dedicated locale files for handling pluralization rules
anymore as everything is now included by the library itself.
The compilation of the messages now happens through our
`messageformat-wrapper` gem. It then outputs an ES module that includes
all its needed dependencies.
Most of the changes happen in `JsLocaleHelper` and in the `ExtraLocales`
controller.
A new method called `.output_MF` has been introduced in
`JsLocaleHelper`. It handles all the fetching, compiling and
transpiling to generate the proper MF messages in JS. Overrides and
fallbacks are also handled directly in this method.
The other main change is that now the MF translations are served through
the `ExtraLocales` controller instead of being statically compiled in a
JS file, then having to patch the messages using overrides and
fallbacks. Now the MF translations are just another bundle that is
created on the fly and cached by the client.
This commit updates `DiscourseLogstashLogger#add_with_opts` to avoid
logging messages that match regexp patterns configured in
`Logster.store.ignore`. Those error logs are mostly triggered by clients
and do not serve any useful purpose.
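A sketch of the early return this implies (`Logster.store.ignore` is Logster's ignore list; the surrounding method body is assumed):
```ruby
def add_with_opts(severity, message, progname, opts = nil)
  return if Logster.store.ignore&.any? { |pattern| pattern.match?(message.to_s) }
  # ... existing logging logic ...
end
```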
This commit updates the `docker:test` rake task to run core and plugin
QUnit tests in parallel, using half the number of available CPU
processors, to speed up the time it takes to run the tests on hardware
with more CPU cores.
Before this commit, core QUnit tests run by the `docker:test` rake task
were capped at 3 parallel processes, while plugin QUnit tests were not
run in parallel.
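The "half the available processors" rule could be computed as below; a sketch using concurrent-ruby, which ships with Rails (the actual task may derive it differently):
```ruby
require "concurrent"

parallelism = [Concurrent.processor_count / 2, 1].max
```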
This commit updates `DiscourseLogstashLogger` to add the
`exception_class` and `exception_message` fields to the log line when the
`progname` of the log message is `web-exception`, which is Logster's
logging of exceptions during a web request.
The `exception_class` and `exception_message` fields allow consumers of
the logs to easily group logs together.
This commit adds the ability to onebox private GitHub
commits, pull requests, issues, blobs, and actions using
a new `github_onebox_access_token` site setting. The token
must be set up correctly to have access to the repos needed.
To do this successfully with the Oneboxer, we need to skip
redirects on the github.com host, otherwise we get a 404
on the URL before it is translated into a GitHub API URL
and has the appropriate headers added.
- Delete vendored copy
- Create a JS entrypoint under `static/` which imports all the modes/themes/extensions we need
- Create an async `load-ace-editor` entrypoint
- Update `<AceEditor` component to use the new entrypoint
- De-jquery-ify `<AceEditor`
- Bump `v1.4.13` -> `v1.35.2`
This change eliminates a couple of instances where subfolder URLs are badly formatted; in most cases we can use Discourse.base_url_no_prefix to prevent adding the subfolder to the base URL.
This commit adds a hidden `s3_inventory_bucket_region` site setting to
specify the region of the `s3_inventory_bucket` when the `S3Inventory`
class initializes an instance of the `S3Helper`. By default, the
`S3Helper` class uses the value of the `s3_region` site setting but the
region of the `s3_inventory_bucket` is not always the same as the
`s3_region` configured.
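A one-line sketch of the fallback wiring (both site settings are named above; how `S3Helper` consumes the value is assumed):
```ruby
region = SiteSetting.s3_inventory_bucket_region.presence || SiteSetting.s3_region
```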
Add a new column - `user_agent` - to the `SearchLog` table.
This column can be null, as we only allow the user-agent string to have a max length of 2000 characters. If the user-agent string surpasses the max characters allowed, we simply nullify the value and save/write the log as normal.
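A minimal sketch of the nullify-on-overflow rule (attribute handling is illustrative):
```ruby
user_agent = nil if user_agent && user_agent.length > 2000
SearchLog.create!(term: term, user_agent: user_agent) # other attributes omitted
```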
In our official Docker image, running git commands results in the
following error:
```
fatal: detected dubious ownership in repository at '/var/www/discourse'
To add an exception for this directory, call:
git config --global --add safe.directory /var/www/discourse
```
The AdminPlugin JS model uses a similar pattern to chat models,
where it is a plain JS class manually converting provided
snake_case attributes from the serializer to JS camelCase.
However, this doesn't work when it comes to using `add_to_serializer`
in plugins since core does not know about these new attributes.
Instead, we can use a JS function to convert snake_case to camelCase
and use that when initializing AdminPlugin. This commit also moves
similar functions to a new case-converter.js file in
discourse-common/lib.
This commit rewrites `DiscourseLogstashLogger` to not be an instance
of `LogstashLogger`. The reason we don't want it to be an instance of
`LogstashLogger` is because we want the new logger to be chained to
Logster's logger which can then pass down useful information like the
request's env and error backtraces which Logster has already gathered.
Note that this commit does not bother to maintain backwards
compatibility and drops the `LOGSTASH_URI` and `UNICORN_LOGSTASH_URI`
ENV variables, which were previously used to configure the destination
to which `logstash-logger` would send the logs. Instead, we introduce
the `ENABLE_LOGSTASH_LOGGER` ENV variable to replace both and to remove
the need for the log paths to be specified. Note that the previous
feature was considered experimental as stated in d888d3c54c
and the new feature should be considered experimental as well. The code
may be moved into a plugin in the future.
Fixes the following deprecation warning:
> DEPRECATION WARNING: The `check_pending!` method is deprecated in favor of `check_all_pending!`. The new implementation will loop through all available database configurations and find pending migrations. The prior implementation did not permit this.
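The replacement call, as suggested by the deprecation message itself:
```ruby
ActiveRecord::Migration.check_all_pending!
```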
Switch away from the deprecated 'min trust level to allow ignore' setting in order to reduce the number of deprecation notices in the logs.
This tweaks a few serializers so that the `can_ignore_users?` property is always coming from the server and properly used on the client side.
Followup 2f2da72747
This commit moves topic view tracking from happening
every time a Topic is requested, which is susceptible
to inflating numbers of views from web crawlers, to
our request tracker middleware.
In this new location, topic views are only tracked when
the following headers are sent:
* HTTP_DISCOURSE_TRACK_VIEW - This is sent on every page navigation when
clicking around the ember app. We count these as browser page views
because we know it comes from the AJAX call in our app. The topic ID
is extracted from HTTP_DISCOURSE_TRACK_VIEW_TOPIC_ID
* HTTP_DISCOURSE_DEFERRED_TRACK_VIEW - Sent when MessageBus initializes
after first loading the page to count the initial page load view. The
topic ID is extracted from HTTP_DISCOURSE_DEFERRED_TRACK_VIEW_TOPIC_ID.
This will bring topic views more in line with the change we
made to page views in the referenced commit and result in
more realistic topic view counts.
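A sketch of the middleware-side extraction, using the env keys described above (the helper name is illustrative):
```ruby
# Returns the topic ID to count a view for, or nil when the request
# carries neither tracking header.
def tracked_topic_id(env)
  if env["HTTP_DISCOURSE_TRACK_VIEW"]
    env["HTTP_DISCOURSE_TRACK_VIEW_TOPIC_ID"]
  elsif env["HTTP_DISCOURSE_DEFERRED_TRACK_VIEW"]
    env["HTTP_DISCOURSE_DEFERRED_TRACK_VIEW_TOPIC_ID"]
  end
end
```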