FEATURE: Split up text segmentation for Chinese and Japanese.
* Chinese segmentation will continue to rely on cppjieba
* Japanese segmentation will use our port of TinySegmenter
* Korean currently does not rely on segmentation, which was dropped in c677877e4f
* SiteSetting.search_tokenize_chinese_japanese_korean has been split into SiteSetting.search_tokenize_chinese and SiteSetting.search_tokenize_japanese (see the dispatch sketch below)
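
For context, a minimal sketch of how search tokenization can now branch on the two settings. This is illustrative only: segment_chinese and segment_japanese are assumed stand-ins for the cppjieba and TinySegmenter code paths named above, not methods introduced by this commit.

    # Illustrative dispatch only; the stand-in methods are assumed wrappers
    # around the cppjieba-backed and TinySegmenter-backed code paths.
    def segment(text)
      if SiteSetting.search_tokenize_chinese
        segment_chinese(text)   # cppjieba
      elsif SiteSetting.search_tokenize_japanese
        segment_japanese(text)  # port of TinySegmenter
      else
        text
      end
    end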
lib/validators/search_tokenize_chinese_validator.rb (new file, 14 lines)
@@ -0,0 +1,14 @@
+# frozen_string_literal: true
+
+class SearchTokenizeChineseValidator
+  def initialize(opts = {})
+  end
+
+  def valid_value?(value)
+    !SiteSetting.search_tokenize_japanese
+  end
+
+  def error_message
+    I18n.t("site_settings.errors.search_tokenize_japanese_enabled")
+  end
+end
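
The validator follows the two-method shape shared by the other validators in lib/validators: valid_value? decides whether a new value may be saved, and error_message supplies the message shown when it may not. Note that valid_value? ignores the incoming value and simply vetoes any change while Japanese tokenization is enabled. An illustrative console session (class and setting names are from the diff; the session itself is a sketch):

    # Japanese tokenization off: the Chinese setting may be changed.
    SiteSetting.search_tokenize_japanese = false
    SearchTokenizeChineseValidator.new.valid_value?(true)  # => true

    # Japanese tokenization on: the change is rejected, keeping the two
    # segmenters mutually exclusive.
    SiteSetting.search_tokenize_japanese = true
    SearchTokenizeChineseValidator.new.valid_value?(true)  # => false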
lib/validators/search_tokenize_japanese_validator.rb (new file, 14 lines)
@@ -0,0 +1,14 @@
+# frozen_string_literal: true
+
+class SearchTokenizeJapaneseValidator
+  def initialize(opts = {})
+  end
+
+  def valid_value?(value)
+    !SiteSetting.search_tokenize_chinese
+  end
+
+  def error_message
+    I18n.t("site_settings.errors.search_tokenize_chinese_enabled")
+  end
+end
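
SearchTokenizeJapaneseValidator mirrors its counterpart with the two settings swapped, so at most one tokenizer can be enabled at a time. A minimal, hypothetical sketch of how a settings layer could consult such a validator before accepting a change; the helper below is illustrative, not part of this commit, and the real wiring lives in Discourse's site settings framework:

    # Hypothetical helper (not from this commit): refuse to change
    # search_tokenize_japanese when its validator vetoes the new value.
    def update_search_tokenize_japanese(value)
      validator = SearchTokenizeJapaneseValidator.new
      raise Discourse::InvalidParameters, validator.error_message unless validator.valid_value?(value)

      SiteSetting.set("search_tokenize_japanese", value)
    end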