Merge pull request #6839 from tk0miya/6806_linkcheck_concatenate_error

Fix #6806: linkcheck: Failure on parsing content
Takeshi KOMIYA authored 2019-11-21 00:44:03 +09:00, committed by GitHub
2 changed files with 4 additions and 0 deletions

CHANGES

@@ -51,6 +51,7 @@ Bugs fixed
   supported LaTeX engines: ¶, §, €, ∞, ±, →, ‣, , superscript and subscript
   digits go through "as is" (as default OpenType font supports them)
 * #6704: linkcheck: Be defensive and handle newly defined HTTP error code
+* #6806: linkcheck: Failure on parsing content
 * #6655: image URLs containing ``data:`` causes gettext builder crashed
 * #6584: i18n: Error when compiling message catalogs on Hindi
 * #6718: i18n: KeyError is raised if section title and table title are same

sphinx/builders/linkcheck.py

@@ -59,6 +59,9 @@ def check_anchor(response: requests.requests.Response, anchor: str) -> bool:
     # Read file in chunks. If we find a matching anchor, we break
     # the loop early in hopes not to have to download the whole thing.
     for chunk in response.iter_content(chunk_size=4096, decode_unicode=True):
+        if isinstance(chunk, bytes):    # requests failed to decode
+            chunk = chunk.decode()      # manually try to decode it
+
         parser.feed(chunk)
         if parser.found:
             break
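For context, below is a minimal standalone sketch of the defensive-decoding pattern this commit introduces. The helper name read_text_chunks, the __main__ harness, and the example URL are illustrative placeholders, not part of the Sphinx codebase. The underlying behavior is real: with decode_unicode=True, requests yields str chunks when it can determine the response encoding, but falls back to raw bytes when it cannot (e.g. no charset in the Content-Type header), and feeding bytes to HTMLParser.feed() raises a TypeError, which is the failure reported in #6806.

import requests

def read_text_chunks(response, chunk_size=4096):
    """Yield str chunks from a streamed response, decoding defensively.

    requests may yield bytes from iter_content(decode_unicode=True)
    when it cannot determine the response encoding, so fall back to a
    manual decode (UTF-8 by default) before handing the chunk onward.
    (Helper name and harness are placeholders for illustration.)
    """
    for chunk in response.iter_content(chunk_size=chunk_size, decode_unicode=True):
        if isinstance(chunk, bytes):   # requests failed to decode
            chunk = chunk.decode()     # manually try to decode it
        yield chunk

if __name__ == '__main__':
    # Placeholder URL; a page served without an explicit charset
    # exercises the bytes fallback above.
    resp = requests.get('https://example.com/', stream=True)
    for text in read_text_chunks(resp):
        print(type(text))  # always <class 'str'>
        break

Note that the bare decode() assumes UTF-8, mirroring the "try to decode it" semantics of the commit; a stricter caller could decode with response.apparent_encoding instead and handle UnicodeDecodeError explicitly.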