mirror of https://github.com/sphinx-doc/sphinx.git
synced 2025-02-25 18:55:22 -06:00

commit 475ebf6c36
Merge branch 'master' into fix-stemming-removes-short-words-from-search-results
@@ -9,6 +9,7 @@ python:
   - "3.3"
   - "3.4"
   - "3.5"
+  - "3.5-dev"
   - "pypy"
 env:
   global:
CHANGES (75 lines changed)
@@ -13,6 +13,10 @@ Features added
 * Add ``:caption:`` option for sphinx.ext.inheritance_diagram.
 * #894: Add ``lualatexpdf`` and ``xelatexpdf`` as a make target to build PDF using lualatex or xelatex
 * #2471: Add config variable for default doctest flags.
+* Convert linkcheck builder to requests for better encoding handling
+* #2463, #2516: Add keywords of "meta" directive to search index
+* ``:maxdepth:`` option of toctree affects ``secnumdepth`` (ref: #2547)
+* #2575: Now ``sphinx.ext.graphviz`` allows ``:align:`` option
 
 Bugs fixed
 ----------
@@ -21,18 +25,39 @@ Documentation
 -------------
 
 
-Release 1.4.2 (in development)
+Release 1.4.3 (in development)
 ==============================
 
+Bugs fixed
+----------
+
+
+Release 1.4.2 (released May 29, 2016)
+=====================================
+
 Features added
 --------------
 
-* Now :confval:`suppress_warnings` accepts following configurations: ``app.add_node``, ``app.add_directive``, ``app.add_role`` and ``app.add_generic_role`` (ref: #2451)
-* LaTeX writer allows page breaks in topic contents; and their horizontal
-  extent now fits in the line width (shadow in margin). Warning-type
-  admonitions allow page breaks (if very long) and their vertical spacing
-  has been made more coherent with the one for Hint-type notices.
-* Convert linkcheck builder to requests for better encoding handling
+* Now :confval:`suppress_warnings` accepts following configurations (ref: #2451, #2466):
+
+  - ``app.add_node``
+  - ``app.add_directive``
+  - ``app.add_role``
+  - ``app.add_generic_role``
+  - ``app.add_source_parser``
+  - ``image.data_uri``
+  - ``image.nonlocal_uri``
+
+* #2453: LaTeX writer allows page breaks in topic contents; and their
+  horizontal extent now fits in the line width (with shadow in margin). Also
+  warning-type admonitions allow page breaks and their vertical spacing has
+  been made more coherent with the one for hint-type notices (ref #2446).
+
+* #2459: the framing of literal code-blocks in LaTeX output (and not only the
+  code lines themselves) obey the indentation in lists or quoted blocks.
+
+* #2343: the long source lines in code-blocks are wrapped (without modifying
+  the line numbering) in LaTeX output (ref #1534, #2304).
 
 Bugs fixed
 ----------
@@ -45,6 +70,42 @@ Bugs fixed
 * #2447: VerbatimBorderColor wrongly used also for captions of PDF
 * #2456: C++, fix crash related to document merging (e.g., singlehtml and Latex builders).
 * #2446: latex(pdf) sets local tables of contents (or more generally topic nodes) in unbreakable boxes, causes overflow at bottom
+* #2476: Omit MathJax markers if :nowrap: is given
+* #2465: latex builder fails in case no caption option is provided to toctree directive
+* Sphinx crashes if self referenced toctree found
+* #2481: spelling mistake for mecab search splitter. Thanks to Naoki Sato.
+* #2309: Fix could not refer "indirect hyperlink targets" by ref-role
+* intersphinx fails if mapping URL contains any port
+* #2088: intersphinx crashes if the mapping URL requires basic auth
+* #2304: auto line breaks in latexpdf codeblocks
+* #1534: Word wrap long lines in Latex verbatim blocks
+* #2460: too much white space on top of captioned literal blocks in PDF output
+* Show error reason when multiple math extensions are loaded (ref: #2499)
+* #2483: any figure number was not assigned if figure title contains only non text objects
+* #2501: Unicode subscript numbers are normalized in LaTeX
+* #2492: Figure directive with :figwidth: generates incorrect Latex-code
+* The caption of figure is always put on center even if ``:align:`` was specified
+* #2526: LaTeX writer crashes if the section having only images
+* #2522: Sphinx touches mo files under installed directory that caused permission error.
+* #2536: C++, fix crash when an immediately nested scope has the same name as the current scope.
+* #2555: Fix crash on any-references with unicode.
+* #2517: wrong bookmark encoding in PDF if using LuaLaTeX
+* #2521: generated Makefile causes BSD make crashed if sphinx-build not found
+* #2470: ``typing`` backport package causes autodoc errors with python 2.7
+* ``sphinx.ext.intersphinx`` crashes if non-string value is used for key of `intersphinx_mapping`
+* #2518: `intersphinx_mapping` disallows non alphanumeric keys
+* #2558: unpack error on devhelp builder
+* #2561: Info builder crashes when a footnote contains a link
+* #2565: The descriptions of objects generated by ``sphinx.ext.autosummary`` overflow lines at LaTeX writer
+* Extend pdflatex config in sphinx.sty to subparagraphs (ref: #2551)
+* #2445: `rst_prolog` and `rst_epilog` affect to non reST sources
+* #2576: ``sphinx.ext.imgmath`` crashes if subprocess raises error
+* #2577: ``sphinx.ext.imgmath``: Invalid argument are passed to dvisvgm
+* #2556: Xapian search does not work with Python 3
+* #2581: The search doesn't work if language="es" (spanish)
+* #2382: Adjust spacing after abbreviations on figure numbers in LaTeX writer
+* #2383: The generated footnote by `latex_show_urls` overflows lines
+* #2497, #2552: The label of search button does not fit for the button itself
 
 
 Release 1.4.1 (released Apr 12, 2016)
@@ -222,6 +222,9 @@ General configuration
   * app.add_directive
   * app.add_role
   * app.add_generic_role
+  * app.add_source_parser
+  * image.data_uri
+  * image.nonlocal_uri
   * ref.term
   * ref.ref
   * ref.numref
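For context on the hunk above: these entries are values a project would put in its ``suppress_warnings`` list in ``conf.py``. A hedged sketch follows — the configuration is hypothetical, and ``is_suppressed`` is a simplified illustration of type/subtype matching, not Sphinx's real ``is_suppressed_warning`` helper:

```python
# conf.py sketch (hypothetical project configuration): suppress a subset of
# the warning types documented in the hunk above.
suppress_warnings = [
    'app.add_directive',
    'app.add_source_parser',
    'image.nonlocal_uri',
]


def is_suppressed(warning_type, suppress_list):
    # A warning type matches an entry exactly, or an entry may name a whole
    # category (e.g. 'app') that covers its subtypes ('app.add_role', ...).
    return any(warning_type == entry or warning_type.startswith(entry + '.')
               for entry in suppress_list)
```

With this matcher, ``is_suppressed('ref.term', suppress_warnings)`` is false because no ``ref.*`` entry was configured.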
@@ -280,7 +283,12 @@ General configuration
 .. confval:: numfig
 
    If true, figures, tables and code-blocks are automatically numbered if they
-   have a caption. For now, it works only with the HTML builder. Default is ``False``.
+   have a caption. At same time, the `numref` role is enabled. For now, it
+   works only with the HTML builder and LaTeX builder. Default is ``False``.
+
+   .. note::
+
+      LaTeX builder always assign numbers whether this option is enabled or not.
 
    .. versionadded:: 1.3
 
@@ -1615,8 +1623,13 @@ These options influence LaTeX output.
 * Keys that don't need be overridden unless in special cases are:
 
   ``'inputenc'``
-    "inputenc" package inclusion, default
-    ``'\\usepackage[utf8]{inputenc}'``.
+    "inputenc" package inclusion, defaults to
+    ``'\\usepackage[utf8]{inputenc}'`` when using pdflatex.
+    Otherwise unset.
+
+    .. versionchanged:: 1.5
+       Previously ``'\\usepackage[utf8]{inputenc}'`` was used for all
+       compilers.
   ``'cmappkg'``
     "cmap" package inclusion, default ``'\\usepackage{cmap}'``.
 
@@ -33,11 +33,14 @@ There are two kinds of test blocks:
 * *code-output-style* blocks consist of an ordinary piece of Python code, and
   optionally, a piece of output for that code.
 
-The doctest extension provides four directives. The *group* argument is
-interpreted as follows: if it is empty, the block is assigned to the group named
-``default``. If it is ``*``, the block is assigned to all groups (including the
-``default`` group). Otherwise, it must be a comma-separated list of group
-names.
+
+Directives
+----------
+
+The *group* argument below is interpreted as follows: if it is empty, the block
+is assigned to the group named ``default``. If it is ``*``, the block is
+assigned to all groups (including the ``default`` group). Otherwise, it must be
+a comma-separated list of group names.
 
 .. rst:directive:: .. testsetup:: [group]
 
@@ -171,7 +174,10 @@ The following is an example for the usage of the directives. The test via
    This parrot wouldn't voom if you put 3000 volts through it!
 
 
-There are also these config values for customizing the doctest extension:
+Configuration
+-------------
+
+The doctest extension uses the following configuration values:
 
 .. confval:: doctest_default_flags
 
@@ -96,6 +96,10 @@ It adds these directives:
 All three directives support a ``graphviz_dot`` option that can be switch the
 ``dot`` command within the directive.
 
+.. versionadded:: 1.5
+   All three directives support a ``align`` option to align the graph horizontal.
+   The values "left", "center", "right" are allowed.
+
 There are also these new config values:
 
 .. confval:: graphviz_dot
@@ -358,7 +358,7 @@ package.
 
    .. versionadded:: 1.1
 
-.. method:: Sphinx.add_source_parser(name, suffix, parser)
+.. method:: Sphinx.add_source_parser(suffix, parser)
 
    Register a parser class for specified *suffix*.
 
@@ -196,6 +196,8 @@ Use ```Link text <http://example.com/>`_`` for inline web links. If the link
 text should be the web address, you don't need special markup at all, the parser
 finds links and mail addresses in ordinary text.
 
+.. important:: There must be a space between the link text and the opening \< for the URL.
+
 You can also separate the link and the target definition (:duref:`ref
 <hyperlink-targets>`), like this::
 
@@ -51,7 +51,7 @@ The third form provides your theme path dynamically to Sphinx if the
 called ``sphinx_themes`` in your setup.py file and write a ``get_path`` function
 that has to return the directory with themes in it::
 
-    // in your 'setup.py'
+    # 'setup.py'
 
     setup(
         ...
@@ -63,7 +63,7 @@ that has to return the directory with themes in it::
         ...
     )
 
-    // in 'your_package.py'
+    # 'your_package.py'
 
     from os import path
     package_dir = path.abspath(path.dirname(__file__))
setup.py (17 lines changed)
@@ -98,6 +98,13 @@ else:
         def run(self):
             compile_catalog.run(self)
 
+            if isinstance(self.domain, list):
+                for domain in self.domain:
+                    self._run_domain_js(domain)
+            else:
+                self._run_domain_js(self.domain)
+
+        def _run_domain_js(self, domain):
             po_files = []
             js_files = []
 
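The setup.py change above lets the ``domain`` option be either a single string or a list, dispatching once per domain name. The normalization pattern can be sketched in isolation — ``run_per_domain`` and ``runner`` are illustrative names, not part of the real distutils command:

```python
def run_per_domain(domain, runner):
    """Accept `domain` as a str or a list of str and invoke `runner`
    once per domain name, like the run()/_run_domain_js() split above."""
    domains = domain if isinstance(domain, list) else [domain]
    return [runner(d) for d in domains]
```

The single-string case is simply wrapped in a one-element list so both shapes share one code path.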
@@ -106,20 +113,20 @@ else:
                 po_files.append((self.locale,
                                  os.path.join(self.directory, self.locale,
                                               'LC_MESSAGES',
-                                              self.domain + '.po')))
+                                              domain + '.po')))
                 js_files.append(os.path.join(self.directory, self.locale,
                                              'LC_MESSAGES',
-                                             self.domain + '.js'))
+                                             domain + '.js'))
             else:
                 for locale in os.listdir(self.directory):
                     po_file = os.path.join(self.directory, locale,
                                            'LC_MESSAGES',
-                                           self.domain + '.po')
+                                           domain + '.po')
                     if os.path.exists(po_file):
                         po_files.append((locale, po_file))
                         js_files.append(os.path.join(self.directory, locale,
                                                      'LC_MESSAGES',
-                                                     self.domain + '.js'))
+                                                     domain + '.js'))
         else:
             po_files.append((self.locale, self.input_file))
             if self.output_file:
@@ -127,7 +134,7 @@ else:
             else:
                 js_files.append(os.path.join(self.directory, self.locale,
                                              'LC_MESSAGES',
-                                             self.domain + '.js'))
+                                             domain + '.js'))
 
         for js_file, (locale, po_file) in zip(js_files, po_files):
             infile = open(po_file, 'r')
@@ -116,10 +116,14 @@ class index(nodes.Invisible, nodes.Inline, nodes.TextElement):
     """Node for index entries.
 
     This node is created by the ``index`` directive and has one attribute,
-    ``entries``. Its value is a list of 4-tuples of ``(entrytype, entryname,
-    target, ignored)``.
+    ``entries``. Its value is a list of 5-tuples of ``(entrytype, entryname,
+    target, ignored, key)``.
 
     *entrytype* is one of "single", "pair", "double", "triple".
+
+    *key* is categolziation characters (usually it is single character) for
+    general index page. For the detail of this, please see also:
+    :rst:dir:`glossary` and issue #2320.
     """
 
 
@@ -62,11 +62,8 @@ def write_file(name, text, opts):
         print('File %s already exists, skipping.' % fname)
     else:
         print('Creating file %s.' % fname)
-        f = open(fname, 'w')
-        try:
+        with open(fname, 'w') as f:
             f.write(text)
-        finally:
-            f.close()
 
 
 def format_heading(level, text):
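Many hunks in this commit replace the ``open()``/``try``/``finally``/``close()`` pattern with a ``with`` statement. A minimal standalone sketch of the two shapes (using a temporary file, not the real apidoc paths):

```python
import os
import tempfile


def write_file_old(fname, text):
    # pre-refactor shape: explicit close() in a finally block
    f = open(fname, 'w')
    try:
        f.write(text)
    finally:
        f.close()


def write_file_new(fname, text):
    # post-refactor shape: the with statement closes the file
    # even if f.write() raises
    with open(fname, 'w') as f:
        f.write(text)


fname = os.path.join(tempfile.mkdtemp(), 'demo.txt')
write_file_old(fname, 'a')
write_file_new(fname, 'b')
```

Both forms guarantee the file is closed on error; the ``with`` form just expresses it with less boilerplate, which is why the refactor is mechanical across so many builders.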
@@ -44,6 +44,7 @@ from sphinx.util.osutil import ENOENT
 from sphinx.util.logging import is_suppressed_warning
 from sphinx.util.console import bold, lightgray, darkgray, darkgreen, \
     term_width_line
+from sphinx.util.i18n import find_catalog_source_files
 
 if hasattr(sys, 'intern'):
     intern = sys.intern
@@ -207,13 +208,17 @@ class Sphinx(object):
         if self.config.language is not None:
             self.info(bold('loading translations [%s]... ' %
                            self.config.language), nonl=True)
-            locale_dirs = [None, path.join(package_dir, 'locale')] + \
-                [path.join(self.srcdir, x) for x in self.config.locale_dirs]
+            user_locale_dirs = [
+                path.join(self.srcdir, x) for x in self.config.locale_dirs]
+            # compile mo files if sphinx.po file in user locale directories are updated
+            for catinfo in find_catalog_source_files(
+                    user_locale_dirs, self.config.language, domains=['sphinx'],
+                    charset=self.config.source_encoding):
+                catinfo.write_mo(self.config.language)
+            locale_dirs = [None, path.join(package_dir, 'locale')] + user_locale_dirs
         else:
             locale_dirs = []
-        self.translator, has_translation = locale.init(locale_dirs,
-                                                       self.config.language,
-                                                       charset=self.config.source_encoding)
+        self.translator, has_translation = locale.init(locale_dirs, self.config.language)
         if self.config.language is not None:
             if has_translation or self.config.language == 'en':
                 # "en" never needs to be translated
@@ -788,6 +793,11 @@ class Sphinx(object):
 
     def add_source_parser(self, suffix, parser):
         self.debug('[app] adding search source_parser: %r, %r', (suffix, parser))
+        if suffix in self._additional_source_parsers:
+            self.warn('while setting up extension %s: source_parser for %r is '
+                      'already registered, it will be overridden' %
+                      (self._setting_up_extension[-1], suffix),
+                      type='app', subtype='add_source_parser')
         self._additional_source_parsers[suffix] = parser
 
 
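The ``add_source_parser`` change above warns on duplicate registration instead of silently overriding. The registry pattern can be sketched generically — ``ParserRegistry`` is an illustrative stand-in, without the Sphinx warning machinery:

```python
import warnings


class ParserRegistry:
    """Sketch of a suffix -> parser registry that warns on override,
    mirroring the behaviour added to Sphinx.add_source_parser above."""

    def __init__(self):
        self._parsers = {}

    def add(self, suffix, parser):
        if suffix in self._parsers:
            # warn, but still override, matching the hunk's semantics
            warnings.warn('source_parser for %r is already registered, '
                          'it will be overridden' % suffix)
        self._parsers[suffix] = parser
```

Warning-then-override keeps existing setups working while surfacing extension conflicts that were previously invisible.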
@@ -178,14 +178,11 @@ class AppleHelpBuilder(StandaloneHTMLBuilder):
 
         # Build the access page
         self.info(bold('building access page...'), nonl=True)
-        f = codecs.open(path.join(language_dir, '_access.html'), 'w')
-        try:
+        with codecs.open(path.join(language_dir, '_access.html'), 'w') as f:
             f.write(access_page_template % {
                 'toc': htmlescape(toc, quote=True),
                 'title': htmlescape(self.config.applehelp_title)
             })
-        finally:
-            f.close()
         self.info('done')
 
         # Generate the help index
@@ -101,16 +101,10 @@ class ChangesBuilder(Builder):
             'show_copyright': self.config.html_show_copyright,
             'show_sphinx': self.config.html_show_sphinx,
         }
-        f = codecs.open(path.join(self.outdir, 'index.html'), 'w', 'utf8')
-        try:
+        with codecs.open(path.join(self.outdir, 'index.html'), 'w', 'utf8') as f:
             f.write(self.templates.render('changes/frameset.html', ctx))
-        finally:
-            f.close()
-        f = codecs.open(path.join(self.outdir, 'changes.html'), 'w', 'utf8')
-        try:
+        with codecs.open(path.join(self.outdir, 'changes.html'), 'w', 'utf8') as f:
             f.write(self.templates.render('changes/versionchanges.html', ctx))
-        finally:
-            f.close()
 
         hltext = ['.. versionadded:: %s' % version,
                   '.. versionchanged:: %s' % version,
@@ -126,27 +120,22 @@ class ChangesBuilder(Builder):
 
         self.info(bold('copying source files...'))
         for docname in self.env.all_docs:
-            f = codecs.open(self.env.doc2path(docname), 'r',
-                            self.env.config.source_encoding)
+            with codecs.open(self.env.doc2path(docname), 'r',
+                             self.env.config.source_encoding) as f:
                 try:
                     lines = f.readlines()
                 except UnicodeDecodeError:
                     self.warn('could not read %r for changelog creation' % docname)
                     continue
-            finally:
-                f.close()
             targetfn = path.join(self.outdir, 'rst', os_path(docname)) + '.html'
             ensuredir(path.dirname(targetfn))
-            f = codecs.open(targetfn, 'w', 'utf-8')
-            try:
+            with codecs.open(targetfn, 'w', 'utf-8') as f:
                 text = ''.join(hl(i+1, line) for (i, line) in enumerate(lines))
                 ctx = {
                     'filename': self.env.doc2path(docname, None),
                     'text': text
                 }
                 f.write(self.templates.render('changes/rstsource.html', ctx))
-            finally:
-                f.close()
         themectx = dict(('theme_' + key, val) for (key, val) in
                         iteritems(self.theme.get_options({})))
         copy_static_entry(path.join(package_dir, 'themes', 'default',
@@ -123,12 +123,9 @@ class DevhelpBuilder(StandaloneHTMLBuilder):
                           subitem[1], [])
 
         for (key, group) in index:
-            for title, (refs, subitems) in group:
+            for title, (refs, subitems, key) in group:
                 write_index(title, refs, subitems)
 
         # Dump the XML file
-        f = comp_open(path.join(outdir, outname + '.devhelp'), 'w')
-        try:
+        with comp_open(path.join(outdir, outname + '.devhelp'), 'w') as f:
             tree.write(f, 'utf-8')
-        finally:
-            f.close()
@@ -496,11 +496,8 @@ class EpubBuilder(StandaloneHTMLBuilder):
     def build_mimetype(self, outdir, outname):
         """Write the metainfo file mimetype."""
         self.info('writing %s file...' % outname)
-        f = codecs.open(path.join(outdir, outname), 'w', 'utf-8')
-        try:
+        with codecs.open(path.join(outdir, outname), 'w', 'utf-8') as f:
             f.write(self.mimetype_template)
-        finally:
-            f.close()
 
     def build_container(self, outdir, outname):
         """Write the metainfo file META-INF/cointainer.xml."""
@@ -511,11 +508,8 @@ class EpubBuilder(StandaloneHTMLBuilder):
         except OSError as err:
             if err.errno != EEXIST:
                 raise
-        f = codecs.open(path.join(outdir, outname), 'w', 'utf-8')
-        try:
+        with codecs.open(path.join(outdir, outname), 'w', 'utf-8') as f:
             f.write(self.container_template)
-        finally:
-            f.close()
 
     def content_metadata(self, files, spine, guide):
         """Create a dictionary with all metadata for the content.opf
@@ -652,12 +646,9 @@ class EpubBuilder(StandaloneHTMLBuilder):
         guide = '\n'.join(guide)
 
         # write the project file
-        f = codecs.open(path.join(outdir, outname), 'w', 'utf-8')
-        try:
+        with codecs.open(path.join(outdir, outname), 'w', 'utf-8') as f:
             f.write(content_tmpl %
                     self.content_metadata(projectfiles, spine, guide))
-        finally:
-            f.close()
 
     def new_navpoint(self, node, level, incr=True):
         """Create a new entry in the toc from the node at given level."""
@@ -749,11 +740,8 @@ class EpubBuilder(StandaloneHTMLBuilder):
         navpoints = self.build_navpoints(refnodes)
         level = max(item['level'] for item in self.refnodes)
         level = min(level, self.config.epub_tocdepth)
-        f = codecs.open(path.join(outdir, outname), 'w', 'utf-8')
-        try:
+        with codecs.open(path.join(outdir, outname), 'w', 'utf-8') as f:
             f.write(self.toc_template % self.toc_metadata(level, navpoints))
-        finally:
-            f.close()
 
     def build_epub(self, outdir, outname):
         """Write the epub file.
@@ -203,11 +203,9 @@ class Epub3Builder(EpubBuilder):
         # 'includehidden'
         refnodes = self.refnodes
         navlist = self.build_navlist(refnodes)
-        f = codecs.open(path.join(outdir, outname), 'w', 'utf-8')
-        try:
+        with codecs.open(path.join(outdir, outname), 'w', 'utf-8') as f:
             f.write(self.navigation_doc_template %
                     self.navigation_doc_metadata(navlist))
-        finally:
-            f.close()
+
         # Add nav.xhtml to epub file
         self.files.append(outname)
@@ -11,7 +11,7 @@
 
 from __future__ import unicode_literals
 
-from os import path, walk
+from os import path, walk, getenv
 from codecs import open
 from time import time
 from datetime import datetime, tzinfo, timedelta
@@ -130,6 +130,12 @@ class I18nBuilder(Builder):
 timestamp = time()
 tzdelta = datetime.fromtimestamp(timestamp) - \
     datetime.utcfromtimestamp(timestamp)
+# set timestamp from SOURCE_DATE_EPOCH if set
+# see https://reproducible-builds.org/specs/source-date-epoch/
+source_date_epoch = getenv('SOURCE_DATE_EPOCH')
+if source_date_epoch is not None:
+    timestamp = float(source_date_epoch)
+    tzdelta = 0
 
 
 class LocalTimeZone(tzinfo):
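The gettext-builder hunk above makes catalog timestamps honour ``SOURCE_DATE_EPOCH`` (see the reproducible-builds.org specification). The same lookup can be sketched as a standalone helper — ``get_build_timestamp`` is an illustrative name, not an API from this diff:

```python
from datetime import datetime
from os import getenv
from time import time


def get_build_timestamp():
    """Return (timestamp, tzdelta): the current local time by default, but a
    fixed UTC timestamp (tzdelta 0) when SOURCE_DATE_EPOCH is set, so that
    generated .pot files do not differ between otherwise-identical builds."""
    timestamp = time()
    tzdelta = datetime.fromtimestamp(timestamp) - \
        datetime.utcfromtimestamp(timestamp)
    source_date_epoch = getenv('SOURCE_DATE_EPOCH')
    if source_date_epoch is not None:
        timestamp = float(source_date_epoch)
        tzdelta = 0
    return timestamp, tzdelta
```

Pinning both the timestamp and the timezone delta matters: a reproducible build must not depend on the clock or the locale of the machine it runs on.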
@@ -205,8 +211,7 @@ class MessageCatalogBuilder(I18nBuilder):
             ensuredir(path.join(self.outdir, path.dirname(textdomain)))
 
             pofn = path.join(self.outdir, textdomain + '.pot')
-            pofile = open(pofn, 'w', encoding='utf-8')
-            try:
+            with open(pofn, 'w', encoding='utf-8') as pofile:
                 pofile.write(POHEADER % data)
 
                 for message in catalog.messages:
@@ -228,6 +233,3 @@ class MessageCatalogBuilder(I18nBuilder):
                         replace('"', r'\"'). \
                         replace('\n', '\\n"\n"')
                     pofile.write('msgid "%s"\nmsgstr ""\n\n' % message)
-
-            finally:
-                pofile.close()
@@ -175,8 +175,7 @@ class StandaloneHTMLBuilder(Builder):
         self.tags_hash = get_stable_hash(sorted(self.tags))
         old_config_hash = old_tags_hash = ''
         try:
-            fp = open(path.join(self.outdir, '.buildinfo'))
-            try:
+            with open(path.join(self.outdir, '.buildinfo')) as fp:
                 version = fp.readline()
                 if version.rstrip() != '# Sphinx build info version 1':
                     raise ValueError
@@ -187,8 +186,6 @@ class StandaloneHTMLBuilder(Builder):
                 tag, old_tags_hash = fp.readline().strip().split(': ')
                 if tag != 'tags':
                     raise ValueError
-            finally:
-                fp.close()
         except ValueError:
             self.warn('unsupported build info format in %r, building all' %
                       path.join(self.outdir, '.buildinfo'))
@ -657,15 +654,12 @@ class StandaloneHTMLBuilder(Builder):
|
|||||||
|
|
||||||
def write_buildinfo(self):
|
def write_buildinfo(self):
|
||||||
# write build info file
|
# write build info file
|
||||||
fp = open(path.join(self.outdir, '.buildinfo'), 'w')
|
with open(path.join(self.outdir, '.buildinfo'), 'w') as fp:
|
||||||
try:
|
|
||||||
fp.write('# Sphinx build info version 1\n'
|
fp.write('# Sphinx build info version 1\n'
|
||||||
'# This file hashes the configuration used when building'
|
'# This file hashes the configuration used when building'
|
||||||
' these files. When it is not found, a full rebuild will'
|
' these files. When it is not found, a full rebuild will'
|
||||||
' be done.\nconfig: %s\ntags: %s\n' %
|
' be done.\nconfig: %s\ntags: %s\n' %
|
||||||
(self.config_hash, self.tags_hash))
|
(self.config_hash, self.tags_hash))
|
||||||
finally:
|
|
||||||
fp.close()
|
|
||||||
|
|
||||||
def cleanup(self):
|
def cleanup(self):
|
||||||
# clean up theme stuff
|
# clean up theme stuff
|
||||||
@ -705,10 +699,8 @@ class StandaloneHTMLBuilder(Builder):
|
|||||||
f = codecs.open(searchindexfn, 'r', encoding='utf-8')
|
f = codecs.open(searchindexfn, 'r', encoding='utf-8')
|
||||||
else:
|
else:
|
||||||
f = open(searchindexfn, 'rb')
|
f = open(searchindexfn, 'rb')
|
||||||
try:
|
with f:
|
||||||
self.indexer.load(f, self.indexer_format)
|
self.indexer.load(f, self.indexer_format)
|
||||||
finally:
|
|
||||||
f.close()
|
|
||||||
except (IOError, OSError, ValueError):
|
except (IOError, OSError, ValueError):
|
||||||
if keep:
|
if keep:
|
||||||
self.warn('search index couldn\'t be loaded, but not all '
|
self.warn('search index couldn\'t be loaded, but not all '
|
||||||
@ -812,11 +804,8 @@ class StandaloneHTMLBuilder(Builder):
|
|||||||
# outfilename's path is in general different from self.outdir
|
# outfilename's path is in general different from self.outdir
|
||||||
ensuredir(path.dirname(outfilename))
|
ensuredir(path.dirname(outfilename))
|
||||||
try:
|
try:
|
||||||
f = codecs.open(outfilename, 'w', encoding, 'xmlcharrefreplace')
|
with codecs.open(outfilename, 'w', encoding, 'xmlcharrefreplace') as f:
|
||||||
try:
|
|
||||||
f.write(output)
|
f.write(output)
|
||||||
finally:
|
|
||||||
f.close()
|
|
||||||
except (IOError, OSError) as err:
|
except (IOError, OSError) as err:
|
||||||
self.warn("error writing file %s: %s" % (outfilename, err))
|
self.warn("error writing file %s: %s" % (outfilename, err))
|
||||||
if self.copysource and ctx.get('sourcename'):
|
if self.copysource and ctx.get('sourcename'):
|
||||||
@ -833,8 +822,7 @@ class StandaloneHTMLBuilder(Builder):
|
|||||||
|
|
||||||
def dump_inventory(self):
|
def dump_inventory(self):
|
||||||
self.info(bold('dumping object inventory... '), nonl=True)
|
self.info(bold('dumping object inventory... '), nonl=True)
|
||||||
f = open(path.join(self.outdir, INVENTORY_FILENAME), 'wb')
|
with open(path.join(self.outdir, INVENTORY_FILENAME), 'wb') as f:
|
||||||
try:
|
|
||||||
f.write((u'# Sphinx inventory version 2\n'
|
f.write((u'# Sphinx inventory version 2\n'
|
||||||
u'# Project: %s\n'
|
u'# Project: %s\n'
|
||||||
u'# Version: %s\n'
|
u'# Version: %s\n'
|
||||||
@ -856,8 +844,6 @@ class StandaloneHTMLBuilder(Builder):
|
|||||||
(u'%s %s:%s %s %s %s\n' % (name, domainname, type,
|
(u'%s %s:%s %s %s %s\n' % (name, domainname, type,
|
||||||
prio, uri, dispname)).encode('utf-8')))
|
prio, uri, dispname)).encode('utf-8')))
|
||||||
f.write(compressor.flush())
|
f.write(compressor.flush())
|
||||||
finally:
|
|
||||||
f.close()
|
|
||||||
self.info('done')
|
self.info('done')
|
||||||
|
|
||||||
def dump_search_index(self):
|
def dump_search_index(self):
|
||||||
@ -872,10 +858,8 @@ class StandaloneHTMLBuilder(Builder):
|
|||||||
f = codecs.open(searchindexfn + '.tmp', 'w', encoding='utf-8')
|
f = codecs.open(searchindexfn + '.tmp', 'w', encoding='utf-8')
|
||||||
else:
|
else:
|
||||||
f = open(searchindexfn + '.tmp', 'wb')
|
f = open(searchindexfn + '.tmp', 'wb')
|
||||||
try:
|
with f:
|
||||||
self.indexer.dump(f, self.indexer_format)
|
self.indexer.dump(f, self.indexer_format)
|
||||||
finally:
|
|
||||||
f.close()
|
|
||||||
movefile(searchindexfn + '.tmp', searchindexfn)
|
movefile(searchindexfn + '.tmp', searchindexfn)
|
||||||
self.info('done')
|
self.info('done')
|
||||||
|
|
||||||
@ -1086,10 +1070,8 @@ class SerializingHTMLBuilder(StandaloneHTMLBuilder):
|
|||||||
f = codecs.open(filename, 'w', encoding='utf-8')
|
f = codecs.open(filename, 'w', encoding='utf-8')
|
||||||
else:
|
else:
|
||||||
f = open(filename, 'wb')
|
f = open(filename, 'wb')
|
||||||
try:
|
with f:
|
||||||
self.implementation.dump(context, f, *self.additional_dump_args)
|
self.implementation.dump(context, f, *self.additional_dump_args)
|
||||||
finally:
|
|
||||||
f.close()
|
|
||||||
|
|
||||||
def handle_page(self, pagename, ctx, templatename='page.html',
|
def handle_page(self, pagename, ctx, templatename='page.html',
|
||||||
outfilename=None, event_arg=None):
|
outfilename=None, event_arg=None):
|
||||||
|
@@ -198,16 +198,12 @@ class HTMLHelpBuilder(StandaloneHTMLBuilder):
 
     def build_hhx(self, outdir, outname):
         self.info('dumping stopword list...')
-        f = self.open_file(outdir, outname+'.stp')
-        try:
+        with self.open_file(outdir, outname+'.stp') as f:
             for word in sorted(stopwords):
                 print(word, file=f)
-        finally:
-            f.close()
 
         self.info('writing project file...')
-        f = self.open_file(outdir, outname+'.hhp')
-        try:
+        with self.open_file(outdir, outname+'.hhp') as f:
             f.write(project_template % {'outname': outname,
                                         'title': self.config.html_title,
                                         'version': self.config.version,
@@ -223,12 +219,9 @@ class HTMLHelpBuilder(StandaloneHTMLBuilder):
                        fn.endswith('.html'):
                     print(path.join(root, fn)[olen:].replace(os.sep, '\\'),
                           file=f)
-        finally:
-            f.close()
 
         self.info('writing TOC file...')
-        f = self.open_file(outdir, outname+'.hhc')
-        try:
+        with self.open_file(outdir, outname+'.hhc') as f:
             f.write(contents_header)
             # special books
             f.write('<LI> ' + object_sitemap % (self.config.html_short_title,
@@ -266,13 +259,10 @@ class HTMLHelpBuilder(StandaloneHTMLBuilder):
             for node in tocdoc.traverse(istoctree):
                 write_toc(node)
             f.write(contents_footer)
-        finally:
-            f.close()
 
         self.info('writing index file...')
         index = self.env.create_index(self)
-        f = self.open_file(outdir, outname+'.hhk')
-        try:
+        with self.open_file(outdir, outname+'.hhk') as f:
             f.write('<UL>\n')
 
             def write_index(title, refs, subitems):
@@ -302,5 +292,3 @@ class HTMLHelpBuilder(StandaloneHTMLBuilder):
             for title, (refs, subitems, key_) in group:
                 write_index(title, refs, subitems)
             f.write('</UL>\n')
-        finally:
-            f.close()
@@ -137,7 +137,7 @@ class LaTeXBuilder(Builder):
         tree = self.env.get_doctree(indexfile)
         contentsname = None
         for toctree in tree.traverse(addnodes.toctree):
-            if toctree['caption']:
+            if 'caption' in toctree:
                 contentsname = toctree['caption']
                 break
 
@@ -243,7 +243,7 @@ class CheckExternalLinksBuilder(Builder):
         elif status == 'broken':
             self.write_entry('broken', docname, lineno, uri + ': ' + info)
             if self.app.quiet or self.app.warningiserror:
-                self.warn('broken link: %s' % uri,
+                self.warn('broken link: %s (%s)' % (uri, info),
                           '%s:%s' % (self.env.doc2path(docname), lineno))
             else:
                 self.info(red('broken ') + uri + red(' - ' + info))
@@ -179,8 +179,7 @@ class QtHelpBuilder(StandaloneHTMLBuilder):
         nspace = nspace.lower()
 
         # write the project file
-        f = codecs.open(path.join(outdir, outname+'.qhp'), 'w', 'utf-8')
-        try:
+        with codecs.open(path.join(outdir, outname+'.qhp'), 'w', 'utf-8') as f:
             f.write(project_template % {
                 'outname': htmlescape(outname),
                 'title': htmlescape(self.config.html_title),
@@ -191,23 +190,18 @@ class QtHelpBuilder(StandaloneHTMLBuilder):
                 'sections': sections,
                 'keywords': keywords,
                 'files': projectfiles})
-        finally:
-            f.close()
 
         homepage = 'qthelp://' + posixpath.join(
             nspace, 'doc', self.get_target_uri(self.config.master_doc))
         startpage = 'qthelp://' + posixpath.join(nspace, 'doc', 'index.html')
 
         self.info('writing collection project file...')
-        f = codecs.open(path.join(outdir, outname+'.qhcp'), 'w', 'utf-8')
-        try:
+        with codecs.open(path.join(outdir, outname+'.qhcp'), 'w', 'utf-8') as f:
             f.write(collection_template % {
                 'outname': htmlescape(outname),
                 'title': htmlescape(self.config.html_short_title),
                 'homepage': htmlescape(homepage),
                 'startpage': htmlescape(startpage)})
-        finally:
-            f.close()
 
     def isdocnode(self, node):
         if not isinstance(node, nodes.list_item):
@@ -220,11 +220,8 @@ class TexinfoBuilder(Builder):
         fn = path.join(self.outdir, 'Makefile')
         self.info(fn, nonl=1)
         try:
-            mkfile = open(fn, 'w')
-            try:
+            with open(fn, 'w') as mkfile:
                 mkfile.write(TEXINFO_MAKEFILE)
-            finally:
-                mkfile.close()
         except (IOError, OSError) as err:
             self.warn("error writing file %s: %s" % (fn, err))
         self.info(' done')
@@ -60,11 +60,8 @@ class TextBuilder(Builder):
         outfilename = path.join(self.outdir, os_path(docname) + self.out_suffix)
         ensuredir(path.dirname(outfilename))
         try:
-            f = codecs.open(outfilename, 'w', 'utf-8')
-            try:
+            with codecs.open(outfilename, 'w', 'utf-8') as f:
                 f.write(self.writer.output)
-            finally:
-                f.close()
         except (IOError, OSError) as err:
             self.warn("error writing file %s: %s" % (outfilename, err))
 
@@ -77,11 +77,8 @@ class XMLBuilder(Builder):
         outfilename = path.join(self.outdir, os_path(docname) + self.out_suffix)
         ensuredir(path.dirname(outfilename))
         try:
-            f = codecs.open(outfilename, 'w', 'utf-8')
-            try:
+            with codecs.open(outfilename, 'w', 'utf-8') as f:
                 f.write(self.writer.output)
-            finally:
-                f.close()
         except (IOError, OSError) as err:
             self.warn("error writing file %s: %s" % (outfilename, err))
 
@@ -10,7 +10,7 @@
 """
 
 import re
-from os import path, environ
+from os import path, environ, getenv
 import shlex
 
 from six import PY2, PY3, iteritems, string_types, binary_type, text_type, integer_types
@@ -19,8 +19,10 @@ from sphinx.errors import ConfigError
 from sphinx.locale import l_
 from sphinx.util.osutil import make_filename, cd
 from sphinx.util.pycompat import execfile_, NoneType
+from sphinx.util.i18n import format_date
 
 nonascii_re = re.compile(br'[\x80-\xff]')
+copyright_year_re = re.compile(r'^((\d{4}-)?)(\d{4})(?=[ ,])')
 
 CONFIG_SYNTAX_ERROR = "There is a syntax error in your configuration file: %s"
 if PY3:
@@ -298,6 +300,15 @@ class Config(object):
         self.setup = config.get('setup', None)
         self.extensions = config.get('extensions', [])
 
+        # correct values of copyright year that are not coherent with
+        # the SOURCE_DATE_EPOCH environment variable (if set)
+        # See https://reproducible-builds.org/specs/source-date-epoch/
+        if getenv('SOURCE_DATE_EPOCH') is not None:
+            for k in ('copyright', 'epub_copyright'):
+                if k in config:
+                    config[k] = copyright_year_re.sub('\g<1>%s' % format_date('%Y'),
+                                                      config[k])
+
     def check_types(self, warn):
         # check all values for deviation from the default value's type, since
        # that can result in TypeErrors all over the place
@@ -173,13 +173,12 @@ class LiteralInclude(Directive):
     }
 
     def read_with_encoding(self, filename, document, codec_info, encoding):
-        f = None
         try:
-            f = codecs.StreamReaderWriter(open(filename, 'rb'), codec_info[2],
-                                          codec_info[3], 'strict')
+            with codecs.StreamReaderWriter(open(filename, 'rb'), codec_info[2],
+                                           codec_info[3], 'strict') as f:
                 lines = f.readlines()
                 lines = dedent_lines(lines, self.options.get('dedent'))
                 return lines
         except (IOError, OSError):
             return [document.reporter.warning(
                 'Include file %r not found or reading it failed' % filename,
@@ -189,9 +188,6 @@ class LiteralInclude(Directive):
                 'Encoding %r used for reading included file %r seems to '
                 'be wrong, try giving an :encoding: option' %
                 (encoding, filename))]
-        finally:
-            if f is not None:
-                f.close()
 
     def run(self):
         document = self.state.document
@@ -28,6 +28,10 @@ class Figure(images.Figure):
             self.options['name'] = name
         self.add_name(figure_node)
 
+        # fill lineno using image node
+        if figure_node.line is None and len(figure_node) == 2:
+            figure_node.line = figure_node[1].line
+
         return [figure_node]
 
 
@@ -2587,7 +2587,7 @@ class Symbol(object):
             s = s._find_named_symbol(identifier, templateParams,
                                      templateArgs, operator,
                                      templateShorthand=False,
-                                     matchSelf=True)
+                                     matchSelf=False)
             if not s:
                 return None
         return s
@@ -4100,7 +4100,7 @@ class CPPDomain(Domain):
             parser.assert_end()
         except DefinitionError as e:
             warner.warn('Unparseable C++ cross-reference: %r\n%s'
-                        % (target, str(e.description)))
+                        % (target, text_type(e.description)))
             return None, None
         parentKey = node.get("cpp:parent_key", None)
         rootSymbol = self.data['root_symbol']
@@ -532,6 +532,9 @@ class StandardDomain(Domain):
             if labelid is None:
                 continue
             node = document.ids[labelid]
+            if node.tagname == 'target' and 'refid' in node:  # indirect hyperlink targets
+                node = document.ids.get(node['refid'])
+                labelid = node['names'][0]
             if name.isdigit() or 'refuri' in node or \
                node.tagname.startswith('desc_'):
                 # ignore footnote labels, labels automatically generated from a
@@ -545,7 +548,7 @@ class StandardDomain(Domain):
                 sectname = clean_astext(node[0])  # node[0] == title node
             elif self.is_enumerable_node(node):
                 sectname = self.get_numfig_title(node)
-                if sectname is None:
+                if not sectname:
                     continue
             elif node.traverse(addnodes.toctree):
                 n = node.traverse(addnodes.toctree)[0]
@ -75,7 +75,7 @@ default_settings = {
|
|||||||
# or changed to properly invalidate pickle files.
|
# or changed to properly invalidate pickle files.
|
||||||
#
|
#
|
||||||
# NOTE: increase base version by 2 to have distinct numbers for Py2 and 3
|
# NOTE: increase base version by 2 to have distinct numbers for Py2 and 3
|
||||||
ENV_VERSION = 47 + (sys.version_info[0] - 2)
|
ENV_VERSION = 48 + (sys.version_info[0] - 2)
|
||||||
|
|
||||||
|
|
||||||
dummy_reporter = Reporter('', 4, 4)
|
dummy_reporter = Reporter('', 4, 4)
|
||||||
@ -103,11 +103,8 @@ class BuildEnvironment:
|
|||||||
|
|
||||||
@staticmethod
|
@staticmethod
|
||||||
def frompickle(srcdir, config, filename):
|
def frompickle(srcdir, config, filename):
|
||||||
picklefile = open(filename, 'rb')
|
with open(filename, 'rb') as picklefile:
|
||||||
try:
|
|
||||||
env = pickle.load(picklefile)
|
env = pickle.load(picklefile)
|
||||||
finally:
|
|
||||||
picklefile.close()
|
|
||||||
if env.version != ENV_VERSION:
|
if env.version != ENV_VERSION:
|
||||||
raise IOError('build environment version not current')
|
raise IOError('build environment version not current')
|
||||||
if env.srcdir != srcdir:
|
if env.srcdir != srcdir:
|
||||||
@ -123,7 +120,6 @@ class BuildEnvironment:
|
|||||||
del self.config.values
|
del self.config.values
|
||||||
domains = self.domains
|
domains = self.domains
|
||||||
del self.domains
|
del self.domains
|
||||||
picklefile = open(filename, 'wb')
|
|
||||||
# remove potentially pickling-problematic values from config
|
# remove potentially pickling-problematic values from config
|
||||||
for key, val in list(vars(self.config).items()):
|
for key, val in list(vars(self.config).items()):
|
||||||
if key.startswith('_') or \
|
if key.startswith('_') or \
|
||||||
@ -131,10 +127,8 @@ class BuildEnvironment:
|
|||||||
isinstance(val, types.FunctionType) or \
|
isinstance(val, types.FunctionType) or \
|
||||||
isinstance(val, class_types):
|
isinstance(val, class_types):
|
||||||
del self.config[key]
|
del self.config[key]
|
||||||
try:
|
with open(filename, 'wb') as picklefile:
|
||||||
pickle.dump(self, picklefile, pickle.HIGHEST_PROTOCOL)
|
pickle.dump(self, picklefile, pickle.HIGHEST_PROTOCOL)
|
||||||
finally:
|
|
||||||
picklefile.close()
|
|
||||||
# reset attributes
|
# reset attributes
|
||||||
self.domains = domains
|
self.domains = domains
|
||||||
self.config.values = values
|
self.config.values = values
|
||||||
@ -751,12 +745,9 @@ class BuildEnvironment:
|
|||||||
if self.versioning_compare:
|
if self.versioning_compare:
|
||||||
# get old doctree
|
# get old doctree
|
||||||
try:
|
try:
|
||||||
f = open(self.doc2path(docname,
|
with open(self.doc2path(docname,
|
||||||
self.doctreedir, '.doctree'), 'rb')
|
self.doctreedir, '.doctree'), 'rb') as f:
|
||||||
try:
|
|
||||||
old_doctree = pickle.load(f)
|
old_doctree = pickle.load(f)
|
||||||
finally:
|
|
||||||
f.close()
|
|
||||||
except EnvironmentError:
|
except EnvironmentError:
|
||||||
pass
|
pass
|
||||||
|
|
||||||
@ -786,11 +777,8 @@ class BuildEnvironment:
|
|||||||
doctree_filename = self.doc2path(docname, self.doctreedir,
|
doctree_filename = self.doc2path(docname, self.doctreedir,
|
||||||
'.doctree')
|
'.doctree')
|
||||||
ensuredir(path.dirname(doctree_filename))
|
ensuredir(path.dirname(doctree_filename))
|
||||||
f = open(doctree_filename, 'wb')
|
with open(doctree_filename, 'wb') as f:
|
||||||
try:
|
|
||||||
pickle.dump(doctree, f, pickle.HIGHEST_PROTOCOL)
|
pickle.dump(doctree, f, pickle.HIGHEST_PROTOCOL)
|
||||||
finally:
|
|
||||||
f.close()
|
|
||||||
|
|
||||||
# utilities to use while reading a document
|
# utilities to use while reading a document
|
||||||
|
|
||||||
@ -908,11 +896,13 @@ class BuildEnvironment:
|
|||||||
node['candidates'] = candidates = {}
|
node['candidates'] = candidates = {}
|
||||||
imguri = node['uri']
|
imguri = node['uri']
|
||||||
if imguri.startswith('data:'):
|
if imguri.startswith('data:'):
|
||||||
self.warn_node('image data URI found. some builders might not support', node)
|
self.warn_node('image data URI found. some builders might not support', node,
|
||||||
|
type='image', subtype='data_uri')
|
||||||
candidates['?'] = imguri
|
candidates['?'] = imguri
|
||||||
continue
|
continue
|
||||||
elif imguri.find('://') != -1:
|
elif imguri.find('://') != -1:
|
||||||
self.warn_node('nonlocal image URI found: %s' % imguri, node)
|
self.warn_node('nonlocal image URI found: %s' % imguri, node,
|
||||||
|
type='image', subtype='nonlocal_uri')
|
||||||
candidates['?'] = imguri
|
candidates['?'] = imguri
|
||||||
continue
|
continue
|
||||||
rel_imgpath, full_imgpath = self.relfn2path(imguri, docname)
|
rel_imgpath, full_imgpath = self.relfn2path(imguri, docname)
|
||||||
@ -1224,11 +1214,8 @@ class BuildEnvironment:
|
|||||||
def get_doctree(self, docname):
|
def get_doctree(self, docname):
|
||||||
"""Read the doctree for a file from the pickle and return it."""
|
"""Read the doctree for a file from the pickle and return it."""
|
||||||
doctree_filename = self.doc2path(docname, self.doctreedir, '.doctree')
|
doctree_filename = self.doc2path(docname, self.doctreedir, '.doctree')
|
||||||
f = open(doctree_filename, 'rb')
|
with open(doctree_filename, 'rb') as f:
|
||||||
try:
|
|
||||||
doctree = pickle.load(f)
|
doctree = pickle.load(f)
|
||||||
finally:
|
|
||||||
f.close()
|
|
||||||
doctree.settings.env = self
|
doctree.settings.env = self
|
||||||
doctree.reporter = Reporter(self.doc2path(docname), 2, 5,
|
doctree.reporter = Reporter(self.doc2path(docname), 2, 5,
|
||||||
stream=WarningStream(self._warnfunc))
|
stream=WarningStream(self._warnfunc))
|
||||||
@ -1909,6 +1896,10 @@ class BuildEnvironment:
|
|||||||
traversed = set()
|
traversed = set()
|
||||||
|
|
||||||
def traverse_toctree(parent, docname):
|
def traverse_toctree(parent, docname):
|
||||||
|
if parent == docname:
|
||||||
|
self.warn(docname, 'self referenced toctree found. Ignored.')
|
||||||
|
return
|
||||||
|
|
||||||
# traverse toctree by pre-order
|
# traverse toctree by pre-order
|
||||||
yield parent, docname
|
yield parent, docname
|
||||||
traversed.add(docname)
|
traversed.add(docname)
|
||||||
|
@ -35,7 +35,10 @@ from sphinx.util.inspect import getargspec, isdescriptor, safe_getmembers, \
|
|||||||
from sphinx.util.docstrings import prepare_docstring
|
from sphinx.util.docstrings import prepare_docstring
|
||||||
|
|
||||||
try:
|
try:
|
||||||
import typing
|
if sys.version_info >= (3,):
|
||||||
|
import typing
|
||||||
|
else:
|
||||||
|
typing = None
|
||||||
except ImportError:
|
except ImportError:
|
||||||
typing = None
|
typing = None
|
||||||
|
|
||||||
@ -269,9 +272,17 @@ def format_annotation(annotation):
|
|||||||
if isinstance(annotation, typing.TypeVar):
|
if isinstance(annotation, typing.TypeVar):
|
||||||
return annotation.__name__
|
return annotation.__name__
|
||||||
elif hasattr(typing, 'GenericMeta') and \
|
elif hasattr(typing, 'GenericMeta') and \
|
||||||
isinstance(annotation, typing.GenericMeta) and \
|
isinstance(annotation, typing.GenericMeta):
|
||||||
hasattr(annotation, '__parameters__'):
|
# In Python 3.5.2+, all arguments are stored in __args__,
|
||||||
params = annotation.__parameters__
|
# whereas __parameters__ only contains generic parameters.
|
||||||
|
#
|
||||||
|
# Prior to Python 3.5.2, __args__ is not available, and all
|
||||||
|
# arguments are in __parameters__.
|
||||||
|
params = None
|
||||||
|
if hasattr(annotation, '__args__'):
|
||||||
|
params = annotation.__args__
|
||||||
|
elif hasattr(annotation, '__parameters__'):
|
||||||
|
params = annotation.__parameters__
|
||||||
if params is not None:
|
if params is not None:
|
||||||
param_str = ', '.join(format_annotation(p) for p in params)
|
param_str = ', '.join(format_annotation(p) for p in params)
|
||||||
return '%s[%s]' % (qualified_name, param_str)
|
return '%s[%s]' % (qualified_name, param_str)
|
||||||
|
@ -337,7 +337,7 @@ class Autosummary(Directive):
|
|||||||
*items* is a list produced by :meth:`get_items`.
|
*items* is a list produced by :meth:`get_items`.
|
||||||
"""
|
"""
|
||||||
table_spec = addnodes.tabular_col_spec()
|
table_spec = addnodes.tabular_col_spec()
|
||||||
table_spec['spec'] = 'll'
|
table_spec['spec'] = 'p{0.5\linewidth}p{0.5\linewidth}'
|
||||||
|
|
||||||
table = autosummary_table('')
|
table = autosummary_table('')
|
||||||
real_table = nodes.table('', classes=['longtable'])
|
real_table = nodes.table('', classes=['longtable'])
|
||||||
|
@ -87,8 +87,7 @@ class CoverageBuilder(Builder):
|
|||||||
c_objects = self.env.domaindata['c']['objects']
|
c_objects = self.env.domaindata['c']['objects']
|
||||||
for filename in self.c_sourcefiles:
|
for filename in self.c_sourcefiles:
|
||||||
undoc = set()
|
undoc = set()
|
||||||
f = open(filename, 'r')
|
with open(filename, 'r') as f:
|
||||||
try:
|
|
||||||
for line in f:
|
for line in f:
|
||||||
for key, regex in self.c_regexes:
|
for key, regex in self.c_regexes:
|
||||||
match = regex.match(line)
|
match = regex.match(line)
|
||||||
@ -101,15 +100,12 @@ class CoverageBuilder(Builder):
|
|||||||
else:
|
else:
|
||||||
undoc.add((key, name))
|
undoc.add((key, name))
|
||||||
continue
|
continue
|
||||||
finally:
|
|
||||||
f.close()
|
|
||||||
if undoc:
|
if undoc:
|
||||||
self.c_undoc[filename] = undoc
|
self.c_undoc[filename] = undoc
|
||||||
|
|
||||||
def write_c_coverage(self):
|
def write_c_coverage(self):
|
||||||
output_file = path.join(self.outdir, 'c.txt')
|
         output_file = path.join(self.outdir, 'c.txt')
-        op = open(output_file, 'w')
-        try:
+        with open(output_file, 'w') as op:
             if self.config.coverage_write_headline:
                 write_header(op, 'Undocumented C API elements', '=')
             op.write('\n')
@@ -119,8 +115,6 @@ class CoverageBuilder(Builder):
                 for typ, name in sorted(undoc):
                     op.write(' * %-50s [%9s]\n' % (name, typ))
                 op.write('\n')
-        finally:
-            op.close()
 
     def build_py_coverage(self):
         objects = self.env.domaindata['py']['objects']
@@ -214,9 +208,8 @@ class CoverageBuilder(Builder):
 
     def write_py_coverage(self):
         output_file = path.join(self.outdir, 'python.txt')
-        op = open(output_file, 'w')
         failed = []
-        try:
+        with open(output_file, 'w') as op:
             if self.config.coverage_write_headline:
                 write_header(op, 'Undocumented Python objects', '=')
             keys = sorted(self.py_undoc.keys())
@@ -247,17 +240,12 @@ class CoverageBuilder(Builder):
             if failed:
                 write_header(op, 'Modules that failed to import')
                 op.writelines(' * %s -- %s\n' % x for x in failed)
-        finally:
-            op.close()
 
     def finish(self):
         # dump the coverage data to a pickle file too
         picklepath = path.join(self.outdir, 'undoc.pickle')
-        dumpfile = open(picklepath, 'wb')
-        try:
+        with open(picklepath, 'wb') as dumpfile:
             pickle.dump((self.py_undoc, self.c_undoc), dumpfile)
-        finally:
-            dumpfile.close()
 
 
 def setup(app):
@@ -43,6 +43,8 @@ class graphviz(nodes.General, nodes.Inline, nodes.Element):
 
 def figure_wrapper(directive, node, caption):
     figure_node = nodes.figure('', node)
+    if 'align' in node:
+        figure_node['align'] = node.attributes.pop('align')
 
     parsed = nodes.Element()
     directive.state.nested_parse(ViewList([caption], source=''),
@@ -55,6 +57,10 @@ def figure_wrapper(directive, node, caption):
     return figure_node
 
 
+def align_spec(argument):
+    return directives.choice(argument, ('left', 'center', 'right'))
+
+
 class Graphviz(Directive):
     """
     Directive to insert arbitrary dot markup.
@@ -65,6 +71,7 @@ class Graphviz(Directive):
    final_argument_whitespace = False
    option_spec = {
        'alt': directives.unchanged,
+       'align': align_spec,
        'inline': directives.flag,
        'caption': directives.unchanged,
        'graphviz_dot': directives.unchanged,
@@ -82,11 +89,8 @@ class Graphviz(Directive):
             rel_filename, filename = env.relfn2path(argument)
             env.note_dependency(rel_filename)
             try:
-                fp = codecs.open(filename, 'r', 'utf-8')
-                try:
+                with codecs.open(filename, 'r', 'utf-8') as fp:
                     dotcode = fp.read()
-                finally:
-                    fp.close()
             except (IOError, OSError):
                 return [document.reporter.warning(
                     'External Graphviz file %r not found or reading '
@@ -104,6 +108,8 @@ class Graphviz(Directive):
             node['options']['graphviz_dot'] = self.options['graphviz_dot']
         if 'alt' in self.options:
             node['alt'] = self.options['alt']
+        if 'align' in self.options:
+            node['align'] = self.options['align']
         if 'inline' in self.options:
             node['inline'] = True
 
@@ -124,6 +130,7 @@ class GraphvizSimple(Directive):
    final_argument_whitespace = False
    option_spec = {
        'alt': directives.unchanged,
+       'align': align_spec,
        'inline': directives.flag,
        'caption': directives.unchanged,
        'graphviz_dot': directives.unchanged,
@@ -138,6 +145,8 @@ class GraphvizSimple(Directive):
             node['options']['graphviz_dot'] = self.options['graphviz_dot']
         if 'alt' in self.options:
             node['alt'] = self.options['alt']
+        if 'align' in self.options:
+            node['align'] = self.options['align']
         if 'inline' in self.options:
             node['inline'] = True
 
@@ -239,11 +248,11 @@ def render_dot_html(self, node, code, options, prefix='graphviz',
             <p class="warning">%s</p></object>\n''' % (fname, alt)
             self.body.append(svgtag)
         else:
-            mapfile = open(outfn + '.map', 'rb')
-            try:
+            if 'align' in node:
+                self.body.append('<div align="%s" class="align-%s">' %
+                                 (node['align'], node['align']))
+            with open(outfn + '.map', 'rb') as mapfile:
                 imgmap = mapfile.readlines()
-            finally:
-                mapfile.close()
             if len(imgmap) == 2:
                 # nothing in image map (the lines are <map> and </map>)
                 self.body.append('<img src="%s" alt="%s" %s/>\n' %
@@ -254,6 +263,8 @@ def render_dot_html(self, node, code, options, prefix='graphviz',
                 self.body.append('<img src="%s" alt="%s" usemap="#%s" %s/>\n' %
                                  (fname, alt, mapname, imgcss))
                 self.body.extend([item.decode('utf-8') for item in imgmap])
+            if 'align' in node:
+                self.body.append('</div>\n')
 
     raise nodes.SkipNode
 
@@ -277,8 +288,19 @@ def render_dot_latex(self, node, code, options, prefix='graphviz'):
         para_separator = '\n'
 
     if fname is not None:
+        post = None
+        if not is_inline and 'align' in node:
+            if node['align'] == 'left':
+                self.body.append('{')
+                post = '\\hspace*{\\fill}}'
+            elif node['align'] == 'right':
+                self.body.append('{\\hspace*{\\fill}')
+                post = '}'
         self.body.append('%s\\includegraphics{%s}%s' %
                          (para_separator, fname, para_separator))
+        if post:
+            self.body.append(post)
 
     raise nodes.SkipNode
 
 
@@ -22,7 +22,7 @@ from six import text_type
 from docutils import nodes
 
 import sphinx
-from sphinx.errors import SphinxError
+from sphinx.errors import SphinxError, ExtensionError
 from sphinx.util.png import read_png_depth, write_png_depth
 from sphinx.util.osutil import ensuredir, ENOENT, cd
 from sphinx.util.pycompat import sys_encoding
@@ -162,8 +162,6 @@ def render_math(self, math):
         image_translator_args += ['-o', outfn]
         # add custom ones from config value
         image_translator_args.extend(self.builder.config.imgmath_dvisvgm_args)
-        # last, the input file name
-        image_translator_args.append(path.join(tempdir, 'math.dvi'))
     else:
         raise MathExtError(
             'imgmath_image_format must be either "png" or "svg"')
@@ -178,14 +176,14 @@ def render_math(self, math):
             raise
         self.builder.warn('%s command %r cannot be run (needed for math '
                           'display), check the imgmath_%s setting' %
-                          image_translator, image_translator_executable,
-                          image_translator)
+                          (image_translator, image_translator_executable,
+                           image_translator))
         self.builder._imgmath_warned_image_translator = True
         return None, None
 
     stdout, stderr = p.communicate()
     if p.returncode != 0:
-        raise MathExtError('%s exited with error',
+        raise MathExtError('%s exited with error' %
                            image_translator, stderr, stdout)
     depth = None
     if use_preview and image_format == 'png':  # depth is only useful for png
@@ -267,7 +265,11 @@ def html_visit_displaymath(self, node):
 
 
 def setup(app):
-    mathbase_setup(app, (html_visit_math, None), (html_visit_displaymath, None))
+    try:
+        mathbase_setup(app, (html_visit_math, None), (html_visit_displaymath, None))
+    except ExtensionError:
+        raise ExtensionError('sphinx.ext.imgmath: other math package is already loaded')
+
     app.add_config_value('imgmath_image_format', 'png', 'html')
     app.add_config_value('imgmath_dvipng', 'dvipng', 'html')
     app.add_config_value('imgmath_dvisvgm', 'dvisvgm', 'html')
@@ -33,7 +33,7 @@ import posixpath
 from os import path
 import re
 
-from six import iteritems
+from six import iteritems, string_types
 from six.moves.urllib import parse, request
 from docutils import nodes
 from docutils.utils import relative_path
@@ -150,7 +150,10 @@ def _strip_basic_auth(url):
     password = url_parts.password
     frags = list(url_parts)
     # swap out "user[:pass]@hostname" for "hostname"
-    frags[1] = url_parts.hostname
+    if url_parts.port:
+        frags[1] = "%s:%s" % (url_parts.hostname, url_parts.port)
+    else:
+        frags[1] = url_parts.hostname
     url = parse.urlunsplit(frags)
     return (url, username, password)
 
@@ -177,7 +180,7 @@ def _read_from_url(url):
         password_mgr = request.HTTPPasswordMgrWithDefaultRealm()
         password_mgr.add_password(None, url, username, password)
         handler = request.HTTPBasicAuthHandler(password_mgr)
-        opener = request.build_opener(default_handlers + [handler])
+        opener = request.build_opener(*(default_handlers + [handler]))
     else:
         opener = default_opener
 
@@ -268,8 +271,9 @@ def load_mappings(app):
         if isinstance(value, tuple):
             # new format
             name, (uri, inv) = key, value
-            if not name.isalnum():
-                app.warn('intersphinx identifier %r is not alphanumeric' % name)
+            if not isinstance(name, string_types):
+                app.warn('intersphinx identifier %r is not string. Ignored' % name)
+                continue
         else:
             # old format, no name
             name, uri, inv = None, key, value
@@ -56,7 +56,11 @@ def builder_inited(app):
 
 
 def setup(app):
-    mathbase_setup(app, (html_visit_math, None), (html_visit_displaymath, None))
+    try:
+        mathbase_setup(app, (html_visit_math, None), (html_visit_displaymath, None))
+    except ExtensionError:
+        raise ExtensionError('sphinx.ext.jsmath: other math package is already loaded')
+
     app.add_config_value('jsmath_path', '', False)
     app.connect('builder-inited', builder_inited)
     return {'version': sphinx.__display_version__, 'parallel_read_safe': True}
@@ -14,7 +14,7 @@
 from docutils import nodes
 
 import sphinx
-from sphinx.application import ExtensionError
+from sphinx.errors import ExtensionError
 from sphinx.ext.mathbase import setup_math as mathbase_setup
 
 
@@ -29,9 +29,7 @@ def html_visit_math(self, node):
 def html_visit_displaymath(self, node):
     self.body.append(self.starttag(node, 'div', CLASS='math'))
     if node['nowrap']:
-        self.body.append(self.builder.config.mathjax_display[0] +
-                         self.encode(node['latex']) +
-                         self.builder.config.mathjax_display[1])
+        self.body.append(self.encode(node['latex']))
         self.body.append('</div>')
         raise nodes.SkipNode
 
@@ -65,7 +63,11 @@ def builder_inited(app):
 
 
 def setup(app):
-    mathbase_setup(app, (html_visit_math, None), (html_visit_displaymath, None))
+    try:
+        mathbase_setup(app, (html_visit_math, None), (html_visit_displaymath, None))
+    except ExtensionError:
+        raise ExtensionError('sphinx.ext.mathjax: other math package is already loaded')
+
     # more information for mathjax secure url is here:
     # http://docs.mathjax.org/en/latest/start.html#secure-access-to-the-cdn
     app.add_config_value('mathjax_path',
@@ -74,4 +76,5 @@ def setup(app):
     app.add_config_value('mathjax_inline', [r'\(', r'\)'], 'html')
     app.add_config_value('mathjax_display', [r'\[', r'\]'], 'html')
     app.connect('builder-inited', builder_inited)
+
     return {'version': sphinx.__display_version__, 'parallel_read_safe': True}
@@ -33,6 +33,7 @@ class Config(object):
     # Napoleon settings
     napoleon_google_docstring = True
     napoleon_numpy_docstring = True
+    napoleon_include_init_with_doc = False
    napoleon_include_private_with_doc = False
    napoleon_include_special_with_doc = False
    napoleon_use_admonition_for_examples = False
@@ -56,6 +57,21 @@ class Config(object):
     napoleon_numpy_docstring : bool, defaults to True
         True to parse `NumPy style`_ docstrings. False to disable support
         for NumPy style docstrings.
+    napoleon_include_init_with_doc : bool, defaults to False
+        True to include init methods (i.e. ``__init___``) with
+        docstrings in the documentation. False to fall back to Sphinx's
+        default behavior.
+
+        **If True**::
+
+            def __init__(self):
+                \"\"\"
+                This will be included in the docs because it has a docstring
+                \"\"\"
+
+            def __init__(self):
+                # This will NOT be included in the docs
+
     napoleon_include_private_with_doc : bool, defaults to False
         True to include private members (like ``_membername``) with docstrings
         in the documentation. False to fall back to Sphinx's default behavior.
@@ -223,6 +239,7 @@ class Config(object):
     _config_values = {
         'napoleon_google_docstring': (True, 'env'),
         'napoleon_numpy_docstring': (True, 'env'),
+        'napoleon_include_init_with_doc': (False, 'env'),
        'napoleon_include_private_with_doc': (False, 'env'),
        'napoleon_include_special_with_doc': (False, 'env'),
        'napoleon_use_admonition_for_examples': (False, 'env'),
@@ -346,8 +363,10 @@ def _skip_member(app, what, name, obj, skip, options):
     """Determine if private and special class members are included in docs.
 
     The following settings in conf.py determine if private and special class
-    members are included in the generated documentation:
+    members or init methods are included in the generated documentation:
 
+    * ``napoleon_include_init_with_doc`` --
+      include init methods if they have docstrings
    * ``napoleon_include_private_with_doc`` --
      include private members if they have docstrings
    * ``napoleon_include_special_with_doc`` --
@@ -384,7 +403,7 @@ def _skip_member(app, what, name, obj, skip, options):
     """
     has_doc = getattr(obj, '__doc__', False)
     is_member = (what == 'class' or what == 'exception' or what == 'module')
-    if name != '__weakref__' and name != '__init__' and has_doc and is_member:
+    if name != '__weakref__' and has_doc and is_member:
         cls_is_owner = False
         if what == 'class' or what == 'exception':
             if PY2:
@@ -417,10 +436,16 @@ def _skip_member(app, what, name, obj, skip, options):
                     cls_is_owner = True
 
         if what == 'module' or cls_is_owner:
-            is_special = name.startswith('__') and name.endswith('__')
-            is_private = not is_special and name.startswith('_')
+            is_init = (name == '__init__')
+            is_special = (not is_init and name.startswith('__') and
+                          name.endswith('__'))
+            is_private = (not is_init and not is_special and
+                          name.startswith('_'))
+            inc_init = app.config.napoleon_include_init_with_doc
             inc_special = app.config.napoleon_include_special_with_doc
             inc_private = app.config.napoleon_include_private_with_doc
-            if (is_special and inc_special) or (is_private and inc_private):
+            if ((is_special and inc_special) or
+                    (is_private and inc_private) or
+                    (is_init and inc_init)):
                 return False
     return skip
@@ -23,7 +23,7 @@ from six import text_type
 from docutils import nodes
 
 import sphinx
-from sphinx.errors import SphinxError
+from sphinx.errors import SphinxError, ExtensionError
 from sphinx.util.png import read_png_depth, write_png_depth
 from sphinx.util.osutil import ensuredir, ENOENT, cd
 from sphinx.util.pycompat import sys_encoding
@@ -239,7 +239,11 @@ def html_visit_displaymath(self, node):
 
 def setup(app):
     app.warn('sphinx.ext.pngmath has been deprecated. Please use sphinx.ext.imgmath instead.')
-    mathbase_setup(app, (html_visit_math, None), (html_visit_displaymath, None))
+    try:
+        mathbase_setup(app, (html_visit_math, None), (html_visit_displaymath, None))
+    except ExtensionError:
+        raise ExtensionError('sphinx.ext.pngmath: other math package is already loaded')
+
     app.add_config_value('pngmath_dvipng', 'dvipng', 'html')
     app.add_config_value('pngmath_latex', 'latex', 'html')
     app.add_config_value('pngmath_use_preview', False, 'html')
sphinx/io.py (20 changes)
@@ -112,14 +112,26 @@ class SphinxFileInput(FileInput):
         return data.decode(self.encoding, 'sphinx')  # py2: decoding
 
     def read(self):
+        def get_parser_type(docname):
+            path = self.env.doc2path(docname)
+            for suffix in self.env.config.source_parsers:
+                if path.endswith(suffix):
+                    parser_class = self.env.config.source_parsers[suffix]
+                    if isinstance(parser_class, string_types):
+                        parser_class = import_object(parser_class, 'source parser')
+                    return parser_class.supported
+            else:
+                return ('restructuredtext',)
+
         data = FileInput.read(self)
         if self.app:
             arg = [data]
             self.app.emit('source-read', self.env.docname, arg)
             data = arg[0]
         docinfo, data = split_docinfo(data)
-        if self.env.config.rst_epilog:
-            data = data + '\n' + self.env.config.rst_epilog + '\n'
-        if self.env.config.rst_prolog:
-            data = self.env.config.rst_prolog + '\n' + data
+        if 'restructuredtext' in get_parser_type(self.env.docname):
+            if self.env.config.rst_epilog:
+                data = data + '\n' + self.env.config.rst_epilog + '\n'
+            if self.env.config.rst_prolog:
+                data = self.env.config.rst_prolog + '\n' + data
         return docinfo + data
@@ -70,10 +70,8 @@ class SphinxFileSystemLoader(FileSystemLoader):
             f = open_if_exists(filename)
             if f is None:
                 continue
-            try:
+            with f:
                 contents = f.read().decode(self.encoding)
-            finally:
-                f.close()
 
             mtime = path.getmtime(filename)
 
@@ -195,7 +195,7 @@ else:
         return translators['sphinx'].ugettext(message)
 
 
-def init(locale_dirs, language, catalog='sphinx', charset='utf-8'):
+def init(locale_dirs, language, catalog='sphinx'):
     """Look for message catalogs in `locale_dirs` and *ensure* that there is at
     least a NullTranslations catalog set in `translators`. If called multiple
     times or if several ``.mo`` files are found, their contents are merged
@@ -209,13 +209,6 @@ def init(locale_dirs, language, catalog='sphinx', charset='utf-8'):
     # the None entry is the system's default locale path
     has_translation = True
 
-    # compile mo files if po file is updated
-    # TODO: remove circular importing
-    from sphinx.util.i18n import find_catalog_source_files
-    for catinfo in find_catalog_source_files(locale_dirs, language, domains=[catalog],
-                                             charset=charset):
-        catinfo.write_mo(language)
-
     # loading
     for dir_ in locale_dirs:
         try:
@@ -92,11 +92,8 @@ class Driver(object):
 
     def parse_file(self, filename, debug=False):
         """Parse a file and return the syntax tree."""
-        stream = open(filename)
-        try:
+        with open(filename) as stream:
             return self.parse_stream(stream, debug)
-        finally:
-            stream.close()
 
     def parse_string(self, text, debug=False):
         """Parse a string and return the syntax tree."""
@@ -219,7 +219,7 @@ html_theme = 'alabaster'
 # html_logo = None
 
 # The name of an image file (relative to this directory) to use as a favicon of
 # the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
 # pixels large.
 #
 # html_favicon = None
@@ -534,16 +534,6 @@ SPHINXBUILD = sphinx-build
 PAPER =
 BUILDDIR = %(rbuilddir)s
 
-# User-friendly check for sphinx-build
-ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
-\t$(error \
-The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx \
-installed, then set the SPHINXBUILD environment variable to point \
-to the full path of the '$(SPHINXBUILD)' executable. Alternatively you \
-can add the directory with the executable to your PATH. \
-If you don\\'t have Sphinx installed, grab it from http://sphinx-doc.org/)
-endif
-
 # Internal variables.
 PAPEROPT_a4 = -D latex_paper_size=a4
 PAPEROPT_letter = -D latex_paper_size=letter
@@ -1077,16 +1067,6 @@ SPHINXPROJ = %(project_fn)s
 SOURCEDIR = %(rsrcdir)s
 BUILDDIR = %(rbuilddir)s
 
-# User-friendly check for sphinx-build.
-ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
-$(error \
-The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx \
-installed, then set the SPHINXBUILD environment variable to point \
-to the full path of the '$(SPHINXBUILD)' executable. Alternatively you \
-can add the directory with the executable to your PATH. \
-If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
-endif
-
 # Has to be explicit, otherwise we don't get "make" without targets right.
 help:
 \t@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
@@ -1487,11 +1467,8 @@ def generate(d, overwrite=True, silent=False):
     def write_file(fpath, content, newline=None):
         if overwrite or not path.isfile(fpath):
             print('Creating file %s.' % fpath)
-            f = open(fpath, 'wt', encoding='utf-8', newline=newline)
-            try:
+            with open(fpath, 'wt', encoding='utf-8', newline=newline) as f:
                 f.write(content)
-            finally:
-                f.close()
         else:
             print('File %s already exists, skipping.' % fpath)
 
@@ -376,7 +376,7 @@ class IndexBuilder(object):
             try:
                 return self._stem_cache[word]
             except KeyError:
-                self._stem_cache[word] = self.lang.stem(word)
+                self._stem_cache[word] = self.lang.stem(word).lower()
                 return self._stem_cache[word]
         _filter = self.lang.word_filter
 
@ -311,5 +311,4 @@ class SearchGerman(SearchLanguage):
|
|||||||
self.stemmer = snowballstemmer.stemmer('german')
|
self.stemmer = snowballstemmer.stemmer('german')
|
||||||
|
|
||||||
def stem(self, word):
|
def stem(self, word):
|
||||||
word = word.lower()
|
|
||||||
return self.stemmer.stemWord(word)
|
return self.stemmer.stemWord(word)
|
||||||
|
@ -237,7 +237,6 @@ class SearchEnglish(SearchLanguage):
|
|||||||
make at least the stem method nicer.
|
make at least the stem method nicer.
|
||||||
"""
|
"""
|
||||||
def stem(self, word):
|
def stem(self, word):
|
||||||
word = word.lower()
|
|
||||||
return PorterStemmer.stem(self, word, 0, len(word) - 1)
|
return PorterStemmer.stem(self, word, 0, len(word) - 1)
|
||||||
|
|
||||||
self.stemmer = Stemmer()
|
self.stemmer = Stemmer()
|
||||||
|
@@ -533,7 +533,7 @@ class SearchJapanese(SearchLanguage):
     language_name = 'Japanese'
     splitters = {
         'default': 'sphinx.search.ja.DefaultSplitter',
-        'mecab': 'sphinx.sarch.ja.MecabSplitter',
+        'mecab': 'sphinx.search.ja.MecabSplitter',
         'janome': 'sphinx.search.ja.JanomeSplitter',
     }
 
@@ -556,4 +556,4 @@ class SearchJapanese(SearchLanguage):
         return len(stemmed_word) > 1
 
     def stem(self, word):
-        return word.lower()
+        return word
@@ -256,7 +256,6 @@ class SearchChinese(SearchLanguage):
             make at least the stem method nicer.
             """
             def stem(self, word):
-                word = word.lower()
                 return PorterStemmer.stem(self, word, 0, len(word) - 1)
 
         self.stemmer = Stemmer()
@@ -8,12 +8,6 @@
 \NeedsTeXFormat{LaTeX2e}[1995/12/01]
 \ProvidesPackage{sphinx}[2010/01/15 LaTeX package (Sphinx markup)]
 
-\ifx\directlua\undefined\else
-% if compiling with lualatex 0.85 or later load compatibility patch issued by
-% the LaTeX team for older packages relying on \pdf<name> named primitives.
-  \IfFileExists{luatex85.sty}{\RequirePackage{luatex85}}{}
-\fi
-
 \@ifclassloaded{memoir}{}{\RequirePackage{fancyhdr}}
 
 \RequirePackage{textcomp}
@@ -166,12 +160,12 @@
 
 \newcommand\Sphinx@colorbox [2]{%
   % #1 will be \fcolorbox or, for first part of frame: \Sphinx@fcolorbox
-  #1{VerbatimBorderColor}{VerbatimColor}{%
-  % adjust width to be able to handle indentation.
-  \begin{minipage}{\dimexpr\linewidth-\@totalleftmargin\relax}%
-  #2%
-  \end{minipage}%
-  }%
+  % let the framing obey the current indentation (adapted from framed.sty's code).
+  \hskip\@totalleftmargin
+  \hskip-\fboxsep\hskip-\fboxrule
+  #1{VerbatimBorderColor}{VerbatimColor}{#2}%
+  \hskip-\fboxsep\hskip-\fboxrule
+  \hskip-\linewidth \hskip-\@totalleftmargin \hskip\columnwidth
 }
 % use of \color@b@x here is compatible with both xcolor.sty and color.sty
 \def\Sphinx@fcolorbox #1#2%
@@ -184,9 +178,11 @@
 \newcommand*\SphinxLiteralBlockLabel {}
 \newcommand*\SphinxSetupCaptionForVerbatim [2]
 {%
-    \needspace{\literalblockneedspace}\vspace{\literalblockcaptiontopvspace}%
+    \needspace{\literalblockneedspace}%
 % insert a \label via \SphinxLiteralBlockLabel
 % reset to normal the color for the literal block caption
+% the caption inserts \abovecaptionskip whitespace above itself (usually 10pt)
+% there is also \belowcaptionskip but it is usually zero, hence the \smallskip
     \def\SphinxVerbatimTitle
     {\py@NormalColor\captionof{#1}{\SphinxLiteralBlockLabel #2}\smallskip }%
 }
@@ -198,12 +194,22 @@
 \long\def\Sphinx@VerbatimFBox#1{%
   \leavevmode
   \begingroup
+  % framed.sty does some measuring but this macro adds possibly a caption
+  % use amsmath conditional to inhibit the caption counter stepping after
+  % first pass
   \ifSphinx@myfirstframedpass\else\firstchoice@false\fi
   \setbox\@tempboxa\hbox{\kern\fboxsep{#1}\kern\fboxsep}%
   \hbox
   {\lower\dimexpr\fboxrule+\fboxsep+\dp\@tempboxa
    \hbox{%
-     \vbox{\ifx\SphinxVerbatimTitle\empty\else{\SphinxVerbatimTitle}\fi
+     \vbox{\ifx\SphinxVerbatimTitle\empty\else
+     % add the caption in a centered way above possibly indented frame
+     % hide its width from framed.sty's measuring step
+     % note that the caption brings \abovecaptionskip top vertical space
+     \moveright\dimexpr\fboxrule+.5\wd\@tempboxa
+     \hb@xt@\z@{\hss\begin{minipage}{\wd\@tempboxa}%
+     \SphinxVerbatimTitle
+     \end{minipage}\hss}\fi
      \hrule\@height\fboxrule\relax
      \hbox{\vrule\@width\fboxrule\relax
      \vbox{\vskip\fboxsep\copy\@tempboxa\vskip\fboxsep}%
@@ -211,16 +217,70 @@
      \hrule\@height\fboxrule\relax}%
   }}%
   \endgroup
-% amsmath conditional inhibits counter stepping after first pass.
   \global\Sphinx@myfirstframedpassfalse
 }
 
+% For linebreaks inside Verbatim environment from package fancyvrb.
+\newbox\Sphinxcontinuationbox
+\newbox\Sphinxvisiblespacebox
+% These are user customizable e.g. from latex_elements's preamble key.
+% Use of \textvisiblespace for compatibility with XeTeX/LuaTeX/fontspec.
+\newcommand*\Sphinxvisiblespace {\textcolor{red}{\textvisiblespace}}
+\newcommand*\Sphinxcontinuationsymbol {\textcolor{red}{\llap{\tiny$\m@th\hookrightarrow$}}}
+\newcommand*\Sphinxcontinuationindent {3ex }
+\newcommand*\Sphinxafterbreak {\kern\Sphinxcontinuationindent\copy\Sphinxcontinuationbox}
+
+% Take advantage of the already applied Pygments mark-up to insert
+% potential linebreaks for TeX processing.
+% {, <, #, %, $, ' and ": go to next line.
+% _, }, ^, &, >, - and ~: stay at end of broken line.
+% Use of \textquotesingle for straight quote.
+\newcommand*\Sphinxbreaksatspecials {%
+  \def\PYGZus{\discretionary{\char`\_}{\Sphinxafterbreak}{\char`\_}}%
+  \def\PYGZob{\discretionary{}{\Sphinxafterbreak\char`\{}{\char`\{}}%
+  \def\PYGZcb{\discretionary{\char`\}}{\Sphinxafterbreak}{\char`\}}}%
+  \def\PYGZca{\discretionary{\char`\^}{\Sphinxafterbreak}{\char`\^}}%
+  \def\PYGZam{\discretionary{\char`\&}{\Sphinxafterbreak}{\char`\&}}%
+  \def\PYGZlt{\discretionary{}{\Sphinxafterbreak\char`\<}{\char`\<}}%
+  \def\PYGZgt{\discretionary{\char`\>}{\Sphinxafterbreak}{\char`\>}}%
+  \def\PYGZsh{\discretionary{}{\Sphinxafterbreak\char`\#}{\char`\#}}%
+  \def\PYGZpc{\discretionary{}{\Sphinxafterbreak\char`\%}{\char`\%}}%
+  \def\PYGZdl{\discretionary{}{\Sphinxafterbreak\char`\$}{\char`\$}}%
+  \def\PYGZhy{\discretionary{\char`\-}{\Sphinxafterbreak}{\char`\-}}%
+  \def\PYGZsq{\discretionary{}{\Sphinxafterbreak\textquotesingle}{\textquotesingle}}%
+  \def\PYGZdq{\discretionary{}{\Sphinxafterbreak\char`\"}{\char`\"}}%
+  \def\PYGZti{\discretionary{\char`\~}{\Sphinxafterbreak}{\char`\~}}%
+}
+
+% Some characters . , ; ? ! / are not pygmentized.
+% This macro makes them "active" and they will insert potential linebreaks
+\newcommand*\Sphinxbreaksatpunct {%
+  \lccode`\~`\.\lowercase{\def~}{\discretionary{\char`\.}{\Sphinxafterbreak}{\char`\.}}%
+  \lccode`\~`\,\lowercase{\def~}{\discretionary{\char`\,}{\Sphinxafterbreak}{\char`\,}}%
+  \lccode`\~`\;\lowercase{\def~}{\discretionary{\char`\;}{\Sphinxafterbreak}{\char`\;}}%
+  \lccode`\~`\:\lowercase{\def~}{\discretionary{\char`\:}{\Sphinxafterbreak}{\char`\:}}%
+  \lccode`\~`\?\lowercase{\def~}{\discretionary{\char`\?}{\Sphinxafterbreak}{\char`\?}}%
+  \lccode`\~`\!\lowercase{\def~}{\discretionary{\char`\!}{\Sphinxafterbreak}{\char`\!}}%
+  \lccode`\~`\/\lowercase{\def~}{\discretionary{\char`\/}{\Sphinxafterbreak}{\char`\/}}%
+  \catcode`\.\active
+  \catcode`\,\active
+  \catcode`\;\active
+  \catcode`\:\active
+  \catcode`\?\active
+  \catcode`\!\active
+  \catcode`\/\active
+  \lccode`\~`\~
+}
+
 \renewcommand{\Verbatim}[1][1]{%
+  % quit horizontal mode if we are still in a paragraph
+  \par
   % list starts new par, but we don't want it to be set apart vertically
   \parskip\z@skip
-  \smallskip
   % first, let's check if there is a caption
   \ifx\SphinxVerbatimTitle\empty
+      \addvspace\z@% counteract possible previous negative skip (French lists!)
+      \smallskip
       % there was no caption. Check if nevertheless a label was set.
       \ifx\SphinxLiteralBlockLabel\empty\else
       % we require some space to be sure hyperlink target from \phantomsection
@@ -238,8 +298,36 @@
   % for mid pages and last page portion of (long) split frame:
   \def\MidFrameCommand{\Sphinx@colorbox\fcolorbox }%
   \let\LastFrameCommand\MidFrameCommand
-  % The list environement is needed to control perfectly the vertical
-  % space.
+  % fancyvrb's Verbatim puts each input line in (unbreakable) horizontal boxes.
+  % This customization wraps each line from the input in a \vtop, thus
+  % allowing it to wrap and display on two or more lines in the latex output.
+  % - The codeline counter will be increased only once.
+  % - The wrapped material will not break across pages, it is impossible
+  %   to achieve this without extensive rewrite of fancyvrb.
+  % - The (not used in Sphinx) obeytabs option to Verbatim is
+  %   broken by this change (showtabs and tabspace work).
+  \sbox\Sphinxcontinuationbox {\Sphinxcontinuationsymbol}%
+  \sbox\Sphinxvisiblespacebox {\FV@SetupFont\Sphinxvisiblespace}%
+  \def\FancyVerbFormatLine ##1{\hsize\linewidth
+      \vtop{\raggedright\hyphenpenalty\z@\exhyphenpenalty\z@
+            \doublehyphendemerits\z@\finalhyphendemerits\z@
+            \strut ##1\strut}%
+  }%
+  % If the linebreak is at a space, the latter will be displayed as visible
+  % space at end of first line, and a continuation symbol starts next line.
+  % Stretch/shrink are however usually zero for typewriter font.
+  \def\FV@Space {%
+      \nobreak\hskip\z@ plus\fontdimen3\font minus\fontdimen4\font
+      \discretionary{\copy\Sphinxvisiblespacebox}{\Sphinxafterbreak}
+                    {\kern\fontdimen2\font}%
+  }%
+  % Allow breaks at special characters using \PYG... macros.
+  \Sphinxbreaksatspecials
+  % The list environment is needed to control perfectly the vertical space.
+  % Note: \OuterFrameSep used by framed.sty is later set to \topsep hence 0pt.
+  % - if caption: vertical space above caption = (\abovecaptionskip + D) with
+  %   D = \baselineskip-\FrameHeightAdjust, and then \smallskip above frame.
+  % - if no caption: (\smallskip + D) above frame. By default D=6pt.
  \list{}{%
    \setlength\parskip{0pt}%
    \setlength\itemsep{0ex}%
@@ -251,13 +339,18 @@
   \item
   % use a minipage if we are already inside a framed environment
   \relax\ifSphinx@inframed\noindent\begin{minipage}{\linewidth}\fi
-  \MakeFramed {\FrameRestore}%
+  \MakeFramed {% adapted over from framed.sty's snugshade environment
+      \advance\hsize-\width\@totalleftmargin\z@\linewidth\hsize
+      \@setminipage }%
   \small
-  \OriginalVerbatim[#1]%
+  % For grid placement from \strut's in \FancyVerbFormatLine
+  \lineskip\z@skip
+  % Breaks at punctuation characters . , ; ? ! and / need catcode=\active
+  \OriginalVerbatim[#1,codes*=\Sphinxbreaksatpunct]%
 }
 \renewcommand{\endVerbatim}{%
   \endOriginalVerbatim
-  \endMakeFramed
+  \par\unskip\@minipagefalse\endMakeFramed
   \ifSphinx@inframed\end{minipage}\fi
   \endlist
   % LaTeX environments always revert local changes on exit, here e.g. \parskip
@@ -344,6 +437,7 @@
 \endtrivlist
 }
 
+
 % \moduleauthor{name}{email}
 \newcommand{\moduleauthor}[2]{}
 
@@ -358,8 +452,12 @@
   {\py@TitleColor\thesubsection}{0.5em}{\py@TitleColor}{\py@NormalColor}
 \titleformat{\subsubsection}{\py@HeaderFamily}%
   {\py@TitleColor\thesubsubsection}{0.5em}{\py@TitleColor}{\py@NormalColor}
-\titleformat{\paragraph}{\small\py@HeaderFamily}%
-  {\py@TitleColor}{0em}{\py@TitleColor}{\py@NormalColor}
+% By default paragraphs (and subsubsections) will not be numbered because
+% sphinxmanual.cls and sphinxhowto.cls set secnumdepth to 2
+\titleformat{\paragraph}{\py@HeaderFamily}%
+  {\py@TitleColor\theparagraph}{0.5em}{\py@TitleColor}{\py@NormalColor}
+\titleformat{\subparagraph}{\py@HeaderFamily}%
+  {\py@TitleColor\thesubparagraph}{0.5em}{\py@TitleColor}{\py@NormalColor}
 
 % {fulllineitems} is the main environment for object descriptions.
 %
@@ -456,8 +554,11 @@
   {\parskip\z@skip\noindent}%
 }
 \newcommand{\py@endlightbox}{%
-  \par\nobreak
-  {\parskip\z@skip\noindent\rule[.4\baselineskip]{\linewidth}{0.5pt}}\par
+  \par
+  % counteract previous possible negative skip (French lists!):
+  % (we can't cancel that any earlier \vskip introduced a potential pagebreak)
+  \ifdim\lastskip<\z@\vskip-\lastskip\fi
+  \nobreak\vbox{\noindent\rule[.4\baselineskip]{\linewidth}{0.5pt}}\allowbreak
 }
 
 % Some are quite plain:
@@ -600,7 +701,10 @@
 
 % to make pdf with correct encoded bookmarks in Japanese
 % this should precede the hyperref package
-\ifx\kanjiskip\undefined\else
+\ifx\kanjiskip\undefined
+  % for non-Japanese: make sure bookmarks are ok also with lualatex
+  \PassOptionsToPackage{pdfencoding=unicode}{hyperref}
+\else
   \usepackage{atbegshi}
   \ifx\ucs\undefined
     \ifnum 42146=\euc"A4A2
@@ -721,8 +825,6 @@
 % control caption around literal-block
 \RequirePackage{capt-of}
 \RequirePackage{needspace}
-% if the left page space is less than \literalblockneedsapce, insert page-break
+% if the left page space is less than \literalblockneedspace, insert page-break
 \newcommand{\literalblockneedspace}{5\baselineskip}
 \newcommand{\literalblockwithoutcaptionneedspace}{1.5\baselineskip}
-% margin before the caption of literal-block
-\newcommand{\literalblockcaptiontopvspace}{0.5\baselineskip}
@@ -5,6 +5,12 @@
 \NeedsTeXFormat{LaTeX2e}[1995/12/01]
 \ProvidesClass{sphinxhowto}[2009/06/02 Document class (Sphinx HOWTO)]
 
+\ifx\directlua\undefined\else
+% if compiling with lualatex 0.85 or later load compatibility patch issued by
+% the LaTeX team for older packages relying on \pdf<name> named primitives.
+  \IfFileExists{luatex85.sty}{\RequirePackage{luatex85}}{}
+\fi
+
 % 'oneside' option overriding the 'twoside' default
 \newif\if@oneside
 \DeclareOption{oneside}{\@onesidetrue}
@@ -5,6 +5,12 @@
 \NeedsTeXFormat{LaTeX2e}[1995/12/01]
 \ProvidesClass{sphinxmanual}[2009/06/02 Document class (Sphinx manual)]
 
+\ifx\directlua\undefined\else
+% if compiling with lualatex 0.85 or later load compatibility patch issued by
+% the LaTeX team for older packages relying on \pdf<name> named primitives.
+  \IfFileExists{luatex85.sty}{\RequirePackage{luatex85}}{}
+\fi
+
 % chapters starting at odd pages (overridden by 'openany' document option)
 \PassOptionsToClass{openright}{\sphinxdocclass}
 
@@ -11,8 +11,8 @@
 <div id="searchbox" style="display: none" role="search">
   <h3>{{ _('Quick search') }}</h3>
     <form class="search" action="{{ pathto('search') }}" method="get">
-      <input type="text" name="q" />
-      <input type="submit" value="{{ _('Go') }}" />
+      <div><input type="text" name="q" /></div>
+      <div><input type="submit" value="{{ _('Go') }}" /></div>
       <input type="hidden" name="check_keywords" value="yes" />
       <input type="hidden" name="area" value="default" />
     </form>
@@ -85,10 +85,6 @@ div.sphinxsidebar #searchbox input[type="text"] {
     width: 170px;
 }
 
-div.sphinxsidebar #searchbox input[type="submit"] {
-    width: 30px;
-}
-
 img {
     border: 0;
     max-width: 100%;
@@ -103,7 +103,8 @@ class Theme(object):
         if name not in self.themes:
             if name == 'sphinx_rtd_theme':
                 raise ThemeError('sphinx_rtd_theme is no longer a hard dependency '
-                                 'since version 1.4.0. Please install it manually.')
+                                 'since version 1.4.0. Please install it manually.'
+                                 '(pip install sphinx_rtd_theme)')
             else:
                 raise ThemeError('no theme named %r found '
                                  '(missing theme.conf?)' % name)
@@ -114,7 +114,7 @@ class AutoNumbering(Transform):
         domain = self.document.settings.env.domains['std']
 
         for node in self.document.traverse(nodes.Element):
-            if domain.is_enumerable_node(node) and domain.get_numfig_title(node):
+            if domain.is_enumerable_node(node) and domain.get_numfig_title(node) is not None:
                 self.document.note_implicit_target(node)
 
 
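The `is not None` change above matters because a figure title can be an empty string: a plain truthiness test would skip it, while the explicit comparison only skips nodes that have no title at all. A minimal sketch of the distinction (the helper name is illustrative, not Sphinx's API):

```python
def should_number(title):
    # old check was effectively `if title:`, which rejects '' as well as None
    return title is not None

# an empty-but-present title now still gets an implicit target
checks = [should_number('Figure caption'), should_number(''), should_number(None)]
```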
@@ -239,9 +239,7 @@ class Locale(Transform):
         # fetch translations
         dirs = [path.join(env.srcdir, directory)
                 for directory in env.config.locale_dirs]
-        catalog, has_catalog = init_locale(dirs, env.config.language,
-                                           textdomain,
-                                           charset=env.config.source_encoding)
+        catalog, has_catalog = init_locale(dirs, env.config.language, textdomain)
         if not has_catalog:
             return
 
@@ -14,7 +14,6 @@ import os
 import re
 import warnings
 from os import path
-from time import gmtime
 from datetime import datetime
 from collections import namedtuple
 
@@ -188,7 +187,7 @@ def format_date(format, date=None, language=None, warn=None):
         # See https://wiki.debian.org/ReproducibleBuilds/TimestampsProposal
         source_date_epoch = os.getenv('SOURCE_DATE_EPOCH')
         if source_date_epoch is not None:
-            date = gmtime(float(source_date_epoch))
+            date = datetime.utcfromtimestamp(float(source_date_epoch))
        else:
            date = datetime.now()
 
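The hunk above switches from `time.gmtime` (which returns a `struct_time`) to `datetime.utcfromtimestamp`, so both branches of the `SOURCE_DATE_EPOCH` check produce the same `datetime` type. A minimal sketch (the epoch value is an arbitrary example):

```python
from datetime import datetime
from time import gmtime

epoch = '1464480000'  # example SOURCE_DATE_EPOCH value

# gmtime returns a struct_time, not a datetime, so downstream
# formatting code that expects datetime methods would break
assert not isinstance(gmtime(float(epoch)), datetime)

# utcfromtimestamp yields a real datetime, like the datetime.now() branch
date = datetime.utcfromtimestamp(float(epoch))
```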
@@ -23,18 +23,14 @@ IEND_CHUNK = b'\x00\x00\x00\x00IEND\xAE\x42\x60\x82'
 
 def read_png_depth(filename):
     """Read the special tEXt chunk indicating the depth from a PNG file."""
-    result = None
-    f = open(filename, 'rb')
-    try:
+    with open(filename, 'rb') as f:
         f.seek(- (LEN_IEND + LEN_DEPTH), 2)
         depthchunk = f.read(LEN_DEPTH)
         if not depthchunk.startswith(DEPTH_CHUNK_LEN + DEPTH_CHUNK_START):
             # either not a PNG file or not containing the depth chunk
             return None
-        result = struct.unpack('!i', depthchunk[14:18])[0]
-    finally:
-        f.close()
-    return result
+        else:
+            return struct.unpack('!i', depthchunk[14:18])[0]
 
 
 def write_png_depth(filename, depth):
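The rewritten `read_png_depth` seeks backwards from the end of the file and unpacks a big-endian integer from the trailing bytes. A self-contained sketch of that pattern on a synthetic file (the layout here is illustrative, not the real PNG chunk format):

```python
import os
import struct
import tempfile

# write some payload followed by 4 trailing bytes encoding depth=12
path = os.path.join(tempfile.mkdtemp(), 'blob.bin')
with open(path, 'wb') as f:
    f.write(b'payload-bytes')
    f.write(struct.pack('!i', 12))   # '!i' = big-endian signed 32-bit int

with open(path, 'rb') as f:
    f.seek(-4, 2)                    # whence=2 (os.SEEK_END): offset from EOF
    depth = struct.unpack('!i', f.read(4))[0]
```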
@@ -43,8 +39,7 @@ def write_png_depth(filename, depth):
     The chunk is placed immediately before the special IEND chunk.
     """
     data = struct.pack('!i', depth)
-    f = open(filename, 'r+b')
-    try:
+    with open(filename, 'r+b') as f:
         # seek to the beginning of the IEND chunk
         f.seek(-LEN_IEND, 2)
         # overwrite it with the depth chunk
@@ -54,5 +49,3 @@ def write_png_depth(filename, depth):
         f.write(struct.pack('!I', crc))
         # replace the IEND chunk
         f.write(IEND_CHUNK)
-    finally:
-        f.close()
@@ -105,11 +105,8 @@ def execfile_(filepath, _globals, open=open):
     from sphinx.util.osutil import fs_encoding
     # get config source -- 'b' is a no-op under 2.x, while 'U' is
     # ignored under 3.x (but 3.x compile() accepts \r\n newlines)
-    f = open(filepath, 'rbU')
-    try:
+    with open(filepath, 'rbU') as f:
         source = f.read()
-    finally:
-        f.close()
 
     # py26 accept only LF eol instead of CRLF
     if sys.version_info[:2] == (2, 6):
@@ -47,8 +47,26 @@ tex_replacements = [
     ('│', r'\textbar{}'),
     ('ℯ', r'e'),
     ('ⅈ', r'i'),
-    ('₁', r'1'),
-    ('₂', r'2'),
+    ('⁰', r'$^\text{0}$'),
+    ('¹', r'$^\text{1}$'),
+    ('²', r'$^\text{2}$'),
+    ('³', r'$^\text{3}$'),
+    ('⁴', r'$^\text{4}$'),
+    ('⁵', r'$^\text{5}$'),
+    ('⁶', r'$^\text{6}$'),
+    ('⁷', r'$^\text{7}$'),
+    ('⁸', r'$^\text{8}$'),
+    ('⁹', r'$^\text{9}$'),
+    ('₀', r'$_\text{0}$'),
+    ('₁', r'$_\text{1}$'),
+    ('₂', r'$_\text{2}$'),
+    ('₃', r'$_\text{3}$'),
+    ('₄', r'$_\text{4}$'),
+    ('₅', r'$_\text{5}$'),
+    ('₆', r'$_\text{6}$'),
+    ('₇', r'$_\text{7}$'),
+    ('₈', r'$_\text{8}$'),
+    ('₉', r'$_\text{9}$'),
     # map Greek alphabet
     ('α', r'\(\alpha\)'),
     ('β', r'\(\beta\)'),
@@ -130,11 +130,8 @@ class WebSupport(object):
         """Load and return the "global context" pickle."""
         if not self._globalcontext:
             infilename = path.join(self.datadir, 'globalcontext.pickle')
-            f = open(infilename, 'rb')
-            try:
+            with open(infilename, 'rb') as f:
                 self._globalcontext = pickle.load(f)
-            finally:
-                f.close()
         return self._globalcontext
 
     def get_document(self, docname, username='', moderator=False):
@@ -185,14 +182,11 @@ class WebSupport(object):
         infilename = docpath + '.fpickle'
 
         try:
-            f = open(infilename, 'rb')
+            with open(infilename, 'rb') as f:
+                document = pickle.load(f)
         except IOError:
             raise errors.DocumentNotFoundError(
                 'The document "%s" could not be found' % docname)
-        try:
-            document = pickle.load(f)
-        finally:
-            f.close()
 
         comment_opts = self._make_comment_options(username, moderator)
         comment_meta = self._make_metadata(
@@ -11,6 +11,8 @@
 
 import xapian
 
+from six import string_types
+
 from sphinx.util.osutil import ensuredir
 from sphinx.websupport.search import BaseSearch
 
@@ -73,7 +75,10 @@ class XapianSearch(BaseSearch):
         results = []
 
         for m in matches:
-            context = self.extract_context(m.document.get_data())
+            data = m.document.get_data()
+            if not isinstance(data, string_types):
+                data = data.decode("utf-8")
+            context = self.extract_context(data)
             results.append((m.document.get_value(self.DOC_PATH),
                             m.document.get_value(self.DOC_TITLE),
                             ''.join(context)))
@@ -278,7 +278,7 @@ class HTMLTranslator(BaseTranslator):
             figtype = self.builder.env.domains['std'].get_figtype(node)
             if figtype:
                 if len(node['ids']) == 0:
-                    msg = 'Any IDs not assiend for %s node' % node.tagname
+                    msg = 'Any IDs not assigned for %s node' % node.tagname
                     self.builder.env.warn_node(msg, node)
                 else:
                     append_fignumber(figtype, node['ids'][0])
@@ -34,6 +34,7 @@ from sphinx.util.smartypants import educate_quotes_latex
 HEADER = r'''%% Generated by Sphinx.
 \def\sphinxdocclass{%(docclass)s}
 \documentclass[%(papersize)s,%(pointsize)s%(classoptions)s]{%(wrapperclass)s}
+\usepackage{iftex}
 %(passoptionstopackages)s
 %(inputenc)s
 %(utf8extra)s
@@ -52,6 +53,7 @@ HEADER = r'''%% Generated by Sphinx.
 %(numfig_format)s
 %(pageautorefname)s
 %(tocdepth)s
+%(secnumdepth)s
 %(preamble)s
 
 \title{%(title)s}
@@ -77,6 +79,7 @@ FOOTER = r'''
 '''
 
 URI_SCHEMES = ('mailto:', 'http:', 'https:', 'ftp:')
+SECNUMDEPTH = 3
 
 
 class collected_footnote(nodes.footnote):
@@ -195,7 +198,7 @@ class ShowUrlsTransform(object):
     def create_footnote(self, uri):
         label = nodes.label('', '#')
         para = nodes.paragraph()
-        para.append(nodes.Text(uri))
+        para.append(nodes.reference('', nodes.Text(uri), refuri=uri, nolinkurl=True))
         footnote = nodes.footnote(uri, label, para, auto=1)
         footnote['names'].append('#')
         self.document.note_autofootnote(footnote)
@ -263,6 +266,29 @@ class Table(object):
|
|||||||
self.longtable = False
|
self.longtable = False
|
||||||
|
|
||||||
|
|
||||||
|
def width_to_latex_length(length_str):
|
||||||
|
"""Convert `length_str` with rst length to LaTeX length.
|
||||||
|
|
||||||
|
This function is copied from docutils' latex writer
|
||||||
|
"""
|
||||||
|
match = re.match('(\d*\.?\d*)\s*(\S*)', length_str)
|
||||||
|
if not match:
|
||||||
|
return length_str
|
||||||
|
value, unit = match.groups()[:2]
|
||||||
|
if unit in ('', 'pt'):
|
||||||
|
length_str = '%sbp' % value # convert to 'bp'
|
||||||
|
# percentage: relate to current line width
|
||||||
|
elif unit == '%':
|
||||||
|
length_str = '%.3f\\linewidth' % (float(value)/100.0)
|
||||||
|
|
||||||
|
return length_str
|
||||||
|
|
||||||
|
|
||||||
|
def escape_abbr(text):
|
||||||
|
"""Adjust spacing after abbreviations."""
|
||||||
|
return re.sub('\.(?=\s|$)', '.\\@', text)
|
||||||
|
|
||||||
|
|
||||||
class LaTeXTranslator(nodes.NodeVisitor):
|
class LaTeXTranslator(nodes.NodeVisitor):
|
||||||
sectionnames = ["part", "chapter", "section", "subsection",
|
sectionnames = ["part", "chapter", "section", "subsection",
|
||||||
"subsubsection", "paragraph", "subparagraph"]
|
"subsubsection", "paragraph", "subparagraph"]
|
||||||
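The `width_to_latex_length` helper added in the hunk above is easy to exercise on its own. A minimal standalone sketch of the same conversion (the raw-string regex here is a cosmetic change; the behavior matches the diff):

```python
import re

def width_to_latex_length(length_str):
    """Convert an rst length such as '50%' or '10pt' to a LaTeX length."""
    match = re.match(r'(\d*\.?\d*)\s*(\S*)', length_str)
    if not match:
        return length_str
    value, unit = match.groups()[:2]
    if unit in ('', 'pt'):
        return '%sbp' % value  # bare numbers and 'pt' become big points
    elif unit == '%':
        # percentage: relate to current line width
        return '%.3f\\linewidth' % (float(value) / 100.0)
    return length_str

print(width_to_latex_length('50%'))   # 0.500\linewidth
print(width_to_latex_length('10pt'))  # 10bp
print(width_to_latex_length('3cm'))   # unknown units pass through unchanged
```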
@@ -275,13 +301,15 @@ class LaTeXTranslator(nodes.NodeVisitor):
         'classoptions': '',
         'extraclassoptions': '',
         'passoptionstopackages': '',
-        'inputenc': '\\usepackage[utf8]{inputenc}',
+        'inputenc': ('\\ifPDFTeX\n'
+                     ' \\usepackage[utf8]{inputenc}\n'
+                     '\\else\\fi'),
         'utf8extra': ('\\ifdefined\\DeclareUnicodeCharacter\n'
                       ' \\DeclareUnicodeCharacter{00A0}{\\nobreakspace}\n'
                       '\\else\\fi'),
         'cmappkg': '\\usepackage{cmap}',
         'fontenc': '\\usepackage[T1]{fontenc}',
-        'amsmath': '\\usepackage{amsmath,amssymb}',
+        'amsmath': '\\usepackage{amsmath,amssymb,amstext}',
         'babel': '\\usepackage{babel}',
         'fontpkg': '\\usepackage{times}',
         'fncychap': '\\usepackage[Bjarne]{fncychap}',
@@ -305,6 +333,7 @@ class LaTeXTranslator(nodes.NodeVisitor):
         'transition': '\n\n\\bigskip\\hrule{}\\bigskip\n\n',
         'figure_align': 'htbp',
         'tocdepth': '',
+        'secnumdepth': '',
         'pageautorefname': '',
     }

@@ -408,11 +437,6 @@ class LaTeXTranslator(nodes.NodeVisitor):
             return '\\usepackage{%s}' % (packagename,)
         usepackages = (declare_package(*p) for p in builder.usepackages)
         self.elements['usepackages'] += "\n".join(usepackages)
-        # allow the user to override them all
-        self.elements.update(builder.config.latex_elements)
-        if self.elements['extraclassoptions']:
-            self.elements['classoptions'] += ',' + \
-                self.elements['extraclassoptions']
         if document.get('tocdepth'):
             # redece tocdepth if `part` or `chapter` is used for top_sectionlevel
             # tocdepth = -1: show only parts
@@ -420,11 +444,24 @@ class LaTeXTranslator(nodes.NodeVisitor):
             # tocdepth = 1: show parts, chapters and sections
             # tocdepth = 2: show parts, chapters, sections and subsections
             # ...
-            self.elements['tocdepth'] = ('\\setcounter{tocdepth}{%d}' %
-                                         (document['tocdepth'] + self.top_sectionlevel - 2))
+            tocdepth = document['tocdepth'] + self.top_sectionlevel - 2
+            maxdepth = len(self.sectionnames) - self.top_sectionlevel
+            if tocdepth > maxdepth:
+                self.builder.warn('too large :maxdepth:, ignored.')
+                tocdepth = maxdepth
+
+            self.elements['tocdepth'] = '\\setcounter{tocdepth}{%d}' % tocdepth
+            if tocdepth >= SECNUMDEPTH:
+                # Increase secnumdepth if tocdepth is depther than default SECNUMDEPTH
+                self.elements['secnumdepth'] = '\\setcounter{secnumdepth}{%d}' % tocdepth
         if getattr(document.settings, 'contentsname', None):
             self.elements['contentsname'] = \
                 self.babel_renewcommand('\\contentsname', document.settings.contentsname)
+        # allow the user to override them all
+        self.elements.update(builder.config.latex_elements)
+        if self.elements['extraclassoptions']:
+            self.elements['classoptions'] += ',' + \
+                self.elements['extraclassoptions']
         self.elements['pageautorefname'] = \
             self.babel_defmacro('\\pageautorefname', self.encode(_('page')))
         self.elements['numfig_format'] = self.generate_numfig_format(builder)
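The `:maxdepth:`-to-`secnumdepth` coupling added above can be modeled without the translator. A sketch under the hunk's own constants (`SECNUMDEPTH = 3`, seven LaTeX section levels); `latex_depth_settings` is a hypothetical name used only for illustration:

```python
SECNUMDEPTH = 3  # LaTeX's default value of the secnumdepth counter
SECTIONNAMES = ["part", "chapter", "section", "subsection",
                "subsubsection", "paragraph", "subparagraph"]

def latex_depth_settings(doc_tocdepth, top_sectionlevel):
    """Return the tocdepth/secnumdepth preamble lines for a document."""
    tocdepth = doc_tocdepth + top_sectionlevel - 2
    maxdepth = len(SECTIONNAMES) - top_sectionlevel
    if tocdepth > maxdepth:
        # a :maxdepth: deeper than LaTeX can represent is clamped
        tocdepth = maxdepth
    lines = ['\\setcounter{tocdepth}{%d}' % tocdepth]
    if tocdepth >= SECNUMDEPTH:
        # number sections as deep as the table of contents shows them
        lines.append('\\setcounter{secnumdepth}{%d}' % tocdepth)
    return lines

print(latex_depth_settings(4, 1))
```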
@@ -541,27 +578,27 @@ class LaTeXTranslator(nodes.NodeVisitor):
         figure = self.builder.config.numfig_format['figure'].split('%s', 1)
         if len(figure) == 1:
             ret.append('\\def\\fnum@figure{%s}\n' %
-                       text_type(figure[0]).translate(tex_escape_map))
+                       escape_abbr(text_type(figure[0]).translate(tex_escape_map)))
         else:
-            definition = text_type(figure[0]).translate(tex_escape_map)
+            definition = escape_abbr(text_type(figure[0]).translate(tex_escape_map))
             ret.append(self.babel_renewcommand('\\figurename', definition))
             if figure[1]:
                 ret.append('\\makeatletter\n')
                 ret.append('\\def\\fnum@figure{\\figurename\\thefigure%s}\n' %
-                           text_type(figure[1]).translate(tex_escape_map))
+                           escape_abbr(text_type(figure[1]).translate(tex_escape_map)))
                 ret.append('\\makeatother\n')

         table = self.builder.config.numfig_format['table'].split('%s', 1)
         if len(table) == 1:
             ret.append('\\def\\fnum@table{%s}\n' %
-                       text_type(table[0]).translate(tex_escape_map))
+                       escape_abbr(text_type(table[0]).translate(tex_escape_map)))
         else:
-            definition = text_type(table[0]).translate(tex_escape_map)
+            definition = escape_abbr(text_type(table[0]).translate(tex_escape_map))
             ret.append(self.babel_renewcommand('\\tablename', definition))
             if table[1]:
                 ret.append('\\makeatletter\n')
                 ret.append('\\def\\fnum@table{\\tablename\\thetable%s}\n' %
-                           text_type(table[1]).translate(tex_escape_map))
+                           escape_abbr(text_type(table[1]).translate(tex_escape_map)))
                 ret.append('\\makeatother\n')

         codeblock = self.builder.config.numfig_format['code-block'].split('%s', 1)
@@ -569,7 +606,7 @@ class LaTeXTranslator(nodes.NodeVisitor):
             pass  # FIXME
         else:
             ret.append('\\SetupFloatingEnvironment{literal-block}{name=%s}\n' %
-                       text_type(codeblock[0]).translate(tex_escape_map))
+                       escape_abbr(text_type(codeblock[0]).translate(tex_escape_map)))
             if table[1]:
                 pass  # FIXME

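All of the `escape_abbr` call sites above address the same TeX quirk: a period ending an abbreviation like `Fig.` would otherwise receive end-of-sentence spacing. A standalone sketch of the substitution from the hunk:

```python
import re

def escape_abbr(text):
    """Append \\@ to any period followed by whitespace or end of string,
    forcing TeX to use inter-word spacing after abbreviations."""
    return re.sub(r'\.(?=\s|$)', '.\\@', text)

print(escape_abbr('Fig. '))   # "Fig.\@ "
print(escape_abbr('v1.4.2'))  # periods inside words are untouched
```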
@@ -764,7 +801,8 @@ class LaTeXTranslator(nodes.NodeVisitor):
         elif isinstance(parent, nodes.section):
             short = ''
             if node.traverse(nodes.image):
-                short = '[%s]' % ' '.join(clean_astext(node).split()).translate(tex_escape_map)
+                short = ('[%s]' %
+                         u' '.join(clean_astext(node).split()).translate(tex_escape_map))

             try:
                 self.body.append(r'\%s%s{' % (self.sectionnames[self.sectionlevel], short))
@@ -1406,10 +1444,13 @@ class LaTeXTranslator(nodes.NodeVisitor):
                 isinstance(node.children[0], nodes.image) and
                 node.children[0]['ids']):
             ids += self.hypertarget(node.children[0]['ids'][0], anchor=False)
-        if 'width' in node and node.get('align', '') in ('left', 'right'):
+        if node.get('align', '') in ('left', 'right'):
+            if 'width' in node:
+                length = width_to_latex_length(node['width'])
+            else:
+                length = '0pt'
             self.body.append('\\begin{wrapfigure}{%s}{%s}\n\\centering' %
-                             (node['align'] == 'right' and 'r' or 'l',
-                              node['width']))
+                             (node['align'] == 'right' and 'r' or 'l', length))
             self.context.append(ids + '\\end{wrapfigure}\n')
         elif self.in_minipage:
             if ('align' not in node.attributes or
@@ -1609,7 +1650,10 @@ class LaTeXTranslator(nodes.NodeVisitor):
             self.context.append('')
         elif uri.startswith(URI_SCHEMES):
             if len(node) == 1 and uri == node[0]:
-                self.body.append('\\url{%s}' % self.encode_uri(uri))
+                if node.get('nolinkurl'):
+                    self.body.append('\\nolinkurl{%s}' % self.encode_uri(uri))
+                else:
+                    self.body.append('\\url{%s}' % self.encode_uri(uri))
                 raise nodes.SkipNode
             else:
                 self.body.append('\\href{%s}{' % self.encode_uri(uri))
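The branch added above distinguishes footnote bodies (reference nodes carrying a `nolinkurl` flag, created by `ShowUrlsTransform` earlier in the diff) from ordinary self-referential links. A small sketch of the dispatch, with a plain dict standing in for the docutils node:

```python
def render_uri_macro(node, uri):
    """Pick the LaTeX macro for a URI whose link text equals the URI itself."""
    if node.get('nolinkurl'):
        # typeset like a URL, but without creating a hyperlink (used in footnotes)
        return '\\nolinkurl{%s}' % uri
    return '\\url{%s}' % uri

print(render_uri_macro({'nolinkurl': True}, 'http://sphinx-doc.org/'))
print(render_uri_macro({}, 'http://sphinx-doc.org/'))
```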
@@ -1664,7 +1708,7 @@ class LaTeXTranslator(nodes.NodeVisitor):
             ref = '\\ref{%s}' % self.idescape(id)
             title = node.get('title', '%s')
             title = text_type(title).translate(tex_escape_map).replace('\\%s', '%s')
-            hyperref = '\\hyperref[%s]{%s}' % (self.idescape(id), title % ref)
+            hyperref = '\\hyperref[%s]{%s}' % (self.idescape(id), escape_abbr(title) % ref)
             self.body.append(hyperref)

             raise nodes.SkipNode
@@ -650,7 +650,7 @@ class TexinfoTranslator(nodes.NodeVisitor):
                 self.next_section_ids.add(node['refid'])
             self.next_section_ids.update(node['ids'])
             return
-        except IndexError:
+        except (IndexError, AttributeError):
             pass
         if 'refuri' in node:
             return
@@ -62,3 +62,7 @@ Test for issue #1700

 :ref:`mastertoc`

+Test for indirect hyperlink targets
+===================================
+
+:ref:`indirect hyperref <other-label>`
@@ -107,6 +107,9 @@ Admonitions
 .. tip::
    Tip text.

+Indirect hyperlink targets
+
+.. _other-label: some-label_

 Inline markup
 -------------
@@ -142,6 +145,7 @@ Adding \n to test unescaping.
 * :token:`try statement <try_stmt>`
 * :ref:`admonition-section`
 * :ref:`here <some-label>`
+* :ref:`there <other-label>`
 * :ref:`my-figure`
 * :ref:`my-figure-name`
 * :ref:`my-table`
@@ -231,6 +235,16 @@ Figures

    Description paragraph is wraped with legend node.

+.. figure:: rimg.png
+   :align: right
+
+   figure with align option
+
+.. figure:: rimg.png
+   :align: right
+   :figwidth: 50%
+
+   figure with align & figwidth option

 Version markup
 --------------
tests/roots/test-correct-year/conf.py (new file)
@@ -0,0 +1,3 @@
+
+copyright = u'2006-2009, Author'
+
tests/roots/test-correct-year/contents.rst (new file)
@@ -0,0 +1,4 @@
+=================
+test-correct-year
+=================
+
@@ -19,3 +19,9 @@ Hello |graph| graphviz world


 .. graphviz:: graph.dot
+
+.. digraph:: bar
+   :align: right
+   :caption: on right
+
+   foo -> bar
@@ -16,3 +16,7 @@ another blah
 Other [blah] |picture| section
 ------------------------------
 other blah
+
+|picture|
+---------
+blah blah blah
tests/roots/test-prolog/conf.py (new file)
@@ -0,0 +1,12 @@
+# -*- coding: utf-8 -*-
+
+import os
+import sys
+sys.path.insert(0, os.path.abspath('.'))
+
+
+master_doc = 'index'
+extensions = ['prolog_markdown_parser']
+
+rst_prolog = '*Hello world*.\n\n'
+rst_epilog = '\n\n*Good-bye world*.'
tests/roots/test-prolog/index.rst (new file)
@@ -0,0 +1,7 @@
+prolog and epilog
+=================
+
+.. toctree::
+
+   restructuredtext
+   markdown
tests/roots/test-prolog/markdown.md (new file)
@@ -0,0 +1,3 @@
+# sample document
+
+This is a sample document in markdown
tests/roots/test-prolog/prolog_markdown_parser.py (new file)
@@ -0,0 +1,12 @@
+# -*- coding: utf-8 -*-
+
+from docutils.parsers import Parser
+
+
+class DummyMarkdownParser(Parser):
+    def parse(self, inputstring, document):
+        document.rawsource = inputstring
+
+
+def setup(app):
+    app.add_source_parser('.md', DummyMarkdownParser)
tests/roots/test-prolog/restructuredtext.rst (new file)
@@ -0,0 +1,4 @@
+sample document
+===============
+
+This is a sample document in reST
tests/roots/test-toctree-maxdepth/qux.rst (new file)
@@ -0,0 +1,9 @@
+test-toctree-max-depth
+======================
+
+.. toctree::
+   :numbered:
+   :maxdepth: 4
+
+   foo
+   bar
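The new `test-prolog` root exercises `rst_prolog`/`rst_epilog`, which wrap every source file's text before parsing. A simplified sketch of that wrapping (the real implementation handles docinfo placement; this ignores that detail, and `apply_prolog_epilog` is an illustrative name, not a Sphinx API):

```python
def apply_prolog_epilog(source, prolog='', epilog=''):
    """Prepend rst_prolog and append rst_epilog to a document's source text."""
    if prolog:
        source = prolog + source
    if epilog:
        source = source + epilog
    return source

text = apply_prolog_epilog('sample document\n===============\n',
                           prolog='*Hello world*.\n\n',
                           epilog='\n\n*Good-bye world*.')
print(text)
```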
@@ -31,16 +31,16 @@ http://www.python.org/logo.png
 reading included file u'.*?wrongenc.inc' seems to be wrong, try giving an \
 :encoding: option\\n?
 %(root)s/includes.txt:4: WARNING: download file not readable: .*?nonexisting.png
-(%(root)s/markup.txt:359: WARNING: invalid single index entry u'')?
+(%(root)s/markup.txt:373: WARNING: invalid single index entry u'')?
 (%(root)s/undecodable.txt:3: WARNING: undecodable source characters, replacing \
 with "\\?": b?'here: >>>(\\\\|/)xbb<<<'
 )?"""

 HTML_WARNINGS = ENV_WARNINGS + """\
 %(root)s/images.txt:20: WARNING: no matching candidate for image URI u'foo.\\*'
-%(root)s/markup.txt:271: WARNING: Could not lex literal_block as "c". Highlighting skipped.
+%(root)s/markup.txt:285: WARNING: Could not lex literal_block as "c". Highlighting skipped.
 %(root)s/footnote.txt:60: WARNING: citation not found: missing
-%(root)s/markup.txt:160: WARNING: unknown option: &option
+%(root)s/markup.txt:164: WARNING: unknown option: &option
 """

 if PY3:
@@ -151,6 +151,8 @@ HTML_XPATH = {
         "[@class='reference internal']/code/span[@class='pre']", '^with$'),
        (".//a[@href='#grammar-token-try_stmt']"
         "[@class='reference internal']/code/span", '^statement$'),
+       (".//a[@href='#some-label'][@class='reference internal']/span", '^here$'),
+       (".//a[@href='#some-label'][@class='reference internal']/span", '^there$'),
        (".//a[@href='subdir/includes.html']"
         "[@class='reference internal']/span", 'Including in subdir'),
        (".//a[@href='objects.html#cmdoption-python-c']"
@@ -274,6 +276,9 @@ HTML_XPATH = {
         'http://sphinx-doc.org/'),
        (".//a[@class='reference external'][@href='http://sphinx-doc.org/latest/']",
         'Latest reference'),
+       # Indirect hyperlink targets across files
+       (".//a[@href='markup.html#some-label'][@class='reference internal']/span",
+        '^indirect hyperref$'),
    ],
    'bom.html': [
        (".//title", " File with UTF-8 BOM"),
@@ -24,10 +24,10 @@ from test_build_html import ENV_WARNINGS


 LATEX_WARNINGS = ENV_WARNINGS + """\
-%(root)s/markup.txt:160: WARNING: unknown option: &option
+%(root)s/markup.txt:164: WARNING: unknown option: &option
 %(root)s/footnote.txt:60: WARNING: citation not found: missing
 %(root)s/images.txt:20: WARNING: no matching candidate for image URI u'foo.\\*'
-%(root)s/markup.txt:271: WARNING: Could not lex literal_block as "c". Highlighting skipped.
+%(root)s/markup.txt:285: WARNING: Could not lex literal_block as "c". Highlighting skipped.
 """

 if PY3:
@@ -106,6 +106,20 @@ def test_latex(app, status, warning):
     run_latex(app.outdir)


+@with_app(buildername='latex')
+def test_writer(app, status, warning):
+    app.builder.build_all()
+    result = (app.outdir / 'SphinxTests.tex').text(encoding='utf8')
+
+    assert ('\\begin{wrapfigure}{r}{0pt}\n\\centering\n'
+            '\\includegraphics{{rimg}.png}\n\\caption{figure with align option}'
+            '\\label{markup:id7}\\end{wrapfigure}' in result)
+
+    assert ('\\begin{wrapfigure}{r}{0.500\\linewidth}\n\\centering\n'
+            '\\includegraphics{{rimg}.png}\n\\caption{figure with align \\& figwidth option}'
+            '\\label{markup:id8}\\end{wrapfigure}' in result)
+
+
 @with_app(buildername='latex', freshenv=True,  # use freshenv to check warnings
           confoverrides={'latex_documents': [
               ('contents', 'SphinxTests.tex', 'Sphinx Tests Documentation',
@@ -163,10 +177,10 @@ def test_numref(app, status, warning):
     print(result)
     print(status.getvalue())
     print(warning.getvalue())
-    assert '\\addto\\captionsenglish{\\renewcommand{\\figurename}{Fig. }}' in result
+    assert '\\addto\\captionsenglish{\\renewcommand{\\figurename}{Fig.\\@ }}' in result
     assert '\\addto\\captionsenglish{\\renewcommand{\\tablename}{Table }}' in result
     assert '\\SetupFloatingEnvironment{literal-block}{name=Listing }' in result
-    assert '\\hyperref[index:fig1]{Fig. \\ref{index:fig1}}' in result
+    assert '\\hyperref[index:fig1]{Fig.\\@ \\ref{index:fig1}}' in result
     assert '\\hyperref[baz:fig22]{Figure\\ref{baz:fig22}}' in result
     assert '\\hyperref[index:table-1]{Table \\ref{index:table-1}}' in result
     assert '\\hyperref[baz:table22]{Table:\\ref{baz:table22}}' in result
@@ -214,11 +228,11 @@ def test_numref_with_prefix2(app, status, warning):
     print(status.getvalue())
     print(warning.getvalue())
     assert '\\addto\\captionsenglish{\\renewcommand{\\figurename}{Figure:}}' in result
-    assert '\\def\\fnum@figure{\\figurename\\thefigure.}' in result
+    assert '\\def\\fnum@figure{\\figurename\\thefigure.\\@}' in result
     assert '\\addto\\captionsenglish{\\renewcommand{\\tablename}{Tab\\_}}' in result
     assert '\\def\\fnum@table{\\tablename\\thetable:}' in result
     assert '\\SetupFloatingEnvironment{literal-block}{name=Code-}' in result
-    assert '\\hyperref[index:fig1]{Figure:\\ref{index:fig1}.}' in result
+    assert '\\hyperref[index:fig1]{Figure:\\ref{index:fig1}.\\@}' in result
     assert '\\hyperref[baz:fig22]{Figure\\ref{baz:fig22}}' in result
     assert '\\hyperref[index:table-1]{Tab\\_\\ref{index:table-1}:}' in result
     assert '\\hyperref[baz:table22]{Table:\\ref{baz:table22}}' in result
@@ -268,8 +282,8 @@ def test_babel_with_no_language_settings(app, status, warning):
     assert '\\usepackage[Bjarne]{fncychap}' in result
     assert ('\\addto\\captionsenglish{\\renewcommand{\\contentsname}{Table of content}}\n'
             in result)
-    assert '\\addto\\captionsenglish{\\renewcommand{\\figurename}{Fig. }}\n' in result
-    assert '\\addto\\captionsenglish{\\renewcommand{\\tablename}{Table. }}\n' in result
+    assert '\\addto\\captionsenglish{\\renewcommand{\\figurename}{Fig.\\@ }}\n' in result
+    assert '\\addto\\captionsenglish{\\renewcommand{\\tablename}{Table.\\@ }}\n' in result
     assert '\\addto\\extrasenglish{\\def\\pageautorefname{page}}\n' in result
     assert '\\shorthandoff' not in result

@@ -288,8 +302,8 @@ def test_babel_with_language_de(app, status, warning):
     assert '\\usepackage[Sonny]{fncychap}' in result
     assert ('\\addto\\captionsngerman{\\renewcommand{\\contentsname}{Table of content}}\n'
             in result)
-    assert '\\addto\\captionsngerman{\\renewcommand{\\figurename}{Fig. }}\n' in result
-    assert '\\addto\\captionsngerman{\\renewcommand{\\tablename}{Table. }}\n' in result
+    assert '\\addto\\captionsngerman{\\renewcommand{\\figurename}{Fig.\\@ }}\n' in result
+    assert '\\addto\\captionsngerman{\\renewcommand{\\tablename}{Table.\\@ }}\n' in result
     assert '\\addto\\extrasngerman{\\def\\pageautorefname{page}}\n' in result
     assert '\\shorthandoff{"}' in result

@@ -308,8 +322,8 @@ def test_babel_with_language_ru(app, status, warning):
     assert '\\usepackage[Sonny]{fncychap}' in result
     assert ('\\addto\\captionsrussian{\\renewcommand{\\contentsname}{Table of content}}\n'
             in result)
-    assert '\\addto\\captionsrussian{\\renewcommand{\\figurename}{Fig. }}\n' in result
-    assert '\\addto\\captionsrussian{\\renewcommand{\\tablename}{Table. }}\n' in result
+    assert '\\addto\\captionsrussian{\\renewcommand{\\figurename}{Fig.\\@ }}\n' in result
+    assert '\\addto\\captionsrussian{\\renewcommand{\\tablename}{Table.\\@ }}\n' in result
     assert '\\addto\\extrasrussian{\\def\\pageautorefname{page}}\n' in result
     assert '\\shorthandoff' not in result

@@ -328,8 +342,8 @@ def test_babel_with_language_tr(app, status, warning):
     assert '\\usepackage[Sonny]{fncychap}' in result
     assert ('\\addto\\captionsturkish{\\renewcommand{\\contentsname}{Table of content}}\n'
             in result)
-    assert '\\addto\\captionsturkish{\\renewcommand{\\figurename}{Fig. }}\n' in result
-    assert '\\addto\\captionsturkish{\\renewcommand{\\tablename}{Table. }}\n' in result
+    assert '\\addto\\captionsturkish{\\renewcommand{\\figurename}{Fig.\\@ }}\n' in result
+    assert '\\addto\\captionsturkish{\\renewcommand{\\tablename}{Table.\\@ }}\n' in result
     assert '\\addto\\extrasturkish{\\def\\pageautorefname{sayfa}}\n' in result
     assert '\\shorthandoff{=}' in result

@@ -347,8 +361,8 @@ def test_babel_with_language_ja(app, status, warning):
     assert '\\usepackage{times}' in result
     assert '\\usepackage[Sonny]{fncychap}' not in result
     assert '\\renewcommand{\\contentsname}{Table of content}\n' in result
-    assert '\\renewcommand{\\figurename}{Fig. }\n' in result
-    assert '\\renewcommand{\\tablename}{Table. }\n' in result
+    assert '\\renewcommand{\\figurename}{Fig.\\@ }\n' in result
+    assert '\\renewcommand{\\tablename}{Table.\\@ }\n' in result
     assert u'\\def\\pageautorefname{ページ}\n' in result
     assert '\\shorthandoff' not in result

@@ -367,8 +381,8 @@ def test_babel_with_unknown_language(app, status, warning):
     assert '\\usepackage[Sonny]{fncychap}' in result
     assert ('\\addto\\captionsenglish{\\renewcommand{\\contentsname}{Table of content}}\n'
             in result)
-    assert '\\addto\\captionsenglish{\\renewcommand{\\figurename}{Fig. }}\n' in result
-    assert '\\addto\\captionsenglish{\\renewcommand{\\tablename}{Table. }}\n' in result
+    assert '\\addto\\captionsenglish{\\renewcommand{\\figurename}{Fig.\\@ }}\n' in result
+    assert '\\addto\\captionsenglish{\\renewcommand{\\tablename}{Table.\\@ }}\n' in result
     assert '\\addto\\extrasenglish{\\def\\pageautorefname{page}}\n' in result
     assert '\\shorthandoff' not in result

@@ -483,21 +497,22 @@ def test_latex_show_urls_is_footnote(app, status, warning):
     assert 'First footnote: \\footnote[3]{\sphinxAtStartFootnote%\nFirst\n}' in result
     assert 'Second footnote: \\footnote[1]{\sphinxAtStartFootnote%\nSecond\n}' in result
     assert ('\\href{http://sphinx-doc.org/}{Sphinx}'
-            '\\footnote[4]{\sphinxAtStartFootnote%\nhttp://sphinx-doc.org/\n}' in result)
+            '\\footnote[4]{\sphinxAtStartFootnote%\n'
+            '\\nolinkurl{http://sphinx-doc.org/}\n}' in result)
     assert 'Third footnote: \\footnote[6]{\sphinxAtStartFootnote%\nThird\n}' in result
     assert ('\\href{http://sphinx-doc.org/~test/}{URL including tilde}'
             '\\footnote[5]{\sphinxAtStartFootnote%\n'
-            'http://sphinx-doc.org/\\textasciitilde{}test/\n}' in result)
+            '\\nolinkurl{http://sphinx-doc.org/~test/}\n}' in result)
     assert ('\\item[{\\href{http://sphinx-doc.org/}{URL in term}\\protect\\footnotemark[8]}] '
             '\\leavevmode\\footnotetext[8]{\sphinxAtStartFootnote%\n'
-            'http://sphinx-doc.org/\n}\nDescription' in result)
+            '\\nolinkurl{http://sphinx-doc.org/}\n}\nDescription' in result)
     assert ('\\item[{Footnote in term \\protect\\footnotemark[10]}] '
             '\\leavevmode\\footnotetext[10]{\sphinxAtStartFootnote%\n'
             'Footnote in term\n}\nDescription' in result)
     assert ('\\item[{\\href{http://sphinx-doc.org/}{Term in deflist}\\protect'
             '\\footnotemark[9]}] '
             '\\leavevmode\\footnotetext[9]{\sphinxAtStartFootnote%\n'
-            'http://sphinx-doc.org/\n}\nDescription' in result)
+            '\\nolinkurl{http://sphinx-doc.org/}\n}\nDescription' in result)
     assert ('\\url{https://github.com/sphinx-doc/sphinx}\n' in result)
     assert ('\\href{mailto:sphinx-dev@googlegroups.com}'
             '{sphinx-dev@googlegroups.com}\n' in result)
@@ -575,6 +590,7 @@ def test_toctree_maxdepth_manual(app, status, warning):
     print(status.getvalue())
     print(warning.getvalue())
     assert '\\setcounter{tocdepth}{1}' in result
+    assert '\\setcounter{secnumdepth}' not in result
 
 
 @with_app(buildername='latex', testroot='toctree-maxdepth',
@@ -589,6 +605,7 @@ def test_toctree_maxdepth_howto(app, status, warning):
     print(status.getvalue())
     print(warning.getvalue())
     assert '\\setcounter{tocdepth}{2}' in result
+    assert '\\setcounter{secnumdepth}' not in result
 
 
 @with_app(buildername='latex', testroot='toctree-maxdepth',
@@ -600,6 +617,7 @@ def test_toctree_not_found(app, status, warning):
     print(status.getvalue())
     print(warning.getvalue())
     assert '\\setcounter{tocdepth}' not in result
+    assert '\\setcounter{secnumdepth}' not in result
 
 
 @with_app(buildername='latex', testroot='toctree-maxdepth',
@@ -611,6 +629,19 @@ def test_toctree_without_maxdepth(app, status, warning):
     print(status.getvalue())
     print(warning.getvalue())
     assert '\\setcounter{tocdepth}' not in result
+    assert '\\setcounter{secnumdepth}' not in result
 
 
+@with_app(buildername='latex', testroot='toctree-maxdepth',
+          confoverrides={'master_doc': 'qux'})
+def test_toctree_with_deeper_maxdepth(app, status, warning):
+    app.builder.build_all()
+    result = (app.outdir / 'Python.tex').text(encoding='utf8')
+    print(result)
+    print(status.getvalue())
+    print(warning.getvalue())
+    assert '\\setcounter{tocdepth}{3}' in result
+    assert '\\setcounter{secnumdepth}{3}' in result
+
+
 @with_app(buildername='latex', testroot='toctree-maxdepth',
@@ -23,7 +23,7 @@ from test_build_html import ENV_WARNINGS
 
 
 TEXINFO_WARNINGS = ENV_WARNINGS + """\
-%(root)s/markup.txt:160: WARNING: unknown option: &option
+%(root)s/markup.txt:164: WARNING: unknown option: &option
 %(root)s/footnote.txt:60: WARNING: citation not found: missing
 %(root)s/images.txt:20: WARNING: no matching candidate for image URI u'foo.\\*'
 %(root)s/images.txt:29: WARNING: no matching candidate for image URI u'svgimg.\\*'
--- /dev/null
+++ b/tests/test_correct_year.py
@@ -0,0 +1,49 @@
+# -*- coding: utf-8 -*-
+"""
+    test_correct_year
+    ~~~~~~~~~~~~~~~~~
+
+    Test copyright year adjustment
+
+    :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+import os
+
+from util import TestApp
+
+
+def test_correct_year():
+    try:
+        # save current value of SOURCE_DATE_EPOCH
+        sde = os.environ.pop('SOURCE_DATE_EPOCH', None)
+
+        # test with SOURCE_DATE_EPOCH unset: no modification
+        app = TestApp(buildername='html', testroot='correct-year')
+        app.builder.build_all()
+        content = (app.outdir / 'contents.html').text()
+        app.cleanup()
+        assert '2006-2009' in content
+
+        # test with SOURCE_DATE_EPOCH set: copyright year should be
+        # updated
+        os.environ['SOURCE_DATE_EPOCH'] = "1293840000"
+        app = TestApp(buildername='html', testroot='correct-year')
+        app.builder.build_all()
+        content = (app.outdir / 'contents.html').text()
+        app.cleanup()
+        assert '2006-2011' in content
+
+        os.environ['SOURCE_DATE_EPOCH'] = "1293839999"
+        app = TestApp(buildername='html', testroot='correct-year')
+        app.builder.build_all()
+        content = (app.outdir / 'contents.html').text()
+        app.cleanup()
+        assert '2006-2010' in content
+
+    finally:
+        # Restores SOURCE_DATE_EPOCH
+        if sde is None:
+            os.environ.pop('SOURCE_DATE_EPOCH', None)
+        else:
+            os.environ['SOURCE_DATE_EPOCH'] = sde
@@ -24,7 +24,7 @@ def setup_module():
     global app, env
     app = TestApp(srcdir='root-envtest')
    env = app.env
-    env.set_warnfunc(lambda *args: warnings.append(args))
+    env.set_warnfunc(lambda *args, **kwargs: warnings.append(args))
 
 
 def teardown_module():
@@ -55,6 +55,10 @@ def test_graphviz_html(app, status, warning):
     html = '<img src=".*?" alt="digraph {\n bar -> baz\n}" />'
     assert re.search(html, content, re.M)
 
+    html = ('<div class="figure align-right" .*?>\s*<img .*?/>\s*<p class="caption">'
+            '<span class="caption-text">on right</span>.*</p>\s*</div>')
+    assert re.search(html, content, re.S)
+
 
 @with_app('latex', testroot='ext-graphviz')
 @skip_if_graphviz_not_found
|
|||||||
macro = 'Hello \\\\includegraphics{graphviz-\w+.pdf} graphviz world'
|
macro = 'Hello \\\\includegraphics{graphviz-\w+.pdf} graphviz world'
|
||||||
assert re.search(macro, content, re.S)
|
assert re.search(macro, content, re.S)
|
||||||
|
|
||||||
|
macro = ('\\\\begin{wrapfigure}{r}{0pt}\n\\\\centering\n'
|
||||||
|
'\\\\includegraphics{graphviz-\w+.pdf}\n'
|
||||||
|
'\\\\caption{on right}\\\\label{.*}\\\\end{wrapfigure}')
|
||||||
|
assert re.search(macro, content, re.S)
|
||||||
|
|
||||||
|
|
||||||
@with_app('html', testroot='ext-graphviz', confoverrides={'language': 'xx'})
|
@with_app('html', testroot='ext-graphviz', confoverrides={'language': 'xx'})
|
||||||
@skip_if_graphviz_not_found
|
@skip_if_graphviz_not_found
|
||||||
|
@@ -161,7 +161,7 @@ def test_missing_reference(tempdir, app, status, warning):
 def test_load_mappings_warnings(tempdir, app, status, warning):
     """
     load_mappings issues a warning if new-style mapping
-    identifiers are not alphanumeric
+    identifiers are not string
     """
     inv_file = tempdir / 'inventory'
     inv_file.write_bytes(inventory_v2)
@@ -170,13 +170,14 @@ def test_load_mappings_warnings(tempdir, app, status, warning):
         'py3k': ('https://docs.python.org/py3k/', inv_file),
         'repoze.workflow': ('http://docs.repoze.org/workflow/', inv_file),
         'django-taggit': ('http://django-taggit.readthedocs.org/en/latest/',
-                          inv_file)
+                          inv_file),
+        12345: ('http://www.sphinx-doc.org/en/stable/', inv_file),
     }
 
     app.config.intersphinx_cache_limit = 0
     # load the inventory and check if it's done correctly
     load_mappings(app)
-    assert warning.getvalue().count('\n') == 2
+    assert warning.getvalue().count('\n') == 1
 
 
 class TestStripBasicAuth(unittest.TestCase):
@@ -199,6 +200,16 @@ class TestStripBasicAuth(unittest.TestCase):
         self.assertEqual(None, actual_username)
         self.assertEqual(None, actual_password)
 
+    def test_having_port(self):
+        """basic auth creds correctly stripped from URL containing creds even if URL
+        contains port"""
+        url = 'https://user:12345@domain.com:8080/project/objects.inv'
+        expected = 'https://domain.com:8080/project/objects.inv'
+        actual_url, actual_username, actual_password = _strip_basic_auth(url)
+        self.assertEqual(expected, actual_url)
+        self.assertEqual('user', actual_username)
+        self.assertEqual('12345', actual_password)
+
 
 @mock.patch('six.moves.urllib.request.HTTPBasicAuthHandler')
 @mock.patch('six.moves.urllib.request.HTTPPasswordMgrWithDefaultRealm')
@@ -10,6 +10,7 @@
 """
 
 import re
+import pickle
 
 from docutils import frontend, utils, nodes
 from docutils.parsers import rst
@@ -18,7 +19,7 @@ from sphinx.util import texescape
 from sphinx.writers.html import HTMLWriter, SmartyPantsHTMLTranslator
 from sphinx.writers.latex import LaTeXWriter, LaTeXTranslator
 
-from util import TestApp
+from util import TestApp, with_app, assert_node
 
 
 app = settings = parser = None
@@ -142,3 +143,27 @@ def test_latex_escaping():
     # in URIs
     yield (verify_re, u'`test <http://example.com/~me/>`_', None,
            r'\\href{http://example.com/~me/}{test}.*')
+
+
+@with_app(buildername='dummy', testroot='prolog')
+def test_rst_prolog(app, status, warning):
+    app.builder.build_all()
+    rst = pickle.loads((app.doctreedir / 'restructuredtext.doctree').bytes())
+    md = pickle.loads((app.doctreedir / 'markdown.doctree').bytes())
+
+    # rst_prolog
+    assert_node(rst[0], nodes.paragraph)
+    assert_node(rst[0][0], nodes.emphasis)
+    assert_node(rst[0][0][0], nodes.Text)
+    assert rst[0][0][0] == 'Hello world'
+
+    # rst_epilog
+    assert_node(rst[-1], nodes.section)
+    assert_node(rst[-1][-1], nodes.paragraph)
+    assert_node(rst[-1][-1][0], nodes.emphasis)
+    assert_node(rst[-1][-1][0][0], nodes.Text)
+    assert rst[-1][-1][0][0] == 'Good-bye world'
+
+    # rst_prolog & rst_epilog on exlucding reST parser
+    assert not md.rawsource.startswith('*Hello world*.')
+    assert not md.rawsource.endswith('*Good-bye world*.\n')
@@ -28,6 +28,17 @@ def setup_module():
     parser = rst.Parser()
 
 
+def jsload(path):
+    searchindex = path.text()
+    assert searchindex.startswith('Search.setIndex(')
+
+    return jsdump.loads(searchindex[16:-2])
+
+
+def is_registered_term(index, keyword):
+    return index['terms'].get(keyword, []) != []
+
+
 FILE_CONTENTS = '''\
 .. test that comments are not indexed: boson
 
@@ -54,30 +65,31 @@ def test_objects_are_escaped(app, status, warning):
     index = jsdump.loads(searchindex[16:-2])
     assert 'n::Array<T, d>' in index.get('objects').get('')  # n::Array<T,d> is escaped
 
-def assert_lang_agnostic_key_words(searchindex):
-    assert 'thisnoteith' not in searchindex
-    assert 'thisonetoo' in searchindex
-
 
 @with_app(testroot='search')
 def test_meta_keys_are_handled_for_language_en(app, status, warning):
-    os.remove(app.outdir / 'searchindex.js')
     app.builder.build_all()
-    searchindex = (app.outdir / 'searchindex.js').text()
-    assert_lang_agnostic_key_words(searchindex)
-    assert 'findthiskei' in searchindex
-    assert 'onlygerman' not in searchindex
-    assert 'thistoo' in searchindex
+    searchindex = jsload(app.outdir / 'searchindex.js')
+    assert not is_registered_term(searchindex, 'thisnoteith')
+    assert is_registered_term(searchindex, 'thisonetoo')
+    assert is_registered_term(searchindex, 'findthiskei')
+    assert is_registered_term(searchindex, 'thistoo')
+    assert not is_registered_term(searchindex, 'onlygerman')
+    assert is_registered_term(searchindex, 'notgerman')
+    assert not is_registered_term(searchindex, 'onlytoogerman')
 
 
 @with_app(testroot='search', confoverrides={'html_search_language': 'de'})
 def test_meta_keys_are_handled_for_language_de(app, status, warning):
     app.builder.build_all()
-    searchindex = (app.outdir / 'searchindex.js').text()
-    assert_lang_agnostic_key_words(searchindex)
-    assert 'onlygerman' in searchindex
-    assert 'notgerman' not in searchindex
-    assert 'onlytoogerman' in searchindex
+    searchindex = jsload(app.outdir / 'searchindex.js')
+    assert not is_registered_term(searchindex, 'thisnoteith')
+    assert is_registered_term(searchindex, 'thisonetoo')
+    assert not is_registered_term(searchindex, 'findthiskei')
+    assert not is_registered_term(searchindex, 'thistoo')
+    assert is_registered_term(searchindex, 'onlygerman')
+    assert not is_registered_term(searchindex, 'notgerman')
+    assert is_registered_term(searchindex, 'onlytoogerman')
 
 
 @with_app(testroot='search')
@@ -109,10 +109,10 @@ try:
 except ImportError:
     def assert_in(x, thing, msg=''):
         if x not in thing:
-            assert False, msg or '%r is not in %r%r' % (x, thing)
+            assert False, msg or '%r is not in %r' % (x, thing)
     def assert_not_in(x, thing, msg=''):
         if x in thing:
-            assert False, msg or '%r is in %r%r' % (x, thing)
+            assert False, msg or '%r is in %r' % (x, thing)
 
 
 def skip_if(condition, msg=None):
@@ -8,13 +8,15 @@ Release checklist
 * Update release date in CHANGES
 * `git commit`
 * `make clean`
-* `python setup.py release bdist_wheel sdist upload`
+* `python setup.py release bdist_wheel sdist upload --identify=[your key]`
 * Check PyPI release page for obvious errors
 * `git tag` with version number
 * Merge default into stable if final major release
-* Update homepage (release info), regenerate docs (+printable!)
+* `git push stable --tags`
+* Open https://readthedocs.org/dashboard/sphinx/versions/ and enable the released version
 * Add new version/milestone to tracker categories
-* Write announcement and send to mailing list/python-announce
+* Write announcement and send to sphinx-dev, sphinx-users and python-announce
 * Update version info, add new CHANGES entry for next version
 * `git commit`
 * `git push`
+* Update `sphinx-doc-translations <https://github.com/sphinx-doc/sphinx-doc-translations>`_
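The new `tests/test_correct_year.py` above exercises `SOURCE_DATE_EPOCH`, the reproducible-builds convention for overriding build timestamps: with the variable unset the copyright range is left alone, and with it set the end year follows the epoch value. A minimal sketch of the behavior those assertions imply — `adjust_copyright` is a hypothetical helper for illustration, not Sphinx's actual implementation:

```python
import os
import re
import time


def adjust_copyright(copyright_line):
    """Replace the end year of a 'YYYY-YYYY' range with the year derived
    from SOURCE_DATE_EPOCH (seconds since the Unix epoch, UTC); leave the
    line untouched when the variable is unset.

    Hypothetical sketch mirroring the test's expectations.
    """
    sde = os.environ.get('SOURCE_DATE_EPOCH')
    if sde is None:
        return copyright_line  # no modification without SOURCE_DATE_EPOCH
    year = time.gmtime(int(sde)).tm_year
    return re.sub(r'(\d{4})-\d{4}',
                  lambda m: '%s-%d' % (m.group(1), year),
                  copyright_line)
```

With `SOURCE_DATE_EPOCH=1293840000` (2011-01-01 00:00:00 UTC) this turns `2006-2009` into `2006-2011`, and one second earlier (`1293839999`, still 2010 in UTC) yields `2006-2010`, matching the three cases the test builds.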