Merge with default

tk0miya 2014-09-24 00:43:47 +09:00
commit 3ad1f1c164
162 changed files with 5035 additions and 4519 deletions

View File

@ -7,15 +7,15 @@
^build/
^dist/
^tests/.coverage
^tests/build/
^sphinx/pycode/Grammar.*pickle
^Sphinx.egg-info/
^doc/_build/
^TAGS
^\.tags
^\.ropeproject/
^env/
\.DS_Store$
~$
^utils/.*3\.py$
^distribute-
^tests/root/_build/*
^tests/root/generated/*

CHANGES
View File

@ -12,12 +12,14 @@ Incompatible changes
* A new node, ``sphinx.addnodes.literal_strong``, has been added, for text that
should appear literally (i.e. no smart quotes) in strong font. Custom writers
will have to be adapted to handle this node.
* PR#269, #1476: replace `<tt>` tag by `<code>`. User customized stylesheets
should be updated If the css contain some styles for `<tt>` tag.
* PR#269, #1476: replace ``<tt>`` tag by ``<code>``. User customized stylesheets
should be updated if the CSS contains some styles for the ``<tt>`` tag.
Thanks to Takeshi Komiya.
* #1543: :confval:`templates_path` is automatically added to
:confval:`exclude_patterns` to avoid reading autosummary rst templates in the
* #1543: `templates_path` is automatically added to
`exclude_patterns` to avoid reading autosummary rst templates in the
templates directory.
* Custom domains should implement the new `Domain.resolve_any_xref`
method to make the `any` role work properly.
Features added
--------------
@ -26,22 +28,31 @@ Features added
* Add support for docutils 0.12
* Added ``sphinx.ext.napoleon`` extension for NumPy and Google style docstring
support.
* Added support for parallel reading (parsing) of source files with the
`sphinx-build -j` option. Third-party extensions will need to be checked for
compatibility and may need to be adapted if they store information in the
build environment object. See `env-merge-info`.
* Added the `any` role that can be used to find a cross-reference of
*any* type in *any* domain. Custom domains should implement the new
`Domain.resolve_any_xref` method to make this work properly.
* Exception logs now contain the last 10 messages emitted by Sphinx.
* Added support for extension versions (a string returned by ``setup()``, these
can be shown in the traceback log files). Version requirements for extensions
can be specified in projects using the new :confval:`needs_extensions` config
can be specified in projects using the new `needs_extensions` config
value.
* Changing the default role within a document with the :dudir:`default-role`
directive is now supported.
* PR#214: Added stemming support for 14 languages, so that the built-in document
search can now handle these. Thanks to Shibukawa Yoshiki.
* PR#202: Allow "." and "~" prefixed references in ``:param:`` doc fields
for Python.
* PR#184: Add :confval:`autodoc_mock_imports`, allowing to mock imports of
* PR#184: Add `autodoc_mock_imports`, which allows mocking imports of
external modules that need not be present when autodocumenting.
* #925: Allow list-typed config values to be provided on the command line,
like ``-D key=val1,val2``.
* #668: Allow line numbering of ``code-block`` and ``literalinclude`` directives
* #668: Allow line numbering of `code-block` and `literalinclude` directives
to start at an arbitrary line number, with a new ``lineno-start`` option.
* PR#172, PR#266: The :rst:dir:`code-block` and :rst:dir:`literalinclude`
* PR#172, PR#266: The `code-block` and `literalinclude`
directives now can have a ``caption`` option that shows a filename before the
code in the output. Thanks to Nasimul Haque, Takeshi Komiya.
* Prompt for the document language in sphinx-quickstart.
@ -56,135 +67,43 @@ Features added
for the ids defined on the node. Thanks to Olivier Heurtier.
* PR#229: Allow registration of other translators. Thanks to Russell Sim.
* Add app.set_translator() API to register or override a Docutils translator
class like :confval:`html_translator_class`.
class like `html_translator_class`.
* PR#267, #1134: add 'diff' parameter to literalinclude. Thanks to Richard Wall
and WAKAYAMA shirou.
* PR#272: Added 'bizstyle' theme. Thanks to Shoji KUMAGAI.
* Automatically compile ``*.mo`` files from ``*.po`` files when
:confval:`gettext_auto_build` is True (default) and ``*.po`` is newer than
`gettext_auto_build` is True (default) and ``*.po`` is newer than
``*.mo`` file.
* #623: :mod:`~sphinx.ext.viewcode` supports imported function/class aliases.
* PR#275: :mod:`~sphinx.ext.intersphinx` supports multiple target for the
* #623: `sphinx.ext.viewcode` supports imported function/class aliases.
* PR#275: `sphinx.ext.intersphinx` supports multiple target for the
inventory. Thanks to Brigitta Sipocz.
* PR#261: Added the `env-before-read-docs` event that can be connected to modify
the order of documents before they are read by the environment.
* #1284: Program options documented with :rst:dir:`option` can now start with
``+``.
* PR#291: The caption of :rst:dir:`code-block` is recognised as a title of ref
target. Thanks to Takeshi Komiya.
Bugs fixed
----------
* #1568: fix a crash when a "centered" directive contains a reference.
* #1568: Fix a crash when a "centered" directive contains a reference.
* #1563: :meth:`~sphinx.application.Sphinx.add_search_language` raises
AssertionError for correct type of argument. Thanks to rikoman.
* #1174: Fix smart quotes being applied inside roles like :rst:role:`program` or
:rst:role:`makevar`.
* #1335: Fix autosummary template overloading with exclamation prefix like
``{% extends "!autosummary/class.rst" %}`` cause infinite recursive function
call. This was caused by PR#181.
* #1337: Fix autodoc with ``autoclass_content="both"`` uses useless
``object.__init__`` docstring when class does not have ``__init__``.
This was caused by a change for #1138.
* #1340: Can't search alphabetical words on the HTML quick search generated
with language='ja'.
* #1319: Do not crash if the :confval:`html_logo` file does not exist.
* #603: Do not use the HTML-ized title for building the search index (that
resulted in "literal" being found on every page with a literal in the
title).
* #751: Allow production lists longer than a page in LaTeX by using longtable.
* #764: Always look for stopwords lowercased in JS search.
* #814: autodoc: Guard against strange type objects that don't have
``__bases__``.
* #932: autodoc: Do not crash if ``__doc__`` is not a string.
* #933: Do not crash if an :rst:role:`option` value is malformed (contains
spaces but no option name).
* #908: On Python 3, handle error messages from LaTeX correctly in the pngmath
extension.
* #943: In autosummary, recognize "first sentences" to pull from the docstring
if they contain uppercase letters.
* #923: Take the entire LaTeX document into account when caching
pngmath-generated images. This rebuilds them correctly when
:confval:`pngmath_latex_preamble` changes.
* #901: Emit a warning when using docutils' new "math" markup without a Sphinx
math extension active.
* #845: In code blocks, when the selected lexer fails, display line numbers
nevertheless if configured.
* #929: Support parsed-literal blocks in LaTeX output correctly.
* #949: Update the tabulary.sty packed with Sphinx.
* #1050: Add anonymous labels into ``objects.inv`` to be referenced via
:mod:`~sphinx.ext.intersphinx`.
* #1095: Fix print-media stylesheet being included always in the "scrolls"
theme.
* #1085: Fix current classname not getting set if class description has
``:noindex:`` set.
* #1181: Report option errors in autodoc directives more gracefully.
* #1155: Fix autodocumenting C-defined methods as attributes in Python 3.
* #1233: Allow finding both Python classes and exceptions with the "class" and
"exc" roles in intersphinx.
* #1198: Allow "image" for the "figwidth" option of the :rst:dir:`figure`
directive as documented by docutils.
* #1152: Fix pycode parsing errors of Python 3 code by including two grammar
versions for Python 2 and 3, and loading the appropriate version for the
running Python version.
* #1017: Be helpful and tell the user when the argument to :rst:dir:`option`
does not match the required format.
* #1345: Fix two bugs with :confval:`nitpick_ignore`; now you don't have to
remove the store environment for changes to have effect.
* #1072: In the JS search, fix issues searching for upper-cased words by
lowercasing words before stemming.
* #1299: Make behavior of the :rst:dir:`math` directive more consistent and
avoid producing empty environments in LaTeX output.
* #1308: Strip HTML tags from the content of "raw" nodes before feeding it
to the search indexer.
* #1249: Fix duplicate LaTeX page numbering for manual documents.
* #1292: In the linkchecker, retry HEAD requests when denied by HTTP 405.
Also make the redirect code apparent and tweak the output a bit to be
more obvious.
* #1285: Avoid name clashes between C domain objects and section titles.
* #848: Always take the newest code in incremental rebuilds with the
:mod:`sphinx.ext.viewcode` extension.
* #979, #1266: Fix exclude handling in ``sphinx-apidoc``.
* #1302: Fix regression in :mod:`sphinx.ext.inheritance_diagram` when
documenting classes that can't be pickled.
* #1316: Remove hard-coded ``font-face`` resources from epub theme.
* #1329: Fix traceback with empty translation msgstr in .po files.
* #1300: Fix references not working in translated documents in some instances.
* #1283: Fix a bug in the detection of changed files that would try to access
doctrees of deleted documents.
* #1330: Fix :confval:`exclude_patterns` behavior with subdirectories in the
:confval:`html_static_path`.
* #1323: Fix emitting empty ``<ul>`` tags in the HTML writer, which is not
valid HTML.
* #1147: Don't emit a sidebar search box in the "singlehtml" builder.
* PR#211: When checking for existence of the :confval:`html_logo` file, check
the full relative path and not the basename.
* #1357: Option names documented by :rst:dir:`option` are now again allowed to
not start with a dash or slash, and referencing them will work correctly.
* #1358: Fix handling of image paths outside of the source directory when using
the "wildcard" style reference.
* #1374: Fix for autosummary generating overly-long summaries if first line
doesn't end with a period.
* #1391: Actually prevent using "pngmath" and "mathjax" extensions at the same
time in sphinx-quickstart.
* #1386: Fix bug preventing more than one theme being added by the entry point
mechanism.
* #1370: Ignore "toctree" nodes in text writer, instead of raising.
* #1364: Fix 'make gettext' fails when the '.. todolist::' directive is present.
* #1367: Fix a change of PR#96 that break sphinx.util.docfields.Field.make_field
interface/behavior for `item` argument usage.
* #1363: Fix i18n: missing python domain's cross-references with currentmodule
directive or currentclass directive.
* #1419: Generated i18n sphinx.js files are missing message catalog entries
from '.js_t' and '.html'. The issue was introduced in Sphinx 1.1.
* #636: Keep straight single quotes in literal blocks in the LaTeX build.
`makevar`.
* PR#235: comment db schema of websupport lacked a length of the node_id field.
Thanks to solos.
* #1466,PR#241: Fix failure of the cpp domain parser to parse C++11
"variadic templates" declarations. Thanks to Victor Zverovich.
* #1459,PR#244: Fix default mathjax js path point to `http://` that cause
* #1459,PR#244: Fix default mathjax js path pointing to ``http://``, which causes
mixed-content errors on HTTPS servers. Thanks to sbrandtb and robo9k.
* PR#157: autodoc remove spurious signatures from @property decorated
attributes. Thanks to David Ham.
* PR#159: Add coverage targets to quickstart generated Makefile and make.bat.
Thanks to Matthias Troffaes.
* #1251: When specifying toctree :numbered: option and :tocdepth: metadata,
sub section number that is larger depth than `:tocdepth:` is shrinked.
subsection numbers deeper than ``:tocdepth:`` are now shrunk.
* PR#260: Encode underscore in citation labels for latex export. Thanks to
Lennart Fricke.
* PR#264: Fix could not resolve xref for figure node with :name: option.
@ -208,8 +127,8 @@ Bugs fixed
qualified name. It should be rather easy to change this behaviour and
potentially index by namespaces/classes as well.
* PR#258, #939: Add dedent option for :rst:dir:`code-block` and
:rst:dir:`literal-include`. Thanks to Zafar Siddiqui.
* PR#258, #939: Add dedent option for `code-block` and
`literalinclude`. Thanks to Zafar Siddiqui.
* PR#268: Fix section numbering not working in singlehtml mode. This is still an
ad-hoc fix because there is an issue with conflicting section IDs.
Thanks to Takeshi Komiya.
@ -217,20 +136,18 @@ Bugs fixed
Takeshi Komiya.
* PR#274: Set its URL as a default title value if URL appears in toctree.
Thanks to Takeshi Komiya.
* PR#276, #1381: :rst:role:`rfc` and :rst:role:`pep` roles support custom link
* PR#276, #1381: `rfc` and `pep` roles support custom link
text. Thanks to Takeshi Komiya.
* PR#277, #1513: highlight function pointers in the argument list of
:rst:dir:`c:function`. Thanks to Takeshi Komiya.
`c:function`. Thanks to Takeshi Komiya.
* PR#278: Fix section entries were shown twice if toctree has been put under
only directive. Thanks to Takeshi Komiya.
* #1547: pgen2 tokenizer doesn't recognize `...` literal (Ellipsis for py3).
* #1547: pgen2 tokenizer doesn't recognize ``...`` literal (Ellipsis for py3).
Documentation
-------------
* Add clarification about the syntax of tags. (:file:`doc/markup/misc.rst`)
* #1325: Added a "Intersphinx" tutorial section. (:file:`doc/tutorial.rst`)
* Extended the :ref:`documentation about building extensions <dev-extensions>`.
Release 1.2.3 (released Sep 1, 2014)
@ -239,7 +156,7 @@ Release 1.2.3 (released Sep 1, 2014)
Features added
--------------
* #1518: `sphinx-apidoc` command now have a `--version` option to show version
* #1518: ``sphinx-apidoc`` command now has a ``--version`` option to show version
information and exit
* New locales: Hebrew, European Portuguese, Vietnamese.
@ -257,14 +174,14 @@ Bugs fixed
Thanks to Jorge_C.
* #1467: Exception on Python3 if nonexistent method is specified by automethod
* #1441: autosummary can't handle nested classes correctly.
* #1499: With non-callable `setup` in a conf.py, now sphinx-build emits
user-friendly error message.
* #1499: With non-callable ``setup`` in a conf.py, now sphinx-build emits
a user-friendly error message.
* #1502: In autodoc, fix display of parameter defaults containing backslashes.
* #1226: autodoc, autosummary: importing setup.py by automodule will invoke
setup process and execute `sys.exit()`. Now sphinx avoids SystemExit
setup process and execute ``sys.exit()``. Now sphinx avoids SystemExit
exception and emits warnings without unexpected termination.
* #1503: py:function directive generates an incorrect signature when specifying
a default parameter with an empty list `[]`. Thanks to Geert Jansen.
a default parameter with an empty list ``[]``. Thanks to Geert Jansen.
* #1508: Non-ASCII filenames raise an exception on make singlehtml, latex, man,
texinfo and changes.
* #1531: On Python3 environment, docutils.conf with 'source_link=true' in the
@ -274,11 +191,11 @@ Bugs fixed
* PR#281, PR#282, #1509: TODO extension not compatible with websupport. Thanks
to Takeshi Komiya.
* #1477: gettext does not extract nodes.line in a table or list.
* #1544: `make text` generate wrong table when it has empty table cells.
* #1544: ``make text`` generates a wrong table when it has empty table cells.
* #1522: Footnotes from a table get displayed twice in LaTeX. This problem has
appeared since Sphinx 1.2.1 because of #949.
* #508: Sphinx always exits with zero when invoked from a setup.py command.
ex. `python setup.py build_sphinx -b doctest` return zero even if doctest
ex. ``python setup.py build_sphinx -b doctest`` returns zero even if doctests
failed.
Release 1.2.2 (released Mar 2, 2014)
@ -287,7 +204,7 @@ Release 1.2.2 (released Mar 2, 2014)
Bugs fixed
----------
* PR#211: When checking for existence of the :confval:`html_logo` file, check
* PR#211: When checking for existence of the `html_logo` file, check
the full relative path and not the basename.
* PR#212: Fix traceback with autodoc and ``__init__`` methods without docstring.
* PR#213: Fix a missing import in the setup command.
@ -305,7 +222,7 @@ Bugs fixed
* #1370: Ignore "toctree" nodes in text writer, instead of raising.
* #1364: Fix 'make gettext' fails when the '.. todolist::' directive is present.
* #1367: Fix a change of PR#96 that breaks sphinx.util.docfields.Field.make_field
interface/behavior for `item` argument usage.
interface/behavior for ``item`` argument usage.
Documentation
-------------
@ -327,7 +244,7 @@ Bugs fixed
This was caused by a change for #1138.
* #1340: Can't search alphabetical words on the HTML quick search generated
with language='ja'.
* #1319: Do not crash if the :confval:`html_logo` file does not exist.
* #1319: Do not crash if the `html_logo` file does not exist.
* #603: Do not use the HTML-ized title for building the search index (that
resulted in "literal" being found on every page with a literal in the
title).
@ -344,7 +261,7 @@ Bugs fixed
if they contain uppercase letters.
* #923: Take the entire LaTeX document into account when caching
pngmath-generated images. This rebuilds them correctly when
:confval:`pngmath_latex_preamble` changes.
`pngmath_latex_preamble` changes.
* #901: Emit a warning when using docutils' new "math" markup without a Sphinx
math extension active.
* #845: In code blocks, when the selected lexer fails, display line numbers
@ -361,14 +278,14 @@ Bugs fixed
* #1155: Fix autodocumenting C-defined methods as attributes in Python 3.
* #1233: Allow finding both Python classes and exceptions with the "class" and
"exc" roles in intersphinx.
* #1198: Allow "image" for the "figwidth" option of the :rst:dir:`figure`
* #1198: Allow "image" for the "figwidth" option of the :dudir:`figure`
directive as documented by docutils.
* #1152: Fix pycode parsing errors of Python 3 code by including two grammar
versions for Python 2 and 3, and loading the appropriate version for the
running Python version.
* #1017: Be helpful and tell the user when the argument to :rst:dir:`option`
does not match the required format.
* #1345: Fix two bugs with :confval:`nitpick_ignore`; now you don't have to
* #1345: Fix two bugs with `nitpick_ignore`; now you don't have to
remove the stored environment for changes to have effect.
* #1072: In the JS search, fix issues searching for upper-cased words by
lowercasing words before stemming.
@ -391,8 +308,8 @@ Bugs fixed
* #1300: Fix references not working in translated documents in some instances.
* #1283: Fix a bug in the detection of changed files that would try to access
doctrees of deleted documents.
* #1330: Fix :confval:`exclude_patterns` behavior with subdirectories in the
:confval:`html_static_path`.
* #1330: Fix `exclude_patterns` behavior with subdirectories in the
`html_static_path`.
* #1323: Fix emitting empty ``<ul>`` tags in the HTML writer, which is not
valid HTML.
* #1147: Don't emit a sidebar search box in the "singlehtml" builder.
@ -424,7 +341,7 @@ Bugs fixed
* Restore ``versionmodified`` CSS class for versionadded/changed and deprecated
directives.
* PR#181: Fix `html_theme_path=['.']` is a trigger of rebuild all documents
* PR#181: Fix ``html_theme_path = ['.']`` triggering a rebuild of all documents
every time (this change keeps the current "theme changes cause a rebuild"
feature).
@ -491,7 +408,7 @@ Features added
* Support docutils.conf 'writers' and 'html4css1 writer' section in the HTML
writer. The latex, manpage and texinfo writers also support their respective
'writers' sections.
* The new :confval:`html_extra_path` config value allows to specify directories
* The new `html_extra_path` config value allows specifying directories
with files that should be copied directly to the HTML output directory.
* Autodoc directives for module data and attributes now support an
``annotation`` option, so that the default display of the data/attribute
@ -562,10 +479,10 @@ Incompatible changes
* Removed ``sphinx.util.compat.directive_dwim()`` and
``sphinx.roles.xfileref_role()`` which were deprecated since version 1.0.
* PR#122: the files given in :confval:`latex_additional_files` now override TeX
* PR#122: the files given in `latex_additional_files` now override TeX
files included by Sphinx, such as ``sphinx.sty``.
* PR#124: the node generated by :rst:dir:`versionadded`,
:rst:dir:`versionchanged` and :rst:dir:`deprecated` directives now includes
* PR#124: the node generated by `versionadded`,
`versionchanged` and `deprecated` directives now includes
all added markup (such as "New in version X") as child nodes, and no
additional text must be generated by writers.
* PR#99: the :rst:dir:`seealso` directive now generates admonition nodes instead
@ -619,7 +536,7 @@ Features added
asterisks ("*").
- The default value for the ``paragraphindent`` has been changed from 2 to 0
meaning that paragraphs are no longer indented by default.
- #1110: A new configuration value :confval:`texinfo_no_detailmenu` has been
- #1110: A new configuration value `texinfo_no_detailmenu` has been
added for controlling whether a ``@detailmenu`` is added in the "Top"
node's menu.
- Detailed menus are no longer created except for the "Top" node.
@ -628,16 +545,16 @@ Features added
* LaTeX builder:
- PR#115: Add ``'transition'`` item in :confval:`latex_elements` for
- PR#115: Add ``'transition'`` item in `latex_elements` for
customizing how transitions are displayed. Thanks to Jeff Klukas.
- PR#114: The LaTeX writer now includes the "cmap" package by default. The
``'cmappkg'`` item in :confval:`latex_elements` can be used to control this.
``'cmappkg'`` item in `latex_elements` can be used to control this.
Thanks to Dmitry Shachnev.
- The ``'fontpkg'`` item in :confval:`latex_elements` now defaults to ``''``
when the :confval:`language` uses the Cyrillic script. Suggested by Dmitry
- The ``'fontpkg'`` item in `latex_elements` now defaults to ``''``
when the `language` uses the Cyrillic script. Suggested by Dmitry
Shachnev.
- The :confval:`latex_documents`, :confval:`texinfo_documents`, and
:confval:`man_pages` configuration values will be set to default values based
- The `latex_documents`, `texinfo_documents`, and
`man_pages` configuration values will be set to default values based
on the :confval:`master_doc` if not explicitly set in :file:`conf.py`.
Previously, if these values were not set, no output would be generated by
their respective builders.
@ -655,13 +572,13 @@ Features added
- Added the Docutils-native XML and pseudo-XML builders. See
:class:`XMLBuilder` and :class:`PseudoXMLBuilder`.
- PR#45: The linkcheck builder now checks ``#anchor``\ s for existence.
- PR#123, #1106: Add :confval:`epub_use_index` configuration value. If
provided, it will be used instead of :confval:`html_use_index` for epub
- PR#123, #1106: Add `epub_use_index` configuration value. If
provided, it will be used instead of `html_use_index` for epub
builder.
- PR#126: Add :confval:`epub_tocscope` configuration value. The setting
- PR#126: Add `epub_tocscope` configuration value. The setting
controls the generation of the epub toc. The user can now also include
hidden toc entries.
- PR#112: Add :confval:`epub_show_urls` configuration value.
- PR#112: Add `epub_show_urls` configuration value.
* Extensions:
@ -729,7 +646,7 @@ Bugs fixed
* #1127: Fix traceback when autodoc tries to tokenize a non-Python file.
* #1126: Fix double-hyphen to en-dash conversion in wrong places such as
command-line option names in LaTeX.
* #1123: Allow whitespaces in filenames given to :rst:dir:`literalinclude`.
* #1123: Allow whitespaces in filenames given to `literalinclude`.
* #1120: Added i18n improvements for the built-in themes "basic", "haiku" and
"scrolls". Thanks to Leonardo J. Caballero G.
* #1118: Updated Spanish translation. Thanks to Leonardo J. Caballero G.
@ -737,7 +654,7 @@ Bugs fixed
* #1112: Avoid duplicate download files when referenced from documents in
different ways (absolute/relative).
* #1111: Fix failure to find uppercase words in search when
:confval:`html_search_language` is 'ja'. Thanks to Tomo Saito.
`html_search_language` is 'ja'. Thanks to Tomo Saito.
* #1108: The text writer now correctly numbers enumerated lists with
non-default start values (based on patch by Ewan Edwards).
* #1102: Support multi-context "with" statements in autodoc.
@ -802,7 +719,7 @@ Release 1.1.3 (Mar 10, 2012)
* #860: Do not crash when encountering invalid doctest examples, just
emit a warning.
* #864: Fix crash with some settings of :confval:`modindex_common_prefix`.
* #864: Fix crash with some settings of `modindex_common_prefix`.
* #862: Fix handling of ``-D`` and ``-A`` options on Python 3.
@ -866,7 +783,7 @@ Release 1.1 (Oct 9, 2011)
Incompatible changes
--------------------
* The :rst:dir:`py:module` directive doesn't output its ``platform`` option
* The `py:module` directive doesn't output its ``platform`` option
value anymore. (It was the only thing that the directive did output, and
therefore quite inconsistent.)
@ -902,7 +819,7 @@ Features added
:rst:dir:`toctree`\'s ``numbered`` option.
- #586: Implemented improved :rst:dir:`glossary` markup which allows
multiple terms per definition.
- #478: Added :rst:dir:`py:decorator` directive to describe decorators.
- #478: Added `py:decorator` directive to describe decorators.
- C++ domain now supports array definitions.
- C++ domain now supports doc fields (``:param x:`` inside directives).
- Section headings in :rst:dir:`only` directives are now correctly
@ -913,7 +830,7 @@ Features added
* HTML builder:
- Added ``pyramid`` theme.
- #559: :confval:`html_add_permalinks` is now a string giving the
- #559: `html_add_permalinks` is now a string giving the
text to display in permalinks.
- #259: HTML table rows now have even/odd CSS classes to enable
"Zebra styling".
@ -921,26 +838,26 @@ Features added
* Other builders:
- #516: Added new value of the :confval:`latex_show_urls` option to
- #516: Added new value of the `latex_show_urls` option to
show the URLs in footnotes.
- #209: Added :confval:`text_newlines` and :confval:`text_sectionchars`
- #209: Added `text_newlines` and `text_sectionchars`
config values.
- Added :confval:`man_show_urls` config value.
- Added `man_show_urls` config value.
- #472: linkcheck builder: Check links in parallel, use HTTP HEAD
requests and allow configuring the timeout. New config values:
:confval:`linkcheck_timeout` and :confval:`linkcheck_workers`.
- #521: Added :confval:`linkcheck_ignore` config value.
`linkcheck_timeout` and `linkcheck_workers`.
- #521: Added `linkcheck_ignore` config value.
- #28: Support row/colspans in tables in the LaTeX builder.
* Configuration and extensibility:
- #537: Added :confval:`nitpick_ignore`.
- #537: Added `nitpick_ignore`.
- #306: Added :event:`env-get-outdated` event.
- :meth:`.Application.add_stylesheet` now accepts full URIs.
* Autodoc:
- #564: Add :confval:`autodoc_docstring_signature`. When enabled (the
- #564: Add `autodoc_docstring_signature`. When enabled (the
default), autodoc retrieves the signature from the first line of the
docstring, if it is found there.
- #176: Provide ``private-members`` option for autodoc directives.
@ -958,12 +875,12 @@ Features added
- Added ``inline`` option to graphviz directives, and fixed the
default (block-style) in LaTeX output.
- #590: Added ``caption`` option to graphviz directives.
- #553: Added :rst:dir:`testcleanup` blocks in the doctest extension.
- #594: :confval:`trim_doctest_flags` now also removes ``<BLANKLINE>``
- #553: Added `testcleanup` blocks in the doctest extension.
- #594: `trim_doctest_flags` now also removes ``<BLANKLINE>``
indicators.
- #367: Added automatic exclusion of hidden members in inheritance
diagrams, and an option to selectively enable it.
- Added :confval:`pngmath_add_tooltips`.
- Added `pngmath_add_tooltips`.
- The math extension displaymath directives now support ``name`` in
addition to ``label`` for giving the equation label, for compatibility
with Docutils.
@ -1036,7 +953,7 @@ Release 1.0.8 (Sep 23, 2011)
* #669: Respect the ``noindex`` flag option in py:module directives.
* #675: Fix IndexErrors when including nonexisting lines with
:rst:dir:`literalinclude`.
`literalinclude`.
* #676: Respect custom function/method parameter separator strings.
@ -1119,7 +1036,7 @@ Release 1.0.6 (Jan 04, 2011)
* #570: Try decoding ``-D`` and ``-A`` command-line arguments with
the locale's preferred encoding.
* #528: Observe :confval:`locale_dirs` when looking for the JS
* #528: Observe `locale_dirs` when looking for the JS
translations file.
* #574: Add special code for better support of Japanese documents
@ -1292,51 +1209,51 @@ Features added
- Added a "nitpicky" mode that emits warnings for all missing
references. It is activated by the :option:`-n` command-line switch
or the :confval:`nitpicky` config value.
or the `nitpicky` config value.
- Added ``latexpdf`` target in quickstart Makefile.
* Markup:
- The :rst:role:`menuselection` and :rst:role:`guilabel` roles now
- The `menuselection` and `guilabel` roles now
support ampersand accelerators.
- New more compact doc field syntax is now recognized: ``:param type
name: description``.
- Added ``tab-width`` option to :rst:dir:`literalinclude` directive.
- Added ``tab-width`` option to `literalinclude` directive.
- Added ``titlesonly`` option to :rst:dir:`toctree` directive.
- Added the ``prepend`` and ``append`` options to the
:rst:dir:`literalinclude` directive.
`literalinclude` directive.
- #284: All docinfo metadata is now put into the document metadata, not
just the author.
- The :rst:role:`ref` role can now also reference tables by caption.
- The :rst:dir:`include` directive now supports absolute paths, which
- The `ref` role can now also reference tables by caption.
- The :dudir:`include` directive now supports absolute paths, which
are interpreted as relative to the source directory.
- In the Python domain, references like ``:func:`.name``` now look for
matching names with any prefix if no direct match is found.
* Configuration:
- Added :confval:`rst_prolog` config value.
- Added :confval:`html_secnumber_suffix` config value to control
- Added `rst_prolog` config value.
- Added `html_secnumber_suffix` config value to control
section numbering format.
- Added :confval:`html_compact_lists` config value to control
- Added `html_compact_lists` config value to control
docutils' compact lists feature.
- The :confval:`html_sidebars` config value can now contain patterns
- The `html_sidebars` config value can now contain patterns
as keys, and the values can be lists that explicitly select which
sidebar templates should be rendered. That means that the builtin
sidebar contents can be included only selectively.
- :confval:`html_static_path` can now contain single file entries.
- The new universal config value :confval:`exclude_patterns` makes the
old :confval:`unused_docs`, :confval:`exclude_trees` and
:confval:`exclude_dirnames` obsolete.
- Added :confval:`html_output_encoding` config value.
- Added the :confval:`latex_docclass` config value and made the
- `html_static_path` can now contain single file entries.
- The new universal config value `exclude_patterns` makes the
old ``unused_docs``, ``exclude_trees`` and
``exclude_dirnames`` obsolete.
- Added `html_output_encoding` config value.
- Added the `latex_docclass` config value and made the
"twoside" documentclass option overridable by "oneside".
- Added the :confval:`trim_doctest_flags` config value, which is true
- Added the `trim_doctest_flags` config value, which is true
by default.
- Added :confval:`html_show_copyright` config value.
- Added :confval:`latex_show_pagerefs` and :confval:`latex_show_urls`
- Added `html_show_copyright` config value.
- Added `latex_show_pagerefs` and `latex_show_urls`
config values.
- The behavior of :confval:`html_file_suffix` changed slightly: the
- The behavior of `html_file_suffix` changed slightly: the
empty string now means "no suffix" instead of "default suffix", use
``None`` for "default suffix".
@ -1378,7 +1295,7 @@ Features added
* Extension API:
- Added :event:`html-collect-pages`.
- Added :confval:`needs_sphinx` config value and
- Added `needs_sphinx` config value and
:meth:`~sphinx.application.Sphinx.require_sphinx` application API
method.
- #200: Added :meth:`~sphinx.application.Sphinx.add_stylesheet`
@ -1390,7 +1307,7 @@ Features added
- Added the :mod:`~sphinx.ext.extlinks` extension.
- Added support for source ordering of members in autodoc, with
``autodoc_member_order = 'bysource'``.
- Added :confval:`autodoc_default_flags` config value, which can be
- Added `autodoc_default_flags` config value, which can be
used to select default flags for all autodoc directives.
- Added a way for intersphinx to refer to named labels in other
projects, and to specify the project you want to link to.
@ -1400,7 +1317,7 @@ Features added
extension, thanks to Pauli Virtanen.
- #309: The :mod:`~sphinx.ext.graphviz` extension can now output SVG
instead of PNG images, controlled by the
:confval:`graphviz_output_format` config value.
`graphviz_output_format` config value.
- Added ``alt`` option to :rst:dir:`graphviz` extension directives.
- Added ``exclude`` argument to :func:`.autodoc.between`.

View File

@ -48,10 +48,10 @@ reindent:
@$(PYTHON) utils/reindent.py -r -n .
endif
test: build
test:
@cd tests; $(PYTHON) run.py -d -m '^[tT]est' $(TEST)
covertest: build
covertest:
@cd tests; $(PYTHON) run.py -d -m '^[tT]est' --with-coverage \
--cover-package=sphinx $(TEST)

View File

@ -2,6 +2,9 @@
README for Sphinx
=================
This is the Sphinx documentation generator, see http://sphinx-doc.org/.
Installing
==========
@ -17,7 +20,7 @@ Reading the docs
After installing::
cd doc
sphinx-build . _build/html
make html
Then, direct your browser to ``_build/html/index.html``.
@ -35,6 +38,11 @@ If you want to use a different interpreter, e.g. ``python3``, use::
PYTHON=python3 make test
Continuous testing runs on drone.io:
.. image:: https://drone.io/bitbucket.org/birkenfeld/sphinx/status.png
:target: https://drone.io/bitbucket.org/birkenfeld/sphinx/
Contributing
============

View File

@ -34,6 +34,9 @@
<li>{%trans path=pathto('extensions')%}<b>Extensions:</b> automatic testing of code snippets, inclusion of
docstrings from Python modules (API docs), and
<a href="{{ path }}#builtin-sphinx-extensions">more</a>{%endtrans%}</li>
<li>{%trans path=pathto('develop')%}<b>Contributed extensions:</b> more than
50 extensions <a href="{{ path }}#extensions">contributed by users</a>
in a second repository; most of them installable from PyPI{%endtrans%}</li>
</ul>
<p>{%trans%}
Sphinx uses <a href="http://docutils.sf.net/rst.html">reStructuredText</a>

View File

@ -3,7 +3,7 @@
{%trans%}project{%endtrans%}</p>
<h3>Download</h3>
{% if version.endswith('(hg)') %}
{% if version.endswith('a0') %}
<p>{%trans%}This documentation is for version <b>{{ version }}</b>, which is
not released yet.{%endtrans%}</p>
<p>{%trans%}You can use it from the

View File

@ -1,5 +1,7 @@
:tocdepth: 2
.. default-role:: any
.. _changes:
Changes in Sphinx

View File

@ -83,7 +83,7 @@ texinfo_documents = [
# We're not using intersphinx right now, but if we did, this would be part of
# the mapping:
intersphinx_mapping = {'python': ('http://docs.python.org/dev', None)}
intersphinx_mapping = {'python': ('http://docs.python.org/2/', None)}
# Sphinx document translation with sphinx gettext feature uses these settings:
locale_dirs = ['locale/']

View File

@ -707,7 +707,7 @@ that use Sphinx's HTMLWriter class.
.. confval:: html_use_opensearch
If nonempty, an `OpenSearch <http://opensearch.org>` description file will be
If nonempty, an `OpenSearch <http://opensearch.org>`_ description file will be
output, and all pages will contain a ``<link>`` tag referring to it. Since
OpenSearch doesn't support relative URLs for its search page location, the
value of this option must be the base URL from which these documents are

View File

@ -130,6 +130,11 @@ These are the basic steps needed to start developing on Sphinx.
* For bug fixes, first add a test that fails without your changes and passes
after they are applied.
* Tests that need a sphinx-build run should be integrated in one of the
existing test modules if possible. New tests that use ``@with_app`` and
then ``build_all`` for a few assertions are not good since *the test suite
should not take more than a minute to run*.
#. Please add a bullet point to :file:`CHANGES` if the fix or feature is not
trivial (small doc updates, typo fixes). Then commit::

View File

@ -437,6 +437,19 @@ handlers to the events. Example:
.. versionadded:: 0.5
.. event:: env-before-read-docs (app, env, docnames)
Emitted after the environment has determined the list of all added and
changed files and just before it reads them. It allows extension authors to
reorder the list of docnames (*inplace*) before processing, or add more
docnames that Sphinx did not consider changed (but never add any docnames
that are not in ``env.found_docs``).
You can also remove document names; do this with caution since it will make
Sphinx treat changed files as unchanged.
.. versionadded:: 1.3
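For illustration only, a minimal handler for this event might look like the
following sketch (the function name and the alphabetical ordering are arbitrary
choices for the example, not part of Sphinx)::

    def order_docnames(app, env, docnames):
        # Reorder the list *in place*; here we simply read documents
        # in alphabetical order.
        docnames.sort()

    def setup(app):
        app.connect('env-before-read-docs', order_docnames)
        return {'version': '0.1', 'parallel_read_safe': True}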
.. event:: source-read (app, docname, source)
Emitted when a source file has been read. The *source* argument is a list
@ -480,6 +493,26 @@ handlers to the events. Example:
Here is the place to replace custom nodes that don't have visitor methods in
the writers, so that they don't cause errors when the writers encounter them.
.. event:: env-merge-info (env, docnames, other)
This event is only emitted when parallel reading of documents is enabled. It
is emitted once for every subprocess that has read some documents.
You must handle this event in an extension that stores data in the
environment in a custom location. Otherwise the environment in the main
process will not be aware of the information stored in the subprocess.
*other* is the environment object from the subprocess, *env* is the
environment from the main process. *docnames* is a set of document names
that have been read in the subprocess.
For a sample of how to deal with this event, look at the standard
``sphinx.ext.todo`` extension. The implementation is often similar to that
of :event:`env-purge-doc`, only that information is not removed, but added to
the main environment from the other environment.
.. versionadded:: 1.3
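As an illustrative sketch (the ``todo_all_todos`` attribute follows the
``sphinx.ext.todo`` example mentioned above; any custom environment attribute
works the same way), a handler could merge the per-subprocess data like this::

    def merge_todos(app, env, docnames, other):
        # Copy data gathered in the subprocess into the main environment.
        # A real handler would usually copy only entries belonging to *docnames*.
        if not hasattr(env, 'todo_all_todos'):
            env.todo_all_todos = []
        env.todo_all_todos.extend(getattr(other, 'todo_all_todos', []))

    def setup(app):
        app.connect('env-merge-info', merge_todos)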
.. event:: env-updated (app, env)
Emitted when the :meth:`update` method of the build environment has

View File

@ -18,15 +18,32 @@ imports this module and executes its ``setup()`` function, which in turn
notifies Sphinx of everything the extension offers -- see the extension tutorial
for examples.
.. versionadded:: 1.3
The ``setup()`` function can return a string, this is treated by Sphinx as
the version of the extension and used for informational purposes such as the
traceback file when an exception occurs.
The configuration file itself can be treated as an extension if it contains a
``setup()`` function. All other extensions to load must be listed in the
:confval:`extensions` configuration value.
Extension metadata
------------------
.. versionadded:: 1.3
The ``setup()`` function can return a dictionary. This is treated by Sphinx
as metadata of the extension. Metadata keys currently recognized are:
* ``'version'``: a string that identifies the extension version. It is used for
extension version requirement checking (see :confval:`needs_extensions`) and
informational purposes. If not given, ``"unknown version"`` is substituted.
* ``'parallel_read_safe'``: a boolean that specifies if parallel reading of
source files can be used when the extension is loaded. It defaults to
``False``, i.e. you have to explicitly specify your extension to be
parallel-read-safe after checking that it is.
* ``'parallel_write_safe'``: a boolean that specifies if parallel writing of
output files can be used when the extension is loaded. Since extensions
usually don't negatively influence the process, this defaults to ``True``.
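Putting this together, a hedged sketch of a ``setup()`` function that returns
such metadata (the config value name is made up for the example)::

    def setup(app):
        app.add_config_value('example_include_todos', False, 'html')
        return {
            'version': '0.4.2',
            'parallel_read_safe': True,
            'parallel_write_safe': True,
        }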
APIs used for writing extensions
--------------------------------
.. toctree::
tutorial

View File

@ -162,7 +162,7 @@ new Python module called :file:`todo.py` and add the setup function::
app.connect('doctree-resolved', process_todo_nodes)
app.connect('env-purge-doc', purge_todos)
return '0.1' # identifies the version of our extension
return {'version': '0.1'} # identifies the version of our extension
The calls in this function refer to classes and functions not yet written. What
the individual calls do is the following:

View File

@ -36,21 +36,29 @@ installed) and handled in a smart way:
highlighted as Python).
* The highlighting language can be changed using the ``highlight`` directive,
used as follows::
used as follows:
.. highlight:: c
.. rst:directive:: .. highlight:: language
This language is used until the next ``highlight`` directive is encountered.
Example::
.. highlight:: c
This language is used until the next ``highlight`` directive is encountered.
* For documents that have to show snippets in different languages, there's also
a :rst:dir:`code-block` directive that is given the highlighting language
directly::
directly:
.. code-block:: ruby
.. rst:directive:: .. code-block:: language
Some Ruby code.
Use it like this::
The directive's alias name :rst:dir:`sourcecode` works as well.
.. code-block:: ruby
Some Ruby code.
The directive's alias name :rst:dir:`sourcecode` works as well.
* The valid values for the highlighting language are:

View File

@ -12,7 +12,9 @@ They are written as ``:rolename:`content```.
The default role (```content```) has no special meaning by default. You are
free to use it for anything you like, e.g. variable names; use the
:confval:`default_role` config value to set it to a known role.
:confval:`default_role` config value to set it to a known role -- the
:rst:role:`any` role to find anything or the :rst:role:`py:obj` role to find
Python objects are very useful for this.
See :ref:`domains` for roles added by domains.
@ -38,12 +40,57 @@ more versatile:
* If you prefix the content with ``~``, the link text will only be the last
component of the target. For example, ``:py:meth:`~Queue.Queue.get``` will
refer to ``Queue.Queue.get`` but only display ``get`` as the link text.
refer to ``Queue.Queue.get`` but only display ``get`` as the link text. This
does not work with all cross-reference roles, but is domain specific.
In HTML output, the link's ``title`` attribute (that is e.g. shown as a
tool-tip on mouse-hover) will always be the full target name.
.. _any-role:
Cross-referencing anything
--------------------------
.. rst:role:: any
.. versionadded:: 1.3
This convenience role tries to do its best to find a valid target for its
reference text.
* First, it tries standard cross-reference targets that would be referenced
by :rst:role:`doc`, :rst:role:`ref` or :rst:role:`option`.
Custom objects added to the standard domain by extensions (see
:meth:`.add_object_type`) are also searched.
* Then, it looks for objects (targets) in all loaded domains. It is up to
the domains how specific a match must be. For example, in the Python
domain a reference of ``:any:`Builder``` would match the
``sphinx.builders.Builder`` class.
If none or multiple targets are found, a warning will be emitted. In the
case of multiple targets, you can change "any" to a specific role.
This role is a good candidate for setting :confval:`default_role`. If you
do, you can write cross-references without a lot of markup overhead. For
example, in this Python function documentation ::
.. function:: install()
This function installs a `handler` for every signal known by the
`signal` module. See the section `about-signals` for more information.
there could be references to a glossary term (usually ``:term:`handler```), a
Python module (usually ``:py:mod:`signal``` or ``:mod:`signal```) and a
section (usually ``:ref:`about-signals```).
The :rst:role:`any` role also works together with the
:mod:`~sphinx.ext.intersphinx` extension: when no local cross-reference is
found, all object types of intersphinx inventories are also searched.
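As suggested above, making this the default role is a one-line setting in
``conf.py`` (a minimal sketch)::

    # conf.py
    default_role = 'any'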
Cross-referencing objects
-------------------------

View File

@ -25,6 +25,7 @@ class desc(nodes.Admonition, nodes.Element):
contains one or more ``desc_signature`` and a ``desc_content``.
"""
class desc_signature(nodes.Part, nodes.Inline, nodes.TextElement):
"""Node for object signatures.
@ -39,33 +40,42 @@ class desc_addname(nodes.Part, nodes.Inline, nodes.TextElement):
# compatibility alias
desc_classname = desc_addname
class desc_type(nodes.Part, nodes.Inline, nodes.TextElement):
"""Node for return types or object type names."""
class desc_returns(desc_type):
"""Node for a "returns" annotation (a la -> in Python)."""
def astext(self):
return ' -> ' + nodes.TextElement.astext(self)
class desc_name(nodes.Part, nodes.Inline, nodes.TextElement):
"""Node for the main object name."""
class desc_parameterlist(nodes.Part, nodes.Inline, nodes.TextElement):
"""Node for a general parameter list."""
child_text_separator = ', '
class desc_parameter(nodes.Part, nodes.Inline, nodes.TextElement):
"""Node for a single parameter."""
class desc_optional(nodes.Part, nodes.Inline, nodes.TextElement):
"""Node for marking optional parts of the parameter list."""
child_text_separator = ', '
def astext(self):
return '[' + nodes.TextElement.astext(self) + ']'
class desc_annotation(nodes.Part, nodes.Inline, nodes.TextElement):
"""Node for signature annotations (not Python 3-style annotations)."""
class desc_content(nodes.General, nodes.Element):
"""Node for object description content.
@ -82,15 +92,18 @@ class versionmodified(nodes.Admonition, nodes.TextElement):
directives.
"""
class seealso(nodes.Admonition, nodes.Element):
"""Custom "see also" admonition."""
class productionlist(nodes.Admonition, nodes.Element):
"""Node for grammar production lists.
Contains ``production`` nodes.
"""
class production(nodes.Part, nodes.Inline, nodes.TextElement):
"""Node for a single grammar production rule."""
@ -107,26 +120,33 @@ class index(nodes.Invisible, nodes.Inline, nodes.TextElement):
*entrytype* is one of "single", "pair", "double", "triple".
"""
class centered(nodes.Part, nodes.TextElement):
"""Deprecated."""
class acks(nodes.Element):
"""Special node for "acks" lists."""
class hlist(nodes.Element):
"""Node for "horizontal lists", i.e. lists that should be compressed to
take up less vertical space.
"""
class hlistcol(nodes.Element):
"""Node for one column in a horizontal list."""
class compact_paragraph(nodes.paragraph):
"""Node for a compact paragraph (which never makes a <p> node)."""
class glossary(nodes.Element):
"""Node to insert a glossary."""
class only(nodes.Element):
"""Node for "only" directives (conditional inclusion based on tags)."""
@ -136,14 +156,17 @@ class only(nodes.Element):
class start_of_file(nodes.Element):
"""Node to mark start of a new file, used in the LaTeX builder only."""
class highlightlang(nodes.Element):
"""Inserted to set the highlight language and line number options for
subsequent code blocks.
"""
class tabular_col_spec(nodes.Element):
"""Node for specifying tabular columns, used for LaTeX output."""
class meta(nodes.Special, nodes.PreBibliographic, nodes.Element):
"""Node for meta directive -- same as docutils' standard meta node,
but pickleable.
@ -160,22 +183,27 @@ class pending_xref(nodes.Inline, nodes.Element):
BuildEnvironment.resolve_references.
"""
class download_reference(nodes.reference):
"""Node for download references, similar to pending_xref."""
class literal_emphasis(nodes.emphasis):
"""Node that behaves like `emphasis`, but further text processors are not
applied (e.g. smartypants for HTML output).
"""
class literal_strong(nodes.strong):
"""Node that behaves like `strong`, but further text processors are not
applied (e.g. smartypants for HTML output).
"""
class abbreviation(nodes.Inline, nodes.TextElement):
"""Node for abbreviations with explanations."""
class termsep(nodes.Structural, nodes.Element):
"""Separates two terms within a <term> node."""

View File

@ -88,7 +88,7 @@ def create_module_file(package, module, opts):
text = format_heading(1, '%s module' % module)
else:
text = ''
#text += format_heading(2, ':mod:`%s` Module' % module)
# text += format_heading(2, ':mod:`%s` Module' % module)
text += format_directive(module, package)
write_file(makename(package, module), text, opts)
@ -173,7 +173,7 @@ def shall_skip(module, opts):
# skip if it has a "private" name and this is selected
filename = path.basename(module)
if filename != '__init__.py' and filename.startswith('_') and \
not opts.includeprivate:
not opts.includeprivate:
return True
return False
@ -218,7 +218,7 @@ def recurse_tree(rootpath, excludes, opts):
if is_pkg:
# we are in a package with something to document
if subs or len(py_files) > 1 or not \
shall_skip(path.join(root, INITPY), opts):
shall_skip(path.join(root, INITPY), opts):
subpackage = root[len(rootpath):].lstrip(path.sep).\
replace(path.sep, '.')
create_package_file(root, root_package, subpackage,
@ -318,7 +318,7 @@ Note: By default this script will not overwrite already created files.""")
(opts, args) = parser.parse_args(argv[1:])
if opts.show_version:
print('Sphinx (sphinx-apidoc) %s' % __version__)
print('Sphinx (sphinx-apidoc) %s' % __version__)
return 0
if not args:

View File

@ -20,7 +20,7 @@ import traceback
from os import path
from collections import deque
from six import iteritems, itervalues
from six import iteritems, itervalues, text_type
from six.moves import cStringIO
from docutils import nodes
from docutils.parsers.rst import convert_directive_function, \
@ -39,7 +39,8 @@ from sphinx.environment import BuildEnvironment, SphinxStandaloneReader
from sphinx.util import pycompat # imported for side-effects
from sphinx.util.tags import Tags
from sphinx.util.osutil import ENOENT
from sphinx.util.console import bold, lightgray, darkgray
from sphinx.util.console import bold, lightgray, darkgray, darkgreen, \
term_width_line
if hasattr(sys, 'intern'):
intern = sys.intern
@ -49,8 +50,10 @@ events = {
'builder-inited': '',
'env-get-outdated': 'env, added, changed, removed',
'env-purge-doc': 'env, docname',
'env-before-read-docs': 'env, docnames',
'source-read': 'docname, source text',
'doctree-read': 'the doctree before being pickled',
'env-merge-info': 'env, read docnames, other env instance',
'missing-reference': 'env, node, contnode',
'doctree-resolved': 'doctree, docname',
'env-updated': 'env',
@ -72,7 +75,7 @@ class Sphinx(object):
self.verbosity = verbosity
self.next_listener_id = 0
self._extensions = {}
self._extension_versions = {}
self._extension_metadata = {}
self._listeners = {}
self.domains = BUILTIN_DOMAINS.copy()
self.builderclasses = BUILTIN_BUILDERS.copy()
@ -112,6 +115,10 @@ class Sphinx(object):
# status code for command-line application
self.statuscode = 0
if not path.isdir(outdir):
self.info('making output directory...')
os.makedirs(outdir)
# read config
self.tags = Tags(tags)
self.config = Config(confdir, CONFIG_FILENAME,
@ -128,7 +135,7 @@ class Sphinx(object):
self.setup_extension(extension)
# the config file itself can be an extension
if self.config.setup:
# py31 doesn't have 'callable' function for bellow check
# py31 doesn't have 'callable' function for below check
if hasattr(self.config.setup, '__call__'):
self.config.setup(self)
else:
@ -156,7 +163,7 @@ class Sphinx(object):
'version requirement for extension %s, but it is '
'not loaded' % extname)
continue
has_ver = self._extension_versions[extname]
has_ver = self._extension_metadata[extname]['version']
if has_ver == 'unknown version' or needs_ver > has_ver:
raise VersionRequirementError(
'This project needs the extension %s at least in '
@ -200,8 +207,8 @@ class Sphinx(object):
else:
try:
self.info(bold('loading pickled environment... '), nonl=True)
self.env = BuildEnvironment.frompickle(self.config,
path.join(self.doctreedir, ENV_PICKLE_FILENAME))
self.env = BuildEnvironment.frompickle(
self.config, path.join(self.doctreedir, ENV_PICKLE_FILENAME))
self.env.domains = {}
for domain in self.domains.keys():
# this can raise if the data version doesn't fit
@ -245,6 +252,15 @@ class Sphinx(object):
else:
self.builder.compile_update_catalogs()
self.builder.build_update()
status = (self.statuscode == 0
and 'succeeded' or 'finished with problems')
if self._warncount:
self.info(bold('build %s, %s warning%s.' %
(status, self._warncount,
self._warncount != 1 and 's' or '')))
else:
self.info(bold('build %s.' % status))
except Exception as err:
# delete the saved env to force a fresh build next time
envfile = path.join(self.doctreedir, ENV_PICKLE_FILENAME)
@ -291,7 +307,7 @@ class Sphinx(object):
else:
location = None
warntext = location and '%s: %s%s\n' % (location, prefix, message) or \
'%s%s\n' % (prefix, message)
'%s%s\n' % (prefix, message)
if self.warningiserror:
raise SphinxWarning(warntext)
self._warncount += 1
@ -350,6 +366,48 @@ class Sphinx(object):
message = message % (args or kwargs)
self._log(lightgray(message), self._status)
def _display_chunk(chunk):
if isinstance(chunk, (list, tuple)):
if len(chunk) == 1:
return text_type(chunk[0])
return '%s .. %s' % (chunk[0], chunk[-1])
return text_type(chunk)
def old_status_iterator(self, iterable, summary, colorfunc=darkgreen,
stringify_func=_display_chunk):
l = 0
for item in iterable:
if l == 0:
self.info(bold(summary), nonl=1)
l = 1
self.info(colorfunc(stringify_func(item)) + ' ', nonl=1)
yield item
if l == 1:
self.info()
# new version with progress info
def status_iterator(self, iterable, summary, colorfunc=darkgreen, length=0,
stringify_func=_display_chunk):
if length == 0:
for item in self.old_status_iterator(iterable, summary, colorfunc,
stringify_func):
yield item
return
l = 0
summary = bold(summary)
for item in iterable:
l += 1
s = '%s[%3d%%] %s' % (summary, 100*l/length,
colorfunc(stringify_func(item)))
if self.verbosity:
s += '\n'
else:
s = term_width_line(s)
self.info(s, nonl=1)
yield item
if l > 0:
self.info()
# ---- general extensibility interface -------------------------------------
def setup_extension(self, extension):
@ -366,20 +424,22 @@ class Sphinx(object):
if not hasattr(mod, 'setup'):
self.warn('extension %r has no setup() function; is it really '
'a Sphinx extension module?' % extension)
version = None
ext_meta = None
else:
try:
version = mod.setup(self)
ext_meta = mod.setup(self)
except VersionRequirementError as err:
# add the extension name to the version required
raise VersionRequirementError(
'The %s extension used by this project needs at least '
'Sphinx v%s; it therefore cannot be built with this '
'version.' % (extension, err))
if version is None:
version = 'unknown version'
if ext_meta is None:
ext_meta = {}
if not ext_meta.get('version'):
ext_meta['version'] = 'unknown version'
self._extensions[extension] = mod
self._extension_versions[extension] = version
self._extension_metadata[extension] = ext_meta
def require_sphinx(self, version):
# check the Sphinx version if requested
@ -461,7 +521,7 @@ class Sphinx(object):
else:
raise ExtensionError(
'Builder %r already exists (in module %s)' % (
builder.name, self.builderclasses[builder.name].__module__))
builder.name, self.builderclasses[builder.name].__module__))
self.builderclasses[builder.name] = builder
def add_config_value(self, name, default, rebuild):

View File

@ -22,7 +22,9 @@ from docutils import nodes
from sphinx.util import i18n, path_stabilize
from sphinx.util.osutil import SEP, relative_uri, find_catalog
from sphinx.util.console import bold, purple, darkgreen, term_width_line
from sphinx.util.console import bold, darkgreen
from sphinx.util.parallel import ParallelTasks, SerialTasks, make_chunks, \
parallel_available
# side effect: registers roles and directives
from sphinx import roles
@ -62,10 +64,17 @@ class Builder(object):
self.tags.add(self.name)
self.tags.add("format_%s" % self.format)
self.tags.add("builder_%s" % self.name)
# compatibility aliases
self.status_iterator = app.status_iterator
self.old_status_iterator = app.old_status_iterator
# images that need to be copied over (source -> dest)
self.images = {}
# these get set later
self.parallel_ok = False
self.finish_tasks = None
# load default translator class
self.translator_class = app._translators.get(self.name)
@ -113,41 +122,6 @@ class Builder(object):
"""
raise NotImplementedError
def old_status_iterator(self, iterable, summary, colorfunc=darkgreen,
stringify_func=lambda x: x):
l = 0
for item in iterable:
if l == 0:
self.info(bold(summary), nonl=1)
l = 1
self.info(colorfunc(stringify_func(item)) + ' ', nonl=1)
yield item
if l == 1:
self.info()
# new version with progress info
def status_iterator(self, iterable, summary, colorfunc=darkgreen, length=0,
stringify_func=lambda x: x):
if length == 0:
for item in self.old_status_iterator(iterable, summary, colorfunc,
stringify_func):
yield item
return
l = 0
summary = bold(summary)
for item in iterable:
l += 1
s = '%s[%3d%%] %s' % (summary, 100*l/length,
colorfunc(stringify_func(item)))
if self.app.verbosity:
s += '\n'
else:
s = term_width_line(s)
self.info(s, nonl=1)
yield item
if l > 0:
self.info()
supported_image_types = []
def post_process_images(self, doctree):
@ -179,9 +153,8 @@ class Builder(object):
def compile_catalogs(self, catalogs, message):
if not self.config.gettext_auto_build:
return
self.info(bold('building [mo]: '), nonl=1)
self.info(message)
for catalog in self.status_iterator(
self.info(bold('building [mo]: ') + message)
for catalog in self.app.status_iterator(
catalogs, 'writing output... ', darkgreen, len(catalogs),
lambda c: c.mo_path):
catalog.write_mo(self.config.language)
@ -263,25 +236,17 @@ class Builder(object):
First updates the environment, and then calls :meth:`write`.
"""
if summary:
self.info(bold('building [%s]: ' % self.name), nonl=1)
self.info(summary)
self.info(bold('building [%s]' % self.name) + ': ' + summary)
updated_docnames = set()
# while reading, collect all warnings from docutils
warnings = []
self.env.set_warnfunc(lambda *args: warnings.append(args))
self.info(bold('updating environment: '), nonl=1)
msg, length, iterator = self.env.update(self.config, self.srcdir,
self.doctreedir, self.app)
self.info(msg)
for docname in self.status_iterator(iterator, 'reading sources... ',
purple, length):
updated_docnames.add(docname)
# nothing further to do, the environment has already
# done the reading
updated_docnames = self.env.update(self.config, self.srcdir,
self.doctreedir, self.app)
self.env.set_warnfunc(self.warn)
for warning in warnings:
self.warn(*warning)
self.env.set_warnfunc(self.warn)
doccount = len(updated_docnames)
self.info(bold('looking for now-outdated files... '), nonl=1)
@ -315,20 +280,33 @@ class Builder(object):
if docnames and docnames != ['__all__']:
docnames = set(docnames) & self.env.found_docs
# another indirection to support builders that don't build
# files individually
# determine if we can write in parallel
self.parallel_ok = False
if parallel_available and self.app.parallel > 1 and self.allow_parallel:
self.parallel_ok = True
for extname, md in self.app._extension_metadata.items():
par_ok = md.get('parallel_write_safe', True)
if not par_ok:
self.app.warn('the %s extension is not safe for parallel '
'writing, doing serial write' % extname)
self.parallel_ok = False
break
# create a task executor to use for misc. "finish-up" tasks
# if self.parallel_ok:
# self.finish_tasks = ParallelTasks(self.app.parallel)
# else:
# for now, just execute them serially
self.finish_tasks = SerialTasks()
# write all "normal" documents (or everything for some builders)
self.write(docnames, list(updated_docnames), method)
# finish (write static files etc.)
self.finish()
status = (self.app.statuscode == 0
and 'succeeded' or 'finished with problems')
if self.app._warncount:
self.info(bold('build %s, %s warning%s.' %
(status, self.app._warncount,
self.app._warncount != 1 and 's' or '')))
else:
self.info(bold('build %s.' % status))
# wait for all tasks
self.finish_tasks.join()
def write(self, build_docnames, updated_docnames, method='update'):
if build_docnames is None or build_docnames == ['__all__']:
@ -354,23 +332,17 @@ class Builder(object):
warnings = []
self.env.set_warnfunc(lambda *args: warnings.append(args))
# check for prerequisites to parallel build
# (parallel only works on POSIX, because the forking impl of
# multiprocessing is required)
if not (multiprocessing and
self.app.parallel > 1 and
self.allow_parallel and
os.name == 'posix'):
self._write_serial(sorted(docnames), warnings)
else:
if self.parallel_ok:
# number of subprocesses is parallel-1 because the main process
# is busy loading doctrees and doing write_doc_serialized()
self._write_parallel(sorted(docnames), warnings,
nproc=self.app.parallel - 1)
else:
self._write_serial(sorted(docnames), warnings)
self.env.set_warnfunc(self.warn)
def _write_serial(self, docnames, warnings):
for docname in self.status_iterator(
for docname in self.app.status_iterator(
docnames, 'writing output... ', darkgreen, len(docnames)):
doctree = self.env.get_and_resolve_doctree(docname, self)
self.write_doc_serialized(docname, doctree)
@ -380,60 +352,34 @@ class Builder(object):
def _write_parallel(self, docnames, warnings, nproc):
def write_process(docs):
try:
for docname, doctree in docs:
self.write_doc(docname, doctree)
except KeyboardInterrupt:
pass # do not print a traceback on Ctrl-C
finally:
for warning in warnings:
self.warn(*warning)
for docname, doctree in docs:
self.write_doc(docname, doctree)
return warnings
def process_thread(docs):
p = multiprocessing.Process(target=write_process, args=(docs,))
p.start()
p.join()
semaphore.release()
# allow only "nproc" worker processes at once
semaphore = threading.Semaphore(nproc)
# list of threads to join when waiting for completion
threads = []
def add_warnings(docs, wlist):
warnings.extend(wlist)
# warm up caches/compile templates using the first document
firstname, docnames = docnames[0], docnames[1:]
doctree = self.env.get_and_resolve_doctree(firstname, self)
self.write_doc_serialized(firstname, doctree)
self.write_doc(firstname, doctree)
# for the rest, determine how many documents to write in one go
ndocs = len(docnames)
chunksize = min(ndocs // nproc, 10)
if chunksize == 0:
chunksize = 1
nchunks, rest = divmod(ndocs, chunksize)
if rest:
nchunks += 1
# partition documents in "chunks" that will be written by one Process
chunks = [docnames[i*chunksize:(i+1)*chunksize] for i in range(nchunks)]
for docnames in self.status_iterator(
chunks, 'writing output... ', darkgreen, len(chunks),
lambda chk: '%s .. %s' % (chk[0], chk[-1])):
docs = []
for docname in docnames:
tasks = ParallelTasks(nproc)
chunks = make_chunks(docnames, nproc)
for chunk in self.app.status_iterator(
chunks, 'writing output... ', darkgreen, len(chunks)):
arg = []
for i, docname in enumerate(chunk):
doctree = self.env.get_and_resolve_doctree(docname, self)
self.write_doc_serialized(docname, doctree)
docs.append((docname, doctree))
# start a new thread to oversee the completion of this chunk
semaphore.acquire()
t = threading.Thread(target=process_thread, args=(docs,))
t.setDaemon(True)
t.start()
threads.append(t)
arg.append((docname, doctree))
tasks.add_task(write_process, arg, add_warnings)
# make sure all threads have finished
self.info(bold('waiting for workers... '))
for t in threads:
t.join()
self.info(bold('waiting for workers...'))
tasks.join()
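# Hedged sketch (not part of this commit) of the ParallelTasks/make_chunks
# pattern used above, with hypothetical names: each task function receives one
# chunk in a worker process, and its return value is handed to the result
# callback in the main process together with the original argument.
def _example_parallel(items, nproc=4):
    results = []

    def work(chunk):            # executed in a worker process
        return [item * item for item in chunk]

    def collect(chunk, res):    # executed in the main process
        results.extend(res)

    tasks = ParallelTasks(nproc)
    for chunk in make_chunks(items, nproc):
        tasks.add_task(work, chunk, collect)
    tasks.join()                # wait for all workers to finish
    return results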
def prepare_writing(self, docnames):
"""A place where you can add logic before :meth:`write_doc` is run"""

View File

@ -130,6 +130,9 @@ class ChangesBuilder(Builder):
self.env.config.source_encoding)
try:
lines = f.readlines()
except UnicodeDecodeError:
self.warn('could not read %r for changelog creation' % docname)
continue
finally:
f.close()
targetfn = path.join(self.outdir, 'rst', os_path(docname)) + '.html'

View File

@ -405,8 +405,8 @@ class EpubBuilder(StandaloneHTMLBuilder):
converting the format and resizing the image if necessary/possible.
"""
ensuredir(path.join(self.outdir, '_images'))
for src in self.status_iterator(self.images, 'copying images... ',
brown, len(self.images)):
for src in self.app.status_iterator(self.images, 'copying images... ',
brown, len(self.images)):
dest = self.images[src]
try:
img = Image.open(path.join(self.srcdir, src))

View File

@ -170,8 +170,8 @@ class MessageCatalogBuilder(I18nBuilder):
extract_translations = self.templates.environment.extract_translations
for template in self.status_iterator(files,
'reading templates... ', purple, len(files)):
for template in self.app.status_iterator(
files, 'reading templates... ', purple, len(files)):
with open(template, 'r', encoding='utf-8') as f:
context = f.read()
for line, meth, msg in extract_translations(context):
@ -191,7 +191,7 @@ class MessageCatalogBuilder(I18nBuilder):
ctime = datetime.fromtimestamp(
timestamp, ltz).strftime('%Y-%m-%d %H:%M%z'),
)
for textdomain, catalog in self.status_iterator(
for textdomain, catalog in self.app.status_iterator(
iteritems(self.catalogs), "writing message catalogs... ",
darkgreen, len(self.catalogs),
lambda textdomain__: textdomain__[0]):

View File

@ -29,7 +29,7 @@ from docutils.readers.doctree import Reader as DoctreeReader
from sphinx import package_dir, __version__
from sphinx.util import jsonimpl, copy_static_entry
from sphinx.util.osutil import SEP, os_path, relative_uri, ensuredir, \
movefile, ustrftime, copyfile
movefile, ustrftime, copyfile
from sphinx.util.nodes import inline_all_toctrees
from sphinx.util.matching import patmatch, compile_matchers
from sphinx.locale import _
@ -40,7 +40,7 @@ from sphinx.application import ENV_PICKLE_FILENAME
from sphinx.highlighting import PygmentsBridge
from sphinx.util.console import bold, darkgreen, brown
from sphinx.writers.html import HTMLWriter, HTMLTranslator, \
SmartyPantsHTMLTranslator
SmartyPantsHTMLTranslator
#: the filename for the inventory of objects
INVENTORY_FILENAME = 'objects.inv'
@ -443,12 +443,19 @@ class StandaloneHTMLBuilder(Builder):
self.index_page(docname, doctree, title)
def finish(self):
self.info(bold('writing additional files...'), nonl=1)
self.finish_tasks.add_task(self.gen_indices)
self.finish_tasks.add_task(self.gen_additional_pages)
self.finish_tasks.add_task(self.copy_image_files)
self.finish_tasks.add_task(self.copy_download_files)
self.finish_tasks.add_task(self.copy_static_files)
self.finish_tasks.add_task(self.copy_extra_files)
self.finish_tasks.add_task(self.write_buildinfo)
# pages from extensions
for pagelist in self.app.emit('html-collect-pages'):
for pagename, context, template in pagelist:
self.handle_page(pagename, context, template)
# dump the search index
self.handle_finish()
def gen_indices(self):
self.info(bold('generating indices...'), nonl=1)
# the global general index
if self.get_builder_config('use_index', 'html'):
@ -457,16 +464,27 @@ class StandaloneHTMLBuilder(Builder):
# the global domain-specific indices
self.write_domain_indices()
# the search page
if self.name != 'htmlhelp':
self.info(' search', nonl=1)
self.handle_page('search', {}, 'search.html')
self.info()
def gen_additional_pages(self):
# pages from extensions
for pagelist in self.app.emit('html-collect-pages'):
for pagename, context, template in pagelist:
self.handle_page(pagename, context, template)
self.info(bold('writing additional pages...'), nonl=1)
# additional pages from conf.py
for pagename, template in self.config.html_additional_pages.items():
self.info(' '+pagename, nonl=1)
self.handle_page(pagename, {}, template)
# the search page
if self.name != 'htmlhelp':
self.info(' search', nonl=1)
self.handle_page('search', {}, 'search.html')
# the opensearch xml file
if self.config.html_use_opensearch and self.name != 'htmlhelp':
self.info(' opensearch', nonl=1)
fn = path.join(self.outdir, '_static', 'opensearch.xml')
@ -474,15 +492,6 @@ class StandaloneHTMLBuilder(Builder):
self.info()
self.copy_image_files()
self.copy_download_files()
self.copy_static_files()
self.copy_extra_files()
self.write_buildinfo()
# dump the search index
self.handle_finish()
def write_genindex(self):
# the total count of lines for each index letter, used to distribute
# the entries into two columns
@ -526,8 +535,8 @@ class StandaloneHTMLBuilder(Builder):
# copy image files
if self.images:
ensuredir(path.join(self.outdir, '_images'))
for src in self.status_iterator(self.images, 'copying images... ',
brown, len(self.images)):
for src in self.app.status_iterator(self.images, 'copying images... ',
brown, len(self.images)):
dest = self.images[src]
try:
copyfile(path.join(self.srcdir, src),
@ -540,9 +549,9 @@ class StandaloneHTMLBuilder(Builder):
# copy downloadable files
if self.env.dlfiles:
ensuredir(path.join(self.outdir, '_downloads'))
for src in self.status_iterator(self.env.dlfiles,
'copying downloadable files... ',
brown, len(self.env.dlfiles)):
for src in self.app.status_iterator(self.env.dlfiles,
'copying downloadable files... ',
brown, len(self.env.dlfiles)):
dest = self.env.dlfiles[src][1]
try:
copyfile(path.join(self.srcdir, src),
@ -786,8 +795,8 @@ class StandaloneHTMLBuilder(Builder):
copyfile(self.env.doc2path(pagename), source_name)
def handle_finish(self):
self.dump_search_index()
self.dump_inventory()
self.finish_tasks.add_task(self.dump_search_index)
self.finish_tasks.add_task(self.dump_inventory)
def dump_inventory(self):
self.info(bold('dumping object inventory... '), nonl=True)

View File

@ -12,7 +12,7 @@ from __future__ import print_function
import os
import sys
import getopt
import optparse
import traceback
from os import path
@ -32,89 +32,121 @@ def usage(argv, msg=None):
if msg:
print(msg, file=sys.stderr)
print(file=sys.stderr)
print("""\
USAGE = """\
Sphinx v%s
Usage: %s [options] sourcedir outdir [filenames...]
Usage: %%prog [options] sourcedir outdir [filenames...]
General options
^^^^^^^^^^^^^^^
-b <builder> builder to use; default is html
-a write all files; default is to only write new and changed files
-E don't use a saved environment, always read all files
-d <path> path for the cached environment and doctree files
(default: outdir/.doctrees)
-j <N> build in parallel with N processes where possible
-M <builder> "make" mode -- used by Makefile, like "sphinx-build -M html"
Filename arguments:
without -a and without filenames, write new and changed files.
with -a, write all files.
with filenames, write these.
""" % __version__
Build configuration options
^^^^^^^^^^^^^^^^^^^^^^^^^^^
-c <path> path where configuration file (conf.py) is located
(default: same as sourcedir)
-C use no config file at all, only -D options
-D <setting=value> override a setting in configuration file
-t <tag> define tag: include "only" blocks with <tag>
-A <name=value> pass a value into the templates, for HTML builder
-n nit-picky mode, warn about all missing references
EPILOG = """\
For more information, visit <http://sphinx-doc.org/>.
"""
Console output options
^^^^^^^^^^^^^^^^^^^^^^
-v increase verbosity (can be repeated)
-q no output on stdout, just warnings on stderr
-Q no output at all, not even warnings
-w <file> write warnings (and errors) to given file
-W turn warnings into errors
-T show full traceback on exception
-N do not emit colored output
-P run Pdb on exception
Filename arguments
^^^^^^^^^^^^^^^^^^
* without -a and without filenames, write new and changed files.
* with -a, write all files.
* with filenames, write these.
class MyFormatter(optparse.IndentedHelpFormatter):
def format_usage(self, usage):
return usage
Standard options
^^^^^^^^^^^^^^^^
-h, --help show this help and exit
--version show version information and exit
""" % (__version__, argv[0]), file=sys.stderr)
def format_help(self, formatter):
result = []
if self.description:
result.append(self.format_description(formatter))
if self.option_list:
result.append(self.format_option_help(formatter))
return "\n".join(result)
def main(argv):
if not color_terminal():
nocolor()
parser = optparse.OptionParser(USAGE, epilog=EPILOG, formatter=MyFormatter())
parser.add_option('--version', action='store_true', dest='version',
help='show version information and exit')
group = parser.add_option_group('General options')
group.add_option('-b', metavar='BUILDER', dest='builder', default='html',
help='builder to use; default is html')
group.add_option('-a', action='store_true', dest='force_all',
help='write all files; default is to only write new and '
'changed files')
group.add_option('-E', action='store_true', dest='freshenv',
help='don\'t use a saved environment, always read '
'all files')
group.add_option('-d', metavar='PATH', default=None, dest='doctreedir',
help='path for the cached environment and doctree files '
'(default: outdir/.doctrees)')
group.add_option('-j', metavar='N', default=1, type='int', dest='jobs',
help='build in parallel with N processes where possible')
# this option never gets through to this point (it is intercepted earlier)
# group.add_option('-M', metavar='BUILDER', dest='make_mode',
# help='"make" mode -- as used by Makefile, like '
# '"sphinx-build -M html"')
group = parser.add_option_group('Build configuration options')
group.add_option('-c', metavar='PATH', dest='confdir',
help='path where configuration file (conf.py) is located '
'(default: same as sourcedir)')
group.add_option('-C', action='store_true', dest='noconfig',
help='use no config file at all, only -D options')
group.add_option('-D', metavar='setting=value', action='append',
dest='define', default=[],
help='override a setting in configuration file')
group.add_option('-A', metavar='name=value', action='append',
dest='htmldefine', default=[],
help='pass a value into HTML templates')
group.add_option('-t', metavar='TAG', action='append',
dest='tags', default=[],
help='define tag: include "only" blocks with TAG')
group.add_option('-n', action='store_true', dest='nitpicky',
help='nit-picky mode, warn about all missing references')
group = parser.add_option_group('Console output options')
group.add_option('-v', action='count', dest='verbosity', default=0,
help='increase verbosity (can be repeated)')
group.add_option('-q', action='store_true', dest='quiet',
help='no output on stdout, just warnings on stderr')
group.add_option('-Q', action='store_true', dest='really_quiet',
help='no output at all, not even warnings')
group.add_option('-N', action='store_true', dest='nocolor',
help='do not emit colored output')
group.add_option('-w', metavar='FILE', dest='warnfile',
help='write warnings (and errors) to given file')
group.add_option('-W', action='store_true', dest='warningiserror',
help='turn warnings into errors')
group.add_option('-T', action='store_true', dest='traceback',
help='show full traceback on exception')
group.add_option('-P', action='store_true', dest='pdb',
help='run Pdb on exception')
# parse options
try:
opts, args = getopt.getopt(argv[1:], 'ab:t:d:c:CD:A:nNEqQWw:PThvj:',
['help', 'version'])
except getopt.error as err:
usage(argv, 'Error: %s' % err)
return 1
opts, args = parser.parse_args()
except SystemExit as err:
return err.code
# handle basic options
allopts = set(opt[0] for opt in opts)
# help and version options
if '-h' in allopts or '--help' in allopts:
usage(argv)
print(file=sys.stderr)
print('For more information, see <http://sphinx-doc.org/>.',
file=sys.stderr)
return 0
if '--version' in allopts:
print('Sphinx (sphinx-build) %s' % __version__)
if opts.version:
print('Sphinx (sphinx-build) %s' % __version__)
return 0
# get paths (first and second positional argument)
try:
srcdir = confdir = abspath(args[0])
srcdir = abspath(args[0])
confdir = abspath(opts.confdir or srcdir)
if opts.noconfig:
confdir = None
if not path.isdir(srcdir):
print('Error: Cannot find source directory `%s\'.' % srcdir,
file=sys.stderr)
return 1
if not path.isfile(path.join(srcdir, 'conf.py')) and \
'-c' not in allopts and '-C' not in allopts:
print('Error: Source directory doesn\'t contain a conf.py file.',
if not opts.noconfig and not path.isfile(path.join(confdir, 'conf.py')):
print('Error: Config directory doesn\'t contain a conf.py file.',
file=sys.stderr)
return 1
outdir = abspath(args[1])
@ -144,116 +176,77 @@ def main(argv):
except Exception:
likely_encoding = None
buildername = None
force_all = freshenv = warningiserror = use_pdb = False
show_traceback = False
verbosity = 0
parallel = 0
if opts.force_all and filenames:
print('Error: Cannot combine -a option and filenames.', file=sys.stderr)
return 1
if opts.nocolor:
nocolor()
doctreedir = abspath(opts.doctreedir or path.join(outdir, '.doctrees'))
status = sys.stdout
warning = sys.stderr
error = sys.stderr
warnfile = None
if opts.quiet:
status = None
if opts.really_quiet:
status = warning = None
if warning and opts.warnfile:
try:
warnfp = open(opts.warnfile, 'w')
except Exception as exc:
print('Error: Cannot open warning file %r: %s' %
(opts.warnfile, exc), file=sys.stderr)
sys.exit(1)
warning = Tee(warning, warnfp)
error = warning
confoverrides = {}
tags = []
doctreedir = path.join(outdir, '.doctrees')
for opt, val in opts:
if opt == '-b':
buildername = val
elif opt == '-a':
if filenames:
usage(argv, 'Error: Cannot combine -a option and filenames.')
return 1
force_all = True
elif opt == '-t':
tags.append(val)
elif opt == '-d':
doctreedir = abspath(val)
elif opt == '-c':
confdir = abspath(val)
if not path.isfile(path.join(confdir, 'conf.py')):
print('Error: Configuration directory doesn\'t contain conf.py file.',
file=sys.stderr)
return 1
elif opt == '-C':
confdir = None
elif opt == '-D':
for val in opts.define:
try:
key, val = val.split('=')
except ValueError:
print('Error: -D option argument must be in the form name=value.',
file=sys.stderr)
return 1
if likely_encoding and isinstance(val, binary_type):
try:
key, val = val.split('=')
except ValueError:
print('Error: -D option argument must be in the form name=value.',
file=sys.stderr)
return 1
val = val.decode(likely_encoding)
except UnicodeError:
pass
confoverrides[key] = val
for val in opts.htmldefine:
try:
key, val = val.split('=')
except ValueError:
print('Error: -A option argument must be in the form name=value.',
file=sys.stderr)
return 1
try:
val = int(val)
except ValueError:
if likely_encoding and isinstance(val, binary_type):
try:
val = val.decode(likely_encoding)
except UnicodeError:
pass
confoverrides[key] = val
elif opt == '-A':
try:
key, val = val.split('=')
except ValueError:
print('Error: -A option argument must be in the form name=value.',
file=sys.stderr)
return 1
try:
val = int(val)
except ValueError:
if likely_encoding and isinstance(val, binary_type):
try:
val = val.decode(likely_encoding)
except UnicodeError:
pass
confoverrides['html_context.%s' % key] = val
elif opt == '-n':
confoverrides['nitpicky'] = True
elif opt == '-N':
nocolor()
elif opt == '-E':
freshenv = True
elif opt == '-q':
status = None
elif opt == '-Q':
status = None
warning = None
elif opt == '-W':
warningiserror = True
elif opt == '-w':
warnfile = val
elif opt == '-P':
use_pdb = True
elif opt == '-T':
show_traceback = True
elif opt == '-v':
verbosity += 1
show_traceback = True
elif opt == '-j':
try:
parallel = int(val)
except ValueError:
print('Error: -j option argument must be an integer.',
file=sys.stderr)
return 1
confoverrides['html_context.%s' % key] = val
if warning and warnfile:
warnfp = open(warnfile, 'w')
warning = Tee(warning, warnfp)
error = warning
if not path.isdir(outdir):
if status:
print('Making output directory...', file=status)
os.makedirs(outdir)
if opts.nitpicky:
confoverrides['nitpicky'] = True
app = None
try:
app = Sphinx(srcdir, confdir, outdir, doctreedir, buildername,
confoverrides, status, warning, freshenv,
warningiserror, tags, verbosity, parallel)
app.build(force_all, filenames)
app = Sphinx(srcdir, confdir, outdir, doctreedir, opts.builder,
confoverrides, status, warning, opts.freshenv,
opts.warningiserror, opts.tags, opts.verbosity, opts.jobs)
app.build(opts.force_all, filenames)
return app.statuscode
except (Exception, KeyboardInterrupt) as err:
if use_pdb:
if opts.pdb:
import pdb
print(red('Exception occurred while building, starting debugger:'),
file=error)
@ -261,7 +254,7 @@ def main(argv):
pdb.post_mortem(sys.exc_info()[2])
else:
print(file=error)
if show_traceback:
if opts.verbosity or opts.traceback:
traceback.print_exc(None, error)
print(file=error)
if isinstance(err, KeyboardInterrupt):

View File

@ -11,7 +11,8 @@
import re
from docutils.parsers.rst import Directive, directives
from docutils import nodes
from docutils.parsers.rst import Directive, directives, roles
from sphinx import addnodes
from sphinx.util.docfields import DocFieldTransformer
@ -162,6 +163,34 @@ class ObjectDescription(Directive):
DescDirective = ObjectDescription
class DefaultRole(Directive):
"""
Set the default interpreted text role. Overridden from docutils.
"""
optional_arguments = 1
final_argument_whitespace = False
def run(self):
if not self.arguments:
if '' in roles._roles:
# restore the "default" default role
del roles._roles['']
return []
role_name = self.arguments[0]
role, messages = roles.role(role_name, self.state_machine.language,
self.lineno, self.state.reporter)
if role is None:
error = self.state.reporter.error(
'Unknown interpreted text role "%s".' % role_name,
nodes.literal_block(self.block_text, self.block_text),
line=self.lineno)
return messages + [error]
roles._roles[''] = role
self.state.document.settings.env.temp_data['default_role'] = role_name
return messages
class DefaultDomain(Directive):
"""
Directive to (re-)set the default domain for this source file.
@ -186,6 +215,7 @@ class DefaultDomain(Directive):
return []
directives.register_directive('default-role', DefaultRole)
directives.register_directive('default-domain', DefaultDomain)
directives.register_directive('describe', ObjectDescription)
# new, more consistent, name

View File

@ -155,10 +155,13 @@ class Domain(object):
self._role_cache = {}
self._directive_cache = {}
self._role2type = {}
self._type2role = {}
for name, obj in iteritems(self.object_types):
for rolename in obj.roles:
self._role2type.setdefault(rolename, []).append(name)
self._type2role[name] = obj.roles[0] if obj.roles else ''
self.objtypes_for_role = self._role2type.get
self.role_for_objtype = self._type2role.get
def role(self, name):
"""Return a role adapter function that always gives the registered
@ -199,6 +202,14 @@ class Domain(object):
"""Remove traces of a document in the domain-specific inventories."""
pass
def merge_domaindata(self, docnames, otherdata):
"""Merge in data regarding *docnames* from a different domaindata
inventory (coming from a subprocess in parallel builds).
"""
raise NotImplementedError('merge_domaindata must be implemented in %s '
'to be able to do parallel builds!' %
self.__class__)
def process_doc(self, env, docname, document):
"""Process a document after it is read by the environment."""
pass
@ -220,6 +231,22 @@ class Domain(object):
"""
pass
def resolve_any_xref(self, env, fromdocname, builder, target, node, contnode):
"""Resolve the pending_xref *node* with the given *target*.
The reference comes from an "any" or similar role, which means that we
don't know the type. Otherwise, the arguments are the same as for
:meth:`resolve_xref`.
The method must return a list (potentially empty) of tuples
``('domain:role', newnode)``, where ``'domain:role'`` is the name of a
role that could have created the same reference, e.g. ``'py:func'``.
``newnode`` is what :meth:`resolve_xref` would return.
.. versionadded:: 1.3
"""
raise NotImplementedError
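# Hedged sketch (not part of this commit): a minimal override for a custom
# domain, assuming a flat self.data['objects'] mapping of name -> (docname,
# objtype) like the one several domains below use; make_refnode comes from
# sphinx.util.nodes.
#
#     def resolve_any_xref(self, env, fromdocname, builder, target,
#                          node, contnode):
#         if target not in self.data['objects']:
#             return []
#         docname, objtype = self.data['objects'][target]
#         return [('%s:%s' % (self.name, self.role_for_objtype(objtype)),
#                  make_refnode(builder, fromdocname, docname, target,
#                               contnode, target))]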
def get_objects(self):
"""Return an iterable of "object descriptions", which are tuples with
five items:

View File

@ -130,7 +130,7 @@ class CObject(ObjectDescription):
if m:
name = m.group(1)
typename = self.env.temp_data.get('c:type')
typename = self.env.ref_context.get('c:type')
if self.name == 'c:member' and typename:
fullname = typename + '.' + name
else:
@ -212,12 +212,12 @@ class CObject(ObjectDescription):
self.typename_set = False
if self.name == 'c:type':
if self.names:
self.env.temp_data['c:type'] = self.names[0]
self.env.ref_context['c:type'] = self.names[0]
self.typename_set = True
def after_content(self):
if self.typename_set:
self.env.temp_data['c:type'] = None
self.env.ref_context.pop('c:type', None)
class CXRefRole(XRefRole):
@ -269,6 +269,12 @@ class CDomain(Domain):
if fn == docname:
del self.data['objects'][fullname]
def merge_domaindata(self, docnames, otherdata):
# XXX check duplicates
for fullname, (fn, objtype) in otherdata['objects'].items():
if fn in docnames:
self.data['objects'][fullname] = (fn, objtype)
def resolve_xref(self, env, fromdocname, builder,
typ, target, node, contnode):
# strip pointer asterisk
@ -279,6 +285,17 @@ class CDomain(Domain):
return make_refnode(builder, fromdocname, obj[0], 'c.' + target,
contnode, target)
def resolve_any_xref(self, env, fromdocname, builder, target,
node, contnode):
# strip pointer asterisk
target = target.rstrip(' *')
if target not in self.data['objects']:
return []
obj = self.data['objects'][target]
return [('c:' + self.role_for_objtype(obj[1]),
make_refnode(builder, fromdocname, obj[0], 'c.' + target,
contnode, target))]
def get_objects(self):
for refname, (docname, type) in list(self.data['objects'].items()):
yield (refname, refname, type, docname, 'c.' + refname, 1)

View File

@ -141,7 +141,6 @@
"""
import re
import traceback
from copy import deepcopy
from six import iteritems, text_type
@ -222,9 +221,9 @@ _id_operator = {
'delete[]': 'da',
# the arguments will make the difference between unary and binary
# '+(unary)' : 'ps',
#'-(unary)' : 'ng',
#'&(unary)' : 'ad',
#'*(unary)' : 'de',
# '-(unary)' : 'ng',
# '&(unary)' : 'ad',
# '*(unary)' : 'de',
'~': 'co',
'+': 'pl',
'-': 'mi',
@ -319,7 +318,7 @@ class ASTBase(UnicodeMixin):
def _verify_description_mode(mode):
if not mode in ('lastIsName', 'noneIsName', 'markType', 'param'):
if mode not in ('lastIsName', 'noneIsName', 'markType', 'param'):
raise Exception("Description mode '%s' is invalid." % mode)
@ -328,7 +327,7 @@ class ASTOperatorBuildIn(ASTBase):
self.op = op
def get_id(self):
if not self.op in _id_operator:
if self.op not in _id_operator:
raise Exception('Internal error: Built-in operator "%s" cannot '
'be mapped to an id.' % self.op)
return _id_operator[self.op]
@ -434,7 +433,7 @@ class ASTNestedNameElement(ASTBase):
'', refdomain='cpp', reftype='type',
reftarget=targetText, modname=None, classname=None)
if env: # during testing we don't have an env, do we?
pnode['cpp:parent'] = env.temp_data.get('cpp:parent')
pnode['cpp:parent'] = env.ref_context.get('cpp:parent')
pnode += nodes.Text(text_type(self.identifier))
signode += pnode
elif mode == 'lastIsName':
@ -532,7 +531,7 @@ class ASTTrailingTypeSpecFundamental(ASTBase):
return self.name
def get_id(self):
if not self.name in _id_fundamental:
if self.name not in _id_fundamental:
raise Exception(
'Semi-internal error: Fundamental type "%s" can not be mapped '
'to an id. Is it a true fundamental type? If not so, the '
@ -866,7 +865,7 @@ class ASTDeclerator(ASTBase):
isinstance(self.ptrOps[-1], ASTPtrOpParamPack)):
return False
else:
return self.declId != None
return self.declId is not None
def __unicode__(self):
res = []
@ -949,7 +948,7 @@ class ASTType(ASTBase):
_verify_description_mode(mode)
self.declSpecs.describe_signature(signode, 'markType', env)
if (self.decl.require_start_space() and
len(text_type(self.declSpecs)) > 0):
len(text_type(self.declSpecs)) > 0):
signode += nodes.Text(' ')
self.decl.describe_signature(signode, mode, env)
@ -1178,7 +1177,7 @@ class DefinitionParser(object):
else:
while not self.eof:
if (len(symbols) == 0 and
self.current_char in (
self.current_char in (
',', '>')):
break
# TODO: actually implement nice handling
@ -1190,8 +1189,7 @@ class DefinitionParser(object):
self.fail(
'Could not find end of constant '
'template argument.')
value = self.definition[
startPos:self.pos].strip()
value = self.definition[startPos:self.pos].strip()
templateArgs.append(ASTTemplateArgConstant(value))
self.skip_ws()
if self.skip_string('>'):
@ -1422,7 +1420,7 @@ class DefinitionParser(object):
def _parse_declerator(self, named, paramMode=None, typed=True):
if paramMode:
if not paramMode in ('type', 'function'):
if paramMode not in ('type', 'function'):
raise Exception(
"Internal error, unknown paramMode '%s'." % paramMode)
ptrOps = []
@ -1493,7 +1491,7 @@ class DefinitionParser(object):
if outer == 'member':
value = self.read_rest().strip()
return ASTInitializer(value)
elif outer == None: # function parameter
elif outer is None: # function parameter
symbols = []
startPos = self.pos
self.skip_ws()
@ -1528,7 +1526,7 @@ class DefinitionParser(object):
doesn't need to name the arguments
"""
if outer: # always named
if not outer in ('type', 'member', 'function'):
if outer not in ('type', 'member', 'function'):
raise Exception('Internal error, unknown outer "%s".' % outer)
assert not named
@ -1652,12 +1650,12 @@ class CPPObject(ObjectDescription):
if theid not in self.state.document.ids:
# the name is not unique, the first one will win
objects = self.env.domaindata['cpp']['objects']
if not name in objects:
if name not in objects:
signode['names'].append(name)
signode['ids'].append(theid)
signode['first'] = (not self.names)
self.state.document.note_explicit_target(signode)
if not name in objects:
if name not in objects:
objects.setdefault(name,
(self.env.docname, ast.objectType, theid))
# add the uninstantiated template if it doesn't exist
@ -1665,8 +1663,8 @@ class CPPObject(ObjectDescription):
if uninstantiated != name and uninstantiated not in objects:
signode['names'].append(uninstantiated)
objects.setdefault(uninstantiated, (
self.env.docname, ast.objectType, theid))
self.env.temp_data['cpp:lastname'] = ast.prefixedName
self.env.docname, ast.objectType, theid))
self.env.ref_context['cpp:lastname'] = ast.prefixedName
indextext = self.get_index_text(name)
if not re.compile(r'^[a-zA-Z0-9_]*$').match(theid):
@ -1693,7 +1691,7 @@ class CPPObject(ObjectDescription):
raise ValueError
self.describe_signature(signode, ast)
parent = self.env.temp_data.get('cpp:parent')
parent = self.env.ref_context.get('cpp:parent')
if parent and len(parent) > 0:
ast = ast.clone()
ast.prefixedName = ast.name.prefix_nested_name(parent[-1])
@ -1741,15 +1739,15 @@ class CPPClassObject(CPPObject):
return _('%s (C++ class)') % name
def before_content(self):
lastname = self.env.temp_data['cpp:lastname']
lastname = self.env.ref_context['cpp:lastname']
assert lastname
if 'cpp:parent' in self.env.temp_data:
self.env.temp_data['cpp:parent'].append(lastname)
if 'cpp:parent' in self.env.ref_context:
self.env.ref_context['cpp:parent'].append(lastname)
else:
self.env.temp_data['cpp:parent'] = [lastname]
self.env.ref_context['cpp:parent'] = [lastname]
def after_content(self):
self.env.temp_data['cpp:parent'].pop()
self.env.ref_context['cpp:parent'].pop()
def parse_definition(self, parser):
return parser.parse_class_object()
@ -1774,7 +1772,7 @@ class CPPNamespaceObject(Directive):
def run(self):
env = self.state.document.settings.env
if self.arguments[0].strip() in ('NULL', '0', 'nullptr'):
env.temp_data['cpp:parent'] = []
env.ref_context['cpp:parent'] = []
else:
parser = DefinitionParser(self.arguments[0])
try:
@ -1784,13 +1782,13 @@ class CPPNamespaceObject(Directive):
self.state_machine.reporter.warning(e.description,
line=self.lineno)
else:
env.temp_data['cpp:parent'] = [prefix]
env.ref_context['cpp:parent'] = [prefix]
return []
class CPPXRefRole(XRefRole):
def process_link(self, env, refnode, has_explicit_title, title, target):
parent = env.temp_data.get('cpp:parent')
parent = env.ref_context.get('cpp:parent')
if parent:
refnode['cpp:parent'] = parent[:]
if not has_explicit_title:
@ -1838,18 +1836,24 @@ class CPPDomain(Domain):
if data[0] == docname:
del self.data['objects'][fullname]
def resolve_xref(self, env, fromdocname, builder,
typ, target, node, contnode):
def merge_domaindata(self, docnames, otherdata):
# XXX check duplicates
for fullname, data in otherdata['objects'].items():
if data[0] in docnames:
self.data['objects'][fullname] = data
def _resolve_xref_inner(self, env, fromdocname, builder,
target, node, contnode, warn=True):
def _create_refnode(nameAst):
name = text_type(nameAst)
if name not in self.data['objects']:
# try dropping the last template
name = nameAst.get_name_no_last_template()
if name not in self.data['objects']:
return None
return None, None
docname, objectType, id = self.data['objects'][name]
return make_refnode(builder, fromdocname, docname, id, contnode,
name)
name), objectType
parser = DefinitionParser(target)
try:
@ -1858,20 +1862,34 @@ class CPPDomain(Domain):
if not parser.eof:
raise DefinitionError('')
except DefinitionError:
env.warn_node('unparseable C++ definition: %r' % target, node)
return None
if warn:
env.warn_node('unparseable C++ definition: %r' % target, node)
return None, None
# try as is the name is fully qualified
refNode = _create_refnode(nameAst)
if refNode:
return refNode
res = _create_refnode(nameAst)
if res[0]:
return res
# try qualifying it with the parent
parent = node.get('cpp:parent', None)
if parent and len(parent) > 0:
return _create_refnode(nameAst.prefix_nested_name(parent[-1]))
else:
return None
return None, None
def resolve_xref(self, env, fromdocname, builder,
typ, target, node, contnode):
return self._resolve_xref_inner(env, fromdocname, builder, target, node,
contnode)[0]
def resolve_any_xref(self, env, fromdocname, builder, target,
node, contnode):
node, objtype = self._resolve_xref_inner(env, fromdocname, builder,
target, node, contnode, warn=False)
if node:
return [('cpp:' + self.role_for_objtype(objtype), node)]
return []
def get_objects(self):
for refname, (docname, type, theid) in iteritems(self.data['objects']):

View File

@ -45,7 +45,7 @@ class JSObject(ObjectDescription):
nameprefix = None
name = prefix
objectname = self.env.temp_data.get('js:object')
objectname = self.env.ref_context.get('js:object')
if nameprefix:
if objectname:
# someone documenting the method of an attribute of the current
@ -77,7 +77,7 @@ class JSObject(ObjectDescription):
def add_target_and_index(self, name_obj, sig, signode):
objectname = self.options.get(
'object', self.env.temp_data.get('js:object'))
'object', self.env.ref_context.get('js:object'))
fullname = name_obj[0]
if fullname not in self.state.document.ids:
signode['names'].append(fullname)
@ -140,7 +140,7 @@ class JSConstructor(JSCallable):
class JSXRefRole(XRefRole):
def process_link(self, env, refnode, has_explicit_title, title, target):
# basically what sphinx.domains.python.PyXRefRole does
refnode['js:object'] = env.temp_data.get('js:object')
refnode['js:object'] = env.ref_context.get('js:object')
if not has_explicit_title:
title = title.lstrip('.')
target = target.lstrip('~')
@ -179,7 +179,7 @@ class JavaScriptDomain(Domain):
'attr': JSXRefRole(),
}
initial_data = {
'objects': {}, # fullname -> docname, objtype
'objects': {}, # fullname -> docname, objtype
}
def clear_doc(self, docname):
@ -187,6 +187,12 @@ class JavaScriptDomain(Domain):
if fn == docname:
del self.data['objects'][fullname]
def merge_domaindata(self, docnames, otherdata):
# XXX check duplicates
for fullname, (fn, objtype) in otherdata['objects'].items():
if fn in docnames:
self.data['objects'][fullname] = (fn, objtype)
def find_obj(self, env, obj, name, typ, searchorder=0):
if name[-2:] == '()':
name = name[:-2]
@ -214,6 +220,16 @@ class JavaScriptDomain(Domain):
return make_refnode(builder, fromdocname, obj[0],
name.replace('$', '_S_'), contnode, name)
def resolve_any_xref(self, env, fromdocname, builder, target, node,
contnode):
objectname = node.get('js:object')
name, obj = self.find_obj(env, objectname, target, None, 1)
if not obj:
return []
return [('js:' + self.role_for_objtype(obj[1]),
make_refnode(builder, fromdocname, obj[0],
name.replace('$', '_S_'), contnode, name))]
def get_objects(self):
for refname, (docname, type) in list(self.data['objects'].items()):
yield refname, refname, type, docname, \

View File

@ -156,8 +156,8 @@ class PyObject(ObjectDescription):
# determine module and class name (if applicable), as well as full name
modname = self.options.get(
'module', self.env.temp_data.get('py:module'))
classname = self.env.temp_data.get('py:class')
'module', self.env.ref_context.get('py:module'))
classname = self.env.ref_context.get('py:class')
if classname:
add_module = False
if name_prefix and name_prefix.startswith(classname):
@ -194,7 +194,7 @@ class PyObject(ObjectDescription):
# 'exceptions' module.
elif add_module and self.env.config.add_module_names:
modname = self.options.get(
'module', self.env.temp_data.get('py:module'))
'module', self.env.ref_context.get('py:module'))
if modname and modname != 'exceptions':
nodetext = modname + '.'
signode += addnodes.desc_addname(nodetext, nodetext)
@ -225,7 +225,7 @@ class PyObject(ObjectDescription):
def add_target_and_index(self, name_cls, sig, signode):
modname = self.options.get(
'module', self.env.temp_data.get('py:module'))
'module', self.env.ref_context.get('py:module'))
fullname = (modname and modname + '.' or '') + name_cls[0]
# note target
if fullname not in self.state.document.ids:
@ -254,7 +254,7 @@ class PyObject(ObjectDescription):
def after_content(self):
if self.clsname_set:
self.env.temp_data['py:class'] = None
self.env.ref_context.pop('py:class', None)
class PyModulelevel(PyObject):
@ -299,7 +299,7 @@ class PyClasslike(PyObject):
def before_content(self):
PyObject.before_content(self)
if self.names:
self.env.temp_data['py:class'] = self.names[0][0]
self.env.ref_context['py:class'] = self.names[0][0]
self.clsname_set = True
@ -377,8 +377,8 @@ class PyClassmember(PyObject):
def before_content(self):
PyObject.before_content(self)
lastname = self.names and self.names[-1][1]
if lastname and not self.env.temp_data.get('py:class'):
self.env.temp_data['py:class'] = lastname.strip('.')
if lastname and not self.env.ref_context.get('py:class'):
self.env.ref_context['py:class'] = lastname.strip('.')
self.clsname_set = True
@ -434,7 +434,7 @@ class PyModule(Directive):
env = self.state.document.settings.env
modname = self.arguments[0].strip()
noindex = 'noindex' in self.options
env.temp_data['py:module'] = modname
env.ref_context['py:module'] = modname
ret = []
if not noindex:
env.domaindata['py']['modules'][modname] = \
@ -472,16 +472,16 @@ class PyCurrentModule(Directive):
env = self.state.document.settings.env
modname = self.arguments[0].strip()
if modname == 'None':
env.temp_data['py:module'] = None
env.ref_context.pop('py:module', None)
else:
env.temp_data['py:module'] = modname
env.ref_context['py:module'] = modname
return []
class PyXRefRole(XRefRole):
def process_link(self, env, refnode, has_explicit_title, title, target):
refnode['py:module'] = env.temp_data.get('py:module')
refnode['py:class'] = env.temp_data.get('py:class')
refnode['py:module'] = env.ref_context.get('py:module')
refnode['py:class'] = env.ref_context.get('py:class')
if not has_explicit_title:
title = title.lstrip('.') # only has a meaning for the target
target = target.lstrip('~') # only has a meaning for the title
@ -627,6 +627,15 @@ class PythonDomain(Domain):
if fn == docname:
del self.data['modules'][modname]
def merge_domaindata(self, docnames, otherdata):
# XXX check duplicates?
for fullname, (fn, objtype) in otherdata['objects'].items():
if fn in docnames:
self.data['objects'][fullname] = (fn, objtype)
for modname, data in otherdata['modules'].items():
if data[0] in docnames:
self.data['modules'][modname] = data
def find_obj(self, env, modname, classname, name, type, searchmode=0):
"""Find a Python object for "name", perhaps using the given module
and/or classname. Returns a list of (name, object entry) tuples.
@ -643,7 +652,10 @@ class PythonDomain(Domain):
newname = None
if searchmode == 1:
objtypes = self.objtypes_for_role(type)
if type is None:
objtypes = list(self.object_types)
else:
objtypes = self.objtypes_for_role(type)
if objtypes is not None:
if modname and classname:
fullname = modname + '.' + classname + '.' + name
@ -704,22 +716,44 @@ class PythonDomain(Domain):
name, obj = matches[0]
if obj[1] == 'module':
# get additional info for modules
docname, synopsis, platform, deprecated = self.data['modules'][name]
assert docname == obj[0]
title = name
if synopsis:
title += ': ' + synopsis
if deprecated:
title += _(' (deprecated)')
if platform:
title += ' (' + platform + ')'
return make_refnode(builder, fromdocname, docname,
'module-' + name, contnode, title)
return self._make_module_refnode(builder, fromdocname, name,
contnode)
else:
return make_refnode(builder, fromdocname, obj[0], name,
contnode, name)
def resolve_any_xref(self, env, fromdocname, builder, target,
node, contnode):
modname = node.get('py:module')
clsname = node.get('py:class')
results = []
# always search in "refspecific" mode with the :any: role
matches = self.find_obj(env, modname, clsname, target, None, 1)
for name, obj in matches:
if obj[1] == 'module':
results.append(('py:mod',
self._make_module_refnode(builder, fromdocname,
name, contnode)))
else:
results.append(('py:' + self.role_for_objtype(obj[1]),
make_refnode(builder, fromdocname, obj[0], name,
contnode, name)))
return results
def _make_module_refnode(self, builder, fromdocname, name, contnode):
# get additional info for modules
docname, synopsis, platform, deprecated = self.data['modules'][name]
title = name
if synopsis:
title += ': ' + synopsis
if deprecated:
title += _(' (deprecated)')
if platform:
title += ' (' + platform + ')'
return make_refnode(builder, fromdocname, docname,
'module-' + name, contnode, title)
def get_objects(self):
for modname, info in iteritems(self.data['modules']):
yield (modname, modname, 'module', info[0], 'module-' + modname, 0)

View File

@ -123,6 +123,12 @@ class ReSTDomain(Domain):
if doc == docname:
del self.data['objects'][typ, name]
def merge_domaindata(self, docnames, otherdata):
# XXX check duplicates
for (typ, name), doc in otherdata['objects'].items():
if doc in docnames:
self.data['objects'][typ, name] = doc
def resolve_xref(self, env, fromdocname, builder, typ, target, node,
contnode):
objects = self.data['objects']
@ -134,6 +140,19 @@ class ReSTDomain(Domain):
objtype + '-' + target,
contnode, target + ' ' + objtype)
def resolve_any_xref(self, env, fromdocname, builder, target,
node, contnode):
objects = self.data['objects']
results = []
for objtype in self.object_types:
if (objtype, target) in self.data['objects']:
results.append(('rst:' + self.role_for_objtype(objtype),
make_refnode(builder, fromdocname,
objects[objtype, target],
objtype + '-' + target,
contnode, target + ' ' + objtype)))
return results
def get_objects(self):
for (typ, name), docname in iteritems(self.data['objects']):
yield name, name, typ, docname, typ + '-' + name, 1

View File

@ -28,7 +28,9 @@ from sphinx.util.compat import Directive
# RE for option descriptions
option_desc_re = re.compile(r'((?:/|-|--)?[-_a-zA-Z0-9]+)(\s*.*)')
option_desc_re = re.compile(r'((?:/|--|-|\+)?[-?@#_a-zA-Z0-9]+)(=?\s*.*)')
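# Hedged examples (not part of this commit) of what the widened expression
# above now accepts; the option strings are illustrative only:
#
#     >>> option_desc_re.match('--output=FILE').groups()
#     ('--output', '=FILE')
#     >>> option_desc_re.match('+opt args').groups()
#     ('+opt', ' args')
#     >>> option_desc_re.match('/V').groups()
#     ('/V', '')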
# RE for grammar tokens
token_re = re.compile('`(\w+)`', re.U)
class GenericObject(ObjectDescription):
@ -144,8 +146,9 @@ class Cmdoption(ObjectDescription):
self.env.warn(
self.env.docname,
'Malformed option description %r, should '
'look like "opt", "-opt args", "--opt args" or '
'"/opt args"' % potential_option, self.lineno)
'look like "opt", "-opt args", "--opt args", '
'"/opt args" or "+opt args"' % potential_option,
self.lineno)
continue
optname, args = m.groups()
if count:
@ -163,7 +166,7 @@ class Cmdoption(ObjectDescription):
return firstname
def add_target_and_index(self, firstname, sig, signode):
currprogram = self.env.temp_data.get('std:program')
currprogram = self.env.ref_context.get('std:program')
for optname in signode.get('allnames', []):
targetname = optname.replace('/', '-')
if not targetname.startswith('-'):
@ -198,36 +201,19 @@ class Program(Directive):
env = self.state.document.settings.env
program = ws_re.sub('-', self.arguments[0].strip())
if program == 'None':
env.temp_data['std:program'] = None
env.ref_context.pop('std:program', None)
else:
env.temp_data['std:program'] = program
env.ref_context['std:program'] = program
return []
class OptionXRefRole(XRefRole):
innernodeclass = addnodes.literal_emphasis
def _split(self, text, refnode, env):
try:
program, target = re.split(' (?=-|--|/)', text, 1)
except ValueError:
env.warn_node('Malformed :option: %r, does not contain option '
'marker - or -- or /' % text, refnode)
return None, text
else:
program = ws_re.sub('-', program)
return program, target
def process_link(self, env, refnode, has_explicit_title, title, target):
program = env.temp_data.get('std:program')
if not has_explicit_title:
if ' ' in title and not (title.startswith('/') or
title.startswith('-')):
program, target = self._split(title, refnode, env)
target = target.strip()
elif ' ' in target:
program, target = self._split(target, refnode, env)
refnode['refprogram'] = program
# validate content
if not re.match('(.+ )?[-/+]', target):
env.warn_node('Malformed :option: %r, does not contain option '
'marker - or -- or / or +' % target, refnode)
refnode['std:program'] = env.ref_context.get('std:program')
return title, target
@ -327,7 +313,7 @@ class Glossary(Directive):
else:
messages.append(self.state.reporter.system_message(
2, 'glossary seems to be misformatted, check '
'indentation', source=source, line=lineno))
'indentation', source=source, line=lineno))
else:
if not in_definition:
# first line of definition, determines indentation
@ -338,7 +324,7 @@ class Glossary(Directive):
else:
messages.append(self.state.reporter.system_message(
2, 'glossary seems to be misformatted, check '
'indentation', source=source, line=lineno))
'indentation', source=source, line=lineno))
was_empty = False
# now, parse all the entries into a big definition list
@ -359,7 +345,7 @@ class Glossary(Directive):
tmp.source = source
tmp.line = lineno
new_id, termtext, new_termnodes = \
make_termnodes_from_paragraph_node(env, tmp)
make_termnodes_from_paragraph_node(env, tmp)
ids.append(new_id)
termtexts.append(termtext)
termnodes.extend(new_termnodes)
@ -386,8 +372,6 @@ class Glossary(Directive):
return messages + [node]
token_re = re.compile('`(\w+)`', re.U)
def token_xrefs(text):
retnodes = []
pos = 0
@ -472,7 +456,7 @@ class StandardDomain(Domain):
'productionlist': ProductionList,
}
roles = {
'option': OptionXRefRole(innernodeclass=addnodes.literal_emphasis),
'option': OptionXRefRole(),
'envvar': EnvVarXRefRole(),
# links to tokens in grammar productions
'token': XRefRole(),
@ -522,6 +506,21 @@ class StandardDomain(Domain):
if fn == docname:
del self.data['anonlabels'][key]
def merge_domaindata(self, docnames, otherdata):
# XXX duplicates?
for key, data in otherdata['progoptions'].items():
if data[0] in docnames:
self.data['progoptions'][key] = data
for key, data in otherdata['objects'].items():
if data[0] in docnames:
self.data['objects'][key] = data
for key, data in otherdata['labels'].items():
if data[0] in docnames:
self.data['labels'][key] = data
for key, data in otherdata['anonlabels'].items():
if data[0] in docnames:
self.data['anonlabels'][key] = data
def process_doc(self, env, docname, document):
labels, anonlabels = self.data['labels'], self.data['anonlabels']
for name, explicit in iteritems(document.nametypes):
@ -532,7 +531,7 @@ class StandardDomain(Domain):
continue
node = document.ids[labelid]
if name.isdigit() or 'refuri' in node or \
node.tagname.startswith('desc_'):
node.tagname.startswith('desc_'):
# ignore footnote labels, labels automatically generated from a
# link and object descriptions
continue
@ -541,7 +540,7 @@ class StandardDomain(Domain):
'in ' + env.doc2path(labels[name][0]), node)
anonlabels[name] = docname, labelid
if node.tagname == 'section':
sectname = clean_astext(node[0]) # node[0] == title node
sectname = clean_astext(node[0]) # node[0] == title node
elif node.tagname == 'figure':
for n in node:
if n.tagname == 'caption':
@ -579,13 +578,13 @@ class StandardDomain(Domain):
if node['refexplicit']:
# reference to anonymous label; the reference uses
# the supplied link caption
docname, labelid = self.data['anonlabels'].get(target, ('',''))
docname, labelid = self.data['anonlabels'].get(target, ('', ''))
sectname = node.astext()
else:
# reference to named label; the final node will
# contain the section name after the label
docname, labelid, sectname = self.data['labels'].get(target,
('','',''))
('', '', ''))
if not docname:
return None
newnode = nodes.reference('', '', internal=True)
@ -607,13 +606,22 @@ class StandardDomain(Domain):
return newnode
elif typ == 'keyword':
# keywords are oddballs: they are referenced by named labels
docname, labelid, _ = self.data['labels'].get(target, ('','',''))
docname, labelid, _ = self.data['labels'].get(target, ('', '', ''))
if not docname:
return None
return make_refnode(builder, fromdocname, docname,
labelid, contnode)
elif typ == 'option':
progname = node['refprogram']
target = target.strip()
# most obvious thing: we are a flag option without program
if target.startswith(('-', '/', '+')):
progname = node.get('std:program')
else:
try:
progname, target = re.split(r' (?=-|--|/|\+)', target, 1)
except ValueError:
return None
progname = ws_re.sub('-', progname.strip())
docname, labelid = self.data['progoptions'].get((progname, target),
('', ''))
if not docname:
@ -633,6 +641,28 @@ class StandardDomain(Domain):
return make_refnode(builder, fromdocname, docname,
labelid, contnode)
def resolve_any_xref(self, env, fromdocname, builder, target,
node, contnode):
results = []
ltarget = target.lower() # :ref: lowercases its target automatically
for role in ('ref', 'option'): # do not try "keyword"
res = self.resolve_xref(env, fromdocname, builder, role,
ltarget if role == 'ref' else target,
node, contnode)
if res:
results.append(('std:' + role, res))
# all others
for objtype in self.object_types:
key = (objtype, target)
if objtype == 'term':
key = (objtype, ltarget)
if key in self.data['objects']:
docname, labelid = self.data['objects'][key]
results.append(('std:' + self.role_for_objtype(objtype),
make_refnode(builder, fromdocname, docname,
labelid, contnode)))
return results
def get_objects(self):
for (prog, option), info in iteritems(self.data['progoptions']):
yield (option, option, 'option', info[0], info[1], 1)

View File

@ -33,26 +33,31 @@ from docutils.parsers.rst import roles, directives
from docutils.parsers.rst.languages import en as english
from docutils.parsers.rst.directives.html import MetaBody
from docutils.writers import UnfilteredWriter
from docutils.frontend import OptionParser
from sphinx import addnodes
from sphinx.util import url_re, get_matching_docs, docname_join, split_into, \
FilenameUniqDict
FilenameUniqDict
from sphinx.util.nodes import clean_astext, make_refnode, WarningStream
from sphinx.util.osutil import SEP, fs_encoding, find_catalog_files
from sphinx.util.osutil import SEP, find_catalog_files, getcwd, fs_encoding
from sphinx.util.console import bold, purple
from sphinx.util.matching import compile_matchers
from sphinx.util.parallel import ParallelTasks, parallel_available, make_chunks
from sphinx.util.websupport import is_commentable
from sphinx.errors import SphinxError, ExtensionError
from sphinx.locale import _
from sphinx.versioning import add_uids, merge_doctrees
from sphinx.transforms import DefaultSubstitutions, MoveModuleTargets, \
HandleCodeBlocks, SortIds, CitationReferences, Locale, \
RemoveTranslatableInline, SphinxContentsFilter
HandleCodeBlocks, SortIds, CitationReferences, Locale, \
RemoveTranslatableInline, SphinxContentsFilter
orig_role_function = roles.role
orig_directive_function = directives.directive
class ElementLookupError(Exception): pass
class ElementLookupError(Exception):
pass
default_settings = {
@ -69,7 +74,9 @@ default_settings = {
# This is increased every time an environment attribute is added
# or changed to properly invalidate pickle files.
ENV_VERSION = 42 + (sys.version_info[0] - 2)
#
# NOTE: increase base version by 2 to have distinct numbers for Py2 and 3
ENV_VERSION = 44 + (sys.version_info[0] - 2)
dummy_reporter = Reporter('', 4, 4)
@ -105,6 +112,33 @@ class SphinxDummyWriter(UnfilteredWriter):
pass
class SphinxFileInput(FileInput):
def __init__(self, app, env, *args, **kwds):
self.app = app
self.env = env
# don't call sys.exit() on IOErrors
kwds['handle_io_errors'] = False
kwds['error_handler'] = 'sphinx' # py3: handle error on open.
FileInput.__init__(self, *args, **kwds)
def decode(self, data):
if isinstance(data, text_type): # py3: `data` already decoded.
return data
return data.decode(self.encoding, 'sphinx') # py2: decoding
def read(self):
data = FileInput.read(self)
if self.app:
arg = [data]
self.app.emit('source-read', self.env.docname, arg)
data = arg[0]
if self.env.config.rst_epilog:
data = data + '\n' + self.env.config.rst_epilog + '\n'
if self.env.config.rst_prolog:
data = self.env.config.rst_prolog + '\n' + data
return data
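# Hedged sketch (not part of this commit): an extension can hook the
# 'source-read' event emitted by read() above; the handler mutates the
# one-element source list in place.  The '|release-note|' marker is
# illustrative only.
#
#     def on_source_read(app, docname, source):
#         source[0] = source[0].replace('|release-note|', app.config.release)
#
#     def setup(app):
#         app.connect('source-read', on_source_read)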
class BuildEnvironment:
"""
The environment in which the ReST files are translated.
@ -122,7 +156,7 @@ class BuildEnvironment:
finally:
picklefile.close()
if env.version != ENV_VERSION:
raise IOError('env version not current')
raise IOError('build environment version not current')
env.config.values = config.values
return env
@ -138,9 +172,9 @@ class BuildEnvironment:
# remove potentially pickling-problematic values from config
for key, val in list(vars(self.config).items()):
if key.startswith('_') or \
isinstance(val, types.ModuleType) or \
isinstance(val, types.FunctionType) or \
isinstance(val, class_types):
isinstance(val, types.ModuleType) or \
isinstance(val, types.FunctionType) or \
isinstance(val, class_types):
del self.config[key]
try:
pickle.dump(self, picklefile, pickle.HIGHEST_PROTOCOL)
@ -181,8 +215,8 @@ class BuildEnvironment:
# the source suffix.
self.found_docs = set() # contains all existing docnames
self.all_docs = {} # docname -> mtime at the time of build
# contains all built docnames
self.all_docs = {} # docname -> mtime at the time of reading
# contains all read docnames
self.dependencies = {} # docname -> set of dependent file
# names, relative to documentation root
self.reread_always = set() # docnames to re-read unconditionally on
@ -223,6 +257,10 @@ class BuildEnvironment:
# temporary data storage while reading a document
self.temp_data = {}
# context for cross-references (e.g. current module or class)
# this is similar to temp_data, but will for example be copied to
# attributes of "any" cross references
self.ref_context = {}
def set_warnfunc(self, func):
self._warnfunc = func
@ -292,6 +330,50 @@ class BuildEnvironment:
for domain in self.domains.values():
domain.clear_doc(docname)
def merge_info_from(self, docnames, other, app):
"""Merge global information gathered about *docnames* while reading them
from the *other* environment.
This possibly comes from a parallel build process.
"""
docnames = set(docnames)
for docname in docnames:
self.all_docs[docname] = other.all_docs[docname]
if docname in other.reread_always:
self.reread_always.add(docname)
self.metadata[docname] = other.metadata[docname]
if docname in other.dependencies:
self.dependencies[docname] = other.dependencies[docname]
self.titles[docname] = other.titles[docname]
self.longtitles[docname] = other.longtitles[docname]
self.tocs[docname] = other.tocs[docname]
self.toc_num_entries[docname] = other.toc_num_entries[docname]
# toc_secnumbers is not assigned during read
if docname in other.toctree_includes:
self.toctree_includes[docname] = other.toctree_includes[docname]
self.indexentries[docname] = other.indexentries[docname]
if docname in other.glob_toctrees:
self.glob_toctrees.add(docname)
if docname in other.numbered_toctrees:
self.numbered_toctrees.add(docname)
self.images.merge_other(docnames, other.images)
self.dlfiles.merge_other(docnames, other.dlfiles)
for subfn, fnset in other.files_to_rebuild.items():
self.files_to_rebuild.setdefault(subfn, set()).update(fnset & docnames)
for key, data in other.citations.items():
# XXX duplicates?
if data[0] in docnames:
self.citations[key] = data
for version, changes in other.versionchanges.items():
self.versionchanges.setdefault(version, []).extend(
change for change in changes if change[1] in docnames)
for domainname, domain in self.domains.items():
domain.merge_domaindata(docnames, other.domaindata[domainname])
app.emit('env-merge-info', self, docnames, other)
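
Each domain's per-document data is merged through the ``merge_domaindata`` call above, which receives the set of docnames read by the worker and that worker's copy of the domain data. A minimal sketch of how a third-party domain might implement it, assuming it stores a simple ``objects`` dict keyed by object name (the layout is an illustrative assumption):

    from sphinx.domains import Domain

    class RecipeDomain(Domain):  # hypothetical example domain
        name = 'recipe'
        initial_data = {'objects': {}}  # fullname -> (docname, objtype)

        def clear_doc(self, docname):
            for fullname, (doc, objtype) in list(self.data['objects'].items()):
                if doc == docname:
                    del self.data['objects'][fullname]

        def merge_domaindata(self, docnames, otherdata):
            # copy over the entries that the other (worker) process read
            for fullname, (doc, objtype) in otherdata['objects'].items():
                if doc in docnames:
                    self.data['objects'][fullname] = (doc, objtype)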
def doc2path(self, docname, base=True, suffix=None):
"""Return the filename for the document name.
@ -407,13 +489,11 @@ class BuildEnvironment:
return added, changed, removed
def update(self, config, srcdir, doctreedir, app=None):
def update(self, config, srcdir, doctreedir, app):
"""(Re-)read all files new or changed since last update.
Returns a summary, the total count of documents to reread and an
iterator that yields docnames as it processes them. Store all
environment docnames in the canonical format (ie using SEP as a
separator in place of os.path.sep).
Store all environment docnames in the canonical format (ie using SEP as
a separator in place of os.path.sep).
"""
config_changed = False
if self.config is None:
@ -445,6 +525,8 @@ class BuildEnvironment:
# this cache also needs to be updated every time
self._nitpick_ignore = set(self.config.nitpick_ignore)
app.info(bold('updating environment: '), nonl=1)
added, changed, removed = self.get_outdated_files(config_changed)
# allow user intervention as well
@ -459,30 +541,98 @@ class BuildEnvironment:
msg += '%s added, %s changed, %s removed' % (len(added), len(changed),
len(removed))
app.info(msg)
def update_generator():
self.app = app
# clear all files no longer present
for docname in removed:
app.emit('env-purge-doc', self, docname)
self.clear_doc(docname)
# read all new and changed files
docnames = sorted(added | changed)
# allow changing and reordering the list of docs to read
app.emit('env-before-read-docs', self, docnames)
# check if we should do parallel or serial read
par_ok = False
if parallel_available and len(docnames) > 5 and app.parallel > 1:
par_ok = True
for extname, md in app._extension_metadata.items():
ext_ok = md.get('parallel_read_safe')
if ext_ok:
continue
if ext_ok is None:
app.warn('the %s extension does not declare if it '
'is safe for parallel reading, assuming it '
'isn\'t - please ask the extension author to '
'check and make it explicit' % extname)
app.warn('doing serial read')
else:
app.warn('the %s extension is not safe for parallel '
'reading, doing serial read' % extname)
par_ok = False
break
if par_ok:
self._read_parallel(docnames, app, nproc=app.parallel)
else:
self._read_serial(docnames, app)
if config.master_doc not in self.all_docs:
self.warn(None, 'master file %s not found' %
self.doc2path(config.master_doc))
self.app = None
app.emit('env-updated', self)
return docnames
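
The ``env-before-read-docs`` event emitted above passes the mutable list of docnames about to be read, so an extension can reorder or prune it in place before serial or parallel reading starts. A hedged sketch of such a handler (the ordering policy is only an example):

    import os

    def prioritize_large_docs(app, env, docnames):
        # read the largest sources first, e.g. to keep parallel workers busy
        docnames.sort(key=lambda d: -os.path.getsize(env.doc2path(d)))

    def setup(app):
        app.connect('env-before-read-docs', prioritize_large_docs)
        return {'version': '0.1', 'parallel_read_safe': True}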
def _read_serial(self, docnames, app):
for docname in app.status_iterator(docnames, 'reading sources... ',
purple, len(docnames)):
# remove all inventory entries for that file
app.emit('env-purge-doc', self, docname)
self.clear_doc(docname)
self.read_doc(docname, app)
def _read_parallel(self, docnames, app, nproc):
# clear all outdated docs at once
for docname in docnames:
app.emit('env-purge-doc', self, docname)
self.clear_doc(docname)
def read_process(docs):
self.app = app
self.warnings = []
self.set_warnfunc(lambda *args: self.warnings.append(args))
for docname in docs:
self.read_doc(docname, app)
# allow pickling self to send it back
self.set_warnfunc(None)
del self.app
del self.domains
del self.config.values
del self.config
return self
# clear all files no longer present
for docname in removed:
if app:
app.emit('env-purge-doc', self, docname)
self.clear_doc(docname)
def merge(docs, otherenv):
warnings.extend(otherenv.warnings)
self.merge_info_from(docs, otherenv, app)
# read all new and changed files
for docname in sorted(added | changed):
yield docname
self.read_doc(docname, app=app)
tasks = ParallelTasks(nproc)
chunks = make_chunks(docnames, nproc)
if config.master_doc not in self.all_docs:
self.warn(None, 'master file %s not found' %
self.doc2path(config.master_doc))
warnings = []
for chunk in app.status_iterator(
chunks, 'reading sources... ', purple, len(chunks)):
tasks.add_task(read_process, chunk, merge)
self.app = None
if app:
app.emit('env-updated', self)
# make sure all threads have finished
app.info(bold('waiting for workers...'))
tasks.join()
return msg, len(added | changed), update_generator()
for warning in warnings:
self._warnfunc(*warning)
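
The parallel path above is only taken when every active extension declares itself safe for parallel reading; as the warning text explains, undeclared extensions force a serial read. Extensions opt in through the metadata dict returned from ``setup()``, as the built-in extensions further down in this commit now do. A sketch for a third-party extension (names are illustrative):

    # myextension.py -- hypothetical third-party extension
    def setup(app):
        app.add_config_value('myext_option', False, 'env')
        # declare that reading source files in worker processes is safe for us
        return {'version': '1.0', 'parallel_read_safe': True}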
def check_dependents(self, already):
to_rewrite = self.assign_section_numbers()
@ -496,7 +646,8 @@ class BuildEnvironment:
"""Custom decoding error handler that warns and replaces."""
linestart = error.object.rfind(b'\n', 0, error.start)
lineend = error.object.find(b'\n', error.start)
if lineend == -1: lineend = len(error.object)
if lineend == -1:
lineend = len(error.object)
lineno = error.object.count(b'\n', 0, error.start) + 1
self.warn(self.docname, 'undecodable source characters, '
'replacing with "?": %r' %
@ -550,19 +701,8 @@ class BuildEnvironment:
directives.directive = directive
roles.role = role
def read_doc(self, docname, src_path=None, save_parsed=True, app=None):
"""Parse a file and add/update inventory entries for the doctree.
If srcpath is given, read from a different source file.
"""
# remove all inventory entries for that file
if app:
app.emit('env-purge-doc', self, docname)
self.clear_doc(docname)
if src_path is None:
src_path = self.doc2path(docname)
def read_doc(self, docname, app=None):
"""Parse a file and add/update inventory entries for the doctree."""
self.temp_data['docname'] = docname
# defaults to the global default, but can be re-set in a document
@ -576,6 +716,12 @@ class BuildEnvironment:
self.patch_lookup_functions()
docutilsconf = path.join(self.srcdir, 'docutils.conf')
# read docutils.conf from source dir, not from current dir
OptionParser.standard_config_files[1] = docutilsconf
if path.isfile(docutilsconf):
self.note_dependency(docutilsconf)
if self.config.default_role:
role_fn, messages = roles.role(self.config.default_role, english,
0, dummy_reporter)
@ -587,38 +733,17 @@ class BuildEnvironment:
codecs.register_error('sphinx', self.warn_and_replace)
class SphinxSourceClass(FileInput):
def __init__(self_, *args, **kwds):
# don't call sys.exit() on IOErrors
kwds['handle_io_errors'] = False
kwds['error_handler'] = 'sphinx' # py3: handle error on open.
FileInput.__init__(self_, *args, **kwds)
def decode(self_, data):
if isinstance(data, text_type): # py3: `data` already decoded.
return data
return data.decode(self_.encoding, 'sphinx') # py2: decoding
def read(self_):
data = FileInput.read(self_)
if app:
arg = [data]
app.emit('source-read', docname, arg)
data = arg[0]
if self.config.rst_epilog:
data = data + '\n' + self.config.rst_epilog + '\n'
if self.config.rst_prolog:
data = self.config.rst_prolog + '\n' + data
return data
# publish manually
pub = Publisher(reader=SphinxStandaloneReader(),
writer=SphinxDummyWriter(),
source_class=SphinxSourceClass,
destination_class=NullOutput)
pub.set_components(None, 'restructuredtext', None)
pub.process_programmatic_settings(None, self.settings, None)
pub.set_source(None, src_path)
src_path = self.doc2path(docname)
source = SphinxFileInput(app, self, source=None, source_path=src_path,
encoding=self.config.source_encoding)
pub.source = source
pub.settings._source = src_path
pub.set_destination(None, None)
pub.publish()
doctree = pub.document
@ -641,12 +766,12 @@ class BuildEnvironment:
if app:
app.emit('doctree-read', doctree)
# store time of build, for outdated files detection
# store time of reading, for outdated files detection
# (Some filesystems have coarse timestamp resolution;
# therefore time.time() can be older than filesystem's timestamp.
# For example, FAT32 has 2sec timestamp resolution.)
self.all_docs[docname] = max(
time.time(), path.getmtime(self.doc2path(docname)))
time.time(), path.getmtime(self.doc2path(docname)))
if self.versioning_condition:
# get old doctree
@ -679,21 +804,20 @@ class BuildEnvironment:
# cleanup
self.temp_data.clear()
self.ref_context.clear()
roles._roles.pop('', None) # if a document has set a local default role
if save_parsed:
# save the parsed doctree
doctree_filename = self.doc2path(docname, self.doctreedir,
'.doctree')
dirname = path.dirname(doctree_filename)
if not path.isdir(dirname):
os.makedirs(dirname)
f = open(doctree_filename, 'wb')
try:
pickle.dump(doctree, f, pickle.HIGHEST_PROTOCOL)
finally:
f.close()
else:
return doctree
# save the parsed doctree
doctree_filename = self.doc2path(docname, self.doctreedir,
'.doctree')
dirname = path.dirname(doctree_filename)
if not path.isdir(dirname):
os.makedirs(dirname)
f = open(doctree_filename, 'wb')
try:
pickle.dump(doctree, f, pickle.HIGHEST_PROTOCOL)
finally:
f.close()
# utilities to use while reading a document
@ -704,13 +828,17 @@ class BuildEnvironment:
@property
def currmodule(self):
"""Backwards compatible alias."""
return self.temp_data.get('py:module')
"""Backwards compatible alias. Will be removed."""
self.warn(self.docname, 'env.currmodule is being referenced by an '
'extension; this API will be removed in the future')
return self.ref_context.get('py:module')
@property
def currclass(self):
"""Backwards compatible alias."""
return self.temp_data.get('py:class')
"""Backwards compatible alias. Will be removed."""
self.warn(self.docname, 'env.currclass is being referenced by an '
'extension; this API will be removed in the future')
return self.ref_context.get('py:class')
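
Since ``temp_data`` is cleared after each document while the new ``ref_context`` survives into reference resolution, extension code that looked up the current module or class should move to the new attribute. A minimal compatibility sketch (the fallback is an assumption about what third-party code may want during the transition):

    def current_py_module(env):
        # prefer the new ref_context, fall back to temp_data on older Sphinx
        ref_context = getattr(env, 'ref_context', None)
        if ref_context is not None:
            return ref_context.get('py:module')
        return env.temp_data.get('py:module')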
def new_serialno(self, category=''):
"""Return a serial number, e.g. for index entry targets.
@ -740,7 +868,7 @@ class BuildEnvironment:
def note_versionchange(self, type, version, node, lineno):
self.versionchanges.setdefault(version, []).append(
(type, self.temp_data['docname'], lineno,
self.temp_data.get('py:module'),
self.ref_context.get('py:module'),
self.temp_data.get('object'), node.astext()))
# post-processing of read doctrees
@ -755,7 +883,7 @@ class BuildEnvironment:
def process_dependencies(self, docname, doctree):
"""Process docutils-generated dependency info."""
cwd = os.getcwd()
cwd = getcwd()
frompath = path.join(path.normpath(self.srcdir), 'dummy')
deps = doctree.settings.record_dependencies
if not deps:
@ -763,6 +891,8 @@ class BuildEnvironment:
for dep in deps.list:
# the dependency path is relative to the working dir, so get
# one relative to the srcdir
if isinstance(dep, bytes):
dep = dep.decode(fs_encoding)
relpath = relative_path(frompath,
path.normpath(path.join(cwd, dep)))
self.dependencies.setdefault(docname, set()).add(relpath)
@ -846,7 +976,7 @@ class BuildEnvironment:
# nodes are multiply inherited...
if isinstance(node, nodes.authors):
md['authors'] = [author.astext() for author in node]
elif isinstance(node, nodes.TextElement): # e.g. author
elif isinstance(node, nodes.TextElement): # e.g. author
md[node.__class__.__name__] = node.astext()
else:
name, body = node
@ -976,7 +1106,7 @@ class BuildEnvironment:
def build_toc_from(self, docname, document):
"""Build a TOC from the doctree and store it in the inventory."""
numentries = [0] # nonlocal again...
numentries = [0] # nonlocal again...
def traverse_in_section(node, cls):
"""Like traverse(), but stay within the same section."""
@ -1102,7 +1232,6 @@ class BuildEnvironment:
stream=WarningStream(self._warnfunc))
return doctree
def get_and_resolve_doctree(self, docname, builder, doctree=None,
prune_toctrees=True, includehidden=False):
"""Read the doctree from the pickle, resolve cross-references and
@ -1117,7 +1246,8 @@ class BuildEnvironment:
# now, resolve all toctree nodes
for toctreenode in doctree.traverse(addnodes.toctree):
result = self.resolve_toctree(docname, builder, toctreenode,
prune=prune_toctrees, includehidden=includehidden)
prune=prune_toctrees,
includehidden=includehidden)
if result is None:
toctreenode.replace_self([])
else:
@ -1174,7 +1304,7 @@ class BuildEnvironment:
else:
# cull sub-entries whose parents aren't 'current'
if (collapse and depth > 1 and
'iscurrent' not in subnode.parent):
'iscurrent' not in subnode.parent):
subnode.parent.remove(subnode)
else:
# recurse on visible children
@ -1256,7 +1386,7 @@ class BuildEnvironment:
child = toc.children[0]
for refnode in child.traverse(nodes.reference):
if refnode['refuri'] == ref and \
not refnode['anchorname']:
not refnode['anchorname']:
refnode.children = [nodes.Text(title)]
if not toc.children:
# empty toc means: no titles will show up in the toctree
@ -1346,49 +1476,23 @@ class BuildEnvironment:
domain = self.domains[node['refdomain']]
except KeyError:
raise NoUri
newnode = domain.resolve_xref(self, fromdocname, builder,
newnode = domain.resolve_xref(self, refdoc, builder,
typ, target, node, contnode)
# really hardwired reference types
elif typ == 'any':
newnode = self._resolve_any_reference(builder, node, contnode)
elif typ == 'doc':
# direct reference to a document by source name;
# can be absolute or relative
docname = docname_join(refdoc, target)
if docname in self.all_docs:
if node['refexplicit']:
# reference with explicit title
caption = node.astext()
else:
caption = clean_astext(self.titles[docname])
innernode = nodes.emphasis(caption, caption)
newnode = nodes.reference('', '', internal=True)
newnode['refuri'] = builder.get_relative_uri(
fromdocname, docname)
newnode.append(innernode)
newnode = self._resolve_doc_reference(builder, node, contnode)
elif typ == 'citation':
docname, labelid = self.citations.get(target, ('', ''))
if docname:
try:
newnode = make_refnode(builder, fromdocname,
docname, labelid, contnode)
except NoUri:
# remove the ids we added in the CitationReferences
# transform since they can't be transferred to
# the contnode (if it's a Text node)
if not isinstance(contnode, nodes.Element):
del node['ids'][:]
raise
elif 'ids' in node:
# remove the ids attribute annotated by
# transforms.CitationReference.apply.
del node['ids'][:]
newnode = self._resolve_citation(builder, refdoc, node, contnode)
# no new node found? try the missing-reference event
if newnode is None:
newnode = builder.app.emit_firstresult(
'missing-reference', self, node, contnode)
# still not found? warn if in nit-picky mode
# still not found? warn if the node wishes to be warned about, or
# if we are in nit-picky mode
if newnode is None:
self._warn_missing_reference(
fromdocname, typ, target, node, domain)
self._warn_missing_reference(refdoc, typ, target, node, domain)
except NoUri:
newnode = contnode
node.replace_self(newnode or contnode)
@ -1399,7 +1503,7 @@ class BuildEnvironment:
# allow custom references to be resolved
builder.app.emit('doctree-resolved', doctree, fromdocname)
def _warn_missing_reference(self, fromdoc, typ, target, node, domain):
def _warn_missing_reference(self, refdoc, typ, target, node, domain):
warn = node.get('refwarn')
if self.config.nitpicky:
warn = True
@ -1418,13 +1522,91 @@ class BuildEnvironment:
msg = 'unknown document: %(target)s'
elif typ == 'citation':
msg = 'citation not found: %(target)s'
elif node.get('refdomain', 'std') != 'std':
elif node.get('refdomain', 'std') not in ('', 'std'):
msg = '%s:%s reference target not found: %%(target)s' % \
(node['refdomain'], typ)
else:
msg = '%s reference target not found: %%(target)s' % typ
msg = '%r reference target not found: %%(target)s' % typ
self.warn_node(msg % {'target': target}, node)
def _resolve_doc_reference(self, builder, node, contnode):
# direct reference to a document by source name;
# can be absolute or relative
docname = docname_join(node['refdoc'], node['reftarget'])
if docname in self.all_docs:
if node['refexplicit']:
# reference with explicit title
caption = node.astext()
else:
caption = clean_astext(self.titles[docname])
innernode = nodes.emphasis(caption, caption)
newnode = nodes.reference('', '', internal=True)
newnode['refuri'] = builder.get_relative_uri(node['refdoc'], docname)
newnode.append(innernode)
return newnode
def _resolve_citation(self, builder, fromdocname, node, contnode):
docname, labelid = self.citations.get(node['reftarget'], ('', ''))
if docname:
try:
newnode = make_refnode(builder, fromdocname,
docname, labelid, contnode)
return newnode
except NoUri:
# remove the ids we added in the CitationReferences
# transform since they can't be transferred to
# the contnode (if it's a Text node)
if not isinstance(contnode, nodes.Element):
del node['ids'][:]
raise
elif 'ids' in node:
# remove the ids attribute annotated by
# transforms.CitationReference.apply.
del node['ids'][:]
def _resolve_any_reference(self, builder, node, contnode):
"""Resolve reference generated by the "any" role."""
refdoc = node['refdoc']
target = node['reftarget']
results = []
# first, try resolving as :doc:
doc_ref = self._resolve_doc_reference(builder, node, contnode)
if doc_ref:
results.append(('doc', doc_ref))
# next, do the standard domain (makes this a priority)
results.extend(self.domains['std'].resolve_any_xref(
self, refdoc, builder, target, node, contnode))
for domain in self.domains.values():
if domain.name == 'std':
continue # we did this one already
try:
results.extend(domain.resolve_any_xref(self, refdoc, builder,
target, node, contnode))
except NotImplementedError:
# the domain doesn't yet support the new interface
# we have to manually collect possible references (SLOW)
for role in domain.roles:
res = domain.resolve_xref(self, refdoc, builder, role, target,
node, contnode)
if res:
results.append(('%s:%s' % (domain.name, role), res))
# now, see how many matches we got...
if not results:
return None
if len(results) > 1:
nice_results = ' or '.join(':%s:' % r[0] for r in results)
self.warn_node('more than one target found for \'any\' cross-'
'reference %r: could be %s' % (target, nice_results),
node)
res_role, newnode = results[0]
# Override "any" class with the actual role type to get the styling
# approximately correct.
res_domain = res_role.split(':')[0]
if newnode and newnode[0].get('classes'):
newnode[0]['classes'].append(res_domain)
newnode[0]['classes'].append(res_role.replace(':', '-'))
return newnode
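
A custom domain takes part in the resolution loop above by implementing ``resolve_any_xref``, which must return a list of ``(rolename, refnode)`` pairs rather than a single node. A sketch continuing the hypothetical domain layout used earlier (the anchor-id handling is a simplifying assumption):

    from sphinx.domains import Domain
    from sphinx.util.nodes import make_refnode

    class RecipeDomain(Domain):  # hypothetical example domain, continued
        name = 'recipe'
        initial_data = {'objects': {}}  # fullname -> (docname, objtype)

        def resolve_any_xref(self, env, fromdocname, builder, target,
                             node, contnode):
            results = []
            if target in self.data['objects']:
                todocname, objtype = self.data['objects'][target]
                # assume the object name doubles as its anchor id
                results.append(('recipe:' + objtype,
                                make_refnode(builder, fromdocname, todocname,
                                             target, contnode, target)))
            return results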
def process_only_nodes(self, doctree, builder, fromdocname=None):
# A comment on the comment() nodes being inserted: replacing by [] would
# result in a "Losing ids" exception if there is a target node before
@ -1595,7 +1777,7 @@ class BuildEnvironment:
# prefixes match: add entry as subitem of the
# previous entry
oldsubitems.setdefault(m.group(2), [[], {}])[0].\
extend(targets)
extend(targets)
del newlist[i]
continue
oldkey = m.group(1)
@ -1622,6 +1804,7 @@ class BuildEnvironment:
def collect_relations(self):
relations = {}
getinc = self.toctree_includes.get
def collect(parents, parents_set, docname, previous, next):
# circular relationship?
if docname in parents_set:
@ -1661,8 +1844,8 @@ class BuildEnvironment:
# same for children
if includes:
for subindex, args in enumerate(zip(includes,
[None] + includes,
includes[1:] + [None])):
[None] + includes,
includes[1:] + [None])):
collect([(docname, subindex)] + parents,
parents_set.union([docname]), *args)
relations[docname] = [parents[0][0], previous, next]

View File

@ -10,6 +10,9 @@
:license: BSD, see LICENSE for details.
"""
import traceback
class SphinxError(Exception):
"""
Base class for Sphinx errors that are shown to the user in a nicer
@ -62,3 +65,13 @@ class PycodeError(Exception):
if len(self.args) > 1:
res += ' (exception was: %r)' % self.args[1]
return res
class SphinxParallelError(Exception):
def __init__(self, orig_exc, traceback):
self.orig_exc = orig_exc
self.traceback = traceback
def __str__(self):
return traceback.format_exception_only(
self.orig_exc.__class__, self.orig_exc)[0].strip()

View File

@ -30,7 +30,7 @@ from sphinx.application import ExtensionError
from sphinx.util.nodes import nested_parse_with_titles
from sphinx.util.compat import Directive
from sphinx.util.inspect import getargspec, isdescriptor, safe_getmembers, \
safe_getattr, safe_repr, is_builtin_class_method
safe_getattr, safe_repr, is_builtin_class_method
from sphinx.util.docstrings import prepare_docstring
@ -50,11 +50,13 @@ class DefDict(dict):
def __init__(self, default):
dict.__init__(self)
self.default = default
def __getitem__(self, key):
try:
return dict.__getitem__(self, key)
except KeyError:
return self.default
def __bool__(self):
# docutils check "if option_spec"
return True
@ -92,6 +94,7 @@ class _MockModule(object):
else:
return _MockModule()
def mock_import(modname):
if '.' in modname:
pkg, _n, mods = modname.rpartition('.')
@ -104,12 +107,14 @@ def mock_import(modname):
ALL = object()
INSTANCEATTR = object()
def members_option(arg):
"""Used to convert the :members: option to auto directives."""
if arg is None:
return ALL
return [x.strip() for x in arg.split(',')]
def members_set_option(arg):
"""Used to convert the :members: option to auto directives."""
if arg is None:
@ -118,6 +123,7 @@ def members_set_option(arg):
SUPPRESS = object()
def annotation_option(arg):
if arg is None:
# suppress showing the representation of the object
@ -125,6 +131,7 @@ def annotation_option(arg):
else:
return arg
def bool_option(arg):
"""Used to convert flag options to auto directives. (Instead of
directives.flag(), which returns None).
@ -201,6 +208,7 @@ def cut_lines(pre, post=0, what=None):
lines.append('')
return process
def between(marker, what=None, keepempty=False, exclude=False):
"""Return a listener that either keeps, or if *exclude* is True excludes,
lines between lines that match the *marker* regular expression. If no line
@ -211,6 +219,7 @@ def between(marker, what=None, keepempty=False, exclude=False):
be processed.
"""
marker_re = re.compile(marker)
def process(app, what_, name, obj, options, lines):
if what and what_ not in what:
return
@ -325,7 +334,7 @@ class Documenter(object):
# an autogenerated one
try:
explicit_modname, path, base, args, retann = \
py_ext_sig_re.match(self.name).groups()
py_ext_sig_re.match(self.name).groups()
except AttributeError:
self.directive.warn('invalid signature for auto%s (%r)' %
(self.objtype, self.name))
@ -340,7 +349,7 @@ class Documenter(object):
parents = []
self.modname, self.objpath = \
self.resolve_name(modname, parents, path, base)
self.resolve_name(modname, parents, path, base)
if not self.modname:
return False
@ -637,19 +646,19 @@ class Documenter(object):
keep = False
if want_all and membername.startswith('__') and \
membername.endswith('__') and len(membername) > 4:
membername.endswith('__') and len(membername) > 4:
# special __methods__
if self.options.special_members is ALL and \
membername != '__doc__':
keep = has_doc or self.options.undoc_members
elif self.options.special_members and \
self.options.special_members is not ALL and \
self.options.special_members is not ALL and \
membername in self.options.special_members:
keep = has_doc or self.options.undoc_members
elif want_all and membername.startswith('_'):
# ignore members whose name starts with _ by default
keep = self.options.private_members and \
(has_doc or self.options.undoc_members)
(has_doc or self.options.undoc_members)
elif (namespace, membername) in attr_docs:
# keep documented attributes
keep = True
@ -685,7 +694,7 @@ class Documenter(object):
self.env.temp_data['autodoc:class'] = self.objpath[0]
want_all = all_members or self.options.inherited_members or \
self.options.members is ALL
self.options.members is ALL
# find out which members are documentable
members_check_module, members = self.get_object_members(want_all)
@ -707,11 +716,11 @@ class Documenter(object):
# give explicitly separated module name, so that members
# of inner classes can be documented
full_mname = self.modname + '::' + \
'.'.join(self.objpath + [mname])
'.'.join(self.objpath + [mname])
documenter = classes[-1](self.directive, full_mname, self.indent)
memberdocumenters.append((documenter, isattr))
member_order = self.options.member_order or \
self.env.config.autodoc_member_order
self.env.config.autodoc_member_order
if member_order == 'groupwise':
# sort by group; relies on stable sort to keep items in the
# same group sorted alphabetically
@ -719,6 +728,7 @@ class Documenter(object):
elif member_order == 'bysource' and self.analyzer:
# sort by source order, by virtue of the module analyzer
tagorder = self.analyzer.tagorder
def keyfunc(entry):
fullname = entry[0].name.split('::')[1]
return tagorder.get(fullname, len(tagorder))
@ -872,7 +882,7 @@ class ModuleDocumenter(Documenter):
self.directive.warn(
'missing attribute mentioned in :members: or __all__: '
'module %s, attribute %s' % (
safe_getattr(self.object, '__name__', '???'), mname))
safe_getattr(self.object, '__name__', '???'), mname))
return False, ret
@ -891,7 +901,7 @@ class ModuleLevelDocumenter(Documenter):
modname = self.env.temp_data.get('autodoc:module')
# ... or in the scope of a module directive
if not modname:
modname = self.env.temp_data.get('py:module')
modname = self.env.ref_context.get('py:module')
# ... else, it stays None, which means invalid
return modname, parents + [base]
@ -913,7 +923,7 @@ class ClassLevelDocumenter(Documenter):
mod_cls = self.env.temp_data.get('autodoc:class')
# ... or from a class directive
if mod_cls is None:
mod_cls = self.env.temp_data.get('py:class')
mod_cls = self.env.ref_context.get('py:class')
# ... if still None, there's no way to know
if mod_cls is None:
return None, []
@ -923,7 +933,7 @@ class ClassLevelDocumenter(Documenter):
if not modname:
modname = self.env.temp_data.get('autodoc:module')
if not modname:
modname = self.env.temp_data.get('py:module')
modname = self.env.ref_context.get('py:module')
# ... else, it stays None, which means invalid
return modname, parents + [base]
@ -976,6 +986,7 @@ class DocstringSignatureMixin(object):
self.args, self.retann = result
return Documenter.format_signature(self)
class DocstringStripSignatureMixin(DocstringSignatureMixin):
"""
Mixin for AttributeDocumenter to provide the
@ -1007,7 +1018,7 @@ class FunctionDocumenter(DocstringSignatureMixin, ModuleLevelDocumenter):
def format_args(self):
if inspect.isbuiltin(self.object) or \
inspect.ismethoddescriptor(self.object):
inspect.ismethoddescriptor(self.object):
# cannot introspect arguments of a C function or method
return None
try:
@ -1070,8 +1081,8 @@ class ClassDocumenter(ModuleLevelDocumenter):
# classes without __init__ method, default __init__ or
# __init__ written in C?
if initmeth is None or \
is_builtin_class_method(self.object, '__init__') or \
not(inspect.ismethod(initmeth) or inspect.isfunction(initmeth)):
is_builtin_class_method(self.object, '__init__') or \
not(inspect.ismethod(initmeth) or inspect.isfunction(initmeth)):
return None
try:
argspec = getargspec(initmeth)
@ -1109,7 +1120,7 @@ class ClassDocumenter(ModuleLevelDocumenter):
if not self.doc_as_attr and self.options.show_inheritance:
self.add_line(u'', '<autodoc>')
if hasattr(self.object, '__bases__') and len(self.object.__bases__):
bases = [b.__module__ == '__builtin__' and
bases = [b.__module__ in ('__builtin__', 'builtins') and
u':class:`%s`' % b.__name__ or
u':class:`%s.%s`' % (b.__module__, b.__name__)
for b in self.object.__bases__]
@ -1142,7 +1153,7 @@ class ClassDocumenter(ModuleLevelDocumenter):
# for new-style classes, no __init__ means default __init__
if (initdocstring is not None and
(initdocstring == object.__init__.__doc__ or # for pypy
initdocstring.strip() == object.__init__.__doc__)): #for !pypy
initdocstring.strip() == object.__init__.__doc__)): # for !pypy
initdocstring = None
if initdocstring:
if content == 'init':
@ -1186,7 +1197,7 @@ class ExceptionDocumenter(ClassDocumenter):
@classmethod
def can_document_member(cls, member, membername, isattr, parent):
return isinstance(member, class_types) and \
issubclass(member, BaseException)
issubclass(member, BaseException)
class DataDocumenter(ModuleLevelDocumenter):
@ -1233,7 +1244,7 @@ class MethodDocumenter(DocstringSignatureMixin, ClassLevelDocumenter):
@classmethod
def can_document_member(cls, member, membername, isattr, parent):
return inspect.isroutine(member) and \
not isinstance(parent, ModuleDocumenter)
not isinstance(parent, ModuleDocumenter)
def import_object(self):
ret = ClassLevelDocumenter.import_object(self)
@ -1257,7 +1268,7 @@ class MethodDocumenter(DocstringSignatureMixin, ClassLevelDocumenter):
def format_args(self):
if inspect.isbuiltin(self.object) or \
inspect.ismethoddescriptor(self.object):
inspect.ismethoddescriptor(self.object):
# can never get arguments of a C function or method
return None
argspec = getargspec(self.object)
@ -1272,7 +1283,7 @@ class MethodDocumenter(DocstringSignatureMixin, ClassLevelDocumenter):
pass
class AttributeDocumenter(DocstringStripSignatureMixin,ClassLevelDocumenter):
class AttributeDocumenter(DocstringStripSignatureMixin, ClassLevelDocumenter):
"""
Specialized Documenter subclass for attributes.
"""
@ -1290,9 +1301,9 @@ class AttributeDocumenter(DocstringStripSignatureMixin,ClassLevelDocumenter):
@classmethod
def can_document_member(cls, member, membername, isattr, parent):
isdatadesc = isdescriptor(member) and not \
isinstance(member, cls.method_types) and not \
type(member).__name__ in ("type", "method_descriptor",
"instancemethod")
isinstance(member, cls.method_types) and not \
type(member).__name__ in ("type", "method_descriptor",
"instancemethod")
return isdatadesc or (not isinstance(parent, ModuleDocumenter)
and not inspect.isroutine(member)
and not isinstance(member, class_types))
@ -1303,7 +1314,7 @@ class AttributeDocumenter(DocstringStripSignatureMixin,ClassLevelDocumenter):
def import_object(self):
ret = ClassLevelDocumenter.import_object(self)
if isdescriptor(self.object) and \
not isinstance(self.object, self.method_types):
not isinstance(self.object, self.method_types):
self._datadescriptor = True
else:
# if it's not a data descriptor
@ -1312,7 +1323,7 @@ class AttributeDocumenter(DocstringStripSignatureMixin,ClassLevelDocumenter):
def get_real_modname(self):
return self.get_attr(self.parent or self.object, '__module__', None) \
or self.modname
or self.modname
def add_directive_header(self, sig):
ClassLevelDocumenter.add_directive_header(self, sig)
@ -1479,7 +1490,7 @@ def add_documenter(cls):
raise ExtensionError('autodoc documenter %r must be a subclass '
'of Documenter' % cls)
# actually, it should be possible to override Documenters
#if cls.objtype in AutoDirective._registry:
# if cls.objtype in AutoDirective._registry:
# raise ExtensionError('autodoc documenter for %r is already '
# 'registered' % cls.objtype)
AutoDirective._registry[cls.objtype] = cls
@ -1504,7 +1515,7 @@ def setup(app):
app.add_event('autodoc-process-signature')
app.add_event('autodoc-skip-member')
return sphinx.__version__
return {'version': sphinx.__version__, 'parallel_read_safe': True}
class testcls:

View File

@ -432,11 +432,11 @@ def get_import_prefixes_from_env(env):
"""
prefixes = [None]
currmodule = env.temp_data.get('py:module')
currmodule = env.ref_context.get('py:module')
if currmodule:
prefixes.insert(0, currmodule)
currclass = env.temp_data.get('py:class')
currclass = env.ref_context.get('py:class')
if currclass:
if currmodule:
prefixes.insert(0, currmodule + "." + currclass)
@ -570,4 +570,4 @@ def setup(app):
app.connect('doctree-read', process_autosummary_toc)
app.connect('builder-inited', process_generate_options)
app.add_config_value('autosummary_generate', [], True)
return sphinx.__version__
return {'version': sphinx.__version__, 'parallel_read_safe': True}

View File

@ -265,4 +265,4 @@ def setup(app):
app.add_config_value('coverage_ignore_c_items', {}, False)
app.add_config_value('coverage_write_headline', True, False)
app.add_config_value('coverage_skip_undoc_in_source', False, False)
return sphinx.__version__
return {'version': sphinx.__version__, 'parallel_read_safe': True}

View File

@ -32,6 +32,7 @@ from sphinx.util.console import bold
blankline_re = re.compile(r'^\s*<BLANKLINE>', re.MULTILINE)
doctestopt_re = re.compile(r'#\s*doctest:.+$', re.MULTILINE)
# set up the necessary directives
class TestDirective(Directive):
@ -79,30 +80,35 @@ class TestDirective(Directive):
option_strings = self.options['options'].replace(',', ' ').split()
for option in option_strings:
if (option[0] not in '+-' or option[1:] not in
doctest.OPTIONFLAGS_BY_NAME):
doctest.OPTIONFLAGS_BY_NAME):
# XXX warn?
continue
flag = doctest.OPTIONFLAGS_BY_NAME[option[1:]]
node['options'][flag] = (option[0] == '+')
return [node]
class TestsetupDirective(TestDirective):
option_spec = {}
class TestcleanupDirective(TestDirective):
option_spec = {}
class DoctestDirective(TestDirective):
option_spec = {
'hide': directives.flag,
'options': directives.unchanged,
}
class TestcodeDirective(TestDirective):
option_spec = {
'hide': directives.flag,
}
class TestoutputDirective(TestDirective):
option_spec = {
'hide': directives.flag,
@ -112,6 +118,7 @@ class TestoutputDirective(TestDirective):
parser = doctest.DocTestParser()
# helper classes
class TestGroup(object):
@ -196,7 +203,7 @@ class DocTestBuilder(Builder):
def init(self):
# default options
self.opt = doctest.DONT_ACCEPT_TRUE_FOR_1 | doctest.ELLIPSIS | \
doctest.IGNORE_EXCEPTION_DETAIL
doctest.IGNORE_EXCEPTION_DETAIL
# HACK HACK HACK
# doctest compiles its snippets with type 'single'. That is nice
@ -247,6 +254,10 @@ Results of doctest builder run on %s
# write executive summary
def s(v):
return v != 1 and 's' or ''
repl = (self.total_tries, s(self.total_tries),
self.total_failures, s(self.total_failures),
self.setup_failures, s(self.setup_failures),
self.cleanup_failures, s(self.cleanup_failures))
self._out('''
Doctest summary
===============
@ -254,10 +265,7 @@ Doctest summary
%5d failure%s in tests
%5d failure%s in setup code
%5d failure%s in cleanup code
''' % (self.total_tries, s(self.total_tries),
self.total_failures, s(self.total_failures),
self.setup_failures, s(self.setup_failures),
self.cleanup_failures, s(self.cleanup_failures)))
''' % repl)
self.outfile.close()
if self.total_failures or self.setup_failures or self.cleanup_failures:
@ -290,11 +298,11 @@ Doctest summary
def condition(node):
return (isinstance(node, (nodes.literal_block, nodes.comment))
and 'testnodetype' in node) or \
isinstance(node, nodes.doctest_block)
isinstance(node, nodes.doctest_block)
else:
def condition(node):
return isinstance(node, (nodes.literal_block, nodes.comment)) \
and 'testnodetype' in node
and 'testnodetype' in node
for node in doctree.traverse(condition):
source = 'test' in node and node['test'] or node.astext()
if not source:
@ -364,7 +372,7 @@ Doctest summary
filename, 0, None)
sim_doctest.globs = ns
old_f = runner.failures
self.type = 'exec' # the snippet may contain multiple statements
self.type = 'exec' # the snippet may contain multiple statements
runner.run(sim_doctest, out=self._warn_out, clear_globs=False)
if runner.failures > old_f:
return False
@ -394,7 +402,7 @@ Doctest summary
new_opt = code[0].options.copy()
new_opt.update(example.options)
example.options = new_opt
self.type = 'single' # as for ordinary doctests
self.type = 'single' # as for ordinary doctests
else:
# testcode and output separate
output = code[1] and code[1].code or ''
@ -413,7 +421,7 @@ Doctest summary
options=options)
test = doctest.DocTest([example], {}, group.name,
filename, code[0].lineno, None)
self.type = 'exec' # multiple statements again
self.type = 'exec' # multiple statements again
# DocTest.__init__ copies the globs namespace, which we don't want
test.globs = ns
# also don't clear the globs namespace after running the doctest
@ -435,4 +443,4 @@ def setup(app):
app.add_config_value('doctest_test_doctest_blocks', 'default', False)
app.add_config_value('doctest_global_setup', '', False)
app.add_config_value('doctest_global_cleanup', '', False)
return sphinx.__version__
return {'version': sphinx.__version__, 'parallel_read_safe': True}

View File

@ -59,4 +59,4 @@ def setup_link_roles(app):
def setup(app):
app.add_config_value('extlinks', {}, 'env')
app.connect('builder-inited', setup_link_roles)
return sphinx.__version__
return {'version': sphinx.__version__, 'parallel_read_safe': True}

View File

@ -323,4 +323,4 @@ def setup(app):
app.add_config_value('graphviz_dot', 'dot', 'html')
app.add_config_value('graphviz_dot_args', [], 'html')
app.add_config_value('graphviz_output_format', 'png', 'html')
return sphinx.__version__
return {'version': sphinx.__version__, 'parallel_read_safe': True}

View File

@ -73,4 +73,4 @@ def setup(app):
app.add_node(ifconfig)
app.add_directive('ifconfig', IfConfig)
app.connect('doctree-resolved', process_ifconfig_nodes)
return sphinx.__version__
return {'version': sphinx.__version__, 'parallel_read_safe': True}

View File

@ -39,13 +39,14 @@ r"""
import re
import sys
import inspect
import __builtin__ as __builtin__ # as __builtin__ is for lib2to3 compatibility
try:
from hashlib import md5
except ImportError:
from md5 import md5
from six import text_type
from six.moves import builtins
from docutils import nodes
from docutils.parsers.rst import directives
@ -147,10 +148,10 @@ class InheritanceGraph(object):
displayed node names.
"""
all_classes = {}
builtins = vars(__builtin__).values()
py_builtins = vars(builtins).values()
def recurse(cls):
if not show_builtins and cls in builtins:
if not show_builtins and cls in py_builtins:
return
if not private_bases and cls.__name__.startswith('_'):
return
@ -174,7 +175,7 @@ class InheritanceGraph(object):
baselist = []
all_classes[cls] = (nodename, fullname, baselist, tooltip)
for base in cls.__bases__:
if not show_builtins and base in builtins:
if not show_builtins and base in py_builtins:
continue
if not private_bases and base.__name__.startswith('_'):
continue
@ -194,7 +195,7 @@ class InheritanceGraph(object):
completely general.
"""
module = cls.__module__
if module == '__builtin__':
if module in ('__builtin__', 'builtins'):
fullname = cls.__name__
else:
fullname = '%s.%s' % (module, cls.__name__)
@ -310,7 +311,7 @@ class InheritanceDiagram(Directive):
# Create a graph starting with the list of classes
try:
graph = InheritanceGraph(
class_names, env.temp_data.get('py:module'),
class_names, env.ref_context.get('py:module'),
parts=node['parts'],
private_bases='private-bases' in self.options)
except InheritanceException as err:
@ -407,4 +408,4 @@ def setup(app):
app.add_config_value('inheritance_graph_attrs', {}, False),
app.add_config_value('inheritance_node_attrs', {}, False),
app.add_config_value('inheritance_edge_attrs', {}, False),
return sphinx.__version__
return {'version': sphinx.__version__, 'parallel_read_safe': True}

View File

@ -222,15 +222,21 @@ def load_mappings(app):
def missing_reference(app, env, node, contnode):
"""Attempt to resolve a missing reference via intersphinx references."""
domain = node.get('refdomain')
if not domain:
# only objects in domains are in the inventory
return
target = node['reftarget']
objtypes = env.domains[domain].objtypes_for_role(node['reftype'])
if not objtypes:
return
objtypes = ['%s:%s' % (domain, objtype) for objtype in objtypes]
if node['reftype'] == 'any':
# we search anything!
objtypes = ['%s:%s' % (domain.name, objtype)
for domain in env.domains.values()
for objtype in domain.object_types]
else:
domain = node.get('refdomain')
if not domain:
# only objects in domains are in the inventory
return
objtypes = env.domains[domain].objtypes_for_role(node['reftype'])
if not objtypes:
return
objtypes = ['%s:%s' % (domain, objtype) for objtype in objtypes]
to_try = [(env.intersphinx_inventory, target)]
in_set = None
if ':' in target:
@ -248,7 +254,7 @@ def missing_reference(app, env, node, contnode):
# get correct path in case of subdirectories
uri = path.join(relative_path(node['refdoc'], env.srcdir), uri)
newnode = nodes.reference('', '', internal=False, refuri=uri,
reftitle=_('(in %s v%s)') % (proj, version))
reftitle=_('(in %s v%s)') % (proj, version))
if node.get('refexplicit'):
# use whatever title was given
newnode.append(contnode)
@ -276,4 +282,4 @@ def setup(app):
app.add_config_value('intersphinx_cache_limit', 5, False)
app.connect('missing-reference', missing_reference)
app.connect('builder-inited', load_mappings)
return sphinx.__version__
return {'version': sphinx.__version__, 'parallel_read_safe': True}

View File

@ -57,4 +57,4 @@ def setup(app):
mathbase_setup(app, (html_visit_math, None), (html_visit_displaymath, None))
app.add_config_value('jsmath_path', '', False)
app.connect('builder-inited', builder_inited)
return sphinx.__version__
return {'version': sphinx.__version__, 'parallel_read_safe': True}

View File

@ -16,9 +16,11 @@ from sphinx import addnodes
from sphinx.locale import _
from sphinx.errors import SphinxError
class LinkcodeError(SphinxError):
category = "linkcode error"
def doctree_read(app, doctree):
env = app.builder.env
@ -68,7 +70,8 @@ def doctree_read(app, doctree):
classes=['viewcode-link'])
signode += onlynode
def setup(app):
app.connect('doctree-read', doctree_read)
app.add_config_value('linkcode_resolve', None, '')
return sphinx.__version__
return {'version': sphinx.__version__, 'parallel_read_safe': True}

View File

@ -69,4 +69,4 @@ def setup(app):
app.add_config_value('mathjax_inline', [r'\(', r'\)'], 'html')
app.add_config_value('mathjax_display', [r'\[', r'\]'], 'html')
app.connect('builder-inited', builder_inited)
return sphinx.__version__
return {'version': sphinx.__version__, 'parallel_read_safe': True}

View File

@ -256,7 +256,7 @@ def setup(app):
for name, (default, rebuild) in iteritems(Config._config_values):
app.add_config_value(name, default, rebuild)
return sphinx.__version__
return {'version': sphinx.__version__, 'parallel_read_safe': True}
def _process_docstring(app, what, name, obj, options, lines):

View File

@ -246,4 +246,4 @@ def setup(app):
app.add_config_value('pngmath_latex_preamble', '', 'html')
app.add_config_value('pngmath_add_tooltips', True, 'html')
app.connect('build-finished', cleanup_tempdir)
return sphinx.__version__
return {'version': sphinx.__version__, 'parallel_read_safe': True}

View File

@ -150,6 +150,14 @@ def purge_todos(app, env, docname):
if todo['docname'] != docname]
def merge_info(app, env, docnames, other):
if not hasattr(other, 'todo_all_todos'):
return
if not hasattr(env, 'todo_all_todos'):
env.todo_all_todos = []
env.todo_all_todos.extend(other.todo_all_todos)
def visit_todo_node(self, node):
self.visit_admonition(node)
@ -172,4 +180,5 @@ def setup(app):
app.connect('doctree-read', process_todos)
app.connect('doctree-resolved', process_todo_nodes)
app.connect('env-purge-doc', purge_todos)
return sphinx.__version__
app.connect('env-merge-info', merge_info)
return {'version': sphinx.__version__, 'parallel_read_safe': True}

View File

@ -20,6 +20,7 @@ from sphinx.locale import _
from sphinx.pycode import ModuleAnalyzer
from sphinx.util import get_full_modname
from sphinx.util.nodes import make_refnode
from sphinx.util.console import blue
def _get_full_modname(app, modname, attribute):
@ -37,7 +38,7 @@ def _get_full_modname(app, modname, attribute):
# It should be displayed only in verbose mode.
app.verbose(traceback.format_exc().rstrip())
app.verbose('viewcode can\'t import %s, failed with error "%s"' %
(modname, e))
(modname, e))
return None
@ -100,6 +101,16 @@ def doctree_read(app, doctree):
signode += onlynode
def env_merge_info(app, env, docnames, other):
if not hasattr(other, '_viewcode_modules'):
return
# create a _viewcode_modules dict on the main environment
if not hasattr(env, '_viewcode_modules'):
env._viewcode_modules = {}
# now merge in the information from the subprocess
env._viewcode_modules.update(other._viewcode_modules)
def missing_reference(app, env, node, contnode):
# resolve our "viewcode" reference nodes -- they need special treatment
if node['reftype'] == 'viewcode':
@ -116,10 +127,12 @@ def collect_pages(app):
modnames = set(env._viewcode_modules)
app.builder.info(' (%d module code pages)' %
len(env._viewcode_modules), nonl=1)
# app.builder.info(' (%d module code pages)' %
# len(env._viewcode_modules), nonl=1)
for modname, entry in iteritems(env._viewcode_modules):
for modname, entry in app.status_iterator(
iteritems(env._viewcode_modules), 'highlighting module code... ',
blue, len(env._viewcode_modules), lambda x: x[0]):
if not entry:
continue
code, tags, used, refname = entry
@ -162,15 +175,14 @@ def collect_pages(app):
context = {
'parents': parents,
'title': modname,
'body': _('<h1>Source code for %s</h1>') % modname + \
'\n'.join(lines)
'body': (_('<h1>Source code for %s</h1>') % modname +
'\n'.join(lines)),
}
yield (pagename, context, 'page.html')
if not modnames:
return
app.builder.info(' _modules/index', nonl=True)
html = ['\n']
# the stack logic is needed for using nested lists for submodules
stack = ['']
@ -190,8 +202,8 @@ def collect_pages(app):
html.append('</ul>' * (len(stack) - 1))
context = {
'title': _('Overview: module code'),
'body': _('<h1>All modules for which code is available</h1>') + \
''.join(html),
'body': (_('<h1>All modules for which code is available</h1>') +
''.join(html)),
}
yield ('_modules/index', context, 'page.html')
@ -200,8 +212,9 @@ def collect_pages(app):
def setup(app):
app.add_config_value('viewcode_import', True, False)
app.connect('doctree-read', doctree_read)
app.connect('env-merge-info', env_merge_info)
app.connect('html-collect-pages', collect_pages)
app.connect('missing-reference', missing_reference)
#app.add_config_value('viewcode_include_modules', [], 'env')
#app.add_config_value('viewcode_exclude_modules', [], 'env')
return sphinx.__version__
# app.add_config_value('viewcode_include_modules', [], 'env')
# app.add_config_value('viewcode_exclude_modules', [], 'env')
return {'version': sphinx.__version__, 'parallel_read_safe': True}

View File

@ -24,46 +24,32 @@ from sphinx.util.pycompat import htmlescape
from sphinx.util.texescape import tex_hl_escape_map_new
from sphinx.ext import doctest
try:
import pygments
from pygments import highlight
from pygments.lexers import PythonLexer, PythonConsoleLexer, CLexer, \
TextLexer, RstLexer
from pygments.lexers import get_lexer_by_name, guess_lexer
from pygments.formatters import HtmlFormatter, LatexFormatter
from pygments.filters import ErrorToken
from pygments.styles import get_style_by_name
from pygments.util import ClassNotFound
from sphinx.pygments_styles import SphinxStyle, NoneStyle
except ImportError:
pygments = None
lexers = None
HtmlFormatter = LatexFormatter = None
else:
from pygments import highlight
from pygments.lexers import PythonLexer, PythonConsoleLexer, CLexer, \
TextLexer, RstLexer
from pygments.lexers import get_lexer_by_name, guess_lexer
from pygments.formatters import HtmlFormatter, LatexFormatter
from pygments.filters import ErrorToken
from pygments.styles import get_style_by_name
from pygments.util import ClassNotFound
from sphinx.pygments_styles import SphinxStyle, NoneStyle
lexers = dict(
none = TextLexer(),
python = PythonLexer(),
pycon = PythonConsoleLexer(),
pycon3 = PythonConsoleLexer(python3=True),
rest = RstLexer(),
c = CLexer(),
)
for _lexer in lexers.values():
_lexer.add_filter('raiseonerror')
lexers = dict(
none = TextLexer(),
python = PythonLexer(),
pycon = PythonConsoleLexer(),
pycon3 = PythonConsoleLexer(python3=True),
rest = RstLexer(),
c = CLexer(),
)
for _lexer in lexers.values():
_lexer.add_filter('raiseonerror')
escape_hl_chars = {ord(u'\\'): u'\\PYGZbs{}',
ord(u'{'): u'\\PYGZob{}',
ord(u'}'): u'\\PYGZcb{}'}
# used if Pygments is not available
_LATEX_STYLES = r'''
\newcommand\PYGZbs{\char`\\}
\newcommand\PYGZob{\char`\{}
\newcommand\PYGZcb{\char`\}}
'''
# used if Pygments is available
# use textcomp quote to get a true single quote
_LATEX_ADD_STYLES = r'''
@ -80,8 +66,6 @@ class PygmentsBridge(object):
def __init__(self, dest='html', stylename='sphinx',
trim_doctest_flags=False):
self.dest = dest
if not pygments:
return
if stylename is None or stylename == 'sphinx':
style = SphinxStyle
elif stylename == 'none':
@ -153,8 +137,6 @@ class PygmentsBridge(object):
def highlight_block(self, source, lang, warn=None, force=False, **kwargs):
if not isinstance(source, text_type):
source = source.decode()
if not pygments:
return self.unhighlighted(source)
# find out which lexer to use
if lang in ('py', 'python'):
@ -213,11 +195,6 @@ class PygmentsBridge(object):
return hlsource.translate(tex_hl_escape_map_new)
def get_stylesheet(self):
if not pygments:
if self.dest == 'latex':
return _LATEX_STYLES
# no HTML styles needed
return ''
formatter = self.get_formatter()
if self.dest == 'html':
return formatter.get_style_defs('.highlight')

View File

@ -10,13 +10,16 @@
"""
from __future__ import print_function
import sys, os, time, re
import re
import os
import sys
import time
from os import path
from io import open
TERM_ENCODING = getattr(sys.stdin, 'encoding', None)
#try to import readline, unix specific enhancement
# try to import readline, unix specific enhancement
try:
import readline
if readline.__doc__ and 'libedit' in readline.__doc__:
@ -33,7 +36,7 @@ from docutils.utils import column_width
from sphinx import __version__
from sphinx.util.osutil import make_filename
from sphinx.util.console import purple, bold, red, turquoise, \
nocolor, color_terminal
nocolor, color_terminal
from sphinx.util import texescape
# function to get input from terminal -- overridden by the test suite
@ -972,17 +975,20 @@ def mkdir_p(dir):
class ValidationError(Exception):
"""Raised for validation errors."""
def is_path(x):
x = path.expanduser(x)
if path.exists(x) and not path.isdir(x):
raise ValidationError("Please enter a valid path name.")
return x
def nonempty(x):
if not x:
raise ValidationError("Please enter some text.")
return x
def choice(*l):
def val(x):
if x not in l:
@ -990,17 +996,20 @@ def choice(*l):
return x
return val
def boolean(x):
if x.upper() not in ('Y', 'YES', 'N', 'NO'):
raise ValidationError("Please enter either 'y' or 'n'.")
return x.upper() in ('Y', 'YES')
def suffix(x):
if not (x[0:1] == '.' and len(x) > 1):
raise ValidationError("Please enter a file suffix, "
"e.g. '.rst' or '.txt'.")
return x
def ok(x):
return x
@ -1097,7 +1106,7 @@ Enter the root path for documentation.''')
do_prompt(d, 'path', 'Root path for the documentation', '.', is_path)
while path.isfile(path.join(d['path'], 'conf.py')) or \
path.isfile(path.join(d['path'], 'source', 'conf.py')):
path.isfile(path.join(d['path'], 'source', 'conf.py')):
print()
print(bold('Error: an existing conf.py has been found in the '
'selected root path.'))
@ -1169,7 +1178,7 @@ document is a custom template, you can also set this to another filename.''')
'index')
while path.isfile(path.join(d['path'], d['master']+d['suffix'])) or \
path.isfile(path.join(d['path'], 'source', d['master']+d['suffix'])):
path.isfile(path.join(d['path'], 'source', d['master']+d['suffix'])):
print()
print(bold('Error: the master file %s has already been found in the '
'selected root path.' % (d['master']+d['suffix'])))
@ -1256,10 +1265,10 @@ def generate(d, overwrite=True, silent=False):
d['extensions'] = extensions
d['copyright'] = time.strftime('%Y') + ', ' + d['author']
d['author_texescaped'] = text_type(d['author']).\
translate(texescape.tex_escape_map)
translate(texescape.tex_escape_map)
d['project_doc'] = d['project'] + ' Documentation'
d['project_doc_texescaped'] = text_type(d['project'] + ' Documentation').\
translate(texescape.tex_escape_map)
translate(texescape.tex_escape_map)
# escape backslashes and single quotes in strings that are put into
# a Python string literal

View File

@ -17,22 +17,23 @@ from docutils.parsers.rst import roles
from sphinx import addnodes
from sphinx.locale import _
from sphinx.errors import SphinxError
from sphinx.util import ws_re
from sphinx.util.nodes import split_explicit_title, process_index_entry, \
set_role_source_info
set_role_source_info
generic_docroles = {
'command' : addnodes.literal_strong,
'dfn' : nodes.emphasis,
'kbd' : nodes.literal,
'mailheader' : addnodes.literal_emphasis,
'makevar' : addnodes.literal_strong,
'manpage' : addnodes.literal_emphasis,
'mimetype' : addnodes.literal_emphasis,
'newsgroup' : addnodes.literal_emphasis,
'program' : addnodes.literal_strong, # XXX should be an x-ref
'regexp' : nodes.literal,
'command': addnodes.literal_strong,
'dfn': nodes.emphasis,
'kbd': nodes.literal,
'mailheader': addnodes.literal_emphasis,
'makevar': addnodes.literal_strong,
'manpage': addnodes.literal_emphasis,
'mimetype': addnodes.literal_emphasis,
'newsgroup': addnodes.literal_emphasis,
'program': addnodes.literal_strong, # XXX should be an x-ref
'regexp': nodes.literal,
}
for rolename, nodeclass in iteritems(generic_docroles):
@ -40,6 +41,7 @@ for rolename, nodeclass in iteritems(generic_docroles):
role = roles.CustomRole(rolename, generic, {'classes': [rolename]})
roles.register_local_role(rolename, role)
# -- generic cross-reference role ----------------------------------------------
class XRefRole(object):
@ -96,7 +98,11 @@ class XRefRole(object):
options={}, content=[]):
env = inliner.document.settings.env
if not typ:
typ = env.config.default_role
typ = env.temp_data.get('default_role')
if not typ:
typ = env.config.default_role
if not typ:
raise SphinxError('cannot determine default role!')
else:
typ = typ.lower()
if ':' not in typ:
@ -158,6 +164,15 @@ class XRefRole(object):
return [node], []
class AnyXRefRole(XRefRole):
def process_link(self, env, refnode, has_explicit_title, title, target):
result = XRefRole.process_link(self, env, refnode, has_explicit_title,
title, target)
# add all possible context info (i.e. std:program, py:module etc.)
refnode.attributes.update(env.ref_context)
return result
def indexmarkup_role(typ, rawtext, text, lineno, inliner,
options={}, content=[]):
"""Role for PEP/RFC references that generate an index entry."""
@ -221,6 +236,7 @@ def indexmarkup_role(typ, rawtext, text, lineno, inliner,
_amp_re = re.compile(r'(?<!&)&(?![&\s])')
def menusel_role(typ, rawtext, text, lineno, inliner, options={}, content=[]):
text = utils.unescape(text)
if typ == 'menuselection':
@ -246,8 +262,10 @@ def menusel_role(typ, rawtext, text, lineno, inliner, options={}, content=[]):
node['classes'].append(typ)
return [node], []
_litvar_re = re.compile('{([^}]+)}')
def emph_literal_role(typ, rawtext, text, lineno, inliner,
options={}, content=[]):
text = utils.unescape(text)
@ -266,6 +284,7 @@ def emph_literal_role(typ, rawtext, text, lineno, inliner,
_abbr_re = re.compile('\((.*)\)$', re.S)
def abbr_role(typ, rawtext, text, lineno, inliner, options={}, content=[]):
text = utils.unescape(text)
m = _abbr_re.search(text)
@ -311,6 +330,8 @@ specific_docroles = {
'download': XRefRole(nodeclass=addnodes.download_reference),
# links to documents
'doc': XRefRole(warn_dangling=True),
# links to anything
'any': AnyXRefRole(warn_dangling=True),
'pep': indexmarkup_role,
'rfc': indexmarkup_role,

View File

@ -522,3 +522,9 @@
\gdef\@chappos{}
}
\fi
% Define literal-block environment
\RequirePackage{float}
\floatstyle{plaintop}
\newfloat{literal-block}{htbp}{loc}[chapter]
\floatname{literal-block}{List}

View File

@ -30,6 +30,7 @@ from sphinx.errors import ThemeError
NODEFAULT = object()
THEMECONF = 'theme.conf'
class Theme(object):
"""
Represents the theme chosen in the configuration.
@ -94,7 +95,8 @@ class Theme(object):
self.themedir = tempfile.mkdtemp('sxt')
self.themedir_created = True
for name in tinfo.namelist():
if name.endswith('/'): continue
if name.endswith('/'):
continue
dirname = path.dirname(name)
if not path.isdir(path.join(self.themedir, dirname)):
os.makedirs(path.join(self.themedir, dirname))

View File

@ -34,6 +34,7 @@ default_substitutions = set([
'today',
])
class DefaultSubstitutions(Transform):
"""
Replace some substitutions if they aren't defined in the document.
@ -69,9 +70,9 @@ class MoveModuleTargets(Transform):
if not node['ids']:
continue
if ('ismod' in node and
node.parent.__class__ is nodes.section and
# index 0 is the section title node
node.parent.index(node) == 1):
node.parent.__class__ is nodes.section and
# index 0 is the section title node
node.parent.index(node) == 1):
node.parent['ids'][0:0] = node['ids']
node.parent.remove(node)
@ -86,10 +87,10 @@ class HandleCodeBlocks(Transform):
# move doctest blocks out of blockquotes
for node in self.document.traverse(nodes.block_quote):
if all(isinstance(child, nodes.doctest_block) for child
in node.children):
in node.children):
node.replace_self(node.children)
# combine successive doctest blocks
#for node in self.document.traverse(nodes.doctest_block):
# for node in self.document.traverse(nodes.doctest_block):
# if node not in node.parent.children:
# continue
# parindex = node.parent.index(node)
@ -173,7 +174,7 @@ class Locale(Transform):
parser = RSTParser()
#phase1: replace reference ids with translated names
# phase1: replace reference ids with translated names
for node, msg in extract_messages(self.document):
msgstr = catalog.gettext(msg)
# XXX add marker to untranslated parts
@ -198,7 +199,7 @@ class Locale(Transform):
pass
# XXX doctest and other block markup
if not isinstance(patch, nodes.paragraph):
continue # skip for now
continue # skip for now
processed = False # skip flag
@ -281,15 +282,14 @@ class Locale(Transform):
node.children = patch.children
node['translated'] = True
#phase2: translation
# phase2: translation
for node, msg in extract_messages(self.document):
if node.get('translated', False):
continue
msgstr = catalog.gettext(msg)
# XXX add marker to untranslated parts
if not msgstr or msgstr == msg: # as-of-yet untranslated
if not msgstr or msgstr == msg: # as-of-yet untranslated
continue
# Avoid "Literal block expected; none found." warnings.
@ -309,12 +309,13 @@ class Locale(Transform):
pass
# XXX doctest and other block markup
if not isinstance(patch, nodes.paragraph):
continue # skip for now
continue # skip for now
# auto-numbered foot note reference should use original 'ids'.
def is_autonumber_footnote_ref(node):
return isinstance(node, nodes.footnote_reference) and \
node.get('auto') == 1
def list_replace_or_append(lst, old, new):
if old in lst:
lst[lst.index(old)] = new
@ -339,7 +340,7 @@ class Locale(Transform):
for id in new['ids']:
self.document.ids[id] = new
list_replace_or_append(
self.document.autofootnote_refs, old, new)
self.document.autofootnote_refs, old, new)
if refname:
list_replace_or_append(
self.document.footnote_refs.setdefault(refname, []),
@ -404,6 +405,7 @@ class Locale(Transform):
if len(old_refs) != len(new_refs):
env.warn_node('inconsistent term references in '
'translated message', node)
def get_ref_key(node):
case = node["refdomain"], node["reftype"]
if case == ('std', 'term'):

View File

@ -29,15 +29,16 @@ from docutils.utils import relative_path
import jinja2
import sphinx
from sphinx.errors import PycodeError
from sphinx.errors import PycodeError, SphinxParallelError
from sphinx.util.console import strip_colors
from sphinx.util.osutil import fs_encoding
# import other utilities; partly for backwards compatibility, so don't
# prune unused ones indiscriminately
from sphinx.util.osutil import SEP, os_path, relative_uri, ensuredir, walk, \
mtimes_of_files, movefile, copyfile, copytimes, make_filename, ustrftime
mtimes_of_files, movefile, copyfile, copytimes, make_filename, ustrftime
from sphinx.util.nodes import nested_parse_with_titles, split_explicit_title, \
explicit_title_re, caption_ref_re
explicit_title_re, caption_ref_re
from sphinx.util.matching import patfilter
# Generally useful regular expressions.
@ -129,6 +130,11 @@ class FilenameUniqDict(dict):
del self[filename]
self._existing.discard(unique)
def merge_other(self, docnames, other):
for filename, (docs, unique) in other.items():
for doc in docs & docnames:
self.add_file(doc, filename)
def __getstate__(self):
return self._existing
@ -185,7 +191,11 @@ _DEBUG_HEADER = '''\
def save_traceback(app):
"""Save the current exception's traceback in a temporary file."""
import platform
exc = traceback.format_exc()
exc = sys.exc_info()[1]
if isinstance(exc, SphinxParallelError):
exc_format = '(Error in parallel process)\n' + exc.traceback
else:
exc_format = traceback.format_exc()
fd, path = tempfile.mkstemp('.log', 'sphinx-err-')
last_msgs = ''
if app is not None:
@ -200,11 +210,13 @@ def save_traceback(app):
last_msgs)).encode('utf-8'))
if app is not None:
for extname, extmod in iteritems(app._extensions):
modfile = getattr(extmod, '__file__', 'unknown')
if isinstance(modfile, bytes):
modfile = modfile.decode(fs_encoding, 'replace')
os.write(fd, ('# %s (%s) from %s\n' % (
extname, app._extension_versions[extname],
getattr(extmod, '__file__', 'unknown'))
).encode('utf-8'))
os.write(fd, exc.encode('utf-8'))
extname, app._extension_metadata[extname]['version'],
modfile)).encode('utf-8'))
os.write(fd, exc_format.encode('utf-8'))
os.close(fd)
return path
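
The traceback log written by ``save_traceback()`` now takes each extension's version from ``app._extension_metadata`` (replacing the old ``_extension_versions`` mapping) and, for a ``SphinxParallelError``, writes the traceback that was formatted inside the worker process. Below is a minimal sketch of where such a version comes from; the extension is entirely invented, and it is an assumption here that the value returned by ``setup()`` is what ends up in that metadata entry (the app-side handling is not part of this excerpt)::

    # my_fictional_ext.py -- hypothetical extension, used only to show
    # where the version next to the module path in sphinx-err-*.log
    # originates.  Nothing below is taken from this commit.
    def setup(app):
        app.add_config_value('my_fictional_option', False, 'env')

        def on_build_finished(app, exception):
            # Ordinary event handler, just so the extension does something.
            app.info('my_fictional_ext: build finished')

        app.connect('build-finished', on_build_finished)

        # Reported back to Sphinx and written into the error log by
        # save_traceback() above (assumed; see lead-in).
        return '0.1.0'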

View File

@ -194,3 +194,9 @@ def abspath(pathdir):
if isinstance(pathdir, bytes):
pathdir = pathdir.decode(fs_encoding)
return pathdir
def getcwd():
if hasattr(os, 'getcwdu'):
return os.getcwdu()
return os.getcwd()

sphinx/util/parallel.py (new file, 131 lines)
View File

@ -0,0 +1,131 @@
# -*- coding: utf-8 -*-
"""
sphinx.util.parallel
~~~~~~~~~~~~~~~~~~~~
Parallel building utilities.
:copyright: Copyright 2007-2014 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import os
import traceback
try:
import multiprocessing
import threading
except ImportError:
multiprocessing = threading = None
from six.moves import queue
from sphinx.errors import SphinxParallelError
# our parallel functionality only works for the forking Process
parallel_available = multiprocessing and (os.name == 'posix')
class SerialTasks(object):
"""Has the same interface as ParallelTasks, but executes tasks directly."""
def __init__(self, nproc=1):
pass
def add_task(self, task_func, arg=None, result_func=None):
if arg is not None:
res = task_func(arg)
else:
res = task_func()
if result_func:
result_func(res)
def join(self):
pass
class ParallelTasks(object):
"""Executes *nproc* tasks in parallel after forking."""
def __init__(self, nproc):
self.nproc = nproc
# list of threads to join when waiting for completion
self._taskid = 0
self._threads = {}
self._nthreads = 0
# queue of result objects to process
self.result_queue = queue.Queue()
self._nprocessed = 0
# maps tasks to result functions
self._result_funcs = {}
# allow only "nproc" worker processes at once
self._semaphore = threading.Semaphore(self.nproc)
def _process(self, pipe, func, arg):
try:
if arg is None:
ret = func()
else:
ret = func(arg)
pipe.send((False, ret))
except BaseException as err:
pipe.send((True, (err, traceback.format_exc())))
def _process_thread(self, tid, func, arg):
precv, psend = multiprocessing.Pipe(False)
proc = multiprocessing.Process(target=self._process,
args=(psend, func, arg))
proc.start()
result = precv.recv()
self.result_queue.put((tid, arg) + result)
proc.join()
self._semaphore.release()
def add_task(self, task_func, arg=None, result_func=None):
tid = self._taskid
self._taskid += 1
self._semaphore.acquire()
thread = threading.Thread(target=self._process_thread,
args=(tid, task_func, arg))
thread.setDaemon(True)
thread.start()
self._nthreads += 1
self._threads[tid] = thread
self._result_funcs[tid] = result_func or (lambda *x: None)
# try processing results already in parallel
try:
tid, arg, exc, result = self.result_queue.get(False)
except queue.Empty:
pass
else:
del self._threads[tid]
if exc:
raise SphinxParallelError(*result)
self._result_funcs.pop(tid)(arg, result)
self._nprocessed += 1
def join(self):
while self._nprocessed < self._nthreads:
tid, arg, exc, result = self.result_queue.get()
del self._threads[tid]
if exc:
raise SphinxParallelError(*result)
self._result_funcs.pop(tid)(arg, result)
self._nprocessed += 1
# there shouldn't be any threads left...
for t in self._threads.values():
t.join()
def make_chunks(arguments, nproc, maxbatch=10):
# determine how many documents to read in one go
nargs = len(arguments)
chunksize = min(nargs // nproc, maxbatch)
if chunksize == 0:
chunksize = 1
nchunks, rest = divmod(nargs, chunksize)
if rest:
nchunks += 1
# partition documents in "chunks" that will be written by one Process
return [arguments[i*chunksize:(i+1)*chunksize] for i in range(nchunks)]
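
Taken together, the module gives callers one calling pattern whether or not forking is available: ``make_chunks()`` batches the work items, ``ParallelTasks`` forks one process per batch (throttled by the semaphore) and feeds results back through a pipe-fed queue, and ``SerialTasks`` is the drop-in fallback. A minimal usage sketch outside of any builder; the "documents" and the word-counting task are invented, only the API comes from the module above::

    # Hypothetical driver for the classes above.
    from sphinx.util.parallel import (ParallelTasks, SerialTasks,
                                      make_chunks, parallel_available)

    DOCS = {
        'intro': 'spam spam spam',
        'usage': 'ham ham',
        'api': 'eggs',
        'changelog': 'spam and eggs',
    }

    def count_words(docnames):
        # Runs in a forked worker process under ParallelTasks.
        return sum(len(DOCS[name].split()) for name in docnames)

    def collect(*args):
        # ParallelTasks invokes the result function as (arg, result),
        # while SerialTasks (see above) passes only the result -- hence
        # the *args indirection.
        print('chunk finished, %d words' % args[-1])

    nproc = 2
    tasks = ParallelTasks(nproc) if parallel_available else SerialTasks()
    for chunk in make_chunks(list(DOCS), nproc):
        tasks.add_task(count_words, chunk, collect)
    tasks.join()   # re-raises any worker failure as SphinxParallelError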

View File

@ -123,6 +123,9 @@ class path(text_type):
"""
os.unlink(self)
def utime(self, arg):
os.utime(self, arg)
def write_text(self, text, **kwargs):
"""
Writes the given `text` to the file.
@ -195,6 +198,9 @@ class path(text_type):
"""
return self.__class__(os.path.join(self, *map(self.__class__, args)))
def listdir(self):
return os.listdir(self)
__div__ = __truediv__ = joinpath
def __repr__(self):

View File

@ -3,12 +3,9 @@
import sys, os
sys.path.append(os.path.abspath('.'))
sys.path.append(os.path.abspath('..'))
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.jsmath', 'sphinx.ext.todo',
'sphinx.ext.coverage', 'sphinx.ext.autosummary',
'sphinx.ext.doctest', 'sphinx.ext.extlinks',
'sphinx.ext.viewcode', 'ext']
'sphinx.ext.coverage', 'sphinx.ext.extlinks', 'ext']
jsmath_path = 'dummy.js'
@ -18,7 +15,7 @@ master_doc = 'contents'
source_suffix = '.txt'
project = 'Sphinx <Tests>'
copyright = '2010, Georg Brandl & Team'
copyright = '2010-2014, Georg Brandl & Team'
# If this is changed, remember to update the versionchanges!
version = '0.6'
release = '0.6alpha1'
@ -34,7 +31,8 @@ html_theme = 'testtheme'
html_theme_path = ['.']
html_theme_options = {'testopt': 'testoverride'}
html_sidebars = {'**': 'customsb.html',
'contents': ['contentssb.html', 'localtoc.html'] }
'contents': ['contentssb.html', 'localtoc.html',
'globaltoc.html']}
html_style = 'default.css'
html_static_path = ['_static', 'templated.css_t']
html_extra_path = ['robots.txt']
@ -44,15 +42,15 @@ html_context = {'hckey': 'hcval', 'hckey_co': 'wrong_hcval_co'}
htmlhelp_basename = 'SphinxTestsdoc'
latex_documents = [
('contents', 'SphinxTests.tex', 'Sphinx Tests Documentation',
'Georg Brandl \\and someone else', 'manual'),
('contents', 'SphinxTests.tex', 'Sphinx Tests Documentation',
'Georg Brandl \\and someone else', 'manual'),
]
latex_additional_files = ['svgimg.svg']
texinfo_documents = [
('contents', 'SphinxTests', 'Sphinx Tests',
'Georg Brandl \\and someone else', 'Sphinx Testing', 'Miscellaneous'),
('contents', 'SphinxTests', 'Sphinx Tests',
'Georg Brandl \\and someone else', 'Sphinx Testing', 'Miscellaneous'),
]
man_pages = [
@ -65,8 +63,6 @@ value_from_conf_py = 84
coverage_c_path = ['special/*.h']
coverage_c_regexes = {'function': r'^PyAPI_FUNC\(.*\)\s+([^_][\w_]+)'}
autosummary_generate = ['autosummary']
extlinks = {'issue': ('http://bugs.python.org/issue%s', 'issue '),
'pyurl': ('http://python.org/%s', None)}
@ -80,35 +76,13 @@ autodoc_mock_imports = [
# modify tags from conf.py
tags.add('confpytag')
# -- linkcode
if 'test_linkcode' in tags:
import glob
extensions.remove('sphinx.ext.viewcode')
extensions.append('sphinx.ext.linkcode')
exclude_patterns.extend(glob.glob('*.txt') + glob.glob('*/*.txt'))
exclude_patterns.remove('contents.txt')
exclude_patterns.remove('objects.txt')
def linkcode_resolve(domain, info):
if domain == 'py':
fn = info['module'].replace('.', '/')
return "http://foobar/source/%s.py" % fn
elif domain == "js":
return "http://foobar/js/" + info['fullname']
elif domain in ("c", "cpp"):
return "http://foobar/%s/%s" % (domain, "".join(info['names']))
else:
raise AssertionError()
# -- extension API
from docutils import nodes
from sphinx import addnodes
from sphinx.util.compat import Directive
def userdesc_parse(env, sig, signode):
x, y = sig.split(':')
signode += addnodes.desc_name(x, x)
@ -116,15 +90,19 @@ def userdesc_parse(env, sig, signode):
signode[-1] += addnodes.desc_parameter(y, y)
return x
def functional_directive(name, arguments, options, content, lineno,
content_offset, block_text, state, state_machine):
return [nodes.strong(text='from function: %s' % options['opt'])]
class ClassDirective(Directive):
option_spec = {'opt': lambda x: x}
def run(self):
return [nodes.strong(text='from class: %s' % self.options['opt'])]
def setup(app):
app.add_config_value('value_from_conf_py', 42, False)
app.add_directive('funcdir', functional_directive, opt=lambda x: x)

View File

@ -21,15 +21,14 @@ Contents:
bom
math
autodoc
autosummary
metadata
extensions
doctest
extensions
versioning/index
footnote
lists
http://sphinx-doc.org/
Latest reference <http://sphinx-doc.org/latest/>
Python <http://python.org/>
Indices and tables
@ -44,3 +43,13 @@ References
.. [Ref1] Reference target.
.. [Ref_1] Reference target 2.
Test for issue #1157
====================
This used to crash:
.. toctree::
.. toctree::
:hidden:

View File

@ -142,6 +142,7 @@ Adding \n to test unescaping.
* :ref:`here <some-label>`
* :ref:`my-figure`
* :ref:`my-table`
* :ref:`my-code-block`
* :doc:`subdir/includes`
* ``:download:`` is tested in includes.txt
* :option:`Python -c option <python -c>`
@ -228,8 +229,11 @@ Version markup
Code blocks
-----------
.. _my-code-block:
.. code-block:: ruby
:linenos:
:caption: my ruby code
def ruby?
false
@ -356,6 +360,25 @@ Only directive
Always present, because set through conf.py/command line.
Any role
--------
.. default-role:: any
Test referencing to `headings <with>` and `objects <func_without_body>`.
Also `modules <mod>` and `classes <Time>`.
More domains:
* `JS <bar.baz>`
* `C <SphinxType>`
* `myobj` (user markup)
* `n::Array`
* `perl -c`
.. default-role::
.. rubric:: Footnotes
.. [#] Like footnotes.

View File

@ -170,6 +170,10 @@ Others
.. cmdoption:: -c
.. option:: +p
Link to :option:`perl +p`.
User markup
===========

View File

@ -0,0 +1,3 @@
:orphan:
here: »

View File

@ -1,3 +1,7 @@
import sys, os
sys.path.insert(0, os.path.abspath('.'))
extensions = ['sphinx.ext.autosummary']
# The suffix of source filenames.

View File

@ -4,3 +4,4 @@
:toctree:
dummy_module
sphinx

View File

@ -0,0 +1,2 @@
master_doc = 'contents'
source_suffix = '.txt'

View File

@ -0,0 +1,8 @@
.. toctree::
maxwidth
lineblock
nonascii_title
nonascii_table
nonascii_maxwidth
table

View File

@ -0,0 +1,6 @@
* one
| line-block 1
| line-block 2
followed paragraph.

View File

@ -0,0 +1,6 @@
.. seealso:: ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham
* ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham
* ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham ham
spam egg

View File

@ -0,0 +1,5 @@
abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc abc
日本語 日本語 日本語 日本語 日本語 日本語 日本語 日本語 日本語 日本語 日本語 日本語 日本語 日本語 日本語 日本語 日本語 日本語 日本語 日本語 日本語 日本語 日本語
abc 日本語 abc 日本語 abc 日本語 abc 日本語 abc 日本語 abc 日本語 abc 日本語 abc 日本語 abc 日本語 abc 日本語 abc 日本語

View File

@ -0,0 +1,7 @@
.. list-table::
- - spam
- egg
- - 日本語
- 日本語

View File

@ -0,0 +1,2 @@
日本語
======

View File

@ -0,0 +1,7 @@
+-----+-----+
| XXX | XXX |
+-----+-----+
| | XXX |
+-----+-----+
| XXX | |
+-----+-----+

View File

View File

@ -0,0 +1,4 @@
.. toctree::
sub

View File

@ -0,0 +1,3 @@
.. toctree::
contents

View File

@ -1,22 +1,35 @@
Dedent
======
Code blocks
-----------
.. code-block:: ruby
:linenos:
:dedent: 4
def ruby?
false
end
Literal Include
---------------
.. literalinclude:: literal.inc
:language: python
:lines: 10-11
:dedent: 0
.. literalinclude:: literal.inc
:language: python
:lines: 10-11
:dedent: 1
.. literalinclude:: literal.inc
:language: python
:lines: 10-11
:dedent: 2
.. literalinclude:: literal.inc
:language: python
:lines: 10-11
:dedent: 3
.. literalinclude:: literal.inc
:language: python
:lines: 10-11
:dedent: 4
.. literalinclude:: literal.inc
:language: python
:lines: 10-11
:dedent: 1000

View File

@ -0,0 +1,53 @@
Dedent
======
Code blocks
-----------
.. code-block:: ruby
:linenos:
:dedent: 0
def ruby?
false
end
.. code-block:: ruby
:linenos:
:dedent: 1
def ruby?
false
end
.. code-block:: ruby
:linenos:
:dedent: 2
def ruby?
false
end
.. code-block:: ruby
:linenos:
:dedent: 3
def ruby?
false
end
.. code-block:: ruby
:linenos:
:dedent: 4
def ruby?
false
end
.. code-block:: ruby
:linenos:
:dedent: 1000
def ruby?
false
end

View File

@ -0,0 +1,5 @@
extensions = ['sphinx.ext.doctest']
project = 'test project for doctest'
master_doc = 'doctest.txt'
source_suffix = '.txt'

View File

@ -125,5 +125,5 @@ Special directives
.. testcleanup:: *
import test_doctest
test_doctest.cleanup_call()
import test_ext_doctest
test_ext_doctest.cleanup_call()

View File

@ -6,3 +6,19 @@ import os
sys.path.insert(0, os.path.abspath('.'))
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode']
master_doc = 'index'
if 'test_linkcode' in tags:
extensions.remove('sphinx.ext.viewcode')
extensions.append('sphinx.ext.linkcode')
def linkcode_resolve(domain, info):
if domain == 'py':
fn = info['module'].replace('.', '/')
return "http://foobar/source/%s.py" % fn
elif domain == "js":
return "http://foobar/js/" + info['fullname']
elif domain in ("c", "cpp"):
return "http://foobar/%s/%s" % (domain, "".join(info['names']))
else:
raise AssertionError()

View File

@ -27,3 +27,8 @@ viewcode
.. literalinclude:: spam/mod1.py
:language: python
:pyobject: func1
.. toctree::
objects

View File

@ -0,0 +1,169 @@
Testing object descriptions
===========================
.. function:: func_without_module(a, b, *c[, d])
Does something.
.. function:: func_without_body()
.. function:: func_noindex
:noindex:
.. function:: func_with_module
:module: foolib
Referring to :func:`func with no index <func_noindex>`.
Referring to :func:`nothing <>`.
.. module:: mod
:synopsis: Module synopsis.
:platform: UNIX
.. function:: func_in_module
.. class:: Cls
.. method:: meth1
.. staticmethod:: meths
.. attribute:: attr
.. explicit class given
.. method:: Cls.meth2
.. explicit module given
.. exception:: Error(arg1, arg2)
:module: errmod
.. data:: var
.. currentmodule:: None
.. function:: func_without_module2() -> annotation
.. object:: long(parameter, \
list)
another one
.. class:: TimeInt
Has only one parameter (triggers special behavior...)
:param moo: |test|
:type moo: |test|
.. |test| replace:: Moo
.. class:: Time(hour, minute, isdst)
:param year: The year.
:type year: TimeInt
:param TimeInt minute: The minute.
:param isdst: whether it's DST
:type isdst: * some complex
* expression
:returns: a new :class:`Time` instance
:rtype: :class:`Time`
:raises ValueError: if the values are out of range
:ivar int hour: like *hour*
:ivar minute: like *minute*
:vartype minute: int
:param hour: Some parameter
:type hour: DuplicateType
:param hour: Duplicate param. Should not lead to crashes.
:type hour: DuplicateType
:param .Cls extcls: A class from another module.
C items
=======
.. c:function:: Sphinx_DoSomething()
.. c:member:: SphinxStruct.member
.. c:macro:: SPHINX_USE_PYTHON
.. c:type:: SphinxType
.. c:var:: sphinx_global
Javascript items
================
.. js:function:: foo()
.. js:data:: bar
.. documenting the method of any object
.. js:function:: bar.baz(href, callback[, errback])
:param string href: The location of the resource.
:param callback: Gets called with the data returned by the resource.
:throws InvalidHref: If the `href` is invalid.
:returns: `undefined`
.. js:attribute:: bar.spam
References
==========
Referencing :class:`mod.Cls` or :Class:`mod.Cls` should be the same.
With target: :c:func:`Sphinx_DoSomething()` (parentheses are handled),
:c:member:`SphinxStruct.member`, :c:macro:`SPHINX_USE_PYTHON`,
:c:type:`SphinxType *` (pointer is handled), :c:data:`sphinx_global`.
Without target: :c:func:`CFunction`. :c:func:`!malloc`.
:js:func:`foo()`
:js:func:`foo`
:js:data:`bar`
:js:func:`bar.baz()`
:js:func:`bar.baz`
:js:func:`~bar.baz()`
:js:attr:`bar.baz`
Others
======
.. envvar:: HOME
.. program:: python
.. cmdoption:: -c command
.. program:: perl
.. cmdoption:: -c
.. option:: +p
Link to :option:`perl +p`.
User markup
===========
.. userdesc:: myobj:parameter
Description of userdesc.
Referencing :userdescrole:`myobj`.
CPP domain
==========
.. cpp:class:: n::Array<T,d>
.. cpp:function:: T& operator[]( unsigned j )
const T& operator[]( unsigned j ) const

View File

@ -0,0 +1,5 @@
.. toctree::
:numbered:
sub

View File

@ -0,0 +1,3 @@
.. toctree::
contents

View File

@ -4,10 +4,4 @@ Autosummary templating test
.. autosummary::
:toctree: generated
sphinx.application.Sphinx
.. currentmodule:: sphinx.application
.. autoclass:: TemplateBridge
.. automethod:: render
sphinx.application.TemplateBridge

View File

@ -0,0 +1,3 @@
project = 'versioning test root'
master_doc = 'index'
source_suffix = '.txt'

Some files were not shown because too many files have changed in this diff.