Merge remote-tracking branch 'upstream/master'

Conflicts:
	tests/test_build_html.py
Author: Jellby
2017-03-04 12:03:15 +01:00
332 changed files with 14894 additions and 13829 deletions

.github/ISSUE_TEMPLATE.md (new file)

@@ -0,0 +1,27 @@
Subject: <what happens when you do what, on which documentation project>
### Problem
- <Details of the problem>
#### Procedure to reproduce the problem
```
<Paste the command line that causes the problem>
```
#### Error logs / results
```
<Paste your error log here>
```
- <public link to the unexpected result, if you have one>
#### Expected results
<Describe what should happen instead>
### Reproducible project / your project
- <link to your project, or attach a small zipped project sample>
### Environment info
- OS: <Unix/Linux/Mac/Win/other with version>
- Python version:
- Sphinx version:
- <Extra tools, e.g. browser, TeX, or something else>

.github/PULL_REQUEST_TEMPLATE.md (new file)

@@ -0,0 +1,18 @@
Subject: <short purpose of this pull request>
### Feature or Bugfix
<!-- please choose -->
- Feature
- Bugfix
### Purpose
- <detailed purpose of this pull request>
- <environment this PR depends on, if any>
### Detail
- <feature1 or bug1>
- <feature2 or bug2>
### Relates
- <URL or Ticket>

.gitignore

@@ -4,6 +4,7 @@
*.swp
.dir-locals.el
.cache/
.mypy_cache/
.ropeproject/
TAGS

.travis.yml

@@ -1,22 +1,36 @@
language: python
sudo: false
dist: trusty
cache:
directories:
- $HOME/.cache/pip
python:
- "2.7"
- "3.4"
- "pypy-5.4.1"
- "3.6"
- "3.5"
- "3.4"
- "2.7"
- "nightly"
- "pypy"
env:
global:
- TEST='-v --with-timer --timer-top-n 25'
- TEST='-v --durations 25'
- PYTHONFAULTHANDLER=x
- PYTHONWARNINGS=all
matrix:
- DOCUTILS=0.11
- DOCUTILS=0.12
- DOCUTILS=0.13.1
matrix:
exclude:
- python: "3.4"
env: DOCUTILS=0.12
- python: "3.5"
env: DOCUTILS=0.12
- python: "3.6"
env: DOCUTILS=0.12
- python: nightly
env: DOCUTILS=0.12
- python: "pypy-5.4.1"
env: DOCUTILS=0.12
addons:
apt:
packages:
@@ -25,14 +39,15 @@ addons:
- texlive-latex-extra
- texlive-fonts-recommended
- texlive-fonts-extra
- texlive-luatex
- texlive-xetex
- lmodern
install:
- pip install -U pip setuptools
- pip install docutils==$DOCUTILS
- pip install -r test-reqs.txt
before_script:
- if [[ $TRAVIS_PYTHON_VERSION != '2.6' ]]; then flake8; fi
- if [[ $TRAVIS_PYTHON_VERSION == '3.6' ]]; then python3.6 -m pip install mypy typed-ast; fi
script:
- if [[ $TRAVIS_PYTHON_VERSION == '3.5' ]]; then make style-check test-async; fi
- if [[ $TRAVIS_PYTHON_VERSION != '3.5' ]]; then make test; fi
- flake8
- if [[ $TRAVIS_PYTHON_VERSION == '3.6' ]]; then make style-check type-check test-async; fi
- if [[ $TRAVIS_PYTHON_VERSION != '3.6' ]]; then make test; fi

AUTHORS

@@ -21,6 +21,7 @@ Other contributors, listed alphabetically, are:
* Henrique Bastos -- SVG support for graphviz extension
* Daniel Bültmann -- todo extension
* Jean-François Burnol -- LaTeX improvements
* Marco Buttu -- doctest extension (pyversion option)
* Etienne Desautels -- apidoc module
* Michael Droettboom -- inheritance_diagram extension
* Charles Duffy -- original graphviz extension
@@ -67,6 +68,7 @@ Other contributors, listed alphabetically, are:
* Michael Wilson -- Intersphinx HTTP basic auth support
* Joel Wurtz -- cellspanning support in LaTeX
* Hong Xu -- svg support in imgmath extension and various bug fixes
* Stephen Finucane -- setup command improvements and documentation
Many thanks for all contributions!

CHANGES

@@ -4,14 +4,269 @@ Release 1.6 (in development)
Incompatible changes
--------------------
* #1061, #2336, #3235: autosummary output no longer contains imported
members by default. Thanks to Luc Saffre.
* LaTeX ``\includegraphics`` command isn't overloaded: only ``\sphinxincludegraphics``
has the custom code to fit image to available width if oversized.
* Subclasses of ``sphinx.domains.Index`` should override the ``generate()``
method. The default implementation raises ``NotImplementedError``.
* LaTeX used to position long tables horizontally centered and short ones
flushed left (no text flow around the table). The position now defaults to
center in both cases, and it obeys the Docutils 0.13 ``:align:`` option
(refs #3415, #3377)
* The option directive now also allows all punctuation in the option name (refs: #3366)
* #3413: if :rst:dir:`literalinclude`'s ``:start-after:`` is used, make ``:lines:``
relative (refs #3412)
* ``literalinclude`` directive does not allow the combination of ``:diff:``
option and other options (refs: #3416)
* Default config for LuaLaTeX engine uses ``fontspec`` hence needs TeXLive 2013
or more recent TeX installation for compatibility. This also means that the
fonts used by default have changed to the defaults as chosen by ``fontspec``.
(refs #3070, #3466)
* :confval:`latex_keep_old_macro_names` default value has been changed from
``True`` to ``False``. This means that some LaTeX macros for styling are
by default defined only with ``\sphinx..`` prefixed names. (refs: #3429)
* The "Continued on next page" footer of LaTeX longtables is no longer framed (refs: #3497)
Features removed
----------------
* Configuration variables
- epub3_contributor
- epub3_description
- epub3_page_progression_direction
- html_translator_class
- html_use_modindex
- latex_font_size
- latex_paper_size
- latex_preamble
- latex_use_modindex
- latex_use_parts
* ``termsep`` node
* defindex.html template
* LDML format support in `today`, `today_fmt` and `html_last_updated_fmt`
* ``:inline:`` option for the directives of sphinx.ext.graphviz extension
* sphinx.ext.pngmath extension
* ``sphinx.util.compat.make_admonition()``
Features added
--------------
* #3136: Add ``:name:`` option to the directives in ``sphinx.ext.graphviz``
* #2336: Add ``imported_members`` option to ``sphinx-autogen`` command to document
imported members.
* C++, add ``:tparam-line-spec:`` option to templated declarations.
When specified, each template parameter will be rendered on a separate line.
* #3359: Allow sphinx.js in a user locale dir to override sphinx.js from Sphinx
* #3303: Add ``:pyversion:`` option to the doctest directive.
* #3378: (latex) support for ``:widths:`` option of table directives
(refs: #3379, #3381)
* #3402: Allow to suppress "download file not readable" warnings using
:confval:`suppress_warnings`.
* #3377: latex: Add support for Docutils 0.13 ``:align:`` option for tables
(but does not implement text flow around table).
* latex: footnotes from inside tables are hyperlinked (except from captions or
headers) (refs: #3422)
* Emit a warning if excessive dedent is detected in the ``literalinclude``
directive (refs: #3416)
* Use the same default settings for LuaLaTeX as for XeLaTeX (i.e. ``fontspec``
and ``polyglossia``). (refs: #3070, #3466)
* Make ``'extraclassoptions'`` key of ``latex_elements`` public (refs #3480)
* #3463: Add warning messages for required EPUB3 metadata. Add default value to
``epub_description`` to avoid warning like other settings.
* #3476: setuptools: Support multiple builders
* latex: merged cells in LaTeX tables allow code-blocks, lists, blockquotes...
as do normal cells (refs: #3435)
* The HTML builder uses the experimental HTML5 writer if
``html_experimental_html5_builder`` is True and docutils 0.13 or newer is
installed.
Bugs fixed
----------
* ``literalinclude`` directive expands tabs after dedent-ing (refs: #3416)
* #1574: Paragraphs in table cells don't work in LaTeX output
* #3288: Table with merged headers not wrapping text
Deprecated
----------
* The ``sphinx.util.compat.Directive`` class is now deprecated. Please use
``docutils.parsers.rst.Directive`` instead
* ``sphinx.util.compat.docutils_version`` is now deprecated
* #2367: ``Sphinx.warn()``, ``Sphinx.info()`` and other logging methods are now
deprecated. Please use ``sphinx.util.logging`` (:ref:`logging-api`) instead.
* #3318: ``notice`` is now deprecated as LaTeX environment name and will be
removed at Sphinx 1.7. Extension authors please use ``sphinxadmonition``
instead (as Sphinx does since 1.5.)
* ``Sphinx.status_iterator()`` and ``Sphinx.old_status_iterator()`` are now
deprecated. Please use ``sphinx.util.status_iterator()`` instead.
* ``BuildEnvironment.set_warnfunc()`` is now deprecated
* The following methods of ``BuildEnvironment`` are now deprecated:
- ``BuildEnvironment.note_toctree()``
- ``BuildEnvironment.get_toc_for()``
- ``BuildEnvironment.get_toctree_for()``
- ``BuildEnvironment.create_index()``
Please use ``sphinx.environment.adapters`` modules instead.
* The LaTeX package ``footnote`` is no longer loaded; its bundled replacement
``footnotehyper-sphinx`` is used instead. The redefined macros keep the same
names as in the original package.
* #3429: deprecate config setting ``latex_keep_old_macro_names``. It will be
removed at 1.7, and already its default value has changed from ``True`` to
``False``.
Release 1.5.4 (in development)
==============================
Incompatible changes
--------------------
Deprecated
----------
Features added
--------------
* #3470: Make genindex support all kinds of letters, not only Latin ones
Bugs fixed
----------
* #3445: setting ``'inputenc'`` key to ``\\usepackage[utf8x]{inputenc}`` leads
to failed PDF build
* The EPUB file has a duplicated ``nav.xhtml`` link in ``content.opf``,
except on the first build
* #3488: ``objects.inv`` is broken when ``release`` or ``version`` contains a
line break
* #2073, #3443, #3490: The gettext builder now skips rewriting pot files when
the content is unchanged apart from the creation date. Thanks to Yoshiki
Shibukawa.
* #3487: intersphinx: failed to resolve references to options
* #3496: latex longtable's last column may be much wider than its contents
Testing
--------
Release 1.5.3 (released Feb 26, 2017)
=====================================
Features added
--------------
* Support requests-2.0.0 (experimental) (refs: #3367)
* (latex) PDF page margin dimensions may be customized (refs: #3387)
* ``literalinclude`` directive allows combination of ``:pyobject:`` and
``:lines:`` options (refs: #3416)
* #3400: make-mode no longer uses a subprocess to build docs
Bugs fixed
----------
* #3370: the caption of code-block is not picked up for translation
* LaTeX: :confval:`release` is not escaped (refs: #3362)
* #3364: sphinx-quickstart prompts overflow on Console with 80 chars width
* since 1.5, PDF's TOC and bookmarks lack an entry for general Index
(refs: #3383)
* #3392: ``'releasename'`` in :confval:`latex_elements` is not working
* #3356: Page layout for Japanese ``'manual'`` docclass has a shorter text area
* #3394: When ``'pointsize'`` is not ``10pt``, Japanese ``'manual'`` document
gets wrong PDF page dimensions
* #3399: quickstart: conf.py was not overwritten by template
* #3366: The option directive did not allow punctuation
* #3410: A line break in :confval:`release` breaks html search
* #3427: autodoc: memory addresses are not stripped on Windows
* #3428: xetex build tests fail due to fontspec v2.6 defining ``\strong``
* #3349: Result of ``IndexBuilder.load()`` is broken
* #3450: ``&nbsp`` appears in EPUB docs
* #3418: Search button is misaligned in the nature and pyramid themes
* #3421: Table captions could not be translated
Release 1.5.2 (released Jan 22, 2017)
=====================================
Incompatible changes
--------------------
* Dependency requirement updates: requests 2.4.0 or above (refs: #3268, #3310)
Features added
--------------
* #3241: emit latex warning if buggy titlesec (ref #3210)
* #3194: Refer the $MAKE environment variable to determine ``make`` command
* Emit warning for nested numbered toctrees (refs: #3142)
* #978: `intersphinx_mapping` also allows a list as a parameter
* #3340: (LaTeX) long lines in :dudir:`parsed-literal` are wrapped like in
:rst:dir:`code-block`, inline math and footnotes are fully functional.
Bugs fixed
----------
* #3246: xapian search adapter crashes
* #3253: In Py2 environment, building another locale with a non-captioned
toctree produces ``None`` captions
* #185: References to section titles that include a raw node are broken
* #3255: In a Py3.4 environment, autodoc doesn't correctly document
attributes of Enum classes
* #3261: ``latex_use_parts`` makes sphinx crash
* The warning type ``misc.highlighting_failure`` does not work
* #3294: ``add_latex_package()`` makes non-LaTeX builders crash
* Table captions are rendered as invalid HTML (refs: #3287)
* #3268: Sphinx crashes with requests package from Debian jessie
* #3284: Sphinx crashes on parallel build with an extension which raises
unserializable exception
* #3315: Bibliography crashes on latex build with docclass 'memoir'
* #3328: Rubrics could not be referenced implicitly
* #3329: Emit warnings if a po file is invalid and can't be read; also when
writing mo files
* #3337: Ugly rendering of definition list term's classifier
* #3335: gettext does not extract field_name of a field in a field_list
* #2952: C++, fix refs to operator() functions.
* Fix Unicode super- and subscript digits in :rst:dir:`code-block` and
parsed-literal LaTeX output (ref #3342)
* LaTeX writer: leave ``"`` character inside parsed-literal as is (ref #3341)
* #3234: intersphinx failed for encoded inventories
* #3158: too much space after captions in PDF output
* #3317: A URL in parsed-literal contents is wrongly rendered in PDF if it
contains a hyphen
* LaTeX crashes if the filename of an image inserted in parsed-literal
via a substitution contains a hyphen (ref #3340)
* LaTeX rendering of inserted footnotes in parsed-literal is wrong (ref #3340)
* Inline math in parsed-literal is not rendered well by LaTeX (ref #3340)
* #3308: Parsed-literals don't wrap very long lines with pdf builder (ref #3340)
* #3295: Could not import extension sphinx.builders.linkcheck
* #3285: autosummary: asterisks are escaped twice
* LaTeX, pass dvipdfm option to geometry package for Japanese documents (ref #3363)
* Fix: ``parselinenos()`` could not parse a left half-open range (e.g. ``"-4"``)
Release 1.5.1 (released Dec 13, 2016)
=====================================
Features added
--------------
* #3214: Allow to suppress "unknown mimetype" warnings from epub builder using
:confval:`suppress_warnings`.
Bugs fixed
----------
* #3195: Can not build in parallel
* #3198: AttributeError is raised when toctree has 'self'
* #3211: Remove untranslated sphinx locale catalogs (it was covered by
untranslated it_IT)
* #3212: HTML Builders crashes with docutils-0.13
* #3207: more latex problems with references inside parsed-literal directive
(``\DUrole``)
* #3205: sphinx.util.requests crashes with old pyOpenSSL (< 0.14)
* #3220: KeyError when having a duplicate citation
* #3200: LaTeX: xref inside desc_name not allowed
* #3228: ``build_sphinx`` command crashes when missing dependency
* #2469: Ignore updates of catalog files for gettext builder. Thanks to
Hiroshi Ohkubo.
* #3183: Randomized jump box order in generated index page.
Release 1.5 (released Dec 5, 2016)
==================================
@@ -25,10 +280,10 @@ Incompatible changes
* latex, package ifthen is not any longer a dependency of sphinx.sty
* latex, style file does not modify fancyvrb's Verbatim (also available as
OriginalVerbatim) but uses sphinxVerbatim for name of custom wrapper.
* latex, package newfloat is no longer a dependency of sphinx.sty (ref #2660;
it was shipped with Sphinx since 1.3.4).
* latex, package newfloat is not used (and not included) anymore (ref #2660;
it was used since 1.3.4 and shipped with Sphinx since 1.4).
* latex, literal blocks in tables do not use OriginalVerbatim but
sphinxVerbatimintable which handles captions and wraps lines(ref #2704).
sphinxVerbatimintable which handles captions and wraps lines (ref #2704).
* latex, replace ``pt`` by TeX equivalent ``bp`` if found in ``width`` or
``height`` attribute of an image.
* latex, if ``width`` or ``height`` attribute of an image is given with no unit,
@@ -46,7 +301,7 @@ Incompatible changes
* QtHelpBuilder doesn't generate search page (ref: #2352)
* QtHelpBuilder uses ``nonav`` theme instead of default one
to improve readability.
* latex: To provide good default settings to Japanese docs, Sphinx uses ``jsbooks``
* latex: To provide good default settings to Japanese docs, Sphinx uses ``jsbook``
as a docclass by default if the ``language`` is ``ja``.
* latex: To provide good default settings to Japanese docs, Sphinx uses
``jreport`` and ``jsbooks`` as a docclass by default if the ``language`` is
@@ -60,7 +315,7 @@ Incompatible changes
* Fix ``genindex.html``, Sphinx's document template, link address to itself to satisfy xhtml standard.
* Use epub3 builder by default. And the old epub builder is renamed to epub2.
* Fix ``epub`` and ``epub3`` builders that contained links to ``genindex`` even if ``epub_use_index = False``.
* `html_translator_class` is now deprecated.
* ``html_translator_class`` is now deprecated.
Use `Sphinx.set_translator()` API instead.
* Drop python 2.6 and 3.3 support
* Drop epub3 builder's ``epub3_page_progression_direction`` option (use ``epub3_writing_mode``).
@@ -80,6 +335,8 @@ Incompatible changes
The non-modified package is used.
* #3057: By default, footnote marks in latex PDF output are not preceded by a
space anymore, ``\sphinxBeforeFootnote`` allows user customization if needed.
* LaTeX target requires that option ``hyperfootnotes`` of package ``hyperref``
be left unchanged to its default (i.e. ``true``) (refs: #3022)
1.5 final
@@ -229,7 +486,7 @@ Bugs fixed
* `sphinx.ext.autodoc` crashes if target code imports * from mock modules
by `autodoc_mock_imports`.
* #1953: ``Sphinx.add_node`` does not add handlers the translator installed by
`html_translator_class`
``html_translator_class``
* #1797: text builder inserts blank line on top
* #2894: quickstart main() doesn't use argv argument
* #2874: gettext builder could not extract all text under the ``only``
@@ -636,7 +893,8 @@ Incompatible changes
``"MMMM dd, YYYY"`` is default format for `today_fmt` and `html_last_updated_fmt`.
However strftime format like ``"%B %d, %Y"`` is also supported for backward
compatibility until Sphinx-1.5. Later format will be disabled from Sphinx-1.5.
* #2327: `latex_use_parts` is deprecated now. Use `latex_toplevel_sectioning` instead.
* #2327: ``latex_use_parts`` is deprecated now. Use `latex_toplevel_sectioning`
instead.
* #2337: Use ``\url{URL}`` macro instead of ``\href{URL}{URL}`` in LaTeX writer.
* #1498: manpage writer: don't make whole of item in definition list bold if it includes strong node.
* #582: Remove hint message from quick search box for html output.
@@ -1199,7 +1457,7 @@ Features added
for the ids defined on the node. Thanks to Olivier Heurtier.
* PR#229: Allow registration of other translators. Thanks to Russell Sim.
* Add app.set_translator() API to register or override a Docutils translator
class like `html_translator_class`.
class like ``html_translator_class``.
* PR#267, #1134: add 'diff' parameter to literalinclude. Thanks to Richard Wall
and WAKAYAMA shirou.
* PR#272: Added 'bizstyle' theme. Thanks to Shoji KUMAGAI.

CONTRIBUTING.rst (new file)

@@ -0,0 +1,334 @@
Sphinx Developer's Guide
========================
.. topic:: Abstract
This document describes the development process of Sphinx, a documentation
system used by developers to document systems used by other developers to
develop other systems that may also be documented using Sphinx.
.. contents::
:local:
The Sphinx source code is managed using Git and is hosted on Github.
git clone git://github.com/sphinx-doc/sphinx
.. rubric:: Community
sphinx-users <sphinx-users@googlegroups.com>
Mailing list for user support.
sphinx-dev <sphinx-dev@googlegroups.com>
Mailing list for development related discussions.
#sphinx-doc on irc.freenode.net
IRC channel for development questions and user support.
Bug Reports and Feature Requests
--------------------------------
If you have encountered a problem with Sphinx or have an idea for a new
feature, please submit it to the `issue tracker`_ on Github or discuss it
on the sphinx-dev mailing list.
For bug reports, please include the output produced during the build process
and also the log file Sphinx creates after it encounters an un-handled
exception. The location of this file should be shown towards the end of the
error message.
Including or providing a link to the source files involved may help us fix the
issue. If possible, try to create a minimal project that produces the error
and post that instead.
.. _`issue tracker`: https://github.com/sphinx-doc/sphinx/issues
Contributing to Sphinx
----------------------
The recommended way for new contributors to submit code to Sphinx is to fork
the repository on Github and then submit a pull request after
committing the changes. The pull request will then need to be approved by one
of the core developers before it is merged into the main repository.
#. Check for open issues or open a fresh issue to start a discussion around a
feature idea or a bug.
#. If you feel uncomfortable or uncertain about an issue or your changes, feel
free to email sphinx-dev@googlegroups.com.
#. Fork `the repository`_ on Github to start making your changes to the
**master** branch for next major version, or **stable** branch for next
minor version.
#. Write a test which shows that the bug was fixed or that the feature works
as expected.
#. Send a pull request and bug the maintainer until it gets merged and
published. Make sure to add yourself to AUTHORS_ and the change to
CHANGES_.
.. _`the repository`: https://github.com/sphinx-doc/sphinx
.. _AUTHORS: https://github.com/sphinx-doc/sphinx/blob/master/AUTHORS
.. _CHANGES: https://github.com/sphinx-doc/sphinx/blob/master/CHANGES
Getting Started
~~~~~~~~~~~~~~~
These are the basic steps needed to start developing on Sphinx.
#. Create an account on Github.
#. Fork the main Sphinx repository (`sphinx-doc/sphinx
<https://github.com/sphinx-doc/sphinx>`_) using the Github interface.
#. Clone the forked repository to your machine. ::
git clone https://github.com/USERNAME/sphinx
cd sphinx
#. Checkout the appropriate branch.
For changes that should be included in the next minor release (namely bug
fixes), use the ``stable`` branch. ::
git checkout stable
For new features or other substantial changes that should wait until the
next major release, use the ``master`` branch.
#. Optional: setup a virtual environment. ::
virtualenv ~/sphinxenv
. ~/sphinxenv/bin/activate
pip install -e .
#. Create a new working branch. Choose any name you like. ::
git checkout -b feature-xyz
#. Hack, hack, hack.
For tips on working with the code, see the `Coding Guide`_.
#. Test, test, test. Possible steps:
* Run the unit tests::
pip install -r test-reqs.txt
make test
* Again, it's useful to turn deprecation warnings on so they're shown in
the test output::
PYTHONWARNINGS=all make test
* Build the documentation and check the output for different builders::
cd doc
make clean html latexpdf
* Run the unit tests under different Python environments using
:program:`tox`::
pip install tox
tox -v
* Add a new unit test in the ``tests`` directory if you can.
* For bug fixes, first add a test that fails without your changes and passes
after they are applied.
* Tests that need a sphinx-build run should be integrated in one of the
existing test modules if possible. New tests that use ``@with_app`` and
then ``build_all`` for a few assertions are not good since *the test suite
should not take more than a minute to run*.
#. Please add a bullet point to :file:`CHANGES` if the fix or feature is not
trivial (small doc updates, typo fixes). Then commit::
git commit -m '#42: Add useful new feature that does this.'
Github recognizes certain phrases that can be used to automatically
update the issue tracker.
For example::
git commit -m 'Closes #42: Fix invalid markup in docstring of Foo.bar.'
would close issue #42.
#. Push changes in the branch to your forked repository on Github. ::
git push origin feature-xyz
#. Submit a pull request from your branch to the respective branch (``master``
or ``stable``) on ``sphinx-doc/sphinx`` using the Github interface.
#. Wait for a core developer to review your changes.
Core Developers
~~~~~~~~~~~~~~~
The core developers of Sphinx have write access to the main repository. They
can commit changes, accept/reject pull requests, and manage items on the issue
tracker.
You do not need to be a core developer or have write access to be involved in
the development of Sphinx. You can submit patches or create pull requests
from forked repositories and have a core developer add the changes for you.
The following are some general guidelines for core developers:
* Questionable or extensive changes should be submitted as a pull request
instead of being committed directly to the main repository. The pull
request should be reviewed by another core developer before it is merged.
* Trivial changes can be committed directly but be sure to keep the repository
in a good working state and that all tests pass before pushing your changes.
* When committing code written by someone else, please attribute the original
author in the commit message and any relevant :file:`CHANGES` entry.
Locale updates
~~~~~~~~~~~~~~
The parts of messages in Sphinx that go into builds are translated into several
locales. The translations are kept as gettext ``.po`` files translated from the
master template ``sphinx/locale/sphinx.pot``.
Sphinx uses `Babel <http://babel.edgewall.org>`_ to extract messages and
maintain the catalog files. It is integrated in ``setup.py``:
* Use ``python setup.py extract_messages`` to update the ``.pot`` template.
* Use ``python setup.py update_catalog`` to update all existing language
catalogs in ``sphinx/locale/*/LC_MESSAGES`` with the current messages in the
template file.
* Use ``python setup.py compile_catalog`` to compile the ``.po`` files to binary
``.mo`` files and ``.js`` files.
When an updated ``.po`` file is submitted, run compile_catalog to commit both
the source and the compiled catalogs.
When a new locale is submitted, add a new directory with the ISO 639-1 language
identifier and put ``sphinx.po`` in there. Don't forget to update the possible
values for :confval:`language` in ``doc/config.rst``.
The Sphinx core messages can also be translated on `Transifex
<https://www.transifex.com/>`_. There exists a client tool named ``tx`` in the
Python package "transifex_client", which can be used to pull translations in
``.po`` format from Transifex. To do this, go to ``sphinx/locale`` and then run
``tx pull -f -l LANG`` where LANG is an existing language identifier. It is
good practice to run ``python setup.py update_catalog`` afterwards to make sure
the ``.po`` file has the canonical Babel formatting.
Coding Guide
------------
* Try to use the same code style as used in the rest of the project. See the
`Pocoo Styleguide`__ for more information.
__ http://flask.pocoo.org/docs/styleguide/
* For non-trivial changes, please update the :file:`CHANGES` file. If your
changes alter existing behavior, please document this.
* New features should be documented. Include examples and use cases where
appropriate. If possible, include a sample that is displayed in the
generated output.
* When adding a new configuration variable, be sure to document it and update
:file:`sphinx/quickstart.py` if it's important enough.
* Use the included :program:`utils/check_sources.py` script to check for
common formatting issues (trailing whitespace, lengthy lines, etc).
* Add appropriate unit tests.
Debugging Tips
~~~~~~~~~~~~~~
* Delete the build cache before building documents if you make changes in the
code by running the command ``make clean`` or using the
:option:`sphinx-build -E` option.
* Use the :option:`sphinx-build -P` option to run Pdb on exceptions.
* Use ``node.pformat()`` and ``node.asdom().toxml()`` to generate a printable
representation of the document structure.
* Set the configuration variable :confval:`keep_warnings` to ``True`` so
warnings will be displayed in the generated output.
* Set the configuration variable :confval:`nitpicky` to ``True`` so that Sphinx
will complain about references without a known target.
* Set the debugging options in the `Docutils configuration file
<http://docutils.sourceforge.net/docs/user/config.html>`_.
* JavaScript stemming algorithms in `sphinx/search/*.py` (except `en.py`) are
generated by this
`modified snowball code generator <https://github.com/shibukawa/snowball>`_.
Generated `JSX <http://jsx.github.io/>`_ files are
in `this repository <https://github.com/shibukawa/snowball-stemmer.jsx>`_.
You can get the resulting JavaScript files using the following command:
.. code-block:: bash
$ npm install
$ node_modules/.bin/grunt build # -> dest/*.global.js
Deprecating a feature
---------------------
There are a couple of reasons that code in Sphinx might be deprecated:
* If a feature has been improved or modified in a backwards-incompatible way,
the old feature or behavior will be deprecated.
* Sometimes Sphinx will include a backport of a Python library that's not
included in a version of Python that Sphinx currently supports. When Sphinx
no longer needs to support the older version of Python that doesn't include
the library, the library will be deprecated in Sphinx.
As the :ref:`deprecation-policy` describes,
the first release of Sphinx that deprecates a feature (``A.B``) should raise a
``RemovedInSphinxXXWarning`` (where XX is the Sphinx version where the feature
will be removed) when the deprecated feature is invoked. Assuming we have good
test coverage, these warnings are converted to errors when running the test
suite with warnings enabled: ``python -Wall tests/run.py``. Thus, when adding
a ``RemovedInSphinxXXWarning`` you need to eliminate or silence any warnings
generated when running the tests.
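The deprecation mechanism described above can be sketched as follows. This is a minimal illustration only: the warning class name mimics Sphinx's ``RemovedInSphinxXXWarning`` naming scheme, and the two helper functions are hypothetical, not part of Sphinx's API.

.. code-block:: python

   import warnings

   # Hypothetical stand-in for Sphinx's RemovedInSphinxXXWarning classes.
   class RemovedInSphinx16Warning(DeprecationWarning):
       """Feature will be removed in Sphinx 1.6."""

   def new_helper(text):
       # The replacement API.
       return text.upper()

   def old_helper(text):
       # Deprecated wrapper kept as a backwards-compatible replica; it warns
       # and delegates to the replacement.
       warnings.warn(
           "old_helper() is deprecated and will be removed in Sphinx 1.6; "
           "use new_helper() instead",
           RemovedInSphinx16Warning,
           stacklevel=2,
       )
       return new_helper(text)

Running the test suite with warnings enabled (``python -Wall tests/run.py``, or ``warnings.simplefilter("error")`` in test code) turns such a warning into an error, which is how lingering uses of the deprecated call are caught.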
.. _deprecation-policy:
Deprecation policy
------------------
A feature release may deprecate certain features from previous releases. If a
feature is deprecated in feature release 1.A, it will continue to work in all
1.A.x versions (for all versions of x) but raise warnings. Deprecated features
will be removed in the first 1.B release, or 1.B.1 for features deprecated in
the last 1.A.x feature release to ensure deprecations are done over at least 2
feature releases.
So, for example, if we decided to start the deprecation of a function in
Sphinx 1.4:
* Sphinx 1.4.x will contain a backwards-compatible replica of the function
which will raise a ``RemovedInSphinx16Warning``.
* Sphinx 1.5 (the version that follows 1.4) will still contain the
backwards-compatible replica.
* Sphinx 1.6 will remove the feature outright.
The warnings are displayed by default. You can turn off display of these
warnings with:
* ``PYTHONWARNINGS= make html`` (Linux/Mac)
* ``export PYTHONWARNINGS=`` and do ``make html`` (Linux/Mac)
* ``set PYTHONWARNINGS=`` and do ``make html`` (Windows)

EXAMPLES

@@ -12,58 +12,43 @@ interesting examples.
Documentation using the alabaster theme
---------------------------------------
* CodePy: https://documen.tician.de/codepy/
* MeshPy: https://documen.tician.de/meshpy/
* PyCuda: https://documen.tician.de/pycuda/
* PyLangAcq: http://pylangacq.org/
Documentation using the classic theme
-------------------------------------
* APSW: http://apidoc.apsw.googlecode.com/hg/index.html
* ASE: https://wiki.fysik.dtu.dk/ase/
* APSW: https://rogerbinns.github.io/apsw/
* Calibre: http://manual.calibre-ebook.com/
* CodePy: https://documen.tician.de/codepy/
* Cython: http://docs.cython.org/
* Cormoran: http://cormoran.nhopkg.org/docs/
* Director: http://pythonhosted.org/director/
* Dirigible: http://www.projectdirigible.com/
* F2py: http://f2py.sourceforge.net/docs/
* GeoDjango: https://docs.djangoproject.com/en/dev/ref/contrib/gis/
* Genomedata:
http://noble.gs.washington.edu/proj/genomedata/doc/1.2.2/genomedata.html
* gevent: http://www.gevent.org/
* Google Wave API:
http://wave-robot-python-client.googlecode.com/svn/trunk/pydocs/index.html
* GSL Shell: http://www.nongnu.org/gsl-shell/
* Heapkeeper: http://heapkeeper.org/
* Hands-on Python Tutorial:
http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/
* Hedge: https://documen.tician.de/hedge/
* Leo: http://leoeditor.com/
* Lino: http://www.lino-framework.org/
* MeshPy: https://documen.tician.de/meshpy/
* mpmath: http://mpmath.googlecode.com/svn/trunk/doc/build/index.html
* mpmath: http://mpmath.org/doc/current/
* OpenEXR: http://excamera.com/articles/26/doc/index.html
* OpenGDA: http://www.opengda.org/gdadoc/html/
* openWNS: http://docs.openwns.org/
* Paste: http://pythonpaste.org/script/
* Paver: http://paver.github.io/paver/
* Pioneers and Prominent Men of Utah: http://pioneers.rstebbing.com/
* PyCantonese: http://pycantonese.org/
* Pyccuracy: https://github.com/heynemann/pyccuracy/wiki/
* PyCuda: https://documen.tician.de/pycuda/
* Pyevolve: http://pyevolve.sourceforge.net/
* Pylo: https://documen.tician.de/pylo/
* PyMQI: http://pythonhosted.org/pymqi/
* PyPubSub: http://pubsub.sourceforge.net/
* pySPACE: http://pyspace.github.io/pyspace/
* Python: https://docs.python.org/3/
* python-apt: http://apt.alioth.debian.org/python-apt-doc/
* PyUblas: https://documen.tician.de/pyublas/
* Quex: http://quex.sourceforge.net/doc/html/main.html
* Ring programming language: http://ring-lang.sourceforge.net/doc/index.html
* Scapy: http://www.secdev.org/projects/scapy/doc/
* Seaborn: https://stanford.edu/~mwaskom/software/seaborn/
* Segway: http://noble.gs.washington.edu/proj/segway/doc/1.1.0/segway.html
* SimPy: http://simpy.readthedocs.org/en/latest/
* SymPy: http://docs.sympy.org/
* WTForms: http://wtforms.simplecodes.com/docs/
* z3c: http://www.ibiblio.org/paulcarduner/z3ctutorial/
@@ -129,6 +114,7 @@ Documentation using the sphinxdoc theme
Documentation using another builtin theme
-----------------------------------------
* ASE: https://wiki.fysik.dtu.dk/ase/ (sphinx_rtd_theme)
* C/C++ Development with Eclipse: http://eclipsebook.in/ (agogo)
* ESWP3 (http://eswp3.org) (sphinx_rtd_theme)
* Jinja: http://jinja.pocoo.org/ (scrolls)
@@ -137,13 +123,17 @@ Documentation using another builtin theme
* Linguistica: http://linguistica-uchicago.github.io/lxa5/ (sphinx_rtd_theme)
* MoinMoin: https://moin-20.readthedocs.io/en/latest/ (sphinx_rtd_theme)
* MPipe: http://vmlaker.github.io/mpipe/ (sphinx13)
* Paver: http://paver.readthedocs.io/en/latest/
* pip: https://pip.pypa.io/en/latest/ (sphinx_rtd_theme)
* Pyramid web framework:
http://docs.pylonsproject.org/projects/pyramid/en/latest/ (pyramid)
* Programmieren mit PyGTK und Glade (German):
http://www.florian-diesch.de/doc/python-und-glade/online/ (agogo)
* PyPubSub: http://pypubsub.readthedocs.io/ (bizstyle)
* Pyramid web framework:
http://docs.pylonsproject.org/projects/pyramid/en/latest/ (pyramid)
* Quex: http://quex.sourceforge.net/doc/html/main.html
* Satchmo: http://docs.satchmoproject.com/en/latest/ (sphinx_rtd_theme)
* Setuptools: http://pythonhosted.org/setuptools/ (nature)
* SimPy: http://simpy.readthedocs.org/en/latest/
* Spring Python: http://docs.spring.io/spring-python/1.2.x/sphinx/html/ (nature)
* sqlparse: http://python-sqlparse.googlecode.com/svn/docs/api/index.html
(agogo)
@@ -169,6 +159,7 @@ Documentation using a custom theme/integrated in a site
* Flask-OpenID: http://pythonhosted.org/Flask-OpenID/
* Gameduino: http://excamera.com/sphinx/gameduino/
* GeoServer: http://docs.geoserver.org/
* gevent: http://www.gevent.org/
* GHC - Glasgow Haskell Compiler: http://downloads.haskell.org/~ghc/master/users-guide/
* Glashammer: http://glashammer.org/
* Istihza (Turkish Python documentation project): http://belgeler.istihza.com/py2/
@@ -194,6 +185,7 @@ Documentation using a custom theme/integrated in a site
* QGIS: http://qgis.org/en/docs/index.html
* qooxdoo: http://manual.qooxdoo.org/current/
* Roundup: http://www.roundup-tracker.org/
* Seaborn: https://stanford.edu/~mwaskom/software/seaborn/
* Selenium: http://docs.seleniumhq.org/docs/
* Self: http://www.selflanguage.org/
* Substance D: http://docs.pylonsproject.org/projects/substanced/en/latest/

View File

@@ -2,21 +2,25 @@ include README.rst
include LICENSE
include AUTHORS
include CHANGES
include CHANGES.old
include CONTRIBUTING.rst
include EXAMPLES
include TODO
include babel.cfg
include Makefile
include ez_setup.py
include sphinx-autogen.py
include sphinx-build.py
include sphinx-quickstart.py
include sphinx-apidoc.py
include test-reqs.txt
include tox.ini
include sphinx/locale/.tx/config
recursive-include sphinx/templates *
recursive-include sphinx/texinputs *
recursive-include sphinx/themes *
recursive-include sphinx/locale *
recursive-include sphinx/pycode/pgen2 *.c *.pyx
recursive-include sphinx/locale *.js *.pot *.po *.mo
recursive-include sphinx/search/non-minified-js *.js
recursive-include sphinx/ext/autosummary/templates *
recursive-include tests *
@@ -25,3 +29,4 @@ include sphinx/pycode/Grammar-py*
recursive-include doc *
prune doc/_build
prune sphinx/locale/.tx

View File

@@ -3,15 +3,12 @@ PYTHON ?= python
.PHONY: all style-check type-check clean clean-pyc clean-patchfiles clean-backupfiles \
clean-generated pylint reindent test covertest build
DONT_CHECK = -i build -i dist -i sphinx/style/jquery.js \
-i sphinx/pycode/pgen2 -i sphinx/util/smartypants.py \
-i .ropeproject -i doc/_build -i tests/path.py \
-i tests/coverage.py -i utils/convert.py \
-i tests/typing_test_data.py \
-i tests/test_autodoc_py35.py \
-i tests/roots/test-warnings/undecodable.rst \
-i tests/build \
-i tests/roots/test-warnings/undecodable.rst \
DONT_CHECK = -i .ropeproject \
-i .tox \
-i build \
-i dist \
-i doc/_build \
-i sphinx/pycode/pgen2 \
-i sphinx/search/da.py \
-i sphinx/search/de.py \
-i sphinx/search/en.py \
@@ -28,17 +25,25 @@ DONT_CHECK = -i build -i dist -i sphinx/style/jquery.js \
-i sphinx/search/ru.py \
-i sphinx/search/sv.py \
-i sphinx/search/tr.py \
-i .tox
-i sphinx/style/jquery.js \
-i sphinx/util/smartypants.py \
-i tests/build \
-i tests/path.py \
-i tests/roots/test-directive-code/target.py \
-i tests/roots/test-warnings/undecodable.rst \
-i tests/test_autodoc_py35.py \
-i tests/typing_test_data.py \
-i utils/convert.py
all: clean-pyc clean-backupfiles style-check type-check test
style-check:
@$(PYTHON) utils/check_sources.py $(DONT_CHECK) .
@PYTHONWARNINGS=all $(PYTHON) utils/check_sources.py $(DONT_CHECK) .
type-check:
mypy sphinx/
clean: clean-pyc clean-pycache clean-patchfiles clean-backupfiles clean-generated clean-testfiles clean-buildfiles
clean: clean-pyc clean-pycache clean-patchfiles clean-backupfiles clean-generated clean-testfiles clean-buildfiles clean-mypyfiles
clean-pyc:
find . -name '*.pyc' -exec rm -f {} +
@@ -54,17 +59,28 @@ clean-patchfiles:
clean-backupfiles:
find . -name '*~' -exec rm -f {} +
find . -name '*.bak' -exec rm -f {} +
find . -name '*.swp' -exec rm -f {} +
find . -name '*.swo' -exec rm -f {} +
clean-generated:
find . -name '.DS_Store' -exec rm -f {} +
rm -rf doc/_build/
rm -f sphinx/pycode/*.pickle
rm -f utils/*3.py*
rm -f utils/regression_test.js
clean-testfiles:
rm -rf tests/.coverage
rm -rf tests/build
rm -rf .tox/
rm -rf .cache/
clean-buildfiles:
rm -rf build
clean-mypyfiles:
rm -rf .mypy_cache/
pylint:
@pylint --rcfile utils/pylintrc sphinx
@@ -72,14 +88,13 @@ reindent:
@$(PYTHON) utils/reindent.py -r -n .
test:
@cd tests; $(PYTHON) run.py -I py35 -d -m '^[tT]est' $(TEST)
@cd tests; $(PYTHON) run.py --ignore py35 -v $(TEST)
test-async:
@cd tests; $(PYTHON) run.py -d -m '^[tT]est' $(TEST)
@cd tests; $(PYTHON) run.py -v $(TEST)
covertest:
@cd tests; $(PYTHON) run.py -d -m '^[tT]est' --with-coverage \
--cover-package=sphinx $(TEST)
@cd tests; $(PYTHON) run.py -v --cov=sphinx --junitxml=.junit.xml $(TEST)
build:
@$(PYTHON) setup.py build

View File

@@ -1,8 +1,16 @@
.. image:: https://img.shields.io/pypi/v/sphinx.svg
:target: http://pypi.python.org/pypi/sphinx
.. image:: https://readthedocs.org/projects/sphinx/badge/
:target: http://www.sphinx-doc.org/
:alt: Documentation Status
.. image:: https://travis-ci.org/sphinx-doc/sphinx.svg?branch=master
:target: https://travis-ci.org/sphinx-doc/sphinx
=================
README for Sphinx
=================
This is the Sphinx documentation generator, see http://sphinx-doc.org/.
This is the Sphinx documentation generator, see http://www.sphinx-doc.org/.
Installing
@@ -36,22 +44,23 @@ Install from cloned source as editable::
Release signatures
==================
Releases are signed with `498D6B9E <https://pgp.mit.edu/pks/lookup?op=vindex&search=0x102C2C17498D6B9E>`_
Releases are signed with the following keys:
* `498D6B9E <https://pgp.mit.edu/pks/lookup?op=vindex&search=0x102C2C17498D6B9E>`_
* `5EBA0E07 <https://pgp.mit.edu/pks/lookup?op=vindex&search=0x1425F8CE5EBA0E07>`_
Reading the docs
================
After installing::
You can read them online at <http://www.sphinx-doc.org/>.
Or, after installing::
cd doc
make html
Then, direct your browser to ``_build/html/index.html``.
Or read them online at <http://sphinx-doc.org/>.
Testing
=======
@@ -63,25 +72,13 @@ If you want to use a different interpreter, e.g. ``python3``, use::
PYTHON=python3 make test
Continuous testing runs on travis:
.. image:: https://travis-ci.org/sphinx-doc/sphinx.svg?branch=master
:target: https://travis-ci.org/sphinx-doc/sphinx
Continuous testing runs on travis: https://travis-ci.org/sphinx-doc/sphinx
Contributing
============
#. Check for open issues or open a fresh issue to start a discussion around a
feature idea or a bug.
#. If you feel uncomfortable or uncertain about an issue or your changes, feel
free to email sphinx-dev@googlegroups.com.
#. Fork the repository on GitHub https://github.com/sphinx-doc/sphinx
to start making your changes to the **master** branch for next major
version, or **stable** branch for next minor version.
#. Write a test which shows that the bug was fixed or that the feature works
as expected. Use ``make test`` to run the test suite.
#. Send a pull request and bug the maintainer until it gets merged and
published. Make sure to add yourself to AUTHORS
<https://github.com/sphinx-doc/sphinx/blob/master/AUTHORS> and the change to
CHANGES <https://github.com/sphinx-doc/sphinx/blob/master/CHANGES>.
See `CONTRIBUTING.rst`__
.. __: CONTRIBUTING.rst

View File

@@ -79,7 +79,7 @@ language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
# These patterns also affect html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# The reST default role (used for this markup: `text`) to use for all
@@ -268,11 +268,6 @@ latex_documents = [
#
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#
# latex_use_parts = False
# If true, show page references after internal links.
#
# latex_show_pagerefs = False

View File

@@ -46,18 +46,30 @@
<h2 style="margin-bottom: 0">{%trans%}Documentation{%endtrans%}</h2>
<table class="contentstable"><tr>
<td>
<p class="biglink"><a class="biglink" href="{{ pathto("tutorial") }}">{%trans%}First steps with Sphinx{%endtrans%}</a><br/>
<span class="linkdescr">{%trans%}overview of basic tasks{%endtrans%}</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("contents") }}">{%trans%}Contents{%endtrans%}</a><br/>
<span class="linkdescr">{%trans%}for a complete overview{%endtrans%}</span></p>
</td><td>
{%- if hasdoc('search') %}<p class="biglink"><a class="biglink" href="{{ pathto("search") }}">{%trans%}Search page{%endtrans%}</a><br/>
<span class="linkdescr">{%trans%}search the documentation{%endtrans%}</span></p>{%- endif %}
{%- if hasdoc('genindex') %}<p class="biglink"><a class="biglink" href="{{ pathto("genindex") }}">{%trans%}General Index{%endtrans%}</a><br/>
<span class="linkdescr">{%trans%}all functions, classes, terms{%endtrans%}</span></p>{%- endif %}
</td></tr>
<table class="contentstable">
<tr>
<td>
<p class="biglink"><a class="biglink" href="{{ pathto("tutorial") }}">{%trans%}First steps with Sphinx{%endtrans%}</a><br/>
<span class="linkdescr">{%trans%}overview of basic tasks{%endtrans%}</span></p>
</td><td>
{%- if hasdoc('search') %}<p class="biglink"><a class="biglink" href="{{ pathto("search") }}">{%trans%}Search page{%endtrans%}</a><br/>
<span class="linkdescr">{%trans%}search the documentation{%endtrans%}</span></p>{%- endif %}
</td>
</tr><tr>
<td>
<p class="biglink"><a class="biglink" href="{{ pathto("contents") }}">{%trans%}Contents{%endtrans%}</a><br/>
<span class="linkdescr">{%trans%}for a complete overview{%endtrans%}</span></p>
</td><td>
{%- if hasdoc('genindex') %}<p class="biglink"><a class="biglink" href="{{ pathto("genindex") }}">{%trans%}General Index{%endtrans%}</a><br/>
<span class="linkdescr">{%trans%}all functions, classes, terms{%endtrans%}</span></p>{%- endif %}
</td>
</tr><tr>
<td>
<p class="biglink"><a class="biglink" href="{{ pathto("changes") }}">{%trans%}Changes{%endtrans%}</a><br/>
<span class="linkdescr">{%trans%}release history{%endtrans%}</span></p>
</td><td>
</td>
</tr>
</table>
<p>{%trans%}

View File

@@ -4,14 +4,14 @@
<h3>Download</h3>
{% if version.endswith('a0') %}
<p>{%trans%}This documentation is for version <b>{{ version }}</b>, which is
<p>{%trans%}This documentation is for version <b><a href="changes.html">{{ version }}</a></b>, which is
not released yet.{%endtrans%}</p>
<p>{%trans%}You can use it from the
<a href="https://github.com/sphinx-doc/sphinx/">Git repo</a> or look for
released versions in the <a href="https://pypi.python.org/pypi/Sphinx">Python
Package Index</a>.{%endtrans%}</p>
{% else %}
<p>{%trans%}Current version: <b>{{ version }}</b>{%endtrans%}</p>
<p>{%trans%}Current version: <b><a href="changes.html">{{ version }}</a></b>{%endtrans%}</p>
<p>{%trans%}Get Sphinx from the <a href="https://pypi.python.org/pypi/Sphinx">Python Package
Index</a>, or install it with:{%endtrans%}</p>
<pre>pip install -U Sphinx</pre>

View File

@@ -181,9 +181,25 @@ The builder's "name" must be given to the **-b** command-line option of
present in a "minimal" TeX distribution installation. For TeXLive,
the following packages need to be installed:
* latex-recommended
* latex-extra
* fonts-recommended
* texlive-latex-recommended
* texlive-fonts-recommended
* texlive-latex-extra
You may also need latex-xcolor, but Sphinx does not require it (and
recent distributions have ``xcolor.sty`` included in latex-recommended).
Unicode engines will need their respective packages texlive-luatex or
texlive-xetex.
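On Debian/Ubuntu, for instance, the packages above could be installed roughly like this (package names are the Debian ones; other distributions differ):

```shell
# Packages required by Sphinx's LaTeX builder on a minimal TeXLive
apt-get install texlive-latex-recommended \
                texlive-fonts-recommended \
                texlive-latex-extra

# Only needed when building with a Unicode engine
apt-get install texlive-luatex texlive-xetex
```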
Sphinx's LaTeX output is tested on Ubuntu trusty with the above
texlive packages, which come from a `TeXLive 2013 snapshot dated
20140215`__.
__ http://packages.ubuntu.com/trusty/texlive-latex-recommended
.. versionchanged::
1.6 Formerly, testing was done for some years on Ubuntu precise
(based on TeXLive 2009).
.. autoattribute:: name

View File

@@ -56,6 +56,7 @@ latex_logo = '_static/sphinx.png'
latex_elements = {
'fontpkg': '\\usepackage{palatino}',
'passoptionstopackages': '\\PassOptionsToPackage{svgnames}{xcolor}',
'printindex': '\\footnotesize\\raggedright\\printindex',
}
latex_show_urls = 'footnote'

View File

@@ -101,10 +101,13 @@ General configuration
suffix that is not in the dictionary will be parsed with the default
reStructuredText parser.
For example::
source_parsers = {'.md': 'some.markdown.module.Parser'}
source_parsers = {'.md': 'recommonmark.parser.CommonMarkParser'}
.. note::
Read more about how to use Markdown with Sphinx at :ref:`markdown`.
.. versionadded:: 1.3
@@ -223,8 +226,10 @@ General configuration
* app.add_role
* app.add_generic_role
* app.add_source_parser
* download.not_readable
* image.data_uri
* image.nonlocal_uri
* image.not_readable
* ref.term
* ref.ref
* ref.numref
@@ -233,6 +238,8 @@ General configuration
* ref.citation
* ref.doc
* misc.highlighting_failure
* toc.secnum
* epub.unknown_project_files
You can choose from these types.
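For example, a ``conf.py`` entry suppressing two of the categories above might look like this (which categories to pick is up to the project):

```python
# conf.py -- suppress selected Sphinx warning categories during the build
suppress_warnings = [
    'image.nonlocal_uri',  # images referenced by a full URL
    'ref.citation',        # citations that are never referenced
]
```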
@@ -244,6 +251,10 @@ General configuration
Added ``misc.highlighting_failure``
.. versionchanged:: 1.5.1
Added ``epub.unknown_project_files``
.. confval:: needs_sphinx
If set to a ``major.minor`` version string like ``'1.1'``, Sphinx will
@@ -373,18 +384,6 @@ Project information
%Y'`` (or, if translation is enabled with :confval:`language`, an equivalent
format for the selected locale).
.. versionchanged:: 1.4
Format specification was changed from strftime to Locale Data Markup
Language. strftime format is also supported for backward compatibility
until Sphinx-1.5.
.. versionchanged:: 1.4.1
Format specification was changed again from Locale Data Markup Language
to strftime. LDML format is also supported for backward compatibility
until Sphinx-1.5.
.. confval:: highlight_language
The default language to highlight source code in. The default is
@@ -760,19 +759,6 @@ that use Sphinx's HTMLWriter class.
The empty string is equivalent to ``'%b %d, %Y'`` (or a
locale-dependent equivalent).
.. versionchanged:: 1.4
Format specification was changed from strftime to Locale Data Markup
Language. strftime format is also supported for backward compatibility
until Sphinx-1.5.
.. versionchanged:: 1.4.1
Format specification was changed again from Locale Data Markup Language
to strftime. LDML format is also supported for backward compatibility
until Sphinx-1.5.
.. confval:: html_use_smartypants
If true, `SmartyPants <http://daringfireball.net/projects/smartypants/>`_
@@ -872,13 +858,6 @@ that use Sphinx's HTMLWriter class.
.. versionadded:: 1.0
.. confval:: html_use_modindex
If true, add a module index to the HTML documents. Default is ``True``.
.. deprecated:: 1.0
Use :confval:`html_domain_indices`.
.. confval:: html_use_index
If true, add an index to the HTML documents. Default is ``True``.
@@ -941,20 +920,6 @@ that use Sphinx's HTMLWriter class.
.. versionadded:: 0.6
.. confval:: html_translator_class
A string with the fully-qualified name of an HTML Translator class, that is, a
subclass of Sphinx's :class:`~sphinx.writers.html.HTMLTranslator`, that is
used to translate document trees to HTML. Default is ``None`` (use the
builtin translator).
.. seealso:: :meth:`~sphinx.application.Sphinx.set_translator`
.. deprecated:: 1.5
Implement your translator as extension and use `Sphinx.set_translator`
instead.
.. confval:: html_show_copyright
If true, "(C) Copyright ..." is shown in the HTML footer. Default is
@@ -979,9 +944,10 @@ that use Sphinx's HTMLWriter class.
.. confval:: html_compact_lists
If true, list items containing only a single paragraph will not be rendered
with a ``<p>`` element. This is standard docutils behavior. Default:
``True``.
If true, a list all of whose items consist of a single paragraph and/or a
sub-list all of whose items etc. (the definition is recursive) will not use
the ``<p>`` element for any of its items. This is standard docutils
behavior. Default: ``True``.
.. versionadded:: 1.0
@@ -1123,6 +1089,11 @@ that use Sphinx's HTMLWriter class.
Output file base name for HTML help builder. Default is ``'pydoc'``.
.. confval:: html_experimental_html5_writer
Output is processed with the HTML5 writer. This feature requires docutils 0.13 or newer. Default is ``False``.
.. versionadded:: 1.6
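A sketch of opting in from ``conf.py`` (requires docutils 0.13 or newer):

```python
# conf.py -- opt in to the experimental HTML5 writer
html_experimental_html5_writer = True
```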
.. _applehelp-options:
@@ -1324,7 +1295,7 @@ the `Dublin Core metadata <http://dublincore.org/>`_.
.. confval:: epub_description
The description of the document. The default value is ``''``.
The description of the document. The default value is ``'unknown'``.
.. versionadded:: 1.4
@@ -1530,20 +1501,6 @@ the `Dublin Core metadata <http://dublincore.org/>`_.
.. [#] https://developer.mozilla.org/en-US/docs/Web/CSS/writing-mode
.. confval:: epub3_page_progression_direction
The global direction in which the content flows.
Allowed values are ``'ltr'`` (left-to-right), ``'rtl'`` (right-to-left) and
``'default'``. The default value is ``'ltr'``.
When the ``'default'`` value is specified, the Author is expressing no
preference and the Reading System may choose the rendering direction.
.. versionadded:: 1.4
.. deprecated:: 1.5
Use ``epub_writing_mode`` instead.
.. _latex-options:
Options for LaTeX output
@@ -1577,8 +1534,8 @@ These options influence LaTeX output. See further :doc:`latex`.
backslash or ampersand must be represented by the proper LaTeX commands if
they are to be inserted literally.
* *author*: Author for the LaTeX document. The same LaTeX markup caveat as
for *title* applies. Use ``\and`` to separate multiple authors, as in:
``'John \and Sarah'``.
for *title* applies. Use ``\\and`` to separate multiple authors, as in:
``'John \\and Sarah'`` (backslashes must be Python-escaped to reach LaTeX).
* *documentclass*: Normally, one of ``'manual'`` or ``'howto'`` (provided
by Sphinx and based on ``'report'``, resp. ``'article'``; Japanese
documents use ``'jsbook'``, resp. ``'jreport'``.) "howto" (non-Japanese)
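A minimal ``conf.py`` entry combining the fields above might read as follows (document and file names here are hypothetical):

```python
# conf.py -- (startdocname, targetname, title, author, documentclass)
latex_documents = [
    ('index', 'demo.tex', 'Demo Documentation',
     'John \\and Sarah',   # doubled backslash so LaTeX receives \and
     'manual'),
]
```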
@@ -1615,16 +1572,6 @@ These options influence LaTeX output. See further :doc:`latex`.
.. versionadded:: 1.4
.. confval:: latex_use_parts
If true, the topmost sectioning unit is parts, else it is chapters. Default:
``False``.
.. versionadded:: 0.3
.. deprecated:: 1.4
Use :confval:`latex_toplevel_sectioning`.
.. confval:: latex_appendices
A list of document names to append as an appendix to all manuals.
@@ -1640,13 +1587,6 @@ These options influence LaTeX output. See further :doc:`latex`.
.. versionadded:: 1.0
.. confval:: latex_use_modindex
If true, add a module index to LaTeX documents. Default is ``True``.
.. deprecated:: 1.0
Use :confval:`latex_domain_indices`.
.. confval:: latex_show_pagerefs
If true, add page references after internal references. This is very useful
@@ -1671,18 +1611,39 @@ These options influence LaTeX output. See further :doc:`latex`.
.. confval:: latex_keep_old_macro_names
If ``True`` (default) the ``\strong``, ``\code``, ``\bfcode``, ``\email``,
If ``True`` the ``\strong``, ``\code``, ``\bfcode``, ``\email``,
``\tablecontinued``, ``\titleref``, ``\menuselection``, ``\accelerator``,
``\crossref``, ``\termref``, and ``\optional`` text styling macros are
pre-defined by Sphinx and may be user-customized by some
``\renewcommand``'s inserted either via ``'preamble'`` key or :dudir:`raw
<raw-data-pass-through>` directive. If ``False``, only ``\sphinxstrong``,
etc... macros are defined (and may be redefined by user). Setting to
``False`` may help solve macro name conflicts caused by user-added latex
packages.
etc... macros are defined (and may be redefined by user).
The default is ``False`` as it prevents macro name conflicts caused by
latex packages. For example (``lualatex`` or ``xelatex``) ``fontspec v2.6``
has its own ``\strong`` macro.
.. versionadded:: 1.4.5
.. versionchanged:: 1.6
Default was changed from ``True`` to ``False``.
.. deprecated:: 1.6
This setting will be removed at Sphinx 1.7.
.. confval:: latex_use_latex_multicolumn
If ``False`` (default), the LaTeX writer uses for merged cells in grid
tables Sphinx's own macros. They have the advantage to allow the same
contents as in non-merged cells (inclusive of literal blocks, lists,
blockquotes, ...). But they assume that the columns are separated by the
standard vertical rule. Further, in case the :rst:dir:`tabularcolumns`
directive was employed to inject more macros (using LaTeX's mark-up of the
type ``>{..}``, ``<{..}``, ``@{..}``) the multicolumn cannot ignore these
extra macros, contrary to LaTeX's own ``\multicolumn``; but Sphinx's
version does arrange for ignoring ``\columncolor`` like the standard
``\multicolumn`` does. Setting to ``True`` means to use LaTeX's standard
``\multicolumn`` macro.
.. versionadded:: 1.6
.. confval:: latex_elements
@@ -1704,11 +1665,12 @@ These options influence LaTeX output. See further :doc:`latex`.
``'12pt'``), default ``'10pt'``.
``'pxunit'``
the value of the ``px`` when used in image attributes ``width`` and
``height``. The default value is ``'49336sp'`` which achieves
``96px=1in`` (``1in = 72.27*65536 = 4736286.72sp``, and all dimensions
in TeX are internally integer multiples of ``sp``). To obtain for
example ``100px=1in``, one can use ``'0.01in'`` but it is more precise
to use ``'47363sp'``. To obtain ``72px=1in``, use ``'1bp'``.
``height``. The default value is ``'0.75bp'`` which achieves
``96px=1in`` (in TeX ``1in = 72bp = 72.27pt``.) To obtain for
example ``100px=1in`` use ``'0.01in'`` or ``'0.7227pt'`` (the latter
leads to TeX computing a more precise value, due to the smaller unit
used in the specification); for ``72px=1in``,
simply use ``'1bp'``; for ``90px=1in``, use ``'0.8bp'`` or ``'0.803pt'``.
.. versionadded:: 1.5
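The arithmetic behind these defaults can be checked directly, using TeX's fixed conversions ``1in = 72bp = 72.27pt`` and ``1pt = 65536sp``:

```python
# TeX unit conversions: 1in = 72bp = 72.27pt, 1pt = 65536sp
SP_PER_IN = 72.27 * 65536        # 4736286.72 scaled points per inch
BP_PER_IN = 72.0

# The default 'pxunit' of 0.75bp gives 96 px per inch:
px_per_in = BP_PER_IN / 0.75
print(px_per_in)                  # 96.0

# For 100px=1in, one px is 0.01in, i.e. about 47363sp:
print(round(SP_PER_IN / 100))     # 47363
```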
``'sphinxsetup'``
@@ -1721,12 +1683,6 @@ These options influence LaTeX output. See further :doc:`latex`.
contain ``\\PassOptionsToPackage{options}{foo}`` commands. Default empty.
.. versionadded:: 1.4
``'geometry'``
"geometry" package inclusion, the default definition is:
``'\\usepackage[margin=1in,marginparwidth=0.5in]{geometry}'``.
.. versionadded:: 1.5
``'babel'``
"babel" package inclusion, default ``'\\usepackage{babel}'`` (the
suitable document language string is passed as class option, and
@@ -1736,6 +1692,8 @@ These options influence LaTeX output. See further :doc:`latex`.
.. versionchanged:: 1.5
For :confval:`latex_engine` set to ``'xelatex'``, the default
is ``'\\usepackage{polyglossia}\n\\setmainlanguage{<language>}'``.
.. versionchanged:: 1.6
``'lualatex'`` uses same default setting as ``'xelatex'``
``'fontpkg'``
Font package inclusion, default ``'\\usepackage{times}'`` (which uses
Times and Helvetica). You can set this to ``''`` to use the Computer
@@ -1746,6 +1704,8 @@ These options influence LaTeX output. See further :doc:`latex`.
script.
.. versionchanged:: 1.5
Defaults to ``''`` when :confval:`latex_engine` is ``'xelatex'``.
.. versionchanged:: 1.6
Defaults to ``''`` also with ``'lualatex'``.
``'fncychap'``
Inclusion of the "fncychap" package (which makes fancy chapter titles),
default ``'\\usepackage[Bjarne]{fncychap}'`` for English documentation
@@ -1774,8 +1734,16 @@ These options influence LaTeX output. See further :doc:`latex`.
.. deprecated:: 1.5
Use ``'atendofbody'`` key instead.
* Keys that don't need be overridden unless in special cases are:
* Keys that don't need to be overridden unless in special cases are:
``'extraclassoptions'``
The default is the empty string. Example: ``'extraclassoptions':
'openany'`` will allow chapters (for documents of the ``'manual'``
type) to start on any page.
.. versionadded:: 1.2
.. versionchanged:: 1.6
Added this documentation.
``'maxlistdepth'``
LaTeX allows by default at most 6 levels for nesting list and
quote-like environments, with at most 4 enumerated lists, and 4 bullet
@@ -1811,6 +1779,38 @@ These options influence LaTeX output. See further :doc:`latex`.
.. versionchanged:: 1.5
Defaults to ``'\\usepackage{fontspec}'`` when
:confval:`latex_engine` is ``'xelatex'``.
.. versionchanged:: 1.6
``'lualatex'`` also uses ``fontspec`` per default.
``'geometry'``
"geometry" package inclusion, the default definition is:
``'\\usepackage{geometry}'``
with an additional ``[dvipdfm]`` for Japanese documents.
The Sphinx LaTeX style file executes:
``\PassOptionsToPackage{hmargin=1in,vmargin=1in,marginpar=0.5in}{geometry}``
which can be customized via corresponding :ref:`'sphinxsetup' options
<latexsphinxsetup>`.
.. versionadded:: 1.5
.. versionchanged:: 1.5.2
``dvipdfm`` option if :confval:`latex_engine` is ``'platex'``.
.. versionadded:: 1.5.3
The :ref:`'sphinxsetup' keys for the margins
<latexsphinxsetuphmargin>`.
.. versionchanged:: 1.5.3
The location in the LaTeX file has been moved to after
``\usepackage{sphinx}`` and ``\sphinxsetup{..}``, hence also after
insertion of ``'fontpkg'`` key. This is in order to handle the paper
layout options in a special way for Japanese documents: the text
width will be set to an integer multiple of the *zenkaku* width, and
the text height to an integer multiple of the baseline. See the
:ref:`hmargin <latexsphinxsetuphmargin>` documentation for more.
``'hyperref'``
"hyperref" package inclusion; also loads package "hypcap" and issues
``\urlstyle{same}``. This is done after :file:`sphinx.sty` file is
@@ -1829,7 +1829,8 @@ These options influence LaTeX output. See further :doc:`latex`.
generate a differently-styled title page.
``'releasename'``
value that prefixes ``'release'`` element on title page, default
``'Release'``.
``'Release'``. As for *title* and *author* used in the tuples of
:confval:`latex_documents`, it is inserted as LaTeX markup.
``'tableofcontents'``
"tableofcontents" call, default ``'\\sphinxtableofcontents'`` (it is a
wrapper of unmodified ``\tableofcontents``, which may itself be
@@ -1844,10 +1845,12 @@ These options influence LaTeX output. See further :doc:`latex`.
modifying it also such as "tocloft" or "etoc".
``'transition'``
Commands used to display transitions, default
``'\n\n\\bigskip\\hrule{}\\bigskip\n\n'``. Override if you want to
``'\n\n\\bigskip\\hrule\\bigskip\n\n'``. Override if you want to
display transitions differently.
.. versionadded:: 1.2
.. versionchanged:: 1.6
Remove unneeded ``{}`` after ``\\hrule``.
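An override of this key in ``conf.py`` could look like the following sketch (the value shown simply restates the default, as a starting point for customization):

```python
# conf.py -- customize how LaTeX output renders transitions
latex_elements = {
    'transition': '\n\n\\bigskip\\hrule\\bigskip\n\n',
}
```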
``'printindex'``
"printindex" call, the last thing in the file, default
``'\\printindex'``. Override if you want to generate the index
@@ -1897,27 +1900,6 @@ These options influence LaTeX output. See further :doc:`latex`.
.. versionchanged:: 1.2
This overrides the files which is provided from Sphinx such as sphinx.sty.
.. confval:: latex_preamble
Additional LaTeX markup for the preamble.
.. deprecated:: 0.5
Use the ``'preamble'`` key in the :confval:`latex_elements` value.
.. confval:: latex_paper_size
The output paper size (``'letter'`` or ``'a4'``). Default is ``'letter'``.
.. deprecated:: 0.5
Use the ``'papersize'`` key in the :confval:`latex_elements` value.
.. confval:: latex_font_size
The font size ('10pt', '11pt' or '12pt'). Default is ``'10pt'``.
.. deprecated:: 0.5
Use the ``'pointsize'`` key in the :confval:`latex_elements` value.
.. _text-options:

View File

@@ -17,8 +17,10 @@ Sphinx documentation contents
config
intl
theming
setuptools
templating
latex
markdown
extensions
extdev/index
websupport

View File

@@ -55,17 +55,17 @@ This is the current list of contributed extensions in that repository:
- hyphenator: client-side hyphenation of HTML using hyphenator_
- inlinesyntaxhighlight_: inline syntax highlighting
- lassodomain: a domain for documenting Lasso_ source code
- libreoffice: an extension to include any drawing supported by LibreOffice (e.g. odg, vsd...).
- libreoffice: an extension to include any drawing supported by LibreOffice (e.g. odg, vsd, ...).
- lilypond: an extension inserting music scripts from Lilypond_ in PNG format.
- makedomain_: a domain for `GNU Make`_
- matlabdomain: document MATLAB_ code.
- mockautodoc: mock imports.
- mscgen: embed mscgen-formatted MSC (Message Sequence Chart)s.
- napoleon: supports `Google style`_ and `NumPy style`_ docstrings.
- nicoviceo: embed videos from nicovideo
- nicovideo: embed videos from nicovideo
- nwdiag: embed network diagrams by using nwdiag_
- omegat: support tools to collaborate with OmegaT_ (Sphinx 1.1 needed)
- osaka: convert standard Japanese doc to Osaka dialect (it is joke extension)
- osaka: convert standard Japanese doc to Osaka dialect (this is a joke extension)
- paverutils: an alternate integration of Sphinx with Paver_.
- phpdomain: an extension for PHP support
- plantuml: embed UML diagram by using PlantUML_
@@ -113,7 +113,7 @@ own extensions.
.. _Google Analytics: http://www.google.com/analytics/
.. _Google Chart: https://developers.google.com/chart/
.. _Google Maps: https://www.google.com/maps
.. _Google style: http://google-styleguide.googlecode.com/svn/trunk/pyguide.html
.. _Google style: https://google.github.io/styleguide/pyguide.html
.. _NumPy style: https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt
.. _hyphenator: https://github.com/mnater/hyphenator
.. _exceltable: http://pythonhosted.org/sphinxcontrib-exceltable/

View File

@@ -1,331 +1 @@
Sphinx Developer's Guide
========================
.. topic:: Abstract
This document describes the development process of Sphinx, a documentation
system used by developers to document systems used by other developers to
develop other systems that may also be documented using Sphinx.
The Sphinx source code is managed using Git and is hosted on Github.
git clone git://github.com/sphinx-doc/sphinx
.. rubric:: Community
sphinx-users <sphinx-users@googlegroups.com>
Mailing list for user support.
sphinx-dev <sphinx-dev@googlegroups.com>
Mailing list for development related discussions.
#sphinx-doc on irc.freenode.net
IRC channel for development questions and user support.
Bug Reports and Feature Requests
--------------------------------
If you have encountered a problem with Sphinx or have an idea for a new
feature, please submit it to the `issue tracker`_ on Github or discuss it
on the sphinx-dev mailing list.
For bug reports, please include the output produced during the build process
and also the log file Sphinx creates after it encounters an unhandled
exception. The location of this file should be shown towards the end of the
error message.
Including or providing a link to the source files involved may help us fix the
issue. If possible, try to create a minimal project that produces the error
and post that instead.
.. _`issue tracker`: https://github.com/sphinx-doc/sphinx/issues
Contributing to Sphinx
----------------------
The recommended way for new contributors to submit code to Sphinx is to fork
the repository on Github and then submit a pull request after
committing the changes. The pull request will then need to be approved by one
of the core developers before it is merged into the main repository.
#. Check for open issues or open a fresh issue to start a discussion around a
feature idea or a bug.
#. If you feel uncomfortable or uncertain about an issue or your changes, feel
free to email sphinx-dev@googlegroups.com.
#. Fork `the repository`_ on Github to start making your changes to the
**master** branch for next major version, or **stable** branch for next
minor version.
#. Write a test which shows that the bug was fixed or that the feature works
as expected.
#. Send a pull request and bug the maintainer until it gets merged and
published. Make sure to add yourself to AUTHORS_ and the change to
CHANGES_.
.. _`the repository`: https://github.com/sphinx-doc/sphinx
.. _AUTHORS: https://github.com/sphinx-doc/sphinx/blob/master/AUTHORS
.. _CHANGES: https://github.com/sphinx-doc/sphinx/blob/master/CHANGES
Getting Started
~~~~~~~~~~~~~~~
These are the basic steps needed to start developing on Sphinx.
#. Create an account on Github.
#. Fork the main Sphinx repository (`sphinx-doc/sphinx
<https://github.com/sphinx-doc/sphinx>`_) using the Github interface.
#. Clone the forked repository to your machine. ::
git clone https://github.com/USERNAME/sphinx
cd sphinx
#. Checkout the appropriate branch.
For changes that should be included in the next minor release (namely bug
fixes), use the ``stable`` branch. ::
git checkout stable
For new features or other substantial changes that should wait until the
next major release, use the ``master`` branch.
#. Optional: setup a virtual environment. ::
virtualenv ~/sphinxenv
. ~/sphinxenv/bin/activate
pip install -e .
#. Create a new working branch. Choose any name you like. ::
git checkout -b feature-xyz
#. Hack, hack, hack.
For tips on working with the code, see the `Coding Guide`_.
#. Test, test, test. Possible steps:
* Run the unit tests::
pip install -r test-reqs.txt
make test
* Again, it's useful to turn deprecation warnings on so they're shown in
the test output::
PYTHONWARNINGS=all make test
* Build the documentation and check the output for different builders::
cd doc
make clean html latexpdf
* Run the unit tests under different Python environments using
:program:`tox`::
pip install tox
tox -v
* Add a new unit test in the ``tests`` directory if you can.
* For bug fixes, first add a test that fails without your changes and passes
after they are applied.
* Tests that need a sphinx-build run should be integrated in one of the
existing test modules if possible. New tests that use ``@with_app`` and
then ``build_all`` for a few assertions are not good since *the test suite
should not take more than a minute to run*.
#. Please add a bullet point to :file:`CHANGES` if the fix or feature is not
trivial (small doc updates, typo fixes). Then commit::
git commit -m '#42: Add useful new feature that does this.'
Github recognizes certain phrases that can be used to automatically
update the issue tracker.
For example::
git commit -m 'Closes #42: Fix invalid markup in docstring of Foo.bar.'
would close issue #42.
#. Push changes in the branch to your forked repository on Github. ::
git push origin feature-xyz
#. Submit a pull request from your branch to the respective branch (``master``
or ``stable``) on ``sphinx-doc/sphinx`` using the Github interface.
#. Wait for a core developer to review your changes.
Core Developers
~~~~~~~~~~~~~~~
The core developers of Sphinx have write access to the main repository. They
can commit changes, accept/reject pull requests, and manage items on the issue
tracker.
You do not need to be a core developer or have write access to be involved in
the development of Sphinx. You can submit patches or create pull requests
from forked repositories and have a core developer add the changes for you.
The following are some general guidelines for core developers:
* Questionable or extensive changes should be submitted as a pull request
instead of being committed directly to the main repository. The pull
request should be reviewed by another core developer before it is merged.
* Trivial changes can be committed directly but be sure to keep the repository
in a good working state and that all tests pass before pushing your changes.
* When committing code written by someone else, please attribute the original
author in the commit message and any relevant :file:`CHANGES` entry.
Locale updates
~~~~~~~~~~~~~~
The parts of messages in Sphinx that go into builds are translated into several
locales. The translations are kept as gettext ``.po`` files translated from the
master template ``sphinx/locale/sphinx.pot``.
Sphinx uses `Babel <http://babel.edgewall.org>`_ to extract messages and
maintain the catalog files. It is integrated in ``setup.py``:
* Use ``python setup.py extract_messages`` to update the ``.pot`` template.
* Use ``python setup.py update_catalog`` to update all existing language
catalogs in ``sphinx/locale/*/LC_MESSAGES`` with the current messages in the
template file.
* Use ``python setup.py compile_catalog`` to compile the ``.po`` files to binary
``.mo`` files and ``.js`` files.
When an updated ``.po`` file is submitted, run compile_catalog to commit both
the source and the compiled catalogs.
When a new locale is submitted, add a new directory with the ISO 639-1 language
identifier and put ``sphinx.po`` in there. Don't forget to update the possible
values for :confval:`language` in ``doc/config.rst``.
The Sphinx core messages can also be translated on `Transifex
<https://www.transifex.com/>`_. There exists a client tool named ``tx`` in the
Python package "transifex_client", which can be used to pull translations in
``.po`` format from Transifex. To do this, go to ``sphinx/locale`` and then run
``tx pull -f -l LANG`` where LANG is an existing language identifier. It is
good practice to run ``python setup.py update_catalog`` afterwards to make sure
the ``.po`` file has the canonical Babel formatting.
Coding Guide
------------
* Try to use the same code style as used in the rest of the project. See the
`Pocoo Styleguide`__ for more information.
__ http://flask.pocoo.org/docs/styleguide/
* For non-trivial changes, please update the :file:`CHANGES` file. If your
changes alter existing behavior, please document this.
* New features should be documented. Include examples and use cases where
appropriate. If possible, include a sample that is displayed in the
generated output.
* When adding a new configuration variable, be sure to document it and update
:file:`sphinx/quickstart.py` if it's important enough.
* Use the included :program:`utils/check_sources.py` script to check for
common formatting issues (trailing whitespace, lengthy lines, etc).
* Add appropriate unit tests.
Debugging Tips
~~~~~~~~~~~~~~
* Delete the build cache before building documents if you make changes in the
code by running the command ``make clean`` or using the
:option:`sphinx-build -E` option.
* Use the :option:`sphinx-build -P` option to run Pdb on exceptions.
* Use ``node.pformat()`` and ``node.asdom().toxml()`` to generate a printable
representation of the document structure.
* Set the configuration variable :confval:`keep_warnings` to ``True`` so
warnings will be displayed in the generated output.
* Set the configuration variable :confval:`nitpicky` to ``True`` so that Sphinx
will complain about references without a known target.
* Set the debugging options in the `Docutils configuration file
<http://docutils.sourceforge.net/docs/user/config.html>`_.
* JavaScript stemming algorithms in `sphinx/search/*.py` (except `en.py`) are
generated by this
`modified snowball code generator <https://github.com/shibukawa/snowball>`_.
Generated `JSX <http://jsx.github.io/>`_ files are
in `this repository <https://github.com/shibukawa/snowball-stemmer.jsx>`_.
You can get the resulting JavaScript files using the following command:
.. code-block:: bash
$ npm install
$ node_modules/.bin/grunt build # -> dest/*.global.js
Deprecating a feature
---------------------
There are a couple of reasons why code in Sphinx might be deprecated:
* If a feature has been improved or modified in a backwards-incompatible way,
the old feature or behavior will be deprecated.
* Sometimes Sphinx will include a backport of a Python library that's not
included in a version of Python that Sphinx currently supports. When Sphinx
no longer needs to support the older version of Python that doesn't include
the library, the library will be deprecated in Sphinx.
As the :ref:`deprecation-policy` describes,
the first release of Sphinx that deprecates a feature (``A.B``) should raise a
``RemovedInSphinxXXWarning`` (where XX is the Sphinx version where the feature
will be removed) when the deprecated feature is invoked. Assuming we have good
test coverage, these warnings are converted to errors when running the test
suite with warnings enabled: ``python -Wall tests/run.py``. Thus, when adding
a ``RemovedInSphinxXXWarning`` you need to eliminate or silence any warnings
generated when running the tests.
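In code, the pattern looks roughly like this (a sketch: ``RemovedInSphinx17Warning`` and the helper functions are hypothetical stand-ins, not actual Sphinx code):

```python
import warnings

class RemovedInSphinx17Warning(DeprecationWarning):
    # Hypothetical stand-in for the versioned warning classes that
    # Sphinx defines for each pending removal.
    pass

def new_helper(text):
    return text.strip()

def old_helper(text):
    # Deprecated entry point: emit the versioned warning, then delegate
    # to the replacement so existing callers keep working.
    warnings.warn('old_helper() is deprecated, use new_helper() instead',
                  RemovedInSphinx17Warning, stacklevel=2)
    return new_helper(text)
```

Running the suite with warnings enabled (``python -Wall``) then surfaces every call site that still uses the deprecated entry point.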
.. _deprecation-policy:
Deprecation policy
------------------
A feature release may deprecate certain features from previous releases. If a
feature is deprecated in feature release 1.A, it will continue to work in all
1.A.x versions (for all versions of x) but raise warnings. Deprecated features
will be removed in the first 1.B release, or 1.B.1 for features deprecated in
the last 1.A.x feature release to ensure deprecations are done over at least 2
feature releases.
So, for example, if we decided to start the deprecation of a function in
Sphinx 1.4:
* Sphinx 1.4.x will contain a backwards-compatible replica of the function
which will raise a ``RemovedInSphinx16Warning``.
* Sphinx 1.5 (the version that follows 1.4) will still contain the
backwards-compatible replica.
* Sphinx 1.6 will remove the feature outright.
The warnings are displayed by default. You can turn off display of these
warnings with:
* ``PYTHONWARNINGS= make html`` (Linux/Mac)
* ``export PYTHONWARNINGS=`` and do ``make html`` (Linux/Mac)
* ``set PYTHONWARNINGS=`` and do ``make html`` (Windows)
.. include:: ../CONTRIBUTING.rst


@@ -318,6 +318,11 @@ are recognized and formatted nicely:
* ``returns``, ``return``: Description of the return value.
* ``rtype``: Return type. Creates a link if possible.
.. note::
In the current release, ``var``, ``ivar`` and ``cvar`` are all represented as
"Variable". There is no distinction between them.
The field names must consist of one of these keywords and an argument (except
for ``returns`` and ``rtype``, which do not need an argument). This is best
explained by an example::
@@ -540,6 +545,10 @@ The C++ Domain
The C++ domain (name **cpp**) supports documenting C++ projects.
Directives
~~~~~~~~~~
The following directives are available. All declarations can start with
a visibility statement (``public``, ``private`` or ``protected``).
@@ -735,6 +744,16 @@ a visibility statement (``public``, ``private`` or ``protected``).
Holder of elements, to which it can provide access via
:cpp:concept:`Iterator` s.
Options
.......
Some directives support options:
- ``:noindex:``, see :ref:`basic-domain-markup`.
- ``:tparam-line-spec:``, for templated declarations.
If specified, each template parameter will be rendered on a separate line.
Constrained Templates
~~~~~~~~~~~~~~~~~~~~~


@@ -63,7 +63,7 @@ a comma-separated list of group names.
default set of flags is specified by the :confval:`doctest_default_flags`
configuration variable.
This directive supports two options:
This directive supports three options:
* ``hide``, a flag option, hides the doctest block in other builders. By
default it is shown as a highlighted doctest block.
@@ -73,6 +73,19 @@ a comma-separated list of group names.
explicit flags per example, with doctest comments, but they will show up in
other builders too.)
* ``pyversion``, a string option, can be used to specify the required Python
version for the example to be tested. For instance, in the following case
the example will be tested only for Python versions greater than 3.3::
.. doctest::
:pyversion: > 3.3
The supported operands are ``<``, ``<=``, ``==``, ``>=``, ``>``, and
comparison is performed by `distutils.version.LooseVersion
<https://www.python.org/dev/peps/pep-0386/#distutils>`__.
.. versionadded:: 1.6
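The version comparison behaves roughly like the sketch below, which uses plain integer tuples in place of ``LooseVersion`` (``is_allowed_version`` is an illustrative name used here, an assumption rather than a documented API):

```python
import operator

OPERATORS = {'<': operator.lt, '<=': operator.le, '==': operator.eq,
             '>=': operator.ge, '>': operator.gt}

def is_allowed_version(spec, version):
    """Return True if *version* satisfies a spec such as '> 3.3'."""
    def as_tuple(v):
        # '3.4.1' -> (3, 4, 1); tuples compare element by element.
        return tuple(int(part) for part in v.strip().split('.'))
    op, _, wanted = spec.partition(' ')
    return OPERATORS[op](as_tuple(version), as_tuple(wanted))

is_allowed_version('> 3.3', '3.4.1')   # -> True
```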
Note that like with standard doctests, you have to use ``<BLANKLINE>`` to
signal a blank line in the expected output. The ``<BLANKLINE>`` is removed
when building presentation output (HTML, LaTeX etc.).


@@ -83,7 +83,7 @@ def module_level_function(param1, param2=None, *args, **kwargs):
of each parameter is required. The type and description of each parameter
is optional, but should be included if not obvious.
If \*args or \*\*kwargs are accepted,
If ``*args`` or ``**kwargs`` are accepted,
they should be listed as ``*args`` and ``**kwargs``.
The format for a parameter is::


@@ -106,7 +106,7 @@ def module_level_function(param1, param2=None, *args, **kwargs):
The name of each parameter is required. The type and description of each
parameter is optional, but should be included if not obvious.
If \*args or \*\*kwargs are accepted,
If ``*args`` or ``**kwargs`` are accepted,
they should be listed as ``*args`` and ``**kwargs``.
The format for a parameter is::


@@ -76,21 +76,9 @@ It adds these directives:
alternate text for HTML output. If not given, the alternate text defaults to
the graphviz code.
.. versionadded:: 1.1
All three directives support an ``inline`` flag that controls paragraph
breaks in the output. When set, the graph is inserted into the current
paragraph. If the flag is not given, paragraph breaks are introduced before
and after the image (the default).
.. versionadded:: 1.1
All three directives support a ``caption`` option that can be used to give a
caption to the diagram. Naturally, diagrams marked as "inline" cannot have a
caption.
.. deprecated:: 1.4
``inline`` option is deprecated.
All three directives generate inline node by default. If ``caption`` is given,
these generate block node instead.
caption to the diagram.
.. versionchanged:: 1.4
All three directives support a ``graphviz_dot`` option that can be used to switch the


@@ -89,11 +89,6 @@ package.
This allows extensions to use custom translator and define custom
nodes for the translator (see :meth:`add_node`).
This is a API version of :confval:`html_translator_class` for all other
builders. Note that if :confval:`html_translator_class` is specified and
this API is called for html related builders, API overriding takes
precedence.
.. versionadded:: 1.3
.. method:: Sphinx.add_node(node, **kwds)
@@ -368,6 +363,12 @@ package.
.. versionadded:: 1.4
.. method:: Sphinx.add_env_collector(collector)
Register an environment collector class (refs: :ref:`collector-api`)
.. versionadded:: 1.6
.. method:: Sphinx.require_sphinx(version)
Compare *version* (which must be a ``major.minor`` version string,
@@ -424,6 +425,10 @@ The application object also provides support for emitting leveled messages.
the build; just raise an exception (:exc:`sphinx.errors.SphinxError` or a
custom subclass) to do that.
.. deprecated:: 1.6
Please use :ref:`logging-api` instead.
.. automethod:: Sphinx.warn
.. automethod:: Sphinx.info


@@ -0,0 +1,9 @@
.. _collector-api:
Environment Collector API
-------------------------
.. module:: sphinx.environment.collectors
.. autoclass:: EnvironmentCollector
:members:


@@ -50,7 +50,9 @@ APIs used for writing extensions
appapi
envapi
builderapi
collectorapi
markupapi
domainapi
parserapi
nodes
logging

doc/extdev/logging.rst

@@ -0,0 +1,77 @@
.. _logging-api:
Logging API
===========
.. function:: sphinx.util.logging.getLogger(name)
Returns a logger wrapped by :class:`SphinxLoggerAdapter` with the specified *name*.
Example usage::
from sphinx.util import logging # Load on top of python's logging module
logger = logging.getLogger(__name__)
logger.info('Hello, this is an extension!')
.. class:: SphinxLoggerAdapter(logging.LoggerAdapter)
.. method:: SphinxLoggerAdapter.error(msg, *args, **kwargs)
.. method:: SphinxLoggerAdapter.critical(msg, *args, **kwargs)
.. method:: SphinxLoggerAdapter.warning(msg, *args, **kwargs)
Logs a message on this logger with the specified level.
The arguments are interpreted as for Python's :mod:`logging` module.
In addition, the Sphinx logger supports the following keyword arguments:
**type**, **subtype**
Categories of warning logs. They are used to suppress
warnings via the :confval:`suppress_warnings` setting.
**location**
Where the warning happened. It is used to include the path and line
number in each log message. It accepts a docname, a tuple of docname
and line number, or a node::
logger = sphinx.util.logging.getLogger(__name__)
logger.warning('Warning happened!', location='index')
logger.warning('Warning happened!', location=('chapter1/index', 10))
logger.warning('Warning happened!', location=some_node)
**color**
The color of logs. By default, warning level logs are
colored as ``"darkred"``. The others are not colored.
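Sphinx's adapter builds on the standard library's ``logging.LoggerAdapter``. A minimal sketch (not Sphinx's actual implementation) of how such an adapter can accept the extra keywords and move them onto each log record:

```python
import logging

class DemoAdapter(logging.LoggerAdapter):
    # Keywords accepted in addition to the standard logging ones;
    # they are moved into the record's ``extra`` dict.
    KEYWORDS = ['type', 'subtype', 'location', 'nonl', 'color']

    def process(self, msg, kwargs):
        extra = kwargs.setdefault('extra', {})
        for keyword in self.KEYWORDS:
            if keyword in kwargs:
                extra[keyword] = kwargs.pop(keyword)
        return msg, kwargs

logger = DemoAdapter(logging.getLogger(__name__), {})
logger.warning('duplicate label: %s', 'index', type='myext', location='index')
```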
.. method:: SphinxLoggerAdapter.log(level, msg, *args, **kwargs)
.. method:: SphinxLoggerAdapter.info(msg, *args, **kwargs)
.. method:: SphinxLoggerAdapter.verbose(msg, *args, **kwargs)
.. method:: SphinxLoggerAdapter.debug(msg, *args, **kwargs)
Logs a message to this logger with the specified level.
The arguments are interpreted as for Python's :mod:`logging` module.
In addition, the Sphinx logger supports the following keyword arguments:
**nonl**
If true, the logger does not fold lines at the end of the log message.
The default is ``False``.
**color**
The color of logs. By default, debug level logs are
colored as ``"darkgray"``, and debug2 level ones are ``"lightgray"``.
The others are not colored.
.. function:: pending_logging()
Marks all logs as pending::
with pending_logging():
logger.warning('Warning message!') # not flushed yet
some_long_process()
# the warning is flushed here
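A comparable deferred-flush setup can be built from the standard library alone; the following sketch uses ``logging.handlers.MemoryHandler`` and is only an analogy, not Sphinx's implementation:

```python
import logging
from logging.handlers import MemoryHandler

# Buffer records instead of emitting them immediately; the flush level is
# set above CRITICAL so nothing is flushed automatically.
stream = logging.StreamHandler()
buffering = MemoryHandler(capacity=1000, flushLevel=logging.CRITICAL + 1,
                          target=stream)

log = logging.getLogger('pending-demo')
log.addHandler(buffering)
log.propagate = False  # keep the demo self-contained

log.warning('Warning message!')  # buffered, not emitted yet
buffering.flush()                # the warning is emitted here
```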
.. function:: pending_warnings()
Marks warning logs as pending. Similar to :func:`pending_logging`.


@@ -55,4 +55,3 @@ You should not need to generate the nodes below in extensions.
.. autoclass:: start_of_file
.. autoclass:: productionlist
.. autoclass:: production
.. autoclass:: termsep


@@ -246,7 +246,6 @@ todolist directive has neither content nor arguments that need to be handled.
The ``todo`` directive function looks like this::
from sphinx.util.compat import make_admonition
from sphinx.locale import _
class TodoDirective(Directive):
@@ -260,20 +259,20 @@ The ``todo`` directive function looks like this::
targetid = "todo-%d" % env.new_serialno('todo')
targetnode = nodes.target('', '', ids=[targetid])
ad = make_admonition(todo, self.name, [_('Todo')], self.options,
self.content, self.lineno, self.content_offset,
self.block_text, self.state, self.state_machine)
todo_node = todo('\n'.join(self.content))
todo_node += nodes.title(_('Todo'), _('Todo'))
self.state.nested_parse(self.content, self.content_offset, todo_node)
if not hasattr(env, 'todo_all_todos'):
env.todo_all_todos = []
env.todo_all_todos.append({
'docname': env.docname,
'lineno': self.lineno,
'todo': ad[0].deepcopy(),
'todo': todo_node.deepcopy(),
'target': targetnode,
})
return [targetnode] + ad
return [targetnode, todo_node]
Several important things are covered here. First, as you can see, you can refer
to the build environment instance using ``self.state.document.settings.env``.
@@ -285,11 +284,10 @@ returns a new unique integer on each call and therefore leads to unique target
names. The target node is instantiated without any text (the first two
arguments).
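Conceptually, ``new_serialno`` is a per-category counter kept on the build environment, as in this simplified sketch (``FakeEnvironment`` is illustrative, not the real class):

```python
class FakeEnvironment:
    """Per-category counters mimicking BuildEnvironment.new_serialno."""

    def __init__(self):
        self._serials = {}

    def new_serialno(self, category=''):
        # Return the current count for *category*, then advance it,
        # so successive calls yield 0, 1, 2, ...
        current = self._serials.get(category, 0)
        self._serials[category] = current + 1
        return current

env = FakeEnvironment()
targetid = 'todo-%d' % env.new_serialno('todo')  # 'todo-0' on the first call
```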
An admonition is created using a standard docutils function (wrapped in Sphinx
for docutils cross-version compatibility). The first argument gives the node
class, in our case ``todo``. The third argument gives the admonition title (use
``arguments`` here to let the user specify the title). A list of nodes is
returned from ``make_admonition``.
When creating the admonition node, the content body of the directive is parsed
using ``self.state.nested_parse``. The first argument gives the content body,
and the second one the content offset. The third argument gives the parent
node of the parsed result, in our case the ``todo`` node.
Then, the todo node is added to the environment. This is needed to be able to
create a list of all todo entries throughout the documentation, in the place


@@ -201,7 +201,7 @@ The following list gives some hints for the creation of epub files:
Error(prcgen):E24011: TOC section scope is not included in the parent chapter:(title)
Error(prcgen):E24001: The table of content could not be built.
.. _Epubcheck: https://code.google.com/archive/p/epubcheck
.. _Epubcheck: https://github.com/IDPF/epubcheck
.. _Calibre: http://calibre-ebook.com/
.. _FBreader: https://fbreader.org/
.. _Bookworm: http://www.oreilly.com/bookworm/index.html

View File

@@ -79,8 +79,8 @@ sidebar and under "Quick Links", click "Windows Installer" to download.
.. note::
Currently, Python offers two major versions, 2.x and 3.x. Sphinx 1.3 can run
under Python 2.7, 3.4, 3.5, with the recommended version being 2.7. This
Currently, Python offers two major versions, 2.x and 3.x. Sphinx 1.5 can run
under Python 2.7, 3.4, 3.5, 3.6, with the recommended version being 2.7. This
chapter assumes you have installed Python 2.7.
Follow the Windows installer for Python.


@@ -74,7 +74,7 @@ Quick guide
^^^^^^^^^^^
`sphinx-intl`_ is a useful tool to work with Sphinx translation flow.
This section describe a easy way to translate with sphinx-intl.
This section describes an easy way to translate with sphinx-intl.
#. Install `sphinx-intl`_ by :command:`pip install sphinx-intl` or
:command:`easy_install sphinx-intl`.
@@ -94,14 +94,14 @@ This section describe a easy way to translate with sphinx-intl.
$ make gettext
As a result, many pot files are generated under ``_build/locale``
As a result, many pot files are generated under ``_build/gettext``
directory.
#. Setup/Update your `locale_dir`:
.. code-block:: console
$ sphinx-intl update -p _build/locale -l de -l ja
$ sphinx-intl update -p _build/gettext -l de -l ja
Done. You got these directories that contain po files:


@@ -8,9 +8,9 @@ Invocation of sphinx-quickstart
The :program:`sphinx-quickstart` script generates a Sphinx documentation set.
It is called like this:
.. code-block:: console
.. code-block:: console
$ sphinx-quickstart [options] [projectdir]
$ sphinx-quickstart [options] [projectdir]
where *projectdir* is the directory in which you want to place the generated
Sphinx documentation set. If you omit *projectdir*, files are generated into
the current directory
@@ -180,7 +180,7 @@ called like this:
.. code-block:: console
$ sphinx-build [options] sourcedir builddir [filenames]
$ sphinx-build [options] sourcedir builddir [filenames]
where *sourcedir* is the :term:`source directory`, and *builddir* is the
directory in which you want to place the built documentation. Most of the time,
@@ -384,6 +384,15 @@ You can also give one or more filenames on the command line after the source and
build directories. Sphinx will then try to build only these output files (and
their dependencies).
Environment variables
---------------------
The :program:`sphinx-build` command refers to the following environment variables:
.. describe:: MAKE
A path to the make command. A command name is also allowed.
:program:`sphinx-build` uses it to invoke the sub-build process in make-mode.
Makefile options
----------------
@@ -395,7 +404,7 @@ variables to customize behavior:
.. describe:: PAPER
The value for :confval:`latex_paper_size`.
The value for the ``'papersize'`` key of :confval:`latex_elements`.
.. describe:: SPHINXBUILD
@@ -438,7 +447,7 @@ for a Python package. It is called like this:
.. code-block:: console
$ sphinx-apidoc [options] -o outputdir packagedir [pathnames]
$ sphinx-apidoc [options] -o outputdir packagedir [pathnames]
where *packagedir* is the path to the package to document, and *outputdir* is
the directory where the generated sources are placed. Any *pathnames* given


@@ -14,13 +14,19 @@ The *latex* target does not benefit from pre-prepared themes like the
.. raw:: latex
\begingroup
\sphinxsetup{verbatimwithframe=false,%
VerbatimColor={named}{OldLace}, TitleColor={named}{DarkGoldenrod},%
hintBorderColor={named}{LightCoral}, attentionBgColor={named}{LightPink},%
attentionborder=3pt, attentionBorderColor={named}{Crimson},%
noteBorderColor={named}{Olive}, noteborder=2pt,%
cautionBorderColor={named}{Cyan}, cautionBgColor={named}{LightCyan},%
cautionborder=3pt}
\sphinxsetup{%
verbatimwithframe=false,
VerbatimColor={named}{OldLace},
TitleColor={named}{DarkGoldenrod},
hintBorderColor={named}{LightCoral},
attentionborder=3pt,
attentionBorderColor={named}{Crimson},
attentionBgColor={named}{FloralWhite},
noteborder=2pt,
noteBorderColor={named}{Olive},
cautionborder=3pt,
cautionBorderColor={named}{Cyan},
cautionBgColor={named}{LightCyan}}
\relax
@@ -75,6 +81,8 @@ configured, for example::
latex_additional_files = ["mystyle.sty"]
.. _latexsphinxsetup:
The Sphinx LaTeX style package options
--------------------------------------
@@ -102,20 +110,26 @@ If non-empty, it will be passed as argument to the ``\sphinxsetup`` command::
the whole ``'sphinxsetup'`` string is passed as argument to
``\sphinxsetup``.
- As an alternative to the ``'sphinxsetup'`` key, it is possibly
to insert explicitely the ``\\sphinxsetup{key=value,..}`` inside the
- As an alternative to the ``'sphinxsetup'`` key, it is possible
to insert the ``\\sphinxsetup{key=value,..}`` inside the
``'preamble'`` key. It is even possible to use the ``\sphinxsetup`` in
the body of the document, via the :rst:dir:`raw` directive, to modify
dynamically the option values: this is actually what we did for the
duration of this chapter for the PDF output, which is styled using::
\sphinxsetup{%
verbatimwithframe=false,
VerbatimColor={named}{OldLace}, TitleColor={named}{DarkGoldenrod},
hintBorderColor={named}{LightCoral}, attentionBgColor={named}{LightPink},
attentionborder=3pt, attentionBorderColor={named}{Crimson},
noteBorderColor={named}{Olive}, noteborder=2pt,
cautionBorderColor={named}{Cyan}, cautionBgColor={named}{LightCyan},
cautionborder=3pt
VerbatimColor={named}{OldLace},
TitleColor={named}{DarkGoldenrod},
hintBorderColor={named}{LightCoral},
attentionborder=3pt,
attentionBorderColor={named}{Crimson},
attentionBgColor={named}{FloralWhite},
noteborder=2pt,
noteBorderColor={named}{Olive},
cautionborder=3pt,
cautionBorderColor={named}{Cyan},
cautionBgColor={named}{LightCyan}}
and with the ``svgnames`` option having been passed to "xcolor" package::
@@ -132,25 +146,85 @@ Here are the currently available options together with their default values.
rendering by Sphinx; if in future Sphinx offers various *themes* for LaTeX,
the interface may change.
.. attention::
LaTeX requires for keys with Boolean values to use **lowercase** ``true`` or
``false``.
.. _latexsphinxsetuphmargin:
``hmargin``
The dimensions of the horizontal margins. Legacy Sphinx default value is
``1in`` (which stands for ``{1in,1in}``.) It is passed over as ``hmargin``
option to ``geometry`` package.
Here is an example for non-Japanese documents of use of this key::
'sphinxsetup': 'hmargin={2in,1.5in}, vmargin={1.5in,2in}, marginpar=1in',
Japanese documents currently accept only the form with a single dimension.
This option is then handled in a special manner in order for the ``geometry``
package to set the text width to an exact multiple of the *zenkaku* width
of the base document font.
.. hint::
For a ``'manual'`` type document with :confval:`language` set to
``'ja'``, which by default uses the ``jsbook`` LaTeX document class, the
dimension units, when the pointsize isn't ``10pt``, must be so-called TeX
"true" units::
'sphinxsetup': 'hmargin=1.5truein, vmargin=1.5truein, marginpar=5zw',
This is due to the way the LaTeX class ``jsbook`` handles the
pointsize.
Or, one uses regular units but with ``nomag`` as extra document class
option (cf. ``'extraclassoptions'`` key of :confval:`latex_elements`.)
.. versionadded:: 1.5.3
``vmargin``
The dimension of the vertical margins. Legacy Sphinx default value is
``1in`` (or ``{1in,1in}``.) Passed over as ``vmargin`` option to
``geometry``.
Japanese documents will arrange for the text height to be an integer
multiple of the baselineskip, taking the closest match suitable for the
asked-for vertical margin. It can then be only one dimension. See notice
above.
.. versionadded:: 1.5.3
``marginpar``
The ``\marginparwidth`` LaTeX dimension, defaults to ``0.5in``. For Japanese
documents, the value is modified to be the closest integer multiple of the
*zenkaku* width.
.. versionadded:: 1.5.3
``verbatimwithframe``
default ``true``. Boolean to specify if :rst:dir:`code-block`\ s and literal
includes are framed. Setting it to ``false`` does not deactivate use of
package "framed", because it is still in use for the optional background
colour (see below).
.. attention::
LaTeX requires ``true`` or ``false`` to be specified in *lowercase*.
``verbatimwrapslines``
default ``true``. Tells whether long lines in :rst:dir:`code-block`\ s
should be wrapped.
default ``true``. Tells whether long lines in :rst:dir:`code-block`\ 's
contents should wrap.
.. (comment) It is theoretically possible to customize this even
more and decide at which characters a line-break can occur and whether
before or after, but this is accessible currently only by re-defining some
macros with complicated LaTeX syntax from :file:`sphinx.sty`.
``parsedliteralwraps``
default ``true``. Tells whether long lines in :dudir:`parsed-literal`\ 's
contents should wrap.
.. versionadded:: 1.5.2
set this option value to ``false`` to recover former behaviour.
``inlineliteralwraps``
default ``true``. Allows linebreaks inside inline literals: but extra
potential break-points (additionally to those allowed by LaTeX at spaces
@@ -160,7 +234,7 @@ Here are the currently available options together with their default values.
(or shrunk) in order to accommodate the linebreak.
.. versionadded:: 1.5
set this option to ``false`` to recover former behaviour.
set this option value to ``false`` to recover former behaviour.
``verbatimvisiblespace``
default ``\textcolor{red}{\textvisiblespace}``. When a long code line is
@@ -319,9 +393,10 @@ Here are the currently available options together with their default values.
(non-breakable) space.
.. versionadded:: 1.5
formerly, footnotes from explicit mark-up (but not automatically
generated ones) were preceded by a space in the output ``.tex`` file
hence a linebreak in PDF was possible. To avoid insertion of this space
one could use ``foo\ [#f1]`` mark-up, but this impacts all builders.
``HeaderFamily``
default ``\sffamily\bfseries``. Sets the font used by headings.
@@ -339,16 +414,20 @@ Let us now list some macros from the package file
- text styling commands (they have one argument): ``\sphinx<foo>`` with
``<foo>`` being one of ``strong``, ``bfcode``, ``email``, ``tablecontinued``,
``titleref``, ``menuselection``, ``accelerator``, ``crossref``, ``termref``,
``optional``. The non-prefixed macros will still be defined if option
:confval:`latex_keep_old_macro_names` has been set to ``True`` (default is
``False``), in which case the prefixed macros expand to the
non-prefixed ones.
.. versionchanged:: 1.6
The default value of :confval:`latex_keep_old_macro_names` changes to
``False``, and even if set to ``True``, if a non-prefixed macro
already exists at ``sphinx.sty`` loading time, only the ``\sphinx``
prefixed one will be defined. The setting will be removed at 1.7.
.. versionchanged:: 1.4.5
use of ``\sphinx`` prefixed macro names to limit possibilities of conflict
with user added packages: if
:confval:`latex_keep_old_macro_names` is set to ``False`` in
:file:`conf.py` only the prefixed names are defined.
- more text styling commands: ``\sphinxstyle<bar>`` with ``<bar>`` one of
``indexentry``, ``indexextra``, ``indexpageref``, ``topictitle``,
``sidebartitle``, ``othertitle``, ``sidebarsubtitle``, ``thead``,

doc/markdown.rst Normal file

@@ -0,0 +1,45 @@
.. highlightlang:: python
.. _markdown:
Markdown support
================
`Markdown <https://daringfireball.net/projects/markdown/>`__ is a lightweight markup language with a simplistic plain
text formatting syntax.
It exists in many syntactically different *flavors*.
To support Markdown-based documentation, Sphinx can use
`recommonmark <http://recommonmark.readthedocs.io/en/latest/index.html>`__.
recommonmark is a Docutils bridge to `CommonMark-py <https://github.com/rtfd/CommonMark-py>`__, a
Python package for parsing the `CommonMark <http://commonmark.org/>`__ Markdown flavor.
Configuration
-------------
To configure your Sphinx project for Markdown support, proceed as follows:
#. Install recommonmark:
::
pip install recommonmark
#. Add the Markdown parser to the ``source_parsers`` configuration variable in your Sphinx configuration file:
::
source_parsers = {
'.md': 'recommonmark.parser.CommonMarkParser',
}
You can replace ``.md`` with a filename extension of your choice.
#. Add the Markdown filename extension to the ``source_suffix`` configuration variable:
::
source_suffix = ['.rst', '.md']
#. You can further configure recommonmark to allow custom syntax that standard CommonMark doesn't support. Read more in
the `recommonmark documentation <http://recommonmark.readthedocs.io/en/latest/auto_structify.html>`__.
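Putting the two settings together, a minimal ``conf.py`` fragment for
Markdown support might look like this (assuming recommonmark is installed):

```python
# Minimal conf.py fragment enabling Markdown sources alongside reST.
# Files ending in .md are handed to recommonmark's CommonMark parser.
source_parsers = {
    '.md': 'recommonmark.parser.CommonMarkParser',
}
source_suffix = ['.rst', '.md']
```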


@@ -184,9 +184,17 @@ Includes
string option, only lines that precede the first lines containing that string
are included.
With lines selected using ``start-after`` it is still possible to use
``lines``, the first allowed line having by convention the line number ``1``.
When lines have been selected in any of the ways described above, the
line numbers in ``emphasize-lines`` also refer to the selection, with the
first selected line having number ``1``.
When specifying particular parts of a file to display, it can be useful to
display the original line numbers. This can be done using the
``lineno-match`` option, which is however allowed only when the selection
consists of contiguous lines.
You can prepend and/or append a line to the included code, using the
``prepend`` and ``append`` option, respectively. This is useful e.g. for
@@ -212,7 +220,9 @@ Includes
.. versionadded:: 1.3
The ``diff`` option.
The ``lineno-match`` option.
.. versionchanged:: 1.6
With both ``start-after`` and ``lines`` in use, the first line as per
``start-after`` is considered to be with line number ``1`` for ``lines``.
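For instance, a hypothetical include combining these options (the file name
and the marker comment are only illustrative) could be written as:

```rst
.. literalinclude:: example.py
   :start-after: # -- demo section --
   :lines: 1-10
   :emphasize-lines: 2
```

Here ``lines`` and ``emphasize-lines`` both count from the first line after
the marker, which has number ``1``.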
Caption and name
^^^^^^^^^^^^^^^^
@@ -232,7 +242,7 @@ For example::
:rst:dir:`literalinclude` also supports the ``caption`` and ``name`` option.
``caption`` has an additional feature that if you leave the value empty, the shown
filename will be exactly the one given as an argument.


@@ -233,35 +233,80 @@ following directive exists:
|``J``| justified column with automatic width |
+-----+------------------------------------------+
The automatic widths of the ``LRCJ`` columns are attributed by ``tabulary``
in proportion to the observed shares in a first pass where the table cells
are rendered at their natural "horizontal" widths.
.. hint::
For columns which are known to be much narrower than the others it is
recommended to use the lowercase specifiers. For more information, check
the ``tabulary`` manual.
By default, Sphinx uses a table layout with ``J`` for every column.
.. versionadded:: 0.3
.. versionchanged:: 1.6
Merged cells may now contain multiple paragraphs and are much better
handled, thanks to custom Sphinx LaTeX macros. This novel situation
motivated the switch to ``J`` specifier and not ``L`` by default.
.. hint::
Sphinx actually uses ``T`` specifier having done ``\newcolumntype{T}{J}``.
To revert to previous default, insert ``\newcolumntype{T}{L}`` in the
LaTeX preamble (see :confval:`latex_elements`).
A frequent issue with tabulary is that columns with little contents are
"squeezed". The minimal column width is a tabulary parameter called
``\tymin``. You may set it globally in the LaTeX preamble via
``\setlength{\tymin}{40pt}`` for example.
Else, use the :rst:dir:`tabularcolumns` directive with an explicit
``p{40pt}`` (for example) for that column. You may use also ``l``
specifier but this makes the task of setting column widths more difficult
if some merged cell intersects that column.
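Setting ``\tymin`` globally can be done from ``conf.py`` through the LaTeX
preamble; a sketch (the ``40pt`` value is just an example):

```python
# Illustrative conf.py fragment: raise tabulary's minimal column width
# so that columns with little content are not squeezed in PDF output.
latex_elements = {
    'preamble': r'\setlength{\tymin}{40pt}',
}
```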
.. warning::
Tables with more than 30 rows are rendered using ``longtable``, not
``tabulary``, in order to allow pagebreaks. The ``L``, ``R``, ... specifiers
do not work for these tables.
Tables that contain list-like elements such as object descriptions,
blockquotes or any kind of lists cannot be set out of the box with
``tabulary``. They are therefore set with the standard LaTeX ``tabular`` (or
``longtable``) environment if you don't give a ``tabularcolumns`` directive.
If you do, the table will be set with ``tabulary`` but you must use the
``p{width}`` construct (or Sphinx's ``\X`` and ``\Y`` specifiers described
below) for the columns containing these elements.
Literal blocks do not work with ``tabulary`` at all, so tables containing
a literal block are always set with ``tabular``. The verbatim environment
used for literal blocks only works in ``p{width}`` (and ``\X`` or ``\Y``)
columns, hence Sphinx generates such column specs for tables containing
literal blocks.
Since Sphinx 1.5, the ``\X{a}{b}`` specifier is used (there *is* a backslash
in the specifier letter). It is like ``p{width}`` with the width set to a
fraction ``a/b`` of the current line width. You can use it in the
:rst:dir:`tabularcolumns` (it is not a problem if some LaTeX macro is also
called ``\X``.)
It is *not* needed for ``b`` to be the total number of columns, nor for the
sum of the fractions of the ``\X`` specifiers to add up to one. For example
``|\X{2}{5}|\X{1}{5}|\X{1}{5}|`` is legitimate and the table will occupy
80% of the line width, the first of its three columns having the same width
as the sum of the next two.
This is used by the ``:widths:`` option of the :dudir:`table` directive.
Since Sphinx 1.6, there is also the ``\Y{f}`` specifier which admits a
decimal argument, such as ``\Y{0.15}``: this would have the same effect as
``\X{3}{20}``.
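As an illustration, the 80%-width three-column layout described above could
be requested with:

```rst
.. tabularcolumns:: |\X{2}{5}|\X{1}{5}|\X{1}{5}|
```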
.. versionchanged:: 1.6
Merged cells from complex grid tables (either multi-row, multi-column, or
both) now allow blockquotes, lists, literal blocks, ... as do regular cells.
Sphinx's merged cells interact well with ``p{width}``, ``\X{a}{b}``, ``\Y{f}``
and tabulary's columns.
.. rubric:: Footnotes


@@ -41,7 +41,7 @@ tables of contents. The ``toctree`` directive is the central element.
* Tables of contents from all those documents are inserted, with a maximum
depth of two, that means one nested heading. ``toctree`` directives in
those documents are also taken into account.
* Sphinx knows the relative order of the documents ``intro``,
``strings`` and so forth, and it knows that they are children of the shown
document, the library index. From this information it generates "next
chapter", "previous chapter" and "parent chapter" links.


@@ -226,7 +226,7 @@ as long as the text::
Normally, there are no heading levels assigned to certain characters as the
structure is determined from the succession of headings. However, this
convention is used in `Python's Style Guide for documenting
<https://docs.python.org/devguide/documenting.html#style-guide>`_ which you may
follow:

doc/setuptools.rst Normal file

@@ -0,0 +1,178 @@
.. _setuptools:
Setuptools integration
======================
Sphinx supports integration with setuptools and distutils through a custom
command - :class:`~sphinx.setup_command.BuildDoc`.
Using setuptools integration
----------------------------
The Sphinx build can then be triggered from distutils, and some Sphinx
options can be set in ``setup.py`` or ``setup.cfg`` instead of Sphinx's own
configuration file.
For instance, from ``setup.py``::
# this is only necessary when not using setuptools/distribute
from sphinx.setup_command import BuildDoc
cmdclass = {'build_sphinx': BuildDoc}
name = 'My project'
version = '1.2'
release = '1.2.0'
setup(
name=name,
author='Bernard Montgomery',
version=release,
cmdclass=cmdclass,
# these are optional and override conf.py settings
command_options={
'build_sphinx': {
'project': ('setup.py', name),
'version': ('setup.py', version),
'release': ('setup.py', release)}},
)
Or add this section in ``setup.cfg``::
[build_sphinx]
project = 'My project'
version = 1.2
release = 1.2.0
Once configured, run the build by calling the relevant command on ``setup.py``::
$ python setup.py build_sphinx
Options for setuptools integration
----------------------------------
.. confval:: fresh-env
A boolean that determines whether the saved environment should be discarded
on build. Default is false.
This can also be set by passing the ``-E`` flag to ``setup.py``:
.. code-block:: bash
$ python setup.py build_sphinx -E
.. confval:: all-files
A boolean that determines whether all files should be built from scratch.
Default is false.
This can also be set by passing the ``-a`` flag to ``setup.py``:
.. code-block:: bash
$ python setup.py build_sphinx -a
.. confval:: source-dir
The target source directory. This can be relative to the ``setup.py`` or
``setup.cfg`` file, or it can be absolute. Default is ``''``.
This can also be set by passing the ``-s`` flag to ``setup.py``:
.. code-block:: bash
$ python setup.py build_sphinx -s $SOURCE_DIR
.. confval:: build-dir
The target build directory. This can be relative to the ``setup.py`` or
``setup.cfg`` file, or it can be absolute. Default is ``''``.
.. confval:: config-dir
Location of the configuration directory. This can be relative to the
``setup.py`` or ``setup.cfg`` file, or it can be absolute. Default is
``''``.
This can also be set by passing the ``-c`` flag to ``setup.py``:
.. code-block:: bash
$ python setup.py build_sphinx -c $CONFIG_DIR
.. versionadded:: 1.0
.. confval:: builder
The builder or list of builders to use. Default is ``html``.
This can also be set by passing the ``-b`` flag to ``setup.py``:
.. code-block:: bash
$ python setup.py build_sphinx -b $BUILDER
.. versionchanged:: 1.6
This can now be a comma- or space-separated list of builders
.. confval:: warning-is-error
A boolean that ensures Sphinx warnings will result in a failed build.
Default is false.
This can also be set by passing the ``-W`` flag to ``setup.py``:
.. code-block:: bash
$ python setup.py build_sphinx -W
.. versionadded:: 1.5
.. confval:: project
The documented project's name. Default is ``''``.
.. versionadded:: 1.0
.. confval:: version
The short X.Y version. Default is ``''``.
.. versionadded:: 1.0
.. confval:: release
The full version, including alpha/beta/rc tags. Default is ``''``.
.. versionadded:: 1.0
.. confval:: today
How to format the current date, used as the replacement for ``|today|``.
Default is ``''``.
.. versionadded:: 1.0
.. confval:: link-index
A boolean that ensures index.html will be linked to the master doc. Default
is false.
This can also be set by passing the ``-i`` flag to ``setup.py``:
.. code-block:: bash
$ python setup.py build_sphinx -i
.. versionadded:: 1.0
.. confval:: copyright
The copyright string. Default is ``''``.
.. versionadded:: 1.3
.. confval:: pdb
A boolean to configure ``pdb`` on exception. Default is false.
.. versionadded:: 1.5
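As a sketch, several of these options can be combined in one ``setup.cfg``
section (the directory names below are only illustrative):

```ini
[build_sphinx]
source-dir = doc
build-dir = doc/_build
builder = html
fresh-env = 1
warning-is-error = 1
```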


@@ -307,31 +307,12 @@ in the future.
The value of :confval:`master_doc`, for usage with :func:`pathto`.
.. data:: next
The next document for the navigation. This variable is either false or has
two attributes `link` and `title`. The title contains HTML markup. For
example, to generate a link to the next page, you can use this snippet::
{% if next %}
<a href="{{ next.link|e }}">{{ next.title }}</a>
{% endif %}
.. data:: pagename
The "page name" of the current file, i.e. either the document name if the
file is generated from a reST source, or the equivalent hierarchical name
relative to the output directory (``[directory/]filename_without_extension``).
.. data:: parents
A list of parent documents for navigation, structured like the :data:`next`
item.
.. data:: prev
Like :data:`next`, but for the previous page.
.. data:: project
The value of :confval:`project`.
@@ -385,16 +366,58 @@ In documents that are created from source files (as opposed to
automatically-generated files like the module index, or documents that already
are in HTML form), these variables are also available:
.. data:: body
A string containing the content of the page in HTML form as produced by the HTML builder,
before the theme is applied.
.. data:: display_toc
A boolean that is True if the toc contains more than one entry.
.. data:: meta
Document metadata (a dictionary), see :ref:`metadata`.
.. data:: metatags
A string containing the page's HTML :dudir:`meta` tags.
.. data:: next
The next document for the navigation. This variable is either false or has
two attributes `link` and `title`. The title contains HTML markup. For
example, to generate a link to the next page, you can use this snippet::
{% if next %}
<a href="{{ next.link|e }}">{{ next.title }}</a>
{% endif %}
.. data:: page_source_suffix
The suffix of the file that was rendered. Since we support a list of :confval:`source_suffix`,
this will allow you to properly link to the original source file.
.. data:: parents
A list of parent documents for navigation, structured like the :data:`next`
item.
.. data:: prev
Like :data:`next`, but for the previous page.
.. data:: sourcename
The name of the copied source file for the current document. This is only
nonempty if the :confval:`html_copy_source` value is ``True``.
This is empty for automatically-generated files.
.. data:: title
The page title.
.. data:: toc
The local table of contents for the current page, rendered as HTML bullet
@@ -417,7 +440,4 @@ are in HTML form), these variables are also available:
* ``includehidden`` (``False`` by default): if true, the TOC tree will also
contain hidden entries.
.. data:: page_source_suffix
The suffix of the file that was rendered. Since we support a list of :confval:`source_suffix`,
this will allow you to properly link to the original source file.


@@ -308,6 +308,7 @@ More topics to be covered
* ...
- Static files
- :doc:`Selecting a theme <theming>`
- :doc:`setuptools`
- :ref:`Templating <templating>`
- Using extensions
- :ref:`Writing extensions <dev-extensions>`


@@ -1,6 +1,8 @@
[mypy]
python_version = 2.7
silent_imports = True
ignore_missing_imports = True
follow_imports = skip
fast_parser = True
incremental = True
check_untyped_defs = True
warn_unused_ignores = True


@@ -24,6 +24,9 @@ directory = sphinx/locale/
universal = 1
[flake8]
max-line-length=95
ignore=E113,E116,E221,E226,E241,E251,E901
exclude=tests/*,build/*,sphinx/search/*,sphinx/pycode/pgen2/*,doc/ext/example*.py,.tox/*
max-line-length = 95
ignore = E116,E241,E251
exclude = .git,.tox,tests/*,build/*,sphinx/search/*,sphinx/pycode/pgen2/*,doc/ext/example*.py
[build_sphinx]
warning-is-error = 1


@@ -50,7 +50,7 @@ requires = [
'babel>=1.3,!=2.0',
'alabaster>=0.7,<0.8',
'imagesize',
'requests',
'requests>=2.0.0',
'typing',
]
extras_require = {
@@ -63,7 +63,7 @@ extras_require = {
'whoosh>=2.0',
],
'test': [
'nose',
'pytest',
'mock', # it would be better for 'test:python_version in 2.7'
'simplejson', # better: 'test:platform_python_implementation=="PyPy"'
'html5lib',


@@ -21,6 +21,10 @@ from os import path
from .deprecation import RemovedInNextVersionWarning
if False:
# For type annotation
from typing import List # NOQA
# by default, all DeprecationWarning under sphinx package will be emitted.
# Users can avoid this by using environment variable: PYTHONWARNINGS=
if 'PYTHONWARNINGS' not in os.environ:
@@ -30,7 +34,7 @@ if 'PYTHONWARNINGS' not in os.environ:
warnings.filterwarnings('ignore', "'U' mode is deprecated",
DeprecationWarning, module='docutils.io')
__version__ = '1.6'
__released__ = '1.6+' # used when Sphinx builds its own docs
# version info for better programmatic use
@@ -42,7 +46,7 @@ package_dir = path.abspath(path.dirname(__file__))
__display_version__ = __version__ # used for command line version
if __version__.endswith('+'):
# try to find out the changeset hash if checked out from hg, and append
# try to find out the commit hash if checked out from git, and append
# it to __version__ (since we use this value from setup.py, it gets
# automatically propagated to an installed copy as well)
__display_version__ = __version__
@@ -60,13 +64,15 @@ if __version__.endswith('+'):
def main(argv=sys.argv):
# type: (List[str]) -> int
if sys.argv[1:2] == ['-M']:
sys.exit(make_main(argv))
return make_main(argv)
else:
sys.exit(build_main(argv))
return build_main(argv)
def build_main(argv=sys.argv):
# type: (List[str]) -> int
"""Sphinx build "main" command-line entry."""
if (sys.version_info[:3] < (2, 7, 0) or
(3, 0, 0) <= sys.version_info[:3] < (3, 4, 0)):
@@ -99,18 +105,19 @@ def build_main(argv=sys.argv):
return 1
raise
from sphinx.util.compat import docutils_version
if docutils_version < (0, 10):
import sphinx.util.docutils
if sphinx.util.docutils.__version_info__ < (0, 10):
sys.stderr.write('Error: Sphinx requires at least Docutils 0.10 to '
'run.\n')
return 1
return cmdline.main(argv)
return cmdline.main(argv) # type: ignore
def make_main(argv=sys.argv):
# type: (List[str]) -> int
"""Sphinx build "make mode" entry."""
from sphinx import make_mode
return make_mode.run_make_mode(argv[2:])
return make_mode.run_make_mode(argv[2:]) # type: ignore
if __name__ == '__main__':


@@ -9,10 +9,11 @@
:license: BSD, see LICENSE for details.
"""
import warnings
from docutils import nodes
from sphinx.deprecation import RemovedInSphinx16Warning
if False:
# For type annotation
from typing import List, Sequence # NOQA
class translatable(object):
@@ -30,14 +31,17 @@ class translatable(object):
"""
def preserve_original_messages(self):
# type: () -> None
"""Preserve original translatable messages."""
raise NotImplementedError
def apply_translated_message(self, original_message, translated_message):
# type: (unicode, unicode) -> None
"""Apply translated message."""
raise NotImplementedError
def extract_original_messages(self):
# type: () -> Sequence[unicode]
"""Extract translation messages.
:returns: list of extracted messages or messages generator
@@ -49,14 +53,17 @@ class toctree(nodes.General, nodes.Element, translatable):
"""Node for inserting a "TOC tree"."""
def preserve_original_messages(self):
if 'caption' in self:
# type: () -> None
if self.get('caption'):
self['rawcaption'] = self['caption']
def apply_translated_message(self, original_message, translated_message):
# type: (unicode, unicode) -> None
if self.get('rawcaption') == original_message:
self['caption'] = translated_message
def extract_original_messages(self):
# type: () -> List[unicode]
if 'rawcaption' in self:
return [self['rawcaption']]
else:
@@ -109,6 +116,7 @@ class desc_type(nodes.Part, nodes.Inline, nodes.TextElement):
class desc_returns(desc_type):
"""Node for a "returns" annotation (a la -> in Python)."""
def astext(self):
# type: () -> unicode
return ' -> ' + nodes.TextElement.astext(self)
@@ -130,6 +138,7 @@ class desc_optional(nodes.Part, nodes.Inline, nodes.TextElement):
child_text_separator = ', '
def astext(self):
# type: () -> unicode
return '[' + nodes.TextElement.astext(self) + ']'
@@ -273,19 +282,6 @@ class abbreviation(nodes.Inline, nodes.TextElement):
"""Node for abbreviations with explanations."""
class termsep(nodes.Structural, nodes.Element):
"""Separates two terms within a <term> node.
.. versionchanged:: 1.4
sphinx.addnodes.termsep is deprecated. It will be removed at Sphinx-1.6.
"""
def __init__(self, *args, **kw):
warnings.warn('sphinx.addnodes.termsep will be removed at Sphinx-1.6',
RemovedInSphinx16Warning, stacklevel=2)
super(termsep, self).__init__(*args, **kw)
class manpage(nodes.Inline, nodes.TextElement):
"""Node for references to manpages."""


@@ -25,10 +25,11 @@ from fnmatch import fnmatch
from sphinx.util.osutil import FileAvoidWrite, walk
from sphinx import __display_version__
from sphinx.quickstart import EXTENSIONS
if False:
# For type annotation
from typing import Any, Tuple # NOQA
from typing import Any, List, Tuple # NOQA
# automodule options
if 'SPHINX_APIDOC_OPTIONS' in os.environ:
@@ -273,7 +274,7 @@ def is_excluded(root, excludes):
e.g. an exclude "foo" also accidentally excluding "foobar".
"""
for exclude in excludes:
if fnmatch(root, exclude): # type: ignore
if fnmatch(root, exclude):
return True
return False
@@ -346,6 +347,11 @@ Note: By default this script will not overwrite already created files.""")
'defaults to --doc-version')
parser.add_option('--version', action='store_true', dest='show_version',
help='Show version information and exit')
group = parser.add_option_group('Extension options')
for ext in EXTENSIONS:
group.add_option('--ext-' + ext, action='store_true',
dest='ext_' + ext, default=False,
help='enable %s extension' % ext)
(opts, args) = parser.parse_args(argv[1:])
@@ -384,8 +390,8 @@ Note: By default this script will not overwrite already created files.""")
text += ' %s\n' % module
d = dict(
path = opts.destdir,
sep = False,
dot = '_',
sep = False,
dot = '_',
project = opts.header,
author = opts.author or 'Author',
version = opts.version or '',
@@ -404,6 +410,10 @@ Note: By default this script will not overwrite already created files.""")
module_path = rootpath,
append_syspath = opts.append_syspath,
)
enabled_exts = {'ext_' + ext: getattr(opts, 'ext_' + ext)
for ext in EXTENSIONS if getattr(opts, 'ext_' + ext)}
d.update(enabled_exts)
if isinstance(opts.header, binary_type):
d['project'] = d['project'].decode('utf-8')
if isinstance(opts.author, binary_type):
@@ -417,6 +427,7 @@ Note: By default this script will not overwrite already created files.""")
qs.generate(d, silent=True, overwrite=opts.force)
elif not opts.notoc:
create_modules_toc_file(modules, opts)
return 0
# So program can be started with "python -m sphinx.apidoc ..."


@@ -15,12 +15,13 @@ from __future__ import print_function
import os
import sys
import types
import warnings
import posixpath
import traceback
from os import path
from collections import deque
from six import iteritems, itervalues, text_type
from six import iteritems, itervalues
from six.moves import cStringIO
from docutils import nodes
@@ -30,35 +31,38 @@ from docutils.parsers.rst import convert_directive_function, \
import sphinx
from sphinx import package_dir, locale
from sphinx.config import Config
from sphinx.errors import SphinxError, SphinxWarning, ExtensionError, \
VersionRequirementError, ConfigError
from sphinx.errors import SphinxError, ExtensionError, VersionRequirementError, \
ConfigError
from sphinx.domains import ObjType
from sphinx.domains.std import GenericObject, Target, StandardDomain
from sphinx.deprecation import RemovedInSphinx17Warning, RemovedInSphinx20Warning
from sphinx.environment import BuildEnvironment
from sphinx.io import SphinxStandaloneReader
from sphinx.roles import XRefRole
from sphinx.util import pycompat # noqa: F401
from sphinx.util import import_object
from sphinx.util import logging
from sphinx.util import status_iterator, old_status_iterator, display_chunk
from sphinx.util.tags import Tags
from sphinx.util.osutil import ENOENT
from sphinx.util.logging import is_suppressed_warning
from sphinx.util.console import ( # type: ignore
bold, lightgray, darkgray, darkred, darkgreen, term_width_line
)
from sphinx.util.console import bold, darkgreen # type: ignore
from sphinx.util.docutils import is_html5_writer_available
from sphinx.util.i18n import find_catalog_source_files
if False:
# For type annotation
from typing import Any, Callable, IO, Iterable, Iterator, Tuple, Type, Union # NOQA
from typing import Any, Callable, Dict, IO, Iterable, Iterator, List, Tuple, Type, Union # NOQA
from docutils.parsers import Parser # NOQA
from docutils.transform import Transform # NOQA
from sphinx.builders import Builder # NOQA
from sphinx.domains import Domain # NOQA
from sphinx.domains import Domain, Index # NOQA
from sphinx.environment.collectors import EnvironmentCollector # NOQA
# List of all known core events. Maps name to arguments description.
events = {
'builder-inited': '',
'env-get-outdated': 'env, added, changed, removed',
'env-get-updated': 'env',
'env-purge-doc': 'env, docname',
'env-before-read-docs': 'env, docnames',
'source-read': 'docname, source text',
@@ -100,6 +104,13 @@ builtin_extensions = (
'sphinx.directives.other',
'sphinx.directives.patches',
'sphinx.roles',
# collectors should be loaded by specific order
'sphinx.environment.collectors.dependencies',
'sphinx.environment.collectors.asset',
'sphinx.environment.collectors.metadata',
'sphinx.environment.collectors.title',
'sphinx.environment.collectors.toctree',
'sphinx.environment.collectors.indexentries',
) # type: Tuple[unicode, ...]
CONFIG_FILENAME = 'conf.py'
@@ -109,6 +120,8 @@ ENV_PICKLE_FILENAME = 'environment.pickle'
# Values are Sphinx version that merge the extension.
EXTENSION_BLACKLIST = {"sphinxjp.themecore": "1.2"} # type: Dict[unicode, unicode]
logger = logging.getLogger(__name__)
class Sphinx(object):
@@ -116,7 +129,7 @@ class Sphinx(object):
confoverrides=None, status=sys.stdout, warning=sys.stderr,
freshenv=False, warningiserror=False, tags=None, verbosity=0,
parallel=0):
# type: (unicode, unicode, unicode, unicode, unicode, Dict, IO, IO, bool, bool, unicode, int, int) -> None # NOQA
# type: (unicode, unicode, unicode, unicode, unicode, Dict, IO, IO, bool, bool, List[unicode], int, int) -> None # NOQA
self.verbosity = verbosity
self.next_listener_id = 0
self._extensions = {} # type: Dict[unicode, Any]
@@ -151,6 +164,7 @@ class Sphinx(object):
self._warning = warning
self._warncount = 0
self.warningiserror = warningiserror
logging.setup(self, self._status, self._warning)
self._events = events.copy()
self._translators = {} # type: Dict[unicode, nodes.GenericNodeVisitor]
@@ -159,24 +173,24 @@ class Sphinx(object):
self.messagelog = deque(maxlen=10) # type: deque
# say hello to the world
self.info(bold('Running Sphinx v%s' % sphinx.__display_version__))
logger.info(bold('Running Sphinx v%s' % sphinx.__display_version__))
# status code for command-line application
self.statuscode = 0
if not path.isdir(outdir):
self.info('making output directory...')
logger.info('making output directory...')
os.makedirs(outdir)
# read config
self.tags = Tags(tags)
self.config = Config(confdir, CONFIG_FILENAME,
confoverrides or {}, self.tags)
self.config.check_unicode(self.warn)
self.config.check_unicode()
# defer checking types until i18n has been initialized
# initialize some limited config variables before loading extensions
self.config.pre_init_values(self.warn)
self.config.pre_init_values()
# check the Sphinx version if requested
if self.config.needs_sphinx and self.config.needs_sphinx > sphinx.__display_version__:
@@ -184,12 +198,6 @@ class Sphinx(object):
'This project needs at least Sphinx v%s and therefore cannot '
'be built with this version.' % self.config.needs_sphinx)
# force preload html_translator_class
if self.config.html_translator_class:
translator_class = self.import_object(self.config.html_translator_class,
'html_translator_class setting')
self.set_translator('html', translator_class)
# set confdir to srcdir if -C given (!= no confdir); a few pieces
# of code expect a confdir to be set
if self.confdir is None:
@@ -222,15 +230,15 @@ class Sphinx(object):
)
# now that we know all config values, collect them from conf.py
self.config.init_values(self.warn)
self.config.init_values()
# check extension versions if requested
if self.config.needs_extensions:
for extname, needs_ver in self.config.needs_extensions.items():
if extname not in self._extensions:
self.warn('needs_extensions config value specifies a '
'version requirement for extension %s, but it is '
'not loaded' % extname)
logger.warning('needs_extensions config value specifies a '
'version requirement for extension %s, but it is '
'not loaded', extname)
continue
has_ver = self._extension_metadata[extname]['version']
if has_ver == 'unknown version' or needs_ver > has_ver:
@@ -241,12 +249,12 @@ class Sphinx(object):
# check primary_domain if requested
if self.config.primary_domain and self.config.primary_domain not in self.domains:
self.warn('primary_domain %r not found, ignored.' % self.config.primary_domain)
logger.warning('primary_domain %r not found, ignored.', self.config.primary_domain)
# set up translation infrastructure
self._init_i18n()
# check all configuration values for permissible types
self.config.check_types(self.warn)
self.config.check_types()
# set up source_parsers
self._init_source_parsers()
# set up the build environment
@@ -262,8 +270,8 @@ class Sphinx(object):
the configuration.
"""
if self.config.language is not None:
self.info(bold('loading translations [%s]... ' %
self.config.language), nonl=True)
logger.info(bold('loading translations [%s]... ' % self.config.language),
nonl=True)
user_locale_dirs = [
path.join(self.srcdir, x) for x in self.config.locale_dirs]
# compile mo files if sphinx.po file in user locale directories are updated
@@ -278,9 +286,9 @@ class Sphinx(object):
if self.config.language is not None:
if has_translation or self.config.language == 'en':
# "en" never needs to be translated
self.info('done')
logger.info('done')
else:
self.info('not available for built-in messages')
logger.info('not available for built-in messages')
def _init_source_parsers(self):
# type: () -> None
@@ -294,27 +302,24 @@ class Sphinx(object):
# type: (bool) -> None
if freshenv:
self.env = BuildEnvironment(self.srcdir, self.doctreedir, self.config)
self.env.set_warnfunc(self.warn)
self.env.find_files(self.config)
self.env.find_files(self.config, self.buildername)
for domain in self.domains.keys():
self.env.domains[domain] = self.domains[domain](self.env)
else:
try:
self.info(bold('loading pickled environment... '), nonl=True)
logger.info(bold('loading pickled environment... '), nonl=True)
self.env = BuildEnvironment.frompickle(
self.srcdir, self.config, path.join(self.doctreedir, ENV_PICKLE_FILENAME))
self.env.set_warnfunc(self.warn)
self.env.init_managers()
self.env.domains = {}
for domain in self.domains.keys():
# this can raise if the data version doesn't fit
self.env.domains[domain] = self.domains[domain](self.env)
self.info('done')
logger.info('done')
except Exception as err:
if isinstance(err, IOError) and err.errno == ENOENT:
self.info('not yet created')
logger.info('not yet created')
else:
self.info('failed: %s' % err)
logger.info('failed: %s', err)
self._init_env(freshenv=True)
def _init_builder(self, buildername):
@@ -352,11 +357,11 @@ class Sphinx(object):
status = (self.statuscode == 0 and
'succeeded' or 'finished with problems')
if self._warncount:
self.info(bold('build %s, %s warning%s.' %
(status, self._warncount,
self._warncount != 1 and 's' or '')))
logger.info(bold('build %s, %s warning%s.' %
(status, self._warncount,
self._warncount != 1 and 's' or '')))
else:
self.info(bold('build %s.' % status))
logger.info(bold('build %s.' % status))
except Exception as err:
# delete the saved env to force a fresh build next time
envfile = path.join(self.doctreedir, ENV_PICKLE_FILENAME)
@@ -369,24 +374,8 @@ class Sphinx(object):
self.builder.cleanup()
# ---- logging handling ----------------------------------------------------
def _log(self, message, wfile, nonl=False):
# type: (unicode, IO, bool) -> None
try:
wfile.write(message)
except UnicodeEncodeError:
encoding = getattr(wfile, 'encoding', 'ascii') or 'ascii'
# wfile.write accepts only str, not bytes. So we encode and
# replace non-encodable characters, then decode them.
wfile.write(message.encode(encoding, 'replace').decode(encoding))
if not nonl:
wfile.write('\n')
if hasattr(wfile, 'flush'):
wfile.flush()
self.messagelog.append(message)
def warn(self, message, location=None, prefix='WARNING: ',
type=None, subtype=None, colorfunc=darkred):
def warn(self, message, location=None, prefix=None,
type=None, subtype=None, colorfunc=None):
# type: (unicode, unicode, unicode, unicode, unicode, Callable) -> None
"""Emit a warning.
@@ -403,21 +392,16 @@ class Sphinx(object):
:meth:`.BuildEnvironment.warn` since that will collect all
warnings during parsing for later output.
"""
if is_suppressed_warning(type, subtype, self.config.suppress_warnings):
return
if prefix:
warnings.warn('prefix option of warn() is now deprecated.',
RemovedInSphinx17Warning)
if colorfunc:
warnings.warn('colorfunc option of warn() is now deprecated.',
RemovedInSphinx17Warning)
if isinstance(location, tuple):
docname, lineno = location
if docname:
location = '%s:%s' % (self.env.doc2path(docname), lineno or '')
else:
location = None
warntext = location and '%s: %s%s\n' % (location, prefix, message) or \
'%s%s\n' % (prefix, message)
if self.warningiserror:
raise SphinxWarning(warntext)
self._warncount += 1
self._log(colorfunc(warntext), self._warning, True)
warnings.warn('app.warning() is now deprecated. Use sphinx.util.logging instead.',
RemovedInSphinx20Warning)
logger.warning(message, type=type, subtype=subtype, location=location)
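The removed `warn()` body above shows the migration target: `sphinx.util.logging`, which is built on the standard `logging` module. A minimal stdlib-only sketch of the new pattern (the logger name `myext` and the helper names are illustrative, not Sphinx API):

```python
import logging

# In Sphinx this would be sphinx.util.logging.getLogger(__name__); that
# wrapper is built on the standard logging module sketched here.
logger = logging.getLogger("myext")
logger.propagate = False

def collect_warnings(func):
    """Run *func* and return the warning messages it logged."""
    records = []
    handler = logging.Handler()
    handler.emit = lambda record: records.append(record.getMessage())
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)
    try:
        func()
    finally:
        logger.removeHandler(handler)
    return records

def build_step():
    # %-style placeholders are interpolated lazily, as in the diff above.
    logger.warning("primary_domain %r not found, ignored.", "py3")
```

`collect_warnings(build_step)` returns the formatted message; the `type=`/`subtype=` suppression filtering is specific to Sphinx's logger adapter and is not modeled here.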
def info(self, message='', nonl=False):
# type: (unicode, bool) -> None
@@ -426,124 +410,82 @@ class Sphinx(object):
If *nonl* is true, don't emit a newline at the end (which implies that
more info output will follow soon).
"""
self._log(message, self._status, nonl)
warnings.warn('app.info() is now deprecated. Use sphinx.util.logging instead.',
RemovedInSphinx20Warning)
logger.info(message, nonl=nonl)
def verbose(self, message, *args, **kwargs):
# type: (unicode, Any, Any) -> None
"""Emit a verbose informational message.
The message will only be emitted for verbosity levels >= 1 (i.e. at
least one ``-v`` option was given).
The message can contain %-style interpolation placeholders, which is
formatted with either the ``*args`` or ``**kwargs`` when output.
"""
if self.verbosity < 1:
return
if args or kwargs:
message = message % (args or kwargs)
self._log(message, self._status)
"""Emit a verbose informational message."""
warnings.warn('app.verbose() is now deprecated. Use sphinx.util.logging instead.',
RemovedInSphinx20Warning)
logger.verbose(message, *args, **kwargs)
def debug(self, message, *args, **kwargs):
# type: (unicode, Any, Any) -> None
"""Emit a debug-level informational message.
The message will only be emitted for verbosity levels >= 2 (i.e. at
least two ``-v`` options were given).
The message can contain %-style interpolation placeholders, which is
formatted with either the ``*args`` or ``**kwargs`` when output.
"""
if self.verbosity < 2:
return
if args or kwargs:
message = message % (args or kwargs)
self._log(darkgray(message), self._status)
"""Emit a debug-level informational message."""
warnings.warn('app.debug() is now deprecated. Use sphinx.util.logging instead.',
RemovedInSphinx20Warning)
logger.debug(message, *args, **kwargs)
def debug2(self, message, *args, **kwargs):
# type: (unicode, Any, Any) -> None
"""Emit a lowlevel debug-level informational message.
The message will only be emitted for verbosity level 3 (i.e. three
``-v`` options were given).
The message can contain %-style interpolation placeholders, which is
formatted with either the ``*args`` or ``**kwargs`` when output.
"""
if self.verbosity < 3:
return
if args or kwargs:
message = message % (args or kwargs)
self._log(lightgray(message), self._status)
"""Emit a lowlevel debug-level informational message."""
warnings.warn('app.debug2() is now deprecated. Use debug() instead.',
RemovedInSphinx20Warning)
logger.debug(message, *args, **kwargs)
def _display_chunk(chunk):
# type: (Any) -> unicode
if isinstance(chunk, (list, tuple)):
if len(chunk) == 1:
return text_type(chunk[0])
return '%s .. %s' % (chunk[0], chunk[-1])
return text_type(chunk)
warnings.warn('app._display_chunk() is now deprecated. '
'Use sphinx.util.display_chunk() instead.',
RemovedInSphinx17Warning)
return display_chunk(chunk)
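The deprecated `_display_chunk` is restated verbatim above, so its replacement is easy to pin down; a pure-Python equivalent of `sphinx.util.display_chunk` (using `str` where the Python 2 code used `text_type`):

```python
def display_chunk(chunk):
    # A one-element list or tuple collapses to its single item;
    # longer sequences render as "first .. last"; anything else is str()'d.
    if isinstance(chunk, (list, tuple)):
        if len(chunk) == 1:
            return str(chunk[0])
        return '%s .. %s' % (chunk[0], chunk[-1])
    return str(chunk)
```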
def old_status_iterator(self, iterable, summary, colorfunc=darkgreen,
stringify_func=_display_chunk):
# type: (Iterable, unicode, Callable, Callable) -> Iterator
l = 0
for item in iterable:
if l == 0:
self.info(bold(summary), nonl=True)
l = 1
self.info(colorfunc(stringify_func(item)) + ' ', nonl=True)
stringify_func=display_chunk):
# type: (Iterable, unicode, Callable, Callable[[Any], unicode]) -> Iterator
warnings.warn('app.old_status_iterator() is now deprecated. '
'Use sphinx.util.status_iterator() instead.',
RemovedInSphinx17Warning)
for item in old_status_iterator(iterable, summary,
color="darkgreen", stringify_func=stringify_func):
yield item
if l == 1:
self.info()
# new version with progress info
def status_iterator(self, iterable, summary, colorfunc=darkgreen, length=0,
stringify_func=_display_chunk):
# type: (Iterable, unicode, Callable, int, Callable) -> Iterable
if length == 0:
for item in self.old_status_iterator(iterable, summary, colorfunc,
stringify_func):
yield item
return
l = 0
summary = bold(summary)
for item in iterable:
l += 1
s = '%s[%3d%%] %s' % (summary, 100*l/length,
colorfunc(stringify_func(item)))
if self.verbosity:
s += '\n'
else:
s = term_width_line(s)
self.info(s, nonl=True)
# type: (Iterable, unicode, Callable, int, Callable[[Any], unicode]) -> Iterable
warnings.warn('app.status_iterator() is now deprecated. '
'Use sphinx.util.status_iterator() instead.',
RemovedInSphinx17Warning)
for item in status_iterator(iterable, summary, length=length, verbosity=self.verbosity,
color="darkgreen", stringify_func=stringify_func):
yield item
if l > 0:
self.info()
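The progress formatting that the new `sphinx.util.status_iterator` takes over can be read directly from the removed loop (`'%s[%3d%%] %s'`). A simplified pure-Python sketch, without the coloring, verbosity, and terminal-width handling of the real helper:

```python
def iter_with_progress(iterable, summary, length, emit):
    """Yield items from *iterable*, reporting '[ nn%] item' lines via *emit*.

    *emit* stands in for the logger; the real status_iterator also colors
    the item and truncates each line to the terminal width.
    """
    done = 0
    for item in iterable:
        done += 1
        emit('%s[%3d%%] %s' % (summary, 100 * done // length, item))
        yield item
```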
# ---- general extensibility interface -------------------------------------
def setup_extension(self, extension):
# type: (unicode) -> None
"""Import and setup a Sphinx extension module. No-op if called twice."""
self.debug('[app] setting up extension: %r', extension)
logger.debug('[app] setting up extension: %r', extension)
if extension in self._extensions:
return
if extension in EXTENSION_BLACKLIST:
self.warn('the extension %r was already merged with Sphinx since version %s; '
'this extension is ignored.' % (
extension, EXTENSION_BLACKLIST[extension]))
logger.warning('the extension %r was already merged with Sphinx since version %s; '
'this extension is ignored.',
extension, EXTENSION_BLACKLIST[extension])
return
self._setting_up_extension.append(extension)
try:
mod = __import__(extension, None, None, ['setup'])
except ImportError as err:
self.verbose('Original exception:\n' + traceback.format_exc())
logger.verbose('Original exception:\n' + traceback.format_exc())
raise ExtensionError('Could not import extension %s' % extension,
err)
if not hasattr(mod, 'setup'):
self.warn('extension %r has no setup() function; is it really '
'a Sphinx extension module?' % extension)
logger.warning('extension %r has no setup() function; is it really '
'a Sphinx extension module?', extension)
ext_meta = None
else:
try:
@@ -563,9 +505,9 @@ class Sphinx(object):
if not ext_meta.get('version'):
ext_meta['version'] = 'unknown version'
except Exception:
self.warn('extension %r returned an unsupported object from '
'its setup() function; it should return None or a '
'metadata dictionary' % extension)
logger.warning('extension %r returned an unsupported object from '
'its setup() function; it should return None or a '
'metadata dictionary', extension)
ext_meta = {'version': 'unknown version'}
self._extensions[extension] = mod
self._extension_metadata[extension] = ext_meta
@@ -598,20 +540,20 @@ class Sphinx(object):
else:
self._listeners[event][listener_id] = callback
self.next_listener_id += 1
self.debug('[app] connecting event %r: %r [id=%s]',
event, callback, listener_id)
logger.debug('[app] connecting event %r: %r [id=%s]',
event, callback, listener_id)
return listener_id
def disconnect(self, listener_id):
# type: (int) -> None
self.debug('[app] disconnecting event: [id=%s]', listener_id)
logger.debug('[app] disconnecting event: [id=%s]', listener_id)
for event in itervalues(self._listeners):
event.pop(listener_id, None)
def emit(self, event, *args):
# type: (unicode, Any) -> List
try:
self.debug2('[app] emitting event: %r%s', event, repr(args)[:100])
logger.debug('[app] emitting event: %r%s', event, repr(args)[:100])
except Exception:
# not every object likes to be repr()'d (think
# random stuff coming via autodoc)
@@ -633,7 +575,7 @@ class Sphinx(object):
def add_builder(self, builder):
# type: (Type[Builder]) -> None
self.debug('[app] adding builder: %r', builder)
logger.debug('[app] adding builder: %r', builder)
if not hasattr(builder, 'name'):
raise ExtensionError('Builder class %s has no "name" attribute'
% builder)
@@ -645,35 +587,35 @@ class Sphinx(object):
def add_config_value(self, name, default, rebuild, types=()):
# type: (unicode, Any, Union[bool, unicode], Any) -> None
self.debug('[app] adding config value: %r',
(name, default, rebuild) + ((types,) if types else ())) # type: ignore
if name in self.config.values:
logger.debug('[app] adding config value: %r',
(name, default, rebuild) + ((types,) if types else ())) # type: ignore
if name in self.config:
raise ExtensionError('Config value %r already present' % name)
if rebuild in (False, True):
rebuild = rebuild and 'env' or ''
self.config.values[name] = (default, rebuild, types)
self.config.add(name, default, rebuild, types)
def add_event(self, name):
# type: (unicode) -> None
self.debug('[app] adding event: %r', name)
logger.debug('[app] adding event: %r', name)
if name in self._events:
raise ExtensionError('Event %r already present' % name)
self._events[name] = ''
def set_translator(self, name, translator_class):
# type: (unicode, Any) -> None
self.info(bold('A Translator for the %s builder is changed.' % name))
logger.info(bold('A Translator for the %s builder is changed.' % name))
self._translators[name] = translator_class
def add_node(self, node, **kwds):
# type: (nodes.Node, Any) -> None
self.debug('[app] adding node: %r', (node, kwds))
logger.debug('[app] adding node: %r', (node, kwds))
if not kwds.pop('override', False) and \
hasattr(nodes.GenericNodeVisitor, 'visit_' + node.__name__):
self.warn('while setting up extension %s: node class %r is '
'already registered, its visitors will be overridden' %
(self._setting_up_extension, node.__name__),
type='app', subtype='add_node')
logger.warning('while setting up extension %s: node class %r is '
'already registered, its visitors will be overridden',
self._setting_up_extension, node.__name__,
type='app', subtype='add_node')
nodes._add_node_class_names([node.__name__])
for key, val in iteritems(kwds):
try:
@@ -682,24 +624,32 @@ class Sphinx(object):
raise ExtensionError('Value for key %r must be a '
'(visit, depart) function tuple' % key)
translator = self._translators.get(key)
translators = []
if translator is not None:
pass
translators.append(translator)
elif key == 'html':
from sphinx.writers.html import HTMLTranslator as translator # type: ignore
from sphinx.writers.html import HTMLTranslator
translators.append(HTMLTranslator)
if is_html5_writer_available():
from sphinx.writers.html5 import HTML5Translator
translators.append(HTML5Translator)
elif key == 'latex':
from sphinx.writers.latex import LaTeXTranslator as translator # type: ignore
from sphinx.writers.latex import LaTeXTranslator
translators.append(LaTeXTranslator)
elif key == 'text':
from sphinx.writers.text import TextTranslator as translator # type: ignore
from sphinx.writers.text import TextTranslator
translators.append(TextTranslator)
elif key == 'man':
from sphinx.writers.manpage import ManualPageTranslator as translator # type: ignore # NOQA
from sphinx.writers.manpage import ManualPageTranslator
translators.append(ManualPageTranslator)
elif key == 'texinfo':
from sphinx.writers.texinfo import TexinfoTranslator as translator # type: ignore # NOQA
else:
# ignore invalid keys for compatibility
continue
setattr(translator, 'visit_'+node.__name__, visit)
if depart:
setattr(translator, 'depart_'+node.__name__, depart)
from sphinx.writers.texinfo import TexinfoTranslator
translators.append(TexinfoTranslator)
for translator in translators:
setattr(translator, 'visit_' + node.__name__, visit)
if depart:
setattr(translator, 'depart_' + node.__name__, depart)
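`add_node` now attaches the visitor pair to every matching translator class instead of a single one; the mechanism is plain `setattr`. A self-contained sketch of that loop (`DummyTranslator` and the node name are hypothetical stand-ins for real translator classes):

```python
class DummyTranslator(object):
    """Stand-in for a docutils NodeVisitor subclass (e.g. HTMLTranslator)."""
    def __init__(self):
        self.body = []

def register_visitors(translators, node_name, visit, depart=None):
    # Mirrors the loop above: every translator class gains a
    # visit_<node> method and, optionally, a depart_<node> method.
    for translator in translators:
        setattr(translator, 'visit_' + node_name, visit)
        if depart:
            setattr(translator, 'depart_' + node_name, depart)

def visit_example(self, node):
    self.body.append('<example>')

register_visitors([DummyTranslator], 'example', visit_example)
```

Because the attribute is set on the class, instances created afterwards pick up the visitor as a bound method.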
def add_enumerable_node(self, node, figtype, title_getter=None, **kwds):
# type: (nodes.Node, unicode, Callable, Any) -> None
@@ -721,49 +671,49 @@ class Sphinx(object):
def add_directive(self, name, obj, content=None, arguments=None, **options):
# type: (unicode, Any, unicode, Any, Any) -> None
self.debug('[app] adding directive: %r',
(name, obj, content, arguments, options))
logger.debug('[app] adding directive: %r',
(name, obj, content, arguments, options))
if name in directives._directives:
self.warn('while setting up extension %s: directive %r is '
'already registered, it will be overridden' %
(self._setting_up_extension[-1], name),
type='app', subtype='add_directive')
logger.warning('while setting up extension %s: directive %r is '
'already registered, it will be overridden',
self._setting_up_extension[-1], name,
type='app', subtype='add_directive')
directives.register_directive(
name, self._directive_helper(obj, content, arguments, **options))
def add_role(self, name, role):
# type: (unicode, Any) -> None
self.debug('[app] adding role: %r', (name, role))
logger.debug('[app] adding role: %r', (name, role))
if name in roles._roles:
self.warn('while setting up extension %s: role %r is '
'already registered, it will be overridden' %
(self._setting_up_extension[-1], name),
type='app', subtype='add_role')
logger.warning('while setting up extension %s: role %r is '
'already registered, it will be overridden',
self._setting_up_extension[-1], name,
type='app', subtype='add_role')
roles.register_local_role(name, role)
def add_generic_role(self, name, nodeclass):
# type: (unicode, Any) -> None
# don't use roles.register_generic_role because it uses
# register_canonical_role
self.debug('[app] adding generic role: %r', (name, nodeclass))
logger.debug('[app] adding generic role: %r', (name, nodeclass))
if name in roles._roles:
self.warn('while setting up extension %s: role %r is '
'already registered, it will be overridden' %
(self._setting_up_extension[-1], name),
type='app', subtype='add_generic_role')
logger.warning('while setting up extension %s: role %r is '
'already registered, it will be overridden',
self._setting_up_extension[-1], name,
type='app', subtype='add_generic_role')
role = roles.GenericRole(name, nodeclass)
roles.register_local_role(name, role)
def add_domain(self, domain):
# type: (Type[Domain]) -> None
self.debug('[app] adding domain: %r', domain)
logger.debug('[app] adding domain: %r', domain)
if domain.name in self.domains:
raise ExtensionError('domain %s already registered' % domain.name)
self.domains[domain.name] = domain
def override_domain(self, domain):
# type: (Type[Domain]) -> None
self.debug('[app] overriding domain: %r', domain)
logger.debug('[app] overriding domain: %r', domain)
if domain.name not in self.domains:
raise ExtensionError('domain %s not yet registered' % domain.name)
if not issubclass(domain, self.domains[domain.name]):
@@ -774,8 +724,8 @@ class Sphinx(object):
def add_directive_to_domain(self, domain, name, obj,
content=None, arguments=None, **options):
# type: (unicode, unicode, Any, unicode, Any, Any) -> None
self.debug('[app] adding directive to domain: %r',
(domain, name, obj, content, arguments, options))
logger.debug('[app] adding directive to domain: %r',
(domain, name, obj, content, arguments, options))
if domain not in self.domains:
raise ExtensionError('domain %s not yet registered' % domain)
self.domains[domain].directives[name] = \
@@ -783,14 +733,14 @@ class Sphinx(object):
def add_role_to_domain(self, domain, name, role):
# type: (unicode, unicode, Any) -> None
self.debug('[app] adding role to domain: %r', (domain, name, role))
logger.debug('[app] adding role to domain: %r', (domain, name, role))
if domain not in self.domains:
raise ExtensionError('domain %s not yet registered' % domain)
self.domains[domain].roles[name] = role
def add_index_to_domain(self, domain, index):
# type: (unicode, unicode) -> None
self.debug('[app] adding index to domain: %r', (domain, index))
# type: (unicode, Type[Index]) -> None
logger.debug('[app] adding index to domain: %r', (domain, index))
if domain not in self.domains:
raise ExtensionError('domain %s not yet registered' % domain)
self.domains[domain].indices.append(index)
@@ -799,9 +749,9 @@ class Sphinx(object):
parse_node=None, ref_nodeclass=None, objname='',
doc_field_types=[]):
# type: (unicode, unicode, unicode, Callable, nodes.Node, unicode, List) -> None
self.debug('[app] adding object type: %r',
(directivename, rolename, indextemplate, parse_node,
ref_nodeclass, objname, doc_field_types))
logger.debug('[app] adding object type: %r',
(directivename, rolename, indextemplate, parse_node,
ref_nodeclass, objname, doc_field_types))
StandardDomain.object_types[directivename] = \
ObjType(objname or directivename, rolename)
# create a subclass of GenericObject as the new directive
@@ -819,9 +769,9 @@ class Sphinx(object):
def add_crossref_type(self, directivename, rolename, indextemplate='',
ref_nodeclass=None, objname=''):
# type: (unicode, unicode, unicode, nodes.Node, unicode) -> None
self.debug('[app] adding crossref type: %r',
(directivename, rolename, indextemplate, ref_nodeclass,
objname))
logger.debug('[app] adding crossref type: %r',
(directivename, rolename, indextemplate, ref_nodeclass,
objname))
StandardDomain.object_types[directivename] = \
ObjType(objname or directivename, rolename)
# create a subclass of Target as the new directive
@@ -833,12 +783,12 @@ class Sphinx(object):
def add_transform(self, transform):
# type: (Transform) -> None
self.debug('[app] adding transform: %r', transform)
logger.debug('[app] adding transform: %r', transform)
SphinxStandaloneReader.transforms.append(transform)
def add_javascript(self, filename):
# type: (unicode) -> None
self.debug('[app] adding javascript: %r', filename)
logger.debug('[app] adding javascript: %r', filename)
from sphinx.builders.html import StandaloneHTMLBuilder
if '://' in filename:
StandaloneHTMLBuilder.script_files.append(filename)
@@ -848,7 +798,7 @@ class Sphinx(object):
def add_stylesheet(self, filename, alternate=None, title=None):
# type: (unicode) -> None
self.debug('[app] adding stylesheet: %r', filename)
logger.debug('[app] adding stylesheet: %r', filename)
from sphinx.builders.html import StandaloneHTMLBuilder
props = {}
if alternate is not None:
@@ -864,12 +814,13 @@ class Sphinx(object):
def add_latex_package(self, packagename, options=None):
# type: (unicode, unicode) -> None
self.debug('[app] adding latex package: %r', packagename)
self.builder.usepackages.append((packagename, options))
logger.debug('[app] adding latex package: %r', packagename)
if hasattr(self.builder, 'usepackages'): # only for LaTeX builder
self.builder.usepackages.append((packagename, options)) # type: ignore
def add_lexer(self, alias, lexer):
# type: (unicode, Any) -> None
self.debug('[app] adding lexer: %r', (alias, lexer))
logger.debug('[app] adding lexer: %r', (alias, lexer))
from sphinx.highlighting import lexers
if lexers is None:
return
@@ -877,34 +828,39 @@ class Sphinx(object):
def add_autodocumenter(self, cls):
# type: (Any) -> None
self.debug('[app] adding autodocumenter: %r', cls)
logger.debug('[app] adding autodocumenter: %r', cls)
from sphinx.ext import autodoc
autodoc.add_documenter(cls)
self.add_directive('auto' + cls.objtype, autodoc.AutoDirective)
def add_autodoc_attrgetter(self, type, getter):
# type: (Any, Callable) -> None
self.debug('[app] adding autodoc attrgetter: %r', (type, getter))
logger.debug('[app] adding autodoc attrgetter: %r', (type, getter))
from sphinx.ext import autodoc
autodoc.AutoDirective._special_attrgetters[type] = getter
def add_search_language(self, cls):
# type: (Any) -> None
self.debug('[app] adding search language: %r', cls)
logger.debug('[app] adding search language: %r', cls)
from sphinx.search import languages, SearchLanguage
assert issubclass(cls, SearchLanguage)
languages[cls.lang] = cls
def add_source_parser(self, suffix, parser):
# type: (unicode, Parser) -> None
self.debug('[app] adding search source_parser: %r, %r', suffix, parser)
logger.debug('[app] adding search source_parser: %r, %r', suffix, parser)
if suffix in self._additional_source_parsers:
self.warn('while setting up extension %s: source_parser for %r is '
'already registered, it will be overridden' %
(self._setting_up_extension[-1], suffix),
type='app', subtype='add_source_parser')
logger.warning('while setting up extension %s: source_parser for %r is '
'already registered, it will be overridden',
self._setting_up_extension[-1], suffix,
type='app', subtype='add_source_parser')
self._additional_source_parsers[suffix] = parser
def add_env_collector(self, collector):
# type: (Type[EnvironmentCollector]) -> None
logger.debug('[app] adding environment collector: %r', collector)
collector().enable(self)
class TemplateBridge(object):
"""

View File

@@ -19,10 +19,10 @@ except ImportError:
from docutils import nodes
from sphinx.util import i18n, path_stabilize
from sphinx.util import i18n, path_stabilize, logging, status_iterator
from sphinx.util.osutil import SEP, relative_uri
from sphinx.util.i18n import find_catalog
from sphinx.util.console import bold, darkgreen # type: ignore
from sphinx.util.console import bold # type: ignore
from sphinx.util.parallel import ParallelTasks, SerialTasks, make_chunks, \
parallel_available
@@ -32,7 +32,7 @@ from sphinx import directives # noqa
if False:
# For type annotation
from typing import Any, Callable, Iterable, Sequence, Tuple, Union # NOQA
from typing import Any, Callable, Dict, Iterable, List, Sequence, Set, Tuple, Union # NOQA
from sphinx.application import Sphinx # NOQA
from sphinx.config import Config # NOQA
from sphinx.environment import BuildEnvironment # NOQA
@@ -40,17 +40,20 @@ if False:
from sphinx.util.tags import Tags # NOQA
logger = logging.getLogger(__name__)
class Builder(object):
"""
Builds target formats from the reST sources.
"""
# builder's name, for the -b command line options
name = ''
name = '' # type: unicode
# builder's output format, or '' if no document output is produced
format = ''
format = '' # type: unicode
# doctree versioning method
versioning_method = 'none'
versioning_method = 'none' # type: unicode
versioning_compare = False
# allow parallel write_doc() calls
allow_parallel = False
@@ -85,7 +88,7 @@ class Builder(object):
# basename of images directory
self.imagedir = ""
# relative path to image directory from current docname (used at writing docs)
self.imgpath = ""
self.imgpath = "" # type: unicode
# these get set later
self.parallel_ok = False
@@ -158,9 +161,8 @@ class Builder(object):
if candidate:
break
else:
self.warn(
'no matching candidate for image URI %r' % node['uri'],
'%s:%s' % (node.source, getattr(node, 'line', '')))
logger.warning('no matching candidate for image URI %r', node['uri'],
location=node)
continue
node['uri'] = candidate
else:
@@ -178,12 +180,13 @@ class Builder(object):
return
def cat2relpath(cat):
# type: (CatalogInfo) -> unicode
return path.relpath(cat.mo_path, self.env.srcdir).replace(path.sep, SEP)
self.info(bold('building [mo]: ') + message)
for catalog in self.app.status_iterator(
catalogs, 'writing output... ', darkgreen, len(catalogs),
cat2relpath):
logger.info(bold('building [mo]: ') + message)
for catalog in status_iterator(catalogs, 'writing output... ', "darkgreen",
len(catalogs), self.app.verbosity,
stringify_func=cat2relpath):
catalog.write_mo(self.config.language)
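The `cat2relpath` helper above reduces a catalog's `.mo` path to a source-relative, forward-slash form for display. A sketch of the same expression, assuming Sphinx's `SEP` is `'/'` (as in `sphinx.util.osutil`):

```python
from os import path

SEP = '/'  # Sphinx's canonical separator (sphinx.util.osutil.SEP)

def cat2relpath(mo_path, srcdir):
    # Path of the compiled catalog relative to the source dir,
    # with OS-specific separators normalized to '/'.
    return path.relpath(mo_path, srcdir).replace(path.sep, SEP)
```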
def compile_all_catalogs(self):
@@ -200,6 +203,7 @@ class Builder(object):
def compile_specific_catalogs(self, specified_files):
# type: (List[unicode]) -> None
def to_domain(fpath):
# type: (unicode) -> unicode
docname, _ = path.splitext(path_stabilize(fpath))
dom = find_catalog(docname, self.config.gettext_compact)
return dom
@@ -243,13 +247,13 @@ class Builder(object):
for filename in filenames:
filename = path.normpath(path.abspath(filename))
if not filename.startswith(self.srcdir):
self.warn('file %r given on command line is not under the '
'source directory, ignoring' % filename)
logger.warning('file %r given on command line is not under the '
'source directory, ignoring', filename)
continue
if not (path.isfile(filename) or
any(path.isfile(filename + suffix) for suffix in suffixes)):
self.warn('file %r given on command line does not exist, '
'ignoring' % filename)
logger.warning('file %r given on command line does not exist, '
'ignoring', filename)
continue
filename = filename[dirlen:]
for suffix in suffixes:
@@ -281,41 +285,37 @@ class Builder(object):
First updates the environment, and then calls :meth:`write`.
"""
if summary:
self.info(bold('building [%s]' % self.name) + ': ' + summary)
logger.info(bold('building [%s]' % self.name) + ': ' + summary)
# while reading, collect all warnings from docutils
warnings = []
self.env.set_warnfunc(lambda *args, **kwargs: warnings.append((args, kwargs)))
updated_docnames = set(self.env.update(self.config, self.srcdir,
self.doctreedir, self.app))
self.env.set_warnfunc(self.warn)
for warning, kwargs in warnings:
self.warn(*warning, **kwargs)
with logging.pending_warnings():
updated_docnames = set(self.env.update(self.config, self.srcdir,
self.doctreedir, self.app))
doccount = len(updated_docnames)
self.info(bold('looking for now-outdated files... '), nonl=1)
for docname in self.env.check_dependents(updated_docnames):
logger.info(bold('looking for now-outdated files... '), nonl=1)
for docname in self.env.check_dependents(self.app, updated_docnames):
updated_docnames.add(docname)
outdated = len(updated_docnames) - doccount
if outdated:
self.info('%d found' % outdated)
logger.info('%d found', outdated)
else:
self.info('none found')
logger.info('none found')
if updated_docnames:
# save the environment
from sphinx.application import ENV_PICKLE_FILENAME
self.info(bold('pickling environment... '), nonl=True)
logger.info(bold('pickling environment... '), nonl=True)
self.env.topickle(path.join(self.doctreedir, ENV_PICKLE_FILENAME))
self.info('done')
logger.info('done')
# global actions
self.info(bold('checking consistency... '), nonl=True)
logger.info(bold('checking consistency... '), nonl=True)
self.env.check_consistency()
self.info('done')
logger.info('done')
else:
if method == 'update' and not docnames:
self.info(bold('no targets are out of date.'))
logger.info(bold('no targets are out of date.'))
return
# filter "docnames" (list of outdated files) by the updated
@@ -331,8 +331,8 @@ class Builder(object):
for extname, md in self.app._extension_metadata.items():
par_ok = md.get('parallel_write_safe', True)
if not par_ok:
self.app.warn('the %s extension is not safe for parallel '
'writing, doing serial write' % extname)
logger.warning('the %s extension is not safe for parallel '
'writing, doing serial write', extname)
self.parallel_ok = False
break
@@ -362,58 +362,45 @@ class Builder(object):
docnames = set(build_docnames) | set(updated_docnames)
else:
docnames = set(build_docnames)
self.app.debug('docnames to write: %s', ', '.join(sorted(docnames)))
logger.debug('docnames to write: %s', ', '.join(sorted(docnames)))
# add all toctree-containing files that may have changed
for docname in list(docnames):
for tocdocname in self.env.files_to_rebuild.get(docname, []):
for tocdocname in self.env.files_to_rebuild.get(docname, set()):
if tocdocname in self.env.found_docs:
docnames.add(tocdocname)
docnames.add(self.config.master_doc)
self.info(bold('preparing documents... '), nonl=True)
logger.info(bold('preparing documents... '), nonl=True)
self.prepare_writing(docnames)
self.info('done')
logger.info('done')
warnings = [] # type: List[Tuple[Tuple, Dict]]
self.env.set_warnfunc(lambda *args, **kwargs: warnings.append((args, kwargs)))
if self.parallel_ok:
# number of subprocesses is parallel-1 because the main process
# is busy loading doctrees and doing write_doc_serialized()
self._write_parallel(sorted(docnames), warnings,
self._write_parallel(sorted(docnames),
nproc=self.app.parallel - 1)
else:
self._write_serial(sorted(docnames), warnings)
self.env.set_warnfunc(self.warn)
self._write_serial(sorted(docnames))
def _write_serial(self, docnames, warnings):
# type: (Sequence[unicode], List[Tuple[Tuple, Dict]]) -> None
for docname in self.app.status_iterator(
docnames, 'writing output... ', darkgreen, len(docnames)):
doctree = self.env.get_and_resolve_doctree(docname, self)
self.write_doc_serialized(docname, doctree)
self.write_doc(docname, doctree)
for warning, kwargs in warnings:
self.warn(*warning, **kwargs)
def _write_serial(self, docnames):
# type: (Sequence[unicode]) -> None
with logging.pending_warnings():
for docname in status_iterator(docnames, 'writing output... ', "darkgreen",
len(docnames), self.app.verbosity):
doctree = self.env.get_and_resolve_doctree(docname, self)
self.write_doc_serialized(docname, doctree)
self.write_doc(docname, doctree)
def _write_parallel(self, docnames, warnings, nproc):
# type: (Iterable[unicode], List[Tuple[Tuple, Dict]], int) -> None
def _write_parallel(self, docnames, nproc):
# type: (Sequence[unicode], int) -> None
def write_process(docs):
# type: (List[Tuple[unicode, nodes.Node]]) -> List[Tuple[Tuple, Dict]]
local_warnings = []
def warnfunc(*args, **kwargs):
local_warnings.append((args, kwargs))
self.env.set_warnfunc(warnfunc)
# type: (List[Tuple[unicode, nodes.Node]]) -> None
for docname, doctree in docs:
self.write_doc(docname, doctree)
return local_warnings
def add_warnings(docs, wlist):
warnings.extend(wlist)
# warm up caches/compile templates using the first document
firstname, docnames = docnames[0], docnames[1:] # type: ignore
firstname, docnames = docnames[0], docnames[1:]
doctree = self.env.get_and_resolve_doctree(firstname, self)
self.write_doc_serialized(firstname, doctree)
self.write_doc(firstname, doctree)
@@ -421,22 +408,19 @@ class Builder(object):
tasks = ParallelTasks(nproc)
chunks = make_chunks(docnames, nproc)
for chunk in self.app.status_iterator(
chunks, 'writing output... ', darkgreen, len(chunks)):
for chunk in status_iterator(chunks, 'writing output... ', "darkgreen",
len(chunks), self.app.verbosity):
arg = []
for i, docname in enumerate(chunk):
doctree = self.env.get_and_resolve_doctree(docname, self)
self.write_doc_serialized(docname, doctree)
arg.append((docname, doctree))
tasks.add_task(write_process, arg, add_warnings)
tasks.add_task(write_process, arg)
# make sure all threads have finished
self.info(bold('waiting for workers...'))
logger.info(bold('waiting for workers...'))
tasks.join()
for warning, kwargs in warnings:
self.warn(*warning, **kwargs)
def prepare_writing(self, docnames):
# type: (Set[unicode]) -> None
"""A place where you can add logic before :meth:`write_doc` is run"""


@@ -18,6 +18,7 @@ import shlex
from sphinx.builders.html import StandaloneHTMLBuilder
from sphinx.config import string_classes
from sphinx.util import logging
from sphinx.util.osutil import copyfile, ensuredir, make_filename
from sphinx.util.console import bold # type: ignore
from sphinx.util.fileutil import copy_asset
@@ -30,8 +31,12 @@ import subprocess
if False:
# For type annotation
from typing import Any, Dict # NOQA
from sphinx.application import Sphinx # NOQA
logger = logging.getLogger(__name__)
# Use plistlib.dump in 3.4 and above
try:
write_plist = plistlib.dump # type: ignore
@@ -117,13 +122,13 @@ class AppleHelpBuilder(StandaloneHTMLBuilder):
target_dir = self.outdir
if path.isdir(source_dir):
self.info(bold('copying localized files... '), nonl=True)
logger.info(bold('copying localized files... '), nonl=True)
excluded = Matcher(self.config.exclude_patterns + ['**/.*'])
copy_asset(source_dir, target_dir, excluded,
context=self.globalcontext, renderer=self.templates)
self.info('done')
logger.info('done')
def build_helpbook(self):
# type: () -> None
@@ -164,37 +169,36 @@ class AppleHelpBuilder(StandaloneHTMLBuilder):
if self.config.applehelp_remote_url is not None:
info_plist['HPDBookRemoteURL'] = self.config.applehelp_remote_url
self.info(bold('writing Info.plist... '), nonl=True)
logger.info(bold('writing Info.plist... '), nonl=True)
with open(path.join(contents_dir, 'Info.plist'), 'wb') as f:
write_plist(info_plist, f)
self.info('done')
logger.info('done')
# Copy the icon, if one is supplied
if self.config.applehelp_icon:
self.info(bold('copying icon... '), nonl=True)
logger.info(bold('copying icon... '), nonl=True)
try:
copyfile(path.join(self.srcdir, self.config.applehelp_icon),
path.join(resources_dir, info_plist['HPDBookIconPath']))
self.info('done')
logger.info('done')
except Exception as err:
self.warn('cannot copy icon file %r: %s' %
(path.join(self.srcdir, self.config.applehelp_icon),
err))
logger.warning('cannot copy icon file %r: %s',
path.join(self.srcdir, self.config.applehelp_icon), err)
del info_plist['HPDBookIconPath']
# Build the access page
self.info(bold('building access page...'), nonl=True)
logger.info(bold('building access page...'), nonl=True)
with codecs.open(path.join(language_dir, '_access.html'), 'w') as f:
f.write(access_page_template % {
'toc': htmlescape(toc, quote=True),
'title': htmlescape(self.config.applehelp_title)
})
self.info('done')
logger.info('done')
# Generate the help index
self.info(bold('generating help index... '), nonl=True)
logger.info(bold('generating help index... '), nonl=True)
args = [
self.config.applehelp_indexer_path,
@@ -216,10 +220,10 @@ class AppleHelpBuilder(StandaloneHTMLBuilder):
args += ['-l', self.config.applehelp_locale]
if self.config.applehelp_disable_external_tools:
self.info('skipping')
logger.info('skipping')
self.warn('you will need to index this help book with:\n %s'
% (' '.join([pipes.quote(arg) for arg in args])))
logger.warning('you will need to index this help book with:\n %s',
' '.join([pipes.quote(arg) for arg in args]))
else:
try:
p = subprocess.Popen(args,
@@ -231,13 +235,13 @@ class AppleHelpBuilder(StandaloneHTMLBuilder):
if p.returncode != 0:
raise AppleHelpIndexerFailed(output)
else:
self.info('done')
logger.info('done')
except OSError:
raise AppleHelpIndexerFailed('Command not found: %s' % args[0])
# If we've been asked to, sign the bundle
if self.config.applehelp_codesign_identity:
self.info(bold('signing help book... '), nonl=True)
logger.info(bold('signing help book... '), nonl=True)
args = [
self.config.applehelp_codesign_path,
@@ -250,10 +254,9 @@ class AppleHelpBuilder(StandaloneHTMLBuilder):
args.append(self.bundle_path)
if self.config.applehelp_disable_external_tools:
self.info('skipping')
self.warn('you will need to sign this help book with:\n %s'
% (' '.join([pipes.quote(arg) for arg in args])))
logger.info('skipping')
logger.warning('you will need to sign this help book with:\n %s',
' '.join([pipes.quote(arg) for arg in args]))
else:
try:
p = subprocess.Popen(args,
@@ -265,13 +268,13 @@ class AppleHelpBuilder(StandaloneHTMLBuilder):
if p.returncode != 0:
raise AppleHelpCodeSigningFailed(output)
else:
self.info('done')
logger.info('done')
except OSError:
raise AppleHelpCodeSigningFailed('Command not found: %s' % args[0])
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
app.setup_extension('sphinx.builders.html')
app.add_builder(AppleHelpBuilder)
@@ -301,3 +304,9 @@ def setup(app):
app.add_config_value('applehelp_indexer_path', '/usr/bin/hiutil', 'applehelp')
app.add_config_value('applehelp_codesign_path', '/usr/bin/codesign', 'applehelp')
app.add_config_value('applehelp_disable_external_tools', False, None)
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}
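Both the indexing and code-signing steps above follow the same shape: spawn the external tool, capture its combined output, and raise if the exit status is non-zero (or the binary is missing). A stdlib sketch of that shape — the exception name and helper are placeholders, not Sphinx APIs:

```python
import subprocess

class ToolFailed(Exception):
    """Raised when an external helper tool fails or is not installed."""

def run_tool(args):
    # Merge stderr into stdout so the exception can carry the tool's output.
    try:
        p = subprocess.Popen(args, stdout=subprocess.PIPE,
                             stderr=subprocess.STDOUT)
    except OSError:
        raise ToolFailed('Command not found: %s' % args[0])
    output, _ = p.communicate()
    if p.returncode != 0:
        raise ToolFailed(output)
    return output
```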


@@ -18,6 +18,7 @@ from sphinx import package_dir
from sphinx.locale import _
from sphinx.theming import Theme
from sphinx.builders import Builder
from sphinx.util import logging
from sphinx.util.osutil import ensuredir, os_path
from sphinx.util.console import bold # type: ignore
from sphinx.util.fileutil import copy_asset_file
@@ -25,10 +26,13 @@ from sphinx.util.pycompat import htmlescape
if False:
# For type annotation
from typing import Any, Tuple # NOQA
from typing import Any, Dict, List, Tuple # NOQA
from sphinx.application import Sphinx # NOQA
logger = logging.getLogger(__name__)
class ChangesBuilder(Builder):
"""
Write a summary with all versionadded/changed directives.
@@ -38,8 +42,7 @@ class ChangesBuilder(Builder):
def init(self):
# type: () -> None
self.create_template_bridge()
Theme.init_themes(self.confdir, self.config.html_theme_path,
warn=self.warn)
Theme.init_themes(self.confdir, self.config.html_theme_path)
self.theme = Theme('default')
self.templates.init(self, self.theme)
@@ -60,9 +63,9 @@ class ChangesBuilder(Builder):
apichanges = [] # type: List[Tuple[unicode, unicode, int]]
otherchanges = {} # type: Dict[Tuple[unicode, unicode], List[Tuple[unicode, unicode, int]]] # NOQA
if version not in self.env.versionchanges:
self.info(bold('no changes in version %s.' % version))
logger.info(bold('no changes in version %s.' % version))
return
self.info(bold('writing summary file...'))
logger.info(bold('writing summary file...'))
for type, docname, lineno, module, descname, content in \
self.env.versionchanges[version]:
if isinstance(descname, tuple):
@@ -119,6 +122,7 @@ class ChangesBuilder(Builder):
'.. deprecated:: %s' % version]
def hl(no, line):
# type: (int, unicode) -> unicode
line = '<a name="L%s"> </a>' % no + htmlescape(line)
for x in hltext:
if x in line:
@@ -126,19 +130,19 @@ class ChangesBuilder(Builder):
break
return line
self.info(bold('copying source files...'))
logger.info(bold('copying source files...'))
for docname in self.env.all_docs:
with codecs.open(self.env.doc2path(docname), 'r', # type: ignore
self.env.config.source_encoding) as f:
try:
lines = f.readlines()
except UnicodeDecodeError:
self.warn('could not read %r for changelog creation' % docname)
logger.warning('could not read %r for changelog creation', docname)
continue
targetfn = path.join(self.outdir, 'rst', os_path(docname)) + '.html'
ensuredir(path.dirname(targetfn))
with codecs.open(targetfn, 'w', 'utf-8') as f: # type: ignore
text = ''.join(hl(i+1, line) for (i, line) in enumerate(lines))
text = ''.join(hl(i + 1, line) for (i, line) in enumerate(lines))
ctx = {
'filename': self.env.doc2path(docname, None),
'text': text
@@ -165,5 +169,11 @@ class ChangesBuilder(Builder):
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
app.add_builder(ChangesBuilder)
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}
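The `hl` helper in the changes builder prepends a per-line anchor and HTML-escapes each source line before optionally highlighting matches. The anchor-plus-escaping part can be sketched with the stdlib `html` module (the anchor format mirrors the one in the diff):

```python
import html

def hl(no, line):
    # Emit the anchor first, then escape the raw source text so that
    # markup characters in the source cannot leak into the page.
    return '<a name="L%s"> </a>' % no + html.escape(line)

text = ''.join(hl(i + 1, line)
               for i, line in enumerate(['a < b\n', 'c & d\n']))
```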


@@ -19,8 +19,10 @@ from os import path
from docutils import nodes
from sphinx import addnodes
from sphinx.util import logging
from sphinx.util.osutil import make_filename
from sphinx.builders.html import StandaloneHTMLBuilder
from sphinx.environment.adapters.indexentries import IndexEntries
try:
import xml.etree.ElementTree as etree
@@ -29,10 +31,13 @@ except ImportError:
if False:
# For type annotation
from typing import Any # NOQA
from typing import Any, Dict, List # NOQA
from sphinx.application import Sphinx # NOQA
logger = logging.getLogger(__name__)
class DevhelpBuilder(StandaloneHTMLBuilder):
"""
Builder that also outputs GNOME Devhelp file.
@@ -60,7 +65,7 @@ class DevhelpBuilder(StandaloneHTMLBuilder):
def build_devhelp(self, outdir, outname):
# type: (unicode, unicode) -> None
self.info('dumping devhelp index...')
logger.info('dumping devhelp index...')
# Basic info
root = etree.Element('book',
@@ -100,7 +105,7 @@ class DevhelpBuilder(StandaloneHTMLBuilder):
# Index
functions = etree.SubElement(root, 'functions')
index = self.env.create_index(self)
index = IndexEntries(self.env).create_index(self)
def write_index(title, refs, subitems):
# type: (unicode, List[Any], Any) -> None
@@ -116,7 +121,7 @@ class DevhelpBuilder(StandaloneHTMLBuilder):
link=ref[1])
if subitems:
parent_title = re.sub(r'\s*\(.*\)\s*$', '', title) # type: ignore
parent_title = re.sub(r'\s*\(.*\)\s*$', '', title)
for subitem in subitems:
write_index("%s %s" % (parent_title, subitem[0]),
subitem[1], [])
@@ -132,8 +137,14 @@ class DevhelpBuilder(StandaloneHTMLBuilder):
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
app.setup_extension('sphinx.builders.html')
app.add_builder(DevhelpBuilder)
app.add_config_value('devhelp_basename', lambda self: make_filename(self.project), None)
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}
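The Devhelp builder assembles its output with `xml.etree.ElementTree`: a root `book` element, a `chapters` subtree for the toc, and a `functions` subtree for the index. A self-contained sketch of that construction (titles and links are invented for illustration):

```python
import xml.etree.ElementTree as etree

# Basic info on the root element, as in build_devhelp().
root = etree.Element('book', title='Demo', name='demo', link='index.html')

chapters = etree.SubElement(root, 'chapters')
etree.SubElement(chapters, 'sub', name='Intro', link='intro.html')

# Index entries become <function> elements.
functions = etree.SubElement(root, 'functions')
etree.SubElement(functions, 'function', name='main()', link='api.html#main')

xml = etree.tostring(root, encoding='unicode')
```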


@@ -12,29 +12,48 @@
from sphinx.builders import Builder
if False:
# For type annotation
from typing import Any, Dict, Set # NOQA
from docutils import nodes # NOQA
from sphinx.application import Sphinx # NOQA
class DummyBuilder(Builder):
name = 'dummy'
allow_parallel = True
def init(self):
# type: () -> None
pass
def get_outdated_docs(self):
# type: () -> Set[unicode]
return self.env.found_docs
def get_target_uri(self, docname, typ=None):
# type: (unicode, unicode) -> unicode
return ''
def prepare_writing(self, docnames):
# type: (Set[unicode]) -> None
pass
def write_doc(self, docname, doctree):
# type: (unicode, nodes.Node) -> None
pass
def finish(self):
# type: () -> None
pass
def setup(app):
# type: (Sphinx) -> Dict[unicode, Any]
app.add_builder(DummyBuilder)
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}
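The dummy builder is useful as a map of the smallest surface a builder must implement. Outside of Sphinx, the same contract can be modeled as an abstract base class — a sketch of the shape only, not Sphinx's actual `Builder` class:

```python
from abc import ABC, abstractmethod

class BaseBuilder(ABC):
    @abstractmethod
    def get_outdated_docs(self):
        """Return the set of docnames that need rebuilding."""

    @abstractmethod
    def get_target_uri(self, docname, typ=None):
        """Return the output URI for a document."""

    @abstractmethod
    def write_doc(self, docname, doctree):
        """Write a single resolved document."""

class NullBuilder(BaseBuilder):
    """Read documents but produce no output, like the 'dummy' builder."""
    def get_outdated_docs(self):
        return set()

    def get_target_uri(self, docname, typ=None):
        return ''

    def write_doc(self, docname, doctree):
        pass
```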


@@ -12,10 +12,10 @@
import os
import re
import codecs
import zipfile
from os import path
from zipfile import ZIP_DEFLATED, ZIP_STORED, ZipFile
from datetime import datetime
from collections import namedtuple
try:
from PIL import Image
@@ -28,112 +28,31 @@ except ImportError:
from docutils import nodes
from sphinx import addnodes
from sphinx import package_dir
from sphinx.builders.html import StandaloneHTMLBuilder
from sphinx.util.osutil import ensuredir, copyfile, make_filename, EEXIST
from sphinx.util import logging
from sphinx.util import status_iterator
from sphinx.util.osutil import ensuredir, copyfile, make_filename
from sphinx.util.fileutil import copy_asset_file
from sphinx.util.smartypants import sphinx_smarty_pants as ssp
from sphinx.util.console import brown # type: ignore
if False:
# For type annotation
from typing import Any, Tuple # NOQA
from typing import Any, Dict, List, Tuple # NOQA
from sphinx.application import Sphinx # NOQA
# (Fragment) templates from which the metainfo files content.opf, toc.ncx,
# mimetype, and META-INF/container.xml are created.
logger = logging.getLogger(__name__)
# (Fragment) templates from which the metainfo files content.opf and
# toc.ncx are created.
# This template section also defines strings that are embedded in the html
# output but that may be customized by (re-)setting module attributes,
# e.g. from conf.py.
MIMETYPE_TEMPLATE = 'application/epub+zip' # no EOL!
CONTAINER_TEMPLATE = u'''\
<?xml version="1.0" encoding="UTF-8"?>
<container version="1.0"
xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
<rootfiles>
<rootfile full-path="content.opf"
media-type="application/oebps-package+xml"/>
</rootfiles>
</container>
'''
TOC_TEMPLATE = u'''\
<?xml version="1.0"?>
<ncx version="2005-1" xmlns="http://www.daisy.org/z3986/2005/ncx/">
<head>
<meta name="dtb:uid" content="%(uid)s"/>
<meta name="dtb:depth" content="%(level)d"/>
<meta name="dtb:totalPageCount" content="0"/>
<meta name="dtb:maxPageNumber" content="0"/>
</head>
<docTitle>
<text>%(title)s</text>
</docTitle>
<navMap>
%(navpoints)s
</navMap>
</ncx>
'''
NAVPOINT_TEMPLATE = u'''\
%(indent)s <navPoint id="%(navpoint)s" playOrder="%(playorder)d">
%(indent)s <navLabel>
%(indent)s <text>%(text)s</text>
%(indent)s </navLabel>
%(indent)s <content src="%(refuri)s" />
%(indent)s </navPoint>'''
NAVPOINT_INDENT = ' '
NODE_NAVPOINT_TEMPLATE = 'navPoint%d'
CONTENT_TEMPLATE = u'''\
<?xml version="1.0" encoding="UTF-8"?>
<package xmlns="http://www.idpf.org/2007/opf" version="2.0"
unique-identifier="%(uid)s">
<metadata xmlns:opf="http://www.idpf.org/2007/opf"
xmlns:dc="http://purl.org/dc/elements/1.1/">
<dc:language>%(lang)s</dc:language>
<dc:title>%(title)s</dc:title>
<dc:creator opf:role="aut">%(author)s</dc:creator>
<dc:publisher>%(publisher)s</dc:publisher>
<dc:rights>%(copyright)s</dc:rights>
<dc:identifier id="%(uid)s" opf:scheme="%(scheme)s">%(id)s</dc:identifier>
<dc:date>%(date)s</dc:date>
</metadata>
<manifest>
<item id="ncx" href="toc.ncx" media-type="application/x-dtbncx+xml" />
%(files)s
</manifest>
<spine toc="ncx">
%(spine)s
</spine>
<guide>
%(guide)s
</guide>
</package>
'''
COVER_TEMPLATE = u'''\
<meta name="cover" content="%(cover)s"/>
'''
COVERPAGE_NAME = u'epub-cover.xhtml'
FILE_TEMPLATE = u'''\
<item id="%(id)s"
href="%(href)s"
media-type="%(media_type)s" />'''
SPINE_TEMPLATE = u'''\
<itemref idref="%(idref)s" />'''
NO_LINEAR_SPINE_TEMPLATE = u'''\
<itemref idref="%(idref)s" linear="no" />'''
GUIDE_TEMPLATE = u'''\
<reference type="%(type)s" title="%(title)s" href="%(uri)s" />'''
TOCTREE_TEMPLATE = u'toctree-l%d'
DOCTYPE = u'''<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
@@ -174,6 +93,12 @@ VECTOR_GRAPHICS_EXTENSIONS = ('.svg',)
REFURI_RE = re.compile("([^#:]*#)(.*)")
ManifestItem = namedtuple('ManifestItem', ['href', 'id', 'media_type'])
Spine = namedtuple('Spine', ['idref', 'linear'])
Guide = namedtuple('Guide', ['type', 'title', 'uri'])
NavPoint = namedtuple('NavPoint', ['navpoint', 'playorder', 'text', 'refuri', 'children'])
# The epub publisher
class EpubBuilder(StandaloneHTMLBuilder):
@@ -186,6 +111,8 @@ class EpubBuilder(StandaloneHTMLBuilder):
"""
name = 'epub2'
template_dir = path.join(package_dir, 'templates', 'epub2')
# don't copy the reST source
copysource = False
supported_image_types = ['image/svg+xml', 'image/png', 'image/gif',
@@ -204,19 +131,7 @@ class EpubBuilder(StandaloneHTMLBuilder):
# don't generate search index or include search page
search = False
mimetype_template = MIMETYPE_TEMPLATE
container_template = CONTAINER_TEMPLATE
toc_template = TOC_TEMPLATE
navpoint_template = NAVPOINT_TEMPLATE
navpoint_indent = NAVPOINT_INDENT
node_navpoint_template = NODE_NAVPOINT_TEMPLATE
content_template = CONTENT_TEMPLATE
cover_template = COVER_TEMPLATE
coverpage_name = COVERPAGE_NAME
file_template = FILE_TEMPLATE
spine_template = SPINE_TEMPLATE
no_linear_spine_template = NO_LINEAR_SPINE_TEMPLATE
guide_template = GUIDE_TEMPLATE
toctree_template = TOCTREE_TEMPLATE
doctype = DOCTYPE
link_target_template = LINK_TARGET_TEMPLATE
@@ -233,6 +148,7 @@ class EpubBuilder(StandaloneHTMLBuilder):
self.link_suffix = '.xhtml'
self.playorder = 0
self.tocid = 0
self.id_cache = {} # type: Dict[unicode, unicode]
self.use_index = self.get_builder_config('use_index', 'epub')
def get_theme_config(self):
@@ -240,14 +156,14 @@ class EpubBuilder(StandaloneHTMLBuilder):
return self.config.epub_theme, self.config.epub_theme_options
# generic support functions
def make_id(self, name, id_cache={}):
# type: (unicode, Dict[unicode, unicode]) -> unicode
def make_id(self, name):
# type: (unicode) -> unicode
# id_cache is intentionally mutable
"""Return a unique id for name."""
id = id_cache.get(name)
id = self.id_cache.get(name)
if not id:
id = 'epub-%d' % self.env.new_serialno('epub')
id_cache[name] = id
self.id_cache[name] = id
return id
def esc(self, name):
@@ -466,21 +382,21 @@ class EpubBuilder(StandaloneHTMLBuilder):
converting the format and resizing the image if necessary/possible.
"""
ensuredir(path.join(self.outdir, self.imagedir))
for src in self.app.status_iterator(self.images, 'copying images... ',
brown, len(self.images)):
for src in status_iterator(self.images, 'copying images... ', "brown",
len(self.images), self.app.verbosity):
dest = self.images[src]
try:
img = Image.open(path.join(self.srcdir, src))
except IOError:
if not self.is_vector_graphics(src):
self.warn('cannot read image file %r: copying it instead' %
(path.join(self.srcdir, src), ))
logger.warning('cannot read image file %r: copying it instead',
path.join(self.srcdir, src))
try:
copyfile(path.join(self.srcdir, src),
path.join(self.outdir, self.imagedir, dest))
except (IOError, OSError) as err:
self.warn('cannot copy image file %r: %s' %
(path.join(self.srcdir, src), err))
logger.warning('cannot copy image file %r: %s',
path.join(self.srcdir, src), err)
continue
if self.config.epub_fix_images:
if img.mode in ('P',):
@@ -495,8 +411,8 @@ class EpubBuilder(StandaloneHTMLBuilder):
try:
img.save(path.join(self.outdir, self.imagedir, dest))
except (IOError, OSError) as err:
self.warn('cannot write image file %r: %s' %
(path.join(self.srcdir, src), err))
logger.warning('cannot write image file %r: %s',
path.join(self.srcdir, src), err)
def copy_image_files(self):
# type: () -> None
@@ -506,7 +422,7 @@ class EpubBuilder(StandaloneHTMLBuilder):
if self.images:
if self.config.epub_fix_images or self.config.epub_max_image_width:
if not Image:
self.warn('PIL not found - copying image files')
logger.warning('PIL not found - copying image files')
super(EpubBuilder, self).copy_image_files()
else:
self.copy_image_files_pil()
@@ -547,25 +463,20 @@ class EpubBuilder(StandaloneHTMLBuilder):
def build_mimetype(self, outdir, outname):
# type: (unicode, unicode) -> None
"""Write the metainfo file mimetype."""
self.info('writing %s file...' % outname)
with codecs.open(path.join(outdir, outname), 'w', 'utf-8') as f: # type: ignore
f.write(self.mimetype_template)
logger.info('writing %s file...', outname)
copy_asset_file(path.join(self.template_dir, 'mimetype'),
path.join(outdir, outname))
def build_container(self, outdir, outname):
# type: (unicode, unicode) -> None
"""Write the metainfo file META-INF/cointainer.xml."""
self.info('writing %s file...' % outname)
fn = path.join(outdir, outname)
try:
os.mkdir(path.dirname(fn))
except OSError as err:
if err.errno != EEXIST:
raise
with codecs.open(path.join(outdir, outname), 'w', 'utf-8') as f: # type: ignore
f.write(self.container_template) # type: ignore
"""Write the metainfo file META-INF/container.xml."""
logger.info('writing %s file...', outname)
filename = path.join(outdir, outname)
ensuredir(path.dirname(filename))
copy_asset_file(path.join(self.template_dir, 'container.xml'), filename)
def content_metadata(self, files, spine, guide):
# type: (List[unicode], Any, Any) -> Dict[unicode, Any]
def content_metadata(self):
# type: () -> Dict[unicode, Any]
"""Create a dictionary with all metadata for the content.opf
file properly escaped.
"""
@@ -579,9 +490,9 @@ class EpubBuilder(StandaloneHTMLBuilder):
metadata['scheme'] = self.esc(self.config.epub_scheme)
metadata['id'] = self.esc(self.config.epub_identifier)
metadata['date'] = self.esc(datetime.utcnow().strftime("%Y-%m-%d"))
metadata['files'] = files
metadata['spine'] = spine
metadata['guide'] = guide
metadata['manifest_items'] = []
metadata['spines'] = []
metadata['guides'] = []
return metadata
def build_content(self, outdir, outname):
@@ -589,23 +500,23 @@ class EpubBuilder(StandaloneHTMLBuilder):
"""Write the metainfo file content.opf It contains bibliographic data,
a file list and the spine (the reading order).
"""
self.info('writing %s file...' % outname)
logger.info('writing %s file...', outname)
metadata = self.content_metadata()
# files
if not outdir.endswith(os.sep):
outdir += os.sep
olen = len(outdir)
projectfiles = [] # type: List[unicode]
self.files = [] # type: List[unicode]
self.ignored_files = ['.buildinfo', 'mimetype', 'content.opf',
'toc.ncx', 'META-INF/container.xml',
'Thumbs.db', 'ehthumbs.db', '.DS_Store',
self.config.epub_basename + '.epub'] + \
'nav.xhtml', self.config.epub_basename + '.epub'] + \
self.config.epub_exclude_files
if not self.use_index:
self.ignored_files.append('genindex' + self.out_suffix)
for root, dirs, files in os.walk(outdir):
for fn in files:
for fn in sorted(files):
filename = path.join(root, fn)[olen:]
if filename in self.ignored_files:
continue
@@ -614,73 +525,61 @@ class EpubBuilder(StandaloneHTMLBuilder):
# we always have JS and potentially OpenSearch files, don't
# always warn about them
if ext not in ('.js', '.xml'):
self.warn('unknown mimetype for %s, ignoring' % filename)
logger.warning('unknown mimetype for %s, ignoring', filename,
type='epub', subtype='unknown_project_files')
continue
filename = filename.replace(os.sep, '/')
projectfiles.append(self.file_template % {
'href': self.esc(filename),
'id': self.esc(self.make_id(filename)),
'media_type': self.esc(self.media_types[ext])
})
item = ManifestItem(self.esc(filename),
self.esc(self.make_id(filename)),
self.esc(self.media_types[ext]))
metadata['manifest_items'].append(item)
self.files.append(filename)
# spine
spine = []
spinefiles = set()
for item in self.refnodes:
if '#' in item['refuri']:
for refnode in self.refnodes:
if '#' in refnode['refuri']:
continue
if item['refuri'] in self.ignored_files:
if refnode['refuri'] in self.ignored_files:
continue
spine.append(self.spine_template % {
'idref': self.esc(self.make_id(item['refuri']))
})
spinefiles.add(item['refuri'])
spine = Spine(self.esc(self.make_id(refnode['refuri'])), True)
metadata['spines'].append(spine)
spinefiles.add(refnode['refuri'])
for info in self.domain_indices:
spine.append(self.spine_template % {
'idref': self.esc(self.make_id(info[0] + self.out_suffix))
})
spine = Spine(self.esc(self.make_id(info[0] + self.out_suffix)), True)
metadata['spines'].append(spine)
spinefiles.add(info[0] + self.out_suffix)
if self.use_index:
spine.append(self.spine_template % {
'idref': self.esc(self.make_id('genindex' + self.out_suffix))
})
spine = Spine(self.esc(self.make_id('genindex' + self.out_suffix)), True)
metadata['spines'].append(spine)
spinefiles.add('genindex' + self.out_suffix)
# add auto generated files
for name in self.files:
if name not in spinefiles and name.endswith(self.out_suffix):
spine.append(self.no_linear_spine_template % {
'idref': self.esc(self.make_id(name))
})
spine = Spine(self.esc(self.make_id(name)), False)
metadata['spines'].append(spine)
# add the optional cover
content_tmpl = self.content_template
html_tmpl = None
if self.config.epub_cover:
image, html_tmpl = self.config.epub_cover
image = image.replace(os.sep, '/')
mpos = content_tmpl.rfind('</metadata>')
cpos = content_tmpl.rfind('\n', 0, mpos) + 1
content_tmpl = content_tmpl[:cpos] + \
COVER_TEMPLATE % {'cover': self.esc(self.make_id(image))} + \
content_tmpl[cpos:]
metadata['cover'] = self.esc(self.make_id(image))
if html_tmpl:
spine.insert(0, self.spine_template % {
'idref': self.esc(self.make_id(self.coverpage_name))})
spine = Spine(self.esc(self.make_id(self.coverpage_name)), True)
metadata['spines'].insert(0, spine)
if self.coverpage_name not in self.files:
ext = path.splitext(self.coverpage_name)[-1]
self.files.append(self.coverpage_name)
projectfiles.append(self.file_template % {
'href': self.esc(self.coverpage_name),
'id': self.esc(self.make_id(self.coverpage_name)),
'media_type': self.esc(self.media_types[ext])
})
item = ManifestItem(self.esc(filename),
self.esc(self.make_id(filename)),
self.esc(self.media_types[ext]))
metadata['manifest_items'].append(item)
ctx = {'image': self.esc(image), 'title': self.config.project}
self.handle_page(
path.splitext(self.coverpage_name)[0], ctx, html_tmpl)
spinefiles.add(self.coverpage_name)
guide = []
auto_add_cover = True
auto_add_toc = True
if self.config.epub_guide:
@@ -692,64 +591,43 @@ class EpubBuilder(StandaloneHTMLBuilder):
auto_add_cover = False
if type == 'toc':
auto_add_toc = False
guide.append(self.guide_template % {
'type': self.esc(type),
'title': self.esc(title),
'uri': self.esc(uri)
})
metadata['guides'].append(Guide(self.esc(type),
self.esc(title),
self.esc(uri)))
if auto_add_cover and html_tmpl:
guide.append(self.guide_template % {
'type': 'cover',
'title': self.guide_titles['cover'],
'uri': self.esc(self.coverpage_name)
})
metadata['guides'].append(Guide('cover',
self.guide_titles['cover'],
self.esc(self.coverpage_name)))
if auto_add_toc and self.refnodes:
guide.append(self.guide_template % {
'type': 'toc',
'title': self.guide_titles['toc'],
'uri': self.esc(self.refnodes[0]['refuri'])
})
projectfiles = '\n'.join(projectfiles) # type: ignore
spine = '\n'.join(spine) # type: ignore
guide = '\n'.join(guide) # type: ignore
metadata['guides'].append(Guide('toc',
self.guide_titles['toc'],
self.esc(self.refnodes[0]['refuri'])))
# write the project file
with codecs.open(path.join(outdir, outname), 'w', 'utf-8') as f: # type: ignore
f.write(content_tmpl % # type: ignore
self.content_metadata(projectfiles, spine, guide))
copy_asset_file(path.join(self.template_dir, 'content.opf_t'),
path.join(outdir, outname),
metadata)
def new_navpoint(self, node, level, incr=True):
# type: (nodes.Node, int, bool) -> unicode
# type: (nodes.Node, int, bool) -> NavPoint
"""Create a new entry in the toc from the node at given level."""
# XXX Modifies the node
if incr:
self.playorder += 1
self.tocid += 1
node['indent'] = self.navpoint_indent * level
node['navpoint'] = self.esc(self.node_navpoint_template % self.tocid)
node['playorder'] = self.playorder
return self.navpoint_template % node
def insert_subnav(self, node, subnav):
# type: (nodes.Node, unicode) -> unicode
"""Insert nested navpoints for given node.
The node and subnav are already rendered to text.
"""
nlist = node.rsplit('\n', 1)
nlist.insert(-1, subnav)
return '\n'.join(nlist)
return NavPoint(self.esc('navPoint%d' % self.tocid), self.playorder,
node['text'], node['refuri'], [])
def build_navpoints(self, nodes):
# type: (nodes.Node) -> unicode
# type: (nodes.Node) -> List[NavPoint]
"""Create the toc navigation structure.
Subelements of a node are nested inside the navpoint. For nested nodes
the parent node is reinserted in the subnav.
"""
navstack = []
navlist = []
level = 1
navstack = [] # type: List[NavPoint]
navstack.append(NavPoint('dummy', '', '', '', []))
level = 0
lastnode = None
for node in nodes:
if not node['text']:
@@ -760,32 +638,33 @@ class EpubBuilder(StandaloneHTMLBuilder):
if node['level'] > self.config.epub_tocdepth:
continue
if node['level'] == level:
navlist.append(self.new_navpoint(node, level))
navpoint = self.new_navpoint(node, level)
navstack.pop()
navstack[-1].children.append(navpoint)
navstack.append(navpoint)
elif node['level'] == level + 1:
navstack.append(navlist)
navlist = []
level += 1
if lastnode and self.config.epub_tocdup:
# Insert starting point in subtoc with same playOrder
navlist.append(self.new_navpoint(lastnode, level, False))
navlist.append(self.new_navpoint(node, level))
navstack[-1].children.append(self.new_navpoint(lastnode, level, False))
navpoint = self.new_navpoint(node, level)
navstack[-1].children.append(navpoint)
navstack.append(navpoint)
elif node['level'] < level:
while node['level'] < len(navstack):
navstack.pop()
level = node['level']
navpoint = self.new_navpoint(node, level)
navstack[-1].children.append(navpoint)
navstack.append(navpoint)
else:
while node['level'] < level:
subnav = '\n'.join(navlist)
navlist = navstack.pop()
navlist[-1] = self.insert_subnav(navlist[-1], subnav)
level -= 1
navlist.append(self.new_navpoint(node, level))
raise
lastnode = node
while level != 1:
subnav = '\n'.join(navlist)
navlist = navstack.pop()
navlist[-1] = self.insert_subnav(navlist[-1], subnav)
level -= 1
return '\n'.join(navlist)
return navstack[0].children
def toc_metadata(self, level, navpoints):
# type: (int, List[unicode]) -> Dict[unicode, Any]
# type: (int, List[NavPoint]) -> Dict[unicode, Any]
"""Create a dictionary with all metadata for the toc.ncx file
properly escaped.
"""
@@ -799,7 +678,7 @@ class EpubBuilder(StandaloneHTMLBuilder):
def build_toc(self, outdir, outname):
# type: (unicode, unicode) -> None
"""Write the metainfo file toc.ncx."""
self.info('writing %s file...' % outname)
logger.info('writing %s file...', outname)
if self.config.epub_tocscope == 'default':
doctree = self.env.get_and_resolve_doctree(self.config.master_doc,
@@ -813,8 +692,9 @@ class EpubBuilder(StandaloneHTMLBuilder):
navpoints = self.build_navpoints(refnodes)
level = max(item['level'] for item in self.refnodes)
level = min(level, self.config.epub_tocdepth)
with codecs.open(path.join(outdir, outname), 'w', 'utf-8') as f: # type: ignore
f.write(self.toc_template % self.toc_metadata(level, navpoints)) # type: ignore
copy_asset_file(path.join(self.template_dir, 'toc.ncx_t'),
path.join(outdir, outname),
self.toc_metadata(level, navpoints))
def build_epub(self, outdir, outname):
# type: (unicode, unicode) -> None
@@ -823,21 +703,18 @@ class EpubBuilder(StandaloneHTMLBuilder):
It is a zip file with the mimetype file stored uncompressed as the first
entry.
"""
self.info('writing %s file...' % outname)
projectfiles = ['META-INF/container.xml', 'content.opf', 'toc.ncx'] # type: List[unicode] # NOQA
projectfiles.extend(self.files)
epub = zipfile.ZipFile(path.join(outdir, outname), 'w', # type: ignore
zipfile.ZIP_DEFLATED)
epub.write(path.join(outdir, 'mimetype'), 'mimetype', # type: ignore
zipfile.ZIP_STORED)
for file in projectfiles:
fp = path.join(outdir, file)
epub.write(fp, file, zipfile.ZIP_DEFLATED) # type: ignore
epub.close()
logger.info('writing %s file...', outname)
epub_filename = path.join(outdir, outname)
with ZipFile(epub_filename, 'w', ZIP_DEFLATED) as epub: # type: ignore
epub.write(path.join(outdir, 'mimetype'), 'mimetype', ZIP_STORED) # type: ignore
for filename in [u'META-INF/container.xml', u'content.opf', u'toc.ncx']:
epub.write(path.join(outdir, filename), filename, ZIP_DEFLATED) # type: ignore
for filename in self.files:
epub.write(path.join(outdir, filename), filename, ZIP_DEFLATED) # type: ignore
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
app.setup_extension('sphinx.builders.html')
app.add_builder(EpubBuilder)
@@ -865,3 +742,9 @@ def setup(app):
app.add_config_value('epub_max_image_width', 0, 'env')
app.add_config_value('epub_show_urls', 'inline', 'html')
app.add_config_value('epub_use_index', lambda self: self.html_use_index, 'html')
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}

View File

@@ -10,87 +10,43 @@
:license: BSD, see LICENSE for details.
"""
import codecs
from os import path
from datetime import datetime
from collections import namedtuple
from sphinx.config import string_classes
from sphinx import package_dir
from sphinx.config import string_classes, ENUM
from sphinx.builders.epub import EpubBuilder
from sphinx.util import logging
from sphinx.util.fileutil import copy_asset_file
if False:
# For type annotation
from typing import Any, Dict, Iterable, List # NOQA
from docutils import nodes # NOQA
from sphinx.application import Sphinx # NOQA
logger = logging.getLogger(__name__)
# (Fragment) templates from which the metainfo files content.opf, toc.ncx,
# mimetype, and META-INF/container.xml are created.
# This template section also defines strings that are embedded in the html
# output but that may be customized by (re-)setting module attributes,
# e.g. from conf.py.
NavPoint = namedtuple('NavPoint', ['text', 'refuri', 'children'])
NAVIGATION_DOC_TEMPLATE = u'''\
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"\
xmlns:epub="http://www.idpf.org/2007/ops" lang="%(lang)s" xml:lang="%(lang)s">
<head>
<title>%(toc_locale)s</title>
</head>
<body>
<nav epub:type="toc">
<h1>%(toc_locale)s</h1>
<ol>
%(navlist)s
</ol>
</nav>
</body>
</html>
'''
NAVLIST_TEMPLATE = u'''%(indent)s <li><a href="%(refuri)s">%(text)s</a></li>'''
NAVLIST_TEMPLATE_HAS_CHILD = u'''%(indent)s <li><a href="%(refuri)s">%(text)s</a>'''
NAVLIST_TEMPLATE_BEGIN_BLOCK = u'''%(indent)s <ol>'''
NAVLIST_TEMPLATE_END_BLOCK = u'''%(indent)s </ol>
%(indent)s </li>'''
NAVLIST_INDENT = ' '
PACKAGE_DOC_TEMPLATE = u'''\
<?xml version="1.0" encoding="UTF-8"?>
<package xmlns="http://www.idpf.org/2007/opf" version="3.0" xml:lang="%(lang)s"
unique-identifier="%(uid)s"
prefix="ibooks: http://vocabulary.itunes.apple.com/rdf/ibooks/vocabulary-extensions-1.0/">
<metadata xmlns:opf="http://www.idpf.org/2007/opf"
xmlns:dc="http://purl.org/dc/elements/1.1/">
<dc:language>%(lang)s</dc:language>
<dc:title>%(title)s</dc:title>
<dc:description>%(description)s</dc:description>
<dc:creator>%(author)s</dc:creator>
<dc:contributor>%(contributor)s</dc:contributor>
<dc:publisher>%(publisher)s</dc:publisher>
<dc:rights>%(copyright)s</dc:rights>
<dc:identifier id="%(uid)s">%(id)s</dc:identifier>
<dc:date>%(date)s</dc:date>
<meta property="dcterms:modified">%(date)s</meta>
<meta property="ibooks:version">%(version)s</meta>
<meta property="ibooks:specified-fonts">true</meta>
<meta property="ibooks:binding">true</meta>
<meta property="ibooks:scroll-axis">%(ibook_scroll_axis)s</meta>
</metadata>
<manifest>
<item id="ncx" href="toc.ncx" media-type="application/x-dtbncx+xml" />
<item id="nav" href="nav.xhtml"\
media-type="application/xhtml+xml" properties="nav"/>
%(files)s
</manifest>
<spine toc="ncx" page-progression-direction="%(page_progression_direction)s">
%(spine)s
</spine>
<guide>
%(guide)s
</guide>
</package>
'''
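The `%(...)s` placeholders in the package template are filled by %-formatting with the escaped metadata dict that `content_metadata()` builds; a cut-down sketch (the tiny template and the use of `xml.sax.saxutils.escape` as the escaper are illustrative, not the builder's exact helper):

```python
from xml.sax.saxutils import escape

# A cut-down stand-in for PACKAGE_DOC_TEMPLATE; the real template
# carries the full OPF package markup.
TEMPLATE = u'<dc:title>%(title)s</dc:title><dc:language>%(lang)s</dc:language>'

# content_metadata() escapes every value before substitution, so raw
# characters like '&' cannot break the generated XML.
metadata = {'title': escape(u'Tools & Toys'), 'lang': escape(u'en')}
print(TEMPLATE % metadata)
```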
# writing modes
PAGE_PROGRESSION_DIRECTIONS = {
'horizontal': 'ltr',
'vertical': 'rtl',
}
IBOOK_SCROLL_AXIS = {
'horizontal': 'vertical',
'vertical': 'horizontal',
}
THEME_WRITING_MODES = {
'vertical': 'vertical-rl',
'horizontal': 'horizontal-tb',
}
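These three tables replace the old `_page_progression_direction`, `_ibook_scroll_axis`, and `_css_writing_mode` if/elif helpers with plain lookups; a quick standalone check of the mappings (dicts reproduced from above):

```python
# The writing-mode lookup tables, reproduced for a standalone check.
PAGE_PROGRESSION_DIRECTIONS = {'horizontal': 'ltr', 'vertical': 'rtl'}
IBOOK_SCROLL_AXIS = {'horizontal': 'vertical', 'vertical': 'horizontal'}
THEME_WRITING_MODES = {'vertical': 'vertical-rl', 'horizontal': 'horizontal-tb'}

for mode in ('horizontal', 'vertical'):
    print(mode,
          PAGE_PROGRESSION_DIRECTIONS.get(mode),
          IBOOK_SCROLL_AXIS.get(mode),
          THEME_WRITING_MODES.get(mode))
```

Note one behavioral difference: `.get()` returns `None` for an unknown mode, where the old helpers returned the string `'default'`; the new `ENUM('horizontal', 'vertical')` validation on `epub_writing_mode` rules that case out before any lookup happens.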
DOCTYPE = u'''<!DOCTYPE html>'''
# The epub3 publisher
class Epub3Builder(EpubBuilder):
"""
@@ -102,18 +58,14 @@ class Epub3Builder(EpubBuilder):
"""
name = 'epub'
navigation_doc_template = NAVIGATION_DOC_TEMPLATE
navlist_template = NAVLIST_TEMPLATE
navlist_template_has_child = NAVLIST_TEMPLATE_HAS_CHILD
navlist_template_begin_block = NAVLIST_TEMPLATE_BEGIN_BLOCK
navlist_template_end_block = NAVLIST_TEMPLATE_END_BLOCK
navlist_indent = NAVLIST_INDENT
content_template = PACKAGE_DOC_TEMPLATE
template_dir = path.join(package_dir, 'templates', 'epub3')
doctype = DOCTYPE
# Finish by building the epub file
def handle_finish(self):
# type: () -> None
"""Create the metainfo files and finally the epub."""
self.validate_config_value()
self.get_toc()
self.build_mimetype(self.outdir, 'mimetype')
self.build_container(self.outdir, 'META-INF/container.xml')
@@ -122,68 +74,69 @@ class Epub3Builder(EpubBuilder):
self.build_toc(self.outdir, 'toc.ncx')
self.build_epub(self.outdir, self.config.epub_basename + '.epub')
def content_metadata(self, files, spine, guide):
def validate_config_value(self):
# <package> lang attribute, dc:language
if not self.app.config.epub_language:
self.app.warn(
'conf value "epub_language" (or "language") '
'should not be empty for EPUB3')
# <package> unique-identifier attribute
if not self.app.config.epub_uid:
self.app.warn('conf value "epub_uid" should not be empty for EPUB3')
# dc:title
if not self.app.config.epub_title:
self.app.warn(
'conf value "epub_title" (or "html_title") '
'should not be empty for EPUB3')
# dc:creator
if not self.app.config.epub_author:
self.app.warn('conf value "epub_author" should not be empty for EPUB3')
# dc:contributor
if not self.app.config.epub_contributor:
self.app.warn('conf value "epub_contributor" should not be empty for EPUB3')
# dc:description
if not self.app.config.epub_description:
self.app.warn('conf value "epub_description" should not be empty for EPUB3')
# dc:publisher
if not self.app.config.epub_publisher:
self.app.warn('conf value "epub_publisher" should not be empty for EPUB3')
# dc:rights
if not self.app.config.epub_copyright:
            self.app.warn(
                'conf value "epub_copyright" (or "copyright") '
                'should not be empty for EPUB3')
# dc:identifier
if not self.app.config.epub_identifier:
self.app.warn('conf value "epub_identifier" should not be empty for EPUB3')
# meta ibooks:version
if not self.app.config.version:
self.app.warn('conf value "version" should not be empty for EPUB3')
def content_metadata(self):
# type: () -> Dict
"""Create a dictionary with all metadata for the content.opf
file properly escaped.
"""
metadata = super(Epub3Builder, self).content_metadata(
files, spine, guide)
writing_mode = self.config.epub_writing_mode
metadata = super(Epub3Builder, self).content_metadata()
metadata['description'] = self.esc(self.config.epub_description)
metadata['contributor'] = self.esc(self.config.epub_contributor)
metadata['page_progression_direction'] = self._page_progression_direction()
metadata['ibook_scroll_axis'] = self._ibook_scroll_axis()
metadata['page_progression_direction'] = PAGE_PROGRESSION_DIRECTIONS.get(writing_mode)
metadata['ibook_scroll_axis'] = IBOOK_SCROLL_AXIS.get(writing_mode)
metadata['date'] = self.esc(datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ"))
metadata['version'] = self.esc(self.config.version)
return metadata
def _page_progression_direction(self):
if self.config.epub_writing_mode == 'horizontal':
page_progression_direction = 'ltr'
elif self.config.epub_writing_mode == 'vertical':
page_progression_direction = 'rtl'
else:
page_progression_direction = 'default'
return page_progression_direction
def _ibook_scroll_axis(self):
if self.config.epub_writing_mode == 'horizontal':
scroll_axis = 'vertical'
elif self.config.epub_writing_mode == 'vertical':
scroll_axis = 'horizontal'
else:
scroll_axis = 'default'
return scroll_axis
def _css_writing_mode(self):
if self.config.epub_writing_mode == 'vertical':
editing_mode = 'vertical-rl'
else:
editing_mode = 'horizontal-tb'
return editing_mode
def prepare_writing(self, docnames):
# type: (Iterable[unicode]) -> None
super(Epub3Builder, self).prepare_writing(docnames)
self.globalcontext['theme_writing_mode'] = self._css_writing_mode()
def new_navlist(self, node, level, has_child):
"""Create a new entry in the toc from the node at given level."""
# XXX Modifies the node
self.tocid += 1
node['indent'] = self.navlist_indent * level
if has_child:
return self.navlist_template_has_child % node
else:
return self.navlist_template % node
writing_mode = self.config.epub_writing_mode
self.globalcontext['theme_writing_mode'] = THEME_WRITING_MODES.get(writing_mode)
def begin_navlist_block(self, level):
return self.navlist_template_begin_block % {
"indent": self.navlist_indent * level
}
def end_navlist_block(self, level):
return self.navlist_template_end_block % {"indent": self.navlist_indent * level}
def build_navlist(self, nodes):
def build_navlist(self, navnodes):
# type: (List[nodes.Node]) -> List[NavPoint]
"""Create the toc navigation structure.
        This method is almost the same as the build_navpoints method in epub.py.
@@ -193,10 +146,10 @@ class Epub3Builder(EpubBuilder):
The difference from build_navpoints method is templates which are used
when generating navigation documents.
"""
navlist = []
level = 1
usenodes = []
for node in nodes:
navstack = [] # type: List[NavPoint]
navstack.append(NavPoint('', '', []))
level = 0
for node in navnodes:
if not node['text']:
continue
file = node['refuri'].split('#')[0]
@@ -204,38 +157,42 @@ class Epub3Builder(EpubBuilder):
continue
if node['level'] > self.config.epub_tocdepth:
continue
usenodes.append(node)
for i, node in enumerate(usenodes):
curlevel = node['level']
if curlevel == level + 1:
navlist.append(self.begin_navlist_block(level))
while curlevel < level:
level -= 1
navlist.append(self.end_navlist_block(level))
level = curlevel
if i != len(usenodes) - 1 and usenodes[i + 1]['level'] > level:
has_child = True
navpoint = NavPoint(node['text'], node['refuri'], [])
if node['level'] == level:
navstack.pop()
navstack[-1].children.append(navpoint)
navstack.append(navpoint)
elif node['level'] == level + 1:
level += 1
navstack[-1].children.append(navpoint)
navstack.append(navpoint)
elif node['level'] < level:
while node['level'] < len(navstack):
navstack.pop()
level = node['level']
navstack[-1].children.append(navpoint)
navstack.append(navpoint)
else:
has_child = False
navlist.append(self.new_navlist(node, level, has_child))
while level != 1:
level -= 1
navlist.append(self.end_navlist_block(level))
return '\n'.join(navlist)
raise
return navstack[0].children
def navigation_doc_metadata(self, navlist):
# type: (List[NavPoint]) -> Dict
"""Create a dictionary with all metadata for the nav.xhtml file
properly escaped.
"""
metadata = {}
metadata = {} # type: Dict
metadata['lang'] = self.esc(self.config.epub_language)
metadata['toc_locale'] = self.esc(self.guide_titles['toc'])
metadata['navlist'] = navlist
return metadata
def build_navigation_doc(self, outdir, outname):
# type: (unicode, unicode) -> None
"""Write the metainfo file nav.xhtml."""
self.info('writing %s file...' % outname)
logger.info('writing %s file...', outname)
if self.config.epub_tocscope == 'default':
doctree = self.env.get_and_resolve_doctree(
@@ -247,37 +204,27 @@ class Epub3Builder(EpubBuilder):
# 'includehidden'
refnodes = self.refnodes
navlist = self.build_navlist(refnodes)
with codecs.open(path.join(outdir, outname), 'w', 'utf-8') as f:
f.write(self.navigation_doc_template %
self.navigation_doc_metadata(navlist))
copy_asset_file(path.join(self.template_dir, 'nav.xhtml_t'),
path.join(outdir, outname),
self.navigation_doc_metadata(navlist))
# Add nav.xhtml to epub file
if outname not in self.files:
self.files.append(outname)
def validate_config_values(app):
if app.config.epub3_description is not None:
app.warn('epub3_description is deprecated. Use epub_description instead.')
app.config.epub_description = app.config.epub3_description
if app.config.epub3_contributor is not None:
app.warn('epub3_contributor is deprecated. Use epub_contributor instead.')
app.config.epub_contributor = app.config.epub3_contributor
if app.config.epub3_page_progression_direction is not None:
app.warn('epub3_page_progression_direction option is deprecated'
' from 1.5. Use epub_writing_mode instead.')
def setup(app):
# type: (Sphinx) -> Dict[unicode, Any]
app.setup_extension('sphinx.builders.epub')
app.add_builder(Epub3Builder)
app.connect('builder-inited', validate_config_values)
app.add_config_value('epub_description', '', 'epub3', string_classes)
app.add_config_value('epub_description', 'unknown', 'epub3', string_classes)
app.add_config_value('epub_contributor', 'unknown', 'epub3', string_classes)
app.add_config_value('epub_writing_mode', 'horizontal', 'epub3', string_classes)
app.add_config_value('epub3_description', None, 'epub3', string_classes)
app.add_config_value('epub3_contributor', None, 'epub3', string_classes)
app.add_config_value('epub3_page_progression_direction', None, 'epub3', string_classes)
app.add_config_value('epub_writing_mode', 'horizontal', 'epub3',
ENUM('horizontal', 'vertical'))
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}

View File

@@ -18,24 +18,27 @@ from datetime import datetime, tzinfo, timedelta
from collections import defaultdict
from uuid import uuid4
from six import iteritems
from six import iteritems, StringIO
from sphinx.builders import Builder
from sphinx.util import split_index_msg
from sphinx.util import split_index_msg, logging, status_iterator
from sphinx.util.tags import Tags
from sphinx.util.nodes import extract_messages, traverse_translatable_index
from sphinx.util.osutil import safe_relpath, ensuredir, canon_path
from sphinx.util.i18n import find_catalog
from sphinx.util.console import darkgreen, purple, bold # type: ignore
from sphinx.util.console import bold # type: ignore
from sphinx.locale import pairindextypes
if False:
# For type annotation
from typing import Any, Iterable, Tuple # NOQA
from typing import Any, Dict, Iterable, List, Set, Tuple # NOQA
from docutils import nodes # NOQA
from sphinx.util.i18n import CatalogInfo # NOQA
from sphinx.application import Sphinx # NOQA
logger = logging.getLogger(__name__)
POHEADER = r"""
# SOME DESCRIPTIVE TITLE.
# Copyright (C) %(copyright)s
@@ -189,6 +192,20 @@ class LocalTimeZone(tzinfo):
ltz = LocalTimeZone()
def should_write(filepath, new_content):
if not path.exists(filepath):
return True
    try:
        with open(filepath, 'r', encoding='utf-8') as oldpot:  # type: ignore
            old_content = oldpot.read()
            old_header_index = old_content.index('"POT-Creation-Date:')
            new_header_index = new_content.index('"POT-Creation-Date:')
            old_body_index = old_content.index('"PO-Revision-Date:')
            new_body_index = new_content.index('"PO-Revision-Date:')
            return ((old_content[:old_header_index] != new_content[:new_header_index]) or
                    (new_content[new_body_index:] != old_content[old_body_index:]))
    except ValueError:
        pass
    return True
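The idea behind `should_write` is that a .pot file whose only change is the `POT-Creation-Date` timestamp should not be rewritten (avoiding needless VCS churn). A self-contained sketch of that comparison (the `pot_changed` helper name is illustrative, not Sphinx's API):

```python
def pot_changed(old_content, new_content):
    """True if the catalogs differ anywhere except the creation-date header.

    Everything before the "POT-Creation-Date" line and everything from
    "PO-Revision-Date" onward must match; the timestamp line between
    them is ignored.
    """
    try:
        old_header = old_content.index('"POT-Creation-Date:')
        new_header = new_content.index('"POT-Creation-Date:')
        old_body = old_content.index('"PO-Revision-Date:')
        new_body = new_content.index('"PO-Revision-Date:')
    except ValueError:        # header line missing: play safe, report changed
        return True
    return (old_content[:old_header] != new_content[:new_header] or
            old_content[old_body:] != new_content[new_body:])

old = 'h\n"POT-Creation-Date: 2017-01-01"\n"PO-Revision-Date: x"\nmsgid "a"\n'
new = 'h\n"POT-Creation-Date: 2017-02-02"\n"PO-Revision-Date: x"\nmsgid "a"\n'
print(pot_changed(old, new))  # only the timestamp differs -> False
```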
class MessageCatalogBuilder(I18nBuilder):
"""
Builds gettext-style message catalogs (.pot files).
@@ -216,13 +233,13 @@ class MessageCatalogBuilder(I18nBuilder):
def _extract_from_template(self):
# type: () -> None
files = self._collect_templates()
self.info(bold('building [%s]: ' % self.name), nonl=1)
self.info('targets for %d template files' % len(files))
logger.info(bold('building [%s]: ' % self.name), nonl=1)
logger.info('targets for %d template files', len(files))
extract_translations = self.templates.environment.extract_translations
for template in self.app.status_iterator(
files, 'reading templates... ', purple, len(files)):
for template in status_iterator(files, 'reading templates... ', "purple", # type: ignore # NOQA
len(files), self.app.verbosity):
with open(template, 'r', encoding='utf-8') as f: # type: ignore
context = f.read()
for line, meth, msg in extract_translations(context):
@@ -241,43 +258,50 @@ class MessageCatalogBuilder(I18nBuilder):
version = self.config.version,
copyright = self.config.copyright,
project = self.config.project,
ctime = datetime.fromtimestamp( # type: ignore
ctime = datetime.fromtimestamp(
timestamp, ltz).strftime('%Y-%m-%d %H:%M%z'),
)
for textdomain, catalog in self.app.status_iterator(
iteritems(self.catalogs), "writing message catalogs... ",
darkgreen, len(self.catalogs),
lambda textdomain__: textdomain__[0]):
for textdomain, catalog in status_iterator(iteritems(self.catalogs), # type: ignore
"writing message catalogs... ",
"darkgreen", len(self.catalogs),
self.app.verbosity,
lambda textdomain__: textdomain__[0]):
# noop if config.gettext_compact is set
ensuredir(path.join(self.outdir, path.dirname(textdomain)))
pofn = path.join(self.outdir, textdomain + '.pot')
with open(pofn, 'w', encoding='utf-8') as pofile: # type: ignore
pofile.write(POHEADER % data) # type: ignore
output = StringIO()
output.write(POHEADER % data) # type: ignore
for message in catalog.messages:
positions = catalog.metadata[message]
for message in catalog.messages:
positions = catalog.metadata[message]
if self.config.gettext_location:
# generate "#: file1:line1\n#: file2:line2 ..."
pofile.write("#: %s\n" % "\n#: ".join( # type: ignore
"%s:%s" % (canon_path(
safe_relpath(source, self.outdir)), line)
for source, line, _ in positions))
if self.config.gettext_uuid:
# generate "# uuid1\n# uuid2\n ..."
pofile.write("# %s\n" % "\n# ".join( # type: ignore
uid for _, _, uid in positions))
if self.config.gettext_location:
# generate "#: file1:line1\n#: file2:line2 ..."
output.write("#: %s\n" % "\n#: ".join( # type: ignore
"%s:%s" % (canon_path(
safe_relpath(source, self.outdir)), line)
for source, line, _ in positions))
if self.config.gettext_uuid:
# generate "# uuid1\n# uuid2\n ..."
output.write("# %s\n" % "\n# ".join( # type: ignore
uid for _, _, uid in positions))
# message contains *one* line of text ready for translation
message = message.replace('\\', r'\\'). \
replace('"', r'\"'). \
replace('\n', '\\n"\n"')
pofile.write('msgid "%s"\nmsgstr ""\n\n' % message) # type: ignore
# message contains *one* line of text ready for translation
message = message.replace('\\', r'\\'). \
replace('"', r'\"'). \
replace('\n', '\\n"\n"')
output.write('msgid "%s"\nmsgstr ""\n\n' % message) # type: ignore
content = output.getvalue()
if should_write(pofn, content):
with open(pofn, 'w', encoding='utf-8') as pofile: # type: ignore
pofile.write(content)
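The msgid escaping in the loop above (backslash first, then double quote, then folding real newlines into PO continuation lines) can be exercised on its own; `escape_msgid` is just the same `replace` chain extracted into an illustrative helper:

```python
def escape_msgid(message):
    # Order matters: escape backslashes first so later escapes are not
    # double-escaped, then quotes, then fold real newlines into the
    # PO continuation-line form ("...\n" <newline> "...").
    return message.replace('\\', r'\\'). \
        replace('"', r'\"'). \
        replace('\n', '\\n"\n"')

# The quotes and the newline come out escaped for PO syntax:
print('msgid "%s"\nmsgstr ""\n' % escape_msgid('say "hi"\nbye'))
```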
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
app.add_builder(MessageCatalogBuilder)
app.add_config_value('gettext_compact', True, 'gettext')
@@ -285,3 +309,9 @@ def setup(app):
app.add_config_value('gettext_uuid', False, 'gettext')
app.add_config_value('gettext_auto_build', True, 'env')
app.add_config_value('gettext_additional_targets', [], 'env')
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}

View File

@@ -10,8 +10,8 @@
"""
import os
import re
import sys
import zlib
import codecs
import posixpath
from os import path
@@ -28,11 +28,13 @@ from docutils.frontend import OptionParser
from docutils.readers.doctree import Reader as DoctreeReader
from sphinx import package_dir, __display_version__
from sphinx.util import jsonimpl
from sphinx.util import jsonimpl, logging, status_iterator
from sphinx.util.i18n import format_date
from sphinx.util.inventory import InventoryFile
from sphinx.util.osutil import SEP, os_path, relative_uri, ensuredir, \
movefile, copyfile
from sphinx.util.nodes import inline_all_toctrees
from sphinx.util.docutils import is_html5_writer_available, __version_info__
from sphinx.util.fileutil import copy_asset
from sphinx.util.matching import patmatch, Matcher, DOTFILES
from sphinx.config import string_classes
@@ -42,21 +44,33 @@ from sphinx.theming import Theme
from sphinx.builders import Builder
from sphinx.application import ENV_PICKLE_FILENAME
from sphinx.highlighting import PygmentsBridge
from sphinx.util.console import bold, darkgreen, brown # type: ignore
from sphinx.util.console import bold, darkgreen # type: ignore
from sphinx.writers.html import HTMLWriter, HTMLTranslator, \
SmartyPantsHTMLTranslator
from sphinx.environment.adapters.toctree import TocTree
from sphinx.environment.adapters.indexentries import IndexEntries
if False:
# For type annotation
from typing import Any, Iterable, Iterator, Tuple, Union # NOQA
from typing import Any, Dict, Iterable, Iterator, List, Type, Tuple, Union # NOQA
from sphinx.domains import Domain, Index # NOQA
from sphinx.application import Sphinx # NOQA
# Experimental HTML5 Writer
if is_html5_writer_available():
from sphinx.writers.html5 import HTML5Translator, SmartyPantsHTML5Translator
html5_ready = True
else:
html5_ready = False
#: the filename for the inventory of objects
INVENTORY_FILENAME = 'objects.inv'
#: the filename for the "last build" file (for serializing builders)
LAST_BUILD_FILENAME = 'last_build'
logger = logging.getLogger(__name__)
return_codes_re = re.compile('[\r\n]+')
def get_stable_hash(obj):
# type: (Any) -> unicode
@@ -82,7 +96,7 @@ class StandaloneHTMLBuilder(Builder):
allow_parallel = True
out_suffix = '.html'
link_suffix = '.html' # defaults to matching out_suffix
indexer_format = js_index
indexer_format = js_index # type: Any
indexer_dumps_unicode = True
# create links to original images from images [True/False]
html_scaled_image_link = True
@@ -104,7 +118,7 @@ class StandaloneHTMLBuilder(Builder):
css_props = {} # type: Dict[unicode, unicode/bool]
imgpath = None # type: unicode
domain_indices = [] # type: List[Tuple[unicode, Index, unicode, bool]]
domain_indices = [] # type: List[Tuple[unicode, Type[Index], List[Tuple[unicode, List[List[Union[unicode, int]]]]], bool]] # NOQA
default_sidebars = ['localtoc.html', 'relations.html',
'sourcelink.html', 'searchbox.html']
@@ -140,15 +154,22 @@ class StandaloneHTMLBuilder(Builder):
self.script_files.append('_static/translations.js')
self.use_index = self.get_builder_config('use_index', 'html')
if self.config.html_experimental_html5_writer and not html5_ready:
            self.app.warn(' '.join((
                'html_experimental_html5_writer is set, but the installed Docutils is too old.',
                'Docutils 0.13 or newer is required, but the current version is %s.',
            )) % '.'.join(map(str, __version_info__)))
def _get_translations_js(self):
# type: () -> unicode
candidates = [path.join(package_dir, 'locale', self.config.language,
candidates = [path.join(dir, self.config.language,
'LC_MESSAGES', 'sphinx.js')
for dir in self.config.locale_dirs] + \
[path.join(package_dir, 'locale', self.config.language,
'LC_MESSAGES', 'sphinx.js'),
path.join(sys.prefix, 'share/sphinx/locale',
self.config.language, 'sphinx.js')] + \
[path.join(dir, self.config.language,
'LC_MESSAGES', 'sphinx.js')
for dir in self.config.locale_dirs]
self.config.language, 'sphinx.js')]
for jsfile in candidates:
if path.isfile(jsfile):
return jsfile
@@ -160,10 +181,9 @@ class StandaloneHTMLBuilder(Builder):
def init_templates(self):
# type: () -> None
Theme.init_themes(self.confdir, self.config.html_theme_path,
warn=self.warn)
Theme.init_themes(self.confdir, self.config.html_theme_path)
themename, themeoptions = self.get_theme_config()
self.theme = Theme(themename, warn=self.warn)
self.theme = Theme(themename)
self.theme_options = themeoptions.copy()
self.create_template_bridge()
self.templates.init(self, self.theme)
@@ -183,16 +203,20 @@ class StandaloneHTMLBuilder(Builder):
def init_translator_class(self):
# type: () -> None
if self.translator_class is None:
if self.config.html_use_smartypants:
self.translator_class = SmartyPantsHTMLTranslator
if self.config.html_experimental_html5_writer and html5_ready:
if self.config.html_use_smartypants:
self.translator_class = SmartyPantsHTML5Translator
else:
self.translator_class = HTML5Translator
else:
self.translator_class = HTMLTranslator
if self.config.html_use_smartypants:
self.translator_class = SmartyPantsHTMLTranslator
else:
self.translator_class = HTMLTranslator
def get_outdated_docs(self): # type: ignore
def get_outdated_docs(self):
# type: () -> Iterator[unicode]
cfgdict = dict((name, self.config[name])
for (name, desc) in iteritems(self.config.values)
if desc[1] == 'html')
cfgdict = dict((confval.name, confval.value) for confval in self.config.filter('html'))
self.config_hash = get_stable_hash(cfgdict)
self.tags_hash = get_stable_hash(sorted(self.tags)) # type: ignore
old_config_hash = old_tags_hash = ''
@@ -209,8 +233,8 @@ class StandaloneHTMLBuilder(Builder):
if tag != 'tags':
raise ValueError
except ValueError:
self.warn('unsupported build info format in %r, building all' %
path.join(self.outdir, '.buildinfo'))
logger.warning('unsupported build info format in %r, building all',
path.join(self.outdir, '.buildinfo'))
except Exception:
pass
if old_config_hash != self.config_hash or \
@@ -297,14 +321,10 @@ class StandaloneHTMLBuilder(Builder):
domain = None # type: Domain
domain = self.env.domains[domain_name]
for indexcls in domain.indices:
indexname = '%s-%s' % (domain.name, indexcls.name)
indexname = '%s-%s' % (domain.name, indexcls.name) # type: unicode
if isinstance(indices_config, list):
if indexname not in indices_config:
continue
# deprecated config value
if indexname == 'py-modindex' and \
not self.config.html_use_modindex:
continue
content, collapse = indexcls(domain).generate()
if content:
self.domain_indices.append(
@@ -315,8 +335,7 @@ class StandaloneHTMLBuilder(Builder):
lufmt = self.config.html_last_updated_fmt
if lufmt is not None:
self.last_updated = format_date(lufmt or _('%b %d, %Y'),
language=self.config.language,
warn=self.warn)
language=self.config.language)
else:
self.last_updated = None
@@ -326,17 +345,17 @@ class StandaloneHTMLBuilder(Builder):
favicon = self.config.html_favicon and \
path.basename(self.config.html_favicon) or ''
if favicon and os.path.splitext(favicon)[1] != '.ico':
self.warn('html_favicon is not an .ico file')
logger.warning('html_favicon is not an .ico file')
if not isinstance(self.config.html_use_opensearch, string_types):
self.warn('html_use_opensearch config value must now be a string')
logger.warning('html_use_opensearch config value must now be a string')
self.relations = self.env.collect_relations()
rellinks = []
rellinks = [] # type: List[Tuple[unicode, unicode, unicode, unicode]]
if self.use_index:
rellinks.append(('genindex', _('General Index'), 'I', _('index')))
for indexname, indexcls, content, collapse in self.domain_indices: # type: ignore
for indexname, indexcls, content, collapse in self.domain_indices:
# if it has a short name
if indexcls.shortname:
rellinks.append((indexname, indexcls.localname,
@@ -352,7 +371,7 @@ class StandaloneHTMLBuilder(Builder):
self.globalcontext = dict(
embedded = self.embedded,
project = self.config.project,
release = self.config.release,
release = return_codes_re.sub('', self.config.release),
version = self.config.version,
last_updated = self.last_updated,
copyright = self.config.copyright,
@@ -377,6 +396,7 @@ class StandaloneHTMLBuilder(Builder):
parents = [],
logo = logo,
favicon = favicon,
html5_doctype = self.config.html_experimental_html5_writer and html5_ready,
) # type: Dict[unicode, Any]
if self.theme:
self.globalcontext.update(
@@ -446,7 +466,7 @@ class StandaloneHTMLBuilder(Builder):
meta = self.env.metadata.get(docname)
# local TOC and global TOC tree
self_toc = self.env.get_toc_for(docname, self)
self_toc = TocTree(self.env).get_toc_for(docname, self)
toc = self.render_partial(self_toc)['fragment']
return dict(
@@ -506,7 +526,7 @@ class StandaloneHTMLBuilder(Builder):
def gen_indices(self):
# type: () -> None
self.info(bold('generating indices...'), nonl=1)
logger.info(bold('generating indices...'), nonl=1)
# the global general index
if self.use_index:
@@ -515,7 +535,7 @@ class StandaloneHTMLBuilder(Builder):
# the global domain-specific indices
self.write_domain_indices()
self.info()
logger.info('')
def gen_additional_pages(self):
# type: () -> None
@@ -524,31 +544,31 @@ class StandaloneHTMLBuilder(Builder):
for pagename, context, template in pagelist:
self.handle_page(pagename, context, template)
self.info(bold('writing additional pages...'), nonl=1)
logger.info(bold('writing additional pages...'), nonl=1)
# additional pages from conf.py
for pagename, template in self.config.html_additional_pages.items():
self.info(' '+pagename, nonl=1)
logger.info(' ' + pagename, nonl=1)
self.handle_page(pagename, {}, template)
# the search page
if self.search:
self.info(' search', nonl=1)
logger.info(' search', nonl=1)
self.handle_page('search', {}, 'search.html')
# the opensearch xml file
if self.config.html_use_opensearch and self.search:
self.info(' opensearch', nonl=1)
logger.info(' opensearch', nonl=1)
fn = path.join(self.outdir, '_static', 'opensearch.xml')
self.handle_page('opensearch', {}, 'opensearch.xml', outfilename=fn)
self.info()
logger.info('')
def write_genindex(self):
# type: () -> None
# the total count of lines for each index letter, used to distribute
# the entries into two columns
genindex = self.env.create_index(self)
genindex = IndexEntries(self.env).create_index(self)
indexcounts = []
for _k, entries in genindex:
indexcounts.append(sum(1 + len(subitems)
@@ -559,7 +579,7 @@ class StandaloneHTMLBuilder(Builder):
genindexcounts = indexcounts,
split_index = self.config.html_split_index,
)
self.info(' genindex', nonl=1)
logger.info(' genindex', nonl=1)
if self.config.html_split_index:
self.handle_page('genindex', genindexcontext,
@@ -582,7 +602,7 @@ class StandaloneHTMLBuilder(Builder):
content = content,
collapse_index = collapse,
)
self.info(' ' + indexname, nonl=1)
logger.info(' ' + indexname, nonl=1)
self.handle_page(indexname, indexcontext, 'domainindex.html')
def copy_image_files(self):
@@ -590,43 +610,43 @@ class StandaloneHTMLBuilder(Builder):
# copy image files
if self.images:
ensuredir(path.join(self.outdir, self.imagedir))
for src in self.app.status_iterator(self.images, 'copying images... ',
brown, len(self.images)):
for src in status_iterator(self.images, 'copying images... ', "brown",
len(self.images), self.app.verbosity):
dest = self.images[src]
try:
copyfile(path.join(self.srcdir, src),
path.join(self.outdir, self.imagedir, dest))
except Exception as err:
self.warn('cannot copy image file %r: %s' %
(path.join(self.srcdir, src), err))
logger.warning('cannot copy image file %r: %s',
path.join(self.srcdir, src), err)
def copy_download_files(self):
# type: () -> None
def to_relpath(f):
# type: (unicode) -> unicode
return relative_path(self.srcdir, f)
# copy downloadable files
if self.env.dlfiles:
ensuredir(path.join(self.outdir, '_downloads'))
for src in self.app.status_iterator(self.env.dlfiles,
'copying downloadable files... ',
brown, len(self.env.dlfiles),
stringify_func=to_relpath):
for src in status_iterator(self.env.dlfiles, 'copying downloadable files... ',
"brown", len(self.env.dlfiles), self.app.verbosity,
stringify_func=to_relpath):
dest = self.env.dlfiles[src][1]
try:
copyfile(path.join(self.srcdir, src),
path.join(self.outdir, '_downloads', dest))
except Exception as err:
self.warn('cannot copy downloadable file %r: %s' %
(path.join(self.srcdir, src), err))
logger.warning('cannot copy downloadable file %r: %s',
path.join(self.srcdir, src), err)
def copy_static_files(self):
# type: () -> None
# copy static files
self.info(bold('copying static files... '), nonl=True)
logger.info(bold('copying static files... '), nonl=True)
ensuredir(path.join(self.outdir, '_static'))
# first, create pygments style file
with open(path.join(self.outdir, '_static', 'pygments.css'), 'w') as f:
f.write(self.highlighter.get_stylesheet())
f.write(self.highlighter.get_stylesheet()) # type: ignore
# then, copy translations JavaScript file
if self.config.language is not None:
jsfile = self._get_translations_js()
@@ -657,7 +677,7 @@ class StandaloneHTMLBuilder(Builder):
for static_path in self.config.html_static_path:
entry = path.join(self.confdir, static_path)
if not path.exists(entry):
self.warn('html_static_path entry %r does not exist' % entry)
logger.warning('html_static_path entry %r does not exist', entry)
continue
copy_asset(entry, path.join(self.outdir, '_static'), excluded,
context=ctx, renderer=self.templates)
@@ -666,7 +686,7 @@ class StandaloneHTMLBuilder(Builder):
logobase = path.basename(self.config.html_logo)
logotarget = path.join(self.outdir, '_static', logobase)
if not path.isfile(path.join(self.confdir, self.config.html_logo)):
self.warn('logo file %r does not exist' % self.config.html_logo)
logger.warning('logo file %r does not exist', self.config.html_logo)
elif not path.isfile(logotarget):
copyfile(path.join(self.confdir, self.config.html_logo),
logotarget)
@@ -674,26 +694,26 @@ class StandaloneHTMLBuilder(Builder):
iconbase = path.basename(self.config.html_favicon)
icontarget = path.join(self.outdir, '_static', iconbase)
if not path.isfile(path.join(self.confdir, self.config.html_favicon)):
self.warn('favicon file %r does not exist' % self.config.html_favicon)
logger.warning('favicon file %r does not exist', self.config.html_favicon)
elif not path.isfile(icontarget):
copyfile(path.join(self.confdir, self.config.html_favicon),
icontarget)
self.info('done')
logger.info('done')
def copy_extra_files(self):
# type: () -> None
# copy html_extra_path files
self.info(bold('copying extra files... '), nonl=True)
logger.info(bold('copying extra files... '), nonl=True)
excluded = Matcher(self.config.exclude_patterns)
for extra_path in self.config.html_extra_path:
entry = path.join(self.confdir, extra_path)
if not path.exists(entry):
self.warn('html_extra_path entry %r does not exist' % entry)
logger.warning('html_extra_path entry %r does not exist', entry)
continue
copy_asset(entry, self.outdir, excluded)
self.info('done')
logger.info('done')
def write_buildinfo(self):
# type: () -> None
@@ -738,7 +758,7 @@ class StandaloneHTMLBuilder(Builder):
reference.append(node)
def load_indexer(self, docnames):
# type: (Set[unicode]) -> None
# type: (Iterable[unicode]) -> None
keep = set(self.env.all_docs) - set(docnames)
try:
searchindexfn = path.join(self.outdir, self.searchindex_filename)
@@ -750,9 +770,9 @@ class StandaloneHTMLBuilder(Builder):
self.indexer.load(f, self.indexer_format) # type: ignore
except (IOError, OSError, ValueError):
if keep:
self.warn('search index couldn\'t be loaded, but not all '
'documents will be built: the index will be '
'incomplete.')
logger.warning('search index couldn\'t be loaded, but not all '
'documents will be built: the index will be '
'incomplete.')
# delete all entries for files that will be rebuilt
self.indexer.prune(keep)
@@ -765,13 +785,13 @@ class StandaloneHTMLBuilder(Builder):
self.indexer.feed(pagename, filename, title, doctree)
except TypeError:
# fallback for old search-adapters
self.indexer.feed(pagename, title, doctree)
self.indexer.feed(pagename, title, doctree) # type: ignore
def _get_local_toctree(self, docname, collapse=True, **kwds):
# type: (unicode, bool, Any) -> unicode
if 'includehidden' not in kwds:
kwds['includehidden'] = False
return self.render_partial(self.env.get_toctree_for(
return self.render_partial(TocTree(self.env).get_toctree_for(
docname, self, collapse, **kwds))['fragment']
def get_outfilename(self, pagename):
@@ -781,6 +801,7 @@ class StandaloneHTMLBuilder(Builder):
def add_sidebars(self, pagename, ctx):
# type: (unicode, Dict) -> None
def has_wildcard(pattern):
# type: (unicode) -> bool
return any(char in pattern for char in '*?[')
sidebars = None
matched = None
@@ -791,9 +812,9 @@ class StandaloneHTMLBuilder(Builder):
if has_wildcard(pattern):
# warn if both patterns contain wildcards
if has_wildcard(matched):
self.warn('page %s matches two patterns in '
'html_sidebars: %r and %r' %
(pagename, matched, pattern))
logger.warning('page %s matches two patterns in '
'html_sidebars: %r and %r',
pagename, matched, pattern)
# else the already matched pattern is more specific
# than the present one, because it contains no wildcard
continue
@@ -822,12 +843,14 @@ class StandaloneHTMLBuilder(Builder):
ctx['warn'] = self.warn
# current_page_name is backwards compatibility
ctx['pagename'] = ctx['current_page_name'] = pagename
ctx['encoding'] = self.config.html_output_encoding
default_baseuri = self.get_target_uri(pagename)
# in the singlehtml builder, default_baseuri still contains an #anchor
# part, which relative_uri doesn't really like...
default_baseuri = default_baseuri.rsplit('#', 1)[0]
def pathto(otheruri, resource=False, baseuri=default_baseuri):
# type: (unicode, bool, unicode) -> unicode
if resource and '://' in otheruri:
# allow non-local resources given by scheme
return otheruri
@@ -840,6 +863,7 @@ class StandaloneHTMLBuilder(Builder):
ctx['pathto'] = pathto
def hasdoc(name):
# type: (unicode) -> bool
if name in self.env.all_docs:
return True
elif name == 'search' and self.search:
@@ -849,14 +873,11 @@ class StandaloneHTMLBuilder(Builder):
return False
ctx['hasdoc'] = hasdoc
if self.name != 'htmlhelp':
ctx['encoding'] = encoding = self.config.html_output_encoding
else:
ctx['encoding'] = encoding = self.encoding
ctx['toctree'] = lambda **kw: self._get_local_toctree(pagename, **kw)
self.add_sidebars(pagename, ctx)
ctx.update(addctx)
self.update_page_context(pagename, templatename, ctx, event_arg)
newtmpl = self.app.emit_firstresult('html-page-context', pagename,
templatename, ctx, event_arg)
if newtmpl:
@@ -865,9 +886,9 @@ class StandaloneHTMLBuilder(Builder):
try:
output = self.templates.render(templatename, ctx)
except UnicodeError:
self.warn("a Unicode error occurred when rendering the page %s. "
"Please make sure all config values that contain "
"non-ASCII content are Unicode strings." % pagename)
logger.warning("a Unicode error occurred when rendering the page %s. "
"Please make sure all config values that contain "
"non-ASCII content are Unicode strings.", pagename)
return
if not outfilename:
@@ -875,10 +896,10 @@ class StandaloneHTMLBuilder(Builder):
# outfilename's path is in general different from self.outdir
ensuredir(path.dirname(outfilename))
try:
with codecs.open(outfilename, 'w', encoding, 'xmlcharrefreplace') as f: # type: ignore # NOQA
with codecs.open(outfilename, 'w', ctx['encoding'], 'xmlcharrefreplace') as f: # type: ignore # NOQA
f.write(output)
except (IOError, OSError) as err:
self.warn("error writing file %s: %s" % (outfilename, err))
logger.warning("error writing file %s: %s", outfilename, err)
if self.copysource and ctx.get('sourcename'):
# copy the source file for the "show source" link
source_name = path.join(self.outdir, '_sources',
@@ -886,6 +907,10 @@ class StandaloneHTMLBuilder(Builder):
ensuredir(path.dirname(source_name))
copyfile(self.env.doc2path(pagename), source_name)
def update_page_context(self, pagename, templatename, ctx, event_arg):
# type: (unicode, unicode, Dict, Any) -> None
pass
def handle_finish(self):
# type: () -> None
if self.indexer:
@@ -894,34 +919,13 @@ class StandaloneHTMLBuilder(Builder):
def dump_inventory(self):
# type: () -> None
self.info(bold('dumping object inventory... '), nonl=True)
with open(path.join(self.outdir, INVENTORY_FILENAME), 'wb') as f:
f.write((u'# Sphinx inventory version 2\n'
u'# Project: %s\n'
u'# Version: %s\n'
u'# The remainder of this file is compressed using zlib.\n'
% (self.config.project, self.config.version)).encode('utf-8'))
compressor = zlib.compressobj(9)
for domainname, domain in sorted(self.env.domains.items()):
for name, dispname, type, docname, anchor, prio in \
sorted(domain.get_objects()):
if anchor.endswith(name):
# this can shorten the inventory by as much as 25%
anchor = anchor[:-len(name)] + '$'
uri = self.get_target_uri(docname)
if anchor:
uri += '#' + anchor
if dispname == name:
dispname = u'-'
f.write(compressor.compress(
(u'%s %s:%s %s %s %s\n' % (name, domainname, type,
prio, uri, dispname)).encode('utf-8')))
f.write(compressor.flush())
self.info('done')
logger.info(bold('dumping object inventory... '), nonl=True)
InventoryFile.dump(path.join(self.outdir, INVENTORY_FILENAME), self.env, self)
logger.info('done')
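The inline inventory writer removed above was folded into `InventoryFile.dump`. For reference, a stdlib-only sketch of the v2 `objects.inv` layout the removed code produced (a plain-text header followed by a zlib-compressed body); the project name and entry below are invented:

```python
import zlib

def dump_inventory_v2(project, version, entries):
    """Build an objects.inv v2 payload: a plain-text header followed
    by a zlib-compressed body with one line per object.

    entries: iterable of (name, domain, type, prio, uri, dispname).
    """
    header = ('# Sphinx inventory version 2\n'
              '# Project: %s\n'
              '# Version: %s\n'
              '# The remainder of this file is compressed using zlib.\n'
              % (project, version)).encode('utf-8')
    compressor = zlib.compressobj(9)  # level 9, as in the removed code
    body = b''
    for name, domain, typ, prio, uri, dispname in sorted(entries):
        line = '%s %s:%s %s %s %s\n' % (name, domain, typ, prio, uri, dispname)
        body += compressor.compress(line.encode('utf-8'))
    body += compressor.flush()
    return header + body
```

The `anchor[:-len(name)] + '$'` trick in the removed code only shortens the stored anchors; the on-disk framing is exactly the header-plus-compressed-body shape sketched here.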
def dump_search_index(self):
# type: () -> None
self.info(
logger.info(
bold('dumping search index in %s ... ' % self.indexer.label()),
nonl=True)
self.indexer.prune(self.env.all_docs)
@@ -935,7 +939,7 @@ class StandaloneHTMLBuilder(Builder):
with f:
self.indexer.dump(f, self.indexer_format) # type: ignore
movefile(searchindexfn + '.tmp', searchindexfn)
self.info('done')
logger.info('done')
class DirectoryHTMLBuilder(StandaloneHTMLBuilder):
@@ -1009,7 +1013,7 @@ class SingleFileHTMLBuilder(StandaloneHTMLBuilder):
hashindex = refuri.find('#')
if hashindex < 0:
continue
hashindex = refuri.find('#', hashindex+1)
hashindex = refuri.find('#', hashindex + 1)
if hashindex >= 0:
refnode['refuri'] = fname + refuri[hashindex:]
@@ -1017,7 +1021,7 @@ class SingleFileHTMLBuilder(StandaloneHTMLBuilder):
# type: (unicode, bool, Any) -> unicode
if 'includehidden' not in kwds:
kwds['includehidden'] = False
toctree = self.env.get_toctree_for(docname, self, collapse, **kwds)
toctree = TocTree(self.env).get_toctree_for(docname, self, collapse, **kwds)
self.fix_refuris(toctree)
return self.render_partial(toctree)['fragment']
@@ -1032,7 +1036,7 @@ class SingleFileHTMLBuilder(StandaloneHTMLBuilder):
return tree
def assemble_toc_secnumbers(self):
# type: () -> Dict[unicode, Dict[Tuple[unicode, unicode], Tuple[int, ...]]]
# type: () -> Dict[unicode, Dict[unicode, Tuple[int, ...]]]
# Assemble toc_secnumbers to resolve section numbers on SingleHTML.
# Merge all secnumbers to single secnumber.
#
@@ -1042,15 +1046,16 @@ class SingleFileHTMLBuilder(StandaloneHTMLBuilder):
#
# There is related code in inline_all_toctrees() and
# HTMLTranslator#add_secnumber().
new_secnumbers = {}
new_secnumbers = {} # type: Dict[unicode, Tuple[int, ...]]
for docname, secnums in iteritems(self.env.toc_secnumbers):
for id, secnum in iteritems(secnums):
new_secnumbers[(docname, id)] = secnum
alias = "%s/%s" % (docname, id)
new_secnumbers[alias] = secnum
return {self.config.master_doc: new_secnumbers}
def assemble_toc_fignumbers(self):
# type: () -> Dict[unicode, Dict[Tuple[unicode, unicode], Dict[unicode, Tuple[int, ...]]]] # NOQA
# type: () -> Dict[unicode, Dict[unicode, Dict[unicode, Tuple[int, ...]]]] # NOQA
# Assemble toc_fignumbers to resolve figure numbers on SingleHTML.
# Merge all fignumbers to single fignumber.
#
@@ -1060,20 +1065,22 @@ class SingleFileHTMLBuilder(StandaloneHTMLBuilder):
#
# There is related code in inline_all_toctrees() and
# HTMLTranslator#add_fignumber().
new_fignumbers = {} # type: Dict[Tuple[unicode, unicode], Dict[unicode, Tuple[int, ...]]] # NOQA
new_fignumbers = {} # type: Dict[unicode, Dict[unicode, Tuple[int, ...]]]
# {u'foo': {'figure': {'id2': (2,), 'id1': (1,)}}, u'bar': {'figure': {'id1': (3,)}}}
for docname, fignumlist in iteritems(self.env.toc_fignumbers):
for figtype, fignums in iteritems(fignumlist):
new_fignumbers.setdefault((docname, figtype), {})
alias = "%s/%s" % (docname, figtype)
new_fignumbers.setdefault(alias, {})
for id, fignum in iteritems(fignums):
new_fignumbers[(docname, figtype)][id] = fignum
new_fignumbers[alias][id] = fignum
return {self.config.master_doc: new_fignumbers}
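The key change in both `assemble_toc_secnumbers` and `assemble_toc_fignumbers` is replacing `(docname, figtype)` tuple keys with flat `'docname/figtype'` alias strings. A minimal sketch of the fignumber merge, using invented sample data shaped like the inline comment above:

```python
def assemble_fignumbers(toc_fignumbers):
    """Merge per-document fignumbers under 'docname/figtype' alias keys,
    mirroring the new_fignumbers loop in the diff above."""
    merged = {}
    for docname, fignumlist in toc_fignumbers.items():
        for figtype, fignums in fignumlist.items():
            alias = '%s/%s' % (docname, figtype)
            # setdefault keeps earlier entries when several docs share a type
            merged.setdefault(alias, {}).update(fignums)
    return merged
```

String keys serialize cleanly (e.g. into the search index context), which tuple keys do not.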
def get_doc_context(self, docname, body, metatags):
# type: (unicode, unicode, Dict) -> Dict
# no relation links...
toc = self.env.get_toctree_for(self.config.master_doc, self, False) # type: Any
toc = TocTree(self.env).get_toctree_for(self.config.master_doc,
self, False)
# if there is no toctree, toc is None
if toc:
self.fix_refuris(toc)
@@ -1101,36 +1108,36 @@ class SingleFileHTMLBuilder(StandaloneHTMLBuilder):
# type: (Any) -> None
docnames = self.env.all_docs
self.info(bold('preparing documents... '), nonl=True)
logger.info(bold('preparing documents... '), nonl=True)
self.prepare_writing(docnames)
self.info('done')
logger.info('done')
self.info(bold('assembling single document... '), nonl=True)
logger.info(bold('assembling single document... '), nonl=True)
doctree = self.assemble_doctree()
self.env.toc_secnumbers = self.assemble_toc_secnumbers()
self.env.toc_fignumbers = self.assemble_toc_fignumbers()
self.info()
self.info(bold('writing... '), nonl=True)
logger.info('')
logger.info(bold('writing... '), nonl=True)
self.write_doc_serialized(self.config.master_doc, doctree)
self.write_doc(self.config.master_doc, doctree)
self.info('done')
logger.info('done')
def finish(self):
# type: () -> None
# no indices or search pages are supported
self.info(bold('writing additional files...'), nonl=1)
logger.info(bold('writing additional files...'), nonl=1)
# additional pages from conf.py
for pagename, template in self.config.html_additional_pages.items():
self.info(' '+pagename, nonl=1)
self.info(' ' + pagename, nonl=1)
self.handle_page(pagename, {}, template)
if self.config.html_use_opensearch:
self.info(' opensearch', nonl=1)
logger.info(' opensearch', nonl=1)
fn = path.join(self.outdir, '_static', 'opensearch.xml')
self.handle_page('opensearch', {}, 'opensearch.xml', outfilename=fn)
self.info()
logger.info('')
self.copy_image_files()
self.copy_download_files()
@@ -1150,7 +1157,7 @@ class SerializingHTMLBuilder(StandaloneHTMLBuilder):
implementation = None # type: Any
implementation_dumps_unicode = False
#: additional arguments for dump()
additional_dump_args = ()
additional_dump_args = () # type: Tuple
#: the filename for the global context file
globalcontext_filename = None # type: unicode
@@ -1269,15 +1276,8 @@ class JSONHTMLBuilder(SerializingHTMLBuilder):
SerializingHTMLBuilder.init(self)
def validate_config_values(app):
# type: (Sphinx) -> None
if app.config.html_translator_class:
app.warn('html_translator_class is deprecated. '
'Use Sphinx.set_translator() API instead.')
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
# builders
app.add_builder(StandaloneHTMLBuilder)
app.add_builder(DirectoryHTMLBuilder)
@@ -1285,8 +1285,6 @@ def setup(app):
app.add_builder(PickleHTMLBuilder)
app.add_builder(JSONHTMLBuilder)
app.connect('builder-inited', validate_config_values)
# config values
app.add_config_value('html_theme', 'alabaster', 'html')
app.add_config_value('html_theme_path', [], 'html')
@@ -1304,7 +1302,6 @@ def setup(app):
app.add_config_value('html_use_smartypants', True, 'html')
app.add_config_value('html_sidebars', {}, 'html')
app.add_config_value('html_additional_pages', {}, 'html')
app.add_config_value('html_use_modindex', True, 'html') # deprecated
app.add_config_value('html_domain_indices', True, 'html', [list])
app.add_config_value('html_add_permalinks', u'\u00B6', 'html')
app.add_config_value('html_use_index', True, 'html')
@@ -1325,3 +1322,10 @@ def setup(app):
app.add_config_value('html_search_options', {}, 'html')
app.add_config_value('html_search_scorer', '', None)
app.add_config_value('html_scaled_image_link', True, 'html')
app.add_config_value('html_experimental_html5_writer', False, 'html')
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}
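The recurring change across these hunks replaces the builder's `self.info`/`self.warn` helpers with a module-level logger from `sphinx.util.logging.getLogger(__name__)`, passing format arguments lazily instead of pre-formatting with `%`. A stdlib-only sketch of the same pattern (the copy helper and file names are illustrative, not Sphinx API):

```python
import logging

# One logger per module, named after the module, replaces the
# per-builder self.info()/self.warn() helper methods.
logger = logging.getLogger(__name__)

def copy_images(images, copy):
    """Copy each src -> dest, logging (not raising) on failure."""
    copied = []
    for src, dest in images.items():
        try:
            copy(src, dest)
            copied.append(dest)
        except OSError as err:
            # Lazy %-style arguments, as in the diff's
            # logger.warning('cannot copy image file %r: %s', path, err)
            logger.warning('cannot copy image file %r: %s', src, err)
    return copied
```

Lazy arguments mean the string is only interpolated when the record is actually emitted, and warning-suppression logic can inspect the raw arguments.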

View File

@@ -18,10 +18,20 @@ from os import path
from docutils import nodes
from sphinx import addnodes
from sphinx.util.osutil import make_filename
from sphinx.builders.html import StandaloneHTMLBuilder
from sphinx.environment.adapters.indexentries import IndexEntries
from sphinx.util import logging
from sphinx.util.osutil import make_filename
from sphinx.util.pycompat import htmlescape
if False:
# For type annotation
from typing import Any, Dict, IO, List, Tuple # NOQA
from sphinx.application import Sphinx # NOQA
logger = logging.getLogger(__name__)
# Project file (*.hhp) template. 'outname' is the file basename (like
# the pythlp in pythlp.hhp); 'version' is the doc version number (like
@@ -181,6 +191,7 @@ class HTMLHelpBuilder(StandaloneHTMLBuilder):
encoding = 'cp1252'
def init(self):
# type: () -> None
StandaloneHTMLBuilder.init(self)
# the output files for HTML help must be .html only
self.out_suffix = '.html'
@@ -191,14 +202,21 @@ class HTMLHelpBuilder(StandaloneHTMLBuilder):
self.lcid, self.encoding = locale
def open_file(self, outdir, basename, mode='w'):
# type: (unicode, unicode, unicode) -> IO
# open a file with the correct encoding for the selected language
return codecs.open(path.join(outdir, basename), mode,
return codecs.open(path.join(outdir, basename), mode, # type: ignore
self.encoding, 'xmlcharrefreplace')
def update_page_context(self, pagename, templatename, ctx, event_arg):
# type: (unicode, unicode, Dict, unicode) -> None
ctx['encoding'] = self.encoding
def handle_finish(self):
# type: () -> None
self.build_hhx(self.outdir, self.config.htmlhelp_basename)
def write_doc(self, docname, doctree):
# type: (unicode, nodes.Node) -> None
for node in doctree.traverse(nodes.reference):
# add ``target=_blank`` attributes to external links
if node.get('internal') is None and 'refuri' in node:
@@ -207,13 +225,14 @@ class HTMLHelpBuilder(StandaloneHTMLBuilder):
StandaloneHTMLBuilder.write_doc(self, docname, doctree)
def build_hhx(self, outdir, outname):
self.info('dumping stopword list...')
with self.open_file(outdir, outname+'.stp') as f:
# type: (unicode, unicode) -> None
logger.info('dumping stopword list...')
with self.open_file(outdir, outname + '.stp') as f:
for word in sorted(stopwords):
print(word, file=f)
self.info('writing project file...')
with self.open_file(outdir, outname+'.hhp') as f:
logger.info('writing project file...')
with self.open_file(outdir, outname + '.hhp') as f:
f.write(project_template % {
'outname': outname,
'title': self.config.html_title,
@@ -233,8 +252,8 @@ class HTMLHelpBuilder(StandaloneHTMLBuilder):
print(path.join(root, fn)[olen:].replace(os.sep, '\\'),
file=f)
self.info('writing TOC file...')
with self.open_file(outdir, outname+'.hhc') as f:
logger.info('writing TOC file...')
with self.open_file(outdir, outname + '.hhc') as f:
f.write(contents_header)
# special books
f.write('<LI> ' + object_sitemap % (self.config.html_short_title,
@@ -247,6 +266,7 @@ class HTMLHelpBuilder(StandaloneHTMLBuilder):
self.config.master_doc, self, prune_toctrees=False)
def write_toc(node, ullevel=0):
# type: (nodes.Node, int) -> None
if isinstance(node, nodes.list_item):
f.write('<LI> ')
for subnode in node:
@@ -259,7 +279,7 @@ class HTMLHelpBuilder(StandaloneHTMLBuilder):
if ullevel != 0:
f.write('<UL>\n')
for subnode in node:
write_toc(subnode, ullevel+1)
write_toc(subnode, ullevel + 1)
if ullevel != 0:
f.write('</UL>\n')
elif isinstance(node, addnodes.compact_paragraph):
@@ -267,19 +287,22 @@ class HTMLHelpBuilder(StandaloneHTMLBuilder):
write_toc(subnode, ullevel)
def istoctree(node):
# type: (nodes.Node) -> bool
return isinstance(node, addnodes.compact_paragraph) and \
'toctree' in node
for node in tocdoc.traverse(istoctree):
write_toc(node)
f.write(contents_footer)
self.info('writing index file...')
index = self.env.create_index(self)
with self.open_file(outdir, outname+'.hhk') as f:
logger.info('writing index file...')
index = IndexEntries(self.env).create_index(self)
with self.open_file(outdir, outname + '.hhk') as f:
f.write('<UL>\n')
def write_index(title, refs, subitems):
# type: (unicode, List[Tuple[unicode, unicode]], List[Tuple[unicode, List[Tuple[unicode, unicode]]]]) -> None # NOQA
def write_param(name, value):
# type: (unicode, unicode) -> None
item = ' <param name="%s" value="%s">\n' % \
(name, value)
f.write(item)
@@ -308,7 +331,14 @@ class HTMLHelpBuilder(StandaloneHTMLBuilder):
def setup(app):
# type: (Sphinx) -> Dict[unicode, Any]
app.setup_extension('sphinx.builders.html')
app.add_builder(HTMLHelpBuilder)
app.add_config_value('htmlhelp_basename', lambda self: make_filename(self.project), None)
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}
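HTML Help output must be written in a legacy Windows code page for the selected language, which is why `HTMLHelpBuilder.open_file` pairs `codecs.open` with the `xmlcharrefreplace` error handler. A small stdlib sketch of that behavior (file names are invented):

```python
import codecs
import os
import tempfile

def write_cp1252(dirname, basename, text):
    """Write text as cp1252, turning unencodable characters into
    numeric character references, as open_file() does above."""
    filename = os.path.join(dirname, basename)
    with codecs.open(filename, 'w', 'cp1252', 'xmlcharrefreplace') as f:
        f.write(text)
    return filename
```

Characters that fit the code page (like `é`) are encoded directly; anything outside it becomes an `&#NNNN;` reference that HTML Help viewers can still render.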

View File

@@ -10,6 +10,7 @@
"""
import os
import warnings
from os import path
from six import iteritems
@@ -20,7 +21,8 @@ from docutils.utils import new_document
from docutils.frontend import OptionParser
from sphinx import package_dir, addnodes, highlighting
from sphinx.util import texescape
from sphinx.deprecation import RemovedInSphinx17Warning
from sphinx.util import texescape, logging
from sphinx.config import string_classes, ENUM
from sphinx.errors import SphinxError
from sphinx.locale import _
@@ -34,8 +36,12 @@ from sphinx.writers.latex import LaTeXWriter
if False:
# For type annotation
from typing import Any, Iterable, Tuple, Union # NOQA
from typing import Any, Dict, Iterable, List, Tuple, Union # NOQA
from sphinx.application import Sphinx # NOQA
from sphinx.config import Config # NOQA
logger = logging.getLogger(__name__)
class LaTeXBuilder(Builder):
@@ -73,19 +79,19 @@ class LaTeXBuilder(Builder):
# type: () -> None
preliminary_document_data = [list(x) for x in self.config.latex_documents]
if not preliminary_document_data:
self.warn('no "latex_documents" config value found; no documents '
'will be written')
logger.warning('no "latex_documents" config value found; no documents '
'will be written')
return
# assign subdirs to titles
self.titles = [] # type: List[Tuple[unicode, unicode]]
for entry in preliminary_document_data:
docname = entry[0]
if docname not in self.env.all_docs:
self.warn('"latex_documents" config value references unknown '
'document %s' % docname)
logger.warning('"latex_documents" config value references unknown '
'document %s', docname)
continue
self.document_data.append(entry) # type: ignore
if docname.endswith(SEP+'index'):
if docname.endswith(SEP + 'index'):
docname = docname[:-5]
self.titles.append((docname, entry[2]))
@@ -98,7 +104,7 @@ class LaTeXBuilder(Builder):
f.write('\\NeedsTeXFormat{LaTeX2e}[1995/12/01]\n')
f.write('\\ProvidesPackage{sphinxhighlight}'
'[2016/05/29 stylesheet for highlighting with pygments]\n\n')
f.write(highlighter.get_stylesheet())
f.write(highlighter.get_stylesheet()) # type: ignore
def write(self, *ignored):
# type: (Any) -> None
@@ -119,7 +125,7 @@ class LaTeXBuilder(Builder):
destination = FileOutput(
destination_path=path.join(self.outdir, targetname),
encoding='utf-8')
self.info("processing " + targetname + "... ", nonl=1)
logger.info("processing %s...", targetname, nonl=1)
toctrees = self.env.get_doctree(docname).traverse(addnodes.toctree)
if toctrees:
if toctrees[0].get('maxdepth') > 0:
@@ -133,7 +139,7 @@ class LaTeXBuilder(Builder):
appendices=((docclass != 'howto') and self.config.latex_appendices or []))
doctree['tocdepth'] = tocdepth
self.post_process_images(doctree)
self.info("writing... ", nonl=1)
logger.info("writing... ", nonl=1)
doctree.settings = docsettings
doctree.settings.author = author
doctree.settings.title = title
@@ -141,7 +147,7 @@ class LaTeXBuilder(Builder):
doctree.settings.docname = docname
doctree.settings.docclass = docclass
docwriter.write(doctree, destination)
self.info("done")
logger.info("done")
def get_contentsname(self, indexfile):
# type: (unicode) -> unicode
@@ -157,7 +163,7 @@ class LaTeXBuilder(Builder):
def assemble_doctree(self, indexfile, toctree_only, appendices):
# type: (unicode, bool, List[unicode]) -> nodes.Node
self.docnames = set([indexfile] + appendices)
self.info(darkgreen(indexfile) + " ", nonl=1)
logger.info(darkgreen(indexfile) + " ", nonl=1)
tree = self.env.get_doctree(indexfile)
tree['docname'] = indexfile
if toctree_only:
@@ -178,8 +184,8 @@ class LaTeXBuilder(Builder):
appendix = self.env.get_doctree(docname)
appendix['docname'] = docname
largetree.append(appendix)
self.info()
self.info("resolving references...")
logger.info('')
logger.info("resolving references...")
self.env.resolve_references(largetree, indexfile, self)
# resolve :ref:s to distant tex files -- we can't add a cross-reference,
# but append the document name
@@ -202,16 +208,16 @@ class LaTeXBuilder(Builder):
# type: () -> None
# copy image files
if self.images:
self.info(bold('copying images...'), nonl=1)
logger.info(bold('copying images...'), nonl=1)
for src, dest in iteritems(self.images):
self.info(' '+src, nonl=1)
logger.info(' ' + src, nonl=1)
copy_asset_file(path.join(self.srcdir, src),
path.join(self.outdir, dest))
self.info()
logger.info('')
# copy TeX support files from texinputs
context = {'latex_engine': self.config.latex_engine}
self.info(bold('copying TeX support files...'))
logger.info(bold('copying TeX support files...'))
staticdirname = path.join(package_dir, 'texinputs')
for filename in os.listdir(staticdirname):
if not filename.startswith('.'):
@@ -220,11 +226,11 @@ class LaTeXBuilder(Builder):
# copy additional files
if self.config.latex_additional_files:
self.info(bold('copying additional files...'), nonl=1)
logger.info(bold('copying additional files...'), nonl=1)
for filename in self.config.latex_additional_files:
self.info(' '+filename, nonl=1)
logger.info(' ' + filename, nonl=1)
copy_asset_file(path.join(self.confdir, filename), self.outdir)
self.info()
logger.info('')
# the logo is handled differently
if self.config.latex_logo:
@@ -232,67 +238,57 @@ class LaTeXBuilder(Builder):
raise SphinxError('logo file %r does not exist' % self.config.latex_logo)
else:
copy_asset_file(path.join(self.confdir, self.config.latex_logo), self.outdir)
self.info('done')
logger.info('done')
def validate_config_values(app):
# type: (Sphinx) -> None
if app.config.latex_toplevel_sectioning not in (None, 'part', 'chapter', 'section'):
app.warn('invalid latex_toplevel_sectioning, ignored: %s' %
app.config.latex_toplevel_sectioning)
logger.warning('invalid latex_toplevel_sectioning, ignored: %s',
app.config.latex_toplevel_sectioning)
app.config.latex_toplevel_sectioning = None # type: ignore
if app.config.latex_use_parts:
if app.config.latex_toplevel_sectioning:
app.warn('latex_use_parts conflicts with latex_toplevel_sectioning, ignored.')
else:
app.warn('latex_use_parts is deprecated. Use latex_toplevel_sectioning instead.')
app.config.latex_toplevel_sectioning = 'parts' # type: ignore
if app.config.latex_use_modindex is not True: # changed by user
app.warn('latex_use_modeindex is deprecated. Use latex_domain_indices instead.')
if app.config.latex_preamble:
if app.config.latex_elements.get('preamble'):
app.warn("latex_preamble conflicts with latex_elements['preamble'], ignored.")
else:
app.warn("latex_preamble is deprecated. Use latex_elements['preamble'] instead.")
app.config.latex_elements['preamble'] = app.config.latex_preamble
if app.config.latex_paper_size != 'letter':
if app.config.latex_elements.get('papersize'):
app.warn("latex_paper_size conflicts with latex_elements['papersize'], ignored.")
else:
app.warn("latex_paper_size is deprecated. "
"Use latex_elements['papersize'] instead.")
if app.config.latex_paper_size:
app.config.latex_elements['papersize'] = app.config.latex_paper_size + 'paper'
if app.config.latex_font_size != '10pt':
if app.config.latex_elements.get('pointsize'):
app.warn("latex_font_size conflicts with latex_elements['pointsize'], ignored.")
else:
app.warn("latex_font_size is deprecated. Use latex_elements['pointsize'] instead.")
app.config.latex_elements['pointsize'] = app.config.latex_font_size
if 'footer' in app.config.latex_elements:
if 'postamble' in app.config.latex_elements:
app.warn("latex_elements['footer'] conflicts with "
"latex_elements['postamble'], ignored.")
logger.warning("latex_elements['footer'] conflicts with "
"latex_elements['postamble'], ignored.")
else:
app.warn("latex_elements['footer'] is deprecated. "
"Use latex_elements['preamble'] instead.")
warnings.warn("latex_elements['footer'] is deprecated. "
"Use latex_elements['preamble'] instead.",
RemovedInSphinx17Warning)
app.config.latex_elements['postamble'] = app.config.latex_elements['footer']
if app.config.latex_keep_old_macro_names:
warnings.warn("latex_keep_old_macro_names is deprecated. "
"LaTeX markup since Sphinx 1.4.5 uses only prefixed macro names.",
RemovedInSphinx17Warning)
def default_latex_engine(config):
# type: (Config) -> unicode
""" Better default latex_engine settings for specific languages. """
if config.language == 'ja':
return 'platex'
else:
return 'pdflatex'
def default_latex_docclass(config):
# type: (Config) -> Dict[unicode, unicode]
""" Better default latex_docclass settings for specific languages. """
if config.language == 'ja':
return {'manual': 'jsbook',
'howto': 'jreport'}
else:
return {}
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
app.add_builder(LaTeXBuilder)
app.connect('builder-inited', validate_config_values)
app.add_config_value('latex_engine',
lambda self: 'pdflatex' if self.language != 'ja' else 'platex',
None,
app.add_config_value('latex_engine', default_latex_engine, None,
ENUM('pdflatex', 'xelatex', 'lualatex', 'platex'))
app.add_config_value('latex_documents',
lambda self: [(self.master_doc, make_filename(self.project) + '.tex',
@@ -300,25 +296,19 @@ def setup(app):
None)
app.add_config_value('latex_logo', None, None, string_classes)
app.add_config_value('latex_appendices', [], None)
app.add_config_value('latex_keep_old_macro_names', True, None)
# now deprecated - use latex_toplevel_sectioning
app.add_config_value('latex_use_parts', False, None)
app.add_config_value('latex_keep_old_macro_names', False, None)
app.add_config_value('latex_use_latex_multicolumn', False, None)
app.add_config_value('latex_toplevel_sectioning', None, None, [str])
app.add_config_value('latex_use_modindex', True, None) # deprecated
app.add_config_value('latex_domain_indices', True, None, [list])
app.add_config_value('latex_show_urls', 'no', None)
app.add_config_value('latex_show_pagerefs', False, None)
# paper_size and font_size are still separate values
# so that you can give them easily on the command line
app.add_config_value('latex_paper_size', 'letter', None)
app.add_config_value('latex_font_size', '10pt', None)
app.add_config_value('latex_elements', {}, None)
app.add_config_value('latex_additional_files', [], None)
japanese_default = {'manual': 'jsbook',
'howto': 'jreport'}
app.add_config_value('latex_docclass',
lambda self: japanese_default if self.language == 'ja' else {},
None)
# now deprecated - use latex_elements
app.add_config_value('latex_preamble', '', None)
app.add_config_value('latex_docclass', default_latex_docclass, None)
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}
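The latex `setup()` refactor above replaces inline lambdas with named default functions (`default_latex_engine`, `default_latex_docclass`) that derive a config default from other settings. A plain-Python sketch of the idea, with a stand-in `Config` class (not the real `sphinx.config.Config`):

```python
class Config:
    """Stand-in for sphinx.config.Config: just holds attributes."""
    def __init__(self, **values):
        self.__dict__.update(values)

def default_latex_engine(config):
    """Pick a sensible engine per language (platex handles Japanese)."""
    return 'platex' if config.language == 'ja' else 'pdflatex'

def resolve(config, name, default):
    """Return an explicit setting if present, else call the default factory."""
    value = getattr(config, name, None)
    return value if value is not None else default(config)
```

Named functions are easier to document and test than lambdas buried in `add_config_value()` calls, which is the point of the change.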

View File

@@ -16,10 +16,8 @@ import threading
from os import path
from requests.exceptions import HTTPError
from six.moves import queue # type: ignore
from six.moves import queue, html_parser # type: ignore
from six.moves.urllib.parse import unquote
from six.moves.html_parser import HTMLParser
from docutils import nodes
# 2015-06-25 barry@python.org. This exception was deprecated in Python 3.3 and
@@ -33,7 +31,7 @@ except ImportError:
pass
from sphinx.builders import Builder
from sphinx.util import encode_uri, requests
from sphinx.util import encode_uri, requests, logging
from sphinx.util.console import ( # type: ignore
purple, red, darkgreen, darkgray, darkred, turquoise
)
@@ -41,23 +39,25 @@ from sphinx.util.requests import is_ssl_error
if False:
# For type annotation
from typing import Any, Tuple, Union # NOQA
from typing import Any, Dict, List, Set, Tuple, Union # NOQA
from sphinx.application import Sphinx # NOQA
from sphinx.util.requests.requests import Response # NOQA
class AnchorCheckParser(HTMLParser):
logger = logging.getLogger(__name__)
class AnchorCheckParser(html_parser.HTMLParser):
"""Specialized HTML parser that looks for a specific anchor."""
def __init__(self, search_anchor):
# type: (unicode) -> None
HTMLParser.__init__(self)
html_parser.HTMLParser.__init__(self)
self.search_anchor = search_anchor
self.found = False
def handle_starttag(self, tag, attrs):
# type: (Any, Dict[unicode, unicode]) -> None
for key, value in attrs:
if key in ('id', 'name') and value == self.search_anchor:
self.found = True
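The `AnchorCheckParser` change only swaps the import to `six.moves.html_parser`; the behaviour itself is easy to sketch in isolation (using the Python 3 `html.parser` module directly here for brevity):

```python
from html.parser import HTMLParser


class AnchorCheckParser(HTMLParser):
    """Sketch: records whether a given anchor name exists in the HTML."""

    def __init__(self, search_anchor):
        HTMLParser.__init__(self)
        self.search_anchor = search_anchor
        self.found = False

    def handle_starttag(self, tag, attrs):
        # both id="..." and the legacy name="..." attribute define anchors
        for key, value in attrs:
            if key in ('id', 'name') and value == self.search_anchor:
                self.found = True


parser = AnchorCheckParser('install')
parser.feed('<h2 id="install">Installation</h2><a name="usage"></a>')
```

`feed()` may be called repeatedly with chunks of the response body, which is how linkcheck avoids buffering whole pages just to verify one `#fragment`.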
@@ -231,24 +231,24 @@ class CheckExternalLinksBuilder(Builder):
if status == 'working' and info == 'old':
return
if lineno:
self.info('(line %4d) ' % lineno, nonl=1)
logger.info('(line %4d) ', lineno, nonl=1)
if status == 'ignored':
if info:
self.info(darkgray('-ignored- ') + uri + ': ' + info)
logger.info(darkgray('-ignored- ') + uri + ': ' + info)
else:
self.info(darkgray('-ignored- ') + uri)
logger.info(darkgray('-ignored- ') + uri)
elif status == 'local':
self.info(darkgray('-local- ') + uri)
logger.info(darkgray('-local- ') + uri)
self.write_entry('local', docname, lineno, uri)
elif status == 'working':
self.info(darkgreen('ok ') + uri + info)
logger.info(darkgreen('ok ') + uri + info)
elif status == 'broken':
self.write_entry('broken', docname, lineno, uri + ': ' + info)
if self.app.quiet or self.app.warningiserror:
self.warn('broken link: %s (%s)' % (uri, info),
'%s:%s' % (self.env.doc2path(docname), lineno))
logger.warning('broken link: %s (%s)', uri, info,
location=(self.env.doc2path(docname), lineno))
else:
self.info(red('broken ') + uri + red(' - ' + info))
logger.info(red('broken ') + uri + red(' - ' + info))
elif status == 'redirected':
text, color = {
301: ('permanently', darkred),
@@ -259,7 +259,7 @@ class CheckExternalLinksBuilder(Builder):
}[code]
self.write_entry('redirected ' + text, docname, lineno,
uri + ' to ' + info)
self.info(color('redirect ') + uri + color(' - ' + text + ' to ' + info))
logger.info(color('redirect ') + uri + color(' - ' + text + ' to ' + info))
def get_target_uri(self, docname, typ=None):
# type: (unicode, unicode) -> unicode
@@ -275,7 +275,7 @@ class CheckExternalLinksBuilder(Builder):
def write_doc(self, docname, doctree):
# type: (unicode, nodes.Node) -> None
self.info()
logger.info('')
n = 0
for node in doctree.traverse(nodes.reference):
if 'refuri' not in node:
@@ -310,7 +310,7 @@ class CheckExternalLinksBuilder(Builder):
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
app.add_builder(CheckExternalLinksBuilder)
app.add_config_value('linkcheck_ignore', [], None)
@@ -321,3 +321,9 @@ def setup(app):
# Anchors starting with ! are ignored since they are
# commonly used for dynamic pages
app.add_config_value('linkcheck_anchors_ignore', ["^!"], None)
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}
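The other recurring change across these builders replaces `self.info`/`self.warn` with a module-level logger. The shape of the pattern, sketched with the standard-library `logging` module (a simplification: Sphinx's `sphinx.util.logging` wrapper adds `nonl` and `location` keywords the stdlib lacks):

```python
import logging

logger = logging.getLogger(__name__)


class CheckExternalLinksBuilder(object):
    """Sketch: methods log through the module logger, not through self."""

    def report(self, uri, status):
        if status == 'working':
            # lazy %-interpolation: arguments are passed separately, so the
            # string is only formatted if the record is actually emitted
            logger.info('ok        %s', uri)
        else:
            logger.warning('broken link: %s (%s)', uri, status)


logging.basicConfig(level=logging.INFO)
CheckExternalLinksBuilder().report('https://example.com/', 'working')
```

Decoupling logging from the builder instance is what makes the `location=` form possible: warnings carry their docname/line themselves instead of relying on `self.env` state.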

View File

@@ -19,6 +19,7 @@ from docutils.frontend import OptionParser
from sphinx import addnodes
from sphinx.builders import Builder
from sphinx.environment import NoUri
from sphinx.util import logging
from sphinx.util.nodes import inline_all_toctrees
from sphinx.util.osutil import make_filename
from sphinx.util.console import bold, darkgreen # type: ignore
@@ -26,10 +27,13 @@ from sphinx.writers.manpage import ManualPageWriter
if False:
# For type annotation
from typing import Any, Union # NOQA
from typing import Any, Dict, List, Set, Union # NOQA
from sphinx.application import Sphinx # NOQA
logger = logging.getLogger(__name__)
class ManualPageBuilder(Builder):
"""
Builds groff output in manual page format.
@@ -41,8 +45,8 @@ class ManualPageBuilder(Builder):
def init(self):
# type: () -> None
if not self.config.man_pages:
self.warn('no "man_pages" config value found; no manual pages '
'will be written')
logger.warning('no "man_pages" config value found; no manual pages '
'will be written')
def get_outdated_docs(self):
# type: () -> Union[unicode, List[unicode]]
@@ -62,7 +66,7 @@ class ManualPageBuilder(Builder):
components=(docwriter,),
read_config_files=True).get_default_values()
self.info(bold('writing... '), nonl=True)
logger.info(bold('writing... '), nonl=True)
for info in self.config.man_pages:
docname, name, description, authors, section = info
@@ -73,7 +77,7 @@ class ManualPageBuilder(Builder):
authors = []
targetname = '%s.%s' % (name, section)
self.info(darkgreen(targetname) + ' { ', nonl=True)
logger.info(darkgreen(targetname) + ' { ', nonl=True)
destination = FileOutput(
destination_path=path.join(self.outdir, targetname),
encoding='utf-8')
@@ -82,7 +86,7 @@ class ManualPageBuilder(Builder):
docnames = set() # type: Set[unicode]
largetree = inline_all_toctrees(self, docnames, docname, tree,
darkgreen, [docname])
self.info('} ', nonl=True)
logger.info('} ', nonl=True)
self.env.resolve_references(largetree, docname, self)
# remove pending_xref nodes
for pendingnode in largetree.traverse(addnodes.pending_xref):
@@ -95,7 +99,7 @@ class ManualPageBuilder(Builder):
largetree.settings.section = section
docwriter.write(largetree, destination)
self.info()
logger.info('')
def finish(self):
# type: () -> None
@@ -103,7 +107,7 @@ class ManualPageBuilder(Builder):
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
app.add_builder(ManualPageBuilder)
app.add_config_value('man_pages',
@@ -111,3 +115,9 @@ def setup(app):
'%s %s' % (self.project, self.release), [], 1)],
None)
app.add_config_value('man_show_urls', False, None)
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}

View File

@@ -21,16 +21,20 @@ from docutils import nodes
from sphinx import addnodes
from sphinx.builders.html import StandaloneHTMLBuilder
from sphinx.util import force_decode
from sphinx.environment.adapters.indexentries import IndexEntries
from sphinx.util import force_decode, logging
from sphinx.util.osutil import make_filename
from sphinx.util.pycompat import htmlescape
if False:
# For type annotation
from typing import Any, Tuple # NOQA
from typing import Any, Dict, List, Tuple # NOQA
from sphinx.application import Sphinx # NOQA
logger = logging.getLogger(__name__)
_idpattern = re.compile(
r'(?P<title>.+) (\((class in )?(?P<id>[\w\.]+)( (?P<descr>\w+))?\))$')
@@ -95,7 +99,7 @@ project_template = u'''\
'''
section_template = '<section title="%(title)s" ref="%(ref)s"/>'
file_template = ' '*12 + '<file>%(filename)s</file>'
file_template = ' ' * 12 + '<file>%(filename)s</file>'
class QtHelpBuilder(StandaloneHTMLBuilder):
@@ -138,13 +142,14 @@ class QtHelpBuilder(StandaloneHTMLBuilder):
def build_qhp(self, outdir, outname):
# type: (unicode, unicode) -> None
self.info('writing project file...')
logger.info('writing project file...')
# sections
tocdoc = self.env.get_and_resolve_doctree(self.config.master_doc, self,
prune_toctrees=False)
def istoctree(node):
# type: (nodes.Node) -> bool
return isinstance(node, addnodes.compact_paragraph) and \
'toctree' in node
sections = []
@@ -167,7 +172,7 @@ class QtHelpBuilder(StandaloneHTMLBuilder):
# keywords
keywords = []
index = self.env.create_index(self, group_entries=False)
index = IndexEntries(self.env).create_index(self, group_entries=False)
for (key, group) in index:
for title, (refs, subitems, key_) in group:
keywords.extend(self.build_keywords(title, refs, subitems))
@@ -200,7 +205,7 @@ class QtHelpBuilder(StandaloneHTMLBuilder):
nspace = nspace.lower()
# write the project file
with codecs.open(path.join(outdir, outname+'.qhp'), 'w', 'utf-8') as f: # type: ignore
with codecs.open(path.join(outdir, outname + '.qhp'), 'w', 'utf-8') as f: # type: ignore # NOQA
f.write(project_template % { # type: ignore
'outname': htmlescape(outname),
'title': htmlescape(self.config.html_title),
@@ -216,8 +221,8 @@ class QtHelpBuilder(StandaloneHTMLBuilder):
nspace, 'doc', self.get_target_uri(self.config.master_doc))
startpage = 'qthelp://' + posixpath.join(nspace, 'doc', 'index.html')
self.info('writing collection project file...')
with codecs.open(path.join(outdir, outname+'.qhcp'), 'w', 'utf-8') as f: # type: ignore # NOQA
logger.info('writing collection project file...')
with codecs.open(path.join(outdir, outname + '.qhcp'), 'w', 'utf-8') as f: # type: ignore # NOQA
f.write(collection_template % { # type: ignore
'outname': htmlescape(outname),
'title': htmlescape(self.config.html_short_title),
@@ -248,10 +253,10 @@ class QtHelpBuilder(StandaloneHTMLBuilder):
title = htmlescape(refnode.astext()).replace('"', '&quot;')
item = '<section title="%(title)s" ref="%(ref)s">' % \
{'title': title, 'ref': link}
parts.append(' '*4*indentlevel + item)
parts.append(' ' * 4 * indentlevel + item)
for subnode in node.children[1]:
parts.extend(self.write_toc(subnode, indentlevel+1))
parts.append(' '*4*indentlevel + '</section>')
parts.extend(self.write_toc(subnode, indentlevel + 1))
parts.append(' ' * 4 * indentlevel + '</section>')
elif isinstance(node, nodes.list_item):
for subnode in node:
parts.extend(self.write_toc(subnode, indentlevel))
@@ -285,10 +290,10 @@ class QtHelpBuilder(StandaloneHTMLBuilder):
id = None
if id:
item = ' '*12 + '<keyword name="%s" id="%s" ref="%s"/>' % (
item = ' ' * 12 + '<keyword name="%s" id="%s" ref="%s"/>' % (
name, id, ref[1])
else:
item = ' '*12 + '<keyword name="%s" ref="%s"/>' % (name, ref[1])
item = ' ' * 12 + '<keyword name="%s" ref="%s"/>' % (name, ref[1])
item.encode('ascii', 'xmlcharrefreplace')
return item
@@ -318,10 +323,16 @@ class QtHelpBuilder(StandaloneHTMLBuilder):
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
app.setup_extension('sphinx.builders.html')
app.add_builder(QtHelpBuilder)
app.add_config_value('qthelp_basename', lambda self: make_filename(self.project), None)
app.add_config_value('qthelp_theme', 'nonav', 'html')
app.add_config_value('qthelp_theme_options', {}, 'html')
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}

View File

@@ -22,6 +22,7 @@ from sphinx import addnodes
from sphinx.locale import _
from sphinx.builders import Builder
from sphinx.environment import NoUri
from sphinx.util import logging
from sphinx.util.nodes import inline_all_toctrees
from sphinx.util.osutil import SEP, copyfile, make_filename
from sphinx.util.console import bold, darkgreen # type: ignore
@@ -30,9 +31,11 @@ from sphinx.writers.texinfo import TexinfoWriter
if False:
# For type annotation
from sphinx.application import Sphinx # NOQA
from typing import Any, Iterable, Tuple, Union # NOQA
from typing import Any, Dict, Iterable, List, Tuple, Union # NOQA
logger = logging.getLogger(__name__)
TEXINFO_MAKEFILE = '''\
# Makefile for Sphinx Texinfo output
@@ -121,19 +124,19 @@ class TexinfoBuilder(Builder):
# type: () -> None
preliminary_document_data = [list(x) for x in self.config.texinfo_documents]
if not preliminary_document_data:
self.warn('no "texinfo_documents" config value found; no documents '
'will be written')
logger.warning('no "texinfo_documents" config value found; no documents '
'will be written')
return
# assign subdirs to titles
self.titles = [] # type: List[Tuple[unicode, unicode]]
for entry in preliminary_document_data:
docname = entry[0]
if docname not in self.env.all_docs:
self.warn('"texinfo_documents" config value references unknown '
'document %s' % docname)
logger.warning('"texinfo_documents" config value references unknown '
'document %s', docname)
continue
self.document_data.append(entry) # type: ignore
if docname.endswith(SEP+'index'):
if docname.endswith(SEP + 'index'):
docname = docname[:-5]
self.titles.append((docname, entry[2]))
@@ -152,11 +155,11 @@ class TexinfoBuilder(Builder):
destination = FileOutput(
destination_path=path.join(self.outdir, targetname),
encoding='utf-8')
self.info("processing " + targetname + "... ", nonl=1)
logger.info("processing " + targetname + "... ", nonl=1)
doctree = self.assemble_doctree(
docname, toctree_only,
appendices=(self.config.texinfo_appendices or []))
self.info("writing... ", nonl=1)
logger.info("writing... ", nonl=1)
self.post_process_images(doctree)
docwriter = TexinfoWriter(self)
settings = OptionParser(
@@ -173,12 +176,12 @@ class TexinfoBuilder(Builder):
settings.docname = docname
doctree.settings = settings
docwriter.write(doctree, destination)
self.info("done")
logger.info("done")
def assemble_doctree(self, indexfile, toctree_only, appendices):
# type: (unicode, bool, List[unicode]) -> nodes.Node
self.docnames = set([indexfile] + appendices)
self.info(darkgreen(indexfile) + " ", nonl=1)
logger.info(darkgreen(indexfile) + " ", nonl=1)
tree = self.env.get_doctree(indexfile)
tree['docname'] = indexfile
if toctree_only:
@@ -199,8 +202,8 @@ class TexinfoBuilder(Builder):
appendix = self.env.get_doctree(docname)
appendix['docname'] = docname
largetree.append(appendix)
self.info()
self.info("resolving references...")
logger.info('')
logger.info("resolving references...")
self.env.resolve_references(largetree, indexfile, self)
# TODO: add support for external :ref:s
for pendingnode in largetree.traverse(addnodes.pending_xref):
@@ -222,27 +225,27 @@ class TexinfoBuilder(Builder):
# type: () -> None
# copy image files
if self.images:
self.info(bold('copying images...'), nonl=1)
logger.info(bold('copying images...'), nonl=1)
for src, dest in iteritems(self.images):
self.info(' '+src, nonl=1)
logger.info(' ' + src, nonl=1)
copyfile(path.join(self.srcdir, src),
path.join(self.outdir, dest))
self.info()
logger.info('')
self.info(bold('copying Texinfo support files... '), nonl=True)
logger.info(bold('copying Texinfo support files... '), nonl=True)
# copy Makefile
fn = path.join(self.outdir, 'Makefile')
self.info(fn, nonl=1)
logger.info(fn, nonl=1)
try:
with open(fn, 'w') as mkfile:
mkfile.write(TEXINFO_MAKEFILE)
except (IOError, OSError) as err:
self.warn("error writing file %s: %s" % (fn, err))
self.info(' done')
logger.warning("error writing file %s: %s", fn, err)
logger.info(' done')
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
app.add_builder(TexinfoBuilder)
app.add_config_value('texinfo_documents',
@@ -257,3 +260,9 @@ def setup(app):
app.add_config_value('texinfo_domain_indices', True, None, [list])
app.add_config_value('texinfo_show_urls', 'footnote', None)
app.add_config_value('texinfo_no_detailmenu', False, None)
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}

View File

@@ -15,9 +15,18 @@ from os import path
from docutils.io import StringOutput
from sphinx.builders import Builder
from sphinx.util import logging
from sphinx.util.osutil import ensuredir, os_path
from sphinx.writers.text import TextWriter
if False:
# For type annotation
from typing import Any, Dict, Iterator, Set # NOQA
from docutils import nodes # NOQA
from sphinx.application import Sphinx # NOQA
logger = logging.getLogger(__name__)
class TextBuilder(Builder):
name = 'text'
@@ -28,9 +37,11 @@ class TextBuilder(Builder):
current_docname = None # type: unicode
def init(self):
# type: () -> None
pass
def get_outdated_docs(self):
# type: () -> Iterator[unicode]
for docname in self.env.found_docs:
if docname not in self.env.all_docs:
yield docname
@@ -50,29 +61,40 @@ class TextBuilder(Builder):
pass
def get_target_uri(self, docname, typ=None):
# type: (unicode, unicode) -> unicode
return ''
def prepare_writing(self, docnames):
# type: (Set[unicode]) -> None
self.writer = TextWriter(self)
def write_doc(self, docname, doctree):
# type: (unicode, nodes.Node) -> None
self.current_docname = docname
destination = StringOutput(encoding='utf-8')
self.writer.write(doctree, destination)
outfilename = path.join(self.outdir, os_path(docname) + self.out_suffix)
ensuredir(path.dirname(outfilename))
try:
with codecs.open(outfilename, 'w', 'utf-8') as f:
with codecs.open(outfilename, 'w', 'utf-8') as f: # type: ignore
f.write(self.writer.output)
except (IOError, OSError) as err:
self.warn("error writing file %s: %s" % (outfilename, err))
logger.warning("error writing file %s: %s", outfilename, err)
def finish(self):
# type: () -> None
pass
def setup(app):
# type: (Sphinx) -> Dict[unicode, Any]
app.add_builder(TextBuilder)
app.add_config_value('text_sectionchars', '*=-~"+`', 'env')
app.add_config_value('text_newlines', 'unix', 'env')
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}

View File

@@ -20,6 +20,12 @@ from sphinx.util.osutil import os_path, relative_uri, ensuredir, copyfile
from sphinx.builders.html import PickleHTMLBuilder
from sphinx.writers.websupport import WebSupportTranslator
if False:
# For type annotation
from typing import Any, Dict, Iterable, Tuple # NOQA
from docutils import nodes # NOQA
from sphinx.application import Sphinx # NOQA
class WebSupportBuilder(PickleHTMLBuilder):
"""
@@ -30,6 +36,7 @@ class WebSupportBuilder(PickleHTMLBuilder):
versioning_compare = True # for commentable node's uuid stability.
def init(self):
# type: () -> None
PickleHTMLBuilder.init(self)
# templates are needed for this builder, but the serializing
# builder does not initialize them
@@ -41,20 +48,24 @@ class WebSupportBuilder(PickleHTMLBuilder):
self.script_files.append('_static/websupport.js')
def set_webinfo(self, staticdir, virtual_staticdir, search, storage):
# type: (unicode, unicode, Any, unicode) -> None
self.staticdir = staticdir
self.virtual_staticdir = virtual_staticdir
self.search = search
self.storage = storage
def init_translator_class(self):
# type: () -> None
if self.translator_class is None:
self.translator_class = WebSupportTranslator
def prepare_writing(self, docnames):
# type: (Iterable[unicode]) -> None
PickleHTMLBuilder.prepare_writing(self, docnames)
self.globalcontext['no_search_suffix'] = True
def write_doc(self, docname, doctree):
# type: (unicode, nodes.Node) -> None
destination = StringOutput(encoding='utf-8')
doctree.settings = self.docsettings
@@ -72,6 +83,7 @@ class WebSupportBuilder(PickleHTMLBuilder):
self.handle_page(docname, ctx, event_arg=doctree)
def write_doc_serialized(self, docname, doctree):
# type: (unicode, nodes.Node) -> None
self.imgpath = '/' + posixpath.join(self.virtual_staticdir, self.imagedir)
self.post_process_images(doctree)
title = self.env.longtitles.get(docname)
@@ -79,10 +91,12 @@ class WebSupportBuilder(PickleHTMLBuilder):
self.index_page(docname, doctree, title)
def load_indexer(self, docnames):
self.indexer = self.search
self.indexer.init_indexing(changed=docnames)
# type: (Iterable[unicode]) -> None
self.indexer = self.search # type: ignore
self.indexer.init_indexing(changed=docnames) # type: ignore
def _render_page(self, pagename, addctx, templatename, event_arg=None):
# type: (unicode, Dict, unicode, unicode) -> Tuple[Dict, Dict]
# This is mostly copied from StandaloneHTMLBuilder. However, instead
# of rendering the template and saving the html, create a context
# dict and pickle it.
@@ -91,6 +105,7 @@ class WebSupportBuilder(PickleHTMLBuilder):
def pathto(otheruri, resource=False,
baseuri=self.get_target_uri(pagename)):
# type: (unicode, bool, unicode) -> unicode
if resource and '://' in otheruri:
return otheruri
elif not resource:
@@ -128,6 +143,7 @@ class WebSupportBuilder(PickleHTMLBuilder):
def handle_page(self, pagename, addctx, templatename='page.html',
outfilename=None, event_arg=None):
# type: (unicode, Dict, unicode, unicode, unicode) -> None
ctx, doc_ctx = self._render_page(pagename, addctx,
templatename, event_arg)
@@ -141,11 +157,12 @@ class WebSupportBuilder(PickleHTMLBuilder):
# "show source" link
if ctx.get('sourcename'):
source_name = path.join(self.staticdir,
'_sources', os_path(ctx['sourcename']))
'_sources', os_path(ctx['sourcename']))
ensuredir(path.dirname(source_name))
copyfile(self.env.doc2path(pagename), source_name)
def handle_finish(self):
# type: () -> None
# get global values for css and script files
_, doc_ctx = self._render_page('tmp', {}, 'page.html')
self.globalcontext['css'] = doc_ctx['css']
@@ -164,8 +181,16 @@ class WebSupportBuilder(PickleHTMLBuilder):
shutil.move(src, dst)
def dump_search_index(self):
self.indexer.finish_indexing()
# type: () -> None
self.indexer.finish_indexing() # type: ignore
def setup(app):
# type: (Sphinx) -> Dict[unicode, Any]
app.add_builder(WebSupportBuilder)
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}

View File

@@ -16,9 +16,17 @@ from docutils import nodes
from docutils.io import StringOutput
from sphinx.builders import Builder
from sphinx.util import logging
from sphinx.util.osutil import ensuredir, os_path
from sphinx.writers.xml import XMLWriter, PseudoXMLWriter
if False:
# For type annotation
from typing import Any, Dict, Iterator, Set # NOQA
from sphinx.application import Sphinx # NOQA
logger = logging.getLogger(__name__)
class XMLBuilder(Builder):
"""
@@ -32,9 +40,11 @@ class XMLBuilder(Builder):
_writer_class = XMLWriter
def init(self):
# type: () -> None
pass
def get_outdated_docs(self):
# type: () -> Iterator[unicode]
for docname in self.env.found_docs:
if docname not in self.env.all_docs:
yield docname
@@ -54,12 +64,15 @@ class XMLBuilder(Builder):
pass
def get_target_uri(self, docname, typ=None):
# type: (unicode, unicode) -> unicode
return docname
def prepare_writing(self, docnames):
# type: (Set[unicode]) -> None
self.writer = self._writer_class(self)
def write_doc(self, docname, doctree):
# type: (unicode, nodes.Node) -> None
# work around multiple string % tuple issues in docutils;
# replace tuples in attribute values with lists
doctree = doctree.deepcopy()
@@ -77,12 +90,13 @@ class XMLBuilder(Builder):
outfilename = path.join(self.outdir, os_path(docname) + self.out_suffix)
ensuredir(path.dirname(outfilename))
try:
with codecs.open(outfilename, 'w', 'utf-8') as f:
with codecs.open(outfilename, 'w', 'utf-8') as f: # type: ignore
f.write(self.writer.output)
except (IOError, OSError) as err:
self.warn("error writing file %s: %s" % (outfilename, err))
logger.warning("error writing file %s: %s", outfilename, err)
def finish(self):
# type: () -> None
pass
@@ -98,7 +112,14 @@ class PseudoXMLBuilder(XMLBuilder):
def setup(app):
# type: (Sphinx) -> Dict[unicode, Any]
app.add_builder(XMLBuilder)
app.add_builder(PseudoXMLBuilder)
app.add_config_value('xml_pretty', True, 'env')
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}

View File

@@ -30,7 +30,7 @@ from sphinx.util.pycompat import terminal_safe
if False:
# For type annotation
from typing import Any, IO, Union # NOQA
from typing import Any, IO, List, Union # NOQA
USAGE = """\
@@ -116,9 +116,6 @@ def handle_exception(app, opts, exception, stderr=sys.stderr):
def main(argv):
# type: (List[unicode]) -> int
if not color_terminal():
nocolor()
parser = optparse.OptionParser(USAGE, epilog=EPILOG, formatter=MyFormatter())
parser.add_option('--version', action='store_true', dest='version',
help='show version information and exit')
@@ -167,8 +164,12 @@ def main(argv):
help='no output on stdout, just warnings on stderr')
group.add_option('-Q', action='store_true', dest='really_quiet',
help='no output at all, not even warnings')
group.add_option('-N', action='store_true', dest='nocolor',
help='do not emit colored output')
group.add_option('--color', dest='color',
action='store_const', const='yes', default='auto',
help='Do emit colored output (default: auto-detect)')
group.add_option('-N', '--no-color', dest='color',
action='store_const', const='no',
help='Do not emit colored output (default: auto-detect)')
group.add_option('-w', metavar='FILE', dest='warnfile',
help='write warnings (and errors) to given file')
group.add_option('-W', action='store_true', dest='warningiserror',
@@ -219,12 +220,12 @@ def main(argv):
# handle remaining filename arguments
filenames = args[2:]
err = 0 # type: ignore
errored = False
for filename in filenames:
if not path.isfile(filename):
print('Error: Cannot find file %r.' % filename, file=sys.stderr)
err = 1 # type: ignore
if err:
errored = True
if errored:
return 1
# likely encoding used for command-line arguments
@@ -238,7 +239,7 @@ def main(argv):
print('Error: Cannot combine -a option and filenames.', file=sys.stderr)
return 1
if opts.nocolor:
if opts.color == 'no' or (opts.color == 'auto' and not color_terminal()):
nocolor()
doctreedir = abspath(opts.doctreedir or path.join(outdir, '.doctrees'))
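The `-N` flag above becomes one side of a tri-state `--color` option. A standalone sketch of the optparse pattern (`color_terminal()` replaced here by a hypothetical stub; the real check inspects `sys.stdout` and `$TERM`):

```python
import optparse


def color_terminal():
    # hypothetical stub for sphinx.util.console.color_terminal()
    return False


parser = optparse.OptionParser()
parser.add_option('--color', dest='color',
                  action='store_const', const='yes', default='auto',
                  help='do emit colored output (default: auto-detect)')
parser.add_option('-N', '--no-color', dest='color',
                  action='store_const', const='no',
                  help='do not emit colored output (default: auto-detect)')

opts, _args = parser.parse_args(['-N'])
# colors are disabled either explicitly or when auto-detection fails
disable = opts.color == 'no' or (opts.color == 'auto' and not color_terminal())
```

Both options share one `dest`, so the last flag on the command line wins, and the `'auto'` default defers the decision until the terminal can be inspected.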

View File

@@ -13,18 +13,22 @@ import re
from os import path, getenv
from six import PY2, PY3, iteritems, string_types, binary_type, text_type, integer_types
from typing import Any, NamedTuple, Union
from sphinx.errors import ConfigError
from sphinx.locale import l_
from sphinx.util import logging
from sphinx.util.i18n import format_date
from sphinx.util.osutil import cd
from sphinx.util.pycompat import execfile_, NoneType
if False:
# For type annotation
from typing import Any, Callable, Tuple # NOQA
from typing import Any, Callable, Dict, Iterable, Iterator, List, Tuple # NOQA
from sphinx.util.tags import Tags # NOQA
logger = logging.getLogger(__name__)
nonascii_re = re.compile(br'[\x80-\xff]')
copyright_year_re = re.compile(r'^((\d{4}-)?)(\d{4})(?=[ ,])')
@@ -40,6 +44,13 @@ CONFIG_PERMITTED_TYPE_WARNING = "The config value `{name}' has type `{current.__
CONFIG_TYPE_WARNING = "The config value `{name}' has type `{current.__name__}', " \
"defaults to `{default.__name__}'."
if PY3:
unicode = str # special alias for static typing...
ConfigValue = NamedTuple('ConfigValue', [('name', str),
('value', Any),
('rebuild', Union[bool, unicode])])
class ENUM(object):
"""represents the config value should be a one of candidates.
@@ -121,9 +132,6 @@ class Config(object):
tls_verify = (True, 'env'),
tls_cacerts = (None, 'env'),
# pre-initialized confval for HTML builder
html_translator_class = (None, 'html', string_classes),
) # type: Dict[unicode, Tuple]
def __init__(self, dirname, filename, overrides, tags):
@@ -163,11 +171,11 @@ class Config(object):
if getenv('SOURCE_DATE_EPOCH') is not None:
for k in ('copyright', 'epub_copyright'):
if k in config:
config[k] = copyright_year_re.sub('\g<1>%s' % format_date('%Y'), # type: ignore # NOQA
config[k] = copyright_year_re.sub(r'\g<1>%s' % format_date('%Y'),
config[k])
def check_types(self, warn):
# type: (Callable) -> None
def check_types(self):
# type: () -> None
# check all values for deviation from the default value's type, since
# that can result in TypeErrors all over the place
# NB. since config values might use l_() we have to wait with calling
@@ -186,7 +194,7 @@ class Config(object):
current = self[name]
if isinstance(permitted, ENUM):
if not permitted.match(current):
warn(CONFIG_ENUM_WARNING.format(
logger.warning(CONFIG_ENUM_WARNING.format(
name=name, current=current, candidates=permitted.candidates))
else:
if type(current) is type(default):
@@ -201,22 +209,22 @@ class Config(object):
continue # at least we share a non-trivial base class
if permitted:
warn(CONFIG_PERMITTED_TYPE_WARNING.format(
logger.warning(CONFIG_PERMITTED_TYPE_WARNING.format(
name=name, current=type(current),
permitted=str([cls.__name__ for cls in permitted])))
else:
warn(CONFIG_TYPE_WARNING.format(
logger.warning(CONFIG_TYPE_WARNING.format(
name=name, current=type(current), default=type(default)))
def check_unicode(self, warn):
# type: (Callable) -> None
def check_unicode(self):
# type: () -> None
# check all string values for non-ASCII characters in bytestrings,
# since that can result in UnicodeErrors all over the place
for name, value in iteritems(self._raw_config):
if isinstance(value, binary_type) and nonascii_re.search(value): # type: ignore
warn('the config value %r is set to a string with non-ASCII '
'characters; this can lead to Unicode errors occurring. '
'Please use Unicode strings, e.g. %r.' % (name, u'Content'))
if isinstance(value, binary_type) and nonascii_re.search(value):
logger.warning('the config value %r is set to a string with non-ASCII '
'characters; this can lead to Unicode errors occurring. '
'Please use Unicode strings, e.g. %r.', name, u'Content')
def convert_overrides(self, name, value):
# type: (unicode, Any) -> Any
@@ -229,10 +237,10 @@ class Config(object):
'ignoring (use %r to set individual elements)' %
(name, name + '.key=value'))
elif isinstance(defvalue, list):
return value.split(',') # type: ignore
return value.split(',')
elif isinstance(defvalue, integer_types):
try:
return int(value) # type: ignore
return int(value)
except ValueError:
raise ValueError('invalid number %r for config value %r, ignoring' %
(value, name))
@@ -244,10 +252,10 @@ class Config(object):
else:
return value
def pre_init_values(self, warn):
# type: (Callable) -> None
def pre_init_values(self):
# type: () -> None
"""Initialize some limited config variables before loading extensions"""
variables = ['needs_sphinx', 'suppress_warnings', 'html_translator_class']
variables = ['needs_sphinx', 'suppress_warnings']
for name in variables:
try:
if name in self.overrides:
@@ -255,26 +263,26 @@ class Config(object):
elif name in self._raw_config:
self.__dict__[name] = self._raw_config[name]
except ValueError as exc:
warn(exc)
logger.warning("%s", exc)
def init_values(self, warn):
# type: (Callable) -> None
def init_values(self):
# type: () -> None
config = self._raw_config
for valname, value in iteritems(self.overrides):
try:
if '.' in valname:
realvalname, key = valname.split('.', 1)
config.setdefault(realvalname, {})[key] = value # type: ignore
config.setdefault(realvalname, {})[key] = value
continue
elif valname not in self.values:
warn('unknown config value %r in override, ignoring' % valname)
logger.warning('unknown config value %r in override, ignoring', valname)
continue
if isinstance(value, string_types):
config[valname] = self.convert_overrides(valname, value)
else:
config[valname] = value
except ValueError as exc:
warn(exc)
logger.warning("%s", exc)
for name in config:
if name in self.values:
self.__dict__[name] = config[name]
@@ -307,3 +315,16 @@ class Config(object):
def __contains__(self, name):
# type: (unicode) -> bool
return name in self.values
def __iter__(self):
# type: () -> Iterable[ConfigValue]
for name, value in iteritems(self.values):
yield ConfigValue(name, getattr(self, name), value[1]) # type: ignore
def add(self, name, default, rebuild, types):
# type: (unicode, Any, Union[bool, unicode], Any) -> None
self.values[name] = (default, rebuild, types)
def filter(self, rebuild):
# type: (str) -> Iterator[ConfigValue]
return (value for value in self if value.rebuild == rebuild) # type: ignore
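The new `__iter__` and `filter` make `Config` iterable as `ConfigValue` tuples. A self-contained sketch of that interface (using `collections.namedtuple` in place of the `typing.NamedTuple` declaration):

```python
from collections import namedtuple

ConfigValue = namedtuple('ConfigValue', ['name', 'value', 'rebuild'])


class Config(object):
    """Sketch: values maps name -> (default, rebuild-condition, types)."""

    def __init__(self, **settings):
        self.values = {name: (default, rebuild, ())
                       for name, (default, rebuild) in settings.items()}

    def __getattr__(self, name):
        # fall back to the registered default for unset values
        return self.values[name][0]

    def __iter__(self):
        for name, (default, rebuild, _types) in self.values.items():
            yield ConfigValue(name, getattr(self, name), rebuild)

    def filter(self, rebuild):
        # every value whose change would force the given kind of rebuild
        return (value for value in self if value.rebuild == rebuild)


config = Config(project=('Python', 'env'), html_theme=('alabaster', 'html'))
env_values = list(config.filter('env'))
```

Callers such as the environment's outdated-check can then ask "which settings force a full re-read?" with `config.filter('env')` instead of scanning the raw tuples.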

View File

@@ -10,12 +10,16 @@
"""
class RemovedInSphinx16Warning(DeprecationWarning):
class RemovedInSphinx17Warning(DeprecationWarning):
pass
class RemovedInSphinx17Warning(PendingDeprecationWarning):
class RemovedInSphinx18Warning(PendingDeprecationWarning):
pass
RemovedInNextVersionWarning = RemovedInSphinx16Warning
class RemovedInSphinx20Warning(PendingDeprecationWarning):
pass
RemovedInNextVersionWarning = RemovedInSphinx17Warning
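The deprecation classes shift one release forward each cycle: the next release's warning is a real `DeprecationWarning`, later ones stay pending. How they are meant to be used (the deprecated function name here is hypothetical):

```python
import warnings


class RemovedInSphinx17Warning(DeprecationWarning):
    pass


class RemovedInSphinx18Warning(PendingDeprecationWarning):
    pass


RemovedInNextVersionWarning = RemovedInSphinx17Warning


def old_helper():
    # hypothetical deprecated API: warn callers before removal
    warnings.warn('old_helper() is deprecated, use new_helper() instead',
                  RemovedInNextVersionWarning, stacklevel=2)
    return 42


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    result = old_helper()
```

Keeping one alias (`RemovedInNextVersionWarning`) means call sites never need touching when the release train advances; only the alias assignment moves.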

View File

@@ -31,7 +31,7 @@ from sphinx.directives.patches import ( # noqa
if False:
# For type annotation
from typing import Any # NOQA
from typing import Any, Dict, List # NOQA
from sphinx.application import Sphinx # NOQA
from sphinx.environment import BuildEnvironment # NOQA
@@ -242,9 +242,15 @@ class DefaultDomain(Directive):
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
directives.register_directive('default-role', DefaultRole)
directives.register_directive('default-domain', DefaultDomain)
directives.register_directive('describe', ObjectDescription)
# new, more consistent, name
directives.register_directive('object', ObjectDescription)
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}

View File

@@ -11,21 +11,23 @@ import sys
import codecs
from difflib import unified_diff
from six import string_types
from docutils import nodes
from docutils.parsers.rst import Directive, directives
from docutils.statemachine import ViewList
from sphinx import addnodes
from sphinx.locale import _
from sphinx.util import logging
from sphinx.util import parselinenos
from sphinx.util.nodes import set_source_info
if False:
# For type annotation
from typing import Any # NOQA
from typing import Any, Dict, List, Tuple # NOQA
from sphinx.application import Sphinx # NOQA
from sphinx.config import Config # NOQA
logger = logging.getLogger(__name__)
class Highlight(Directive):
@@ -55,11 +57,14 @@ class Highlight(Directive):
linenothreshold=linenothreshold)]
def dedent_lines(lines, dedent):
# type: (List[unicode], int) -> List[unicode]
def dedent_lines(lines, dedent, location=None):
# type: (List[unicode], int, Any) -> List[unicode]
if not dedent:
return lines
if any(s[:dedent].strip() for s in lines):
logger.warning(_('Over dedent has been detected'), location=location)
new_lines = []
for line in lines:
new_line = line[dedent:]
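The dedent behaviour above can be sketched as a standalone helper (a hypothetical minimal version, not the Sphinx implementation; `print` stands in for `logger.warning`):

```python
def dedent_lines(lines, dedent):
    # Strip `dedent` leading columns from each line; flag the case where
    # non-whitespace text would be cut off (the "over dedent" warning above).
    if not dedent:
        return lines
    if any(line[:dedent].strip() for line in lines):
        print("warning: over dedent has been detected")
    return [line[dedent:] for line in lines]
```

With `dedent=4`, four columns are removed from every line; a `dedent` of 0 returns the input unchanged.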
@@ -78,11 +83,12 @@ def container_wrapper(directive, literal_node, caption):
directive.state.nested_parse(ViewList([caption], source=''),
directive.content_offset, parsed)
if isinstance(parsed[0], nodes.system_message):
raise ValueError(parsed[0])
msg = _('Invalid caption: %s') % parsed[0].astext()
raise ValueError(msg)
caption_node = nodes.caption(parsed[0].rawsource, '',
*parsed[0].children)
caption_node.source = parsed[0].source
caption_node.line = parsed[0].line
caption_node.source = literal_node.source
caption_node.line = literal_node.line
container_node += caption_node
container_node += literal_node
return container_node
@@ -110,22 +116,30 @@ class CodeBlock(Directive):
def run(self):
# type: () -> List[nodes.Node]
document = self.state.document
code = u'\n'.join(self.content)
location = self.state_machine.get_source_and_line(self.lineno)
linespec = self.options.get('emphasize-lines')
if linespec:
try:
nlines = len(self.content)
hl_lines = [x+1 for x in parselinenos(linespec, nlines)]
hl_lines = parselinenos(linespec, nlines)
if any(i >= nlines for i in hl_lines):
logger.warning('line number spec is out of range(1-%d): %r' %
(nlines, self.options['emphasize-lines']),
location=location)
hl_lines = [x + 1 for x in hl_lines if x < nlines]
except ValueError as err:
document = self.state.document
return [document.reporter.warning(str(err), line=self.lineno)]
else:
hl_lines = None
if 'dedent' in self.options:
location = self.state_machine.get_source_and_line(self.lineno)
lines = code.split('\n')
lines = dedent_lines(lines, self.options['dedent'])
lines = dedent_lines(lines, self.options['dedent'], location=location)
code = '\n'.join(lines)
literal = nodes.literal_block(code, code)
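The new `emphasize-lines` validation can be condensed into a small sketch (hypothetical helper; `parselinenos` returns 0-based indices, which Pygments wants shifted to 1-based):

```python
def filter_emphasize_lines(linelist, nlines):
    # Out-of-range entries trigger a warning and are dropped; the survivors
    # are shifted from 0-based to 1-based line numbers.
    if any(i >= nlines for i in linelist):
        print("warning: line number spec is out of range(1-%d)" % nlines)
    return [x + 1 for x in linelist if x < nlines]
```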
@@ -145,9 +159,7 @@ class CodeBlock(Directive):
try:
literal = container_wrapper(self, literal, caption)
except ValueError as exc:
document = self.state.document
errmsg = _('Invalid caption: %s' % exc[0][0].astext()) # type: ignore
return [document.reporter.warning(errmsg, line=self.lineno)]
return [document.reporter.warning(str(exc), line=self.lineno)]
# literal will be note_implicit_target that is linked from caption and numref.
# when options['name'] is provided, it should be primary ID.
@@ -156,6 +168,196 @@ class CodeBlock(Directive):
return [literal]
class LiteralIncludeReader(object):
INVALID_OPTIONS_PAIR = [
('lineno-match', 'lineno-start'),
('lineno-match', 'append'),
('lineno-match', 'prepend'),
('start-after', 'start-at'),
('end-before', 'end-at'),
('diff', 'pyobject'),
('diff', 'lineno-start'),
('diff', 'lineno-match'),
('diff', 'lines'),
('diff', 'start-after'),
('diff', 'end-before'),
('diff', 'start-at'),
('diff', 'end-at'),
]
def __init__(self, filename, options, config):
# type: (unicode, Dict, Config) -> None
self.filename = filename
self.options = options
self.encoding = options.get('encoding', config.source_encoding)
self.lineno_start = self.options.get('lineno-start', 1)
self.parse_options()
def parse_options(self):
# type: () -> None
for option1, option2 in self.INVALID_OPTIONS_PAIR:
if option1 in self.options and option2 in self.options:
raise ValueError(_('Cannot use both "%s" and "%s" options') %
(option1, option2))
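The pairwise conflict check in `parse_options` can be exercised in isolation (an illustrative subset of the pair table above, not the full list):

```python
# Illustrative subset of the mutually exclusive option pairs.
INVALID_OPTIONS_PAIR = [
    ('lineno-match', 'lineno-start'),
    ('diff', 'lines'),
]

def check_options(options):
    # Raise as soon as any forbidden combination is present.
    for option1, option2 in INVALID_OPTIONS_PAIR:
        if option1 in options and option2 in options:
            raise ValueError('Cannot use both "%s" and "%s" options'
                             % (option1, option2))
```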
def read_file(self, filename, location=None):
# type: (unicode, Any) -> List[unicode]
try:
with codecs.open(filename, 'r', self.encoding, errors='strict') as f: # type: ignore # NOQA
text = f.read() # type: unicode
if 'tab-width' in self.options:
text = text.expandtabs(self.options['tab-width'])
lines = text.splitlines(True)
if 'dedent' in self.options:
return dedent_lines(lines, self.options.get('dedent'), location=location)
else:
return lines
except (IOError, OSError):
raise IOError(_('Include file %r not found or reading it failed') % filename)
except UnicodeError:
raise UnicodeError(_('Encoding %r used for reading included file %r seems to '
'be wrong, try giving an :encoding: option') %
(self.encoding, filename))
def read(self, location=None):
# type: (Any) -> Tuple[unicode, int]
if 'diff' in self.options:
lines = self.show_diff()
else:
filters = [self.pyobject_filter,
self.start_filter,
self.end_filter,
self.lines_filter,
self.prepend_filter,
self.append_filter]
lines = self.read_file(self.filename, location=location)
for func in filters:
lines = func(lines, location=location)
return ''.join(lines), len(lines)
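The filter chain in `read()` is a simple left fold over the line list. A minimal sketch with two toy filters (hypothetical, standing in for `pyobject_filter`, `lines_filter`, etc.):

```python
def apply_filters(lines, filters):
    # Each filter takes the current line list and returns a new one.
    for func in filters:
        lines = func(lines)
    return lines

def drop_blank(lines):
    return [line for line in lines if line.strip()]

def number(lines):
    return ["%d: %s" % (i + 1, line) for i, line in enumerate(lines)]
```

Filter order matters, just as in `read()`: numbering after blank-line removal gives a contiguous count.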
def show_diff(self, location=None):
# type: (Any) -> List[unicode]
new_lines = self.read_file(self.filename)
old_filename = self.options.get('diff')
old_lines = self.read_file(old_filename)
diff = unified_diff(old_lines, new_lines, old_filename, self.filename) # type: ignore
return list(diff)
def pyobject_filter(self, lines, location=None):
# type: (List[unicode], Any) -> List[unicode]
pyobject = self.options.get('pyobject')
if pyobject:
from sphinx.pycode import ModuleAnalyzer
analyzer = ModuleAnalyzer.for_file(self.filename, '')
tags = analyzer.find_tags()
if pyobject not in tags:
raise ValueError(_('Object named %r not found in include file %r') %
(pyobject, self.filename))
else:
start = tags[pyobject][1]
end = tags[pyobject][2]
lines = lines[start - 1:end - 1]
if 'lineno-match' in self.options:
self.lineno_start = start
return lines
def lines_filter(self, lines, location=None):
# type: (List[unicode], Any) -> List[unicode]
linespec = self.options.get('lines')
if linespec:
linelist = parselinenos(linespec, len(lines))
if any(i >= len(lines) for i in linelist):
logger.warning('line number spec is out of range(1-%d): %r' %
(len(lines), linespec), location=location)
if 'lineno-match' in self.options:
# make sure the line list is not "disjoint".
first = linelist[0]
if all(first + i == n for i, n in enumerate(linelist)):
self.lineno_start += linelist[0]
else:
raise ValueError(_('Cannot use "lineno-match" with a disjoint '
'set of "lines"'))
lines = [lines[n] for n in linelist if n < len(lines)]
if lines == []:
raise ValueError(_('Line spec %r: no lines pulled from include file %r') %
(linespec, self.filename))
return lines
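The "disjoint" test used with `lineno-match` above reduces to checking that the selected indices form one unbroken run; a standalone sketch:

```python
def is_contiguous(linelist):
    # True when the list is one unbroken run, e.g. [3, 4, 5] but not [3, 5].
    first = linelist[0]
    return all(first + i == n for i, n in enumerate(linelist))
```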
def start_filter(self, lines, location=None):
# type: (List[unicode], Any) -> List[unicode]
if 'start-at' in self.options:
start = self.options.get('start-at')
inclusive = False
elif 'start-after' in self.options:
start = self.options.get('start-after')
inclusive = True
else:
start = None
if start:
for lineno, line in enumerate(lines):
if start in line:
if inclusive:
if 'lineno-match' in self.options:
self.lineno_start += lineno + 1
return lines[lineno + 1:]
else:
if 'lineno-match' in self.options:
self.lineno_start += lineno
return lines[lineno:]
return lines
def end_filter(self, lines, location=None):
# type: (List[unicode], Any) -> List[unicode]
if 'end-at' in self.options:
end = self.options.get('end-at')
inclusive = True
elif 'end-before' in self.options:
end = self.options.get('end-before')
inclusive = False
else:
end = None
if end:
for lineno, line in enumerate(lines):
if end in line:
if inclusive:
return lines[:lineno + 1]
else:
if lineno == 0:
return []
else:
return lines[:lineno]
return lines
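The exclusive variants of the two marker filters (`start-after` and `end-before`) can be condensed into one hypothetical helper, simplified from `start_filter`/`end_filter` above:

```python
def between(lines, start=None, end=None):
    # Keep lines after the first line containing `start` (exclusive, like
    # start-after), up to the first line containing `end` (exclusive, like
    # end-before). Missing markers leave that side of the range open.
    if start is not None:
        for i, line in enumerate(lines):
            if start in line:
                lines = lines[i + 1:]
                break
    if end is not None:
        for i, line in enumerate(lines):
            if end in line:
                lines = lines[:i]
                break
    return lines
```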
def prepend_filter(self, lines, location=None):
# type: (List[unicode], Any) -> List[unicode]
prepend = self.options.get('prepend')
if prepend:
lines.insert(0, prepend + '\n')
return lines
def append_filter(self, lines, location=None):
# type: (List[unicode], Any) -> List[unicode]
append = self.options.get('append')
if append:
lines.append(append + '\n')
return lines
class LiteralInclude(Directive):
"""
Like ``.. include:: :literal:``, but only warns if the include file is
@@ -190,24 +392,6 @@ class LiteralInclude(Directive):
'diff': directives.unchanged_required,
}
def read_with_encoding(self, filename, document, codec_info, encoding):
# type: (unicode, nodes.Node, Any, unicode) -> List
try:
with codecs.StreamReaderWriter(open(filename, 'rb'), codec_info[2],
codec_info[3], 'strict') as f:
lines = f.readlines()
lines = dedent_lines(lines, self.options.get('dedent')) # type: ignore
return lines
except (IOError, OSError):
return [document.reporter.warning(
'Include file %r not found or reading it failed' % filename,
line=self.lineno)]
except UnicodeError:
return [document.reporter.warning(
'Encoding %r used for reading included file %r seems to '
'be wrong, try giving an :encoding: option' %
(encoding, filename))]
def run(self):
# type: () -> List[nodes.Node]
document = self.state.document
@@ -215,183 +399,63 @@ class LiteralInclude(Directive):
return [document.reporter.warning('File insertion disabled',
line=self.lineno)]
env = document.settings.env
rel_filename, filename = env.relfn2path(self.arguments[0])
if 'pyobject' in self.options and 'lines' in self.options:
return [document.reporter.warning(
'Cannot use both "pyobject" and "lines" options',
line=self.lineno)]
# convert options['diff'] to absolute path
if 'diff' in self.options:
_, path = env.relfn2path(self.options['diff'])
self.options['diff'] = path
if 'lineno-match' in self.options and 'lineno-start' in self.options:
return [document.reporter.warning(
'Cannot use both "lineno-match" and "lineno-start"',
line=self.lineno)]
try:
location = self.state_machine.get_source_and_line(self.lineno)
rel_filename, filename = env.relfn2path(self.arguments[0])
env.note_dependency(rel_filename)
if 'lineno-match' in self.options and \
(set(['append', 'prepend']) & set(self.options.keys())):
return [document.reporter.warning(
'Cannot use "lineno-match" and "append" or "prepend"',
line=self.lineno)]
reader = LiteralIncludeReader(filename, self.options, env.config)
text, lines = reader.read(location=location)
if 'start-after' in self.options and 'start-at' in self.options:
return [document.reporter.warning(
'Cannot use both "start-after" and "start-at" options',
line=self.lineno)]
retnode = nodes.literal_block(text, text, source=filename)
set_source_info(self, retnode)
if self.options.get('diff'): # if diff is set, set udiff
retnode['language'] = 'udiff'
elif 'language' in self.options:
retnode['language'] = self.options['language']
retnode['linenos'] = ('linenos' in self.options or
'lineno-start' in self.options or
'lineno-match' in self.options)
retnode['classes'] += self.options.get('class', [])
extra_args = retnode['highlight_args'] = {}
if 'emphasize-lines' in self.options:
hl_lines = parselinenos(self.options['emphasize-lines'], lines)
if any(i >= lines for i in hl_lines):
logger.warning('line number spec is out of range(1-%d): %r' %
(lines, self.options['emphasize-lines']),
location=location)
extra_args['hl_lines'] = [x + 1 for x in hl_lines if x < lines]
extra_args['linenostart'] = reader.lineno_start
if 'end-before' in self.options and 'end-at' in self.options:
return [document.reporter.warning(
'Cannot use both "end-before" and "end-at" options',
line=self.lineno)]
encoding = self.options.get('encoding', env.config.source_encoding)
codec_info = codecs.lookup(encoding)
lines = self.read_with_encoding(filename, document,
codec_info, encoding)
if lines and not isinstance(lines[0], string_types):
return lines
diffsource = self.options.get('diff')
if diffsource is not None:
tmp, fulldiffsource = env.relfn2path(diffsource)
difflines = self.read_with_encoding(fulldiffsource, document,
codec_info, encoding)
if not isinstance(difflines[0], string_types):
return difflines
diff = unified_diff(
difflines,
lines,
diffsource,
self.arguments[0])
lines = list(diff)
linenostart = self.options.get('lineno-start', 1)
objectname = self.options.get('pyobject')
if objectname is not None:
from sphinx.pycode import ModuleAnalyzer
analyzer = ModuleAnalyzer.for_file(filename, '')
tags = analyzer.find_tags()
if objectname not in tags:
return [document.reporter.warning(
'Object named %r not found in include file %r' %
(objectname, filename), line=self.lineno)]
else:
lines = lines[tags[objectname][1]-1: tags[objectname][2]-1]
if 'lineno-match' in self.options:
linenostart = tags[objectname][1]
linespec = self.options.get('lines')
if linespec:
try:
linelist = parselinenos(linespec, len(lines))
except ValueError as err:
return [document.reporter.warning(str(err), line=self.lineno)]
if 'lineno-match' in self.options:
# make sure the line list is not "disjoint".
previous = linelist[0]
for line_number in linelist[1:]:
if line_number == previous + 1:
previous = line_number
continue
return [document.reporter.warning(
'Cannot use "lineno-match" with a disjoint set of '
'"lines"', line=self.lineno)]
linenostart = linelist[0] + 1
# just ignore non-existing lines
lines = [lines[i] for i in linelist if i < len(lines)]
if not lines:
return [document.reporter.warning(
'Line spec %r: no lines pulled from include file %r' %
(linespec, filename), line=self.lineno)]
linespec = self.options.get('emphasize-lines')
if linespec:
try:
hl_lines = [x+1 for x in parselinenos(linespec, len(lines))]
except ValueError as err:
return [document.reporter.warning(str(err), line=self.lineno)]
else:
hl_lines = None
start_str = self.options.get('start-after')
start_inclusive = False
if self.options.get('start-at') is not None:
start_str = self.options.get('start-at')
start_inclusive = True
end_str = self.options.get('end-before')
end_inclusive = False
if self.options.get('end-at') is not None:
end_str = self.options.get('end-at')
end_inclusive = True
if start_str is not None or end_str is not None:
use = not start_str
res = []
for line_number, line in enumerate(lines):
if not use and start_str and start_str in line:
if 'lineno-match' in self.options:
linenostart += line_number + 1
use = True
if start_inclusive:
res.append(line)
elif use and end_str and end_str in line:
if end_inclusive:
res.append(line)
break
elif use:
res.append(line)
lines = res
prepend = self.options.get('prepend')
if prepend:
lines.insert(0, prepend + '\n')
append = self.options.get('append')
if append:
lines.append(append + '\n')
text = ''.join(lines)
if self.options.get('tab-width'):
text = text.expandtabs(self.options['tab-width'])
retnode = nodes.literal_block(text, text, source=filename)
set_source_info(self, retnode)
if diffsource: # if diff is set, set udiff
retnode['language'] = 'udiff'
if 'language' in self.options:
retnode['language'] = self.options['language']
retnode['linenos'] = 'linenos' in self.options or \
'lineno-start' in self.options or \
'lineno-match' in self.options
retnode['classes'] += self.options.get('class', [])
extra_args = retnode['highlight_args'] = {}
if hl_lines is not None:
extra_args['hl_lines'] = hl_lines
extra_args['linenostart'] = linenostart
env.note_dependency(rel_filename)
caption = self.options.get('caption')
if caption is not None:
if not caption:
caption = self.arguments[0]
try:
if 'caption' in self.options:
caption = self.options['caption'] or self.arguments[0]
retnode = container_wrapper(self, retnode, caption)
except ValueError as exc:
document = self.state.document
errmsg = _('Invalid caption: %s' % exc[0][0].astext()) # type: ignore
return [document.reporter.warning(errmsg, line=self.lineno)]
# retnode will be note_implicit_target that is linked from caption and numref.
# when options['name'] is provided, it should be primary ID.
self.add_name(retnode)
# retnode will be note_implicit_target that is linked from caption and numref.
# when options['name'] is provided, it should be primary ID.
self.add_name(retnode)
return [retnode]
return [retnode]
except Exception as exc:
return [document.reporter.warning(str(exc), line=self.lineno)]
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
directives.register_directive('highlight', Highlight)
directives.register_directive('highlightlang', Highlight) # old
directives.register_directive('code-block', CodeBlock)
directives.register_directive('sourcecode', CodeBlock)
directives.register_directive('literalinclude', LiteralInclude)
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}

View File

@@ -24,7 +24,7 @@ from sphinx.util.matching import patfilter
if False:
# For type annotation
from typing import Tuple # NOQA
from typing import Any, Dict, List, Tuple # NOQA
from sphinx.application import Sphinx # NOQA
@@ -215,7 +215,7 @@ class VersionChange(Directive):
text = versionlabels[self.name] % self.arguments[0]
if len(self.arguments) == 2:
inodes, messages = self.state.inline_text(self.arguments[1],
self.lineno+1)
self.lineno + 1)
para = nodes.paragraph(self.arguments[1], '', *inodes, translatable=False)
set_source_info(self, para)
node.append(para)
@@ -340,7 +340,7 @@ class HList(Directive):
index = 0
newnode = addnodes.hlist()
for column in range(ncolumns):
endindex = index + (column < nmore and (npercol+1) or npercol)
endindex = index + (column < nmore and (npercol + 1) or npercol)
col = addnodes.hlistcol()
col += nodes.bullet_list()
col[0] += fulllist.children[index:endindex]
@@ -427,7 +427,7 @@ class Include(BaseInclude):
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
directives.register_directive('toctree', TocTree)
directives.register_directive('sectionauthor', Author)
directives.register_directive('moduleauthor', Author)
@@ -449,3 +449,9 @@ def setup(app):
directives.register_directive('cssclass', Class)
# new standard name when default-domain with "class" is in effect
directives.register_directive('rst-class', Class)
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}

View File

@@ -9,9 +9,15 @@
from docutils import nodes
from docutils.parsers.rst import directives
from docutils.parsers.rst.directives import images, html
from docutils.parsers.rst.directives import images, html, tables
from sphinx import addnodes
from sphinx.util.nodes import set_source_info
if False:
# For type annotation
from typing import Dict, List # NOQA
from sphinx.application import Sphinx # NOQA
class Figure(images.Figure):
@@ -20,6 +26,7 @@ class Figure(images.Figure):
"""
def run(self):
# type: () -> List[nodes.Node]
name = self.options.pop('name', None)
result = images.Figure.run(self)
if len(result) == 2 or isinstance(result[0], nodes.system_message):
@@ -39,6 +46,7 @@ class Figure(images.Figure):
class Meta(html.Meta):
def run(self):
# type: () -> List[nodes.Node]
env = self.state.document.settings.env
result = html.Meta.run(self)
for node in result:
@@ -55,6 +63,55 @@ class Meta(html.Meta):
return result
class RSTTable(tables.RSTTable):
"""The table directive which sets source and line information to its caption.
Only for docutils-0.13 or older version."""
def make_title(self):
title, message = tables.RSTTable.make_title(self)
if title:
set_source_info(self, title)
return title, message
class CSVTable(tables.CSVTable):
"""The csv-table directive which sets source and line information to its caption.
Only for docutils-0.13 or older version."""
def make_title(self):
title, message = tables.CSVTable.make_title(self)
if title:
set_source_info(self, title)
return title, message
class ListTable(tables.ListTable):
"""The list-table directive which sets source and line information to its caption.
Only for docutils-0.13 or older version."""
def make_title(self):
title, message = tables.ListTable.make_title(self)
if title:
set_source_info(self, title)
return title, message
def setup(app):
# type: (Sphinx) -> Dict
directives.register_directive('figure', Figure)
directives.register_directive('meta', Meta)
directives.register_directive('table', RSTTable)
directives.register_directive('csv-table', CSVTable)
directives.register_directive('list-table', ListTable)
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}

View File

@@ -19,10 +19,13 @@ from sphinx.locale import _
if False:
# For type annotation
from typing import Any, Callable, Iterable, Tuple, Type, Union # NOQA
from typing import Any, Callable, Dict, Iterable, List, Tuple, Type, Union # NOQA
from docutils import nodes # NOQA
from docutils.parsers.rst.states import Inliner # NOQA
from sphinx.builders import Builder # NOQA
from sphinx.environment import BuildEnvironment # NOQA
from sphinx.roles import XRefRole # NOQA
from sphinx.util.typing import RoleFunction # NOQA
class ObjType(object):
@@ -79,7 +82,7 @@ class Index(object):
self.domain = domain
def generate(self, docnames=None):
# type: (List[unicode]) -> Tuple[List[Tuple[unicode, List[List[Union[unicode, int]]]]], bool] # NOQA
# type: (Iterable[unicode]) -> Tuple[List[Tuple[unicode, List[List[Union[unicode, int]]]]], bool] # NOQA
"""Return entries for the index given by *name*. If *docnames* is
given, restrict to entries referring to these docnames.
@@ -107,7 +110,7 @@ class Index(object):
Qualifier and description are not rendered e.g. in LaTeX output.
"""
return []
raise NotImplementedError
class Domain(object):
@@ -142,7 +145,7 @@ class Domain(object):
#: directive name -> directive class
directives = {} # type: Dict[unicode, Any]
#: role name -> role callable
roles = {} # type: Dict[unicode, Callable]
roles = {} # type: Dict[unicode, Union[RoleFunction, XRefRole]]
#: a list of Index subclasses
indices = [] # type: List[Type[Index]]
#: role name -> a warning message if reference is missing
@@ -189,8 +192,8 @@ class Domain(object):
return None
fullname = '%s:%s' % (self.name, name)
def role_adapter(typ, rawtext, text, lineno, inliner,
options={}, content=[]):
def role_adapter(typ, rawtext, text, lineno, inliner, options={}, content=[]):
# type: (unicode, unicode, unicode, int, Inliner, Dict, List[unicode]) -> nodes.Node # NOQA
return self.roles[name](fullname, rawtext, text, lineno,
inliner, options, content)
self._role_cache[name] = role_adapter
@@ -210,6 +213,7 @@ class Domain(object):
class DirectiveAdapter(BaseDirective): # type: ignore
def run(self):
# type: () -> List[nodes.Node]
self.name = fullname
return BaseDirective.run(self)
self._directive_cache[name] = DirectiveAdapter

View File

@@ -24,7 +24,7 @@ from sphinx.util.docfields import Field, TypedField
if False:
# For type annotation
from typing import Any, Iterator, Tuple # NOQA
from typing import Any, Dict, Iterator, List, Tuple # NOQA
from sphinx.application import Sphinx # NOQA
from sphinx.builders import Builder # NOQA
from sphinx.environment import BuildEnvironment # NOQA
@@ -85,7 +85,7 @@ class CObject(ObjectDescription):
# add cross-ref nodes for all words
for part in [_f for _f in wsplit_re.split(ctype) if _f]: # type: ignore
tnode = nodes.Text(part, part)
if part[0] in string.ascii_letters+'_' and \
if part[0] in string.ascii_letters + '_' and \
part not in self.stopwords:
pnode = addnodes.pending_xref(
'', refdomain='c', reftype='type', reftarget=part,
@@ -172,7 +172,7 @@ class CObject(ObjectDescription):
ctype, argname = arg.rsplit(' ', 1)
self._parse_type(param, ctype)
# separate by non-breaking space in the output
param += nodes.emphasis(' '+argname, u'\xa0'+argname)
param += nodes.emphasis(' ' + argname, u'\xa0' + argname)
except ValueError:
# no argument name given, only the type
self._parse_type(param, arg)
@@ -245,7 +245,7 @@ class CXRefRole(XRefRole):
title = title[1:]
dot = title.rfind('.')
if dot != -1:
title = title[dot+1:]
title = title[dot + 1:]
return title, target
@@ -325,5 +325,11 @@ class CDomain(Domain):
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
app.add_domain(CDomain)
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}

View File

@@ -15,25 +15,28 @@ from copy import deepcopy
from six import iteritems, text_type
from docutils import nodes
from docutils.parsers.rst import Directive, directives
from sphinx import addnodes
from sphinx.roles import XRefRole
from sphinx.locale import l_, _
from sphinx.domains import Domain, ObjType
from sphinx.directives import ObjectDescription
from sphinx.util import logging
from sphinx.util.nodes import make_refnode
from sphinx.util.compat import Directive
from sphinx.util.pycompat import UnicodeMixin
from sphinx.util.docfields import Field, GroupedField
if False:
# For type annotation
from typing import Any, Iterator, Match, Pattern, Tuple, Union # NOQA
from typing import Any, Callable, Dict, Iterator, List, Match, Pattern, Tuple, Union # NOQA
from sphinx.application import Sphinx # NOQA
from sphinx.builders import Builder # NOQA
from sphinx.config import Config # NOQA
from sphinx.environment import BuildEnvironment # NOQA
logger = logging.getLogger(__name__)
"""
Important note on ids
----------------------------------------------------------------------------
@@ -50,13 +53,17 @@ if False:
the index. All of the versions should work as permalinks.
Tagnames
Signature Nodes and Tagnames
----------------------------------------------------------------------------
Each desc_signature node will have the attribute 'sphinx_cpp_tagname' set to
- 'templateParams', if the line is on the form 'template<...>',
- 'templateIntroduction, if the line is on the form 'conceptName{...}'
Each signature is in a desc_signature node, where all children are
desc_signature_line nodes. Each of these lines will have the attribute
'sphinx_cpp_tagname' set to one of the following (prioritized):
- 'declarator', if the line contains the name of the declared object.
- 'templateParams', if the line starts a template parameter list,
- 'templateParams', if the line has template parameters
Note: such lines might get a new tag in the future.
- 'templateIntroduction', if the line is of the form 'conceptName{...}'
No other desc_signature nodes should exist (so far).
@@ -95,9 +102,9 @@ if False:
attribute-specifier-seq[opt] decl-specifier-seq[opt]
init-declarator-list[opt] ;
# Drop the semi-colon. For now: drop the attributes (TODO).
# Use at most 1 init-declerator.
-> decl-specifier-seq init-declerator
-> decl-specifier-seq declerator initializer
# Use at most 1 init-declarator.
-> decl-specifier-seq init-declarator
-> decl-specifier-seq declarator initializer
decl-specifier ->
storage-class-specifier ->
@@ -158,22 +165,22 @@ if False:
| template-argument-list "," template-argument "..."[opt]
template-argument ->
constant-expression
| type-specifier-seq abstract-declerator
| type-specifier-seq abstract-declarator
| id-expression
declerator ->
ptr-declerator
declarator ->
ptr-declarator
| noptr-declarator parameters-and-qualifiers trailing-return-type
(TODO: for now we don't support trailing-return-type)
ptr-declerator ->
noptr-declerator
ptr-declarator ->
noptr-declarator
| ptr-operator ptr-declarator
noptr-declerator ->
noptr-declarator ->
declarator-id attribute-specifier-seq[opt] ->
"..."[opt] id-expression
| rest-of-trailing
| noptr-declerator parameters-and-qualifiers
| noptr-declarator parameters-and-qualifiers
| noptr-declarator "[" constant-expression[opt] "]"
attribute-specifier-seq[opt]
| "(" ptr-declarator ")"
@@ -235,20 +242,20 @@ if False:
# Drop the attributes
-> decl-specifier-seq abstract-declarator[opt]
grammar, typedef-like: no initializer
decl-specifier-seq declerator
decl-specifier-seq declarator
Can start with a templateDeclPrefix.
member_object:
goal: as a type_object which must have a declerator, and optionally
goal: as a type_object which must have a declarator, and optionally
with a initializer
grammar:
decl-specifier-seq declerator initializer
decl-specifier-seq declarator initializer
Can start with a templateDeclPrefix.
function_object:
goal: a function declaration, TODO: what about templates? for now: skip
grammar: no initializer
decl-specifier-seq declerator
decl-specifier-seq declarator
Can start with a templateDeclPrefix.
class_object:
@@ -532,7 +539,7 @@ class ASTBase(UnicodeMixin):
# type: (Any) -> bool
return not self.__eq__(other)
__hash__ = None # type: None
__hash__ = None # type: Callable[[], int]
def clone(self):
# type: () -> ASTBase
@@ -889,6 +896,7 @@ class ASTTemplateParams(ASTBase):
# type: (Any) -> None
assert params is not None
self.params = params
self.isNested = False # whether it's a template template param
def get_id_v2(self):
# type: () -> unicode
@@ -907,17 +915,30 @@ class ASTTemplateParams(ASTBase):
res.append(u"> ")
return ''.join(res)
def describe_signature(self, signode, mode, env, symbol):
# type: (addnodes.desc_signature, unicode, BuildEnvironment, Symbol) -> None
signode.sphinx_cpp_tagname = 'templateParams'
signode += nodes.Text("template<")
def describe_signature(self, parentNode, mode, env, symbol, lineSpec=None):
# type: (addnodes.desc_signature, unicode, BuildEnvironment, Symbol, bool) -> None
# 'lineSpec' is defaulted because of template template parameters
def makeLine(parentNode=parentNode):
signode = addnodes.desc_signature_line()
parentNode += signode
signode.sphinx_cpp_tagname = 'templateParams'
return signode
if self.isNested:
lineNode = parentNode
else:
lineNode = makeLine()
lineNode += nodes.Text("template<")
first = True
for param in self.params:
if not first:
signode += nodes.Text(", ")
lineNode += nodes.Text(", ")
first = False
param.describe_signature(signode, mode, env, symbol)
signode += nodes.Text(">")
if lineSpec:
lineNode = makeLine()
param.describe_signature(lineNode, mode, env, symbol)
if lineSpec and not first:
lineNode = makeLine()
lineNode += nodes.Text(">")
class ASTTemplateIntroductionParameter(ASTBase):
@@ -1002,8 +1023,11 @@ class ASTTemplateIntroduction(ASTBase):
res.append('} ')
return ''.join(res)
def describe_signature(self, signode, mode, env, symbol):
# type: (addnodes.desc_signature, unicode, BuildEnvironment, Symbol) -> None
def describe_signature(self, parentNode, mode, env, symbol, lineSpec):
# type: (addnodes.desc_signature, unicode, BuildEnvironment, Symbol, bool) -> None
# Note: 'lineSpec' has no effect on template introductions.
signode = addnodes.desc_signature_line()
parentNode += signode
signode.sphinx_cpp_tagname = 'templateIntroduction'
self.concept.describe_signature(signode, 'markType', env, symbol)
signode += nodes.Text('{')
@@ -1040,13 +1064,11 @@ class ASTTemplateDeclarationPrefix(ASTBase):
res.append(text_type(t))
return u''.join(res)
def describe_signature(self, signode, mode, env, symbol):
# type: (addnodes.desc_signature, unicode, BuildEnvironment, Symbol) -> None
def describe_signature(self, signode, mode, env, symbol, lineSpec):
# type: (addnodes.desc_signature, unicode, BuildEnvironment, Symbol, bool) -> None
_verify_description_mode(mode)
for t in self.templates:
templateNode = addnodes.desc_signature_line()
t.describe_signature(templateNode, 'lastIsName', env, symbol)
signode += templateNode
t.describe_signature(signode, 'lastIsName', env, symbol, lineSpec)
class ASTOperatorBuildIn(ASTBase):
@@ -1418,7 +1440,7 @@ class ASTTrailingTypeSpecName(ASTBase):
self.nestedName.describe_signature(signode, mode, env, symbol=symbol)
class ASTFunctinoParameter(ASTBase):
class ASTFunctionParameter(ASTBase):
def __init__(self, arg, ellipsis=False):
# type: (Any, bool) -> None
self.arg = arg
@@ -2186,7 +2208,7 @@ class ASTDeclaratorParen(ASTBase):
self.next.describe_signature(signode, "noneIsName", env, symbol)
class ASTDecleratorNameParamQual(ASTBase):
class ASTDeclaratorNameParamQual(ASTBase):
def __init__(self, declId, arrayOps, paramQual):
# type: (Any, List[Any], Any) -> None
self.declId = declId
@@ -2719,8 +2741,8 @@ class ASTDeclaration(ASTBase):
res.append(text_type(self.declaration))
return u''.join(res)
def describe_signature(self, signode, mode, env):
# type: (addnodes.desc_signature, unicode, BuildEnvironment) -> None
def describe_signature(self, signode, mode, env, options):
# type: (addnodes.desc_signature, unicode, BuildEnvironment, Dict) -> None
_verify_description_mode(mode)
# The caller of the domain added a desc_signature node.
# Always enable multiline:
@@ -2733,7 +2755,8 @@ class ASTDeclaration(ASTBase):
assert self.symbol
if self.templatePrefix:
self.templatePrefix.describe_signature(signode, mode, env,
symbol=self.symbol)
symbol=self.symbol,
lineSpec=options.get('tparam-line-spec'))
signode += mainDeclNode
if self.visibility and self.visibility != "public":
mainDeclNode += addnodes.desc_annotation(self.visibility + " ",
@@ -3060,7 +3083,7 @@ class Symbol(object):
msg = "Duplicate declaration, also defined in '%s'.\n"
msg += "Declaration is '%s'."
msg = msg % (ourChild.docname, name)
env.warn(otherChild.docname, msg)
logger.warning(msg, location=otherChild.docname)
else:
# Both have declarations, and in the same docname.
# This can apparently happen, it should be safe to
@@ -3195,14 +3218,14 @@ class Symbol(object):
def to_string(self, indent):
# type: (int) -> unicode
res = ['\t'*indent] # type: List[unicode]
res = ['\t' * indent] # type: List[unicode]
if not self.parent:
res.append('::')
else:
if self.templateParams:
res.append(text_type(self.templateParams))
res.append('\n')
res.append('\t'*indent)
res.append('\t' * indent)
if self.identifier:
res.append(text_type(self.identifier))
else:
@@ -3269,7 +3292,7 @@ class DefinitionParser(object):
return DefinitionError(''.join(result))
def status(self, msg):
# type: (unicode) -> unicode
# type: (unicode) -> None
# for debugging
indicator = '-' * self.pos + '^'
print("%s\n%s\n%s" % (msg, self.definition, indicator))
@@ -3315,7 +3338,7 @@ class DefinitionParser(object):
return self.match(re.compile(r'\b%s\b' % re.escape(word)))
def skip_ws(self):
# type: (unicode) -> bool
# type: () -> bool
return self.match(_whitespace_re)
def skip_word_and_ws(self, word):
@@ -3350,6 +3373,8 @@ class DefinitionParser(object):
# type: () -> unicode
if self.last_match is not None:
return self.last_match.group()
else:
return None
def read_rest(self):
# type: () -> unicode
@@ -3644,7 +3669,7 @@ class DefinitionParser(object):
while 1:
self.skip_ws()
if self.skip_string('...'):
args.append(ASTFunctinoParameter(None, True))
args.append(ASTFunctionParameter(None, True))
self.skip_ws()
if not self.skip_string(')'):
self.fail('Expected ")" after "..." in '
@@ -3654,7 +3679,7 @@ class DefinitionParser(object):
# even in function pointers and similar.
arg = self._parse_type_with_init(outer=None, named='single')
# TODO: parse default parameters # TODO: didn't we just do that?
args.append(ASTFunctinoParameter(arg))
args.append(ASTFunctionParameter(arg))
self.skip_ws()
if self.skip_string(','):
@@ -3824,7 +3849,7 @@ class DefinitionParser(object):
return ASTDeclSpecs(outer, leftSpecs, rightSpecs, trailing)
def _parse_declarator_name_param_qual(self, named, paramMode, typed):
# type: (Union[bool, unicode], unicode, bool) -> ASTDecleratorNameParamQual
# type: (Union[bool, unicode], unicode, bool) -> ASTDeclaratorNameParamQual
# now we should parse the name, and then suffixes
if named == 'maybe':
pos = self.pos
@@ -3860,10 +3885,10 @@ class DefinitionParser(object):
else:
break
paramQual = self._parse_parameters_and_qualifiers(paramMode)
return ASTDecleratorNameParamQual(declId=declId, arrayOps=arrayOps,
return ASTDeclaratorNameParamQual(declId=declId, arrayOps=arrayOps,
paramQual=paramQual)
def _parse_declerator(self, named, paramMode, typed=True):
def _parse_declarator(self, named, paramMode, typed=True):
# type: (Union[bool, unicode], unicode, bool) -> Any
# 'typed' here means 'parse return type stuff'
if paramMode not in ('type', 'function', 'operatorCast'):
@@ -3885,14 +3910,14 @@ class DefinitionParser(object):
if const:
continue
break
next = self._parse_declerator(named, paramMode, typed)
next = self._parse_declarator(named, paramMode, typed)
return ASTDeclaratorPtr(next=next, volatile=volatile, const=const)
# TODO: shouldn't we parse an R-value ref here first?
if typed and self.skip_string("&"):
next = self._parse_declerator(named, paramMode, typed)
next = self._parse_declarator(named, paramMode, typed)
return ASTDeclaratorRef(next=next)
if typed and self.skip_string("..."):
next = self._parse_declerator(named, paramMode, False)
next = self._parse_declarator(named, paramMode, False)
return ASTDeclaratorParamPack(next=next)
if typed: # pointer to member
pos = self.pos
@@ -3918,13 +3943,13 @@ class DefinitionParser(object):
if const:
continue
break
next = self._parse_declerator(named, paramMode, typed)
next = self._parse_declarator(named, paramMode, typed)
return ASTDeclaratorMemPtr(name, const, volatile, next=next)
if typed and self.current_char == '(': # note: peeking, not skipping
if paramMode == "operatorCast":
# TODO: we should be able to parse cast operators which return
# function pointers. For now, just hax it and ignore.
return ASTDecleratorNameParamQual(declId=None, arrayOps=[],
return ASTDeclaratorNameParamQual(declId=None, arrayOps=[],
paramQual=None)
# maybe this is the beginning of params and quals, try that first,
# otherwise assume it's noptr->declarator > ( ptr-declarator )
@@ -3943,10 +3968,10 @@ class DefinitionParser(object):
# TODO: hmm, if there is a name, it must be in inner, right?
# TODO: hmm, if there must be parameters, they must be
# inside, right?
inner = self._parse_declerator(named, paramMode, typed)
inner = self._parse_declarator(named, paramMode, typed)
if not self.skip_string(')'):
self.fail("Expected ')' in \"( ptr-declarator )\"")
next = self._parse_declerator(named=False,
next = self._parse_declarator(named=False,
paramMode="type",
typed=typed)
return ASTDeclaratorParen(inner=inner, next=next)
@@ -4006,7 +4031,7 @@ class DefinitionParser(object):
# first try without the type
try:
declSpecs = self._parse_decl_specs(outer=outer, typed=False)
decl = self._parse_declerator(named=True, paramMode=outer,
decl = self._parse_declarator(named=True, paramMode=outer,
typed=False)
self.assert_end()
except DefinitionError as exUntyped:
@@ -4020,7 +4045,7 @@ class DefinitionParser(object):
self.pos = startPos
try:
declSpecs = self._parse_decl_specs(outer=outer)
decl = self._parse_declerator(named=True, paramMode=outer)
decl = self._parse_declarator(named=True, paramMode=outer)
except DefinitionError as exTyped:
self.pos = startPos
if outer == 'type':
@@ -4051,7 +4076,7 @@ class DefinitionParser(object):
self.pos = startPos
typed = True
declSpecs = self._parse_decl_specs(outer=outer, typed=typed)
decl = self._parse_declerator(named=True, paramMode=outer,
decl = self._parse_declarator(named=True, paramMode=outer,
typed=typed)
else:
paramMode = 'type'
@@ -4063,7 +4088,7 @@ class DefinitionParser(object):
elif outer == 'templateParam':
named = 'single'
declSpecs = self._parse_decl_specs(outer=outer)
decl = self._parse_declerator(named=named, paramMode=paramMode)
decl = self._parse_declarator(named=named, paramMode=paramMode)
return ASTType(declSpecs, decl)
def _parse_type_with_init(self, named, outer):
@@ -4167,6 +4192,7 @@ class DefinitionParser(object):
if self.skip_word('template'):
# declare a template template parameter
nestedParams = self._parse_template_parameter_list()
nestedParams.isNested = True
else:
nestedParams = None
self.skip_ws()
@@ -4380,17 +4406,20 @@ class DefinitionParser(object):
templatePrefix = self._check_template_consistency(name, templatePrefix,
fullSpecShorthand=False)
res = ASTNamespace(name, templatePrefix)
res.objectType = 'namespace'
res.objectType = 'namespace' # type: ignore
return res
def parse_xref_object(self):
# type: () -> ASTNamespace
templatePrefix = self._parse_template_declaration_prefix(objectType="xref")
name = self._parse_nested_name()
# if there are '()' left, just skip them
self.skip_ws()
self.skip_string('()')
templatePrefix = self._check_template_consistency(name, templatePrefix,
fullSpecShorthand=True)
res = ASTNamespace(name, templatePrefix)
res.objectType = 'xref'
res.objectType = 'xref' # type: ignore
return res
@@ -4417,6 +4446,9 @@ class CPPObject(ObjectDescription):
names=('returns', 'return')),
]
option_spec = dict(ObjectDescription.option_spec)
option_spec['tparam-line-spec'] = directives.flag
def warn(self, msg):
# type: (unicode) -> None
self.state_machine.reporter.warning(msg, line=self.lineno)
@@ -4514,9 +4546,9 @@ class CPPObject(ObjectDescription):
# type: (Any) -> Any
raise NotImplementedError()
def describe_signature(self, signode, ast, parentScope):
# type: (addnodes.desc_signature, Any, Any) -> None
raise NotImplementedError()
def describe_signature(self, signode, ast, options):
# type: (addnodes.desc_signature, Any, Dict) -> None
ast.describe_signature(signode, 'lastIsName', self.env, options)
def handle_signature(self, sig, signode):
# type: (unicode, addnodes.desc_signature) -> Any
@@ -4549,7 +4581,8 @@ class CPPObject(ObjectDescription):
if ast.objectType == 'enumerator':
self._add_enumerator_to_parent(ast)
self.describe_signature(signode, ast)
self.options['tparam-line-spec'] = 'tparam-line-spec' in self.options
self.describe_signature(signode, ast, self.options)
return ast
def before_content(self):
@@ -4573,10 +4606,6 @@ class CPPTypeObject(CPPObject):
# type: (Any) -> Any
return parser.parse_declaration("type")
def describe_signature(self, signode, ast): # type: ignore
# type: (addnodes.desc_signature, Any) -> None
ast.describe_signature(signode, 'lastIsName', self.env)
class CPPConceptObject(CPPObject):
def get_index_text(self, name):
@@ -4587,10 +4616,6 @@ class CPPConceptObject(CPPObject):
# type: (Any) -> Any
return parser.parse_declaration("concept")
def describe_signature(self, signode, ast): # type: ignore
# type: (addnodes.desc_signature, Any) -> None
ast.describe_signature(signode, 'lastIsName', self.env)
class CPPMemberObject(CPPObject):
def get_index_text(self, name):
@@ -4601,10 +4626,6 @@ class CPPMemberObject(CPPObject):
# type: (Any) -> Any
return parser.parse_declaration("member")
def describe_signature(self, signode, ast): # type: ignore
# type: (addnodes.desc_signature, Any) -> None
ast.describe_signature(signode, 'lastIsName', self.env)
class CPPFunctionObject(CPPObject):
def get_index_text(self, name):
@@ -4615,10 +4636,6 @@ class CPPFunctionObject(CPPObject):
# type: (Any) -> Any
return parser.parse_declaration("function")
def describe_signature(self, signode, ast): # type: ignore
# type: (addnodes.desc_signature, Any) -> None
ast.describe_signature(signode, 'lastIsName', self.env)
class CPPClassObject(CPPObject):
def get_index_text(self, name):
@@ -4629,10 +4646,6 @@ class CPPClassObject(CPPObject):
# type: (Any) -> Any
return parser.parse_declaration("class")
def describe_signature(self, signode, ast): # type: ignore
# type: (addnodes.desc_signature, Any) -> None
ast.describe_signature(signode, 'lastIsName', self.env)
class CPPEnumObject(CPPObject):
def get_index_text(self, name):
@@ -4653,10 +4666,6 @@ class CPPEnumObject(CPPObject):
assert False
return ast
def describe_signature(self, signode, ast): # type: ignore
# type: (addnodes.desc_signature, Any) -> None
ast.describe_signature(signode, 'lastIsName', self.env)
class CPPEnumeratorObject(CPPObject):
def get_index_text(self, name):
@@ -4667,10 +4676,6 @@ class CPPEnumeratorObject(CPPObject):
# type: (Any) -> Any
return parser.parse_declaration("enumerator")
def describe_signature(self, signode, ast): # type: ignore
# type: (addnodes.desc_signature, Any) -> None
ast.describe_signature(signode, 'lastIsName', self.env)
class CPPNamespaceObject(Directive):
"""
@@ -4726,7 +4731,7 @@ class CPPNamespacePushObject(Directive):
# type: () -> List[nodes.Node]
env = self.state.document.settings.env
if self.arguments[0].strip() in ('NULL', '0', 'nullptr'):
return
return []
parser = DefinitionParser(self.arguments[0], self, env.config)
try:
ast = parser.parse_namespace_object()
@@ -4872,7 +4877,7 @@ class CPPDomain(Domain):
msg = "Duplicate declaration, also defined in '%s'.\n"
msg += "Name of declaration is '%s'."
msg = msg % (ourNames[name], name)
self.env.warn(docname, msg)
logger.warning(msg, docname)
else:
ourNames[name] = docname
@@ -4882,12 +4887,14 @@ class CPPDomain(Domain):
class Warner(object):
def warn(self, msg):
if emitWarnings:
env.warn_node(msg, node)
logger.warning(msg, location=node)
warner = Warner()
# add parens again for those that could be functions
if typ == 'any' or typ == 'func':
target += '()'
parser = DefinitionParser(target, warner, env.config)
try:
ast = parser.parse_xref_object()
parser.skip_ws()
parser.assert_end()
except DefinitionError as e:
warner.warn('Unparseable C++ cross-reference: %r\n%s'
@@ -4947,11 +4954,26 @@ class CPPDomain(Domain):
name = text_type(fullNestedName).lstrip(':')
docname = s.docname
assert docname
if typ == 'any' and declaration.objectType == 'function':
if env.config.add_function_parentheses:
if not node['refexplicit']:
title = contnode.pop(0).astext()
contnode += nodes.Text(title + '()')
# If it's operator(), we need to add '()' if explicit function parens
# are requested. Then the Sphinx machinery will add another pair.
# Also, if it's an 'any' ref that resolves to a function, we need to add
# parens as well.
addParen = 0
if not node.get('refexplicit', False) and declaration.objectType == 'function':
# this is just the normal haxing for 'any' roles
if env.config.add_function_parentheses and typ == 'any':
addParen += 1
# and now this stuff for operator()
if (env.config.add_function_parentheses and typ == 'function' and
contnode[-1].astext().endswith('operator()')):
addParen += 1
if ((typ == 'any' or typ == 'function') and
contnode[-1].astext().endswith('operator') and
name.endswith('operator()')):
addParen += 1
if addParen > 0:
title = contnode.pop(0).astext()
contnode += nodes.Text(title + '()' * addParen)
return make_refnode(builder, fromdocname, docname,
declaration.get_newest_id(), contnode, name
), declaration.objectType
@@ -4987,8 +5009,14 @@ class CPPDomain(Domain):
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
app.add_domain(CPPDomain)
app.add_config_value("cpp_index_common_prefix", [], 'env')
app.add_config_value("cpp_id_attributes", [], 'env')
app.add_config_value("cpp_paren_attributes", [], 'env')
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}


@@ -20,7 +20,7 @@ from sphinx.util.docfields import Field, GroupedField, TypedField
if False:
# For type annotation
from typing import Iterator, Tuple # NOQA
from typing import Any, Dict, Iterator, List, Tuple # NOQA
from docutils import nodes # NOQA
from sphinx.application import Sphinx # NOQA
from sphinx.builders import Builder # NOQA
@@ -160,7 +160,7 @@ class JSXRefRole(XRefRole):
title = title[1:]
dot = title.rfind('.')
if dot != -1:
title = title[dot+1:]
title = title[dot + 1:]
if target[0:1] == '.':
target = target[1:]
refnode['refspecific'] = True
@@ -255,5 +255,11 @@ class JavaScriptDomain(Domain):
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
app.add_domain(JavaScriptDomain)
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}


@@ -14,24 +14,26 @@ import re
from six import iteritems
from docutils import nodes
from docutils.parsers.rst import directives
from docutils.parsers.rst import Directive, directives
from sphinx import addnodes
from sphinx.roles import XRefRole
from sphinx.locale import l_, _
from sphinx.domains import Domain, ObjType, Index
from sphinx.directives import ObjectDescription
from sphinx.util import logging
from sphinx.util.nodes import make_refnode
from sphinx.util.compat import Directive
from sphinx.util.docfields import Field, GroupedField, TypedField
if False:
# For type annotation
from typing import Any, Iterator, Tuple, Union # NOQA
from typing import Any, Dict, Iterable, Iterator, List, Tuple, Union # NOQA
from sphinx.application import Sphinx # NOQA
from sphinx.builders import Builder # NOQA
from sphinx.environment import BuildEnvironment # NOQA
logger = logging.getLogger(__name__)
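The hunks in this file replace `env.warn_node(msg % args, node)` with `logger.warning(msg, *args, location=node)`, deferring %-style formatting to the logging framework. Sphinx's logger additionally accepts a `location=` keyword that the stdlib logger does not, so this sketch demonstrates only the deferred-formatting half, using a stdlib `LogRecord` directly (the message and arguments are illustrative):

```python
import logging

# Build a record the way logger.warning('... %r: %s', a, b) would; the
# %-formatting happens lazily, only when the record is rendered.
record = logging.LogRecord(
    name='sphinx.domains.python', level=logging.WARNING,
    pathname='demo.py', lineno=0,
    msg='more than one target found for cross-reference %r: %s',
    args=('Index', 'doc.Index, code.Index'), exc_info=None)

message = record.getMessage()
```

Deferring formatting means the string interpolation is skipped entirely when the warning is filtered out, and it is the reason the migrated calls pass `target, ...` as separate arguments rather than pre-formatting the message.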
# REs for Python signatures
py_sig_re = re.compile(
@@ -114,7 +116,7 @@ class PyXrefMixin(object):
def make_xrefs(self, rolename, domain, target, innernode=nodes.emphasis,
contnode=None):
# type: (unicode, unicode, unicode, nodes.Node, nodes.Node) -> List[nodes.Node]
delims = '(\s*[\[\]\(\),](?:\s*or\s)?\s*|\s+or\s+)'
delims = r'(\s*[\[\]\(\),](?:\s*or\s)?\s*|\s+or\s+)'
delims_re = re.compile(delims)
sub_targets = re.split(delims, target)
@@ -189,7 +191,7 @@ class PyObject(ObjectDescription):
"""
return False
def handle_signature(self, sig, signode): # type: ignore
def handle_signature(self, sig, signode):
# type: (unicode, addnodes.desc_signature) -> Tuple[unicode, unicode]
"""Transform a Python signature into RST nodes.
@@ -561,7 +563,7 @@ class PyXRefRole(XRefRole):
title = title[1:]
dot = title.rfind('.')
if dot != -1:
title = title[dot+1:]
title = title[dot + 1:]
# if the first character is a dot, search more specific namespaces first
# else search builtins first
if target[0:1] == '.':
@@ -580,7 +582,7 @@ class PythonModuleIndex(Index):
shortname = l_('modules')
def generate(self, docnames=None):
# type: (List[unicode]) -> Tuple[List[Tuple[unicode, List[List[Union[unicode, int]]]]], bool] # NOQA
# type: (Iterable[unicode]) -> Tuple[List[Tuple[unicode, List[List[Union[unicode, int]]]]], bool] # NOQA
content = {} # type: Dict[unicode, List]
# list of prefixes to ignore
ignores = None # type: List[unicode]
@@ -785,10 +787,9 @@ class PythonDomain(Domain):
if not matches:
return None
elif len(matches) > 1:
env.warn_node(
'more than one target found for cross-reference '
'%r: %s' % (target, ', '.join(match[0] for match in matches)),
node)
logger.warning('more than one target found for cross-reference %r: %s',
target, ', '.join(match[0] for match in matches),
location=node)
name, obj = matches[0]
if obj[1] == 'module':
@@ -842,5 +843,11 @@ class PythonDomain(Domain):
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
app.add_domain(PythonDomain)
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}


@@ -22,7 +22,7 @@ from sphinx.util.nodes import make_refnode
if False:
# For type annotation
from typing import Iterator, Tuple # NOQA
from typing import Any, Dict, Iterator, List, Tuple # NOQA
from docutils import nodes # NOQA
from sphinx.application import Sphinx # NOQA
from sphinx.builders import Builder # NOQA
@@ -177,5 +177,11 @@ class ReSTDomain(Domain):
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
app.add_domain(ReSTDomain)
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}


@@ -12,10 +12,10 @@
import re
import unicodedata
from six import PY3, iteritems
from six import iteritems
from docutils import nodes
from docutils.parsers.rst import directives
from docutils.parsers.rst import Directive, directives
from docutils.statemachine import ViewList
from sphinx import addnodes
@@ -23,37 +23,31 @@ from sphinx.roles import XRefRole
from sphinx.locale import l_, _
from sphinx.domains import Domain, ObjType
from sphinx.directives import ObjectDescription
from sphinx.util import ws_re
from sphinx.util import ws_re, logging, docname_join
from sphinx.util.nodes import clean_astext, make_refnode
from sphinx.util.compat import Directive
if False:
# For type annotation
from typing import Any, Callable, Dict, Iterator, List, Tuple, Type, Union # NOQA
from docutils.parsers.rst.states import Inliner # NOQA
from sphinx.application import Sphinx # NOQA
from sphinx.builders import Builder # NOQA
from sphinx.environment import BuildEnvironment # NOQA
from sphinx.util.typing import Role # NOQA
from sphinx.util.typing import RoleFunction # NOQA
if PY3:
unicode = str
RoleFunction = Callable[[unicode, unicode, unicode, int, Inliner, Dict, List[unicode]],
Tuple[List[nodes.Node], List[nodes.Node]]]
logger = logging.getLogger(__name__)
# RE for option descriptions
option_desc_re = re.compile(r'((?:/|--|-|\+)?[-\.?@#_a-zA-Z0-9]+)(=?\s*.*)')
option_desc_re = re.compile(r'((?:/|--|-|\+)?[^\s=]+)(=?\s*.*)')
# RE for grammar tokens
token_re = re.compile('`(\w+)`', re.U)
token_re = re.compile(r'`(\w+)`', re.U)
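The `option_desc_re` hunk above relaxes the option-name pattern from an explicit character class to "any run of non-space, non-`=` characters". A small comparison of the two patterns (the `/std:c++17` input is my illustration, an MSVC-style flag whose `:` and `+` fall outside the old class):

```python
import re

# Old pattern: option names restricted to an explicit character class.
old_re = re.compile(r'((?:/|--|-|\+)?[-\.?@#_a-zA-Z0-9]+)(=?\s*.*)')
# New pattern from the hunk above: any non-space, non-'=' characters.
new_re = re.compile(r'((?:/|--|-|\+)?[^\s=]+)(=?\s*.*)')

# The old pattern truncates the name at the first ':'; the new one keeps it.
old_name = old_re.match('/std:c++17').group(1)
new_name = new_re.match('/std:c++17').group(1)
```

Both patterns still split the trailing `=value` part into the second group, since `=` is excluded from the new name class.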
class GenericObject(ObjectDescription):
"""
A generic x-ref directive registered with Sphinx.add_object_type().
"""
indextemplate = ''
indextemplate = '' # type: unicode
parse_node = None # type: Callable[[GenericObject, BuildEnvironment, unicode, addnodes.desc_signature], unicode] # NOQA
def handle_signature(self, sig, signode):
@@ -76,7 +70,7 @@ class GenericObject(ObjectDescription):
colon = self.indextemplate.find(':')
if colon != -1:
indextype = self.indextemplate[:colon].strip()
indexentry = self.indextemplate[colon+1:].strip() % (name,)
indexentry = self.indextemplate[colon + 1:].strip() % (name,)
else:
indextype = 'single'
indexentry = self.indextemplate % (name,)
@@ -138,7 +132,7 @@ class Target(Directive):
colon = indexentry.find(':')
if colon != -1:
indextype = indexentry[:colon].strip()
indexentry = indexentry[colon+1:].strip()
indexentry = indexentry[colon + 1:].strip()
inode = addnodes.index(entries=[(indextype, indexentry,
targetname, '', None)])
ret.insert(0, inode)
@@ -164,12 +158,10 @@ class Cmdoption(ObjectDescription):
potential_option = potential_option.strip()
m = option_desc_re.match(potential_option) # type: ignore
if not m:
self.env.warn(
self.env.docname,
'Malformed option description %r, should '
'look like "opt", "-opt args", "--opt args", '
'"/opt args" or "+opt args"' % potential_option,
self.lineno)
logger.warning('Malformed option description %r, should '
'look like "opt", "-opt args", "--opt args", '
'"/opt args" or "+opt args"', potential_option,
location=(self.env.docname, self.lineno))
continue
optname, args = m.groups()
if count:
@@ -466,6 +458,7 @@ class StandardDomain(Domain):
searchprio=-1),
'envvar': ObjType(l_('environment variable'), 'envvar'),
'cmdoption': ObjType(l_('program option'), 'option'),
'doc': ObjType(l_('document'), 'doc', searchprio=-1)
} # type: Dict[unicode, ObjType]
directives = {
@@ -492,6 +485,8 @@ class StandardDomain(Domain):
warn_dangling=True),
# links to labels, without a different title
'keyword': XRefRole(warn_dangling=True),
# links to documents
'doc': XRefRole(warn_dangling=True, innernodeclass=nodes.inline),
} # type: Dict[unicode, Union[RoleFunction, XRefRole]]
initial_data = {
@@ -516,6 +511,7 @@ class StandardDomain(Domain):
'the label must precede a section header)',
'numref': 'undefined label: %(target)s',
'keyword': 'unknown keyword: %(target)s',
'doc': 'unknown document: %(target)s',
'option': 'unknown option: %(target)s',
'citation': 'citation not found: %(target)s',
}
@@ -573,9 +569,9 @@ class StandardDomain(Domain):
for node in document.traverse(nodes.citation):
label = node[0].astext()
if label in self.data['citations']:
path = env.doc2path(self.data['citations'][0])
env.warn_node('duplicate citation %s, other instance in %s' %
(label, path), node)
path = env.doc2path(self.data['citations'][label][0])
logger.warning('duplicate citation %s, other instance in %s', label, path,
location=node)
self.data['citations'][label] = (docname, node['ids'][0])
def note_labels(self, env, docname, document):
@@ -597,10 +593,11 @@ class StandardDomain(Domain):
# link and object descriptions
continue
if name in labels:
env.warn_node('duplicate label %s, ' % name + 'other instance '
'in ' + env.doc2path(labels[name][0]), node)
logger.warning('duplicate label %s, ' % name + 'other instance '
'in ' + env.doc2path(labels[name][0]),
location=node)
anonlabels[name] = docname, labelid
if node.tagname == 'section':
if node.tagname in ('section', 'rubric'):
sectname = clean_astext(node[0]) # node[0] == title node
elif self.is_enumerable_node(node):
sectname = self.get_numfig_title(node)
@@ -650,6 +647,8 @@ class StandardDomain(Domain):
resolver = self._resolve_numref_xref
elif typ == 'keyword':
resolver = self._resolve_keyword_xref
elif typ == 'doc':
resolver = self._resolve_doc_xref
elif typ == 'option':
resolver = self._resolve_option_xref
elif typ == 'citation':
@@ -689,7 +688,7 @@ class StandardDomain(Domain):
return None
if env.config.numfig is False:
env.warn_node('numfig is disabled. :numref: is ignored.', node)
logger.warning('numfig is disabled. :numref: is ignored.', location=node)
return contnode
target_node = env.get_doctree(docname).ids.get(labelid)
@@ -702,7 +701,8 @@ class StandardDomain(Domain):
if fignumber is None:
return contnode
except ValueError:
env.warn_node("no number is assigned for %s: %s" % (figtype, labelid), node)
logger.warning("no number is assigned for %s: %s", figtype, labelid,
location=node)
return contnode
try:
@@ -711,13 +711,13 @@ class StandardDomain(Domain):
else:
title = env.config.numfig_format.get(figtype, '')
if figname is None and '%{name}' in title:
env.warn_node('the link has no caption: %s' % title, node)
if figname is None and '{name}' in title:
logger.warning('the link has no caption: %s', title, location=node)
return contnode
else:
fignum = '.'.join(map(str, fignumber))
if '{name}' in title or 'number' in title:
# new style format (cf. "Fig.%{number}")
# new style format (cf. "Fig.{number}")
if figname:
newtitle = title.format(name=figname, number=fignum)
else:
@@ -726,10 +726,10 @@ class StandardDomain(Domain):
# old style format (cf. "Fig.%s")
newtitle = title % fignum
except KeyError as exc:
env.warn_node('invalid numfig_format: %s (%r)' % (title, exc), node)
logger.warning('invalid numfig_format: %s (%r)', title, exc, location=node)
return contnode
except TypeError:
env.warn_node('invalid numfig_format: %s' % title, node)
logger.warning('invalid numfig_format: %s', title, location=node)
return contnode
return self.build_reference_node(fromdocname, builder,
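The hunk above distinguishes new-style `numfig_format` templates (`str.format` fields) from old-style ones (a single `%s`) by checking for `'{name}'` or `'number'` in the template. A standalone sketch of that dispatch, with illustrative template and figure values:

```python
def render(title, figname, fignum):
    # Dispatch copied from the hunk above: presence of '{name}' or
    # 'number' selects the new-style str.format path.
    if '{name}' in title or 'number' in title:
        # new style format (cf. "Fig.{number}")
        if figname:
            return title.format(name=figname, number=fignum)
        return title.format(number=fignum)
    # old style format (cf. "Fig.%s")
    return title % fignum

new_style = render('Fig.{number}: {name}', 'Overview', '1.2')
old_style = render('Fig.%s', None, '1.2')
```

This is also why the warning condition changes from `'%{name}'` to `'{name}'`: the new-style placeholder never carries a `%` sign.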
@@ -746,6 +746,22 @@ class StandardDomain(Domain):
return make_refnode(builder, fromdocname, docname,
labelid, contnode)
def _resolve_doc_xref(self, env, fromdocname, builder, typ, target, node, contnode):
# type: (BuildEnvironment, unicode, Builder, unicode, unicode, nodes.Node, nodes.Node) -> nodes.Node # NOQA
# directly reference to document by source name; can be absolute or relative
refdoc = node.get('refdoc', fromdocname)
docname = docname_join(refdoc, node['reftarget'])
if docname not in env.all_docs:
return None
else:
if node['refexplicit']:
# reference with explicit title
caption = node.astext()
else:
caption = clean_astext(env.titles[docname])
innernode = nodes.inline(caption, caption, classes=['doc'])
return make_refnode(builder, fromdocname, docname, None, innernode)
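The new `_resolve_doc_xref` above resolves the `:doc:` target with `docname_join`, which accepts both relative and absolute (root-anchored) document names. A reimplementation sketch of how that join behaves, mirroring `sphinx.util.docname_join` (an assumption about its internals, not the imported function itself):

```python
import posixpath

def docname_join(basedocname, docname):
    # Resolve `docname` relative to the directory containing
    # `basedocname`; a leading '/' anchors the target at the
    # documentation root instead.
    return posixpath.normpath(
        posixpath.join('/' + basedocname, '..', docname))[1:]

rel = docname_join('usage/installation', 'quickstart')  # sibling document
root = docname_join('usage/installation', '/index')     # root-anchored
```

If the joined name is not in `env.all_docs`, the resolver returns `None` and the `'doc'` entry added to `dangling_warnings` above ("unknown document: %(target)s") produces the warning.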
def _resolve_option_xref(self, env, fromdocname, builder, typ, target, node, contnode):
# type: (BuildEnvironment, unicode, Builder, unicode, unicode, nodes.Node, nodes.Node) -> nodes.Node # NOQA
progname = node.get('std:program')
@@ -753,8 +769,8 @@ class StandardDomain(Domain):
docname, labelid = self.data['progoptions'].get((progname, target), ('', ''))
if not docname:
commands = []
while ws_re.search(target): # type: ignore
subcommand, target = ws_re.split(target, 1) # type: ignore
while ws_re.search(target):
subcommand, target = ws_re.split(target, 1)
commands.append(subcommand)
progname = "-".join(commands)
@@ -833,7 +849,7 @@ class StandardDomain(Domain):
for doc in self.env.all_docs:
yield (doc, clean_astext(self.env.titles[doc]), 'doc', doc, '', -1)
for (prog, option), info in iteritems(self.data['progoptions']):
yield (option, option, 'option', info[0], info[1], 1)
yield (option, option, 'cmdoption', info[0], info[1], 1)
for (type, name), info in iteritems(self.data['objects']):
yield (name, name, type, info[0], info[1],
self.object_types[type].attrs['searchprio'])
@@ -872,6 +888,7 @@ class StandardDomain(Domain):
# type: (nodes.Node) -> unicode
"""Get figure type of nodes."""
def has_child(node, cls):
# type: (nodes.Node, Type) -> bool
return any(isinstance(child, cls) for child in node)
if isinstance(node, nodes.section):
@@ -910,5 +927,11 @@ class StandardDomain(Domain):
def setup(app):
# type: (Sphinx) -> None
# type: (Sphinx) -> Dict[unicode, Any]
app.add_domain(StandardDomain)
return {
'version': 'builtin',
'parallel_read_safe': True,
'parallel_write_safe': True,
}


@@ -16,48 +16,48 @@ import time
import types
import codecs
import fnmatch
import warnings
from os import path
from glob import glob
from collections import defaultdict
from six import iteritems, itervalues, class_types, next
from six import itervalues, class_types, next
from six.moves import cPickle as pickle
from docutils import nodes
from docutils.io import NullOutput
from docutils.core import Publisher
from docutils.utils import Reporter, relative_path, get_source_line
from docutils.utils import Reporter, get_source_line
from docutils.parsers.rst import roles
from docutils.parsers.rst.languages import en as english
from docutils.frontend import OptionParser
from sphinx import addnodes
from sphinx.io import SphinxStandaloneReader, SphinxDummyWriter, SphinxFileInput
from sphinx.util import get_matching_docs, docname_join, FilenameUniqDict
from sphinx.util.nodes import clean_astext, WarningStream, is_translatable, \
process_only_nodes
from sphinx.util.osutil import SEP, getcwd, fs_encoding, ensuredir
from sphinx.util.images import guess_mimetype
from sphinx.util.i18n import find_catalog_files, get_image_filename_for_language, \
search_image_for_language
from sphinx.util.console import bold, purple # type: ignore
from sphinx.util import logging
from sphinx.util import get_matching_docs, FilenameUniqDict, status_iterator
from sphinx.util.nodes import WarningStream, is_translatable, process_only_nodes
from sphinx.util.osutil import SEP, ensuredir
from sphinx.util.i18n import find_catalog_files
from sphinx.util.console import bold # type: ignore
from sphinx.util.docutils import sphinx_domains
from sphinx.util.matching import compile_matchers
from sphinx.util.parallel import ParallelTasks, parallel_available, make_chunks
from sphinx.util.websupport import is_commentable
from sphinx.errors import SphinxError, ExtensionError
from sphinx.versioning import add_uids, merge_doctrees
from sphinx.transforms import SphinxContentsFilter
from sphinx.environment.managers.indexentries import IndexEntries
from sphinx.environment.managers.toctree import Toctree
from sphinx.deprecation import RemovedInSphinx20Warning
from sphinx.environment.adapters.indexentries import IndexEntries
from sphinx.environment.adapters.toctree import TocTree
if False:
# For type annotation
from typing import Any, Callable, Iterator, Pattern, Tuple, Type, Union # NOQA
from typing import Any, Callable, Dict, Iterator, List, Pattern, Set, Tuple, Type, Union # NOQA
from sphinx.application import Sphinx # NOQA
from sphinx.builders import Builder # NOQA
from sphinx.config import Config # NOQA
from sphinx.domains import Domain # NOQA
from sphinx.environment.managers import EnvironmentManager # NOQA
logger = logging.getLogger(__name__)
default_settings = {
'embed_stylesheet': False,
@@ -75,7 +75,7 @@ default_settings = {
# or changed to properly invalidate pickle files.
#
# NOTE: increase base version by 2 to have distinct numbers for Py2 and 3
ENV_VERSION = 50 + (sys.version_info[0] - 2)
ENV_VERSION = 51 + (sys.version_info[0] - 2)
dummy_reporter = Reporter('', 4, 4)
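The `ENV_VERSION` bump above follows the scheme described in its comment: the base (now 51) is raised by 2 per change so the Python 2 and Python 3 pickle versions never collide. Spelled out:

```python
import sys

# Base bumped from 50 to 51 by this commit.  Adding the major-version
# offset yields 51 on Python 2 and 52 on Python 3, so environments
# pickled by the two interpreters are never mistaken for one another.
ENV_VERSION = 51 + (sys.version_info[0] - 2)
```

A bump of only 1 would make a new Python 2 environment share a number with an old Python 3 one, which is why the comment insists on increments of 2.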
@@ -118,13 +118,10 @@ class BuildEnvironment(object):
def topickle(self, filename):
# type: (unicode) -> None
# remove unpicklable attributes
warnfunc = self._warnfunc
self.set_warnfunc(None)
values = self.config.values
del self.config.values
domains = self.domains
del self.domains
managers = self.detach_managers()
# remove potentially pickling-problematic values from config
for key, val in list(vars(self.config).items()):
if key.startswith('_') or \
@@ -135,10 +132,8 @@ class BuildEnvironment(object):
with open(filename, 'wb') as picklefile:
pickle.dump(self, picklefile, pickle.HIGHEST_PROTOCOL)
# reset attributes
self.attach_managers(managers)
self.domains = domains
self.config.values = values
self.set_warnfunc(warnfunc)
# --------- ENVIRONMENT INITIALIZATION -------------------------------------
@@ -176,7 +171,7 @@ class BuildEnvironment(object):
self.all_docs = {} # type: Dict[unicode, float]
# docname -> mtime at the time of reading
# contains all read docnames
self.dependencies = {} # type: Dict[unicode, Set[unicode]]
self.dependencies = defaultdict(set) # type: Dict[unicode, Set[unicode]]
# docname -> set of dependent file
# names, relative to documentation root
self.included = set() # type: Set[unicode]
@@ -186,8 +181,8 @@ class BuildEnvironment(object):
# next build
# File metadata
self.metadata = {} # type: Dict[unicode, Dict[unicode, Any]]
# docname -> dict of metadata items
self.metadata = defaultdict(dict) # type: Dict[unicode, Dict[unicode, Any]]
# docname -> dict of metadata items
# TOC inventory
self.titles = {} # type: Dict[unicode, nodes.Node]
@@ -241,35 +236,10 @@ class BuildEnvironment(object):
# attributes of "any" cross references
self.ref_context = {} # type: Dict[unicode, Any]
self.managers = {} # type: Dict[unicode, EnvironmentManager]
self.init_managers()
def init_managers(self):
# type: () -> None
managers = {}
manager_class = None # type: Type[EnvironmentManager]
for manager_class in [IndexEntries, Toctree]: # type: ignore
managers[manager_class.name] = manager_class(self)
self.attach_managers(managers)
def attach_managers(self, managers):
# type: (Dict[unicode, EnvironmentManager]) -> None
for name, manager in iteritems(managers):
self.managers[name] = manager
manager.attach(self)
def detach_managers(self):
# type: () -> Dict[unicode, EnvironmentManager]
managers = self.managers
self.managers = {}
for _, manager in iteritems(managers):
manager.detach(self)
return managers
def set_warnfunc(self, func):
# type: (Callable) -> None
self._warnfunc = func
self.settings['warning_stream'] = WarningStream(func)
warnings.warn('env.set_warnfunc() is now deprecated. Use sphinx.util.logging instead.',
RemovedInSphinx20Warning)
def set_versioning_method(self, method, compare):
# type: (unicode, bool) -> None
@@ -312,20 +282,11 @@ class BuildEnvironment(object):
if docname in self.all_docs:
self.all_docs.pop(docname, None)
self.reread_always.discard(docname)
self.metadata.pop(docname, None)
self.dependencies.pop(docname, None)
self.titles.pop(docname, None)
self.longtitles.pop(docname, None)
self.images.purge_doc(docname)
self.dlfiles.purge_doc(docname)
for version, changes in self.versionchanges.items():
new = [change for change in changes if change[1] != docname]
changes[:] = new
for manager in itervalues(self.managers):
manager.clear_doc(docname)
for domain in self.domains.values():
domain.clear_doc(docname)
@@ -341,21 +302,11 @@ class BuildEnvironment(object):
self.all_docs[docname] = other.all_docs[docname]
if docname in other.reread_always:
self.reread_always.add(docname)
self.metadata[docname] = other.metadata[docname]
if docname in other.dependencies:
self.dependencies[docname] = other.dependencies[docname]
self.titles[docname] = other.titles[docname]
self.longtitles[docname] = other.longtitles[docname]
self.images.merge_other(docnames, other.images)
self.dlfiles.merge_other(docnames, other.dlfiles)
for version, changes in other.versionchanges.items():
self.versionchanges.setdefault(version, []).extend(
change for change in changes if change[1] in docnames)
for manager in itervalues(self.managers):
manager.merge_other(docnames, other)
for domainname, domain in self.domains.items():
domain.merge_domaindata(docnames, other.domaindata[domainname])
app.emit('env-merge-info', self, docnames, other)
@@ -369,8 +320,8 @@ class BuildEnvironment(object):
if filename.startswith(self.srcdir):
filename = filename[len(self.srcdir) + 1:]
for suffix in self.config.source_suffix:
if fnmatch.fnmatch(filename, '*' + suffix): # type: ignore
return filename[:-len(suffix)] # type: ignore
if fnmatch.fnmatch(filename, '*' + suffix):
return filename[:-len(suffix)]
else:
# the file does not have docname
return None
@@ -387,14 +338,14 @@ class BuildEnvironment(object):
docname = docname.replace(SEP, path.sep)
if suffix is None:
candidate_suffix = None # type: unicode
for candidate_suffix in self.config.source_suffix: # type: ignore
for candidate_suffix in self.config.source_suffix:
if path.isfile(path.join(self.srcdir, docname) +
candidate_suffix):
suffix = candidate_suffix
break
else:
# document does not exist
suffix = self.config.source_suffix[0] # type: ignore
suffix = self.config.source_suffix[0]
if base is True:
return path.join(self.srcdir, docname) + suffix
elif base is None:
@@ -427,8 +378,8 @@ class BuildEnvironment(object):
enc_rel_fn = rel_fn.encode(sys.getfilesystemencoding())
return rel_fn, path.abspath(path.join(self.srcdir, enc_rel_fn))
def find_files(self, config):
# type: (Config) -> None
def find_files(self, config, buildername):
# type: (Config, unicode) -> None
"""Find all source files in the source dir and put them in
self.found_docs.
"""
@@ -444,18 +395,25 @@ class BuildEnvironment(object):
if os.access(self.doc2path(docname), os.R_OK):
self.found_docs.add(docname)
else:
self.warn(docname, "document not readable. Ignored.")
logger.warning("document not readable. Ignored.", location=docname)
# add catalog mo file dependency
for docname in self.found_docs:
catalog_files = find_catalog_files(
docname,
self.srcdir,
self.config.locale_dirs,
self.config.language,
self.config.gettext_compact)
for filename in catalog_files:
self.dependencies.setdefault(docname, set()).add(filename)
# The current implementation applies translated messages in the reading
# phase. Therefore, to pick up an updated message catalog, documents
# must be re-processed from the reading phase. Registering a dependency
# between the doc source and the mo file here causes the document to be
# read again whenever the mo file is updated. In the future, we would
# like to move the i18n process into the writing phase and remove these
# lines.
if buildername != 'gettext':
# add catalog mo file dependency
for docname in self.found_docs:
catalog_files = find_catalog_files(
docname,
self.srcdir,
self.config.locale_dirs,
self.config.language,
self.config.gettext_compact)
for filename in catalog_files:
self.dependencies[docname].add(filename)
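The dependency bookkeeping above can be sketched in isolation. A minimal, self-contained example (the `catalog_deps` helper is illustrative, not Sphinx's actual `find_catalog_files`): each document is mapped to the compiled catalog (.mo) files it reads translations from, so a catalog update forces a re-read.

```python
from collections import defaultdict
import os.path

def catalog_deps(docname, locale_dir, language):
    # Illustrative stand-in for find_catalog_files(): the catalog a
    # document would pull translated messages from.
    domain = os.path.splitext(docname)[0]
    return [os.path.join(locale_dir, language, 'LC_MESSAGES', domain + '.mo')]

dependencies = defaultdict(set)  # docname -> set of dependent file names
for docname in ('index.rst', 'usage.rst'):
    for mo_file in catalog_deps(docname, 'locales', 'de'):
        dependencies[docname].add(mo_file)

# get_outdated_files() would then treat a document as changed whenever
# one of its registered dependencies has a newer mtime.
```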
def get_outdated_files(self, config_changed):
# type: (bool) -> Tuple[Set[unicode], Set[unicode], Set[unicode]]
@@ -490,7 +448,7 @@ class BuildEnvironment(object):
changed.add(docname)
continue
# finally, check the mtime of dependencies
for dep in self.dependencies.get(docname, ()):
for dep in self.dependencies[docname]:
try:
# this will do the right thing when dep is absolute too
deppath = path.join(self.srcdir, dep)
@@ -522,10 +480,8 @@ class BuildEnvironment(object):
else:
# check if a config value was changed that affects how
# doctrees are read
for key, descr in iteritems(config.values):
if descr[1] != 'env':
continue
if self.config[key] != config[key]:
for confval in config.filter('env'):
if self.config[confval.name] != confval.value:
msg = '[config changed] '
config_changed = True
break
@@ -539,13 +495,13 @@ class BuildEnvironment(object):
# the source and doctree directories may have been relocated
self.srcdir = srcdir
self.doctreedir = doctreedir
self.find_files(config)
self.find_files(config, app.buildername)
self.config = config
# this cache also needs to be updated every time
self._nitpick_ignore = set(self.config.nitpick_ignore)
app.info(bold('updating environment: '), nonl=True)
logger.info(bold('updating environment: '), nonl=True)
added, changed, removed = self.get_outdated_files(config_changed)
@@ -561,7 +517,7 @@ class BuildEnvironment(object):
msg += '%s added, %s changed, %s removed' % (len(added), len(changed),
len(removed))
app.info(msg)
logger.info(msg)
self.app = app
@@ -584,14 +540,14 @@ class BuildEnvironment(object):
if ext_ok:
continue
if ext_ok is None:
app.warn('the %s extension does not declare if it '
'is safe for parallel reading, assuming it '
'isn\'t - please ask the extension author to '
'check and make it explicit' % extname)
app.warn('doing serial read')
logger.warning('the %s extension does not declare if it '
'is safe for parallel reading, assuming it '
'isn\'t - please ask the extension author to '
'check and make it explicit', extname)
logger.warning('doing serial read')
else:
app.warn('the %s extension is not safe for parallel '
'reading, doing serial read' % extname)
logger.warning('the %s extension is not safe for parallel '
'reading, doing serial read', extname)
par_ok = False
break
if par_ok:
@@ -613,8 +569,8 @@ class BuildEnvironment(object):
def _read_serial(self, docnames, app):
# type: (List[unicode], Sphinx) -> None
for docname in app.status_iterator(docnames, 'reading sources... ',
purple, len(docnames)):
for docname in status_iterator(docnames, 'reading sources... ', "purple",
len(docnames), self.app.verbosity):
# remove all inventory entries for that file
app.emit('env-purge-doc', self, docname)
self.clear_doc(docname)
@@ -630,12 +586,9 @@ class BuildEnvironment(object):
def read_process(docs):
# type: (List[unicode]) -> BuildEnvironment
self.app = app
self.warnings = [] # type: List[Tuple]
self.set_warnfunc(lambda *args, **kwargs: self.warnings.append((args, kwargs)))
for docname in docs:
self.read_doc(docname, app)
# allow pickling self to send it back
self.set_warnfunc(None)
del self.app
del self.domains
del self.config.values
@@ -644,28 +597,24 @@ class BuildEnvironment(object):
def merge(docs, otherenv):
# type: (List[unicode], BuildEnvironment) -> None
warnings.extend(otherenv.warnings)
self.merge_info_from(docs, otherenv, app)
tasks = ParallelTasks(nproc)
chunks = make_chunks(docnames, nproc)
warnings = [] # type: List[Tuple]
for chunk in app.status_iterator(
chunks, 'reading sources... ', purple, len(chunks)):
for chunk in status_iterator(chunks, 'reading sources... ', "purple",
len(chunks), self.app.verbosity):
tasks.add_task(read_process, chunk, merge)
# make sure all threads have finished
app.info(bold('waiting for workers...'))
logger.info(bold('waiting for workers...'))
tasks.join()
for warning, kwargs in warnings:
self._warnfunc(*warning, **kwargs)
def check_dependents(self, already):
# type: (Set[unicode]) -> Iterator[unicode]
to_rewrite = (self.toctree.assign_section_numbers() + # type: ignore
self.toctree.assign_figure_numbers()) # type: ignore
def check_dependents(self, app, already):
# type: (Sphinx, Set[unicode]) -> Iterator[unicode]
to_rewrite = [] # type: List[unicode]
for docnames in app.emit('env-get-updated', self):
to_rewrite.extend(docnames)
for docname in set(to_rewrite):
if docname not in already:
yield docname
@@ -680,11 +629,11 @@ class BuildEnvironment(object):
if lineend == -1:
lineend = len(error.object)
lineno = error.object.count(b'\n', 0, error.start) + 1
self.warn(self.docname, 'undecodable source characters, '
'replacing with "?": %r' %
(error.object[linestart+1:error.start] + b'>>>' +
error.object[error.start:error.end] + b'<<<' +
error.object[error.end:lineend]), lineno)
logger.warning('undecodable source characters, replacing with "?": %r',
(error.object[linestart + 1:error.start] + b'>>>' +
error.object[error.start:error.end] + b'<<<' +
error.object[error.end:lineend]),
location=(self.docname, lineno))
return (u'?', error.end)
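The handler above follows the standard `codecs` error-handler protocol. A standalone sketch, minus the logging (the handler name `'sketch'` is ours, not Sphinx's registered `'sphinx'` handler):

```python
import codecs

def warn_and_replace(error):
    # Like the method above, minus the warning: replace the undecodable
    # byte range with a single '?' and resume decoding after it.
    return (u'?', error.end)

codecs.register_error('sketch', warn_and_replace)

text = b'abc\xffdef'.decode('utf-8', errors='sketch')
# -> 'abc?def'
```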
def read_doc(self, docname, app=None):
@@ -714,8 +663,8 @@ class BuildEnvironment(object):
if role_fn:
roles._roles[''] = role_fn
else:
self.warn(docname, 'default role %s not found' %
self.config.default_role)
logger.warning('default role %s not found', self.config.default_role,
location=docname)
codecs.register_error('sphinx', self.warn_and_replace) # type: ignore
@@ -736,13 +685,6 @@ class BuildEnvironment(object):
doctree = pub.document
# post-processing
self.process_dependencies(docname, doctree)
self.process_images(docname, doctree)
self.process_downloads(docname, doctree)
self.process_metadata(docname, doctree)
self.create_title_from(docname, doctree)
for manager in itervalues(self.managers):
manager.process_doc(docname, doctree)
for domain in itervalues(self.domains):
domain.process_doc(self, docname, doctree)
@@ -804,18 +746,20 @@ class BuildEnvironment(object):
@property
def currmodule(self):
# type () -> None
# type: () -> None
"""Backwards compatible alias. Will be removed."""
self.warn(self.docname, 'env.currmodule is being referenced by an '
'extension; this API will be removed in the future')
logger.warning('env.currmodule is being referenced by an '
'extension; this API will be removed in the future',
location=self.docname)
return self.ref_context.get('py:module')
@property
def currclass(self):
# type: () -> None
"""Backwards compatible alias. Will be removed."""
self.warn(self.docname, 'env.currclass is being referenced by an '
'extension; this API will be removed in the future')
logger.warning('env.currclass is being referenced by an '
'extension; this API will be removed in the future',
location=self.docname)
return self.ref_context.get('py:class')
def new_serialno(self, category=''):
@@ -837,7 +781,7 @@ class BuildEnvironment(object):
*filename* should be absolute or relative to the source directory.
"""
self.dependencies.setdefault(self.docname, set()).add(filename)
self.dependencies[self.docname].add(filename)
def note_included(self, filename):
# type: (unicode) -> None
@@ -863,180 +807,31 @@ class BuildEnvironment(object):
self.ref_context.get('py:module'),
self.temp_data.get('object'), node.astext()))
# post-processing of read doctrees
def process_dependencies(self, docname, doctree):
# type: (unicode, nodes.Node) -> None
"""Process docutils-generated dependency info."""
cwd = getcwd()
frompath = path.join(path.normpath(self.srcdir), 'dummy')
deps = doctree.settings.record_dependencies
if not deps:
return
for dep in deps.list:
# the dependency path is relative to the working dir, so get
# one relative to the srcdir
if isinstance(dep, bytes):
dep = dep.decode(fs_encoding)
relpath = relative_path(frompath,
path.normpath(path.join(cwd, dep)))
self.dependencies.setdefault(docname, set()).add(relpath)
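The path arithmetic above re-bases docutils' working-directory-relative dependency paths onto the source directory. A rough equivalent using `posixpath` (the paths are made up for illustration):

```python
import posixpath

cwd = '/project'            # docutils records dependencies relative to this
srcdir = '/project/docs'    # Sphinx wants them relative to this
dep = 'docs/_static/logo.png'

abs_dep = posixpath.normpath(posixpath.join(cwd, dep))
relpath = posixpath.relpath(abs_dep, srcdir)
# -> '_static/logo.png'
```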
def process_downloads(self, docname, doctree):
# type: (unicode, nodes.Node) -> None
"""Process downloadable file paths. """
for node in doctree.traverse(addnodes.download_reference):
targetname = node['reftarget']
rel_filename, filename = self.relfn2path(targetname, docname)
self.dependencies.setdefault(docname, set()).add(rel_filename)
if not os.access(filename, os.R_OK):
self.warn_node('download file not readable: %s' % filename,
node)
continue
uniquename = self.dlfiles.add_file(docname, filename)
node['filename'] = uniquename
def process_images(self, docname, doctree):
# type: (unicode, nodes.Node) -> None
"""Process and rewrite image URIs."""
def collect_candidates(imgpath, candidates):
globbed = {} # type: Dict[unicode, List[unicode]]
for filename in glob(imgpath):
new_imgpath = relative_path(path.join(self.srcdir, 'dummy'),
filename)
try:
mimetype = guess_mimetype(filename)
if mimetype not in candidates:
globbed.setdefault(mimetype, []).append(new_imgpath)
except (OSError, IOError) as err:
self.warn_node('image file %s not readable: %s' %
(filename, err), node)
for key, files in iteritems(globbed):
candidates[key] = sorted(files, key=len)[0] # select by similarity
for node in doctree.traverse(nodes.image):
# Map the mimetype to the corresponding image. The writer may
# choose the best image from these candidates. The special key * is
# set if there is only single candidate to be used by a writer.
# The special key ? is set for nonlocal URIs.
node['candidates'] = candidates = {}
imguri = node['uri']
if imguri.startswith('data:'):
self.warn_node('image data URI found. some builders might not support it', node,
type='image', subtype='data_uri')
candidates['?'] = imguri
continue
elif imguri.find('://') != -1:
self.warn_node('nonlocal image URI found: %s' % imguri, node,
type='image', subtype='nonlocal_uri')
candidates['?'] = imguri
continue
rel_imgpath, full_imgpath = self.relfn2path(imguri, docname)
if self.config.language:
# substitute figures (ex. foo.png -> foo.en.png)
i18n_full_imgpath = search_image_for_language(full_imgpath, self)
if i18n_full_imgpath != full_imgpath:
full_imgpath = i18n_full_imgpath
rel_imgpath = relative_path(path.join(self.srcdir, 'dummy'),
i18n_full_imgpath)
# set imgpath as default URI
node['uri'] = rel_imgpath
if rel_imgpath.endswith(os.extsep + '*'):
if self.config.language:
# Search language-specific figures at first
i18n_imguri = get_image_filename_for_language(imguri, self)
_, full_i18n_imgpath = self.relfn2path(i18n_imguri, docname)
collect_candidates(full_i18n_imgpath, candidates)
collect_candidates(full_imgpath, candidates)
else:
candidates['*'] = rel_imgpath
# map image paths to unique image names (so that they can be put
# into a single directory)
for imgpath in itervalues(candidates):
self.dependencies.setdefault(docname, set()).add(imgpath)
if not os.access(path.join(self.srcdir, imgpath), os.R_OK):
self.warn_node('image file not readable: %s' % imgpath,
node)
continue
self.images.add_file(docname, imgpath)
def process_metadata(self, docname, doctree):
# type: (unicode, nodes.Node) -> None
"""Process the docinfo part of the doctree as metadata.
Keep processing minimal -- just return what docutils says.
"""
self.metadata[docname] = {}
md = self.metadata[docname]
try:
docinfo = doctree[0]
except IndexError:
# probably an empty document
return
if docinfo.__class__ is not nodes.docinfo:
# nothing to see here
return
for node in docinfo:
# nodes are multiply inherited...
if isinstance(node, nodes.authors):
md['authors'] = [author.astext() for author in node]
elif isinstance(node, nodes.TextElement): # e.g. author
md[node.__class__.__name__] = node.astext()
else:
name, body = node
md[name.astext()] = body.astext()
for name, value in md.items():
if name in ('tocdepth',):
try:
value = int(value)
except ValueError:
value = 0
md[name] = value
del doctree[0]
def create_title_from(self, docname, document):
# type: (unicode, nodes.Node) -> None
"""Add a title node to the document (just copy the first section title),
and store that title in the environment.
"""
titlenode = nodes.title()
longtitlenode = titlenode
# explicit title set with title directive; use this only for
# the <title> tag in HTML output
if 'title' in document:
longtitlenode = nodes.title()
longtitlenode += nodes.Text(document['title'])
# look for first section title and use that as the title
for node in document.traverse(nodes.section):
visitor = SphinxContentsFilter(document)
node[0].walkabout(visitor)
titlenode += visitor.get_entry_text()
break
else:
# document has no title
titlenode += nodes.Text('<no title>')
self.titles[docname] = titlenode
self.longtitles[docname] = longtitlenode
def note_toctree(self, docname, toctreenode):
# type: (unicode, addnodes.toctree) -> None
"""Note a TOC tree directive in a document and gather information about
file relations from it.
"""
self.toctree.note_toctree(docname, toctreenode) # type: ignore
warnings.warn('env.note_toctree() is deprecated. '
'Use sphinx.environment.adapters.toctree.TocTree instead.',
RemovedInSphinx20Warning)
TocTree(self).note(docname, toctreenode)
def get_toc_for(self, docname, builder):
# type: (unicode, Builder) -> addnodes.toctree
# type: (unicode, Builder) -> Dict[unicode, nodes.Node]
"""Return a TOC nodetree -- for use on the same page only!"""
return self.toctree.get_toc_for(docname, builder) # type: ignore
warnings.warn('env.get_toc_for() is deprecated. '
'Use sphinx.environment.adapters.toctree.TocTree instead.',
RemovedInSphinx20Warning)
return TocTree(self).get_toc_for(docname, builder)
def get_toctree_for(self, docname, builder, collapse, **kwds):
# type: (unicode, Builder, bool, Any) -> addnodes.toctree
"""Return the global TOC nodetree."""
return self.toctree.get_toctree_for(docname, builder, collapse, **kwds) # type: ignore
warnings.warn('env.get_toctree_for() is deprecated. '
'Use sphinx.environment.adapters.toctree.TocTree instead.',
RemovedInSphinx20Warning)
return TocTree(self).get_toctree_for(docname, builder, collapse, **kwds)
def get_domain(self, domainname):
# type: (unicode) -> Domain
@@ -1076,9 +871,9 @@ class BuildEnvironment(object):
# now, resolve all toctree nodes
for toctreenode in doctree.traverse(addnodes.toctree):
result = self.resolve_toctree(docname, builder, toctreenode,
prune=prune_toctrees,
includehidden=includehidden)
result = TocTree(self).resolve(docname, builder, toctreenode,
prune=prune_toctrees,
includehidden=includehidden)
if result is None:
toctreenode.replace_self([])
else:
@@ -1100,9 +895,9 @@ class BuildEnvironment(object):
If *collapse* is True, all branches not containing docname will
be collapsed.
"""
return self.toctree.resolve_toctree(docname, builder, toctree, prune, # type: ignore
maxdepth, titles_only, collapse,
includehidden)
return TocTree(self).resolve(docname, builder, toctree, prune,
maxdepth, titles_only, collapse,
includehidden)
def resolve_references(self, doctree, fromdocname, builder):
# type: (nodes.Node, unicode, Builder) -> None
@@ -1127,8 +922,6 @@ class BuildEnvironment(object):
# really hardwired reference types
elif typ == 'any':
newnode = self._resolve_any_reference(builder, refdoc, node, contnode)
elif typ == 'doc':
newnode = self._resolve_doc_reference(builder, refdoc, node, contnode)
# no new node found? try the missing-reference event
if newnode is None:
newnode = builder.app.emit_firstresult(
@@ -1142,7 +935,7 @@ class BuildEnvironment(object):
node.replace_self(newnode or contnode)
# remove only-nodes that do not belong to our builder
process_only_nodes(doctree, builder.tags, warn_node=self.warn_node)
process_only_nodes(doctree, builder.tags)
# allow custom references to be resolved
builder.app.emit('doctree-resolved', doctree, fromdocname)
@@ -1164,32 +957,13 @@ class BuildEnvironment(object):
return
if domain and typ in domain.dangling_warnings:
msg = domain.dangling_warnings[typ]
elif typ == 'doc':
msg = 'unknown document: %(target)s'
elif node.get('refdomain', 'std') not in ('', 'std'):
msg = '%s:%s reference target not found: %%(target)s' % \
(node['refdomain'], typ)
else:
msg = '%r reference target not found: %%(target)s' % typ
self.warn_node(msg % {'target': target}, node, type='ref', subtype=typ)
def _resolve_doc_reference(self, builder, refdoc, node, contnode):
# type: (Builder, unicode, nodes.Node, nodes.Node) -> nodes.Node
# directly reference to document by source name;
# can be absolute or relative
docname = docname_join(refdoc, node['reftarget'])
if docname in self.all_docs:
if node['refexplicit']:
# reference with explicit title
caption = node.astext()
else:
caption = clean_astext(self.titles[docname])
innernode = nodes.inline(caption, caption)
innernode['classes'].append('doc')
newnode = nodes.reference('', '', internal=True)
newnode['refuri'] = builder.get_relative_uri(refdoc, docname)
newnode.append(innernode)
return newnode
logger.warning(msg % {'target': target},
location=node, type='ref', subtype=typ)
def _resolve_any_reference(self, builder, refdoc, node, contnode):
# type: (Builder, unicode, nodes.Node, nodes.Node) -> nodes.Node
@@ -1197,7 +971,8 @@ class BuildEnvironment(object):
target = node['reftarget']
results = [] # type: List[Tuple[unicode, nodes.Node]]
# first, try resolving as :doc:
doc_ref = self._resolve_doc_reference(builder, refdoc, node, contnode)
doc_ref = self.domains['std'].resolve_xref(self, refdoc, builder, 'doc',
target, node, contnode)
if doc_ref:
results.append(('doc', doc_ref))
# next, do the standard domain (makes this a priority)
@@ -1222,9 +997,9 @@ class BuildEnvironment(object):
return None
if len(results) > 1:
nice_results = ' or '.join(':%s:' % r[0] for r in results)
self.warn_node('more than one target found for \'any\' cross-'
'reference %r: could be %s' % (target, nice_results),
node)
logger.warning('more than one target found for \'any\' cross-'
'reference %r: could be %s', target, nice_results,
location=node)
res_role, newnode = results[0]
# Override "any" class with the actual role type to get the styling
# approximately correct.
@@ -1236,16 +1011,22 @@ class BuildEnvironment(object):
def create_index(self, builder, group_entries=True,
_fixre=re.compile(r'(.*) ([(][^()]*[)])')):
# type: (Builder, bool, Pattern) -> Any
return self.indices.create_index(builder, group_entries=group_entries, _fixre=_fixre) # type: ignore # NOQA
# type: (Builder, bool, Pattern) -> List[Tuple[unicode, List[Tuple[unicode, List[unicode]]]]] # NOQA
warnings.warn('env.create_index() is deprecated. '
'Use sphinx.environment.adapters.indexentries.IndexEntries instead.',
RemovedInSphinx20Warning)
return IndexEntries(self).create_index(builder,
group_entries=group_entries,
_fixre=_fixre)
def collect_relations(self):
# type: () -> Dict[unicode, List[unicode]]
traversed = set()
def traverse_toctree(parent, docname):
# type: (unicode, unicode) -> Iterator[Tuple[unicode, unicode]]
if parent == docname:
self.warn(docname, 'self-referenced toctree found. Ignored.')
logger.warning('self-referenced toctree found. Ignored.', location=docname)
return
# traverse toctree by pre-order
@@ -1285,4 +1066,5 @@ class BuildEnvironment(object):
continue
if 'orphan' in self.metadata[docname]:
continue
self.warn(docname, 'document isn\'t included in any toctree')
logger.warning('document isn\'t included in any toctree',
location=docname)


@@ -0,0 +1,10 @@
# -*- coding: utf-8 -*-
"""
sphinx.environment.adapters
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sphinx environment adapters
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""


@@ -1,9 +1,9 @@
# -*- coding: utf-8 -*-
"""
sphinx.environment.managers.indexentries
sphinx.environment.adapters.indexentries
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Index entries manager for sphinx.environment.
Index entries adapters for sphinx.environment.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
@@ -11,67 +11,37 @@
import re
import bisect
import unicodedata
import string
from itertools import groupby
from six import text_type
from sphinx import addnodes
from sphinx.util import iteritems, split_index_msg, split_into
from sphinx.locale import _
from sphinx.environment.managers import EnvironmentManager
from sphinx.util import iteritems, split_into, logging
if False:
# For type annotation
from typing import Pattern, Tuple # NOQA
from docutils import nodes # NOQA
from typing import Any, Dict, Pattern, List, Tuple # NOQA
from sphinx.builders import Builder # NOQA
from sphinx.environment import BuildEnvironment # NOQA
logger = logging.getLogger(__name__)
class IndexEntries(EnvironmentManager):
name = 'indices'
class IndexEntries(object):
def __init__(self, env):
# type: (BuildEnvironment) -> None
super(IndexEntries, self).__init__(env)
self.data = env.indexentries
def clear_doc(self, docname):
# type: (unicode) -> None
self.data.pop(docname, None)
def merge_other(self, docnames, other):
# type: (List[unicode], BuildEnvironment) -> None
for docname in docnames:
self.data[docname] = other.indexentries[docname]
def process_doc(self, docname, doctree):
# type: (unicode, nodes.Node) -> None
entries = self.data[docname] = []
for node in doctree.traverse(addnodes.index):
try:
for entry in node['entries']:
split_index_msg(entry[0], entry[1])
except ValueError as exc:
self.env.warn_node(exc, node)
node.parent.remove(node)
else:
for entry in node['entries']:
if len(entry) == 5:
# Since 1.4: new index structure including index_key (5th column)
entries.append(entry)
else:
entries.append(entry + (None,))
self.env = env
def create_index(self, builder, group_entries=True,
_fixre=re.compile(r'(.*) ([(][^()]*[)])')):
# type: (Builder, bool, Pattern) -> List[Tuple[unicode, List[Tuple[unicode, List[unicode]]]]] # NOQA
# type: (Builder, bool, Pattern) -> List[Tuple[unicode, List[Tuple[unicode, Any]]]] # NOQA
"""Create the real index from the collected index entries."""
from sphinx.environment import NoUri
new = {} # type: Dict[unicode, List]
def add_entry(word, subword, main, link=True, dic=new, key=None):
# type: (unicode, unicode, unicode, bool, Dict, unicode) -> None
# Force the word to be unicode if it's a ASCII bytestring.
# This will solve problems with unicode normalization later.
# For instance the RFC role will add bytestrings at the moment
@@ -90,7 +60,7 @@ class IndexEntries(EnvironmentManager):
# maintain links in sorted/deterministic order
bisect.insort(entry[0], (main, uri))
for fn, entries in iteritems(self.data):
for fn, entries in iteritems(self.env.indexentries):
# new entry types must be listed in directives/other.py!
for type, value, tid, main, index_key in entries:
try:
@@ -119,15 +89,22 @@ class IndexEntries(EnvironmentManager):
add_entry(first, _('see also %s') % second, None,
link=False, key=index_key)
else:
self.env.warn(fn, 'unknown index entry type %r' % type)
logger.warning('unknown index entry type %r', type, location=fn)
except ValueError as err:
self.env.warn(fn, str(err))
logger.warning(str(err), location=fn)
# sort the index entries; put all symbols at the front, even those
# following the letters in ASCII, this is where the chr(127) comes from
def keyfunc(entry, lcletters=string.ascii_lowercase + '_'):
lckey = unicodedata.normalize('NFD', entry[0].lower())
if lckey[0:1] in lcletters:
def keyfunc(entry):
# type: (Tuple[unicode, List]) -> Tuple[unicode, unicode]
key, (void, void, category_key) = entry
if category_key:
# using specified category key to sort
key = category_key
lckey = unicodedata.normalize('NFD', key.lower())
if lckey.startswith(u'\N{RIGHT-TO-LEFT MARK}'):
lckey = lckey[1:]
if lckey[0:1].isalpha() or lckey.startswith('_'):
lckey = chr(127) + lckey
# ensure a deterministic order *within* letters by also sorting on
# the entry itself
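The sort key described above can be exercised in isolation. A simplified sketch over plain strings (Sphinx's real keyfunc operates on (key, entry-list) tuples and also honors an explicit category key):

```python
import unicodedata

def keyfunc(entry):
    # Normalize, strip a leading right-to-left mark, then push letter and
    # underscore entries behind chr(127) so symbols sort to the front.
    lckey = unicodedata.normalize('NFD', entry.lower())
    if lckey.startswith(u'\N{RIGHT-TO-LEFT MARK}'):
        lckey = lckey[1:]
    if lckey[0:1].isalpha() or lckey.startswith('_'):
        lckey = chr(127) + lckey
    # sort on the entry itself as a tiebreaker for determinism
    return (lckey, entry)

print(sorted(['zeta', '__init__', 'alpha', '$HOME'], key=keyfunc))
# -> ['$HOME', '__init__', 'alpha', 'zeta']
```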
@@ -165,14 +142,17 @@ class IndexEntries(EnvironmentManager):
i += 1
# group the entries by letter
def keyfunc2(item, letters=string.ascii_uppercase + '_'):
def keyfunc2(item):
# type: (Tuple[unicode, List]) -> unicode
# hack: mutating the subitems dicts to a list in the keyfunc
k, v = item
v[1] = sorted((si, se) for (si, (se, void, void)) in iteritems(v[1]))
if v[2] is None:
# now calculate the key
if k.startswith(u'\N{RIGHT-TO-LEFT MARK}'):
k = k[1:]
letter = unicodedata.normalize('NFD', k[0])[0].upper()
if letter in letters:
if letter.isalpha() or letter == '_':
return letter
else:
# get all other symbols under one heading


@@ -0,0 +1,325 @@
# -*- coding: utf-8 -*-
"""
sphinx.environment.adapters.toctree
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Toctree adapter for sphinx.environment.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from six import iteritems
from docutils import nodes
from sphinx import addnodes
from sphinx.util import url_re, logging
from sphinx.util.nodes import clean_astext, process_only_nodes
if False:
# For type annotation
from typing import Any, Dict, List # NOQA
from sphinx.builders import Builder # NOQA
from sphinx.environment import BuildEnvironment # NOQA
logger = logging.getLogger(__name__)
class TocTree(object):
def __init__(self, env):
# type: (BuildEnvironment) -> None
self.env = env
def note(self, docname, toctreenode):
# type: (unicode, addnodes.toctree) -> None
"""Note a TOC tree directive in a document and gather information about
file relations from it.
"""
if toctreenode['glob']:
self.env.glob_toctrees.add(docname)
if toctreenode.get('numbered'):
self.env.numbered_toctrees.add(docname)
includefiles = toctreenode['includefiles']
for includefile in includefiles:
# note that if the included file is rebuilt, this one must be
# too (since the TOC of the included file could have changed)
self.env.files_to_rebuild.setdefault(includefile, set()).add(docname)
self.env.toctree_includes.setdefault(docname, []).extend(includefiles)
def resolve(self, docname, builder, toctree, prune=True, maxdepth=0,
titles_only=False, collapse=False, includehidden=False):
# type: (unicode, Builder, addnodes.toctree, bool, int, bool, bool, bool) -> nodes.Node
"""Resolve a *toctree* node into individual bullet lists with titles
as items, returning None (if no containing titles are found) or
a new node.
If *prune* is True, the tree is pruned to *maxdepth*, or if that is 0,
to the value of the *maxdepth* option on the *toctree* node.
If *titles_only* is True, only toplevel document titles will be in the
resulting tree.
If *collapse* is True, all branches not containing docname will
be collapsed.
"""
if toctree.get('hidden', False) and not includehidden:
return None
# For reading the following two helper function, it is useful to keep
# in mind the node structure of a toctree (using HTML-like node names
# for brevity):
#
# <ul>
# <li>
# <p><a></p>
# <p><a></p>
# ...
# <ul>
# ...
# </ul>
# </li>
# </ul>
#
# The transformation is made in two passes in order to avoid
# interactions between marking and pruning the tree (see bug #1046).
toctree_ancestors = self.get_toctree_ancestors(docname)
def _toctree_add_classes(node, depth):
# type: (nodes.Node, int) -> None
"""Add 'toctree-l%d' and 'current' classes to the toctree."""
for subnode in node.children:
if isinstance(subnode, (addnodes.compact_paragraph,
nodes.list_item)):
# for <p> and <li>, indicate the depth level and recurse
subnode['classes'].append('toctree-l%d' % (depth - 1))
_toctree_add_classes(subnode, depth)
elif isinstance(subnode, nodes.bullet_list):
# for <ul>, just recurse
_toctree_add_classes(subnode, depth + 1)
elif isinstance(subnode, nodes.reference):
# for <a>, identify which entries point to the current
# document and therefore may not be collapsed
if subnode['refuri'] == docname:
if not subnode['anchorname']:
# give the whole branch a 'current' class
# (useful for styling it differently)
branchnode = subnode
while branchnode:
branchnode['classes'].append('current')
branchnode = branchnode.parent
# mark the list_item as "on current page"
if subnode.parent.parent.get('iscurrent'):
# but only if it's not already done
return
while subnode:
subnode['iscurrent'] = True
subnode = subnode.parent
def _entries_from_toctree(toctreenode, parents, separate=False, subtree=False):
# type: (addnodes.toctree, List[nodes.Node], bool, bool) -> List[nodes.Node]
"""Return TOC entries for a toctree node."""
refs = [(e[0], e[1]) for e in toctreenode['entries']]
entries = []
for (title, ref) in refs:
try:
refdoc = None
if url_re.match(ref):
if title is None:
title = ref
reference = nodes.reference('', '', internal=False,
refuri=ref, anchorname='',
*[nodes.Text(title)])
para = addnodes.compact_paragraph('', '', reference)
item = nodes.list_item('', para)
toc = nodes.bullet_list('', item)
elif ref == 'self':
# 'self' refers to the document from which this
# toctree originates
ref = toctreenode['parent']
if not title:
title = clean_astext(self.env.titles[ref])
reference = nodes.reference('', '', internal=True,
refuri=ref,
anchorname='',
*[nodes.Text(title)])
para = addnodes.compact_paragraph('', '', reference)
item = nodes.list_item('', para)
# don't show subitems
toc = nodes.bullet_list('', item)
else:
if ref in parents:
logger.warning('circular toctree references '
'detected, ignoring: %s <- %s',
ref, ' <- '.join(parents),
location=ref)
continue
refdoc = ref
toc = self.env.tocs[ref].deepcopy()
maxdepth = self.env.metadata[ref].get('tocdepth', 0)
if ref not in toctree_ancestors or (prune and maxdepth > 0):
self._toctree_prune(toc, 2, maxdepth, collapse)
process_only_nodes(toc, builder.tags)
if title and toc.children and len(toc.children) == 1:
child = toc.children[0]
for refnode in child.traverse(nodes.reference):
if refnode['refuri'] == ref and \
not refnode['anchorname']:
refnode.children = [nodes.Text(title)]
if not toc.children:
# empty toc means: no titles will show up in the toctree
logger.warning('toctree contains reference to document %r that '
'doesn\'t have a title: no link will be generated',
ref, location=toctreenode)
except KeyError:
# this is raised if the included file does not exist
logger.warning('toctree contains reference to nonexisting document %r',
ref, location=toctreenode)
else:
# if titles_only is given, only keep the main title and
# sub-toctrees
if titles_only:
# delete everything but the toplevel title(s)
# and toctrees
for toplevel in toc:
# nodes with length 1 don't have any children anyway
if len(toplevel) > 1:
subtrees = toplevel.traverse(addnodes.toctree)
if subtrees:
toplevel[1][:] = subtrees
else:
toplevel.pop(1)
# resolve all sub-toctrees
for subtocnode in toc.traverse(addnodes.toctree):
if not (subtocnode.get('hidden', False) and
not includehidden):
i = subtocnode.parent.index(subtocnode) + 1
for item in _entries_from_toctree(
subtocnode, [refdoc] + parents,
subtree=True):
subtocnode.parent.insert(i, item)
i += 1
subtocnode.parent.remove(subtocnode)
if separate:
entries.append(toc)
else:
entries.extend(toc.children)
if not subtree and not separate:
ret = nodes.bullet_list()
ret += entries
return [ret]
return entries
maxdepth = maxdepth or toctree.get('maxdepth', -1)
if not titles_only and toctree.get('titlesonly', False):
titles_only = True
if not includehidden and toctree.get('includehidden', False):
includehidden = True
# NOTE: previously, this was separate=True, but that leads to artificial
# separation when two or more toctree entries form a logical unit, so
# separating mode is no longer used -- it's kept here for history's sake
tocentries = _entries_from_toctree(toctree, [], separate=False)
if not tocentries:
return None
newnode = addnodes.compact_paragraph('', '')
caption = toctree.attributes.get('caption')
if caption:
caption_node = nodes.caption(caption, '', *[nodes.Text(caption)])
caption_node.line = toctree.line
caption_node.source = toctree.source
caption_node.rawsource = toctree['rawcaption']
if hasattr(toctree, 'uid'):
# move uid to caption_node to translate it
caption_node.uid = toctree.uid
del toctree.uid
newnode += caption_node
newnode.extend(tocentries)
newnode['toctree'] = True
# prune the tree to maxdepth, also set toc depth and current classes
_toctree_add_classes(newnode, 1)
self._toctree_prune(newnode, 1, prune and maxdepth or 0, collapse)
if len(newnode[-1]) == 0: # No titles found
return None
# set the target paths in the toctrees (they are not known at TOC
# generation time)
for refnode in newnode.traverse(nodes.reference):
if not url_re.match(refnode['refuri']):
refnode['refuri'] = builder.get_relative_uri(
docname, refnode['refuri']) + refnode['anchorname']
return newnode
def get_toctree_ancestors(self, docname):
# type: (unicode) -> List[unicode]
parent = {}
for p, children in iteritems(self.env.toctree_includes):
for child in children:
parent[child] = p
ancestors = [] # type: List[unicode]
d = docname
while d in parent and d not in ancestors:
ancestors.append(d)
d = parent[d]
return ancestors
def _toctree_prune(self, node, depth, maxdepth, collapse=False):
# type: (nodes.Node, int, int, bool) -> None
"""Utility: Cut a TOC at a specified depth."""
for subnode in node.children[:]:
if isinstance(subnode, (addnodes.compact_paragraph,
nodes.list_item)):
# for <p> and <li>, just recurse
self._toctree_prune(subnode, depth, maxdepth, collapse)
elif isinstance(subnode, nodes.bullet_list):
# for <ul>, determine if the depth is too large or if the
# entry is to be collapsed
if maxdepth > 0 and depth > maxdepth:
subnode.parent.replace(subnode, [])
else:
# cull sub-entries whose parents aren't 'current'
if (collapse and depth > 1 and
'iscurrent' not in subnode.parent):
subnode.parent.remove(subnode)
else:
# recurse on visible children
self._toctree_prune(subnode, depth + 1, maxdepth, collapse)
def get_toc_for(self, docname, builder):
# type: (unicode, Builder) -> Dict[unicode, nodes.Node]
"""Return a TOC nodetree -- for use on the same page only!"""
tocdepth = self.env.metadata[docname].get('tocdepth', 0)
try:
toc = self.env.tocs[docname].deepcopy()
self._toctree_prune(toc, 2, tocdepth)
except KeyError:
# the document does not exist anymore: return a dummy node that
# renders to nothing
return nodes.paragraph()
process_only_nodes(toc, builder.tags)
for node in toc.traverse(nodes.reference):
node['refuri'] = node['anchorname'] or '#'
return toc
def get_toctree_for(self, docname, builder, collapse, **kwds):
# type: (unicode, Builder, bool, Any) -> nodes.Node
"""Return the global TOC nodetree."""
doctree = self.env.get_doctree(self.env.config.master_doc)
toctrees = []
if 'includehidden' not in kwds:
kwds['includehidden'] = True
if 'maxdepth' not in kwds:
kwds['maxdepth'] = 0
kwds['collapse'] = collapse
for toctreenode in doctree.traverse(addnodes.toctree):
toctree = self.resolve(docname, builder, toctreenode, prune=True, **kwds)
if toctree:
toctrees.append(toctree)
if not toctrees:
return None
result = toctrees[0]
for toctree in toctrees[1:]:
result.extend(toctree.children)
return result
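The ancestor computation in `get_toctree_ancestors` above can be illustrated in isolation. This is a minimal sketch, not Sphinx code: it inverts the `toctree_includes` mapping into a child-to-parent map and walks upward, with the `d not in ancestors` check guarding against circular includes.

```python
def toctree_ancestors(docname, toctree_includes):
    """Return docname and its ancestors, nearest first, excluding the root.

    toctree_includes maps a document to the list of documents its
    toctree directives include (mirroring env.toctree_includes).
    """
    # invert the includes mapping into a child -> parent map
    parent = {}
    for p, children in toctree_includes.items():
        for child in children:
            parent[child] = p
    ancestors = []
    d = docname
    # follow parents until the chain ends or a cycle is detected
    while d in parent and d not in ancestors:
        ancestors.append(d)
        d = parent[d]
    return ancestors
```

Note that the root document never appears in the result, since it has no parent entry, which is why `resolve()` can use membership in this list to decide which branches stay expanded.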


@@ -0,0 +1,85 @@
# -*- coding: utf-8 -*-
"""
sphinx.environment.collectors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The data collector components for sphinx.environment.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from six import itervalues
if False:
# For type annotation
from typing import Dict, List, Set # NOQA
from docutils import nodes # NOQA
    from sphinx.application import Sphinx  # NOQA
from sphinx.environment import BuildEnvironment # NOQA
class EnvironmentCollector(object):
"""An EnvironmentCollector is a specific data collector from each document.
It gathers data and stores :py:class:`BuildEnvironment
<sphinx.environment.BuildEnvironment>` as a database. Examples of specific
data would be images, download files, section titles, metadatas, index
entries and toctrees, etc.
"""
listener_ids = None # type: Dict[unicode, int]
def enable(self, app):
# type: (Sphinx) -> None
assert self.listener_ids is None
self.listener_ids = {
'doctree-read': app.connect('doctree-read', self.process_doc),
'env-merge-info': app.connect('env-merge-info', self.merge_other),
'env-purge-doc': app.connect('env-purge-doc', self.clear_doc),
'env-get-updated': app.connect('env-get-updated', self.get_updated_docs),
'env-get-outdated': app.connect('env-get-outdated', self.get_outdated_docs),
}
def disable(self, app):
# type: (Sphinx) -> None
assert self.listener_ids is not None
for listener_id in itervalues(self.listener_ids):
app.disconnect(listener_id)
self.listener_ids = None
def clear_doc(self, app, env, docname):
# type: (Sphinx, BuildEnvironment, unicode) -> None
"""Remove specified data of a document.
This method is called on the removal of the document."""
raise NotImplementedError
def merge_other(self, app, env, docnames, other):
# type: (Sphinx, BuildEnvironment, Set[unicode], BuildEnvironment) -> None
"""Merge in specified data regarding docnames from a different `BuildEnvironment`
        object that comes from a subprocess in parallel builds."""
raise NotImplementedError
def process_doc(self, app, doctree):
# type: (Sphinx, nodes.Node) -> None
"""Process a document and gather specific data from it.
This method is called after the document is read."""
raise NotImplementedError
def get_updated_docs(self, app, env):
# type: (Sphinx, BuildEnvironment) -> List[unicode]
"""Return a list of docnames to re-read.
        This method is called after reading all the documents (experimental).
"""
return []
def get_outdated_docs(self, app, env, added, changed, removed):
        # type: (Sphinx, BuildEnvironment, Set[unicode], Set[unicode], Set[unicode]) -> List[unicode]  # NOQA
"""Return a list of docnames to re-read.
        This method is called before reading the documents.
"""
return []
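The `enable()`/`disable()` pair above is a small event-registration protocol: connect the collector's callbacks, remember the listener ids, and disconnect them all later. A hedged sketch of that bookkeeping, using a stub event bus instead of a real Sphinx application (`FakeApp` and `WordCountCollector` are illustrative names, not part of Sphinx):

```python
class FakeApp:
    """Stub standing in for sphinx.application.Sphinx's event API."""
    def __init__(self):
        self._listeners = {}
        self._next_id = 0

    def connect(self, event, callback):
        # return a listener id, as Sphinx.connect() does
        self._next_id += 1
        self._listeners[self._next_id] = (event, callback)
        return self._next_id

    def disconnect(self, listener_id):
        del self._listeners[listener_id]


class WordCountCollector:
    """Toy collector: counts words per document, mirroring the
    clear_doc/process_doc methods of EnvironmentCollector."""
    listener_ids = None

    def enable(self, app):
        assert self.listener_ids is None
        self.counts = {}
        # remember every listener id so disable() can undo the connects
        self.listener_ids = {
            'doctree-read': app.connect('doctree-read', self.process_doc),
            'env-purge-doc': app.connect('env-purge-doc', self.clear_doc),
        }

    def disable(self, app):
        for listener_id in self.listener_ids.values():
            app.disconnect(listener_id)
        self.listener_ids = None

    def clear_doc(self, docname):
        self.counts.pop(docname, None)

    def process_doc(self, docname, text):
        self.counts[docname] = len(text.split())
```

The `assert self.listener_ids is None` guard makes double-enabling fail loudly instead of silently leaking listeners.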


@@ -0,0 +1,148 @@
# -*- coding: utf-8 -*-
"""
sphinx.environment.collectors.asset
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The image collector for sphinx.environment.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import os
from os import path
from glob import glob
from six import iteritems, itervalues
from docutils import nodes
from docutils.utils import relative_path
from sphinx import addnodes
from sphinx.environment.collectors import EnvironmentCollector
from sphinx.util import logging
from sphinx.util.i18n import get_image_filename_for_language, search_image_for_language
from sphinx.util.images import guess_mimetype
if False:
# For type annotation
from typing import Dict, List, Set, Tuple # NOQA
from docutils import nodes # NOQA
    from sphinx.application import Sphinx  # NOQA
from sphinx.environment import BuildEnvironment # NOQA
logger = logging.getLogger(__name__)
class ImageCollector(EnvironmentCollector):
"""Image files collector for sphinx.environment."""
def clear_doc(self, app, env, docname):
# type: (Sphinx, BuildEnvironment, unicode) -> None
env.images.purge_doc(docname)
def merge_other(self, app, env, docnames, other):
# type: (Sphinx, BuildEnvironment, Set[unicode], BuildEnvironment) -> None
env.images.merge_other(docnames, other.images)
def process_doc(self, app, doctree):
# type: (Sphinx, nodes.Node) -> None
"""Process and rewrite image URIs."""
docname = app.env.docname
for node in doctree.traverse(nodes.image):
# Map the mimetype to the corresponding image. The writer may
# choose the best image from these candidates. The special key * is
# set if there is only single candidate to be used by a writer.
# The special key ? is set for nonlocal URIs.
candidates = {} # type: Dict[unicode, unicode]
node['candidates'] = candidates
imguri = node['uri']
if imguri.startswith('data:'):
                logger.warning('image data URI found. some builders might not support it',
location=node, type='image', subtype='data_uri')
candidates['?'] = imguri
continue
elif imguri.find('://') != -1:
logger.warning('nonlocal image URI found: %s' % imguri,
location=node,
type='image', subtype='nonlocal_uri')
candidates['?'] = imguri
continue
rel_imgpath, full_imgpath = app.env.relfn2path(imguri, docname)
if app.config.language:
# substitute figures (ex. foo.png -> foo.en.png)
i18n_full_imgpath = search_image_for_language(full_imgpath, app.env)
if i18n_full_imgpath != full_imgpath:
full_imgpath = i18n_full_imgpath
rel_imgpath = relative_path(path.join(app.srcdir, 'dummy'),
i18n_full_imgpath)
# set imgpath as default URI
node['uri'] = rel_imgpath
if rel_imgpath.endswith(os.extsep + '*'):
if app.config.language:
# Search language-specific figures at first
i18n_imguri = get_image_filename_for_language(imguri, app.env)
_, full_i18n_imgpath = app.env.relfn2path(i18n_imguri, docname)
self.collect_candidates(app.env, full_i18n_imgpath, candidates, node)
self.collect_candidates(app.env, full_imgpath, candidates, node)
else:
candidates['*'] = rel_imgpath
# map image paths to unique image names (so that they can be put
# into a single directory)
for imgpath in itervalues(candidates):
app.env.dependencies[docname].add(imgpath)
if not os.access(path.join(app.srcdir, imgpath), os.R_OK):
logger.warning('image file not readable: %s' % imgpath,
location=node, type='image', subtype='not_readable')
continue
app.env.images.add_file(docname, imgpath)
def collect_candidates(self, env, imgpath, candidates, node):
# type: (BuildEnvironment, unicode, Dict[unicode, unicode], nodes.Node) -> None
globbed = {} # type: Dict[unicode, List[unicode]]
for filename in glob(imgpath):
new_imgpath = relative_path(path.join(env.srcdir, 'dummy'),
filename)
try:
mimetype = guess_mimetype(filename)
if mimetype not in candidates:
globbed.setdefault(mimetype, []).append(new_imgpath)
except (OSError, IOError) as err:
logger.warning('image file %s not readable: %s' % (filename, err),
location=node, type='image', subtype='not_readable')
for key, files in iteritems(globbed):
            candidates[key] = sorted(files, key=len)[0]  # select the shortest path (closest to the original name)
class DownloadFileCollector(EnvironmentCollector):
"""Download files collector for sphinx.environment."""
def clear_doc(self, app, env, docname):
# type: (Sphinx, BuildEnvironment, unicode) -> None
env.dlfiles.purge_doc(docname)
def merge_other(self, app, env, docnames, other):
# type: (Sphinx, BuildEnvironment, Set[unicode], BuildEnvironment) -> None
env.dlfiles.merge_other(docnames, other.dlfiles)
def process_doc(self, app, doctree):
# type: (Sphinx, nodes.Node) -> None
"""Process downloadable file paths. """
for node in doctree.traverse(addnodes.download_reference):
targetname = node['reftarget']
rel_filename, filename = app.env.relfn2path(targetname, app.env.docname)
app.env.dependencies[app.env.docname].add(rel_filename)
if not os.access(filename, os.R_OK):
logger.warning('download file not readable: %s' % filename,
location=node, type='download', subtype='not_readable')
continue
node['filename'] = app.env.dlfiles.add_file(app.env.docname, filename)
def setup(app):
# type: (Sphinx) -> None
app.add_env_collector(ImageCollector)
app.add_env_collector(DownloadFileCollector)
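The candidate-selection rule in `collect_candidates` above (group globbed files by mimetype, then keep `sorted(files, key=len)[0]` per type) can be sketched without the filesystem. This is an illustrative reduction, not Sphinx code; the mimetype guesser is passed in rather than imported:

```python
def pick_candidates(filenames, guess_mimetype):
    """Map each mimetype to the single best file among *filenames*."""
    globbed = {}
    for filename in filenames:
        mimetype = guess_mimetype(filename)
        globbed.setdefault(mimetype, []).append(filename)
    # the shortest path is assumed to be closest to the original stem,
    # e.g. fig.png beats fig.hires.png for image/png
    return {key: sorted(files, key=len)[0] for key, files in globbed.items()}
```

A writer can then pick the entry for the mimetype it supports, falling back over the remaining candidates.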


@@ -0,0 +1,60 @@
# -*- coding: utf-8 -*-
"""
sphinx.environment.collectors.dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The dependencies collector components for sphinx.environment.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from os import path
from docutils.utils import relative_path
from sphinx.util.osutil import getcwd, fs_encoding
from sphinx.environment.collectors import EnvironmentCollector
if False:
# For type annotation
from typing import Set # NOQA
from docutils import nodes # NOQA
    from sphinx.application import Sphinx  # NOQA
from sphinx.environment import BuildEnvironment # NOQA
class DependenciesCollector(EnvironmentCollector):
"""dependencies collector for sphinx.environment."""
def clear_doc(self, app, env, docname):
# type: (Sphinx, BuildEnvironment, unicode) -> None
env.dependencies.pop(docname, None)
def merge_other(self, app, env, docnames, other):
# type: (Sphinx, BuildEnvironment, Set[unicode], BuildEnvironment) -> None
for docname in docnames:
if docname in other.dependencies:
env.dependencies[docname] = other.dependencies[docname]
def process_doc(self, app, doctree):
# type: (Sphinx, nodes.Node) -> None
"""Process docutils-generated dependency info."""
cwd = getcwd()
frompath = path.join(path.normpath(app.srcdir), 'dummy')
deps = doctree.settings.record_dependencies
if not deps:
return
for dep in deps.list:
# the dependency path is relative to the working dir, so get
# one relative to the srcdir
if isinstance(dep, bytes):
dep = dep.decode(fs_encoding)
relpath = relative_path(frompath,
path.normpath(path.join(cwd, dep)))
app.env.dependencies[app.env.docname].add(relpath)
def setup(app):
# type: (Sphinx) -> None
app.add_env_collector(DependenciesCollector)
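The path arithmetic in `DependenciesCollector.process_doc` rebases each cwd-relative dependency onto the source directory. A minimal sketch of the same idea, using `posixpath.relpath` as a stand-in for docutils' `relative_path` (which computes an equivalent relative path via a dummy sibling file):

```python
import posixpath

def rebase_dependency(dep, cwd, srcdir):
    """Turn a cwd-relative dependency path into a srcdir-relative one."""
    # docutils records the dependency relative to the working directory
    absolute = posixpath.normpath(posixpath.join(cwd, dep))
    # rebase it onto the documentation source directory
    return posixpath.relpath(absolute, posixpath.normpath(srcdir))
```

Dependencies outside the source tree simply come back with leading `..` segments, which is also what the real collector stores.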


@@ -0,0 +1,60 @@
# -*- coding: utf-8 -*-
"""
sphinx.environment.collectors.indexentries
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Index entries collector for sphinx.environment.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from sphinx import addnodes
from sphinx.util import split_index_msg, logging
from sphinx.environment.collectors import EnvironmentCollector
if False:
# For type annotation
from typing import Set # NOQA
from docutils import nodes # NOQA
    from sphinx.application import Sphinx  # NOQA
from sphinx.environment import BuildEnvironment # NOQA
logger = logging.getLogger(__name__)
class IndexEntriesCollector(EnvironmentCollector):
name = 'indices'
def clear_doc(self, app, env, docname):
# type: (Sphinx, BuildEnvironment, unicode) -> None
env.indexentries.pop(docname, None)
def merge_other(self, app, env, docnames, other):
# type: (Sphinx, BuildEnvironment, Set[unicode], BuildEnvironment) -> None
for docname in docnames:
env.indexentries[docname] = other.indexentries[docname]
def process_doc(self, app, doctree):
# type: (Sphinx, nodes.Node) -> None
docname = app.env.docname
entries = app.env.indexentries[docname] = []
for node in doctree.traverse(addnodes.index):
try:
for entry in node['entries']:
split_index_msg(entry[0], entry[1])
except ValueError as exc:
logger.warning(str(exc), location=node)
node.parent.remove(node)
else:
for entry in node['entries']:
if len(entry) == 5:
# Since 1.4: new index structure including index_key (5th column)
entries.append(entry)
else:
entries.append(entry + (None,))
def setup(app):
# type: (Sphinx) -> None
app.add_env_collector(IndexEntriesCollector)
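The tuple-padding step in `IndexEntriesCollector.process_doc` deserves a tiny standalone illustration. This sketch (not Sphinx code) shows the normalisation: entries written before Sphinx 1.4 carry four fields, newer ones add a fifth `index_key` column, so old tuples are padded with `None` to keep a uniform shape downstream:

```python
def normalize_entries(raw_entries):
    """Pad 4-tuple index entries to the 5-tuple form with index_key."""
    entries = []
    for entry in raw_entries:
        if len(entry) == 5:
            # already the post-1.4 structure including index_key
            entries.append(entry)
        else:
            entries.append(entry + (None,))
    return entries
```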


@@ -0,0 +1,73 @@
# -*- coding: utf-8 -*-
"""
sphinx.environment.collectors.metadata
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The metadata collector components for sphinx.environment.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from docutils import nodes
from sphinx.environment.collectors import EnvironmentCollector
if False:
# For type annotation
from typing import Set # NOQA
from docutils import nodes # NOQA
    from sphinx.application import Sphinx  # NOQA
from sphinx.environment import BuildEnvironment # NOQA
class MetadataCollector(EnvironmentCollector):
"""metadata collector for sphinx.environment."""
def clear_doc(self, app, env, docname):
# type: (Sphinx, BuildEnvironment, unicode) -> None
env.metadata.pop(docname, None)
def merge_other(self, app, env, docnames, other):
# type: (Sphinx, BuildEnvironment, Set[unicode], BuildEnvironment) -> None
for docname in docnames:
env.metadata[docname] = other.metadata[docname]
def process_doc(self, app, doctree):
# type: (Sphinx, nodes.Node) -> None
"""Process the docinfo part of the doctree as metadata.
Keep processing minimal -- just return what docutils says.
"""
md = app.env.metadata[app.env.docname]
try:
docinfo = doctree[0]
except IndexError:
# probably an empty document
return
if docinfo.__class__ is not nodes.docinfo:
# nothing to see here
return
for node in docinfo:
# nodes are multiply inherited...
if isinstance(node, nodes.authors):
md['authors'] = [author.astext() for author in node]
elif isinstance(node, nodes.TextElement): # e.g. author
md[node.__class__.__name__] = node.astext()
else:
name, body = node
md[name.astext()] = body.astext()
for name, value in md.items():
if name in ('tocdepth',):
try:
value = int(value)
except ValueError:
value = 0
md[name] = value
del doctree[0]
def setup(app):
# type: (Sphinx) -> None
app.add_env_collector(MetadataCollector)


@@ -0,0 +1,66 @@
# -*- coding: utf-8 -*-
"""
sphinx.environment.collectors.title
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The title collector components for sphinx.environment.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from docutils import nodes
from sphinx.environment.collectors import EnvironmentCollector
from sphinx.transforms import SphinxContentsFilter
if False:
# For type annotation
from typing import Set # NOQA
from docutils import nodes # NOQA
    from sphinx.application import Sphinx  # NOQA
from sphinx.environment import BuildEnvironment # NOQA
class TitleCollector(EnvironmentCollector):
"""title collector for sphinx.environment."""
def clear_doc(self, app, env, docname):
# type: (Sphinx, BuildEnvironment, unicode) -> None
env.titles.pop(docname, None)
env.longtitles.pop(docname, None)
def merge_other(self, app, env, docnames, other):
# type: (Sphinx, BuildEnvironment, Set[unicode], BuildEnvironment) -> None
for docname in docnames:
env.titles[docname] = other.titles[docname]
env.longtitles[docname] = other.longtitles[docname]
def process_doc(self, app, doctree):
# type: (Sphinx, nodes.Node) -> None
"""Add a title node to the document (just copy the first section title),
and store that title in the environment.
"""
titlenode = nodes.title()
longtitlenode = titlenode
# explicit title set with title directive; use this only for
# the <title> tag in HTML output
if 'title' in doctree:
longtitlenode = nodes.title()
longtitlenode += nodes.Text(doctree['title'])
# look for first section title and use that as the title
for node in doctree.traverse(nodes.section):
visitor = SphinxContentsFilter(doctree)
node[0].walkabout(visitor)
titlenode += visitor.get_entry_text()
break
else:
# document has no title
titlenode += nodes.Text('<no title>')
app.env.titles[app.env.docname] = titlenode
app.env.longtitles[app.env.docname] = longtitlenode
def setup(app):
# type: (Sphinx) -> None
app.add_env_collector(TitleCollector)


@@ -0,0 +1,288 @@
# -*- coding: utf-8 -*-
"""
sphinx.environment.collectors.toctree
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Toctree collector for sphinx.environment.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from six import iteritems
from docutils import nodes
from sphinx import addnodes
from sphinx.util import url_re, logging
from sphinx.transforms import SphinxContentsFilter
from sphinx.environment.adapters.toctree import TocTree
from sphinx.environment.collectors import EnvironmentCollector
if False:
# For type annotation
from typing import Any, Dict, List, Set, Tuple # NOQA
from sphinx.application import Sphinx # NOQA
from sphinx.builders import Builder # NOQA
from sphinx.environment import BuildEnvironment # NOQA
logger = logging.getLogger(__name__)
class TocTreeCollector(EnvironmentCollector):
def clear_doc(self, app, env, docname):
# type: (Sphinx, BuildEnvironment, unicode) -> None
env.tocs.pop(docname, None)
env.toc_secnumbers.pop(docname, None)
env.toc_fignumbers.pop(docname, None)
env.toc_num_entries.pop(docname, None)
env.toctree_includes.pop(docname, None)
env.glob_toctrees.discard(docname)
env.numbered_toctrees.discard(docname)
for subfn, fnset in list(env.files_to_rebuild.items()):
fnset.discard(docname)
if not fnset:
del env.files_to_rebuild[subfn]
def merge_other(self, app, env, docnames, other):
# type: (Sphinx, BuildEnvironment, Set[unicode], BuildEnvironment) -> None
for docname in docnames:
env.tocs[docname] = other.tocs[docname]
env.toc_num_entries[docname] = other.toc_num_entries[docname]
if docname in other.toctree_includes:
env.toctree_includes[docname] = other.toctree_includes[docname]
if docname in other.glob_toctrees:
env.glob_toctrees.add(docname)
if docname in other.numbered_toctrees:
env.numbered_toctrees.add(docname)
for subfn, fnset in other.files_to_rebuild.items():
env.files_to_rebuild.setdefault(subfn, set()).update(fnset & set(docnames))
def process_doc(self, app, doctree):
# type: (Sphinx, nodes.Node) -> None
"""Build a TOC from the doctree and store it in the inventory."""
docname = app.env.docname
numentries = [0] # nonlocal again...
def traverse_in_section(node, cls):
"""Like traverse(), but stay within the same section."""
result = []
if isinstance(node, cls):
result.append(node)
for child in node.children:
if isinstance(child, nodes.section):
continue
result.extend(traverse_in_section(child, cls))
return result
def build_toc(node, depth=1):
entries = []
for sectionnode in node:
# find all toctree nodes in this section and add them
# to the toc (just copying the toctree node which is then
# resolved in self.get_and_resolve_doctree)
if isinstance(sectionnode, addnodes.only):
onlynode = addnodes.only(expr=sectionnode['expr'])
blist = build_toc(sectionnode, depth)
if blist:
onlynode += blist.children
entries.append(onlynode)
continue
if not isinstance(sectionnode, nodes.section):
for toctreenode in traverse_in_section(sectionnode,
addnodes.toctree):
item = toctreenode.copy()
entries.append(item)
# important: do the inventory stuff
TocTree(app.env).note(docname, toctreenode)
continue
title = sectionnode[0]
# copy the contents of the section title, but without references
# and unnecessary stuff
visitor = SphinxContentsFilter(doctree)
title.walkabout(visitor)
nodetext = visitor.get_entry_text()
if not numentries[0]:
# for the very first toc entry, don't add an anchor
# as it is the file's title anyway
anchorname = ''
else:
anchorname = '#' + sectionnode['ids'][0]
numentries[0] += 1
# make these nodes:
# list_item -> compact_paragraph -> reference
reference = nodes.reference(
'', '', internal=True, refuri=docname,
anchorname=anchorname, *nodetext)
para = addnodes.compact_paragraph('', '', reference)
item = nodes.list_item('', para)
sub_item = build_toc(sectionnode, depth + 1)
item += sub_item
entries.append(item)
if entries:
return nodes.bullet_list('', *entries)
return []
toc = build_toc(doctree)
if toc:
app.env.tocs[docname] = toc
else:
app.env.tocs[docname] = nodes.bullet_list('')
app.env.toc_num_entries[docname] = numentries[0]
def get_updated_docs(self, app, env):
# type: (Sphinx, BuildEnvironment) -> List[unicode]
return self.assign_section_numbers(env) + self.assign_figure_numbers(env)
def assign_section_numbers(self, env):
# type: (BuildEnvironment) -> List[unicode]
"""Assign a section number to each heading under a numbered toctree."""
# a list of all docnames whose section numbers changed
rewrite_needed = []
assigned = set() # type: Set[unicode]
old_secnumbers = env.toc_secnumbers
env.toc_secnumbers = {}
def _walk_toc(node, secnums, depth, titlenode=None):
# titlenode is the title of the document, it will get assigned a
# secnumber too, so that it shows up in next/prev/parent rellinks
for subnode in node.children:
if isinstance(subnode, nodes.bullet_list):
numstack.append(0)
_walk_toc(subnode, secnums, depth - 1, titlenode)
numstack.pop()
titlenode = None
elif isinstance(subnode, nodes.list_item):
_walk_toc(subnode, secnums, depth, titlenode)
titlenode = None
elif isinstance(subnode, addnodes.only):
# at this stage we don't know yet which sections are going
# to be included; just include all of them, even if it leads
# to gaps in the numbering
_walk_toc(subnode, secnums, depth, titlenode)
titlenode = None
elif isinstance(subnode, addnodes.compact_paragraph):
numstack[-1] += 1
if depth > 0:
number = tuple(numstack)
else:
number = None
secnums[subnode[0]['anchorname']] = \
subnode[0]['secnumber'] = number
if titlenode:
titlenode['secnumber'] = number
titlenode = None
elif isinstance(subnode, addnodes.toctree):
_walk_toctree(subnode, depth)
def _walk_toctree(toctreenode, depth):
if depth == 0:
return
for (title, ref) in toctreenode['entries']:
if url_re.match(ref) or ref == 'self':
# don't mess with those
continue
elif ref in assigned:
logger.warning('%s is already assigned section numbers '
'(nested numbered toctree?)', ref,
location=toctreenode, type='toc', subtype='secnum')
elif ref in env.tocs:
secnums = env.toc_secnumbers[ref] = {}
assigned.add(ref)
_walk_toc(env.tocs[ref], secnums, depth,
env.titles.get(ref))
if secnums != old_secnumbers.get(ref):
rewrite_needed.append(ref)
for docname in env.numbered_toctrees:
assigned.add(docname)
doctree = env.get_doctree(docname)
for toctreenode in doctree.traverse(addnodes.toctree):
depth = toctreenode.get('numbered', 0)
if depth:
# every numbered toctree gets new numbering
numstack = [0]
_walk_toctree(toctreenode, depth)
return rewrite_needed
def assign_figure_numbers(self, env):
# type: (BuildEnvironment) -> List[unicode]
"""Assign a figure number to each figure under a numbered toctree."""
rewrite_needed = []
assigned = set() # type: Set[unicode]
old_fignumbers = env.toc_fignumbers
env.toc_fignumbers = {}
fignum_counter = {} # type: Dict[unicode, Dict[Tuple[int], int]]
def get_section_number(docname, section):
anchorname = '#' + section['ids'][0]
secnumbers = env.toc_secnumbers.get(docname, {})
if anchorname in secnumbers:
secnum = secnumbers.get(anchorname)
else:
secnum = secnumbers.get('')
return secnum or tuple()
def get_next_fignumber(figtype, secnum):
counter = fignum_counter.setdefault(figtype, {})
secnum = secnum[:env.config.numfig_secnum_depth]
counter[secnum] = counter.get(secnum, 0) + 1
return secnum + (counter[secnum],)
def register_fignumber(docname, secnum, figtype, fignode):
env.toc_fignumbers.setdefault(docname, {})
fignumbers = env.toc_fignumbers[docname].setdefault(figtype, {})
figure_id = fignode['ids'][0]
fignumbers[figure_id] = get_next_fignumber(figtype, secnum)
def _walk_doctree(docname, doctree, secnum):
for subnode in doctree.children:
if isinstance(subnode, nodes.section):
next_secnum = get_section_number(docname, subnode)
if next_secnum:
_walk_doctree(docname, subnode, next_secnum)
else:
_walk_doctree(docname, subnode, secnum)
continue
elif isinstance(subnode, addnodes.toctree):
for title, subdocname in subnode['entries']:
if url_re.match(subdocname) or subdocname == 'self':
# don't mess with those
continue
_walk_doc(subdocname, secnum)
continue
figtype = env.get_domain('std').get_figtype(subnode) # type: ignore
if figtype and subnode['ids']:
register_fignumber(docname, secnum, figtype, subnode)
_walk_doctree(docname, subnode, secnum)
def _walk_doc(docname, secnum):
if docname not in assigned:
assigned.add(docname)
doctree = env.get_doctree(docname)
_walk_doctree(docname, doctree, secnum)
if env.config.numfig:
_walk_doc(env.config.master_doc, tuple())
for docname, fignums in iteritems(env.toc_fignumbers):
if fignums != old_fignumbers.get(docname):
rewrite_needed.append(docname)
return rewrite_needed
def setup(app):
# type: (Sphinx) -> None
app.add_env_collector(TocTreeCollector)
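The per-section counter logic in `get_next_fignumber` above can be sketched in isolation. This is a minimal sketch, not Sphinx API: `make_fignumber_allocator` is an invented name, and the `secnum_depth` argument stands in for the `numfig_secnum_depth` config value.

```python
def make_fignumber_allocator(secnum_depth=1):
    # figtype -> {truncated section-number tuple -> last number handed out}
    counters = {}

    def next_fignumber(figtype, secnum):
        counter = counters.setdefault(figtype, {})
        secnum = secnum[:secnum_depth]          # e.g. (2, 3) -> (2,)
        counter[secnum] = counter.get(secnum, 0) + 1
        return secnum + (counter[secnum],)      # (2, 1) renders as "2.1"

    return next_fignumber
```

Each figtype ("figure", "table", ...) gets its own counter per truncated section number, which is why figures and tables in the same chapter number independently.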

View File

@@ -1,50 +0,0 @@
# -*- coding: utf-8 -*-
"""
sphinx.environment.managers
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Manager components for sphinx.environment.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
if False:
# For type annotation
from typing import Any # NOQA
from docutils import nodes # NOQA
from sphinx.environment import BuildEnvironment # NOQA
class EnvironmentManager(object):
"""Base class for sphinx.environment managers."""
name = None # type: unicode
env = None # type: BuildEnvironment
def __init__(self, env):
# type: (BuildEnvironment) -> None
self.env = env
def attach(self, env):
# type: (BuildEnvironment) -> None
self.env = env
if self.name:
setattr(env, self.name, self)
def detach(self, env):
# type: (BuildEnvironment) -> None
self.env = None
if self.name:
delattr(env, self.name)
def clear_doc(self, docname):
# type: (unicode) -> None
raise NotImplementedError
def merge_other(self, docnames, other):
# type: (List[unicode], Any) -> None
raise NotImplementedError
def process_doc(self, docname, doctree):
# type: (unicode, nodes.Node) -> None
raise NotImplementedError
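The attach/detach protocol defined by `EnvironmentManager` above can be exercised with stand-in classes. `Manager`, `TocManager`, and `DummyEnv` here are invented for the sketch, not Sphinx classes; only the attach/detach bodies mirror the code above.

```python
class DummyEnv(object):
    """Bare object standing in for a BuildEnvironment."""

class Manager(object):
    name = None  # attribute name under which the manager registers itself

    def __init__(self, env):
        self.env = env

    def attach(self, env):
        # bind to the environment and expose ourselves as env.<name>
        self.env = env
        if self.name:
            setattr(env, self.name, self)

    def detach(self, env):
        # unbind and remove the env.<name> attribute again
        self.env = None
        if self.name:
            delattr(env, self.name)

class TocManager(Manager):
    name = 'toctree'
```

After `attach`, the environment reaches the manager as `env.toctree`; `detach` undoes the registration, e.g. before the environment is pickled.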

View File

@@ -1,580 +0,0 @@
# -*- coding: utf-8 -*-
"""
sphinx.environment.managers.toctree
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Toctree manager for sphinx.environment.
:copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from six import iteritems
from docutils import nodes
from sphinx import addnodes
from sphinx.util import url_re
from sphinx.util.nodes import clean_astext, process_only_nodes
from sphinx.transforms import SphinxContentsFilter
from sphinx.environment.managers import EnvironmentManager
if False:
# For type annotation
from typing import Any, Tuple # NOQA
from sphinx.builders import Builder # NOQA
from sphinx.environment import BuildEnvironment # NOQA
class Toctree(EnvironmentManager):
name = 'toctree'
def __init__(self, env):
# type: (BuildEnvironment) -> None
super(Toctree, self).__init__(env)
self.tocs = env.tocs
self.toc_num_entries = env.toc_num_entries
self.toc_secnumbers = env.toc_secnumbers
self.toc_fignumbers = env.toc_fignumbers
self.toctree_includes = env.toctree_includes
self.files_to_rebuild = env.files_to_rebuild
self.glob_toctrees = env.glob_toctrees
self.numbered_toctrees = env.numbered_toctrees
def clear_doc(self, docname):
# type: (unicode) -> None
self.tocs.pop(docname, None)
self.toc_secnumbers.pop(docname, None)
self.toc_fignumbers.pop(docname, None)
self.toc_num_entries.pop(docname, None)
self.toctree_includes.pop(docname, None)
self.glob_toctrees.discard(docname)
self.numbered_toctrees.discard(docname)
for subfn, fnset in list(self.files_to_rebuild.items()):
fnset.discard(docname)
if not fnset:
del self.files_to_rebuild[subfn]
def merge_other(self, docnames, other):
# type: (List[unicode], BuildEnvironment) -> None
for docname in docnames:
self.tocs[docname] = other.tocs[docname]
self.toc_num_entries[docname] = other.toc_num_entries[docname]
if docname in other.toctree_includes:
self.toctree_includes[docname] = other.toctree_includes[docname]
if docname in other.glob_toctrees:
self.glob_toctrees.add(docname)
if docname in other.numbered_toctrees:
self.numbered_toctrees.add(docname)
for subfn, fnset in other.files_to_rebuild.items():
self.files_to_rebuild.setdefault(subfn, set()).update(fnset & docnames)
def process_doc(self, docname, doctree):
# type: (unicode, nodes.Node) -> None
"""Build a TOC from the doctree and store it in the inventory."""
numentries = [0] # nonlocal again...
def traverse_in_section(node, cls):
"""Like traverse(), but stay within the same section."""
result = []
if isinstance(node, cls):
result.append(node)
for child in node.children:
if isinstance(child, nodes.section):
continue
result.extend(traverse_in_section(child, cls))
return result
def build_toc(node, depth=1):
entries = []
for sectionnode in node:
# find all toctree nodes in this section and add them
# to the toc (just copying the toctree node which is then
# resolved in self.get_and_resolve_doctree)
if isinstance(sectionnode, addnodes.only):
onlynode = addnodes.only(expr=sectionnode['expr'])
blist = build_toc(sectionnode, depth)
if blist:
onlynode += blist.children
entries.append(onlynode)
continue
if not isinstance(sectionnode, nodes.section):
for toctreenode in traverse_in_section(sectionnode,
addnodes.toctree):
item = toctreenode.copy()
entries.append(item)
# important: do the inventory stuff
self.note_toctree(docname, toctreenode)
continue
title = sectionnode[0]
# copy the contents of the section title, but without references
# and unnecessary stuff
visitor = SphinxContentsFilter(doctree)
title.walkabout(visitor)
nodetext = visitor.get_entry_text()
if not numentries[0]:
# for the very first toc entry, don't add an anchor
# as it is the file's title anyway
anchorname = ''
else:
anchorname = '#' + sectionnode['ids'][0]
numentries[0] += 1
# make these nodes:
# list_item -> compact_paragraph -> reference
reference = nodes.reference(
'', '', internal=True, refuri=docname,
anchorname=anchorname, *nodetext)
para = addnodes.compact_paragraph('', '', reference)
item = nodes.list_item('', para)
sub_item = build_toc(sectionnode, depth + 1)
item += sub_item
entries.append(item)
if entries:
return nodes.bullet_list('', *entries)
return []
toc = build_toc(doctree)
if toc:
self.tocs[docname] = toc
else:
self.tocs[docname] = nodes.bullet_list('')
self.toc_num_entries[docname] = numentries[0]
def note_toctree(self, docname, toctreenode):
# type: (unicode, addnodes.toctree) -> None
"""Note a TOC tree directive in a document and gather information about
file relations from it.
"""
if toctreenode['glob']:
self.glob_toctrees.add(docname)
if toctreenode.get('numbered'):
self.numbered_toctrees.add(docname)
includefiles = toctreenode['includefiles']
for includefile in includefiles:
# note that if the included file is rebuilt, this one must be
# too (since the TOC of the included file could have changed)
self.files_to_rebuild.setdefault(includefile, set()).add(docname)
self.toctree_includes.setdefault(docname, []).extend(includefiles)
def get_toc_for(self, docname, builder):
# type: (unicode, Builder) -> nodes.Node
"""Return a TOC nodetree -- for use on the same page only!"""
tocdepth = self.env.metadata[docname].get('tocdepth', 0)
try:
toc = self.tocs[docname].deepcopy()
self._toctree_prune(toc, 2, tocdepth)
except KeyError:
# the document does not exist anymore: return a dummy node that
# renders to nothing
return nodes.paragraph()
process_only_nodes(toc, builder.tags, warn_node=self.env.warn_node)
for node in toc.traverse(nodes.reference):
node['refuri'] = node['anchorname'] or '#'
return toc
def get_toctree_for(self, docname, builder, collapse, **kwds):
# type: (unicode, Builder, bool, Any) -> nodes.Node
"""Return the global TOC nodetree."""
doctree = self.env.get_doctree(self.env.config.master_doc)
toctrees = []
if 'includehidden' not in kwds:
kwds['includehidden'] = True
if 'maxdepth' not in kwds:
kwds['maxdepth'] = 0
kwds['collapse'] = collapse
for toctreenode in doctree.traverse(addnodes.toctree):
toctree = self.env.resolve_toctree(docname, builder, toctreenode,
prune=True, **kwds)
if toctree:
toctrees.append(toctree)
if not toctrees:
return None
result = toctrees[0]
for toctree in toctrees[1:]:
result.extend(toctree.children)
return result
def resolve_toctree(self, docname, builder, toctree, prune=True, maxdepth=0,
titles_only=False, collapse=False, includehidden=False):
# type: (unicode, Builder, addnodes.toctree, bool, int, bool, bool, bool) -> nodes.Node
"""Resolve a *toctree* node into individual bullet lists with titles
as items, returning None (if no containing titles are found) or
a new node.
If *prune* is True, the tree is pruned to *maxdepth*, or if that is 0,
to the value of the *maxdepth* option on the *toctree* node.
If *titles_only* is True, only toplevel document titles will be in the
resulting tree.
If *collapse* is True, all branches not containing docname will
be collapsed.
"""
if toctree.get('hidden', False) and not includehidden:
return None
# For reading the following two helper functions, it is useful to keep
# in mind the node structure of a toctree (using HTML-like node names
# for brevity):
#
# <ul>
# <li>
# <p><a></p>
# <p><a></p>
# ...
# <ul>
# ...
# </ul>
# </li>
# </ul>
#
# The transformation is made in two passes in order to avoid
# interactions between marking and pruning the tree (see bug #1046).
toctree_ancestors = self.get_toctree_ancestors(docname)
def _toctree_add_classes(node, depth):
"""Add 'toctree-l%d' and 'current' classes to the toctree."""
for subnode in node.children:
if isinstance(subnode, (addnodes.compact_paragraph,
nodes.list_item)):
# for <p> and <li>, indicate the depth level and recurse
subnode['classes'].append('toctree-l%d' % (depth-1))
_toctree_add_classes(subnode, depth)
elif isinstance(subnode, nodes.bullet_list):
# for <ul>, just recurse
_toctree_add_classes(subnode, depth+1)
elif isinstance(subnode, nodes.reference):
# for <a>, identify which entries point to the current
# document and therefore may not be collapsed
if subnode['refuri'] == docname:
if not subnode['anchorname']:
# give the whole branch a 'current' class
# (useful for styling it differently)
branchnode = subnode
while branchnode:
branchnode['classes'].append('current')
branchnode = branchnode.parent
# mark the list_item as "on current page"
if subnode.parent.parent.get('iscurrent'):
# but only if it's not already done
return
while subnode:
subnode['iscurrent'] = True
subnode = subnode.parent
def _entries_from_toctree(toctreenode, parents,
separate=False, subtree=False):
"""Return TOC entries for a toctree node."""
refs = [(e[0], e[1]) for e in toctreenode['entries']]
entries = []
for (title, ref) in refs:
try:
refdoc = None
if url_re.match(ref):
if title is None:
title = ref
reference = nodes.reference('', '', internal=False,
refuri=ref, anchorname='',
*[nodes.Text(title)])
para = addnodes.compact_paragraph('', '', reference)
item = nodes.list_item('', para)
toc = nodes.bullet_list('', item)
elif ref == 'self':
# 'self' refers to the document from which this
# toctree originates
ref = toctreenode['parent']
if not title:
title = clean_astext(self.titles[ref])
reference = nodes.reference('', '', internal=True,
refuri=ref,
anchorname='',
*[nodes.Text(title)])
para = addnodes.compact_paragraph('', '', reference)
item = nodes.list_item('', para)
# don't show subitems
toc = nodes.bullet_list('', item)
else:
if ref in parents:
self.env.warn(ref, 'circular toctree references '
'detected, ignoring: %s <- %s' %
(ref, ' <- '.join(parents)))
continue
refdoc = ref
toc = self.tocs[ref].deepcopy()
maxdepth = self.env.metadata[ref].get('tocdepth', 0)
if ref not in toctree_ancestors or (prune and maxdepth > 0):
self._toctree_prune(toc, 2, maxdepth, collapse)
process_only_nodes(toc, builder.tags, warn_node=self.env.warn_node)
if title and toc.children and len(toc.children) == 1:
child = toc.children[0]
for refnode in child.traverse(nodes.reference):
if refnode['refuri'] == ref and \
not refnode['anchorname']:
refnode.children = [nodes.Text(title)]
if not toc.children:
# empty toc means: no titles will show up in the toctree
self.env.warn_node(
'toctree contains reference to document %r that '
'doesn\'t have a title: no link will be generated'
% ref, toctreenode)
except KeyError:
# this is raised if the included file does not exist
self.env.warn_node(
'toctree contains reference to nonexisting document %r'
% ref, toctreenode)
else:
# if titles_only is given, only keep the main title and
# sub-toctrees
if titles_only:
# delete everything but the toplevel title(s)
# and toctrees
for toplevel in toc:
# nodes with length 1 don't have any children anyway
if len(toplevel) > 1:
subtrees = toplevel.traverse(addnodes.toctree)
if subtrees:
toplevel[1][:] = subtrees
else:
toplevel.pop(1)
# resolve all sub-toctrees
for subtocnode in toc.traverse(addnodes.toctree):
if not (subtocnode.get('hidden', False) and
not includehidden):
i = subtocnode.parent.index(subtocnode) + 1
for item in _entries_from_toctree(
subtocnode, [refdoc] + parents,
subtree=True):
subtocnode.parent.insert(i, item)
i += 1
subtocnode.parent.remove(subtocnode)
if separate:
entries.append(toc)
else:
entries.extend(toc.children)
if not subtree and not separate:
ret = nodes.bullet_list()
ret += entries
return [ret]
return entries
maxdepth = maxdepth or toctree.get('maxdepth', -1)
if not titles_only and toctree.get('titlesonly', False):
titles_only = True
if not includehidden and toctree.get('includehidden', False):
includehidden = True
# NOTE: previously, this was separate=True, but that leads to artificial
# separation when two or more toctree entries form a logical unit, so
# separating mode is no longer used -- it's kept here for history's sake
tocentries = _entries_from_toctree(toctree, [], separate=False)
if not tocentries:
return None
newnode = addnodes.compact_paragraph('', '')
caption = toctree.attributes.get('caption')
if caption:
caption_node = nodes.caption(caption, '', *[nodes.Text(caption)])
caption_node.line = toctree.line
caption_node.source = toctree.source
caption_node.rawsource = toctree['rawcaption']
if hasattr(toctree, 'uid'):
# move uid to caption_node to translate it
caption_node.uid = toctree.uid
del toctree.uid
newnode += caption_node
newnode.extend(tocentries)
newnode['toctree'] = True
# prune the tree to maxdepth, also set toc depth and current classes
_toctree_add_classes(newnode, 1)
self._toctree_prune(newnode, 1, prune and maxdepth or 0, collapse)
if len(newnode[-1]) == 0: # No titles found
return None
# set the target paths in the toctrees (they are not known at TOC
# generation time)
for refnode in newnode.traverse(nodes.reference):
if not url_re.match(refnode['refuri']):
refnode['refuri'] = builder.get_relative_uri(
docname, refnode['refuri']) + refnode['anchorname']
return newnode
def get_toctree_ancestors(self, docname):
# type: (unicode) -> List[unicode]
parent = {}
for p, children in iteritems(self.toctree_includes):
for child in children:
parent[child] = p
ancestors = [] # type: List[unicode]
d = docname
while d in parent and d not in ancestors:
ancestors.append(d)
d = parent[d]
return ancestors
def _toctree_prune(self, node, depth, maxdepth, collapse=False):
# type: (nodes.Node, int, int, bool) -> None
"""Utility: Cut a TOC at a specified depth."""
for subnode in node.children[:]:
if isinstance(subnode, (addnodes.compact_paragraph,
nodes.list_item)):
# for <p> and <li>, just recurse
self._toctree_prune(subnode, depth, maxdepth, collapse)
elif isinstance(subnode, nodes.bullet_list):
# for <ul>, determine if the depth is too large or if the
# entry is to be collapsed
if maxdepth > 0 and depth > maxdepth:
subnode.parent.replace(subnode, [])
else:
# cull sub-entries whose parents aren't 'current'
if (collapse and depth > 1 and
'iscurrent' not in subnode.parent):
subnode.parent.remove(subnode)
else:
# recurse on visible children
self._toctree_prune(subnode, depth+1, maxdepth, collapse)
def assign_section_numbers(self):
# type: () -> List[unicode]
"""Assign a section number to each heading under a numbered toctree."""
# a list of all docnames whose section numbers changed
rewrite_needed = []
assigned = set() # type: Set[unicode]
old_secnumbers = self.toc_secnumbers
self.toc_secnumbers = self.env.toc_secnumbers = {}
def _walk_toc(node, secnums, depth, titlenode=None):
# titlenode is the title of the document, it will get assigned a
# secnumber too, so that it shows up in next/prev/parent rellinks
for subnode in node.children:
if isinstance(subnode, nodes.bullet_list):
numstack.append(0)
_walk_toc(subnode, secnums, depth-1, titlenode)
numstack.pop()
titlenode = None
elif isinstance(subnode, nodes.list_item):
_walk_toc(subnode, secnums, depth, titlenode)
titlenode = None
elif isinstance(subnode, addnodes.only):
# at this stage we don't know yet which sections are going
# to be included; just include all of them, even if it leads
# to gaps in the numbering
_walk_toc(subnode, secnums, depth, titlenode)
titlenode = None
elif isinstance(subnode, addnodes.compact_paragraph):
numstack[-1] += 1
if depth > 0:
number = tuple(numstack)
else:
number = None
secnums[subnode[0]['anchorname']] = \
subnode[0]['secnumber'] = number
if titlenode:
titlenode['secnumber'] = number
titlenode = None
elif isinstance(subnode, addnodes.toctree):
_walk_toctree(subnode, depth)
def _walk_toctree(toctreenode, depth):
if depth == 0:
return
for (title, ref) in toctreenode['entries']:
if url_re.match(ref) or ref == 'self' or ref in assigned:
# don't mess with those
continue
if ref in self.tocs:
secnums = self.toc_secnumbers[ref] = {}
assigned.add(ref)
_walk_toc(self.tocs[ref], secnums, depth,
self.env.titles.get(ref))
if secnums != old_secnumbers.get(ref):
rewrite_needed.append(ref)
for docname in self.numbered_toctrees:
assigned.add(docname)
doctree = self.env.get_doctree(docname)
for toctreenode in doctree.traverse(addnodes.toctree):
depth = toctreenode.get('numbered', 0)
if depth:
# every numbered toctree gets new numbering
numstack = [0]
_walk_toctree(toctreenode, depth)
return rewrite_needed
def assign_figure_numbers(self):
# type: () -> List[unicode]
"""Assign a figure number to each figure under a numbered toctree."""
rewrite_needed = []
assigned = set() # type: Set[unicode]
old_fignumbers = self.toc_fignumbers
self.toc_fignumbers = self.env.toc_fignumbers = {}
fignum_counter = {} # type: Dict[unicode, Dict[Tuple[int], int]]
def get_section_number(docname, section):
anchorname = '#' + section['ids'][0]
secnumbers = self.toc_secnumbers.get(docname, {})
if anchorname in secnumbers:
secnum = secnumbers.get(anchorname)
else:
secnum = secnumbers.get('')
return secnum or tuple()
def get_next_fignumber(figtype, secnum):
counter = fignum_counter.setdefault(figtype, {})
secnum = secnum[:self.env.config.numfig_secnum_depth]
counter[secnum] = counter.get(secnum, 0) + 1
return secnum + (counter[secnum],)
def register_fignumber(docname, secnum, figtype, fignode):
self.toc_fignumbers.setdefault(docname, {})
fignumbers = self.toc_fignumbers[docname].setdefault(figtype, {})
figure_id = fignode['ids'][0]
fignumbers[figure_id] = get_next_fignumber(figtype, secnum)
def _walk_doctree(docname, doctree, secnum):
for subnode in doctree.children:
if isinstance(subnode, nodes.section):
next_secnum = get_section_number(docname, subnode)
if next_secnum:
_walk_doctree(docname, subnode, next_secnum)
else:
_walk_doctree(docname, subnode, secnum)
continue
elif isinstance(subnode, addnodes.toctree):
for title, subdocname in subnode['entries']:
if url_re.match(subdocname) or subdocname == 'self':
# don't mess with those
continue
_walk_doc(subdocname, secnum)
continue
figtype = self.env.get_domain('std').get_figtype(subnode) # type: ignore
if figtype and subnode['ids']:
register_fignumber(docname, secnum, figtype, subnode)
_walk_doctree(docname, subnode, secnum)
def _walk_doc(docname, secnum):
if docname not in assigned:
assigned.add(docname)
doctree = self.env.get_doctree(docname)
_walk_doctree(docname, doctree, secnum)
if self.env.config.numfig:
_walk_doc(self.env.config.master_doc, tuple())
for docname, fignums in iteritems(self.toc_fignumbers):
if fignums != old_fignumbers.get(docname):
rewrite_needed.append(docname)
return rewrite_needed
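The parent-map walk in `get_toctree_ancestors` above can be sketched standalone; `toctree_ancestors` is an illustrative function name, with the body mirroring the method.

```python
def toctree_ancestors(toctree_includes, docname):
    # invert the docname -> included-children mapping ...
    parent = {}
    for p, children in toctree_includes.items():
        for child in children:
            parent[child] = p
    # ... then walk upwards; the "d not in ancestors" guard stops
    # on circular includes instead of looping forever
    ancestors = []
    d = docname
    while d in parent and d not in ancestors:
        ancestors.append(d)
        d = parent[d]
    return ancestors
```

Note that the result starts with the document itself, and the master document (which is never included anywhere) does not appear in the list, matching the behavior relied on by `resolve_toctree`.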

View File

@@ -10,7 +10,9 @@
:license: BSD, see LICENSE for details.
"""
import traceback
if False:
# For type annotation
from typing import Any # NOQA
class SphinxError(Exception):
@@ -31,16 +33,19 @@ class ExtensionError(SphinxError):
category = 'Extension error'
def __init__(self, message, orig_exc=None):
# type: (unicode, Exception) -> None
SphinxError.__init__(self, message)
self.orig_exc = orig_exc
def __repr__(self):
# type: () -> str
if self.orig_exc:
return '%s(%r, %r)' % (self.__class__.__name__,
self.message, self.orig_exc)
return '%s(%r)' % (self.__class__.__name__, self.message)
def __str__(self):
# type: () -> str
parent_str = SphinxError.__str__(self)
if self.orig_exc:
return '%s (exception: %s)' % (parent_str, self.orig_exc)
@@ -61,6 +66,7 @@ class VersionRequirementError(SphinxError):
class PycodeError(Exception):
def __str__(self):
# type: () -> str
res = self.args[0]
if len(self.args) > 1:
res += ' (exception was: %r)' % self.args[1]
@@ -71,10 +77,11 @@ class SphinxParallelError(SphinxError):
category = 'Sphinx parallel build error'
def __init__(self, orig_exc, traceback):
self.orig_exc = orig_exc
def __init__(self, message, traceback):
# type: (str, Any) -> None
self.message = message
self.traceback = traceback
def __str__(self):
return traceback.format_exception_only(
self.orig_exc.__class__, self.orig_exc)[0].strip()
# type: () -> str
return self.message
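The removed `__str__` above relied on `traceback.format_exception_only` to collapse an exception to its one-line summary; the pattern is useful on its own. A hedged sketch, with `one_line_error` as an invented helper name:

```python
import traceback

def one_line_error(exc):
    # format_exception_only returns a list of lines; for a simple
    # exception the first (and only) line is "Type: message\n"
    return traceback.format_exception_only(exc.__class__, exc)[0].strip()
```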

View File

@@ -15,6 +15,7 @@ import re
import sys
import inspect
import traceback
import warnings
from types import FunctionType, BuiltinFunctionType, MethodType
from six import PY2, iterkeys, iteritems, itervalues, text_type, class_types, \
@@ -22,6 +23,7 @@ from six import PY2, iterkeys, iteritems, itervalues, text_type, class_types, \
from docutils import nodes
from docutils.utils import assemble_option_dict
from docutils.parsers.rst import Directive
from docutils.statemachine import ViewList
import sphinx
@@ -29,15 +31,16 @@ from sphinx.util import rpartition, force_decode
from sphinx.locale import _
from sphinx.pycode import ModuleAnalyzer, PycodeError
from sphinx.application import ExtensionError
from sphinx.util import logging
from sphinx.util.nodes import nested_parse_with_titles
from sphinx.util.compat import Directive
from sphinx.util.inspect import getargspec, isdescriptor, safe_getmembers, \
safe_getattr, object_description, is_builtin_class_method, isenumattribute
safe_getattr, object_description, is_builtin_class_method, \
isenumclass, isenumattribute
from sphinx.util.docstrings import prepare_docstring
if False:
# For type annotation
from typing import Any, Callable, Iterator, Sequence, Tuple, Type, Union # NOQA
from typing import Any, Callable, Dict, Iterator, List, Sequence, Set, Tuple, Type, Union # NOQA
from types import ModuleType # NOQA
from docutils.utils import Reporter # NOQA
from sphinx.application import Sphinx # NOQA
@@ -50,6 +53,8 @@ try:
except ImportError:
typing = None
logger = logging.getLogger(__name__)
# This type isn't exposed directly in any modules, but can be found
# here in most Python versions
MethodDescriptorType = type(type.__subclasses__)
@@ -477,7 +482,7 @@ class Documenter(object):
#: true if the generated content may contain titles
titles_allowed = False
option_spec = {'noindex': bool_option}
option_spec = {'noindex': bool_option} # type: Dict[unicode, Callable]
@staticmethod
def get_attr(obj, name, *defargs):
@@ -579,24 +584,25 @@ class Documenter(object):
Returns True if successful, False if an error occurred.
"""
dbg = self.env.app.debug
if self.objpath:
dbg('[autodoc] from %s import %s',
self.modname, '.'.join(self.objpath))
logger.debug('[autodoc] from %s import %s',
self.modname, '.'.join(self.objpath))
try:
dbg('[autodoc] import %s', self.modname)
logger.debug('[autodoc] import %s', self.modname)
for modname in self.env.config.autodoc_mock_imports:
dbg('[autodoc] adding a mock module %s!', modname)
logger.debug('[autodoc] adding a mock module %s!', modname)
mock_import(modname)
__import__(self.modname)
with warnings.catch_warnings():
warnings.filterwarnings("ignore", category=ImportWarning)
__import__(self.modname)
parent = None
obj = self.module = sys.modules[self.modname]
dbg('[autodoc] => %r', obj)
logger.debug('[autodoc] => %r', obj)
for part in self.objpath:
parent = obj
dbg('[autodoc] getattr(_, %r)', part)
logger.debug('[autodoc] getattr(_, %r)', part)
obj = self.get_attr(obj, part)
dbg('[autodoc] => %r', obj)
logger.debug('[autodoc] => %r', obj)
self.object_name = part
self.parent = parent
self.object = obj
@@ -618,7 +624,7 @@ class Documenter(object):
traceback.format_exc()
if PY2:
errmsg = errmsg.decode('utf-8') # type: ignore
dbg(errmsg)
logger.debug(errmsg)
self.directive.warn(errmsg)
self.env.note_reread()
return False
@@ -826,6 +832,14 @@ class Documenter(object):
else:
members = [(mname, self.get_attr(self.object, mname, None))
for mname in list(iterkeys(obj_dict))]
# Py34 doesn't have enum members in __dict__.
if isenumclass(self.object):
members.extend(
item for item in self.object.__members__.items()
if item not in members
)
membernames = set(m[0] for m in members)
# add instance attributes from the analyzer
for aname in analyzed_member_names:
@@ -961,6 +975,7 @@ class Documenter(object):
tagorder = self.analyzer.tagorder
def keyfunc(entry):
# type: (Tuple[Documenter, bool]) -> int
fullname = entry[0].name.split('::')[1]
return tagorder.get(fullname, len(tagorder))
memberdocumenters.sort(key=keyfunc)
@@ -1012,7 +1027,7 @@ class Documenter(object):
# be cached anyway)
self.analyzer.find_attr_docs()
except PycodeError as err:
self.env.app.debug('[autodoc] module analyzer failed: %s', err)
logger.debug('[autodoc] module analyzer failed: %s', err)
# no source file -- e.g. for builtin and C modules
self.analyzer = None
# at least add the module.__file__ as a dependency
@@ -1066,7 +1081,7 @@ class ModuleDocumenter(Documenter):
'member-order': identity, 'exclude-members': members_set_option,
'private-members': bool_option, 'special-members': members_option,
'imported-members': bool_option,
}
} # type: Dict[unicode, Callable]
@classmethod
def can_document_member(cls, member, membername, isattr, parent):
@@ -1179,7 +1194,7 @@ class ClassLevelDocumenter(Documenter):
# ... if still None, there's no way to know
if mod_cls is None:
return None, []
modname, cls = rpartition(mod_cls, '.')
modname, cls = rpartition(mod_cls, '.') # type: ignore
parents = [cls]
# if the module name is still missing, get it like above
if not modname:
@@ -1318,7 +1333,7 @@ class ClassDocumenter(DocstringSignatureMixin, ModuleLevelDocumenter): # type:
'show-inheritance': bool_option, 'member-order': identity,
'exclude-members': members_set_option,
'private-members': bool_option, 'special-members': members_option,
}
} # type: Dict[unicode, Callable]
@classmethod
def can_document_member(cls, member, membername, isattr, parent):
@@ -1524,11 +1539,11 @@ class MethodDocumenter(DocstringSignatureMixin, ClassLevelDocumenter): # type:
# to distinguish classmethod/staticmethod
obj = self.parent.__dict__.get(self.object_name)
if isinstance(obj, classmethod): # type: ignore
if isinstance(obj, classmethod):
self.directivetype = 'classmethod'
# document class and static members before ordinary ones
self.member_order = self.member_order - 1
elif isinstance(obj, staticmethod): # type: ignore
elif isinstance(obj, staticmethod):
self.directivetype = 'staticmethod'
# document class and static members before ordinary ones
self.member_order = self.member_order - 1
@@ -1718,8 +1733,8 @@ class AutoDirective(Directive):
source, lineno = self.reporter.get_source_and_line(self.lineno)
except AttributeError:
source = lineno = None
self.env.app.debug('[autodoc] %s:%s: input:\n%s',
source, lineno, self.block_text)
logger.debug('[autodoc] %s:%s: input:\n%s',
source, lineno, self.block_text)
# find out what documenter to call
objtype = self.name[4:]
@@ -1748,7 +1763,7 @@ class AutoDirective(Directive):
if not self.result:
return self.warnings
self.env.app.debug2('[autodoc] output:\n%s', '\n'.join(self.result))
logger.debug('[autodoc] output:\n%s', '\n'.join(self.result))
# record all filenames as dependencies -- this will at least
# partially make automatic invalidation possible
@@ -1814,7 +1829,9 @@ class testcls:
"""test doc string"""
def __getattr__(self, x):
# type: (Any) -> Any
return x
def __setattr__(self, x, y):
# type: (Any, Any) -> None
"""Attr setter."""

View File

@@ -10,8 +10,11 @@
"""
from docutils import nodes
from sphinx.util import logging
from sphinx.util.nodes import clean_astext
logger = logging.getLogger(__name__)
def register_sections_as_label(app, document):
labels = app.env.domaindata['std']['labels']
@@ -23,8 +26,9 @@ def register_sections_as_label(app, document):
sectname = clean_astext(node[0])
if name in labels:
app.env.warn_node('duplicate label %s, ' % name + 'other instance '
'in ' + app.env.doc2path(labels[name][0]), node)
logger.warning('duplicate label %s, ' % name + 'other instance '
'in ' + app.env.doc2path(labels[name][0]),
location=node)
anonlabels[name] = docname, labelid
labels[name] = docname, labelid, sectname
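The duplicate-label handling in `register_sections_as_label` above boils down to: warn when a name is reused, then let the newest definition win. A standalone sketch (the function and its signature are illustrative, not Sphinx API):

```python
def note_label(labels, anonlabels, name, docname, labelid, sectname, warn):
    # warn on a duplicate, pointing at the earlier definition ...
    if name in labels:
        warn('duplicate label %s, other instance in %s' % (name, labels[name][0]))
    # ... then overwrite: the label now resolves to the newest section
    anonlabels[name] = docname, labelid
    labels[name] = docname, labelid, sectname
```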

View File

@@ -63,25 +63,27 @@ from types import ModuleType
from six import text_type
from docutils.parsers.rst import directives
from docutils.parsers.rst import Directive, directives
from docutils.statemachine import ViewList
from docutils import nodes
import sphinx
from sphinx import addnodes
from sphinx.util import import_object, rst
from sphinx.util.compat import Directive
from sphinx.environment.adapters.toctree import TocTree
from sphinx.util import import_object, rst, logging
from sphinx.pycode import ModuleAnalyzer, PycodeError
from sphinx.ext.autodoc import Options
if False:
# For type annotation
from typing import Any, Tuple, Type, Union # NOQA
from typing import Any, Dict, List, Tuple, Type, Union # NOQA
from docutils.utils import Inliner # NOQA
from sphinx.application import Sphinx # NOQA
from sphinx.environment import BuildEnvironment # NOQA
from sphinx.ext.autodoc import Documenter # NOQA
logger = logging.getLogger(__name__)
# -- autosummary_toc node ------------------------------------------------------
@@ -103,14 +105,14 @@ def process_autosummary_toc(app, doctree):
try:
if (isinstance(subnode, autosummary_toc) and
isinstance(subnode[0], addnodes.toctree)):
env.note_toctree(env.docname, subnode[0])
TocTree(env).note(env.docname, subnode[0])
continue
except IndexError:
continue
if not isinstance(subnode, nodes.section):
continue
if subnode not in crawled:
crawl_toc(subnode, depth+1)
crawl_toc(subnode, depth + 1)
crawl_toc(doctree)
@@ -283,7 +285,7 @@ class Autosummary(Directive):
if not isinstance(obj, ModuleType):
# give explicitly separated module name, so that members
# of inner classes can be documented
full_name = modname + '::' + full_name[len(modname)+1:]
full_name = modname + '::' + full_name[len(modname) + 1:]
# NB. using full_name here is important, since Documenters
# handle module prefixes slightly differently
documenter = get_documenter(obj, parent)(self, full_name)
@@ -306,8 +308,7 @@ class Autosummary(Directive):
# be cached anyway)
documenter.analyzer.find_attr_docs()
except PycodeError as err:
documenter.env.app.debug(
'[autodoc] module analyzer failed: %s', err)
logger.debug('[autodoc] module analyzer failed: %s', err)
# no source file -- e.g. for builtin and C modules
documenter.analyzer = None
@@ -319,7 +320,6 @@ class Autosummary(Directive):
else:
max_chars = max(10, max_item_chars - len(display_name))
sig = mangle_signature(sig, max_chars=max_chars)
sig = sig.replace('*', r'\*')
# -- Grab the summary
@@ -357,7 +357,7 @@ class Autosummary(Directive):
*items* is a list produced by :meth:`get_items`.
"""
table_spec = addnodes.tabular_col_spec()
table_spec['spec'] = 'p{0.5\linewidth}p{0.5\linewidth}'
table_spec['spec'] = r'p{0.5\linewidth}p{0.5\linewidth}'
table = autosummary_table('')
real_table = nodes.table('', classes=['longtable'])
@@ -388,7 +388,7 @@ class Autosummary(Directive):
for name, sig, summary, real_name in items:
qualifier = 'obj'
if 'nosignatures' not in self.options:
col1 = ':%s:`%s <%s>`\ %s' % (qualifier, name, real_name, rst.escape(sig)) # type: unicode # NOQA
col1 = ':%s:`%s <%s>`\\ %s' % (qualifier, name, real_name, rst.escape(sig)) # type: unicode # NOQA
else:
col1 = ':%s:`%s <%s>`' % (qualifier, name, real_name)
col2 = summary
@@ -423,13 +423,13 @@ def mangle_signature(sig, max_chars=30):
s = m.group(1)[:-2]
# Produce a more compact signature
sig = limited_join(", ", args, max_chars=max_chars-2)
sig = limited_join(", ", args, max_chars=max_chars - 2)
if opts:
if not sig:
sig = "[%s]" % limited_join(", ", opts, max_chars=max_chars-4)
sig = "[%s]" % limited_join(", ", opts, max_chars=max_chars - 4)
elif len(sig) < max_chars - 4 - 2 - 3:
sig += "[, %s]" % limited_join(", ", opts,
max_chars=max_chars-len(sig)-4-2)
max_chars=max_chars - len(sig) - 4 - 2)
return u"(%s)" % sig
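For context, the budget arithmetic in the `mangle_signature` hunk above reserves 2 characters for the parentheses and 4 for the `[, ]` wrapper around optional arguments, truncating each group with a `limited_join` helper. A minimal, self-contained sketch of the same idea (this `limited_join` is a simplified stand-in, not Sphinx's exact implementation):

```python
def limited_join(sep, items, max_chars=30, overflow_marker="..."):
    """Join strings, cutting the list off with an overflow marker once
    the joined result would exceed max_chars (simplified stand-in for
    the helper used by sphinx.ext.autosummary.mangle_signature)."""
    full = sep.join(items)
    if len(full) <= max_chars:
        return full
    n_chars = 0
    n_items = 0
    for item in items:
        n_chars += len(item) + len(sep)
        if n_chars < max_chars - len(overflow_marker):
            n_items += 1
        else:
            break
    return sep.join(list(items[:n_items]) + [overflow_marker])


def mangle_signature(args, opts, max_chars=30):
    """Compact '(args[, opts])' using the same budget arithmetic as the
    hunk above: 2 chars for '()', 4 for '[, ]'."""
    sig = limited_join(", ", args, max_chars=max_chars - 2)
    if opts:
        if not sig:
            sig = "[%s]" % limited_join(", ", opts, max_chars=max_chars - 4)
        elif len(sig) < max_chars - 4 - 2 - 3:
            sig += "[, %s]" % limited_join(", ", opts,
                                           max_chars=max_chars - len(sig) - 4 - 2)
    return "(%s)" % sig
```

For example, `mangle_signature(["x", "y"], ["z=None"])` yields `(x, y[, z=None])`, while a long positional list such as `["alpha", "beta", "gamma", "delta"]` with `max_chars=15` is cut down to `(alpha, ...)`.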
@@ -521,7 +521,7 @@ def _import_by_name(name):
# ... then as MODNAME, MODNAME.OBJ1, MODNAME.OBJ1.OBJ2, ...
last_j = 0
modname = None
for j in reversed(range(1, len(name_parts)+1)):
for j in reversed(range(1, len(name_parts) + 1)):
last_j = j
modname = '.'.join(name_parts[:j])
try:
@@ -546,8 +546,7 @@ def _import_by_name(name):
# -- :autolink: (smart default role) -------------------------------------------
def autolink_role(typ, rawtext, etext, lineno, inliner,
options={}, content=[]):
def autolink_role(typ, rawtext, etext, lineno, inliner, options={}, content=[]):
# type: (unicode, unicode, unicode, int, Inliner, Dict, List[unicode]) -> Tuple[List[nodes.Node], List[nodes.Node]] # NOQA
"""Smart linking role.
@@ -555,6 +554,7 @@ def autolink_role(typ, rawtext, etext, lineno, inliner,
otherwise expands to '*text*'.
"""
env = inliner.document.settings.env
r = None # type: Tuple[List[nodes.Node], List[nodes.Node]]
r = env.get_domain('py').role('obj')(
'obj', rawtext, etext, lineno, inliner, options, content)
pnode = r[0][0]
@@ -563,9 +563,9 @@ def autolink_role(typ, rawtext, etext, lineno, inliner,
try:
name, obj, parent, modname = import_by_name(pnode['reftarget'], prefixes)
except ImportError:
content = pnode[0]
r[0][0] = nodes.emphasis(rawtext, content[0].astext(), # type: ignore
classes=content['classes']) # type: ignore
content_node = pnode[0]
r[0][0] = nodes.emphasis(rawtext, content_node[0].astext(),
classes=content_node['classes'])
return r
@@ -581,7 +581,7 @@ def get_rst_suffix(app):
return parser_class.supported
suffix = None # type: unicode
for suffix in app.config.source_suffix: # type: ignore
for suffix in app.config.source_suffix:
if 'restructuredtext' in get_supported_format(suffix):
return suffix
@@ -608,13 +608,13 @@ def process_generate_options(app):
suffix = get_rst_suffix(app)
if suffix is None:
app.warn('autosummary generates .rst files internally. '
         'But your source_suffix does not contain .rst. Skipped.')
logger.warning('autosummary generates .rst files internally. '
               'But your source_suffix does not contain .rst. Skipped.')
return
generate_autosummary_docs(genfiles, builder=app.builder,
warn=app.warn, info=app.info, suffix=suffix,
base_path=app.srcdir)
warn=logger.warning, info=logger.info,
suffix=suffix, base_path=app.srcdir)
def setup(app):
Some files were not shown because too many files have changed in this diff Show More