diff --git a/.hgignore b/.hgignore index 45ecebc31..16d29fcf8 100644 --- a/.hgignore +++ b/.hgignore @@ -7,15 +7,15 @@ ^build/ ^dist/ ^tests/.coverage +^tests/build/ ^sphinx/pycode/Grammar.*pickle ^Sphinx.egg-info/ ^doc/_build/ ^TAGS +^\.tags ^\.ropeproject/ ^env/ \.DS_Store$ ~$ ^utils/.*3\.py$ ^distribute- -^tests/root/_build/* -^tests/root/generated/* diff --git a/CHANGES b/CHANGES index 6279c4467..51b3b5dba 100644 --- a/CHANGES +++ b/CHANGES @@ -12,12 +12,14 @@ Incompatible changes * A new node, ``sphinx.addnodes.literal_strong``, has been added, for text that should appear literally (i.e. no smart quotes) in strong font. Custom writers will have to be adapted to handle this node. -* PR#269, #1476: replace `<tt>` tag by `<code>`. User customized stylesheets - should be updated If the css contain some styles for `<tt>` tag. +* PR#269, #1476: replace ``<tt>`` tag by ``<code>``. User customized stylesheets + should be updated If the css contain some styles for ``<tt>`` tag. Thanks to Takeshi Komiya. -* #1543: :confval:`templates_path` is automatically added to - :confval:`exclude_patterns` to avoid reading autosummary rst templates in the +* #1543: `templates_path` is automatically added to + `exclude_patterns` to avoid reading autosummary rst templates in the templates directory. +* Custom domains should implement the new `Domain.resolve_any_xref` + method to make the `any` role work properly. Features added -------------- @@ -26,22 +28,31 @@ Features added * Add support for docutils 0.12 * Added ``sphinx.ext.napoleon`` extension for NumPy and Google style docstring support. +* Added support for parallel reading (parsing) of source files with the + `sphinx-build -j` option. Third-party extensions will need to be checked for + compatibility and may need to be adapted if they store information in the + build environment object. See `env-merge-info`. +* Added the `any` role that can be used to find a cross-reference of + *any* type in *any* domain. 
Custom domains should implement the new + `Domain.resolve_any_xref` method to make this work properly. * Exception logs now contain the last 10 messages emitted by Sphinx. * Added support for extension versions (a string returned by ``setup()``, these can be shown in the traceback log files). Version requirements for extensions - can be specified in projects using the new :confval:`needs_extensions` config + can be specified in projects using the new `needs_extensions` config value. +* Changing the default role within a document with the :dudir:`default-role` + directive is now supported. * PR#214: Added stemming support for 14 languages, so that the built-in document search can now handle these. Thanks to Shibukawa Yoshiki. * PR#202: Allow "." and "~" prefixed references in ``:param:`` doc fields for Python. -* PR#184: Add :confval:`autodoc_mock_imports`, allowing to mock imports of +* PR#184: Add `autodoc_mock_imports`, allowing to mock imports of external modules that need not be present when autodocumenting. * #925: Allow list-typed config values to be provided on the command line, like ``-D key=val1,val2``. -* #668: Allow line numbering of ``code-block`` and ``literalinclude`` directives +* #668: Allow line numbering of `code-block` and `literalinclude` directives to start at an arbitrary line number, with a new ``lineno-start`` option. -* PR#172, PR#266: The :rst:dir:`code-block` and :rst:dir:`literalinclude` +* PR#172, PR#266: The `code-block` and `literalinclude` directives now can have a ``caption`` option that shows a filename before the code in the output. Thanks to Nasimul Haque, Takeshi Komiya. * Prompt for the document language in sphinx-quickstart. @@ -56,135 +67,43 @@ Features added for the ids defined on the node. Thanks to Olivier Heurtier. * PR#229: Allow registration of other translators. Thanks to Russell Sim. * Add app.set_translator() API to register or override a Docutils translator - class like :confval:`html_translator_class`. 
+ class like `html_translator_class`. * PR#267, #1134: add 'diff' parameter to literalinclude. Thanks to Richard Wall and WAKAYAMA shirou. * PR#272: Added 'bizstyle' theme. Thanks to Shoji KUMAGAI. * Automatically compile ``*.mo`` files from ``*.po`` files when - :confval:`gettext_auto_build` is True (default) and ``*.po`` is newer than + `gettext_auto_build` is True (default) and ``*.po`` is newer than ``*.mo`` file. -* #623: :mod:`~sphinx.ext.viewcode` supports imported function/class aliases. -* PR#275: :mod:`~sphinx.ext.intersphinx` supports multiple target for the +* #623: `sphinx.ext.viewcode` supports imported function/class aliases. +* PR#275: `sphinx.ext.intersphinx` supports multiple target for the inventory. Thanks to Brigitta Sipocz. +* PR#261: Added the `env-before-read-docs` event that can be connected to modify + the order of documents before they are read by the environment. +* #1284: Program options documented with :rst:dir:`option` can now start with + ``+``. +* PR#291: The caption of :rst:dir:`code-block` is recognised as a title of ref + target. Thanks to Takeshi Komiya. Bugs fixed ---------- -* #1568: fix a crash when a "centered" directive contains a reference. +* #1568: Fix a crash when a "centered" directive contains a reference. * #1563: :meth:`~sphinx.application.Sphinx.add_search_language` raises AssertionError for correct type of argument. Thanks to rikoman. * #1174: Fix smart quotes being applied inside roles like :rst:role:`program` or - :rst:role:`makevar`. -* #1335: Fix autosummary template overloading with exclamation prefix like - ``{% extends "!autosummary/class.rst" %}`` cause infinite recursive function - call. This was caused by PR#181. -* #1337: Fix autodoc with ``autoclass_content="both"`` uses useless - ``object.__init__`` docstring when class does not have ``__init__``. - This was caused by a change for #1138. -* #1340: Can't search alphabetical words on the HTML quick search generated - with language='ja'. 
-* #1319: Do not crash if the :confval:`html_logo` file does not exist. -* #603: Do not use the HTML-ized title for building the search index (that - resulted in "literal" being found on every page with a literal in the - title). -* #751: Allow production lists longer than a page in LaTeX by using longtable. -* #764: Always look for stopwords lowercased in JS search. -* #814: autodoc: Guard against strange type objects that don't have - ``__bases__``. -* #932: autodoc: Do not crash if ``__doc__`` is not a string. -* #933: Do not crash if an :rst:role:`option` value is malformed (contains - spaces but no option name). -* #908: On Python 3, handle error messages from LaTeX correctly in the pngmath - extension. -* #943: In autosummary, recognize "first sentences" to pull from the docstring - if they contain uppercase letters. -* #923: Take the entire LaTeX document into account when caching - pngmath-generated images. This rebuilds them correctly when - :confval:`pngmath_latex_preamble` changes. -* #901: Emit a warning when using docutils' new "math" markup without a Sphinx - math extension active. -* #845: In code blocks, when the selected lexer fails, display line numbers - nevertheless if configured. -* #929: Support parsed-literal blocks in LaTeX output correctly. -* #949: Update the tabulary.sty packed with Sphinx. -* #1050: Add anonymous labels into ``objects.inv`` to be referenced via - :mod:`~sphinx.ext.intersphinx`. -* #1095: Fix print-media stylesheet being included always in the "scrolls" - theme. -* #1085: Fix current classname not getting set if class description has - ``:noindex:`` set. -* #1181: Report option errors in autodoc directives more gracefully. -* #1155: Fix autodocumenting C-defined methods as attributes in Python 3. -* #1233: Allow finding both Python classes and exceptions with the "class" and - "exc" roles in intersphinx. -* #1198: Allow "image" for the "figwidth" option of the :rst:dir:`figure` - directive as documented by docutils. 
-* #1152: Fix pycode parsing errors of Python 3 code by including two grammar - versions for Python 2 and 3, and loading the appropriate version for the - running Python version. -* #1017: Be helpful and tell the user when the argument to :rst:dir:`option` - does not match the required format. -* #1345: Fix two bugs with :confval:`nitpick_ignore`; now you don't have to - remove the store environment for changes to have effect. -* #1072: In the JS search, fix issues searching for upper-cased words by - lowercasing words before stemming. -* #1299: Make behavior of the :rst:dir:`math` directive more consistent and - avoid producing empty environments in LaTeX output. -* #1308: Strip HTML tags from the content of "raw" nodes before feeding it - to the search indexer. -* #1249: Fix duplicate LaTeX page numbering for manual documents. -* #1292: In the linkchecker, retry HEAD requests when denied by HTTP 405. - Also make the redirect code apparent and tweak the output a bit to be - more obvious. -* #1285: Avoid name clashes between C domain objects and section titles. -* #848: Always take the newest code in incremental rebuilds with the - :mod:`sphinx.ext.viewcode` extension. -* #979, #1266: Fix exclude handling in ``sphinx-apidoc``. -* #1302: Fix regression in :mod:`sphinx.ext.inheritance_diagram` when - documenting classes that can't be pickled. -* #1316: Remove hard-coded ``font-face`` resources from epub theme. -* #1329: Fix traceback with empty translation msgstr in .po files. -* #1300: Fix references not working in translated documents in some instances. -* #1283: Fix a bug in the detection of changed files that would try to access - doctrees of deleted documents. -* #1330: Fix :confval:`exclude_patterns` behavior with subdirectories in the - :confval:`html_static_path`. -* #1323: Fix emitting empty ``
<p>`` tags in the HTML writer, which is not -  valid HTML. -* #1147: Don't emit a sidebar search box in the "singlehtml" builder. -* PR#211: When checking for existence of the :confval:`html_logo` file, check - the full relative path and not the basename. -* #1357: Option names documented by :rst:dir:`option` are now again allowed to - not start with a dash or slash, and referencing them will work correctly. -* #1358: Fix handling of image paths outside of the source directory when using - the "wildcard" style reference. -* #1374: Fix for autosummary generating overly-long summaries if first line - doesn't end with a period. -* #1391: Actually prevent using "pngmath" and "mathjax" extensions at the same - time in sphinx-quickstart. -* #1386: Fix bug preventing more than one theme being added by the entry point - mechanism. -* #1370: Ignore "toctree" nodes in text writer, instead of raising. -* #1364: Fix 'make gettext' fails when the '.. todolist::' directive is present. -* #1367: Fix a change of PR#96 that break sphinx.util.docfields.Field.make_field - interface/behavior for `item` argument usage. -* #1363: Fix i18n: missing python domain's cross-references with currentmodule - directive or currentclass directive. -* #1419: Generated i18n sphinx.js files are missing message catalog entries - from '.js_t' and '.html'. The issue was introduced in Sphinx 1.1. -* #636: Keep straight single quotes in literal blocks in the LaTeX build. + `makevar`. * PR#235: comment db schema of websupport lacked a length of the node_id field. Thanks to solos. * #1466,PR#241: Fix failure of the cpp domain parser to parse C++11 "variadic templates" declarations. Thanks to Victor Zverovich. -* #1459,PR#244: Fix default mathjax js path point to `http://` that cause +* #1459,PR#244: Fix default mathjax js path point to ``http://`` that cause mixed-content error on HTTPS server. Thanks to sbrandtb and robo9k. * PR#157: autodoc remove spurious signatures from @property decorated attributes. 
Thanks to David Ham. * PR#159: Add coverage targets to quickstart generated Makefile and make.bat. Thanks to Matthias Troffaes. * #1251: When specifying toctree :numbered: option and :tocdepth: metadata, - sub section number that is larger depth than `:tocdepth:` is shrinked. + sub section number that is larger depth than ``:tocdepth:`` is shrunk. * PR#260: Encode underscore in citation labels for latex export. Thanks to Lennart Fricke. * PR#264: Fix could not resolve xref for figure node with :name: option. @@ -208,8 +127,8 @@ Bugs fixed qualified name. It should be rather easy to change this behaviour and potentially index by namespaces/classes as well. -* PR#258, #939: Add dedent option for :rst:dir:`code-block` and - :rst:dir:`literal-include`. Thanks to Zafar Siddiqui. +* PR#258, #939: Add dedent option for `code-block` and + `literalinclude`. Thanks to Zafar Siddiqui. * PR#268: Fix numbering section does not work at singlehtml mode. It still ad-hoc fix because there is a issue that section IDs are conflicted. Thanks to Takeshi Komiya. @@ -217,20 +136,18 @@ Bugs fixed Takeshi Komiya. * PR#274: Set its URL as a default title value if URL appears in toctree. Thanks to Takeshi Komiya. -* PR#276, #1381: :rst:role:`rfc` and :rst:role:`pep` roles support custom link +* PR#276, #1381: `rfc` and `pep` roles support custom link text. Thanks to Takeshi Komiya. * PR#277, #1513: highlights for function pointers in argument list of - :rst:dir:`c:function`. Thanks to Takeshi Komiya. + `c:function`. Thanks to Takeshi Komiya. * PR#278: Fix section entries were shown twice if toctree has been put under only directive. Thanks to Takeshi Komiya. -* #1547: pgen2 tokenizer doesn't recognize `...` literal (Ellipsis for py3). +* #1547: pgen2 tokenizer doesn't recognize ``...`` literal (Ellipsis for py3). Documentation ------------- * Add clarification about the syntax of tags. (:file:`doc/markup/misc.rst`) -* #1325: Added a "Intersphinx" tutorial section. 
(:file:`doc/tutorial.rst`) -* Extended the :ref:`documentation about building extensions `. Release 1.2.3 (released Sep 1, 2014) @@ -239,7 +156,7 @@ Release 1.2.3 (released Sep 1, 2014) Features added -------------- -* #1518: `sphinx-apidoc` command now have a `--version` option to show version +* #1518: ``sphinx-apidoc`` command now has a ``--version`` option to show version information and exit * New locales: Hebrew, European Portuguese, Vietnamese. @@ -257,14 +174,14 @@ Bugs fixed Thanks to Jorge_C. * #1467: Exception on Python3 if nonexistent method is specified by automethod * #1441: autosummary can't handle nested classes correctly. -* #1499: With non-callable `setup` in a conf.py, now sphinx-build emits - user-friendly error message. +* #1499: With non-callable ``setup`` in a conf.py, now sphinx-build emits + a user-friendly error message. * #1502: In autodoc, fix display of parameter defaults containing backslashes. * #1226: autodoc, autosummary: importing setup.py by automodule will invoke - setup process and execute `sys.exit()`. Now sphinx avoids SystemExit + setup process and execute ``sys.exit()``. Now sphinx avoids SystemExit exception and emits warnings without unexpected termination. * #1503: py:function directive generate incorrectly signature when specifying - a default parameter with an empty list `[]`. Thanks to Geert Jansen. + a default parameter with an empty list ``[]``. Thanks to Geert Jansen. * #1508: Non-ASCII filename raise exception on make singlehtml, latex, man, texinfo and changes. * #1531: On Python3 environment, docutils.conf with 'source_link=true' in the @@ -274,11 +191,11 @@ Bugs fixed * PR#281, PR#282, #1509: TODO extension not compatible with websupport. Thanks to Takeshi Komiya. * #1477: gettext does not extract nodes.line in a table or list. -* #1544: `make text` generate wrong table when it has empty table cells. +* #1544: ``make text`` generates wrong table when it has empty table cells. 
* #1522: Footnotes from table get displayed twice in LaTeX. This problem has been appeared from Sphinx-1.2.1 by #949. * #508: Sphinx every time exit with zero when is invoked from setup.py command. - ex. `python setup.py build_sphinx -b doctest` return zero even if doctest + ex. ``python setup.py build_sphinx -b doctest`` return zero even if doctest failed. Release 1.2.2 (released Mar 2, 2014) @@ -287,7 +204,7 @@ Release 1.2.2 (released Mar 2, 2014) Bugs fixed ---------- -* PR#211: When checking for existence of the :confval:`html_logo` file, check +* PR#211: When checking for existence of the `html_logo` file, check the full relative path and not the basename. * PR#212: Fix traceback with autodoc and ``__init__`` methods without docstring. * PR#213: Fix a missing import in the setup command. @@ -305,7 +222,7 @@ Bugs fixed * #1370: Ignore "toctree" nodes in text writer, instead of raising. * #1364: Fix 'make gettext' fails when the '.. todolist::' directive is present. * #1367: Fix a change of PR#96 that break sphinx.util.docfields.Field.make_field - interface/behavior for `item` argument usage. + interface/behavior for ``item`` argument usage. Documentation ------------- @@ -327,7 +244,7 @@ Bugs fixed This was caused by a change for #1138. * #1340: Can't search alphabetical words on the HTML quick search generated with language='ja'. -* #1319: Do not crash if the :confval:`html_logo` file does not exist. +* #1319: Do not crash if the `html_logo` file does not exist. * #603: Do not use the HTML-ized title for building the search index (that resulted in "literal" being found on every page with a literal in the title). @@ -344,7 +261,7 @@ Bugs fixed if they contain uppercase letters. * #923: Take the entire LaTeX document into account when caching pngmath-generated images. This rebuilds them correctly when - :confval:`pngmath_latex_preamble` changes. + `pngmath_latex_preamble` changes. 
* #901: Emit a warning when using docutils' new "math" markup without a Sphinx math extension active. * #845: In code blocks, when the selected lexer fails, display line numbers @@ -361,14 +278,14 @@ Bugs fixed * #1155: Fix autodocumenting C-defined methods as attributes in Python 3. * #1233: Allow finding both Python classes and exceptions with the "class" and "exc" roles in intersphinx. -* #1198: Allow "image" for the "figwidth" option of the :rst:dir:`figure` +* #1198: Allow "image" for the "figwidth" option of the :dudir:`figure` directive as documented by docutils. * #1152: Fix pycode parsing errors of Python 3 code by including two grammar versions for Python 2 and 3, and loading the appropriate version for the running Python version. * #1017: Be helpful and tell the user when the argument to :rst:dir:`option` does not match the required format. -* #1345: Fix two bugs with :confval:`nitpick_ignore`; now you don't have to +* #1345: Fix two bugs with `nitpick_ignore`; now you don't have to remove the store environment for changes to have effect. * #1072: In the JS search, fix issues searching for upper-cased words by lowercasing words before stemming. @@ -391,8 +308,8 @@ Bugs fixed * #1300: Fix references not working in translated documents in some instances. * #1283: Fix a bug in the detection of changed files that would try to access doctrees of deleted documents. -* #1330: Fix :confval:`exclude_patterns` behavior with subdirectories in the - :confval:`html_static_path`. +* #1330: Fix `exclude_patterns` behavior with subdirectories in the + `html_static_path`. * #1323: Fix emitting empty ``
<p>`` tags in the HTML writer, which is not valid HTML. * #1147: Don't emit a sidebar search box in the "singlehtml" builder. @@ -424,7 +341,7 @@ Bugs fixed * Restore ``versionmodified`` CSS class for versionadded/changed and deprecated directives. -* PR#181: Fix `html_theme_path=['.']` is a trigger of rebuild all documents +* PR#181: Fix ``html_theme_path = ['.']`` is a trigger of rebuild all documents always (This change keeps the current "theme changes cause a rebuild" feature). @@ -491,7 +408,7 @@ Features added * Support docutils.conf 'writers' and 'html4css1 writer' section in the HTML writer. The latex, manpage and texinfo writers also support their respective 'writers' sections. -* The new :confval:`html_extra_path` config value allows to specify directories +* The new `html_extra_path` config value allows to specify directories with files that should be copied directly to the HTML output directory. * Autodoc directives for module data and attributes now support an ``annotation`` option, so that the default display of the data/attribute @@ -562,10 +479,10 @@ Incompatible changes * Removed ``sphinx.util.compat.directive_dwim()`` and ``sphinx.roles.xfileref_role()`` which were deprecated since version 1.0. -* PR#122: the files given in :confval:`latex_additional_files` now override TeX +* PR#122: the files given in `latex_additional_files` now override TeX files included by Sphinx, such as ``sphinx.sty``. -* PR#124: the node generated by :rst:dir:`versionadded`, - :rst:dir:`versionchanged` and :rst:dir:`deprecated` directives now includes +* PR#124: the node generated by `versionadded`, + `versionchanged` and `deprecated` directives now includes all added markup (such as "New in version X") as child nodes, and no additional text must be generated by writers. * PR#99: the :rst:dir:`seealso` directive now generates admonition nodes instead @@ -619,7 +536,7 @@ Features added asterisks ("*"). 
- The default value for the ``paragraphindent`` has been changed from 2 to 0 meaning that paragraphs are no longer indented by default. - - #1110: A new configuration value :confval:`texinfo_no_detailmenu` has been + - #1110: A new configuration value `texinfo_no_detailmenu` has been added for controlling whether a ``@detailmenu`` is added in the "Top" node's menu. - Detailed menus are no longer created except for the "Top" node. @@ -628,16 +545,16 @@ Features added * LaTeX builder: - - PR#115: Add ``'transition'`` item in :confval:`latex_elements` for + - PR#115: Add ``'transition'`` item in `latex_elements` for customizing how transitions are displayed. Thanks to Jeff Klukas. - PR#114: The LaTeX writer now includes the "cmap" package by default. The - ``'cmappkg'`` item in :confval:`latex_elements` can be used to control this. + ``'cmappkg'`` item in `latex_elements` can be used to control this. Thanks to Dmitry Shachnev. - - The ``'fontpkg'`` item in :confval:`latex_elements` now defaults to ``''`` - when the :confval:`language` uses the Cyrillic script. Suggested by Dmitry + - The ``'fontpkg'`` item in `latex_elements` now defaults to ``''`` + when the `language` uses the Cyrillic script. Suggested by Dmitry Shachnev. - - The :confval:`latex_documents`, :confval:`texinfo_documents`, and - :confval:`man_pages` configuration values will be set to default values based + - The `latex_documents`, `texinfo_documents`, and + `man_pages` configuration values will be set to default values based on the :confval:`master_doc` if not explicitly set in :file:`conf.py`. Previously, if these values were not set, no output would be generated by their respective builders. @@ -655,13 +572,13 @@ Features added - Added the Docutils-native XML and pseudo-XML builders. See :class:`XMLBuilder` and :class:`PseudoXMLBuilder`. - PR#45: The linkcheck builder now checks ``#anchor``\ s for existence. - - PR#123, #1106: Add :confval:`epub_use_index` configuration value. 
If - provided, it will be used instead of :confval:`html_use_index` for epub + - PR#123, #1106: Add `epub_use_index` configuration value. If + provided, it will be used instead of `html_use_index` for epub builder. - - PR#126: Add :confval:`epub_tocscope` configuration value. The setting + - PR#126: Add `epub_tocscope` configuration value. The setting controls the generation of the epub toc. The user can now also include hidden toc entries. - - PR#112: Add :confval:`epub_show_urls` configuration value. + - PR#112: Add `epub_show_urls` configuration value. * Extensions: @@ -729,7 +646,7 @@ Bugs fixed * #1127: Fix traceback when autodoc tries to tokenize a non-Python file. * #1126: Fix double-hyphen to en-dash conversion in wrong places such as command-line option names in LaTeX. -* #1123: Allow whitespaces in filenames given to :rst:dir:`literalinclude`. +* #1123: Allow whitespaces in filenames given to `literalinclude`. * #1120: Added improvements about i18n for themes "basic", "haiku" and "scrolls" that Sphinx built-in. Thanks to Leonardo J. Caballero G. * #1118: Updated Spanish translation. Thanks to Leonardo J. Caballero G. @@ -737,7 +654,7 @@ Bugs fixed * #1112: Avoid duplicate download files when referenced from documents in different ways (absolute/relative). * #1111: Fix failure to find uppercase words in search when - :confval:`html_search_language` is 'ja'. Thanks to Tomo Saito. + `html_search_language` is 'ja'. Thanks to Tomo Saito. * #1108: The text writer now correctly numbers enumerated lists with non-default start values (based on patch by Ewan Edwards). * #1102: Support multi-context "with" statements in autodoc. @@ -802,7 +719,7 @@ Release 1.1.3 (Mar 10, 2012) * #860: Do not crash when encountering invalid doctest examples, just emit a warning. -* #864: Fix crash with some settings of :confval:`modindex_common_prefix`. +* #864: Fix crash with some settings of `modindex_common_prefix`. * #862: Fix handling of ``-D`` and ``-A`` options on Python 3. 
@@ -866,7 +783,7 @@ Release 1.1 (Oct 9, 2011) Incompatible changes -------------------- -* The :rst:dir:`py:module` directive doesn't output its ``platform`` option +* The `py:module` directive doesn't output its ``platform`` option value anymore. (It was the only thing that the directive did output, and therefore quite inconsistent.) @@ -902,7 +819,7 @@ Features added :rst:dir:`toctree`\'s ``numbered`` option. - #586: Implemented improved :rst:dir:`glossary` markup which allows multiple terms per definition. - - #478: Added :rst:dir:`py:decorator` directive to describe decorators. + - #478: Added `py:decorator` directive to describe decorators. - C++ domain now supports array definitions. - C++ domain now supports doc fields (``:param x:`` inside directives). - Section headings in :rst:dir:`only` directives are now correctly @@ -913,7 +830,7 @@ Features added * HTML builder: - Added ``pyramid`` theme. - - #559: :confval:`html_add_permalinks` is now a string giving the + - #559: `html_add_permalinks` is now a string giving the text to display in permalinks. - #259: HTML table rows now have even/odd CSS classes to enable "Zebra styling". @@ -921,26 +838,26 @@ Features added * Other builders: - - #516: Added new value of the :confval:`latex_show_urls` option to + - #516: Added new value of the `latex_show_urls` option to show the URLs in footnotes. - - #209: Added :confval:`text_newlines` and :confval:`text_sectionchars` + - #209: Added `text_newlines` and `text_sectionchars` config values. - - Added :confval:`man_show_urls` config value. + - Added `man_show_urls` config value. - #472: linkcheck builder: Check links in parallel, use HTTP HEAD requests and allow configuring the timeout. New config values: - :confval:`linkcheck_timeout` and :confval:`linkcheck_workers`. - - #521: Added :confval:`linkcheck_ignore` config value. + `linkcheck_timeout` and `linkcheck_workers`. + - #521: Added `linkcheck_ignore` config value. 
- #28: Support row/colspans in tables in the LaTeX builder. * Configuration and extensibility: - - #537: Added :confval:`nitpick_ignore`. + - #537: Added `nitpick_ignore`. - #306: Added :event:`env-get-outdated` event. - :meth:`.Application.add_stylesheet` now accepts full URIs. @@ -958,12 +875,12 @@ Features added - Added ``inline`` option to graphviz directives, and fixed the default (block-style) in LaTeX output. - #590: Added ``caption`` option to graphviz directives. - - #553: Added :rst:dir:`testcleanup` blocks in the doctest extension. - - #594: :confval:`trim_doctest_flags` now also removes ``<BLANKLINE>`` + - #553: Added `testcleanup` blocks in the doctest extension. + - #594: `trim_doctest_flags` now also removes ``<BLANKLINE>`` indicators. - #367: Added automatic exclusion of hidden members in inheritance diagrams, and an option to selectively enable it. - - Added :confval:`pngmath_add_tooltips`. + - Added `pngmath_add_tooltips`. - The math extension displaymath directives now support ``name`` in addition to ``label`` for giving the equation label, for compatibility with Docutils. @@ -1036,7 +953,7 @@ Release 1.0.8 (Sep 23, 2011) * #669: Respect the ``noindex`` flag option in py:module directives. * #675: Fix IndexErrors when including nonexisting lines with - :rst:dir:`literalinclude`. + `literalinclude`. * #676: Respect custom function/method parameter separator strings. @@ -1119,7 +1036,7 @@ Release 1.0.6 (Jan 04, 2011) * #570: Try decoding ``-D`` and ``-A`` command-line arguments with the locale's preferred encoding. -* #528: Observe :confval:`locale_dirs` when looking for the JS +* #528: Observe `locale_dirs` when looking for the JS translations file. 
* #574: Add special code for better support of Japanese documents @@ -1292,51 +1209,51 @@ Features added - Added a "nitpicky" mode that emits warnings for all missing references. It is activated by the :option:`-n` command-line switch - or the :confval:`nitpicky` config value. + or the `nitpicky` config value. - Added ``latexpdf`` target in quickstart Makefile. * Markup: - - The :rst:role:`menuselection` and :rst:role:`guilabel` roles now + - The `menuselection` and `guilabel` roles now support ampersand accelerators. - New more compact doc field syntax is now recognized: ``:param type name: description``. - - Added ``tab-width`` option to :rst:dir:`literalinclude` directive. + - Added ``tab-width`` option to `literalinclude` directive. - Added ``titlesonly`` option to :rst:dir:`toctree` directive. - Added the ``prepend`` and ``append`` options to the - :rst:dir:`literalinclude` directive. + `literalinclude` directive. - #284: All docinfo metadata is now put into the document metadata, not just the author. - - The :rst:role:`ref` role can now also reference tables by caption. - - The :rst:dir:`include` directive now supports absolute paths, which + - The `ref` role can now also reference tables by caption. + - The :dudir:`include` directive now supports absolute paths, which are interpreted as relative to the source directory. - In the Python domain, references like ``:func:`.name``` now look for matching names with any prefix if no direct match is found. * Configuration: - - Added :confval:`rst_prolog` config value. - - Added :confval:`html_secnumber_suffix` config value to control + - Added `rst_prolog` config value. + - Added `html_secnumber_suffix` config value to control section numbering format. - - Added :confval:`html_compact_lists` config value to control + - Added `html_compact_lists` config value to control docutils' compact lists feature. 
- - The :confval:`html_sidebars` config value can now contain patterns + - The `html_sidebars` config value can now contain patterns as keys, and the values can be lists that explicitly select which sidebar templates should be rendered. That means that the builtin sidebar contents can be included only selectively. - - :confval:`html_static_path` can now contain single file entries. - - The new universal config value :confval:`exclude_patterns` makes the - old :confval:`unused_docs`, :confval:`exclude_trees` and - :confval:`exclude_dirnames` obsolete. - - Added :confval:`html_output_encoding` config value. - - Added the :confval:`latex_docclass` config value and made the + - `html_static_path` can now contain single file entries. + - The new universal config value `exclude_patterns` makes the + old ``unused_docs``, ``exclude_trees`` and + ``exclude_dirnames`` obsolete. + - Added `html_output_encoding` config value. + - Added the `latex_docclass` config value and made the "twoside" documentclass option overridable by "oneside". - - Added the :confval:`trim_doctest_flags` config value, which is true + - Added the `trim_doctest_flags` config value, which is true by default. - - Added :confval:`html_show_copyright` config value. - - Added :confval:`latex_show_pagerefs` and :confval:`latex_show_urls` + - Added `html_show_copyright` config value. + - Added `latex_show_pagerefs` and `latex_show_urls` config values. - - The behavior of :confval:`html_file_suffix` changed slightly: the + - The behavior of `html_file_suffix` changed slightly: the empty string now means "no suffix" instead of "default suffix", use ``None`` for "default suffix". @@ -1378,7 +1295,7 @@ Features added * Extension API: - Added :event:`html-collect-pages`. - - Added :confval:`needs_sphinx` config value and + - Added `needs_sphinx` config value and :meth:`~sphinx.application.Sphinx.require_sphinx` application API method. 
- #200: Added :meth:`~sphinx.application.Sphinx.add_stylesheet` @@ -1390,7 +1307,7 @@ Features added - Added the :mod:`~sphinx.ext.extlinks` extension. - Added support for source ordering of members in autodoc, with ``autodoc_member_order = 'bysource'``. - - Added :confval:`autodoc_default_flags` config value, which can be + - Added `autodoc_default_flags` config value, which can be used to select default flags for all autodoc directives. - Added a way for intersphinx to refer to named labels in other projects, and to specify the project you want to link to. @@ -1400,7 +1317,7 @@ Features added extension, thanks to Pauli Virtanen. - #309: The :mod:`~sphinx.ext.graphviz` extension can now output SVG instead of PNG images, controlled by the - :confval:`graphviz_output_format` config value. + `graphviz_output_format` config value. - Added ``alt`` option to :rst:dir:`graphviz` extension directives. - Added ``exclude`` argument to :func:`.autodoc.between`. diff --git a/Makefile b/Makefile index 128b2c809..0e4a9adef 100644 --- a/Makefile +++ b/Makefile @@ -48,10 +48,10 @@ reindent: @$(PYTHON) utils/reindent.py -r -n . endif -test: build +test: @cd tests; $(PYTHON) run.py -d -m '^[tT]est' $(TEST) -covertest: build +covertest: @cd tests; $(PYTHON) run.py -d -m '^[tT]est' --with-coverage \ --cover-package=sphinx $(TEST) diff --git a/README.rst b/README.rst index 9b22008b9..ae92a2ce3 100644 --- a/README.rst +++ b/README.rst @@ -2,6 +2,9 @@ README for Sphinx ================= +This is the Sphinx documentation generator, see http://sphinx-doc.org/. + + Installing ========== @@ -17,7 +20,7 @@ Reading the docs After installing:: cd doc - sphinx-build . _build/html + make html Then, direct your browser to ``_build/html/index.html``. @@ -35,6 +38,11 @@ If you want to use a different interpreter, e.g. ``python3``, use:: PYTHON=python3 make test +Continuous testing runs on drone.io: + +.. 
image:: https://drone.io/bitbucket.org/birkenfeld/sphinx/status.png + :target: https://drone.io/bitbucket.org/birkenfeld/sphinx/ + Contributing ============ diff --git a/doc/_templates/index.html b/doc/_templates/index.html index 45581e0f1..2016ea9f6 100644 --- a/doc/_templates/index.html +++ b/doc/_templates/index.html @@ -34,6 +34,9 @@
    • {%trans path=pathto('extensions')%}Extensions: automatic testing of code snippets, inclusion of docstrings from Python modules (API docs), and more{%endtrans%}
    • +
    • {%trans path=pathto('develop')%}Contributed extensions: more than + 50 extensions contributed by users + in a second repository; most of them installable from PyPI{%endtrans%}

    {%trans%} Sphinx uses reStructuredText diff --git a/doc/_templates/indexsidebar.html b/doc/_templates/indexsidebar.html index 019b20fc1..db925c888 100644 --- a/doc/_templates/indexsidebar.html +++ b/doc/_templates/indexsidebar.html @@ -3,7 +3,7 @@ {%trans%}project{%endtrans%}

    Download

    -{% if version.endswith('(hg)') %} +{% if version.endswith('a0') %}

    {%trans%}This documentation is for version {{ version }}, which is not released yet.{%endtrans%}

    {%trans%}You can use it from the diff --git a/doc/authors.rst b/doc/authors.rst index 04c8b2b44..980b33e8c 100644 --- a/doc/authors.rst +++ b/doc/authors.rst @@ -1,9 +1,9 @@ -:tocdepth: 2 - -.. _authors: - -Sphinx authors -============== - -.. include:: ../AUTHORS - +:tocdepth: 2 + +.. _authors: + +Sphinx authors +============== + +.. include:: ../AUTHORS + diff --git a/doc/changes.rst b/doc/changes.rst index d5927a725..e42636872 100644 --- a/doc/changes.rst +++ b/doc/changes.rst @@ -1,5 +1,7 @@ :tocdepth: 2 +.. default-role:: any + .. _changes: Changes in Sphinx diff --git a/doc/conf.py b/doc/conf.py index 3ae948217..4a6f8f580 100644 --- a/doc/conf.py +++ b/doc/conf.py @@ -83,7 +83,7 @@ texinfo_documents = [ # We're not using intersphinx right now, but if we did, this would be part of # the mapping: -intersphinx_mapping = {'python': ('http://docs.python.org/dev', None)} +intersphinx_mapping = {'python': ('http://docs.python.org/2/', None)} # Sphinx document translation with sphinx gettext feature uses these settings: locale_dirs = ['locale/'] diff --git a/doc/config.rst b/doc/config.rst index cc35a7578..a11254eaa 100644 --- a/doc/config.rst +++ b/doc/config.rst @@ -707,7 +707,7 @@ that use Sphinx's HTMLWriter class. .. confval:: html_use_opensearch - If nonempty, an `OpenSearch ` description file will be + If nonempty, an `OpenSearch `_ description file will be output, and all pages will contain a ```` tag referring to it. Since OpenSearch doesn't support relative URLs for its search page location, the value of this option must be the base URL from which these documents are diff --git a/doc/devguide.rst b/doc/devguide.rst index 885d52b0e..9d85ec0b0 100644 --- a/doc/devguide.rst +++ b/doc/devguide.rst @@ -130,6 +130,11 @@ These are the basic steps needed to start developing on Sphinx. * For bug fixes, first add a test that fails without your changes and passes after they are applied. 
+ * Tests that need a sphinx-build run should be integrated in one of the + existing test modules if possible. New tests that use ``@with_app`` and + then ``build_all`` for a few assertions are not good since *the test suite + should not take more than a minute to run*. + #. Please add a bullet point to :file:`CHANGES` if the fix or feature is not trivial (small doc updates, typo fixes). Then commit:: diff --git a/doc/extdev/appapi.rst b/doc/extdev/appapi.rst index 8df819439..4fed158cb 100644 --- a/doc/extdev/appapi.rst +++ b/doc/extdev/appapi.rst @@ -437,6 +437,19 @@ handlers to the events. Example: .. versionadded:: 0.5 +.. event:: env-before-read-docs (app, env, docnames) + + Emitted after the environment has determined the list of all added and + changed files and just before it reads them. It allows extension authors to + reorder the list of docnames (*inplace*) before processing, or add more + docnames that Sphinx did not consider changed (but never add any docnames + that are not in ``env.found_docs``). + + You can also remove document names; do this with caution since it will make + Sphinx treat changed files as unchanged. + + .. versionadded:: 1.3 + .. event:: source-read (app, docname, source) + Emitted when a source file has been read. The *source* argument is a list @@ -480,6 +493,26 @@ handlers to the events. Example: Here is the place to replace custom nodes that don't have visitor methods in the writers, so that they don't cause errors when the writers encounter them. +.. event:: env-merge-info (env, docnames, other) + + This event is only emitted when parallel reading of documents is enabled. It + is emitted once for every subprocess that has read some documents. + + You must handle this event in an extension that stores data in the + environment in a custom location. Otherwise the environment in the main + process will not be aware of the information stored in the subprocess.
+ + *other* is the environment object from the subprocess, *env* is the + environment from the main process. *docnames* is a set of document names + that have been read in the subprocess. + + For a sample of how to deal with this event, look at the standard + ``sphinx.ext.todo`` extension. The implementation is often similar to that + of :event:`env-purge-doc`, only that information is not removed, but added to + the main environment from the other environment. + + .. versionadded:: 1.3 + .. event:: env-updated (app, env) Emitted when the :meth:`update` method of the build environment has diff --git a/doc/extdev/index.rst b/doc/extdev/index.rst index a82f33a80..5144c5f8b 100644 --- a/doc/extdev/index.rst +++ b/doc/extdev/index.rst @@ -18,15 +18,32 @@ imports this module and executes its ``setup()`` function, which in turn notifies Sphinx of everything the extension offers -- see the extension tutorial for examples. -.. versionadded:: 1.3 - The ``setup()`` function can return a string, this is treated by Sphinx as - the version of the extension and used for informational purposes such as the - traceback file when an exception occurs. - The configuration file itself can be treated as an extension if it contains a ``setup()`` function. All other extensions to load must be listed in the :confval:`extensions` configuration value. +Extension metadata +------------------ + +.. versionadded:: 1.3 + +The ``setup()`` function can return a dictionary. This is treated by Sphinx +as metadata of the extension. Metadata keys currently recognized are: + +* ``'version'``: a string that identifies the extension version. It is used for + extension version requirement checking (see :confval:`needs_extensions`) and + informational purposes. If not given, ``"unknown version"`` is substituted. +* ``'parallel_read_safe'``: a boolean that specifies if parallel reading of + source files can be used when the extension is loaded. It defaults to + ``False``, i.e. 
you have to explicitly specify your extension to be + parallel-read-safe after checking that it is. +* ``'parallel_write_safe'``: a boolean that specifies if parallel writing of + output files can be used when the extension is loaded. Since extensions + usually don't negatively influence the process, this defaults to ``True``. + +APIs used for writing extensions +-------------------------------- + .. toctree:: tutorial diff --git a/doc/extdev/tutorial.rst b/doc/extdev/tutorial.rst index 8f1773cdf..e7912406a 100644 --- a/doc/extdev/tutorial.rst +++ b/doc/extdev/tutorial.rst @@ -162,7 +162,7 @@ new Python module called :file:`todo.py` and add the setup function:: app.connect('doctree-resolved', process_todo_nodes) app.connect('env-purge-doc', purge_todos) - return '0.1' # identifies the version of our extension + return {'version': '0.1'} # identifies the version of our extension The calls in this function refer to classes and functions not yet written. What the individual calls do is the following: diff --git a/doc/markup/code.rst b/doc/markup/code.rst index f69bb161b..b948dc386 100644 --- a/doc/markup/code.rst +++ b/doc/markup/code.rst @@ -36,21 +36,29 @@ installed) and handled in a smart way: highlighted as Python). * The highlighting language can be changed using the ``highlight`` directive, - used as follows:: + used as follows: - .. highlight:: c + .. rst:directive:: .. highlight:: language - This language is used until the next ``highlight`` directive is encountered. + Example:: + + .. highlight:: c + + This language is used until the next ``highlight`` directive is encountered. * For documents that have to show snippets in different languages, there's also a :rst:dir:`code-block` directive that is given the highlighting language - directly:: + directly: - .. code-block:: ruby + .. rst:directive:: .. code-block:: language - Some Ruby code. + Use it like this:: - The directive's alias name :rst:dir:`sourcecode` works as well. + .. 
code-block:: ruby + + Some Ruby code. + + The directive's alias name :rst:dir:`sourcecode` works as well. * The valid values for the highlighting language are: diff --git a/doc/markup/inline.rst b/doc/markup/inline.rst index 0cc97f43f..b5bb8d0c5 100644 --- a/doc/markup/inline.rst +++ b/doc/markup/inline.rst @@ -12,7 +12,9 @@ They are written as ``:rolename:`content```. The default role (```content```) has no special meaning by default. You are free to use it for anything you like, e.g. variable names; use the - :confval:`default_role` config value to set it to a known role. + :confval:`default_role` config value to set it to a known role -- the + :rst:role:`any` role to find anything or the :rst:role:`py:obj` role to find + Python objects are very useful for this. See :ref:`domains` for roles added by domains. @@ -38,12 +40,57 @@ more versatile: * If you prefix the content with ``~``, the link text will only be the last component of the target. For example, ``:py:meth:`~Queue.Queue.get``` will - refer to ``Queue.Queue.get`` but only display ``get`` as the link text. + refer to ``Queue.Queue.get`` but only display ``get`` as the link text. This + does not work with all cross-reference roles, but is domain specific. In HTML output, the link's ``title`` attribute (that is e.g. shown as a tool-tip on mouse-hover) will always be the full target name. +.. _any-role: + +Cross-referencing anything +-------------------------- + +.. rst:role:: any + + .. versionadded:: 1.3 + + This convenience role tries to do its best to find a valid target for its + reference text. + + * First, it tries standard cross-reference targets that would be referenced + by :rst:role:`doc`, :rst:role:`ref` or :rst:role:`option`. + + Custom objects added to the standard domain by extensions (see + :meth:`.add_object_type`) are also searched. + + * Then, it looks for objects (targets) in all loaded domains. It is up to + the domains how specific a match must be. 
For example, in the Python + domain a reference of ``:any:`Builder``` would match the + ``sphinx.builders.Builder`` class. + + If none or multiple targets are found, a warning will be emitted. In the + case of multiple targets, you can change "any" to a specific role. + + This role is a good candidate for setting :confval:`default_role`. If you + do, you can write cross-references without a lot of markup overhead. For + example, in this Python function documentation :: + + .. function:: install() + + This function installs a `handler` for every signal known by the + `signal` module. See the section `about-signals` for more information. + + there could be references to a glossary term (usually ``:term:`handler```), a + Python module (usually ``:py:mod:`signal``` or ``:mod:`signal```) and a + section (usually ``:ref:`about-signals```). + + The :rst:role:`any` role also works together with the + :mod:`~sphinx.ext.intersphinx` extension: when no local cross-reference is + found, all object types of intersphinx inventories are also searched. + + Cross-referencing objects ------------------------- diff --git a/sphinx/addnodes.py b/sphinx/addnodes.py index 55abdb019..9d8c46901 100644 --- a/sphinx/addnodes.py +++ b/sphinx/addnodes.py @@ -25,6 +25,7 @@ class desc(nodes.Admonition, nodes.Element): contains one or more ``desc_signature`` and a ``desc_content``. """ + class desc_signature(nodes.Part, nodes.Inline, nodes.TextElement): """Node for object signatures. 
@@ -39,33 +40,42 @@ class desc_addname(nodes.Part, nodes.Inline, nodes.TextElement): # compatibility alias desc_classname = desc_addname + class desc_type(nodes.Part, nodes.Inline, nodes.TextElement): """Node for return types or object type names.""" + class desc_returns(desc_type): """Node for a "returns" annotation (a la -> in Python).""" def astext(self): return ' -> ' + nodes.TextElement.astext(self) + class desc_name(nodes.Part, nodes.Inline, nodes.TextElement): """Node for the main object name.""" + class desc_parameterlist(nodes.Part, nodes.Inline, nodes.TextElement): """Node for a general parameter list.""" child_text_separator = ', ' + class desc_parameter(nodes.Part, nodes.Inline, nodes.TextElement): """Node for a single parameter.""" + class desc_optional(nodes.Part, nodes.Inline, nodes.TextElement): """Node for marking optional parts of the parameter list.""" child_text_separator = ', ' + def astext(self): return '[' + nodes.TextElement.astext(self) + ']' + class desc_annotation(nodes.Part, nodes.Inline, nodes.TextElement): """Node for signature annotations (not Python 3-style annotations).""" + class desc_content(nodes.General, nodes.Element): """Node for object description content. @@ -82,15 +92,18 @@ class versionmodified(nodes.Admonition, nodes.TextElement): directives. """ + class seealso(nodes.Admonition, nodes.Element): """Custom "see also" admonition.""" + class productionlist(nodes.Admonition, nodes.Element): """Node for grammar production lists. Contains ``production`` nodes. """ + class production(nodes.Part, nodes.Inline, nodes.TextElement): """Node for a single grammar production rule.""" @@ -107,26 +120,33 @@ class index(nodes.Invisible, nodes.Inline, nodes.TextElement): *entrytype* is one of "single", "pair", "double", "triple". """ + class centered(nodes.Part, nodes.TextElement): """Deprecated.""" + class acks(nodes.Element): """Special node for "acks" lists.""" + class hlist(nodes.Element): """Node for "horizontal lists", i.e. 
lists that should be compressed to take up less vertical space. """ + class hlistcol(nodes.Element): """Node for one column in a horizontal list.""" + class compact_paragraph(nodes.paragraph): """Node for a compact paragraph (which never makes a
<p>
    node).""" + class glossary(nodes.Element): """Node to insert a glossary.""" + class only(nodes.Element): """Node for "only" directives (conditional inclusion based on tags).""" @@ -136,14 +156,17 @@ class only(nodes.Element): class start_of_file(nodes.Element): """Node to mark start of a new file, used in the LaTeX builder only.""" + class highlightlang(nodes.Element): """Inserted to set the highlight language and line number options for subsequent code blocks. """ + class tabular_col_spec(nodes.Element): """Node for specifying tabular columns, used for LaTeX output.""" + class meta(nodes.Special, nodes.PreBibliographic, nodes.Element): """Node for meta directive -- same as docutils' standard meta node, but pickleable. @@ -160,22 +183,27 @@ class pending_xref(nodes.Inline, nodes.Element): BuildEnvironment.resolve_references. """ + class download_reference(nodes.reference): """Node for download references, similar to pending_xref.""" + class literal_emphasis(nodes.emphasis): """Node that behaves like `emphasis`, but further text processors are not applied (e.g. smartypants for HTML output). """ + class literal_strong(nodes.strong): """Node that behaves like `strong`, but further text processors are not applied (e.g. smartypants for HTML output). 
""" + class abbreviation(nodes.Inline, nodes.TextElement): """Node for abbreviations with explanations.""" + class termsep(nodes.Structural, nodes.Element): """Separates two terms within a node.""" diff --git a/sphinx/apidoc.py b/sphinx/apidoc.py index f716286c7..7b1a96d25 100644 --- a/sphinx/apidoc.py +++ b/sphinx/apidoc.py @@ -88,7 +88,7 @@ def create_module_file(package, module, opts): text = format_heading(1, '%s module' % module) else: text = '' - #text += format_heading(2, ':mod:`%s` Module' % module) + # text += format_heading(2, ':mod:`%s` Module' % module) text += format_directive(module, package) write_file(makename(package, module), text, opts) @@ -173,7 +173,7 @@ def shall_skip(module, opts): # skip if it has a "private" name and this is selected filename = path.basename(module) if filename != '__init__.py' and filename.startswith('_') and \ - not opts.includeprivate: + not opts.includeprivate: return True return False @@ -218,7 +218,7 @@ def recurse_tree(rootpath, excludes, opts): if is_pkg: # we are in a package with something to document if subs or len(py_files) > 1 or not \ - shall_skip(path.join(root, INITPY), opts): + shall_skip(path.join(root, INITPY), opts): subpackage = root[len(rootpath):].lstrip(path.sep).\ replace(path.sep, '.') create_package_file(root, root_package, subpackage, @@ -318,7 +318,7 @@ Note: By default this script will not overwrite already created files.""") (opts, args) = parser.parse_args(argv[1:]) if opts.show_version: - print('Sphinx (sphinx-apidoc) %s' % __version__) + print('Sphinx (sphinx-apidoc) %s' % __version__) return 0 if not args: diff --git a/sphinx/application.py b/sphinx/application.py index fe8704018..13a2d272a 100644 --- a/sphinx/application.py +++ b/sphinx/application.py @@ -20,7 +20,7 @@ import traceback from os import path from collections import deque -from six import iteritems, itervalues +from six import iteritems, itervalues, text_type from six.moves import cStringIO from docutils import nodes from 
docutils.parsers.rst import convert_directive_function, \ @@ -39,7 +39,8 @@ from sphinx.environment import BuildEnvironment, SphinxStandaloneReader from sphinx.util import pycompat # imported for side-effects from sphinx.util.tags import Tags from sphinx.util.osutil import ENOENT -from sphinx.util.console import bold, lightgray, darkgray +from sphinx.util.console import bold, lightgray, darkgray, darkgreen, \ + term_width_line if hasattr(sys, 'intern'): intern = sys.intern @@ -49,8 +50,10 @@ events = { 'builder-inited': '', 'env-get-outdated': 'env, added, changed, removed', 'env-purge-doc': 'env, docname', + 'env-before-read-docs': 'env, docnames', 'source-read': 'docname, source text', 'doctree-read': 'the doctree before being pickled', + 'env-merge-info': 'env, read docnames, other env instance', 'missing-reference': 'env, node, contnode', 'doctree-resolved': 'doctree, docname', 'env-updated': 'env', @@ -72,7 +75,7 @@ class Sphinx(object): self.verbosity = verbosity self.next_listener_id = 0 self._extensions = {} - self._extension_versions = {} + self._extension_metadata = {} self._listeners = {} self.domains = BUILTIN_DOMAINS.copy() self.builderclasses = BUILTIN_BUILDERS.copy() @@ -112,6 +115,10 @@ class Sphinx(object): # status code for command-line application self.statuscode = 0 + if not path.isdir(outdir): + self.info('making output directory...') + os.makedirs(outdir) + # read config self.tags = Tags(tags) self.config = Config(confdir, CONFIG_FILENAME, @@ -128,7 +135,7 @@ class Sphinx(object): self.setup_extension(extension) # the config file itself can be an extension if self.config.setup: - # py31 doesn't have 'callable' function for bellow check + # py31 doesn't have 'callable' function for below check if hasattr(self.config.setup, '__call__'): self.config.setup(self) else: @@ -156,7 +163,7 @@ class Sphinx(object): 'version requirement for extension %s, but it is ' 'not loaded' % extname) continue - has_ver = self._extension_versions[extname] + has_ver 
= self._extension_metadata[extname]['version'] if has_ver == 'unknown version' or needs_ver > has_ver: raise VersionRequirementError( 'This project needs the extension %s at least in ' @@ -200,8 +207,8 @@ class Sphinx(object): else: try: self.info(bold('loading pickled environment... '), nonl=True) - self.env = BuildEnvironment.frompickle(self.config, - path.join(self.doctreedir, ENV_PICKLE_FILENAME)) + self.env = BuildEnvironment.frompickle( + self.config, path.join(self.doctreedir, ENV_PICKLE_FILENAME)) self.env.domains = {} for domain in self.domains.keys(): # this can raise if the data version doesn't fit @@ -245,6 +252,15 @@ class Sphinx(object): else: self.builder.compile_update_catalogs() self.builder.build_update() + + status = (self.statuscode == 0 + and 'succeeded' or 'finished with problems') + if self._warncount: + self.info(bold('build %s, %s warning%s.' % + (status, self._warncount, + self._warncount != 1 and 's' or ''))) + else: + self.info(bold('build %s.' % status)) except Exception as err: # delete the saved env to force a fresh build next time envfile = path.join(self.doctreedir, ENV_PICKLE_FILENAME) @@ -291,7 +307,7 @@ class Sphinx(object): else: location = None warntext = location and '%s: %s%s\n' % (location, prefix, message) or \ - '%s%s\n' % (prefix, message) + '%s%s\n' % (prefix, message) if self.warningiserror: raise SphinxWarning(warntext) self._warncount += 1 @@ -350,6 +366,48 @@ class Sphinx(object): message = message % (args or kwargs) self._log(lightgray(message), self._status) + def _display_chunk(chunk): + if isinstance(chunk, (list, tuple)): + if len(chunk) == 1: + return text_type(chunk[0]) + return '%s .. 
%s' % (chunk[0], chunk[-1]) + return text_type(chunk) + + def old_status_iterator(self, iterable, summary, colorfunc=darkgreen, + stringify_func=_display_chunk): + l = 0 + for item in iterable: + if l == 0: + self.info(bold(summary), nonl=1) + l = 1 + self.info(colorfunc(stringify_func(item)) + ' ', nonl=1) + yield item + if l == 1: + self.info() + + # new version with progress info + def status_iterator(self, iterable, summary, colorfunc=darkgreen, length=0, + stringify_func=_display_chunk): + if length == 0: + for item in self.old_status_iterator(iterable, summary, colorfunc, + stringify_func): + yield item + return + l = 0 + summary = bold(summary) + for item in iterable: + l += 1 + s = '%s[%3d%%] %s' % (summary, 100*l/length, + colorfunc(stringify_func(item))) + if self.verbosity: + s += '\n' + else: + s = term_width_line(s) + self.info(s, nonl=1) + yield item + if l > 0: + self.info() + # ---- general extensibility interface ------------------------------------- def setup_extension(self, extension): @@ -366,20 +424,22 @@ class Sphinx(object): if not hasattr(mod, 'setup'): self.warn('extension %r has no setup() function; is it really ' 'a Sphinx extension module?' % extension) - version = None + ext_meta = None else: try: - version = mod.setup(self) + ext_meta = mod.setup(self) except VersionRequirementError as err: # add the extension name to the version required raise VersionRequirementError( 'The %s extension used by this project needs at least ' 'Sphinx v%s; it therefore cannot be built with this ' 'version.' 
% (extension, err)) - if version is None: - version = 'unknown version' + if ext_meta is None: + ext_meta = {} + if not ext_meta.get('version'): + ext_meta['version'] = 'unknown version' self._extensions[extension] = mod - self._extension_versions[extension] = version + self._extension_metadata[extension] = ext_meta def require_sphinx(self, version): # check the Sphinx version if requested @@ -461,7 +521,7 @@ class Sphinx(object): else: raise ExtensionError( 'Builder %r already exists (in module %s)' % ( - builder.name, self.builderclasses[builder.name].__module__)) + builder.name, self.builderclasses[builder.name].__module__)) self.builderclasses[builder.name] = builder def add_config_value(self, name, default, rebuild): diff --git a/sphinx/builders/__init__.py b/sphinx/builders/__init__.py index 833269c63..d52a7983d 100644 --- a/sphinx/builders/__init__.py +++ b/sphinx/builders/__init__.py @@ -22,7 +22,9 @@ from docutils import nodes from sphinx.util import i18n, path_stabilize from sphinx.util.osutil import SEP, relative_uri, find_catalog -from sphinx.util.console import bold, purple, darkgreen, term_width_line +from sphinx.util.console import bold, darkgreen +from sphinx.util.parallel import ParallelTasks, SerialTasks, make_chunks, \ + parallel_available # side effect: registers roles and directives from sphinx import roles @@ -62,10 +64,17 @@ class Builder(object): self.tags.add(self.name) self.tags.add("format_%s" % self.format) self.tags.add("builder_%s" % self.name) + # compatibility aliases + self.status_iterator = app.status_iterator + self.old_status_iterator = app.old_status_iterator # images that need to be copied over (source -> dest) self.images = {} + # these get set later + self.parallel_ok = False + self.finish_tasks = None + # load default translator class self.translator_class = app._translators.get(self.name) @@ -113,41 +122,6 @@ class Builder(object): """ raise NotImplementedError - def old_status_iterator(self, iterable, summary, 
colorfunc=darkgreen, - stringify_func=lambda x: x): - l = 0 - for item in iterable: - if l == 0: - self.info(bold(summary), nonl=1) - l = 1 - self.info(colorfunc(stringify_func(item)) + ' ', nonl=1) - yield item - if l == 1: - self.info() - - # new version with progress info - def status_iterator(self, iterable, summary, colorfunc=darkgreen, length=0, - stringify_func=lambda x: x): - if length == 0: - for item in self.old_status_iterator(iterable, summary, colorfunc, - stringify_func): - yield item - return - l = 0 - summary = bold(summary) - for item in iterable: - l += 1 - s = '%s[%3d%%] %s' % (summary, 100*l/length, - colorfunc(stringify_func(item))) - if self.app.verbosity: - s += '\n' - else: - s = term_width_line(s) - self.info(s, nonl=1) - yield item - if l > 0: - self.info() - supported_image_types = [] def post_process_images(self, doctree): @@ -179,9 +153,8 @@ class Builder(object): def compile_catalogs(self, catalogs, message): if not self.config.gettext_auto_build: return - self.info(bold('building [mo]: '), nonl=1) - self.info(message) - for catalog in self.status_iterator( + self.info(bold('building [mo]: ') + message) + for catalog in self.app.status_iterator( catalogs, 'writing output... ', darkgreen, len(catalogs), lambda c: c.mo_path): catalog.write_mo(self.config.language) @@ -263,25 +236,17 @@ class Builder(object): First updates the environment, and then calls :meth:`write`. """ if summary: - self.info(bold('building [%s]: ' % self.name), nonl=1) - self.info(summary) + self.info(bold('building [%s]' % self.name) + ': ' + summary) updated_docnames = set() # while reading, collect all warnings from docutils warnings = [] self.env.set_warnfunc(lambda *args: warnings.append(args)) - self.info(bold('updating environment: '), nonl=1) - msg, length, iterator = self.env.update(self.config, self.srcdir, - self.doctreedir, self.app) - self.info(msg) - for docname in self.status_iterator(iterator, 'reading sources... 
', - purple, length): - updated_docnames.add(docname) - # nothing further to do, the environment has already - # done the reading + updated_docnames = self.env.update(self.config, self.srcdir, + self.doctreedir, self.app) + self.env.set_warnfunc(self.warn) for warning in warnings: self.warn(*warning) - self.env.set_warnfunc(self.warn) doccount = len(updated_docnames) self.info(bold('looking for now-outdated files... '), nonl=1) @@ -315,20 +280,33 @@ class Builder(object): if docnames and docnames != ['__all__']: docnames = set(docnames) & self.env.found_docs - # another indirection to support builders that don't build - # files individually + # determine if we can write in parallel + self.parallel_ok = False + if parallel_available and self.app.parallel > 1 and self.allow_parallel: + self.parallel_ok = True + for extname, md in self.app._extension_metadata.items(): + par_ok = md.get('parallel_write_safe', True) + if not par_ok: + self.app.warn('the %s extension is not safe for parallel ' + 'writing, doing serial read' % extname) + self.parallel_ok = False + break + + # create a task executor to use for misc. "finish-up" tasks + # if self.parallel_ok: + # self.finish_tasks = ParallelTasks(self.app.parallel) + # else: + # for now, just execute them serially + self.finish_tasks = SerialTasks() + + # write all "normal" documents (or everything for some builders) self.write(docnames, list(updated_docnames), method) # finish (write static files etc.) self.finish() - status = (self.app.statuscode == 0 - and 'succeeded' or 'finished with problems') - if self.app._warncount: - self.info(bold('build %s, %s warning%s.' % - (status, self.app._warncount, - self.app._warncount != 1 and 's' or ''))) - else: - self.info(bold('build %s.' 
% status)) + + # wait for all tasks + self.finish_tasks.join() def write(self, build_docnames, updated_docnames, method='update'): if build_docnames is None or build_docnames == ['__all__']: @@ -354,23 +332,17 @@ class Builder(object): warnings = [] self.env.set_warnfunc(lambda *args: warnings.append(args)) - # check for prerequisites to parallel build - # (parallel only works on POSIX, because the forking impl of - # multiprocessing is required) - if not (multiprocessing and - self.app.parallel > 1 and - self.allow_parallel and - os.name == 'posix'): - self._write_serial(sorted(docnames), warnings) - else: + if self.parallel_ok: # number of subprocesses is parallel-1 because the main process # is busy loading doctrees and doing write_doc_serialized() self._write_parallel(sorted(docnames), warnings, nproc=self.app.parallel - 1) + else: + self._write_serial(sorted(docnames), warnings) self.env.set_warnfunc(self.warn) def _write_serial(self, docnames, warnings): - for docname in self.status_iterator( + for docname in self.app.status_iterator( docnames, 'writing output... 
', darkgreen, len(docnames)): doctree = self.env.get_and_resolve_doctree(docname, self) self.write_doc_serialized(docname, doctree) @@ -380,60 +352,34 @@ class Builder(object): def _write_parallel(self, docnames, warnings, nproc): def write_process(docs): - try: - for docname, doctree in docs: - self.write_doc(docname, doctree) - except KeyboardInterrupt: - pass # do not print a traceback on Ctrl-C - finally: - for warning in warnings: - self.warn(*warning) + for docname, doctree in docs: + self.write_doc(docname, doctree) + return warnings - def process_thread(docs): - p = multiprocessing.Process(target=write_process, args=(docs,)) - p.start() - p.join() - semaphore.release() - - # allow only "nproc" worker processes at once - semaphore = threading.Semaphore(nproc) - # list of threads to join when waiting for completion - threads = [] + def add_warnings(docs, wlist): + warnings.extend(wlist) # warm up caches/compile templates using the first document firstname, docnames = docnames[0], docnames[1:] doctree = self.env.get_and_resolve_doctree(firstname, self) self.write_doc_serialized(firstname, doctree) self.write_doc(firstname, doctree) - # for the rest, determine how many documents to write in one go - ndocs = len(docnames) - chunksize = min(ndocs // nproc, 10) - if chunksize == 0: - chunksize = 1 - nchunks, rest = divmod(ndocs, chunksize) - if rest: - nchunks += 1 - # partition documents in "chunks" that will be written by one Process - chunks = [docnames[i*chunksize:(i+1)*chunksize] for i in range(nchunks)] - for docnames in self.status_iterator( - chunks, 'writing output... ', darkgreen, len(chunks), - lambda chk: '%s .. %s' % (chk[0], chk[-1])): - docs = [] - for docname in docnames: + + tasks = ParallelTasks(nproc) + chunks = make_chunks(docnames, nproc) + + for chunk in self.app.status_iterator( + chunks, 'writing output... 
', darkgreen, len(chunks)): + arg = [] + for i, docname in enumerate(chunk): doctree = self.env.get_and_resolve_doctree(docname, self) self.write_doc_serialized(docname, doctree) - docs.append((docname, doctree)) - # start a new thread to oversee the completion of this chunk - semaphore.acquire() - t = threading.Thread(target=process_thread, args=(docs,)) - t.setDaemon(True) - t.start() - threads.append(t) + arg.append((docname, doctree)) + tasks.add_task(write_process, arg, add_warnings) # make sure all threads have finished - self.info(bold('waiting for workers... ')) - for t in threads: - t.join() + self.info(bold('waiting for workers...')) + tasks.join() def prepare_writing(self, docnames): """A place where you can add logic before :meth:`write_doc` is run""" diff --git a/sphinx/builders/changes.py b/sphinx/builders/changes.py index aa947c96b..069d0ce6a 100644 --- a/sphinx/builders/changes.py +++ b/sphinx/builders/changes.py @@ -130,6 +130,9 @@ class ChangesBuilder(Builder): self.env.config.source_encoding) try: lines = f.readlines() + except UnicodeDecodeError: + self.warn('could not read %r for changelog creation' % docname) + continue finally: f.close() targetfn = path.join(self.outdir, 'rst', os_path(docname)) + '.html' diff --git a/sphinx/builders/epub.py b/sphinx/builders/epub.py index 95a0ef0af..5f9f66431 100644 --- a/sphinx/builders/epub.py +++ b/sphinx/builders/epub.py @@ -405,8 +405,8 @@ class EpubBuilder(StandaloneHTMLBuilder): converting the format and resizing the image if necessary/possible. """ ensuredir(path.join(self.outdir, '_images')) - for src in self.status_iterator(self.images, 'copying images... ', - brown, len(self.images)): + for src in self.app.status_iterator(self.images, 'copying images... 
', + brown, len(self.images)): dest = self.images[src] try: img = Image.open(path.join(self.srcdir, src)) diff --git a/sphinx/builders/gettext.py b/sphinx/builders/gettext.py index 657ce9241..d21c79fcf 100644 --- a/sphinx/builders/gettext.py +++ b/sphinx/builders/gettext.py @@ -170,8 +170,8 @@ class MessageCatalogBuilder(I18nBuilder): extract_translations = self.templates.environment.extract_translations - for template in self.status_iterator(files, - 'reading templates... ', purple, len(files)): + for template in self.app.status_iterator( + files, 'reading templates... ', purple, len(files)): with open(template, 'r', encoding='utf-8') as f: context = f.read() for line, meth, msg in extract_translations(context): @@ -191,7 +191,7 @@ class MessageCatalogBuilder(I18nBuilder): ctime = datetime.fromtimestamp( timestamp, ltz).strftime('%Y-%m-%d %H:%M%z'), ) - for textdomain, catalog in self.status_iterator( + for textdomain, catalog in self.app.status_iterator( iteritems(self.catalogs), "writing message catalogs... 
", darkgreen, len(self.catalogs), lambda textdomain__: textdomain__[0]): diff --git a/sphinx/builders/html.py b/sphinx/builders/html.py index ec3a81861..c2c308937 100644 --- a/sphinx/builders/html.py +++ b/sphinx/builders/html.py @@ -29,7 +29,7 @@ from docutils.readers.doctree import Reader as DoctreeReader from sphinx import package_dir, __version__ from sphinx.util import jsonimpl, copy_static_entry from sphinx.util.osutil import SEP, os_path, relative_uri, ensuredir, \ - movefile, ustrftime, copyfile + movefile, ustrftime, copyfile from sphinx.util.nodes import inline_all_toctrees from sphinx.util.matching import patmatch, compile_matchers from sphinx.locale import _ @@ -40,7 +40,7 @@ from sphinx.application import ENV_PICKLE_FILENAME from sphinx.highlighting import PygmentsBridge from sphinx.util.console import bold, darkgreen, brown from sphinx.writers.html import HTMLWriter, HTMLTranslator, \ - SmartyPantsHTMLTranslator + SmartyPantsHTMLTranslator #: the filename for the inventory of objects INVENTORY_FILENAME = 'objects.inv' @@ -443,12 +443,19 @@ class StandaloneHTMLBuilder(Builder): self.index_page(docname, doctree, title) def finish(self): - self.info(bold('writing additional files...'), nonl=1) + self.finish_tasks.add_task(self.gen_indices) + self.finish_tasks.add_task(self.gen_additional_pages) + self.finish_tasks.add_task(self.copy_image_files) + self.finish_tasks.add_task(self.copy_download_files) + self.finish_tasks.add_task(self.copy_static_files) + self.finish_tasks.add_task(self.copy_extra_files) + self.finish_tasks.add_task(self.write_buildinfo) - # pages from extensions - for pagelist in self.app.emit('html-collect-pages'): - for pagename, context, template in pagelist: - self.handle_page(pagename, context, template) + # dump the search index + self.handle_finish() + + def gen_indices(self): + self.info(bold('generating indices...'), nonl=1) # the global general index if self.get_builder_config('use_index', 'html'): @@ -457,16 +464,27 @@ class 
StandaloneHTMLBuilder(Builder): # the global domain-specific indices self.write_domain_indices() - # the search page - if self.name != 'htmlhelp': - self.info(' search', nonl=1) - self.handle_page('search', {}, 'search.html') + self.info() + + def gen_additional_pages(self): + # pages from extensions + for pagelist in self.app.emit('html-collect-pages'): + for pagename, context, template in pagelist: + self.handle_page(pagename, context, template) + + self.info(bold('writing additional pages...'), nonl=1) # additional pages from conf.py for pagename, template in self.config.html_additional_pages.items(): self.info(' '+pagename, nonl=1) self.handle_page(pagename, {}, template) + # the search page + if self.name != 'htmlhelp': + self.info(' search', nonl=1) + self.handle_page('search', {}, 'search.html') + + # the opensearch xml file if self.config.html_use_opensearch and self.name != 'htmlhelp': self.info(' opensearch', nonl=1) fn = path.join(self.outdir, '_static', 'opensearch.xml') @@ -474,15 +492,6 @@ class StandaloneHTMLBuilder(Builder): self.info() - self.copy_image_files() - self.copy_download_files() - self.copy_static_files() - self.copy_extra_files() - self.write_buildinfo() - - # dump the search index - self.handle_finish() - def write_genindex(self): # the total count of lines for each index letter, used to distribute # the entries into two columns @@ -526,8 +535,8 @@ class StandaloneHTMLBuilder(Builder): # copy image files if self.images: ensuredir(path.join(self.outdir, '_images')) - for src in self.status_iterator(self.images, 'copying images... ', - brown, len(self.images)): + for src in self.app.status_iterator(self.images, 'copying images... 
', + brown, len(self.images)): dest = self.images[src] try: copyfile(path.join(self.srcdir, src), @@ -540,9 +549,9 @@ class StandaloneHTMLBuilder(Builder): # copy downloadable files if self.env.dlfiles: ensuredir(path.join(self.outdir, '_downloads')) - for src in self.status_iterator(self.env.dlfiles, - 'copying downloadable files... ', - brown, len(self.env.dlfiles)): + for src in self.app.status_iterator(self.env.dlfiles, + 'copying downloadable files... ', + brown, len(self.env.dlfiles)): dest = self.env.dlfiles[src][1] try: copyfile(path.join(self.srcdir, src), @@ -786,8 +795,8 @@ class StandaloneHTMLBuilder(Builder): copyfile(self.env.doc2path(pagename), source_name) def handle_finish(self): - self.dump_search_index() - self.dump_inventory() + self.finish_tasks.add_task(self.dump_search_index) + self.finish_tasks.add_task(self.dump_inventory) def dump_inventory(self): self.info(bold('dumping object inventory... '), nonl=True) diff --git a/sphinx/cmdline.py b/sphinx/cmdline.py index 6e7ab3267..18a17ab37 100644 --- a/sphinx/cmdline.py +++ b/sphinx/cmdline.py @@ -12,7 +12,7 @@ from __future__ import print_function import os import sys -import getopt +import optparse import traceback from os import path @@ -32,89 +32,121 @@ def usage(argv, msg=None): if msg: print(msg, file=sys.stderr) print(file=sys.stderr) - print("""\ + +USAGE = """\ Sphinx v%s -Usage: %s [options] sourcedir outdir [filenames...] +Usage: %%prog [options] sourcedir outdir [filenames...] -General options -^^^^^^^^^^^^^^^ --b builder to use; default is html --a write all files; default is to only write new and changed files --E don't use a saved environment, always read all files --d path for the cached environment and doctree files - (default: outdir/.doctrees) --j build in parallel with N processes where possible --M "make" mode -- used by Makefile, like "sphinx-build -M html" +Filename arguments: + without -a and without filenames, write new and changed files. + with -a, write all files. 
+ with filenames, write these. +""" % __version__ -Build configuration options -^^^^^^^^^^^^^^^^^^^^^^^^^^^ --c path where configuration file (conf.py) is located - (default: same as sourcedir) --C use no config file at all, only -D options --D override a setting in configuration file --t define tag: include "only" blocks with --A pass a value into the templates, for HTML builder --n nit-picky mode, warn about all missing references +EPILOG = """\ +For more information, visit . +""" -Console output options -^^^^^^^^^^^^^^^^^^^^^^ --v increase verbosity (can be repeated) --q no output on stdout, just warnings on stderr --Q no output at all, not even warnings --w write warnings (and errors) to given file --W turn warnings into errors --T show full traceback on exception --N do not emit colored output --P run Pdb on exception -Filename arguments -^^^^^^^^^^^^^^^^^^ -* without -a and without filenames, write new and changed files. -* with -a, write all files. -* with filenames, write these. 
+class MyFormatter(optparse.IndentedHelpFormatter): + def format_usage(self, usage): + return usage -Standard options -^^^^^^^^^^^^^^^^ --h, --help show this help and exit ---version show version information and exit -""" % (__version__, argv[0]), file=sys.stderr) + def format_help(self, formatter): + result = [] + if self.description: + result.append(self.format_description(formatter)) + if self.option_list: + result.append(self.format_option_help(formatter)) + return "\n".join(result) def main(argv): if not color_terminal(): nocolor() + parser = optparse.OptionParser(USAGE, epilog=EPILOG, formatter=MyFormatter()) + parser.add_option('--version', action='store_true', dest='version', + help='show version information and exit') + + group = parser.add_option_group('General options') + group.add_option('-b', metavar='BUILDER', dest='builder', default='html', + help='builder to use; default is html') + group.add_option('-a', action='store_true', dest='force_all', + help='write all files; default is to only write new and ' + 'changed files') + group.add_option('-E', action='store_true', dest='freshenv', + help='don\'t use a saved environment, always read ' + 'all files') + group.add_option('-d', metavar='PATH', default=None, dest='doctreedir', + help='path for the cached environment and doctree files ' + '(default: outdir/.doctrees)') + group.add_option('-j', metavar='N', default=1, type='int', dest='jobs', + help='build in parallel with N processes where possible') + # this option never gets through to this point (it is intercepted earlier) + # group.add_option('-M', metavar='BUILDER', dest='make_mode', + # help='"make" mode -- as used by Makefile, like ' + # '"sphinx-build -M html"') + + group = parser.add_option_group('Build configuration options') + group.add_option('-c', metavar='PATH', dest='confdir', + help='path where configuration file (conf.py) is located ' + '(default: same as sourcedir)') + group.add_option('-C', action='store_true', dest='noconfig', + 
help='use no config file at all, only -D options') + group.add_option('-D', metavar='setting=value', action='append', + dest='define', default=[], + help='override a setting in configuration file') + group.add_option('-A', metavar='name=value', action='append', + dest='htmldefine', default=[], + help='pass a value into HTML templates') + group.add_option('-t', metavar='TAG', action='append', + dest='tags', default=[], + help='define tag: include "only" blocks with TAG') + group.add_option('-n', action='store_true', dest='nitpicky', + help='nit-picky mode, warn about all missing references') + + group = parser.add_option_group('Console output options') + group.add_option('-v', action='count', dest='verbosity', default=0, + help='increase verbosity (can be repeated)') + group.add_option('-q', action='store_true', dest='quiet', + help='no output on stdout, just warnings on stderr') + group.add_option('-Q', action='store_true', dest='really_quiet', + help='no output at all, not even warnings') + group.add_option('-N', action='store_true', dest='nocolor', + help='do not emit colored output') + group.add_option('-w', metavar='FILE', dest='warnfile', + help='write warnings (and errors) to given file') + group.add_option('-W', action='store_true', dest='warningiserror', + help='turn warnings into errors') + group.add_option('-T', action='store_true', dest='traceback', + help='show full traceback on exception') + group.add_option('-P', action='store_true', dest='pdb', + help='run Pdb on exception') + # parse options try: - opts, args = getopt.getopt(argv[1:], 'ab:t:d:c:CD:A:nNEqQWw:PThvj:', - ['help', 'version']) - except getopt.error as err: - usage(argv, 'Error: %s' % err) - return 1 + opts, args = parser.parse_args() + except SystemExit as err: + return err.code # handle basic options - allopts = set(opt[0] for opt in opts) - # help and version options - if '-h' in allopts or '--help' in allopts: - usage(argv) - print(file=sys.stderr) - print('For more information, see 
.', - file=sys.stderr) - return 0 - if '--version' in allopts: - print('Sphinx (sphinx-build) %s' % __version__) + if opts.version: + print('Sphinx (sphinx-build) %s' % __version__) return 0 # get paths (first and second positional argument) try: - srcdir = confdir = abspath(args[0]) + srcdir = abspath(args[0]) + confdir = abspath(opts.confdir or srcdir) + if opts.noconfig: + confdir = None if not path.isdir(srcdir): print('Error: Cannot find source directory `%s\'.' % srcdir, file=sys.stderr) return 1 - if not path.isfile(path.join(srcdir, 'conf.py')) and \ - '-c' not in allopts and '-C' not in allopts: - print('Error: Source directory doesn\'t contain a conf.py file.', + if not opts.noconfig and not path.isfile(path.join(confdir, 'conf.py')): + print('Error: Config directory doesn\'t contain a conf.py file.', file=sys.stderr) return 1 outdir = abspath(args[1]) @@ -144,116 +176,77 @@ def main(argv): except Exception: likely_encoding = None - buildername = None - force_all = freshenv = warningiserror = use_pdb = False - show_traceback = False - verbosity = 0 - parallel = 0 + if opts.force_all and filenames: + print('Error: Cannot combine -a option and filenames.', file=sys.stderr) + return 1 + + if opts.nocolor: + nocolor() + + doctreedir = abspath(opts.doctreedir or path.join(outdir, '.doctrees')) + status = sys.stdout warning = sys.stderr error = sys.stderr - warnfile = None + + if opts.quiet: + status = None + if opts.really_quiet: + status = warning = None + if warning and opts.warnfile: + try: + warnfp = open(opts.warnfile, 'w') + except Exception as exc: + print('Error: Cannot open warning file %r: %s' % + (opts.warnfile, exc), file=sys.stderr) + sys.exit(1) + warning = Tee(warning, warnfp) + error = warning + confoverrides = {} - tags = [] - doctreedir = path.join(outdir, '.doctrees') - for opt, val in opts: - if opt == '-b': - buildername = val - elif opt == '-a': - if filenames: - usage(argv, 'Error: Cannot combine -a option and filenames.') - return 1 - 
force_all = True - elif opt == '-t': - tags.append(val) - elif opt == '-d': - doctreedir = abspath(val) - elif opt == '-c': - confdir = abspath(val) - if not path.isfile(path.join(confdir, 'conf.py')): - print('Error: Configuration directory doesn\'t contain conf.py file.', - file=sys.stderr) - return 1 - elif opt == '-C': - confdir = None - elif opt == '-D': + for val in opts.define: + try: + key, val = val.split('=') + except ValueError: + print('Error: -D option argument must be in the form name=value.', + file=sys.stderr) + return 1 + if likely_encoding and isinstance(val, binary_type): try: - key, val = val.split('=') - except ValueError: - print('Error: -D option argument must be in the form name=value.', - file=sys.stderr) - return 1 + val = val.decode(likely_encoding) + except UnicodeError: + pass + confoverrides[key] = val + + for val in opts.htmldefine: + try: + key, val = val.split('=') + except ValueError: + print('Error: -A option argument must be in the form name=value.', + file=sys.stderr) + return 1 + try: + val = int(val) + except ValueError: if likely_encoding and isinstance(val, binary_type): try: val = val.decode(likely_encoding) except UnicodeError: pass - confoverrides[key] = val - elif opt == '-A': - try: - key, val = val.split('=') - except ValueError: - print('Error: -A option argument must be in the form name=value.', - file=sys.stderr) - return 1 - try: - val = int(val) - except ValueError: - if likely_encoding and isinstance(val, binary_type): - try: - val = val.decode(likely_encoding) - except UnicodeError: - pass - confoverrides['html_context.%s' % key] = val - elif opt == '-n': - confoverrides['nitpicky'] = True - elif opt == '-N': - nocolor() - elif opt == '-E': - freshenv = True - elif opt == '-q': - status = None - elif opt == '-Q': - status = None - warning = None - elif opt == '-W': - warningiserror = True - elif opt == '-w': - warnfile = val - elif opt == '-P': - use_pdb = True - elif opt == '-T': - show_traceback = True - elif 
opt == '-v': - verbosity += 1 - show_traceback = True - elif opt == '-j': - try: - parallel = int(val) - except ValueError: - print('Error: -j option argument must be an integer.', - file=sys.stderr) - return 1 + confoverrides['html_context.%s' % key] = val - if warning and warnfile: - warnfp = open(warnfile, 'w') - warning = Tee(warning, warnfp) - error = warning - - if not path.isdir(outdir): - if status: - print('Making output directory...', file=status) - os.makedirs(outdir) + if opts.nitpicky: + confoverrides['nitpicky'] = True app = None try: - app = Sphinx(srcdir, confdir, outdir, doctreedir, buildername, - confoverrides, status, warning, freshenv, - warningiserror, tags, verbosity, parallel) - app.build(force_all, filenames) + app = Sphinx(srcdir, confdir, outdir, doctreedir, opts.builder, + confoverrides, status, warning, opts.freshenv, + opts.warningiserror, opts.tags, opts.verbosity, opts.jobs) + app.build(opts.force_all, filenames) return app.statuscode except (Exception, KeyboardInterrupt) as err: - if use_pdb: + if opts.pdb: import pdb print(red('Exception occurred while building, starting debugger:'), file=error) @@ -261,7 +254,7 @@ def main(argv): pdb.post_mortem(sys.exc_info()[2]) else: print(file=error) - if show_traceback: + if opts.verbosity or opts.traceback: traceback.print_exc(None, error) print(file=error) if isinstance(err, KeyboardInterrupt): diff --git a/sphinx/directives/__init__.py b/sphinx/directives/__init__.py index 52b638fe0..969426bc1 100644 --- a/sphinx/directives/__init__.py +++ b/sphinx/directives/__init__.py @@ -11,7 +11,8 @@ import re -from docutils.parsers.rst import Directive, directives +from docutils import nodes +from docutils.parsers.rst import Directive, directives, roles from sphinx import addnodes from sphinx.util.docfields import DocFieldTransformer @@ -162,6 +163,34 @@ class ObjectDescription(Directive): DescDirective = ObjectDescription +class DefaultRole(Directive): + """ + Set the default interpreted text role. 
Overridden from docutils. + """ + + optional_arguments = 1 + final_argument_whitespace = False + + def run(self): + if not self.arguments: + if '' in roles._roles: + # restore the "default" default role + del roles._roles[''] + return [] + role_name = self.arguments[0] + role, messages = roles.role(role_name, self.state_machine.language, + self.lineno, self.state.reporter) + if role is None: + error = self.state.reporter.error( + 'Unknown interpreted text role "%s".' % role_name, + nodes.literal_block(self.block_text, self.block_text), + line=self.lineno) + return messages + [error] + roles._roles[''] = role + self.state.document.settings.env.temp_data['default_role'] = role_name + return messages + + class DefaultDomain(Directive): """ Directive to (re-)set the default domain for this source file. @@ -186,6 +215,7 @@ class DefaultDomain(Directive): return [] +directives.register_directive('default-role', DefaultRole) directives.register_directive('default-domain', DefaultDomain) directives.register_directive('describe', ObjectDescription) # new, more consistent, name diff --git a/sphinx/directives/code.py b/sphinx/directives/code.py index 855437594..c61380ecc 100644 --- a/sphinx/directives/code.py +++ b/sphinx/directives/code.py @@ -108,7 +108,7 @@ class CodeBlock(Directive): return [document.reporter.warning(str(err), line=self.lineno)] else: hl_lines = None - + if 'dedent' in self.options: lines = code.split('\n') lines = dedent_lines(lines, self.options['dedent']) diff --git a/sphinx/domains/__init__.py b/sphinx/domains/__init__.py index 51b886fdf..66d4c677d 100644 --- a/sphinx/domains/__init__.py +++ b/sphinx/domains/__init__.py @@ -155,10 +155,13 @@ class Domain(object): self._role_cache = {} self._directive_cache = {} self._role2type = {} + self._type2role = {} for name, obj in iteritems(self.object_types): for rolename in obj.roles: self._role2type.setdefault(rolename, []).append(name) + self._type2role[name] = obj.roles[0] if obj.roles else '' 
self.objtypes_for_role = self._role2type.get + self.role_for_objtype = self._type2role.get def role(self, name): """Return a role adapter function that always gives the registered @@ -199,6 +202,14 @@ class Domain(object): """Remove traces of a document in the domain-specific inventories.""" pass + def merge_domaindata(self, docnames, otherdata): + """Merge in data regarding *docnames* from a different domaindata + inventory (coming from a subprocess in parallel builds). + """ + raise NotImplementedError('merge_domaindata must be implemented in %s ' + 'to be able to do parallel builds!' % + self.__class__) + def process_doc(self, env, docname, document): """Process a document after it is read by the environment.""" pass @@ -220,6 +231,22 @@ class Domain(object): """ pass + def resolve_any_xref(self, env, fromdocname, builder, target, node, contnode): + """Resolve the pending_xref *node* with the given *target*. + + The reference comes from an "any" or similar role, which means that we + don't know the type. Otherwise, the arguments are the same as for + :meth:`resolve_xref`. + + The method must return a list (potentially empty) of tuples + ``('domain:role', newnode)``, where ``'domain:role'`` is the name of a + role that could have created the same reference, e.g. ``'py:func'``. + ``newnode`` is what :meth:`resolve_xref` would return. + + .. versionadded:: 1.3 + """ + raise NotImplementedError + def get_objects(self): """Return an iterable of "object descriptions", which are tuples with five items: diff --git a/sphinx/domains/c.py b/sphinx/domains/c.py index 4d12c141a..0754e3172 100644 --- a/sphinx/domains/c.py +++ b/sphinx/domains/c.py @@ -130,7 +130,7 @@ class CObject(ObjectDescription): if m: name = m.group(1) - typename = self.env.temp_data.get('c:type') + typename = self.env.ref_context.get('c:type') if self.name == 'c:member' and typename: fullname = typename + '.' 
+ name else: @@ -212,12 +212,12 @@ class CObject(ObjectDescription): self.typename_set = False if self.name == 'c:type': if self.names: - self.env.temp_data['c:type'] = self.names[0] + self.env.ref_context['c:type'] = self.names[0] self.typename_set = True def after_content(self): if self.typename_set: - self.env.temp_data['c:type'] = None + self.env.ref_context.pop('c:type', None) class CXRefRole(XRefRole): @@ -269,6 +269,12 @@ class CDomain(Domain): if fn == docname: del self.data['objects'][fullname] + def merge_domaindata(self, docnames, otherdata): + # XXX check duplicates + for fullname, (fn, objtype) in otherdata['objects'].items(): + if fn in docnames: + self.data['objects'][fullname] = (fn, objtype) + def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode): # strip pointer asterisk @@ -279,6 +285,17 @@ class CDomain(Domain): return make_refnode(builder, fromdocname, obj[0], 'c.' + target, contnode, target) + def resolve_any_xref(self, env, fromdocname, builder, target, + node, contnode): + # strip pointer asterisk + target = target.rstrip(' *') + if target not in self.data['objects']: + return [] + obj = self.data['objects'][target] + return [('c:' + self.role_for_objtype(obj[1]), + make_refnode(builder, fromdocname, obj[0], 'c.' + target, + contnode, target))] + def get_objects(self): for refname, (docname, type) in list(self.data['objects'].items()): yield (refname, refname, type, docname, 'c.' + refname, 1) diff --git a/sphinx/domains/cpp.py b/sphinx/domains/cpp.py index 23bd469fd..d4455a222 100644 --- a/sphinx/domains/cpp.py +++ b/sphinx/domains/cpp.py @@ -7,11 +7,11 @@ :copyright: Copyright 2007-2014 by the Sphinx team, see AUTHORS. :license: BSD, see LICENSE for details. - + See http://www.nongnu.org/hcb/ for the grammar. See http://mentorembedded.github.io/cxx-abi/abi.html#mangling for the inspiration for the id generation. 
- + common grammar things: simple-declaration -> attribute-specifier-seq[opt] decl-specifier-seq[opt] @@ -20,7 +20,7 @@ # Use at most 1 init-declerator. -> decl-specifier-seq init-declerator -> decl-specifier-seq declerator initializer - + decl-specifier -> storage-class-specifier -> "static" (only for member_object and function_object) @@ -76,8 +76,8 @@ constant-expression | type-specifier-seq abstract-declerator | id-expression - - + + declerator -> ptr-declerator | noptr-declarator parameters-and-qualifiers trailing-return-type @@ -108,11 +108,11 @@ memberFunctionInit -> "=" "0" # (note: only "0" is allowed as the value, according to the standard, # right?) - - + + We additionally add the possibility for specifying the visibility as the first thing. - + type_object: goal: either a single type (e.g., "MyClass:Something_T" or a typedef-like @@ -126,14 +126,14 @@ -> decl-specifier-seq abstract-declarator[opt] grammar, typedef-like: no initilizer decl-specifier-seq declerator - - + + member_object: goal: as a type_object which must have a declerator, and optionally with a initializer grammar: decl-specifier-seq declerator initializer - + function_object: goal: a function declaration, TODO: what about templates? for now: skip grammar: no initializer @@ -141,7 +141,6 @@ """ import re -import traceback from copy import deepcopy from six import iteritems, text_type @@ -222,9 +221,9 @@ _id_operator = { 'delete[]': 'da', # the arguments will make the difference between unary and binary # '+(unary)' : 'ps', - #'-(unary)' : 'ng', - #'&(unary)' : 'ad', - #'*(unary)' : 'de', + # '-(unary)' : 'ng', + # '&(unary)' : 'ad', + # '*(unary)' : 'de', '~': 'co', '+': 'pl', '-': 'mi', @@ -319,7 +318,7 @@ class ASTBase(UnicodeMixin): def _verify_description_mode(mode): - if not mode in ('lastIsName', 'noneIsName', 'markType', 'param'): + if mode not in ('lastIsName', 'noneIsName', 'markType', 'param'): raise Exception("Description mode '%s' is invalid." 
% mode) @@ -328,7 +327,7 @@ class ASTOperatorBuildIn(ASTBase): self.op = op def get_id(self): - if not self.op in _id_operator: + if self.op not in _id_operator: raise Exception('Internal error: Build-in operator "%s" can not ' 'be mapped to an id.' % self.op) return _id_operator[self.op] @@ -434,7 +433,7 @@ class ASTNestedNameElement(ASTBase): '', refdomain='cpp', reftype='type', reftarget=targetText, modname=None, classname=None) if env: # during testing we don't have an env, do we? - pnode['cpp:parent'] = env.temp_data.get('cpp:parent') + pnode['cpp:parent'] = env.ref_context.get('cpp:parent') pnode += nodes.Text(text_type(self.identifier)) signode += pnode elif mode == 'lastIsName': @@ -532,7 +531,7 @@ class ASTTrailingTypeSpecFundamental(ASTBase): return self.name def get_id(self): - if not self.name in _id_fundamental: + if self.name not in _id_fundamental: raise Exception( 'Semi-internal error: Fundamental type "%s" can not be mapped ' 'to an id. Is it a true fundamental type? If not so, the ' @@ -866,7 +865,7 @@ class ASTDeclerator(ASTBase): isinstance(self.ptrOps[-1], ASTPtrOpParamPack)): return False else: - return self.declId != None + return self.declId is not None def __unicode__(self): res = [] @@ -949,7 +948,7 @@ class ASTType(ASTBase): _verify_description_mode(mode) self.declSpecs.describe_signature(signode, 'markType', env) if (self.decl.require_start_space() and - len(text_type(self.declSpecs)) > 0): + len(text_type(self.declSpecs)) > 0): signode += nodes.Text(' ') self.decl.describe_signature(signode, mode, env) @@ -1178,7 +1177,7 @@ class DefinitionParser(object): else: while not self.eof: if (len(symbols) == 0 and - self.current_char in ( + self.current_char in ( ',', '>')): break # TODO: actually implement nice handling @@ -1190,8 +1189,7 @@ class DefinitionParser(object): self.fail( 'Could not find end of constant ' 'template argument.') - value = self.definition[ - startPos:self.pos].strip() + value = 
self.definition[startPos:self.pos].strip() templateArgs.append(ASTTemplateArgConstant(value)) self.skip_ws() if self.skip_string('>'): @@ -1422,7 +1420,7 @@ class DefinitionParser(object): def _parse_declerator(self, named, paramMode=None, typed=True): if paramMode: - if not paramMode in ('type', 'function'): + if paramMode not in ('type', 'function'): raise Exception( "Internal error, unknown paramMode '%s'." % paramMode) ptrOps = [] @@ -1493,7 +1491,7 @@ class DefinitionParser(object): if outer == 'member': value = self.read_rest().strip() return ASTInitializer(value) - elif outer == None: # function parameter + elif outer is None: # function parameter symbols = [] startPos = self.pos self.skip_ws() @@ -1528,7 +1526,7 @@ class DefinitionParser(object): doesn't need to name the arguments """ if outer: # always named - if not outer in ('type', 'member', 'function'): + if outer not in ('type', 'member', 'function'): raise Exception('Internal error, unknown outer "%s".' % outer) assert not named @@ -1652,12 +1650,12 @@ class CPPObject(ObjectDescription): if theid not in self.state.document.ids: # the name is not unique, the first one will win objects = self.env.domaindata['cpp']['objects'] - if not name in objects: + if name not in objects: signode['names'].append(name) signode['ids'].append(theid) signode['first'] = (not self.names) self.state.document.note_explicit_target(signode) - if not name in objects: + if name not in objects: objects.setdefault(name, (self.env.docname, ast.objectType, theid)) # add the uninstantiated template if it doesn't exist @@ -1665,8 +1663,8 @@ class CPPObject(ObjectDescription): if uninstantiated != name and uninstantiated not in objects: signode['names'].append(uninstantiated) objects.setdefault(uninstantiated, ( - self.env.docname, ast.objectType, theid)) - self.env.temp_data['cpp:lastname'] = ast.prefixedName + self.env.docname, ast.objectType, theid)) + self.env.ref_context['cpp:lastname'] = ast.prefixedName indextext = 
self.get_index_text(name) if not re.compile(r'^[a-zA-Z0-9_]*$').match(theid): @@ -1693,7 +1691,7 @@ class CPPObject(ObjectDescription): raise ValueError self.describe_signature(signode, ast) - parent = self.env.temp_data.get('cpp:parent') + parent = self.env.ref_context.get('cpp:parent') if parent and len(parent) > 0: ast = ast.clone() ast.prefixedName = ast.name.prefix_nested_name(parent[-1]) @@ -1741,15 +1739,15 @@ class CPPClassObject(CPPObject): return _('%s (C++ class)') % name def before_content(self): - lastname = self.env.temp_data['cpp:lastname'] + lastname = self.env.ref_context['cpp:lastname'] assert lastname - if 'cpp:parent' in self.env.temp_data: - self.env.temp_data['cpp:parent'].append(lastname) + if 'cpp:parent' in self.env.ref_context: + self.env.ref_context['cpp:parent'].append(lastname) else: - self.env.temp_data['cpp:parent'] = [lastname] + self.env.ref_context['cpp:parent'] = [lastname] def after_content(self): - self.env.temp_data['cpp:parent'].pop() + self.env.ref_context['cpp:parent'].pop() def parse_definition(self, parser): return parser.parse_class_object() @@ -1774,7 +1772,7 @@ class CPPNamespaceObject(Directive): def run(self): env = self.state.document.settings.env if self.arguments[0].strip() in ('NULL', '0', 'nullptr'): - env.temp_data['cpp:parent'] = [] + env.ref_context['cpp:parent'] = [] else: parser = DefinitionParser(self.arguments[0]) try: @@ -1784,13 +1782,13 @@ class CPPNamespaceObject(Directive): self.state_machine.reporter.warning(e.description, line=self.lineno) else: - env.temp_data['cpp:parent'] = [prefix] + env.ref_context['cpp:parent'] = [prefix] return [] class CPPXRefRole(XRefRole): def process_link(self, env, refnode, has_explicit_title, title, target): - parent = env.temp_data.get('cpp:parent') + parent = env.ref_context.get('cpp:parent') if parent: refnode['cpp:parent'] = parent[:] if not has_explicit_title: @@ -1838,18 +1836,24 @@ class CPPDomain(Domain): if data[0] == docname: del self.data['objects'][fullname] 
- def resolve_xref(self, env, fromdocname, builder, - typ, target, node, contnode): + def merge_domaindata(self, docnames, otherdata): + # XXX check duplicates + for fullname, data in otherdata['objects'].items(): + if data[0] in docnames: + self.data['objects'][fullname] = data + + def _resolve_xref_inner(self, env, fromdocname, builder, + target, node, contnode, warn=True): def _create_refnode(nameAst): name = text_type(nameAst) if name not in self.data['objects']: # try dropping the last template name = nameAst.get_name_no_last_template() if name not in self.data['objects']: - return None + return None, None docname, objectType, id = self.data['objects'][name] return make_refnode(builder, fromdocname, docname, id, contnode, - name) + name), objectType parser = DefinitionParser(target) try: @@ -1858,21 +1862,35 @@ class CPPDomain(Domain): if not parser.eof: raise DefinitionError('') except DefinitionError: - env.warn_node('unparseable C++ definition: %r' % target, node) - return None + if warn: + env.warn_node('unparseable C++ definition: %r' % target, node) + return None, None # try as is the name is fully qualified - refNode = _create_refnode(nameAst) - if refNode: - return refNode + res = _create_refnode(nameAst) + if res[0]: + return res # try qualifying it with the parent parent = node.get('cpp:parent', None) if parent and len(parent) > 0: return _create_refnode(nameAst.prefix_nested_name(parent[-1])) else: - return None + return None, None + + def resolve_xref(self, env, fromdocname, builder, + typ, target, node, contnode): + return self._resolve_xref_inner(env, fromdocname, builder, target, node, + contnode)[0] + + def resolve_any_xref(self, env, fromdocname, builder, target, + node, contnode): + node, objtype = self._resolve_xref_inner(env, fromdocname, builder, + target, node, contnode, warn=False) + if node: + return [('cpp:' + self.role_for_objtype(objtype), node)] + return [] def get_objects(self): for refname, (docname, type, theid) in 
iteritems(self.data['objects']): - yield (refname, refname, type, docname, refname, 1) \ No newline at end of file + yield (refname, refname, type, docname, refname, 1) diff --git a/sphinx/domains/javascript.py b/sphinx/domains/javascript.py index 2718b8727..af215fd6e 100644 --- a/sphinx/domains/javascript.py +++ b/sphinx/domains/javascript.py @@ -45,7 +45,7 @@ class JSObject(ObjectDescription): nameprefix = None name = prefix - objectname = self.env.temp_data.get('js:object') + objectname = self.env.ref_context.get('js:object') if nameprefix: if objectname: # someone documenting the method of an attribute of the current @@ -77,7 +77,7 @@ class JSObject(ObjectDescription): def add_target_and_index(self, name_obj, sig, signode): objectname = self.options.get( - 'object', self.env.temp_data.get('js:object')) + 'object', self.env.ref_context.get('js:object')) fullname = name_obj[0] if fullname not in self.state.document.ids: signode['names'].append(fullname) @@ -140,7 +140,7 @@ class JSConstructor(JSCallable): class JSXRefRole(XRefRole): def process_link(self, env, refnode, has_explicit_title, title, target): # basically what sphinx.domains.python.PyXRefRole does - refnode['js:object'] = env.temp_data.get('js:object') + refnode['js:object'] = env.ref_context.get('js:object') if not has_explicit_title: title = title.lstrip('.') target = target.lstrip('~') @@ -179,7 +179,7 @@ class JavaScriptDomain(Domain): 'attr': JSXRefRole(), } initial_data = { - 'objects': {}, # fullname -> docname, objtype + 'objects': {}, # fullname -> docname, objtype } def clear_doc(self, docname): @@ -187,6 +187,12 @@ class JavaScriptDomain(Domain): if fn == docname: del self.data['objects'][fullname] + def merge_domaindata(self, docnames, otherdata): + # XXX check duplicates + for fullname, (fn, objtype) in otherdata['objects'].items(): + if fn in docnames: + self.data['objects'][fullname] = (fn, objtype) + def find_obj(self, env, obj, name, typ, searchorder=0): if name[-2:] == '()': name = 
name[:-2] @@ -214,6 +220,16 @@ class JavaScriptDomain(Domain): return make_refnode(builder, fromdocname, obj[0], name.replace('$', '_S_'), contnode, name) + def resolve_any_xref(self, env, fromdocname, builder, target, node, + contnode): + objectname = node.get('js:object') + name, obj = self.find_obj(env, objectname, target, None, 1) + if not obj: + return [] + return [('js:' + self.role_for_objtype(obj[1]), + make_refnode(builder, fromdocname, obj[0], + name.replace('$', '_S_'), contnode, name))] + def get_objects(self): for refname, (docname, type) in list(self.data['objects'].items()): yield refname, refname, type, docname, \ diff --git a/sphinx/domains/python.py b/sphinx/domains/python.py index a7a93cb1e..4e08eba9b 100644 --- a/sphinx/domains/python.py +++ b/sphinx/domains/python.py @@ -156,8 +156,8 @@ class PyObject(ObjectDescription): # determine module and class name (if applicable), as well as full name modname = self.options.get( - 'module', self.env.temp_data.get('py:module')) - classname = self.env.temp_data.get('py:class') + 'module', self.env.ref_context.get('py:module')) + classname = self.env.ref_context.get('py:class') if classname: add_module = False if name_prefix and name_prefix.startswith(classname): @@ -194,7 +194,7 @@ class PyObject(ObjectDescription): # 'exceptions' module. elif add_module and self.env.config.add_module_names: modname = self.options.get( - 'module', self.env.temp_data.get('py:module')) + 'module', self.env.ref_context.get('py:module')) if modname and modname != 'exceptions': nodetext = modname + '.' signode += addnodes.desc_addname(nodetext, nodetext) @@ -225,7 +225,7 @@ class PyObject(ObjectDescription): def add_target_and_index(self, name_cls, sig, signode): modname = self.options.get( - 'module', self.env.temp_data.get('py:module')) + 'module', self.env.ref_context.get('py:module')) fullname = (modname and modname + '.' 
or '') + name_cls[0] # note target if fullname not in self.state.document.ids: @@ -254,7 +254,7 @@ class PyObject(ObjectDescription): def after_content(self): if self.clsname_set: - self.env.temp_data['py:class'] = None + self.env.ref_context.pop('py:class', None) class PyModulelevel(PyObject): @@ -299,7 +299,7 @@ class PyClasslike(PyObject): def before_content(self): PyObject.before_content(self) if self.names: - self.env.temp_data['py:class'] = self.names[0][0] + self.env.ref_context['py:class'] = self.names[0][0] self.clsname_set = True @@ -377,8 +377,8 @@ class PyClassmember(PyObject): def before_content(self): PyObject.before_content(self) lastname = self.names and self.names[-1][1] - if lastname and not self.env.temp_data.get('py:class'): - self.env.temp_data['py:class'] = lastname.strip('.') + if lastname and not self.env.ref_context.get('py:class'): + self.env.ref_context['py:class'] = lastname.strip('.') self.clsname_set = True @@ -434,7 +434,7 @@ class PyModule(Directive): env = self.state.document.settings.env modname = self.arguments[0].strip() noindex = 'noindex' in self.options - env.temp_data['py:module'] = modname + env.ref_context['py:module'] = modname ret = [] if not noindex: env.domaindata['py']['modules'][modname] = \ @@ -472,16 +472,16 @@ class PyCurrentModule(Directive): env = self.state.document.settings.env modname = self.arguments[0].strip() if modname == 'None': - env.temp_data['py:module'] = None + env.ref_context.pop('py:module', None) else: - env.temp_data['py:module'] = modname + env.ref_context['py:module'] = modname return [] class PyXRefRole(XRefRole): def process_link(self, env, refnode, has_explicit_title, title, target): - refnode['py:module'] = env.temp_data.get('py:module') - refnode['py:class'] = env.temp_data.get('py:class') + refnode['py:module'] = env.ref_context.get('py:module') + refnode['py:class'] = env.ref_context.get('py:class') if not has_explicit_title: title = title.lstrip('.') # only has a meaning for the target 
target = target.lstrip('~') # only has a meaning for the title @@ -627,6 +627,15 @@ class PythonDomain(Domain): if fn == docname: del self.data['modules'][modname] + def merge_domaindata(self, docnames, otherdata): + # XXX check duplicates? + for fullname, (fn, objtype) in otherdata['objects'].items(): + if fn in docnames: + self.data['objects'][fullname] = (fn, objtype) + for modname, data in otherdata['modules'].items(): + if data[0] in docnames: + self.data['modules'][modname] = data + def find_obj(self, env, modname, classname, name, type, searchmode=0): """Find a Python object for "name", perhaps using the given module and/or classname. Returns a list of (name, object entry) tuples. @@ -643,7 +652,10 @@ class PythonDomain(Domain): newname = None if searchmode == 1: - objtypes = self.objtypes_for_role(type) + if type is None: + objtypes = list(self.object_types) + else: + objtypes = self.objtypes_for_role(type) if objtypes is not None: if modname and classname: fullname = modname + '.' + classname + '.' 
+ name @@ -704,22 +716,44 @@ class PythonDomain(Domain): name, obj = matches[0] if obj[1] == 'module': - # get additional info for modules - docname, synopsis, platform, deprecated = self.data['modules'][name] - assert docname == obj[0] - title = name - if synopsis: - title += ': ' + synopsis - if deprecated: - title += _(' (deprecated)') - if platform: - title += ' (' + platform + ')' - return make_refnode(builder, fromdocname, docname, - 'module-' + name, contnode, title) + return self._make_module_refnode(builder, fromdocname, name, + contnode) else: return make_refnode(builder, fromdocname, obj[0], name, contnode, name) + def resolve_any_xref(self, env, fromdocname, builder, target, + node, contnode): + modname = node.get('py:module') + clsname = node.get('py:class') + results = [] + + # always search in "refspecific" mode with the :any: role + matches = self.find_obj(env, modname, clsname, target, None, 1) + for name, obj in matches: + if obj[1] == 'module': + results.append(('py:mod', + self._make_module_refnode(builder, fromdocname, + name, contnode))) + else: + results.append(('py:' + self.role_for_objtype(obj[1]), + make_refnode(builder, fromdocname, obj[0], name, + contnode, name))) + return results + + def _make_module_refnode(self, builder, fromdocname, name, contnode): + # get additional info for modules + docname, synopsis, platform, deprecated = self.data['modules'][name] + title = name + if synopsis: + title += ': ' + synopsis + if deprecated: + title += _(' (deprecated)') + if platform: + title += ' (' + platform + ')' + return make_refnode(builder, fromdocname, docname, + 'module-' + name, contnode, title) + def get_objects(self): for modname, info in iteritems(self.data['modules']): yield (modname, modname, 'module', info[0], 'module-' + modname, 0) diff --git a/sphinx/domains/rst.py b/sphinx/domains/rst.py index e213211ab..2c304d0c7 100644 --- a/sphinx/domains/rst.py +++ b/sphinx/domains/rst.py @@ -123,6 +123,12 @@ class ReSTDomain(Domain): if 
doc == docname: del self.data['objects'][typ, name] + def merge_domaindata(self, docnames, otherdata): + # XXX check duplicates + for (typ, name), doc in otherdata['objects'].items(): + if doc in docnames: + self.data['objects'][typ, name] = doc + def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode): objects = self.data['objects'] @@ -134,6 +140,19 @@ class ReSTDomain(Domain): objtype + '-' + target, contnode, target + ' ' + objtype) + def resolve_any_xref(self, env, fromdocname, builder, target, + node, contnode): + objects = self.data['objects'] + results = [] + for objtype in self.object_types: + if (objtype, target) in self.data['objects']: + results.append(('rst:' + self.role_for_objtype(objtype), + make_refnode(builder, fromdocname, + objects[objtype, target], + objtype + '-' + target, + contnode, target + ' ' + objtype))) + return results + def get_objects(self): for (typ, name), docname in iteritems(self.data['objects']): yield name, name, typ, docname, typ + '-' + name, 1 diff --git a/sphinx/domains/std.py b/sphinx/domains/std.py index 7a769221e..5beb9feb7 100644 --- a/sphinx/domains/std.py +++ b/sphinx/domains/std.py @@ -28,7 +28,9 @@ from sphinx.util.compat import Directive # RE for option descriptions -option_desc_re = re.compile(r'((?:/|-|--)?[-_a-zA-Z0-9]+)(\s*.*)') +option_desc_re = re.compile(r'((?:/|--|-|\+)?[-?@#_a-zA-Z0-9]+)(=?\s*.*)') +# RE for grammar tokens +token_re = re.compile('`(\w+)`', re.U) class GenericObject(ObjectDescription): @@ -144,8 +146,9 @@ class Cmdoption(ObjectDescription): self.env.warn( self.env.docname, 'Malformed option description %r, should ' - 'look like "opt", "-opt args", "--opt args" or ' - '"/opt args"' % potential_option, self.lineno) + 'look like "opt", "-opt args", "--opt args", ' + '"/opt args" or "+opt args"' % potential_option, + self.lineno) continue optname, args = m.groups() if count: @@ -163,7 +166,7 @@ class Cmdoption(ObjectDescription): return firstname def 
add_target_and_index(self, firstname, sig, signode): - currprogram = self.env.temp_data.get('std:program') + currprogram = self.env.ref_context.get('std:program') for optname in signode.get('allnames', []): targetname = optname.replace('/', '-') if not targetname.startswith('-'): @@ -198,36 +201,19 @@ class Program(Directive): env = self.state.document.settings.env program = ws_re.sub('-', self.arguments[0].strip()) if program == 'None': - env.temp_data['std:program'] = None + env.ref_context.pop('std:program', None) else: - env.temp_data['std:program'] = program + env.ref_context['std:program'] = program return [] class OptionXRefRole(XRefRole): - innernodeclass = addnodes.literal_emphasis - - def _split(self, text, refnode, env): - try: - program, target = re.split(' (?=-|--|/)', text, 1) - except ValueError: - env.warn_node('Malformed :option: %r, does not contain option ' - 'marker - or -- or /' % text, refnode) - return None, text - else: - program = ws_re.sub('-', program) - return program, target - def process_link(self, env, refnode, has_explicit_title, title, target): - program = env.temp_data.get('std:program') - if not has_explicit_title: - if ' ' in title and not (title.startswith('/') or - title.startswith('-')): - program, target = self._split(title, refnode, env) - target = target.strip() - elif ' ' in target: - program, target = self._split(target, refnode, env) - refnode['refprogram'] = program + # validate content + if not re.match('(.+ )?[-/+]', target): + env.warn_node('Malformed :option: %r, does not contain option ' + 'marker - or -- or / or +' % target, refnode) + refnode['std:program'] = env.ref_context.get('std:program') return title, target @@ -327,7 +313,7 @@ class Glossary(Directive): else: messages.append(self.state.reporter.system_message( 2, 'glossary seems to be misformatted, check ' - 'indentation', source=source, line=lineno)) + 'indentation', source=source, line=lineno)) else: if not in_definition: # first line of definition, 
determines indentation @@ -338,7 +324,7 @@ class Glossary(Directive): else: messages.append(self.state.reporter.system_message( 2, 'glossary seems to be misformatted, check ' - 'indentation', source=source, line=lineno)) + 'indentation', source=source, line=lineno)) was_empty = False # now, parse all the entries into a big definition list @@ -359,7 +345,7 @@ class Glossary(Directive): tmp.source = source tmp.line = lineno new_id, termtext, new_termnodes = \ - make_termnodes_from_paragraph_node(env, tmp) + make_termnodes_from_paragraph_node(env, tmp) ids.append(new_id) termtexts.append(termtext) termnodes.extend(new_termnodes) @@ -386,8 +372,6 @@ class Glossary(Directive): return messages + [node] -token_re = re.compile('`(\w+)`', re.U) - def token_xrefs(text): retnodes = [] pos = 0 @@ -472,7 +456,7 @@ class StandardDomain(Domain): 'productionlist': ProductionList, } roles = { - 'option': OptionXRefRole(innernodeclass=addnodes.literal_emphasis), + 'option': OptionXRefRole(), 'envvar': EnvVarXRefRole(), # links to tokens in grammar productions 'token': XRefRole(), @@ -522,6 +506,21 @@ class StandardDomain(Domain): if fn == docname: del self.data['anonlabels'][key] + def merge_domaindata(self, docnames, otherdata): + # XXX duplicates? 
+ for key, data in otherdata['progoptions'].items(): + if data[0] in docnames: + self.data['progoptions'][key] = data + for key, data in otherdata['objects'].items(): + if data[0] in docnames: + self.data['objects'][key] = data + for key, data in otherdata['labels'].items(): + if data[0] in docnames: + self.data['labels'][key] = data + for key, data in otherdata['anonlabels'].items(): + if data[0] in docnames: + self.data['anonlabels'][key] = data + def process_doc(self, env, docname, document): labels, anonlabels = self.data['labels'], self.data['anonlabels'] for name, explicit in iteritems(document.nametypes): @@ -532,7 +531,7 @@ class StandardDomain(Domain): continue node = document.ids[labelid] if name.isdigit() or 'refuri' in node or \ - node.tagname.startswith('desc_'): + node.tagname.startswith('desc_'): # ignore footnote labels, labels automatically generated from a # link and object descriptions continue @@ -541,7 +540,7 @@ class StandardDomain(Domain): 'in ' + env.doc2path(labels[name][0]), node) anonlabels[name] = docname, labelid if node.tagname == 'section': - sectname = clean_astext(node[0]) # node[0] == title node + sectname = clean_astext(node[0]) # node[0] == title node elif node.tagname == 'figure': for n in node: if n.tagname == 'caption': @@ -579,13 +578,13 @@ class StandardDomain(Domain): if node['refexplicit']: # reference to anonymous label; the reference uses # the supplied link caption - docname, labelid = self.data['anonlabels'].get(target, ('','')) + docname, labelid = self.data['anonlabels'].get(target, ('', '')) sectname = node.astext() else: # reference to named label; the final node will # contain the section name after the label docname, labelid, sectname = self.data['labels'].get(target, - ('','','')) + ('', '', '')) if not docname: return None newnode = nodes.reference('', '', internal=True) @@ -607,13 +606,22 @@ class StandardDomain(Domain): return newnode elif typ == 'keyword': # keywords are oddballs: they are referenced by 
named labels - docname, labelid, _ = self.data['labels'].get(target, ('','','')) + docname, labelid, _ = self.data['labels'].get(target, ('', '', '')) if not docname: return None return make_refnode(builder, fromdocname, docname, labelid, contnode) elif typ == 'option': - progname = node['refprogram'] + target = target.strip() + # most obvious thing: we are a flag option without program + if target.startswith(('-', '/', '+')): + progname = node.get('std:program') + else: + try: + progname, target = re.split(r' (?=-|--|/|\+)', target, 1) + except ValueError: + return None + progname = ws_re.sub('-', progname.strip()) docname, labelid = self.data['progoptions'].get((progname, target), ('', '')) if not docname: @@ -633,6 +641,28 @@ class StandardDomain(Domain): return make_refnode(builder, fromdocname, docname, labelid, contnode) + def resolve_any_xref(self, env, fromdocname, builder, target, + node, contnode): + results = [] + ltarget = target.lower() # :ref: lowercases its target automatically + for role in ('ref', 'option'): # do not try "keyword" + res = self.resolve_xref(env, fromdocname, builder, role, + ltarget if role == 'ref' else target, + node, contnode) + if res: + results.append(('std:' + role, res)) + # all others + for objtype in self.object_types: + key = (objtype, target) + if objtype == 'term': + key = (objtype, ltarget) + if key in self.data['objects']: + docname, labelid = self.data['objects'][key] + results.append(('std:' + self.role_for_objtype(objtype), + make_refnode(builder, fromdocname, docname, + labelid, contnode))) + return results + def get_objects(self): for (prog, option), info in iteritems(self.data['progoptions']): yield (option, option, 'option', info[0], info[1], 1) diff --git a/sphinx/environment.py b/sphinx/environment.py index d51e7a168..d183710a9 100644 --- a/sphinx/environment.py +++ b/sphinx/environment.py @@ -33,26 +33,31 @@ from docutils.parsers.rst import roles, directives from docutils.parsers.rst.languages import en as 
english from docutils.parsers.rst.directives.html import MetaBody from docutils.writers import UnfilteredWriter +from docutils.frontend import OptionParser from sphinx import addnodes from sphinx.util import url_re, get_matching_docs, docname_join, split_into, \ - FilenameUniqDict + FilenameUniqDict from sphinx.util.nodes import clean_astext, make_refnode, WarningStream -from sphinx.util.osutil import SEP, fs_encoding, find_catalog_files +from sphinx.util.osutil import SEP, find_catalog_files, getcwd, fs_encoding +from sphinx.util.console import bold, purple from sphinx.util.matching import compile_matchers +from sphinx.util.parallel import ParallelTasks, parallel_available, make_chunks from sphinx.util.websupport import is_commentable from sphinx.errors import SphinxError, ExtensionError from sphinx.locale import _ from sphinx.versioning import add_uids, merge_doctrees from sphinx.transforms import DefaultSubstitutions, MoveModuleTargets, \ - HandleCodeBlocks, SortIds, CitationReferences, Locale, \ - RemoveTranslatableInline, SphinxContentsFilter + HandleCodeBlocks, SortIds, CitationReferences, Locale, \ + RemoveTranslatableInline, SphinxContentsFilter orig_role_function = roles.role orig_directive_function = directives.directive -class ElementLookupError(Exception): pass + +class ElementLookupError(Exception): + pass default_settings = { @@ -69,7 +74,9 @@ default_settings = { # This is increased every time an environment attribute is added # or changed to properly invalidate pickle files. 
-ENV_VERSION = 42 + (sys.version_info[0] - 2) +# +# NOTE: increase base version by 2 to have distinct numbers for Py2 and 3 +ENV_VERSION = 44 + (sys.version_info[0] - 2) dummy_reporter = Reporter('', 4, 4) @@ -105,6 +112,33 @@ class SphinxDummyWriter(UnfilteredWriter): pass +class SphinxFileInput(FileInput): + def __init__(self, app, env, *args, **kwds): + self.app = app + self.env = env + # don't call sys.exit() on IOErrors + kwds['handle_io_errors'] = False + kwds['error_handler'] = 'sphinx' # py3: handle error on open. + FileInput.__init__(self, *args, **kwds) + + def decode(self, data): + if isinstance(data, text_type): # py3: `data` already decoded. + return data + return data.decode(self.encoding, 'sphinx') # py2: decoding + + def read(self): + data = FileInput.read(self) + if self.app: + arg = [data] + self.app.emit('source-read', self.env.docname, arg) + data = arg[0] + if self.env.config.rst_epilog: + data = data + '\n' + self.env.config.rst_epilog + '\n' + if self.env.config.rst_prolog: + data = self.env.config.rst_prolog + '\n' + data + return data + + class BuildEnvironment: """ The environment in which the ReST files are translated. @@ -122,7 +156,7 @@ class BuildEnvironment: finally: picklefile.close() if env.version != ENV_VERSION: - raise IOError('env version not current') + raise IOError('build environment version not current') env.config.values = config.values return env @@ -138,9 +172,9 @@ class BuildEnvironment: # remove potentially pickling-problematic values from config for key, val in list(vars(self.config).items()): if key.startswith('_') or \ - isinstance(val, types.ModuleType) or \ - isinstance(val, types.FunctionType) or \ - isinstance(val, class_types): + isinstance(val, types.ModuleType) or \ + isinstance(val, types.FunctionType) or \ + isinstance(val, class_types): del self.config[key] try: pickle.dump(self, picklefile, pickle.HIGHEST_PROTOCOL) @@ -181,8 +215,8 @@ class BuildEnvironment: # the source suffix. 
self.found_docs = set() # contains all existing docnames - self.all_docs = {} # docname -> mtime at the time of build - # contains all built docnames + self.all_docs = {} # docname -> mtime at the time of reading + # contains all read docnames self.dependencies = {} # docname -> set of dependent file # names, relative to documentation root self.reread_always = set() # docnames to re-read unconditionally on @@ -223,6 +257,10 @@ class BuildEnvironment: # temporary data storage while reading a document self.temp_data = {} + # context for cross-references (e.g. current module or class) + # this is similar to temp_data, but will for example be copied to + # attributes of "any" cross references + self.ref_context = {} def set_warnfunc(self, func): self._warnfunc = func @@ -292,6 +330,50 @@ class BuildEnvironment: for domain in self.domains.values(): domain.clear_doc(docname) + def merge_info_from(self, docnames, other, app): + """Merge global information gathered about *docnames* while reading them + from the *other* environment. + + This possibly comes from a parallel build process. 
+ """ + docnames = set(docnames) + for docname in docnames: + self.all_docs[docname] = other.all_docs[docname] + if docname in other.reread_always: + self.reread_always.add(docname) + self.metadata[docname] = other.metadata[docname] + if docname in other.dependencies: + self.dependencies[docname] = other.dependencies[docname] + self.titles[docname] = other.titles[docname] + self.longtitles[docname] = other.longtitles[docname] + self.tocs[docname] = other.tocs[docname] + self.toc_num_entries[docname] = other.toc_num_entries[docname] + # toc_secnumbers is not assigned during read + if docname in other.toctree_includes: + self.toctree_includes[docname] = other.toctree_includes[docname] + self.indexentries[docname] = other.indexentries[docname] + if docname in other.glob_toctrees: + self.glob_toctrees.add(docname) + if docname in other.numbered_toctrees: + self.numbered_toctrees.add(docname) + + self.images.merge_other(docnames, other.images) + self.dlfiles.merge_other(docnames, other.dlfiles) + + for subfn, fnset in other.files_to_rebuild.items(): + self.files_to_rebuild.setdefault(subfn, set()).update(fnset & docnames) + for key, data in other.citations.items(): + # XXX duplicates? + if data[0] in docnames: + self.citations[key] = data + for version, changes in other.versionchanges.items(): + self.versionchanges.setdefault(version, []).extend( + change for change in changes if change[1] in docnames) + + for domainname, domain in self.domains.items(): + domain.merge_domaindata(docnames, other.domaindata[domainname]) + app.emit('env-merge-info', self, docnames, other) + def doc2path(self, docname, base=True, suffix=None): """Return the filename for the document name. @@ -407,13 +489,11 @@ class BuildEnvironment: return added, changed, removed - def update(self, config, srcdir, doctreedir, app=None): + def update(self, config, srcdir, doctreedir, app): """(Re-)read all files new or changed since last update. 
- Returns a summary, the total count of documents to reread and an - iterator that yields docnames as it processes them. Store all - environment docnames in the canonical format (ie using SEP as a - separator in place of os.path.sep). + Store all environment docnames in the canonical format (ie using SEP as + a separator in place of os.path.sep). """ config_changed = False if self.config is None: @@ -445,6 +525,8 @@ class BuildEnvironment: # this cache also needs to be updated every time self._nitpick_ignore = set(self.config.nitpick_ignore) + app.info(bold('updating environment: '), nonl=1) + added, changed, removed = self.get_outdated_files(config_changed) # allow user intervention as well @@ -459,30 +541,98 @@ class BuildEnvironment: msg += '%s added, %s changed, %s removed' % (len(added), len(changed), len(removed)) + app.info(msg) - def update_generator(): + self.app = app + + # clear all files no longer present + for docname in removed: + app.emit('env-purge-doc', self, docname) + self.clear_doc(docname) + + # read all new and changed files + docnames = sorted(added | changed) + # allow changing and reordering the list of docs to read + app.emit('env-before-read-docs', self, docnames) + + # check if we should do parallel or serial read + par_ok = False + if parallel_available and len(docnames) > 5 and app.parallel > 1: + par_ok = True + for extname, md in app._extension_metadata.items(): + ext_ok = md.get('parallel_read_safe') + if ext_ok: + continue + if ext_ok is None: + app.warn('the %s extension does not declare if it ' + 'is safe for parallel reading, assuming it ' + 'isn\'t - please ask the extension author to ' + 'check and make it explicit' % extname) + app.warn('doing serial read') + else: + app.warn('the %s extension is not safe for parallel ' + 'reading, doing serial read' % extname) + par_ok = False + break + if par_ok: + self._read_parallel(docnames, app, nproc=app.parallel) + else: + self._read_serial(docnames, app) + + if config.master_doc not 
in self.all_docs: + self.warn(None, 'master file %s not found' % + self.doc2path(config.master_doc)) + + self.app = None + app.emit('env-updated', self) + return docnames + + def _read_serial(self, docnames, app): + for docname in app.status_iterator(docnames, 'reading sources... ', + purple, len(docnames)): + # remove all inventory entries for that file + app.emit('env-purge-doc', self, docname) + self.clear_doc(docname) + self.read_doc(docname, app) + + def _read_parallel(self, docnames, app, nproc): + # clear all outdated docs at once + for docname in docnames: + app.emit('env-purge-doc', self, docname) + self.clear_doc(docname) + + def read_process(docs): self.app = app + self.warnings = [] + self.set_warnfunc(lambda *args: self.warnings.append(args)) + for docname in docs: + self.read_doc(docname, app) + # allow pickling self to send it back + self.set_warnfunc(None) + del self.app + del self.domains + del self.config.values + del self.config + return self - # clear all files no longer present - for docname in removed: - if app: - app.emit('env-purge-doc', self, docname) - self.clear_doc(docname) + def merge(docs, otherenv): + warnings.extend(otherenv.warnings) + self.merge_info_from(docs, otherenv, app) - # read all new and changed files - for docname in sorted(added | changed): - yield docname - self.read_doc(docname, app=app) + tasks = ParallelTasks(nproc) + chunks = make_chunks(docnames, nproc) - if config.master_doc not in self.all_docs: - self.warn(None, 'master file %s not found' % - self.doc2path(config.master_doc)) + warnings = [] + for chunk in app.status_iterator( + chunks, 'reading sources... 
', purple, len(chunks)): + tasks.add_task(read_process, chunk, merge) - self.app = None - if app: - app.emit('env-updated', self) + # make sure all threads have finished + app.info(bold('waiting for workers...')) + tasks.join() - return msg, len(added | changed), update_generator() + for warning in warnings: + self._warnfunc(*warning) def check_dependents(self, already): to_rewrite = self.assign_section_numbers() @@ -496,7 +646,8 @@ class BuildEnvironment: """Custom decoding error handler that warns and replaces.""" linestart = error.object.rfind(b'\n', 0, error.start) lineend = error.object.find(b'\n', error.start) - if lineend == -1: lineend = len(error.object) + if lineend == -1: + lineend = len(error.object) lineno = error.object.count(b'\n', 0, error.start) + 1 self.warn(self.docname, 'undecodable source characters, ' 'replacing with "?": %r' % @@ -550,19 +701,8 @@ class BuildEnvironment: directives.directive = directive roles.role = role - def read_doc(self, docname, src_path=None, save_parsed=True, app=None): - """Parse a file and add/update inventory entries for the doctree. - - If srcpath is given, read from a different source file. 
- """ - # remove all inventory entries for that file - if app: - app.emit('env-purge-doc', self, docname) - - self.clear_doc(docname) - - if src_path is None: - src_path = self.doc2path(docname) + def read_doc(self, docname, app=None): + """Parse a file and add/update inventory entries for the doctree.""" self.temp_data['docname'] = docname # defaults to the global default, but can be re-set in a document @@ -576,6 +716,12 @@ class BuildEnvironment: self.patch_lookup_functions() + docutilsconf = path.join(self.srcdir, 'docutils.conf') + # read docutils.conf from source dir, not from current dir + OptionParser.standard_config_files[1] = docutilsconf + if path.isfile(docutilsconf): + self.note_dependency(docutilsconf) + if self.config.default_role: role_fn, messages = roles.role(self.config.default_role, english, 0, dummy_reporter) @@ -587,38 +733,17 @@ class BuildEnvironment: codecs.register_error('sphinx', self.warn_and_replace) - class SphinxSourceClass(FileInput): - def __init__(self_, *args, **kwds): - # don't call sys.exit() on IOErrors - kwds['handle_io_errors'] = False - kwds['error_handler'] = 'sphinx' # py3: handle error on open. - FileInput.__init__(self_, *args, **kwds) - - def decode(self_, data): - if isinstance(data, text_type): # py3: `data` already decoded. 
- return data - return data.decode(self_.encoding, 'sphinx') # py2: decoding - - def read(self_): - data = FileInput.read(self_) - if app: - arg = [data] - app.emit('source-read', docname, arg) - data = arg[0] - if self.config.rst_epilog: - data = data + '\n' + self.config.rst_epilog + '\n' - if self.config.rst_prolog: - data = self.config.rst_prolog + '\n' + data - return data - # publish manually pub = Publisher(reader=SphinxStandaloneReader(), writer=SphinxDummyWriter(), - source_class=SphinxSourceClass, destination_class=NullOutput) pub.set_components(None, 'restructuredtext', None) pub.process_programmatic_settings(None, self.settings, None) - pub.set_source(None, src_path) + src_path = self.doc2path(docname) + source = SphinxFileInput(app, self, source=None, source_path=src_path, + encoding=self.config.source_encoding) + pub.source = source + pub.settings._source = src_path pub.set_destination(None, None) pub.publish() doctree = pub.document @@ -641,12 +766,12 @@ class BuildEnvironment: if app: app.emit('doctree-read', doctree) - # store time of build, for outdated files detection + # store time of reading, for outdated files detection # (Some filesystems have coarse timestamp resolution; # therefore time.time() can be older than filesystem's timestamp. # For example, FAT32 has 2sec timestamp resolution.) 
self.all_docs[docname] = max( - time.time(), path.getmtime(self.doc2path(docname))) + time.time(), path.getmtime(self.doc2path(docname))) if self.versioning_condition: # get old doctree @@ -679,21 +804,20 @@ class BuildEnvironment: # cleanup self.temp_data.clear() + self.ref_context.clear() + roles._roles.pop('', None) # if a document has set a local default role - if save_parsed: - # save the parsed doctree - doctree_filename = self.doc2path(docname, self.doctreedir, - '.doctree') - dirname = path.dirname(doctree_filename) - if not path.isdir(dirname): - os.makedirs(dirname) - f = open(doctree_filename, 'wb') - try: - pickle.dump(doctree, f, pickle.HIGHEST_PROTOCOL) - finally: - f.close() - else: - return doctree + # save the parsed doctree + doctree_filename = self.doc2path(docname, self.doctreedir, + '.doctree') + dirname = path.dirname(doctree_filename) + if not path.isdir(dirname): + os.makedirs(dirname) + f = open(doctree_filename, 'wb') + try: + pickle.dump(doctree, f, pickle.HIGHEST_PROTOCOL) + finally: + f.close() # utilities to use while reading a document @@ -704,13 +828,17 @@ class BuildEnvironment: @property def currmodule(self): - """Backwards compatible alias.""" - return self.temp_data.get('py:module') + """Backwards compatible alias. Will be removed.""" + self.warn(self.docname, 'env.currmodule is being referenced by an ' + 'extension; this API will be removed in the future') + return self.ref_context.get('py:module') @property def currclass(self): - """Backwards compatible alias.""" - return self.temp_data.get('py:class') + """Backwards compatible alias. Will be removed.""" + self.warn(self.docname, 'env.currclass is being referenced by an ' + 'extension; this API will be removed in the future') + return self.ref_context.get('py:class') def new_serialno(self, category=''): """Return a serial number, e.g. for index entry targets. 
@@ -740,7 +868,7 @@ class BuildEnvironment: def note_versionchange(self, type, version, node, lineno): self.versionchanges.setdefault(version, []).append( (type, self.temp_data['docname'], lineno, - self.temp_data.get('py:module'), + self.ref_context.get('py:module'), self.temp_data.get('object'), node.astext())) # post-processing of read doctrees @@ -755,7 +883,7 @@ class BuildEnvironment: def process_dependencies(self, docname, doctree): """Process docutils-generated dependency info.""" - cwd = os.getcwd() + cwd = getcwd() frompath = path.join(path.normpath(self.srcdir), 'dummy') deps = doctree.settings.record_dependencies if not deps: @@ -763,6 +891,8 @@ class BuildEnvironment: for dep in deps.list: # the dependency path is relative to the working dir, so get # one relative to the srcdir + if isinstance(dep, bytes): + dep = dep.decode(fs_encoding) relpath = relative_path(frompath, path.normpath(path.join(cwd, dep))) self.dependencies.setdefault(docname, set()).add(relpath) @@ -846,7 +976,7 @@ class BuildEnvironment: # nodes are multiply inherited... if isinstance(node, nodes.authors): md['authors'] = [author.astext() for author in node] - elif isinstance(node, nodes.TextElement): # e.g. author + elif isinstance(node, nodes.TextElement): # e.g. author md[node.__class__.__name__] = node.astext() else: name, body = node @@ -976,7 +1106,7 @@ class BuildEnvironment: def build_toc_from(self, docname, document): """Build a TOC from the doctree and store it in the inventory.""" - numentries = [0] # nonlocal again... + numentries = [0] # nonlocal again... 
def traverse_in_section(node, cls): """Like traverse(), but stay within the same section.""" @@ -1102,7 +1232,6 @@ class BuildEnvironment: stream=WarningStream(self._warnfunc)) return doctree - def get_and_resolve_doctree(self, docname, builder, doctree=None, prune_toctrees=True, includehidden=False): """Read the doctree from the pickle, resolve cross-references and @@ -1117,7 +1246,8 @@ class BuildEnvironment: # now, resolve all toctree nodes for toctreenode in doctree.traverse(addnodes.toctree): result = self.resolve_toctree(docname, builder, toctreenode, - prune=prune_toctrees, includehidden=includehidden) + prune=prune_toctrees, + includehidden=includehidden) if result is None: toctreenode.replace_self([]) else: @@ -1174,7 +1304,7 @@ class BuildEnvironment: else: # cull sub-entries whose parents aren't 'current' if (collapse and depth > 1 and - 'iscurrent' not in subnode.parent): + 'iscurrent' not in subnode.parent): subnode.parent.remove(subnode) else: # recurse on visible children @@ -1256,7 +1386,7 @@ class BuildEnvironment: child = toc.children[0] for refnode in child.traverse(nodes.reference): if refnode['refuri'] == ref and \ - not refnode['anchorname']: + not refnode['anchorname']: refnode.children = [nodes.Text(title)] if not toc.children: # empty toc means: no titles will show up in the toctree @@ -1346,49 +1476,23 @@ class BuildEnvironment: domain = self.domains[node['refdomain']] except KeyError: raise NoUri - newnode = domain.resolve_xref(self, fromdocname, builder, + newnode = domain.resolve_xref(self, refdoc, builder, typ, target, node, contnode) # really hardwired reference types + elif typ == 'any': + newnode = self._resolve_any_reference(builder, node, contnode) elif typ == 'doc': - # directly reference to document by source name; - # can be absolute or relative - docname = docname_join(refdoc, target) - if docname in self.all_docs: - if node['refexplicit']: - # reference with explicit title - caption = node.astext() - else: - caption = 
clean_astext(self.titles[docname]) - innernode = nodes.emphasis(caption, caption) - newnode = nodes.reference('', '', internal=True) - newnode['refuri'] = builder.get_relative_uri( - fromdocname, docname) - newnode.append(innernode) + newnode = self._resolve_doc_reference(builder, node, contnode) elif typ == 'citation': - docname, labelid = self.citations.get(target, ('', '')) - if docname: - try: - newnode = make_refnode(builder, fromdocname, - docname, labelid, contnode) - except NoUri: - # remove the ids we added in the CitationReferences - # transform since they can't be transfered to - # the contnode (if it's a Text node) - if not isinstance(contnode, nodes.Element): - del node['ids'][:] - raise - elif 'ids' in node: - # remove ids attribute that annotated at - # transforms.CitationReference.apply. - del node['ids'][:] + newnode = self._resolve_citation(builder, refdoc, node, contnode) # no new node found? try the missing-reference event if newnode is None: newnode = builder.app.emit_firstresult( 'missing-reference', self, node, contnode) - # still not found? warn if in nit-picky mode + # still not found? 
warn if node wishes to be warned about or + # we are in nit-picky mode if newnode is None: - self._warn_missing_reference( - fromdocname, typ, target, node, domain) + self._warn_missing_reference(refdoc, typ, target, node, domain) except NoUri: newnode = contnode node.replace_self(newnode or contnode) @@ -1399,7 +1503,7 @@ class BuildEnvironment: # allow custom references to be resolved builder.app.emit('doctree-resolved', doctree, fromdocname) - def _warn_missing_reference(self, fromdoc, typ, target, node, domain): + def _warn_missing_reference(self, refdoc, typ, target, node, domain): warn = node.get('refwarn') if self.config.nitpicky: warn = True @@ -1418,13 +1522,91 @@ class BuildEnvironment: msg = 'unknown document: %(target)s' elif typ == 'citation': msg = 'citation not found: %(target)s' - elif node.get('refdomain', 'std') != 'std': + elif node.get('refdomain', 'std') not in ('', 'std'): msg = '%s:%s reference target not found: %%(target)s' % \ (node['refdomain'], typ) else: - msg = '%s reference target not found: %%(target)s' % typ + msg = '%r reference target not found: %%(target)s' % typ self.warn_node(msg % {'target': target}, node) + def _resolve_doc_reference(self, builder, node, contnode): + # directly reference to document by source name; + # can be absolute or relative + docname = docname_join(node['refdoc'], node['reftarget']) + if docname in self.all_docs: + if node['refexplicit']: + # reference with explicit title + caption = node.astext() + else: + caption = clean_astext(self.titles[docname]) + innernode = nodes.emphasis(caption, caption) + newnode = nodes.reference('', '', internal=True) + newnode['refuri'] = builder.get_relative_uri(node['refdoc'], docname) + newnode.append(innernode) + return newnode + + def _resolve_citation(self, builder, fromdocname, node, contnode): + docname, labelid = self.citations.get(node['reftarget'], ('', '')) + if docname: + try: + newnode = make_refnode(builder, fromdocname, + docname, labelid, contnode) + return 
newnode + except NoUri: + # remove the ids we added in the CitationReferences + # transform since they can't be transferred to + # the contnode (if it's a Text node) + if not isinstance(contnode, nodes.Element): + del node['ids'][:] + raise + elif 'ids' in node: + # remove ids attribute that was annotated at + # transforms.CitationReference.apply. + del node['ids'][:] + + def _resolve_any_reference(self, builder, node, contnode): + """Resolve reference generated by the "any" role.""" + refdoc = node['refdoc'] + target = node['reftarget'] + results = [] + # first, try resolving as :doc: + doc_ref = self._resolve_doc_reference(builder, node, contnode) + if doc_ref: + results.append(('doc', doc_ref)) + # next, do the standard domain (makes this a priority) + results.extend(self.domains['std'].resolve_any_xref( + self, refdoc, builder, target, node, contnode)) + for domain in self.domains.values(): + if domain.name == 'std': + continue # we did this one already + try: + results.extend(domain.resolve_any_xref(self, refdoc, builder, + target, node, contnode)) + except NotImplementedError: + # the domain doesn't yet support the new interface + # we have to manually collect possible references (SLOW) + for role in domain.roles: + res = domain.resolve_xref(self, refdoc, builder, role, target, + node, contnode) + if res: + results.append(('%s:%s' % (domain.name, role), res)) + # now, see how many matches we got... + if not results: + return None + if len(results) > 1: + nice_results = ' or '.join(':%s:' % r[0] for r in results) + self.warn_node('more than one target found for \'any\' cross-' + 'reference %r: could be %s' % (target, nice_results), + node) + res_role, newnode = results[0] + # Override "any" class with the actual role type to get the styling + # approximately correct.
+ res_domain = res_role.split(':')[0] + if newnode and newnode[0].get('classes'): + newnode[0]['classes'].append(res_domain) + newnode[0]['classes'].append(res_role.replace(':', '-')) + return newnode + def process_only_nodes(self, doctree, builder, fromdocname=None): # A comment on the comment() nodes being inserted: replacing by [] would # result in a "Losing ids" exception if there is a target node before @@ -1595,7 +1777,7 @@ class BuildEnvironment: # prefixes match: add entry as subitem of the # previous entry oldsubitems.setdefault(m.group(2), [[], {}])[0].\ - extend(targets) + extend(targets) del newlist[i] continue oldkey = m.group(1) @@ -1622,6 +1804,7 @@ class BuildEnvironment: def collect_relations(self): relations = {} getinc = self.toctree_includes.get + def collect(parents, parents_set, docname, previous, next): # circular relationship? if docname in parents_set: @@ -1661,8 +1844,8 @@ class BuildEnvironment: # same for children if includes: for subindex, args in enumerate(zip(includes, - [None] + includes, - includes[1:] + [None])): + [None] + includes, + includes[1:] + [None])): collect([(docname, subindex)] + parents, parents_set.union([docname]), *args) relations[docname] = [parents[0][0], previous, next] diff --git a/sphinx/errors.py b/sphinx/errors.py index 4d737e512..3d7a5eb47 100644 --- a/sphinx/errors.py +++ b/sphinx/errors.py @@ -10,6 +10,9 @@ :license: BSD, see LICENSE for details. 
""" +import traceback + + class SphinxError(Exception): """ Base class for Sphinx errors that are shown to the user in a nicer @@ -62,3 +65,13 @@ class PycodeError(Exception): if len(self.args) > 1: res += ' (exception was: %r)' % self.args[1] return res + + +class SphinxParallelError(Exception): + def __init__(self, orig_exc, traceback): + self.orig_exc = orig_exc + self.traceback = traceback + + def __str__(self): + return traceback.format_exception_only( + self.orig_exc.__class__, self.orig_exc)[0].strip() diff --git a/sphinx/ext/autodoc.py b/sphinx/ext/autodoc.py index 5b014d0d5..5b0bda17a 100644 --- a/sphinx/ext/autodoc.py +++ b/sphinx/ext/autodoc.py @@ -30,7 +30,7 @@ from sphinx.application import ExtensionError from sphinx.util.nodes import nested_parse_with_titles from sphinx.util.compat import Directive from sphinx.util.inspect import getargspec, isdescriptor, safe_getmembers, \ - safe_getattr, safe_repr, is_builtin_class_method + safe_getattr, safe_repr, is_builtin_class_method from sphinx.util.docstrings import prepare_docstring @@ -50,11 +50,13 @@ class DefDict(dict): def __init__(self, default): dict.__init__(self) self.default = default + def __getitem__(self, key): try: return dict.__getitem__(self, key) except KeyError: return self.default + def __bool__(self): # docutils check "if option_spec" return True @@ -92,6 +94,7 @@ class _MockModule(object): else: return _MockModule() + def mock_import(modname): if '.' 
in modname: pkg, _n, mods = modname.rpartition('.') @@ -104,12 +107,14 @@ def mock_import(modname): ALL = object() INSTANCEATTR = object() + def members_option(arg): """Used to convert the :members: option to auto directives.""" if arg is None: return ALL return [x.strip() for x in arg.split(',')] + def members_set_option(arg): """Used to convert the :members: option to auto directives.""" if arg is None: @@ -118,6 +123,7 @@ def members_set_option(arg): SUPPRESS = object() + def annotation_option(arg): if arg is None: # suppress showing the representation of the object @@ -125,6 +131,7 @@ def annotation_option(arg): else: return arg + def bool_option(arg): """Used to convert flag options to auto directives. (Instead of directives.flag(), which returns None). @@ -201,6 +208,7 @@ def cut_lines(pre, post=0, what=None): lines.append('') return process + def between(marker, what=None, keepempty=False, exclude=False): """Return a listener that either keeps, or if *exclude* is True excludes, lines between lines that match the *marker* regular expression. If no line @@ -211,6 +219,7 @@ def between(marker, what=None, keepempty=False, exclude=False): be processed. 
""" marker_re = re.compile(marker) + def process(app, what_, name, obj, options, lines): if what and what_ not in what: return @@ -325,7 +334,7 @@ class Documenter(object): # an autogenerated one try: explicit_modname, path, base, args, retann = \ - py_ext_sig_re.match(self.name).groups() + py_ext_sig_re.match(self.name).groups() except AttributeError: self.directive.warn('invalid signature for auto%s (%r)' % (self.objtype, self.name)) @@ -340,7 +349,7 @@ class Documenter(object): parents = [] self.modname, self.objpath = \ - self.resolve_name(modname, parents, path, base) + self.resolve_name(modname, parents, path, base) if not self.modname: return False @@ -637,19 +646,19 @@ class Documenter(object): keep = False if want_all and membername.startswith('__') and \ - membername.endswith('__') and len(membername) > 4: + membername.endswith('__') and len(membername) > 4: # special __methods__ if self.options.special_members is ALL and \ membername != '__doc__': keep = has_doc or self.options.undoc_members elif self.options.special_members and \ - self.options.special_members is not ALL and \ + self.options.special_members is not ALL and \ membername in self.options.special_members: keep = has_doc or self.options.undoc_members elif want_all and membername.startswith('_'): # ignore members whose name starts with _ by default keep = self.options.private_members and \ - (has_doc or self.options.undoc_members) + (has_doc or self.options.undoc_members) elif (namespace, membername) in attr_docs: # keep documented attributes keep = True @@ -685,7 +694,7 @@ class Documenter(object): self.env.temp_data['autodoc:class'] = self.objpath[0] want_all = all_members or self.options.inherited_members or \ - self.options.members is ALL + self.options.members is ALL # find out which members are documentable members_check_module, members = self.get_object_members(want_all) @@ -707,11 +716,11 @@ class Documenter(object): # give explicitly separated module name, so that members # of inner 
classes can be documented full_mname = self.modname + '::' + \ - '.'.join(self.objpath + [mname]) + '.'.join(self.objpath + [mname]) documenter = classes[-1](self.directive, full_mname, self.indent) memberdocumenters.append((documenter, isattr)) member_order = self.options.member_order or \ - self.env.config.autodoc_member_order + self.env.config.autodoc_member_order if member_order == 'groupwise': # sort by group; relies on stable sort to keep items in the # same group sorted alphabetically @@ -719,6 +728,7 @@ class Documenter(object): elif member_order == 'bysource' and self.analyzer: # sort by source order, by virtue of the module analyzer tagorder = self.analyzer.tagorder + def keyfunc(entry): fullname = entry[0].name.split('::')[1] return tagorder.get(fullname, len(tagorder)) @@ -872,7 +882,7 @@ class ModuleDocumenter(Documenter): self.directive.warn( 'missing attribute mentioned in :members: or __all__: ' 'module %s, attribute %s' % ( - safe_getattr(self.object, '__name__', '???'), mname)) + safe_getattr(self.object, '__name__', '???'), mname)) return False, ret @@ -891,7 +901,7 @@ class ModuleLevelDocumenter(Documenter): modname = self.env.temp_data.get('autodoc:module') # ... or in the scope of a module directive if not modname: - modname = self.env.temp_data.get('py:module') + modname = self.env.ref_context.get('py:module') # ... else, it stays None, which means invalid return modname, parents + [base] @@ -913,7 +923,7 @@ class ClassLevelDocumenter(Documenter): mod_cls = self.env.temp_data.get('autodoc:class') # ... or from a class directive if mod_cls is None: - mod_cls = self.env.temp_data.get('py:class') + mod_cls = self.env.ref_context.get('py:class') # ... 
if still None, there's no way to know if mod_cls is None: return None, [] @@ -923,7 +933,7 @@ class ClassLevelDocumenter(Documenter): if not modname: modname = self.env.temp_data.get('autodoc:module') if not modname: - modname = self.env.temp_data.get('py:module') + modname = self.env.ref_context.get('py:module') # ... else, it stays None, which means invalid return modname, parents + [base] @@ -976,6 +986,7 @@ class DocstringSignatureMixin(object): self.args, self.retann = result return Documenter.format_signature(self) + class DocstringStripSignatureMixin(DocstringSignatureMixin): """ Mixin for AttributeDocumenter to provide the @@ -1007,7 +1018,7 @@ class FunctionDocumenter(DocstringSignatureMixin, ModuleLevelDocumenter): def format_args(self): if inspect.isbuiltin(self.object) or \ - inspect.ismethoddescriptor(self.object): + inspect.ismethoddescriptor(self.object): # cannot introspect arguments of a C function or method return None try: @@ -1070,8 +1081,8 @@ class ClassDocumenter(ModuleLevelDocumenter): # classes without __init__ method, default __init__ or # __init__ written in C? 
if initmeth is None or \ - is_builtin_class_method(self.object, '__init__') or \ - not(inspect.ismethod(initmeth) or inspect.isfunction(initmeth)): + is_builtin_class_method(self.object, '__init__') or \ + not(inspect.ismethod(initmeth) or inspect.isfunction(initmeth)): return None try: argspec = getargspec(initmeth) @@ -1109,7 +1120,7 @@ class ClassDocumenter(ModuleLevelDocumenter): if not self.doc_as_attr and self.options.show_inheritance: self.add_line(u'', '') if hasattr(self.object, '__bases__') and len(self.object.__bases__): - bases = [b.__module__ == '__builtin__' and + bases = [b.__module__ in ('__builtin__', 'builtins') and u':class:`%s`' % b.__name__ or u':class:`%s.%s`' % (b.__module__, b.__name__) for b in self.object.__bases__] @@ -1142,7 +1153,7 @@ class ClassDocumenter(ModuleLevelDocumenter): # for new-style classes, no __init__ means default __init__ if (initdocstring is not None and (initdocstring == object.__init__.__doc__ or # for pypy - initdocstring.strip() == object.__init__.__doc__)): #for !pypy + initdocstring.strip() == object.__init__.__doc__)): # for !pypy initdocstring = None if initdocstring: if content == 'init': @@ -1186,7 +1197,7 @@ class ExceptionDocumenter(ClassDocumenter): @classmethod def can_document_member(cls, member, membername, isattr, parent): return isinstance(member, class_types) and \ - issubclass(member, BaseException) + issubclass(member, BaseException) class DataDocumenter(ModuleLevelDocumenter): @@ -1233,7 +1244,7 @@ class MethodDocumenter(DocstringSignatureMixin, ClassLevelDocumenter): @classmethod def can_document_member(cls, member, membername, isattr, parent): return inspect.isroutine(member) and \ - not isinstance(parent, ModuleDocumenter) + not isinstance(parent, ModuleDocumenter) def import_object(self): ret = ClassLevelDocumenter.import_object(self) @@ -1257,7 +1268,7 @@ class MethodDocumenter(DocstringSignatureMixin, ClassLevelDocumenter): def format_args(self): if inspect.isbuiltin(self.object) or \ - 
inspect.ismethoddescriptor(self.object): + inspect.ismethoddescriptor(self.object): # can never get arguments of a C function or method return None argspec = getargspec(self.object) @@ -1272,7 +1283,7 @@ class MethodDocumenter(DocstringSignatureMixin, ClassLevelDocumenter): pass -class AttributeDocumenter(DocstringStripSignatureMixin,ClassLevelDocumenter): +class AttributeDocumenter(DocstringStripSignatureMixin, ClassLevelDocumenter): """ Specialized Documenter subclass for attributes. """ @@ -1290,9 +1301,9 @@ class AttributeDocumenter(DocstringStripSignatureMixin,ClassLevelDocumenter): @classmethod def can_document_member(cls, member, membername, isattr, parent): isdatadesc = isdescriptor(member) and not \ - isinstance(member, cls.method_types) and not \ - type(member).__name__ in ("type", "method_descriptor", - "instancemethod") + isinstance(member, cls.method_types) and not \ + type(member).__name__ in ("type", "method_descriptor", + "instancemethod") return isdatadesc or (not isinstance(parent, ModuleDocumenter) and not inspect.isroutine(member) and not isinstance(member, class_types)) @@ -1303,7 +1314,7 @@ class AttributeDocumenter(DocstringStripSignatureMixin,ClassLevelDocumenter): def import_object(self): ret = ClassLevelDocumenter.import_object(self) if isdescriptor(self.object) and \ - not isinstance(self.object, self.method_types): + not isinstance(self.object, self.method_types): self._datadescriptor = True else: # if it's not a data descriptor @@ -1312,7 +1323,7 @@ class AttributeDocumenter(DocstringStripSignatureMixin,ClassLevelDocumenter): def get_real_modname(self): return self.get_attr(self.parent or self.object, '__module__', None) \ - or self.modname + or self.modname def add_directive_header(self, sig): ClassLevelDocumenter.add_directive_header(self, sig) @@ -1479,7 +1490,7 @@ def add_documenter(cls): raise ExtensionError('autodoc documenter %r must be a subclass ' 'of Documenter' % cls) # actually, it should be possible to override Documenters 
- #if cls.objtype in AutoDirective._registry: + # if cls.objtype in AutoDirective._registry: # raise ExtensionError('autodoc documenter for %r is already ' # 'registered' % cls.objtype) AutoDirective._registry[cls.objtype] = cls @@ -1504,7 +1515,7 @@ def setup(app): app.add_event('autodoc-process-signature') app.add_event('autodoc-skip-member') - return sphinx.__version__ + return {'version': sphinx.__version__, 'parallel_read_safe': True} class testcls: diff --git a/sphinx/ext/autosummary/__init__.py b/sphinx/ext/autosummary/__init__.py index 31bbfb8a5..c40ba91d6 100644 --- a/sphinx/ext/autosummary/__init__.py +++ b/sphinx/ext/autosummary/__init__.py @@ -432,11 +432,11 @@ def get_import_prefixes_from_env(env): """ prefixes = [None] - currmodule = env.temp_data.get('py:module') + currmodule = env.ref_context.get('py:module') if currmodule: prefixes.insert(0, currmodule) - currclass = env.temp_data.get('py:class') + currclass = env.ref_context.get('py:class') if currclass: if currmodule: prefixes.insert(0, currmodule + "." 
+ currclass) @@ -570,4 +570,4 @@ def setup(app): app.connect('doctree-read', process_autosummary_toc) app.connect('builder-inited', process_generate_options) app.add_config_value('autosummary_generate', [], True) - return sphinx.__version__ + return {'version': sphinx.__version__, 'parallel_read_safe': True} diff --git a/sphinx/ext/coverage.py b/sphinx/ext/coverage.py index 49dc02f44..b62806fa4 100644 --- a/sphinx/ext/coverage.py +++ b/sphinx/ext/coverage.py @@ -265,4 +265,4 @@ def setup(app): app.add_config_value('coverage_ignore_c_items', {}, False) app.add_config_value('coverage_write_headline', True, False) app.add_config_value('coverage_skip_undoc_in_source', False, False) - return sphinx.__version__ + return {'version': sphinx.__version__, 'parallel_read_safe': True} diff --git a/sphinx/ext/doctest.py b/sphinx/ext/doctest.py index 20b8692f1..216325cb8 100644 --- a/sphinx/ext/doctest.py +++ b/sphinx/ext/doctest.py @@ -32,6 +32,7 @@ from sphinx.util.console import bold blankline_re = re.compile(r'^\s*', re.MULTILINE) doctestopt_re = re.compile(r'#\s*doctest:.+$', re.MULTILINE) + # set up the necessary directives class TestDirective(Directive): @@ -79,30 +80,35 @@ class TestDirective(Directive): option_strings = self.options['options'].replace(',', ' ').split() for option in option_strings: if (option[0] not in '+-' or option[1:] not in - doctest.OPTIONFLAGS_BY_NAME): + doctest.OPTIONFLAGS_BY_NAME): # XXX warn? 
continue flag = doctest.OPTIONFLAGS_BY_NAME[option[1:]] node['options'][flag] = (option[0] == '+') return [node] + class TestsetupDirective(TestDirective): option_spec = {} + class TestcleanupDirective(TestDirective): option_spec = {} + class DoctestDirective(TestDirective): option_spec = { 'hide': directives.flag, 'options': directives.unchanged, } + class TestcodeDirective(TestDirective): option_spec = { 'hide': directives.flag, } + class TestoutputDirective(TestDirective): option_spec = { 'hide': directives.flag, @@ -112,6 +118,7 @@ class TestoutputDirective(TestDirective): parser = doctest.DocTestParser() + # helper classes class TestGroup(object): @@ -196,7 +203,7 @@ class DocTestBuilder(Builder): def init(self): # default options self.opt = doctest.DONT_ACCEPT_TRUE_FOR_1 | doctest.ELLIPSIS | \ - doctest.IGNORE_EXCEPTION_DETAIL + doctest.IGNORE_EXCEPTION_DETAIL # HACK HACK HACK # doctest compiles its snippets with type 'single'. That is nice @@ -247,6 +254,10 @@ Results of doctest builder run on %s # write executive summary def s(v): return v != 1 and 's' or '' + repl = (self.total_tries, s(self.total_tries), + self.total_failures, s(self.total_failures), + self.setup_failures, s(self.setup_failures), + self.cleanup_failures, s(self.cleanup_failures)) self._out(''' Doctest summary =============== @@ -254,10 +265,7 @@ Doctest summary %5d failure%s in tests %5d failure%s in setup code %5d failure%s in cleanup code -''' % (self.total_tries, s(self.total_tries), - self.total_failures, s(self.total_failures), - self.setup_failures, s(self.setup_failures), - self.cleanup_failures, s(self.cleanup_failures))) +''' % repl) self.outfile.close() if self.total_failures or self.setup_failures or self.cleanup_failures: @@ -290,11 +298,11 @@ Doctest summary def condition(node): return (isinstance(node, (nodes.literal_block, nodes.comment)) and 'testnodetype' in node) or \ - isinstance(node, nodes.doctest_block) + isinstance(node, nodes.doctest_block) else: def 
condition(node): return isinstance(node, (nodes.literal_block, nodes.comment)) \ - and 'testnodetype' in node + and 'testnodetype' in node for node in doctree.traverse(condition): source = 'test' in node and node['test'] or node.astext() if not source: @@ -364,7 +372,7 @@ Doctest summary filename, 0, None) sim_doctest.globs = ns old_f = runner.failures - self.type = 'exec' # the snippet may contain multiple statements + self.type = 'exec' # the snippet may contain multiple statements runner.run(sim_doctest, out=self._warn_out, clear_globs=False) if runner.failures > old_f: return False @@ -394,7 +402,7 @@ Doctest summary new_opt = code[0].options.copy() new_opt.update(example.options) example.options = new_opt - self.type = 'single' # as for ordinary doctests + self.type = 'single' # as for ordinary doctests else: # testcode and output separate output = code[1] and code[1].code or '' @@ -413,7 +421,7 @@ Doctest summary options=options) test = doctest.DocTest([example], {}, group.name, filename, code[0].lineno, None) - self.type = 'exec' # multiple statements again + self.type = 'exec' # multiple statements again # DocTest.__init__ copies the globs namespace, which we don't want test.globs = ns # also don't clear the globs namespace after running the doctest @@ -435,4 +443,4 @@ def setup(app): app.add_config_value('doctest_test_doctest_blocks', 'default', False) app.add_config_value('doctest_global_setup', '', False) app.add_config_value('doctest_global_cleanup', '', False) - return sphinx.__version__ + return {'version': sphinx.__version__, 'parallel_read_safe': True} diff --git a/sphinx/ext/extlinks.py b/sphinx/ext/extlinks.py index c0cfbcd2b..ae65dbb8f 100644 --- a/sphinx/ext/extlinks.py +++ b/sphinx/ext/extlinks.py @@ -59,4 +59,4 @@ def setup_link_roles(app): def setup(app): app.add_config_value('extlinks', {}, 'env') app.connect('builder-inited', setup_link_roles) - return sphinx.__version__ + return {'version': sphinx.__version__, 'parallel_read_safe': True} 
diff --git a/sphinx/ext/graphviz.py b/sphinx/ext/graphviz.py index b4b8bc276..56831c648 100644 --- a/sphinx/ext/graphviz.py +++ b/sphinx/ext/graphviz.py @@ -323,4 +323,4 @@ def setup(app): app.add_config_value('graphviz_dot', 'dot', 'html') app.add_config_value('graphviz_dot_args', [], 'html') app.add_config_value('graphviz_output_format', 'png', 'html') - return sphinx.__version__ + return {'version': sphinx.__version__, 'parallel_read_safe': True} diff --git a/sphinx/ext/ifconfig.py b/sphinx/ext/ifconfig.py index ab15e1e1f..a4e4a02df 100644 --- a/sphinx/ext/ifconfig.py +++ b/sphinx/ext/ifconfig.py @@ -73,4 +73,4 @@ def setup(app): app.add_node(ifconfig) app.add_directive('ifconfig', IfConfig) app.connect('doctree-resolved', process_ifconfig_nodes) - return sphinx.__version__ + return {'version': sphinx.__version__, 'parallel_read_safe': True} diff --git a/sphinx/ext/inheritance_diagram.py b/sphinx/ext/inheritance_diagram.py index bbae5c110..0b2e5ce3a 100644 --- a/sphinx/ext/inheritance_diagram.py +++ b/sphinx/ext/inheritance_diagram.py @@ -39,13 +39,14 @@ r""" import re import sys import inspect -import __builtin__ as __builtin__ # as __builtin__ is for lib2to3 compatibility try: from hashlib import md5 except ImportError: from md5 import md5 from six import text_type +from six.moves import builtins + from docutils import nodes from docutils.parsers.rst import directives @@ -147,10 +148,10 @@ class InheritanceGraph(object): displayed node names. 
""" all_classes = {} - builtins = vars(__builtin__).values() + py_builtins = vars(builtins).values() def recurse(cls): - if not show_builtins and cls in builtins: + if not show_builtins and cls in py_builtins: return if not private_bases and cls.__name__.startswith('_'): return @@ -174,7 +175,7 @@ class InheritanceGraph(object): baselist = [] all_classes[cls] = (nodename, fullname, baselist, tooltip) for base in cls.__bases__: - if not show_builtins and base in builtins: + if not show_builtins and base in py_builtins: continue if not private_bases and base.__name__.startswith('_'): continue @@ -194,7 +195,7 @@ class InheritanceGraph(object): completely general. """ module = cls.__module__ - if module == '__builtin__': + if module in ('__builtin__', 'builtins'): fullname = cls.__name__ else: fullname = '%s.%s' % (module, cls.__name__) @@ -310,7 +311,7 @@ class InheritanceDiagram(Directive): # Create a graph starting with the list of classes try: graph = InheritanceGraph( - class_names, env.temp_data.get('py:module'), + class_names, env.ref_context.get('py:module'), parts=node['parts'], private_bases='private-bases' in self.options) except InheritanceException as err: @@ -407,4 +408,4 @@ def setup(app): app.add_config_value('inheritance_graph_attrs', {}, False), app.add_config_value('inheritance_node_attrs', {}, False), app.add_config_value('inheritance_edge_attrs', {}, False), - return sphinx.__version__ + return {'version': sphinx.__version__, 'parallel_read_safe': True} diff --git a/sphinx/ext/intersphinx.py b/sphinx/ext/intersphinx.py index 43507a383..6f3d44eb4 100644 --- a/sphinx/ext/intersphinx.py +++ b/sphinx/ext/intersphinx.py @@ -222,15 +222,21 @@ def load_mappings(app): def missing_reference(app, env, node, contnode): """Attempt to resolve a missing reference via intersphinx references.""" - domain = node.get('refdomain') - if not domain: - # only objects in domains are in the inventory - return target = node['reftarget'] - objtypes = 
env.domains[domain].objtypes_for_role(node['reftype']) - if not objtypes: - return - objtypes = ['%s:%s' % (domain, objtype) for objtype in objtypes] + if node['reftype'] == 'any': + # we search anything! + objtypes = ['%s:%s' % (domain.name, objtype) + for domain in env.domains.values() + for objtype in domain.object_types] + else: + domain = node.get('refdomain') + if not domain: + # only objects in domains are in the inventory + return + objtypes = env.domains[domain].objtypes_for_role(node['reftype']) + if not objtypes: + return + objtypes = ['%s:%s' % (domain, objtype) for objtype in objtypes] to_try = [(env.intersphinx_inventory, target)] in_set = None if ':' in target: @@ -248,7 +254,7 @@ def missing_reference(app, env, node, contnode): # get correct path in case of subdirectories uri = path.join(relative_path(node['refdoc'], env.srcdir), uri) newnode = nodes.reference('', '', internal=False, refuri=uri, - reftitle=_('(in %s v%s)') % (proj, version)) + reftitle=_('(in %s v%s)') % (proj, version)) if node.get('refexplicit'): # use whatever title was given newnode.append(contnode) @@ -276,4 +282,4 @@ def setup(app): app.add_config_value('intersphinx_cache_limit', 5, False) app.connect('missing-reference', missing_reference) app.connect('builder-inited', load_mappings) - return sphinx.__version__ + return {'version': sphinx.__version__, 'parallel_read_safe': True} diff --git a/sphinx/ext/jsmath.py b/sphinx/ext/jsmath.py index 897d87acb..9bf38f62f 100644 --- a/sphinx/ext/jsmath.py +++ b/sphinx/ext/jsmath.py @@ -57,4 +57,4 @@ def setup(app): mathbase_setup(app, (html_visit_math, None), (html_visit_displaymath, None)) app.add_config_value('jsmath_path', '', False) app.connect('builder-inited', builder_inited) - return sphinx.__version__ + return {'version': sphinx.__version__, 'parallel_read_safe': True} diff --git a/sphinx/ext/linkcode.py b/sphinx/ext/linkcode.py index bbb0698cd..37e021e88 100644 --- a/sphinx/ext/linkcode.py +++ b/sphinx/ext/linkcode.py @@ -16,9 
+16,11 @@ from sphinx import addnodes from sphinx.locale import _ from sphinx.errors import SphinxError + class LinkcodeError(SphinxError): category = "linkcode error" + def doctree_read(app, doctree): env = app.builder.env @@ -68,7 +70,8 @@ def doctree_read(app, doctree): classes=['viewcode-link']) signode += onlynode + def setup(app): app.connect('doctree-read', doctree_read) app.add_config_value('linkcode_resolve', None, '') - return sphinx.__version__ + return {'version': sphinx.__version__, 'parallel_read_safe': True} diff --git a/sphinx/ext/mathjax.py b/sphinx/ext/mathjax.py index fd5c5f1d3..f677ff482 100644 --- a/sphinx/ext/mathjax.py +++ b/sphinx/ext/mathjax.py @@ -69,4 +69,4 @@ def setup(app): app.add_config_value('mathjax_inline', [r'\(', r'\)'], 'html') app.add_config_value('mathjax_display', [r'\[', r'\]'], 'html') app.connect('builder-inited', builder_inited) - return sphinx.__version__ + return {'version': sphinx.__version__, 'parallel_read_safe': True} diff --git a/sphinx/ext/napoleon/__init__.py b/sphinx/ext/napoleon/__init__.py index 554162ede..9b43d8fda 100644 --- a/sphinx/ext/napoleon/__init__.py +++ b/sphinx/ext/napoleon/__init__.py @@ -256,7 +256,7 @@ def setup(app): for name, (default, rebuild) in iteritems(Config._config_values): app.add_config_value(name, default, rebuild) - return sphinx.__version__ + return {'version': sphinx.__version__, 'parallel_read_safe': True} def _process_docstring(app, what, name, obj, options, lines): diff --git a/sphinx/ext/pngmath.py b/sphinx/ext/pngmath.py index ee108d166..51c9d0116 100644 --- a/sphinx/ext/pngmath.py +++ b/sphinx/ext/pngmath.py @@ -246,4 +246,4 @@ def setup(app): app.add_config_value('pngmath_latex_preamble', '', 'html') app.add_config_value('pngmath_add_tooltips', True, 'html') app.connect('build-finished', cleanup_tempdir) - return sphinx.__version__ + return {'version': sphinx.__version__, 'parallel_read_safe': True} diff --git a/sphinx/ext/todo.py b/sphinx/ext/todo.py index 
d70617b9d..ae434dd4d 100644 --- a/sphinx/ext/todo.py +++ b/sphinx/ext/todo.py @@ -150,6 +150,14 @@ def purge_todos(app, env, docname): if todo['docname'] != docname] +def merge_info(app, env, docnames, other): + if not hasattr(other, 'todo_all_todos'): + return + if not hasattr(env, 'todo_all_todos'): + env.todo_all_todos = [] + env.todo_all_todos.extend(other.todo_all_todos) + + def visit_todo_node(self, node): self.visit_admonition(node) @@ -172,4 +180,5 @@ def setup(app): app.connect('doctree-read', process_todos) app.connect('doctree-resolved', process_todo_nodes) app.connect('env-purge-doc', purge_todos) - return sphinx.__version__ + app.connect('env-merge-info', merge_info) + return {'version': sphinx.__version__, 'parallel_read_safe': True} diff --git a/sphinx/ext/viewcode.py b/sphinx/ext/viewcode.py index 3653a2da4..cd3f2ac74 100644 --- a/sphinx/ext/viewcode.py +++ b/sphinx/ext/viewcode.py @@ -20,6 +20,7 @@ from sphinx.locale import _ from sphinx.pycode import ModuleAnalyzer from sphinx.util import get_full_modname from sphinx.util.nodes import make_refnode +from sphinx.util.console import blue def _get_full_modname(app, modname, attribute): @@ -37,7 +38,7 @@ def _get_full_modname(app, modname, attribute): # It should be displayed only verbose mode. 
app.verbose(traceback.format_exc().rstrip()) app.verbose('viewcode can\'t import %s, failed with error "%s"' % - (modname, e)) + (modname, e)) return None @@ -100,6 +101,16 @@ def doctree_read(app, doctree): signode += onlynode +def env_merge_info(app, env, docnames, other): + if not hasattr(other, '_viewcode_modules'): + return + # create a _viewcode_modules dict on the main environment + if not hasattr(env, '_viewcode_modules'): + env._viewcode_modules = {} + # now merge in the information from the subprocess + env._viewcode_modules.update(other._viewcode_modules) + + def missing_reference(app, env, node, contnode): # resolve our "viewcode" reference nodes -- they need special treatment if node['reftype'] == 'viewcode': @@ -116,10 +127,12 @@ def collect_pages(app): modnames = set(env._viewcode_modules) - app.builder.info(' (%d module code pages)' % - len(env._viewcode_modules), nonl=1) +# app.builder.info(' (%d module code pages)' % +# len(env._viewcode_modules), nonl=1) - for modname, entry in iteritems(env._viewcode_modules): + for modname, entry in app.status_iterator( + iteritems(env._viewcode_modules), 'highlighting module code... ', + blue, len(env._viewcode_modules), lambda x: x[0]): if not entry: continue code, tags, used, refname = entry @@ -162,15 +175,14 @@ def collect_pages(app): context = { 'parents': parents, 'title': modname, - 'body': _('

<h1>Source code for %s</h1>') % modname + \ - '\n'.join(lines) + 'body': (_('<h1>Source code for %s</h1>') % modname + + '\n'.join(lines)), } yield (pagename, context, 'page.html') if not modnames: return - app.builder.info(' _modules/index', nonl=True) html = ['\n'] # the stack logic is needed for using nested lists for submodules stack = [''] @@ -190,8 +202,8 @@ def collect_pages(app): html.append('</ul>
' * (len(stack) - 1)) context = { 'title': _('Overview: module code'), - 'body': _('<h1>All modules for which code is available</h1>') + \ - ''.join(html), + 'body': (_('<h1>All modules for which code is available</h1>') + + ''.join(html)), } yield ('_modules/index', context, 'page.html') @@ -200,8 +212,9 @@ def setup(app): app.add_config_value('viewcode_import', True, False) app.connect('doctree-read', doctree_read) + app.connect('env-merge-info', env_merge_info) app.connect('html-collect-pages', collect_pages) app.connect('missing-reference', missing_reference) - #app.add_config_value('viewcode_include_modules', [], 'env') - #app.add_config_value('viewcode_exclude_modules', [], 'env') - return sphinx.__version__ + # app.add_config_value('viewcode_include_modules', [], 'env') + # app.add_config_value('viewcode_exclude_modules', [], 'env') + return {'version': sphinx.__version__, 'parallel_read_safe': True} diff --git a/sphinx/highlighting.py index 599a76a90..c2d2e89a6 100644 --- a/sphinx/highlighting.py +++ b/sphinx/highlighting.py @@ -24,46 +24,32 @@ from sphinx.util.pycompat import htmlescape from sphinx.util.texescape import tex_hl_escape_map_new from sphinx.ext import doctest -try: - import pygments - from pygments import highlight - from pygments.lexers import PythonLexer, PythonConsoleLexer, CLexer, \ - TextLexer, RstLexer - from pygments.lexers import get_lexer_by_name, guess_lexer - from pygments.formatters import HtmlFormatter, LatexFormatter - from pygments.filters import ErrorToken - from pygments.styles import get_style_by_name - from pygments.util import ClassNotFound - from sphinx.pygments_styles import SphinxStyle, NoneStyle -except ImportError: - pygments = None - lexers = None - HtmlFormatter = LatexFormatter = None -else: +from pygments import highlight +from pygments.lexers import PythonLexer, PythonConsoleLexer, CLexer, \ + TextLexer, RstLexer +from pygments.lexers import get_lexer_by_name, guess_lexer +from pygments.formatters import HtmlFormatter, LatexFormatter +from pygments.filters import ErrorToken +from pygments.styles import get_style_by_name +from pygments.util import ClassNotFound +from 
sphinx.pygments_styles import SphinxStyle, NoneStyle - lexers = dict( - none = TextLexer(), - python = PythonLexer(), - pycon = PythonConsoleLexer(), - pycon3 = PythonConsoleLexer(python3=True), - rest = RstLexer(), - c = CLexer(), - ) - for _lexer in lexers.values(): - _lexer.add_filter('raiseonerror') +lexers = dict( + none = TextLexer(), + python = PythonLexer(), + pycon = PythonConsoleLexer(), + pycon3 = PythonConsoleLexer(python3=True), + rest = RstLexer(), + c = CLexer(), +) +for _lexer in lexers.values(): + _lexer.add_filter('raiseonerror') escape_hl_chars = {ord(u'\\'): u'\\PYGZbs{}', ord(u'{'): u'\\PYGZob{}', ord(u'}'): u'\\PYGZcb{}'} -# used if Pygments is not available -_LATEX_STYLES = r''' -\newcommand\PYGZbs{\char`\\} -\newcommand\PYGZob{\char`\{} -\newcommand\PYGZcb{\char`\}} -''' - # used if Pygments is available # use textcomp quote to get a true single quote _LATEX_ADD_STYLES = r''' @@ -80,8 +66,6 @@ class PygmentsBridge(object): def __init__(self, dest='html', stylename='sphinx', trim_doctest_flags=False): self.dest = dest - if not pygments: - return if stylename is None or stylename == 'sphinx': style = SphinxStyle elif stylename == 'none': @@ -153,8 +137,6 @@ class PygmentsBridge(object): def highlight_block(self, source, lang, warn=None, force=False, **kwargs): if not isinstance(source, text_type): source = source.decode() - if not pygments: - return self.unhighlighted(source) # find out which lexer to use if lang in ('py', 'python'): @@ -213,11 +195,6 @@ class PygmentsBridge(object): return hlsource.translate(tex_hl_escape_map_new) def get_stylesheet(self): - if not pygments: - if self.dest == 'latex': - return _LATEX_STYLES - # no HTML styles needed - return '' formatter = self.get_formatter() if self.dest == 'html': return formatter.get_style_defs('.highlight') diff --git a/sphinx/quickstart.py b/sphinx/quickstart.py index fdfb81062..f81b38f05 100644 --- a/sphinx/quickstart.py +++ b/sphinx/quickstart.py @@ -10,13 +10,16 @@ """ from 
__future__ import print_function -import sys, os, time, re +import re +import os +import sys +import time from os import path from io import open TERM_ENCODING = getattr(sys.stdin, 'encoding', None) -#try to import readline, unix specific enhancement +# try to import readline, unix specific enhancement try: import readline if readline.__doc__ and 'libedit' in readline.__doc__: @@ -33,7 +36,7 @@ from docutils.utils import column_width from sphinx import __version__ from sphinx.util.osutil import make_filename from sphinx.util.console import purple, bold, red, turquoise, \ - nocolor, color_terminal + nocolor, color_terminal from sphinx.util import texescape # function to get input from terminal -- overridden by the test suite @@ -972,17 +975,20 @@ def mkdir_p(dir): class ValidationError(Exception): """Raised for validation errors.""" + def is_path(x): x = path.expanduser(x) if path.exists(x) and not path.isdir(x): raise ValidationError("Please enter a valid path name.") return x + def nonempty(x): if not x: raise ValidationError("Please enter some text.") return x + def choice(*l): def val(x): if x not in l: @@ -990,17 +996,20 @@ def choice(*l): return x return val + def boolean(x): if x.upper() not in ('Y', 'YES', 'N', 'NO'): raise ValidationError("Please enter either 'y' or 'n'.") return x.upper() in ('Y', 'YES') + def suffix(x): if not (x[0:1] == '.' and len(x) > 1): raise ValidationError("Please enter a file suffix, " "e.g. 
'.rst' or '.txt'.") return x + def ok(x): return x @@ -1097,7 +1106,7 @@ Enter the root path for documentation.''') do_prompt(d, 'path', 'Root path for the documentation', '.', is_path) while path.isfile(path.join(d['path'], 'conf.py')) or \ - path.isfile(path.join(d['path'], 'source', 'conf.py')): + path.isfile(path.join(d['path'], 'source', 'conf.py')): print() print(bold('Error: an existing conf.py has been found in the ' 'selected root path.')) @@ -1169,7 +1178,7 @@ document is a custom template, you can also set this to another filename.''') 'index') while path.isfile(path.join(d['path'], d['master']+d['suffix'])) or \ - path.isfile(path.join(d['path'], 'source', d['master']+d['suffix'])): + path.isfile(path.join(d['path'], 'source', d['master']+d['suffix'])): print() print(bold('Error: the master file %s has already been found in the ' 'selected root path.' % (d['master']+d['suffix']))) @@ -1256,10 +1265,10 @@ def generate(d, overwrite=True, silent=False): d['extensions'] = extensions d['copyright'] = time.strftime('%Y') + ', ' + d['author'] d['author_texescaped'] = text_type(d['author']).\ - translate(texescape.tex_escape_map) + translate(texescape.tex_escape_map) d['project_doc'] = d['project'] + ' Documentation' d['project_doc_texescaped'] = text_type(d['project'] + ' Documentation').\ - translate(texescape.tex_escape_map) + translate(texescape.tex_escape_map) # escape backslashes and single quotes in strings that are put into # a Python string literal diff --git a/sphinx/roles.py b/sphinx/roles.py index aaf6272bd..451cfe60b 100644 --- a/sphinx/roles.py +++ b/sphinx/roles.py @@ -17,22 +17,23 @@ from docutils.parsers.rst import roles from sphinx import addnodes from sphinx.locale import _ +from sphinx.errors import SphinxError from sphinx.util import ws_re from sphinx.util.nodes import split_explicit_title, process_index_entry, \ - set_role_source_info + set_role_source_info generic_docroles = { - 'command' : addnodes.literal_strong, - 'dfn' : 
nodes.emphasis, - 'kbd' : nodes.literal, - 'mailheader' : addnodes.literal_emphasis, - 'makevar' : addnodes.literal_strong, - 'manpage' : addnodes.literal_emphasis, - 'mimetype' : addnodes.literal_emphasis, - 'newsgroup' : addnodes.literal_emphasis, - 'program' : addnodes.literal_strong, # XXX should be an x-ref - 'regexp' : nodes.literal, + 'command': addnodes.literal_strong, + 'dfn': nodes.emphasis, + 'kbd': nodes.literal, + 'mailheader': addnodes.literal_emphasis, + 'makevar': addnodes.literal_strong, + 'manpage': addnodes.literal_emphasis, + 'mimetype': addnodes.literal_emphasis, + 'newsgroup': addnodes.literal_emphasis, + 'program': addnodes.literal_strong, # XXX should be an x-ref + 'regexp': nodes.literal, } for rolename, nodeclass in iteritems(generic_docroles): @@ -40,6 +41,7 @@ for rolename, nodeclass in iteritems(generic_docroles): role = roles.CustomRole(rolename, generic, {'classes': [rolename]}) roles.register_local_role(rolename, role) + # -- generic cross-reference role ---------------------------------------------- class XRefRole(object): @@ -96,7 +98,11 @@ class XRefRole(object): options={}, content=[]): env = inliner.document.settings.env if not typ: - typ = env.config.default_role + typ = env.temp_data.get('default_role') + if not typ: + typ = env.config.default_role + if not typ: + raise SphinxError('cannot determine default role!') else: typ = typ.lower() if ':' not in typ: @@ -158,6 +164,15 @@ class XRefRole(object): return [node], [] +class AnyXRefRole(XRefRole): + def process_link(self, env, refnode, has_explicit_title, title, target): + result = XRefRole.process_link(self, env, refnode, has_explicit_title, + title, target) + # add all possible context info (i.e. std:program, py:module etc.) 
+ refnode.attributes.update(env.ref_context) + return result + + def indexmarkup_role(typ, rawtext, text, lineno, inliner, options={}, content=[]): """Role for PEP/RFC references that generate an index entry.""" @@ -221,6 +236,7 @@ def indexmarkup_role(typ, rawtext, text, lineno, inliner, _amp_re = re.compile(r'(?
m(r.shift())),u=u.slice(n.length),n.type=r[0].replace(j," ");for(o in i.filter)(r=J[o].exec(u))&&(!f[o]||(r=f[o](r)))&&(s.push(n=new m(r.shift())),u=u.slice(n.length),n.type=o,n.matches=r);if(!n)break}return t?u.length:u?nt.error(e):L(e,a).slice(0)}function at(e,t,r){var i=t.dir,s=r&&t.dir==="parentNode",o=w++;return t.first?function(t,n,r){while(t=t[i])if(s||t.nodeType===1)return e(t,n,r)}:function(t,r,u){if(!u){var a,f=b+" "+o+" ",l=f+n;while(t=t[i])if(s||t.nodeType===1){if((a=t[d])===l)return t.sizset;if(typeof a=="string"&&a.indexOf(f)===0){if(t.sizset)return t}else{t[d]=l;if(e(t,r,u))return t.sizset=!0,t;t.sizset=!1}}}else while(t=t[i])if(s||t.nodeType===1)if(e(t,r,u))return t}}function ft(e){return e.length>1?function(t,n,r){var i=e.length;while(i--)if(!e[i](t,n,r))return!1;return!0}:e[0]}function lt(e,t,n,r,i){var s,o=[],u=0,a=e.length,f=t!=null;for(;u-1&&(s[f]=!(o[f]=c))}}else g=lt(g===o?g.splice(d,g.length):g),i?i(null,o,g,a):S.apply(o,g)})}function ht(e){var t,n,r,s=e.length,o=i.relative[e[0].type],u=o||i.relative[" "],a=o?1:0,f=at(function(e){return e===t},u,!0),l=at(function(e){return T.call(t,e)>-1},u,!0),h=[function(e,n,r){return!o&&(r||n!==c)||((t=n).nodeType?f(e,n,r):l(e,n,r))}];for(;a1&&ft(h),a>1&&e.slice(0,a-1).join("").replace(j,"$1"),n,a0,s=e.length>0,o=function(u,a,f,l,h){var p,d,v,m=[],y=0,w="0",x=u&&[],T=h!=null,N=c,C=u||s&&i.find.TAG("*",h&&a.parentNode||a),k=b+=N==null?1:Math.E;T&&(c=a!==g&&a,n=o.el);for(;(p=C[w])!=null;w++){if(s&&p){for(d=0;v=e[d];d++)if(v(p,a,f)){l.push(p);break}T&&(b=k,n=++o.el)}r&&((p=!v&&p)&&y--,u&&x.push(p))}y+=w;if(r&&w!==y){for(d=0;v=t[d];d++)v(x,m,a,f);if(u){if(y>0)while(w--)!x[w]&&!m[w]&&(m[w]=E.call(l));m=lt(m)}S.apply(l,m),T&&!u&&m.length>0&&y+t.length>1&&nt.uniqueSort(l)}return T&&(b=k,c=N),x};return o.el=0,r?N(o):o}function dt(e,t,n){var r=0,i=t.length;for(;r2&&(f=u[0]).type==="ID"&&t.nodeType===9&&!s&&i.relative[u[1].type]){t=i.find.ID(f.matches[0].replace($,""),t,s)[0];if(!t)return 
n;e=e.slice(u.shift().length)}for(o=J.POS.test(e)?-1:u.length-1;o>=0;o--){f=u[o];if(i.relative[l=f.type])break;if(c=i.find[l])if(r=c(f.matches[0].replace($,""),z.test(u[0].type)&&t.parentNode||t,s)){u.splice(o,1),e=r.length&&u.join("");if(!e)return S.apply(n,x.call(r,0)),n;break}}}return a(e,h)(r,t,s,n,z.test(e)),n}function mt(){}var n,r,i,s,o,u,a,f,l,c,h=!0,p="undefined",d=("sizcache"+Math.random()).replace(".",""),m=String,g=e.document,y=g.documentElement,b=0,w=0,E=[].pop,S=[].push,x=[].slice,T=[].indexOf||function(e){var t=0,n=this.length;for(;ti.cacheLength&&delete e[t.shift()],e[n+" "]=r},e)},k=C(),L=C(),A=C(),O="[\\x20\\t\\r\\n\\f]",M="(?:\\\\.|[-\\w]|[^\\x00-\\xa0])+",_=M.replace("w","w#"),D="([*^$|!~]?=)",P="\\["+O+"*("+M+")"+O+"*(?:"+D+O+"*(?:(['\"])((?:\\\\.|[^\\\\])*?)\\3|("+_+")|)|)"+O+"*\\]",H=":("+M+")(?:\\((?:(['\"])((?:\\\\.|[^\\\\])*?)\\2|([^()[\\]]*|(?:(?:"+P+")|[^:]|\\\\.)*|.*))\\)|)",B=":(even|odd|eq|gt|lt|nth|first|last)(?:\\("+O+"*((?:-\\d)?\\d*)"+O+"*\\)|)(?=[^-]|$)",j=new RegExp("^"+O+"+|((?:^|[^\\\\])(?:\\\\.)*)"+O+"+$","g"),F=new RegExp("^"+O+"*,"+O+"*"),I=new RegExp("^"+O+"*([\\x20\\t\\r\\n\\f>+~])"+O+"*"),q=new RegExp(H),R=/^(?:#([\w\-]+)|(\w+)|\.([\w\-]+))$/,U=/^:not/,z=/[\x20\t\r\n\f]*[+~]/,W=/:not\($/,X=/h\d/i,V=/input|select|textarea|button/i,$=/\\(?!\\)/g,J={ID:new RegExp("^#("+M+")"),CLASS:new RegExp("^\\.("+M+")"),NAME:new RegExp("^\\[name=['\"]?("+M+")['\"]?\\]"),TAG:new RegExp("^("+M.replace("w","w*")+")"),ATTR:new RegExp("^"+P),PSEUDO:new RegExp("^"+H),POS:new RegExp(B,"i"),CHILD:new RegExp("^:(only|nth|first|last)-child(?:\\("+O+"*(even|odd|(([+-]|)(\\d*)n|)"+O+"*(?:([+-]|)"+O+"*(\\d+)|))"+O+"*\\)|)","i"),needsContext:new RegExp("^"+O+"*[>+~]|"+B,"i")},K=function(e){var t=g.createElement("div");try{return e(t)}catch(n){return!1}finally{t=null}},Q=K(function(e){return e.appendChild(g.createComment("")),!e.getElementsByTagName("*").length}),G=K(function(e){return e.innerHTML="",e.firstChild&&typeof 
e.firstChild.getAttribute!==p&&e.firstChild.getAttribute("href")==="#"}),Y=K(function(e){e.innerHTML="";var t=typeof e.lastChild.getAttribute("multiple");return t!=="boolean"&&t!=="string"}),Z=K(function(e){return e.innerHTML="",!e.getElementsByClassName||!e.getElementsByClassName("e").length?!1:(e.lastChild.className="e",e.getElementsByClassName("e").length===2)}),et=K(function(e){e.id=d+0,e.innerHTML="
",y.insertBefore(e,y.firstChild);var t=g.getElementsByName&&g.getElementsByName(d).length===2+g.getElementsByName(d+0).length;return r=!g.getElementById(d),y.removeChild(e),t});try{x.call(y.childNodes,0)[0].nodeType}catch(tt){x=function(e){var t,n=[];for(;t=this[e];e++)n.push(t);return n}}nt.matches=function(e,t){return nt(e,null,null,t)},nt.matchesSelector=function(e,t){return nt(t,null,null,[e]).length>0},s=nt.getText=function(e){var t,n="",r=0,i=e.nodeType;if(i){if(i===1||i===9||i===11){if(typeof e.textContent=="string")return e.textContent;for(e=e.firstChild;e;e=e.nextSibling)n+=s(e)}else if(i===3||i===4)return e.nodeValue}else for(;t=e[r];r++)n+=s(t);return n},o=nt.isXML=function(e){var t=e&&(e.ownerDocument||e).documentElement;return t?t.nodeName!=="HTML":!1},u=nt.contains=y.contains?function(e,t){var n=e.nodeType===9?e.documentElement:e,r=t&&t.parentNode;return e===r||!!(r&&r.nodeType===1&&n.contains&&n.contains(r))}:y.compareDocumentPosition?function(e,t){return t&&!!(e.compareDocumentPosition(t)&16)}:function(e,t){while(t=t.parentNode)if(t===e)return!0;return!1},nt.attr=function(e,t){var n,r=o(e);return r||(t=t.toLowerCase()),(n=i.attrHandle[t])?n(e):r||Y?e.getAttribute(t):(n=e.getAttributeNode(t),n?typeof e[t]=="boolean"?e[t]?t:null:n.specified?n.value:null:null)},i=nt.selectors={cacheLength:50,createPseudo:N,match:J,attrHandle:G?{}:{href:function(e){return e.getAttribute("href",2)},type:function(e){return e.getAttribute("type")}},find:{ID:r?function(e,t,n){if(typeof t.getElementById!==p&&!n){var r=t.getElementById(e);return r&&r.parentNode?[r]:[]}}:function(e,n,r){if(typeof n.getElementById!==p&&!r){var i=n.getElementById(e);return i?i.id===e||typeof i.getAttributeNode!==p&&i.getAttributeNode("id").value===e?[i]:t:[]}},TAG:Q?function(e,t){if(typeof t.getElementsByTagName!==p)return t.getElementsByTagName(e)}:function(e,t){var n=t.getElementsByTagName(e);if(e==="*"){var r,i=[],s=0;for(;r=n[s];s++)r.nodeType===1&&i.push(r);return i}return 
n},NAME:et&&function(e,t){if(typeof t.getElementsByName!==p)return t.getElementsByName(name)},CLASS:Z&&function(e,t,n){if(typeof t.getElementsByClassName!==p&&!n)return t.getElementsByClassName(e)}},relative:{">":{dir:"parentNode",first:!0}," ":{dir:"parentNode"},"+":{dir:"previousSibling",first:!0},"~":{dir:"previousSibling"}},preFilter:{ATTR:function(e){return e[1]=e[1].replace($,""),e[3]=(e[4]||e[5]||"").replace($,""),e[2]==="~="&&(e[3]=" "+e[3]+" "),e.slice(0,4)},CHILD:function(e){return e[1]=e[1].toLowerCase(),e[1]==="nth"?(e[2]||nt.error(e[0]),e[3]=+(e[3]?e[4]+(e[5]||1):2*(e[2]==="even"||e[2]==="odd")),e[4]=+(e[6]+e[7]||e[2]==="odd")):e[2]&&nt.error(e[0]),e},PSEUDO:function(e){var t,n;if(J.CHILD.test(e[0]))return null;if(e[3])e[2]=e[3];else if(t=e[4])q.test(t)&&(n=ut(t,!0))&&(n=t.indexOf(")",t.length-n)-t.length)&&(t=t.slice(0,n),e[0]=e[0].slice(0,n)),e[2]=t;return e.slice(0,3)}},filter:{ID:r?function(e){return e=e.replace($,""),function(t){return t.getAttribute("id")===e}}:function(e){return e=e.replace($,""),function(t){var n=typeof t.getAttributeNode!==p&&t.getAttributeNode("id");return n&&n.value===e}},TAG:function(e){return e==="*"?function(){return!0}:(e=e.replace($,"").toLowerCase(),function(t){return t.nodeName&&t.nodeName.toLowerCase()===e})},CLASS:function(e){var t=k[d][e+" "];return t||(t=new RegExp("(^|"+O+")"+e+"("+O+"|$)"))&&k(e,function(e){return t.test(e.className||typeof e.getAttribute!==p&&e.getAttribute("class")||"")})},ATTR:function(e,t,n){return function(r,i){var s=nt.attr(r,e);return s==null?t==="!=":t?(s+="",t==="="?s===n:t==="!="?s!==n:t==="^="?n&&s.indexOf(n)===0:t==="*="?n&&s.indexOf(n)>-1:t==="$="?n&&s.substr(s.length-n.length)===n:t==="~="?(" "+s+" ").indexOf(n)>-1:t==="|="?s===n||s.substr(0,n.length+1)===n+"-":!1):!0}},CHILD:function(e,t,n,r){return e==="nth"?function(e){var t,i,s=e.parentNode;if(n===1&&r===0)return!0;if(s){i=0;for(t=s.firstChild;t;t=t.nextSibling)if(t.nodeType===1){i++;if(e===t)break}}return 
i-=r,i===n||i%n===0&&i/n>=0}:function(t){var n=t;switch(e){case"only":case"first":while(n=n.previousSibling)if(n.nodeType===1)return!1;if(e==="first")return!0;n=t;case"last":while(n=n.nextSibling)if(n.nodeType===1)return!1;return!0}}},PSEUDO:function(e,t){var n,r=i.pseudos[e]||i.setFilters[e.toLowerCase()]||nt.error("unsupported pseudo: "+e);return r[d]?r(t):r.length>1?(n=[e,e,"",t],i.setFilters.hasOwnProperty(e.toLowerCase())?N(function(e,n){var i,s=r(e,t),o=s.length;while(o--)i=T.call(e,s[o]),e[i]=!(n[i]=s[o])}):function(e){return r(e,0,n)}):r}},pseudos:{not:N(function(e){var t=[],n=[],r=a(e.replace(j,"$1"));return r[d]?N(function(e,t,n,i){var s,o=r(e,null,i,[]),u=e.length;while(u--)if(s=o[u])e[u]=!(t[u]=s)}):function(e,i,s){return t[0]=e,r(t,null,s,n),!n.pop()}}),has:N(function(e){return function(t){return nt(e,t).length>0}}),contains:N(function(e){return function(t){return(t.textContent||t.innerText||s(t)).indexOf(e)>-1}}),enabled:function(e){return e.disabled===!1},disabled:function(e){return e.disabled===!0},checked:function(e){var t=e.nodeName.toLowerCase();return t==="input"&&!!e.checked||t==="option"&&!!e.selected},selected:function(e){return e.parentNode&&e.parentNode.selectedIndex,e.selected===!0},parent:function(e){return!i.pseudos.empty(e)},empty:function(e){var t;e=e.firstChild;while(e){if(e.nodeName>"@"||(t=e.nodeType)===3||t===4)return!1;e=e.nextSibling}return!0},header:function(e){return X.test(e.nodeName)},text:function(e){var t,n;return e.nodeName.toLowerCase()==="input"&&(t=e.type)==="text"&&((n=e.getAttribute("type"))==null||n.toLowerCase()===t)},radio:rt("radio"),checkbox:rt("checkbox"),file:rt("file"),password:rt("password"),image:rt("image"),submit:it("submit"),reset:it("reset"),button:function(e){var t=e.nodeName.toLowerCase();return t==="input"&&e.type==="button"||t==="button"},input:function(e){return V.test(e.nodeName)},focus:function(e){var t=e.ownerDocument;return 
e===t.activeElement&&(!t.hasFocus||t.hasFocus())&&!!(e.type||e.href||~e.tabIndex)},active:function(e){return e===e.ownerDocument.activeElement},first:st(function(){return[0]}),last:st(function(e,t){return[t-1]}),eq:st(function(e,t,n){return[n<0?n+t:n]}),even:st(function(e,t){for(var n=0;n=0;)e.push(r);return e}),gt:st(function(e,t,n){for(var r=n<0?n+t:n;++r",e.querySelectorAll("[selected]").length||i.push("\\["+O+"*(?:checked|disabled|ismap|multiple|readonly|selected|value)"),e.querySelectorAll(":checked").length||i.push(":checked")}),K(function(e){e.innerHTML="

",e.querySelectorAll("[test^='']").length&&i.push("[*^$]="+O+"*(?:\"\"|'')"),e.innerHTML="",e.querySelectorAll(":enabled").length||i.push(":enabled",":disabled")}),i=new RegExp(i.join("|")),vt=function(e,r,s,o,u){if(!o&&!u&&!i.test(e)){var a,f,l=!0,c=d,h=r,p=r.nodeType===9&&e;if(r.nodeType===1&&r.nodeName.toLowerCase()!=="object"){a=ut(e),(l=r.getAttribute("id"))?c=l.replace(n,"\\$&"):r.setAttribute("id",c),c="[id='"+c+"'] ",f=a.length;while(f--)a[f]=c+a[f].join("");h=z.test(e)&&r.parentNode||r,p=a.join(",")}if(p)try{return S.apply(s,x.call(h.querySelectorAll(p),0)),s}catch(v){}finally{l||r.removeAttribute("id")}}return t(e,r,s,o,u)},u&&(K(function(t){e=u.call(t,"div");try{u.call(t,"[test!='']:sizzle"),s.push("!=",H)}catch(n){}}),s=new RegExp(s.join("|")),nt.matchesSelector=function(t,n){n=n.replace(r,"='$1']");if(!o(t)&&!s.test(n)&&!i.test(n))try{var a=u.call(t,n);if(a||e||t.document&&t.document.nodeType!==11)return a}catch(f){}return nt(n,null,null,[t]).length>0})}(),i.pseudos.nth=i.pseudos.eq,i.filters=mt.prototype=i.pseudos,i.setFilters=new mt,nt.attr=v.attr,v.find=nt,v.expr=nt.selectors,v.expr[":"]=v.expr.pseudos,v.unique=nt.uniqueSort,v.text=nt.getText,v.isXMLDoc=nt.isXML,v.contains=nt.contains}(e);var nt=/Until$/,rt=/^(?:parents|prev(?:Until|All))/,it=/^.[^:#\[\.,]*$/,st=v.expr.match.needsContext,ot={children:!0,contents:!0,next:!0,prev:!0};v.fn.extend({find:function(e){var t,n,r,i,s,o,u=this;if(typeof e!="string")return v(e).filter(function(){for(t=0,n=u.length;t0)for(i=r;i=0:v.filter(e,this).length>0:this.filter(e).length>0)},closest:function(e,t){var n,r=0,i=this.length,s=[],o=st.test(e)||typeof e!="string"?v(e,t||this.context):0;for(;r-1:v.find.matchesSelector(n,e)){s.push(n);break}n=n.parentNode}}return s=s.length>1?v.unique(s):s,this.pushStack(s,"closest",e)},index:function(e){return e?typeof e=="string"?v.inArray(this[0],v(e)):v.inArray(e.jquery?e[0]:e,this):this[0]&&this[0].parentNode?this.prevAll().length:-1},add:function(e,t){var n=typeof 
e=="string"?v(e,t):v.makeArray(e&&e.nodeType?[e]:e),r=v.merge(this.get(),n);return this.pushStack(ut(n[0])||ut(r[0])?r:v.unique(r))},addBack:function(e){return this.add(e==null?this.prevObject:this.prevObject.filter(e))}}),v.fn.andSelf=v.fn.addBack,v.each({parent:function(e){var t=e.parentNode;return t&&t.nodeType!==11?t:null},parents:function(e){return v.dir(e,"parentNode")},parentsUntil:function(e,t,n){return v.dir(e,"parentNode",n)},next:function(e){return at(e,"nextSibling")},prev:function(e){return at(e,"previousSibling")},nextAll:function(e){return v.dir(e,"nextSibling")},prevAll:function(e){return v.dir(e,"previousSibling")},nextUntil:function(e,t,n){return v.dir(e,"nextSibling",n)},prevUntil:function(e,t,n){return v.dir(e,"previousSibling",n)},siblings:function(e){return v.sibling((e.parentNode||{}).firstChild,e)},children:function(e){return v.sibling(e.firstChild)},contents:function(e){return v.nodeName(e,"iframe")?e.contentDocument||e.contentWindow.document:v.merge([],e.childNodes)}},function(e,t){v.fn[e]=function(n,r){var i=v.map(this,t,n);return nt.test(e)||(r=n),r&&typeof r=="string"&&(i=v.filter(r,i)),i=this.length>1&&!ot[e]?v.unique(i):i,this.length>1&&rt.test(e)&&(i=i.reverse()),this.pushStack(i,e,l.call(arguments).join(","))}}),v.extend({filter:function(e,t,n){return n&&(e=":not("+e+")"),t.length===1?v.find.matchesSelector(t[0],e)?[t[0]]:[]:v.find.matches(e,t)},dir:function(e,n,r){var i=[],s=e[n];while(s&&s.nodeType!==9&&(r===t||s.nodeType!==1||!v(s).is(r)))s.nodeType===1&&i.push(s),s=s[n];return i},sibling:function(e,t){var n=[];for(;e;e=e.nextSibling)e.nodeType===1&&e!==t&&n.push(e);return n}});var ct="abbr|article|aside|audio|bdi|canvas|data|datalist|details|figcaption|figure|footer|header|hgroup|mark|meter|nav|output|progress|section|summary|time|video",ht=/ 
jQuery\d+="(?:null|\d+)"/g,pt=/^\s+/,dt=/<(?!area|br|col|embed|hr|img|input|link|meta|param)(([\w:]+)[^>]*)\/>/gi,vt=/<([\w:]+)/,mt=/]","i"),Et=/^(?:checkbox|radio)$/,St=/checked\s*(?:[^=]|=\s*.checked.)/i,xt=/\/(java|ecma)script/i,Tt=/^\s*\s*$/g,Nt={option:[1,""],legend:[1,"
","
"],thead:[1,"","
"],tr:[2,"","
"],td:[3,"","
"],col:[2,"","
"],area:[1,"",""],_default:[0,"",""]},Ct=lt(i),kt=Ct.appendChild(i.createElement("div"));Nt.optgroup=Nt.option,Nt.tbody=Nt.tfoot=Nt.colgroup=Nt.caption=Nt.thead,Nt.th=Nt.td,v.support.htmlSerialize||(Nt._default=[1,"X
","
"]),v.fn.extend({text:function(e){return v.access(this,function(e){return e===t?v.text(this):this.empty().append((this[0]&&this[0].ownerDocument||i).createTextNode(e))},null,e,arguments.length)},wrapAll:function(e){if(v.isFunction(e))return this.each(function(t){v(this).wrapAll(e.call(this,t))});if(this[0]){var t=v(e,this[0].ownerDocument).eq(0).clone(!0);this[0].parentNode&&t.insertBefore(this[0]),t.map(function(){var e=this;while(e.firstChild&&e.firstChild.nodeType===1)e=e.firstChild;return e}).append(this)}return this},wrapInner:function(e){return v.isFunction(e)?this.each(function(t){v(this).wrapInner(e.call(this,t))}):this.each(function(){var t=v(this),n=t.contents();n.length?n.wrapAll(e):t.append(e)})},wrap:function(e){var t=v.isFunction(e);return this.each(function(n){v(this).wrapAll(t?e.call(this,n):e)})},unwrap:function(){return this.parent().each(function(){v.nodeName(this,"body")||v(this).replaceWith(this.childNodes)}).end()},append:function(){return this.domManip(arguments,!0,function(e){(this.nodeType===1||this.nodeType===11)&&this.appendChild(e)})},prepend:function(){return this.domManip(arguments,!0,function(e){(this.nodeType===1||this.nodeType===11)&&this.insertBefore(e,this.firstChild)})},before:function(){if(!ut(this[0]))return this.domManip(arguments,!1,function(e){this.parentNode.insertBefore(e,this)});if(arguments.length){var e=v.clean(arguments);return this.pushStack(v.merge(e,this),"before",this.selector)}},after:function(){if(!ut(this[0]))return this.domManip(arguments,!1,function(e){this.parentNode.insertBefore(e,this.nextSibling)});if(arguments.length){var e=v.clean(arguments);return this.pushStack(v.merge(this,e),"after",this.selector)}},remove:function(e,t){var n,r=0;for(;(n=this[r])!=null;r++)if(!e||v.filter(e,[n]).length)!t&&n.nodeType===1&&(v.cleanData(n.getElementsByTagName("*")),v.cleanData([n])),n.parentNode&&n.parentNode.removeChild(n);return this},empty:function(){var 
e,t=0;for(;(e=this[t])!=null;t++){e.nodeType===1&&v.cleanData(e.getElementsByTagName("*"));while(e.firstChild)e.removeChild(e.firstChild)}return this},clone:function(e,t){return e=e==null?!1:e,t=t==null?e:t,this.map(function(){return v.clone(this,e,t)})},html:function(e){return v.access(this,function(e){var n=this[0]||{},r=0,i=this.length;if(e===t)return n.nodeType===1?n.innerHTML.replace(ht,""):t;if(typeof e=="string"&&!yt.test(e)&&(v.support.htmlSerialize||!wt.test(e))&&(v.support.leadingWhitespace||!pt.test(e))&&!Nt[(vt.exec(e)||["",""])[1].toLowerCase()]){e=e.replace(dt,"<$1>");try{for(;r1&&typeof f=="string"&&St.test(f))return this.each(function(){v(this).domManip(e,n,r)});if(v.isFunction(f))return this.each(function(i){var s=v(this);e[0]=f.call(this,i,n?s.html():t),s.domManip(e,n,r)});if(this[0]){i=v.buildFragment(e,this,l),o=i.fragment,s=o.firstChild,o.childNodes.length===1&&(o=s);if(s){n=n&&v.nodeName(s,"tr");for(u=i.cacheable||c-1;a0?this.clone(!0):this).get(),v(o[i])[t](r),s=s.concat(r);return this.pushStack(s,e,o.selector)}}),v.extend({clone:function(e,t,n){var r,i,s,o;v.support.html5Clone||v.isXMLDoc(e)||!wt.test("<"+e.nodeName+">")?o=e.cloneNode(!0):(kt.innerHTML=e.outerHTML,kt.removeChild(o=kt.firstChild));if((!v.support.noCloneEvent||!v.support.noCloneChecked)&&(e.nodeType===1||e.nodeType===11)&&!v.isXMLDoc(e)){Ot(e,o),r=Mt(e),i=Mt(o);for(s=0;r[s];++s)i[s]&&Ot(r[s],i[s])}if(t){At(e,o);if(n){r=Mt(e),i=Mt(o);for(s=0;r[s];++s)At(r[s],i[s])}}return r=i=null,o},clean:function(e,t,n,r){var s,o,u,a,f,l,c,h,p,d,m,g,y=t===i&&Ct,b=[];if(!t||typeof t.createDocumentFragment=="undefined")t=i;for(s=0;(u=e[s])!=null;s++){typeof u=="number"&&(u+="");if(!u)continue;if(typeof 
u=="string")if(!gt.test(u))u=t.createTextNode(u);else{y=y||lt(t),c=t.createElement("div"),y.appendChild(c),u=u.replace(dt,"<$1>"),a=(vt.exec(u)||["",""])[1].toLowerCase(),f=Nt[a]||Nt._default,l=f[0],c.innerHTML=f[1]+u+f[2];while(l--)c=c.lastChild;if(!v.support.tbody){h=mt.test(u),p=a==="table"&&!h?c.firstChild&&c.firstChild.childNodes:f[1]===""&&!h?c.childNodes:[];for(o=p.length-1;o>=0;--o)v.nodeName(p[o],"tbody")&&!p[o].childNodes.length&&p[o].parentNode.removeChild(p[o])}!v.support.leadingWhitespace&&pt.test(u)&&c.insertBefore(t.createTextNode(pt.exec(u)[0]),c.firstChild),u=c.childNodes,c.parentNode.removeChild(c)}u.nodeType?b.push(u):v.merge(b,u)}c&&(u=c=y=null);if(!v.support.appendChecked)for(s=0;(u=b[s])!=null;s++)v.nodeName(u,"input")?_t(u):typeof u.getElementsByTagName!="undefined"&&v.grep(u.getElementsByTagName("input"),_t);if(n){m=function(e){if(!e.type||xt.test(e.type))return r?r.push(e.parentNode?e.parentNode.removeChild(e):e):n.appendChild(e)};for(s=0;(u=b[s])!=null;s++)if(!v.nodeName(u,"script")||!m(u))n.appendChild(u),typeof u.getElementsByTagName!="undefined"&&(g=v.grep(v.merge([],u.getElementsByTagName("script")),m),b.splice.apply(b,[s+1,0].concat(g)),s+=g.length)}return b},cleanData:function(e,t){var n,r,i,s,o=0,u=v.expando,a=v.cache,f=v.support.deleteExpando,l=v.event.special;for(;(i=e[o])!=null;o++)if(t||v.acceptData(i)){r=i[u],n=r&&a[r];if(n){if(n.events)for(s in n.events)l[s]?v.event.remove(i,s):v.removeEvent(i,s,n.handle);a[r]&&(delete a[r],f?delete i[u]:i.removeAttribute?i.removeAttribute(u):i[u]=null,v.deletedIds.push(r))}}}}),function(){var e,t;v.uaMatch=function(e){e=e.toLowerCase();var t=/(chrome)[ \/]([\w.]+)/.exec(e)||/(webkit)[ \/]([\w.]+)/.exec(e)||/(opera)(?:.*version|)[ \/]([\w.]+)/.exec(e)||/(msie) ([\w.]+)/.exec(e)||e.indexOf("compatible")<0&&/(mozilla)(?:.*? 
rv:([\w.]+)|)/.exec(e)||[];return{browser:t[1]||"",version:t[2]||"0"}},e=v.uaMatch(o.userAgent),t={},e.browser&&(t[e.browser]=!0,t.version=e.version),t.chrome?t.webkit=!0:t.webkit&&(t.safari=!0),v.browser=t,v.sub=function(){function e(t,n){return new e.fn.init(t,n)}v.extend(!0,e,this),e.superclass=this,e.fn=e.prototype=this(),e.fn.constructor=e,e.sub=this.sub,e.fn.init=function(r,i){return i&&i instanceof v&&!(i instanceof e)&&(i=e(i)),v.fn.init.call(this,r,i,t)},e.fn.init.prototype=e.fn;var t=e(i);return e}}();var Dt,Pt,Ht,Bt=/alpha\([^)]*\)/i,jt=/opacity=([^)]*)/,Ft=/^(top|right|bottom|left)$/,It=/^(none|table(?!-c[ea]).+)/,qt=/^margin/,Rt=new RegExp("^("+m+")(.*)$","i"),Ut=new RegExp("^("+m+")(?!px)[a-z%]+$","i"),zt=new RegExp("^([-+])=("+m+")","i"),Wt={BODY:"block"},Xt={position:"absolute",visibility:"hidden",display:"block"},Vt={letterSpacing:0,fontWeight:400},$t=["Top","Right","Bottom","Left"],Jt=["Webkit","O","Moz","ms"],Kt=v.fn.toggle;v.fn.extend({css:function(e,n){return v.access(this,function(e,n,r){return r!==t?v.style(e,n,r):v.css(e,n)},e,n,arguments.length>1)},show:function(){return Yt(this,!0)},hide:function(){return Yt(this)},toggle:function(e,t){var n=typeof e=="boolean";return v.isFunction(e)&&v.isFunction(t)?Kt.apply(this,arguments):this.each(function(){(n?e:Gt(this))?v(this).show():v(this).hide()})}}),v.extend({cssHooks:{opacity:{get:function(e,t){if(t){var n=Dt(e,"opacity");return n===""?"1":n}}}},cssNumber:{fillOpacity:!0,fontWeight:!0,lineHeight:!0,opacity:!0,orphans:!0,widows:!0,zIndex:!0,zoom:!0},cssProps:{"float":v.support.cssFloat?"cssFloat":"styleFloat"},style:function(e,n,r,i){if(!e||e.nodeType===3||e.nodeType===8||!e.style)return;var s,o,u,a=v.camelCase(n),f=e.style;n=v.cssProps[a]||(v.cssProps[a]=Qt(f,a)),u=v.cssHooks[n]||v.cssHooks[a];if(r===t)return u&&"get"in u&&(s=u.get(e,!1,i))!==t?s:f[n];o=typeof 
r,o==="string"&&(s=zt.exec(r))&&(r=(s[1]+1)*s[2]+parseFloat(v.css(e,n)),o="number");if(r==null||o==="number"&&isNaN(r))return;o==="number"&&!v.cssNumber[a]&&(r+="px");if(!u||!("set"in u)||(r=u.set(e,r,i))!==t)try{f[n]=r}catch(l){}},css:function(e,n,r,i){var s,o,u,a=v.camelCase(n);return n=v.cssProps[a]||(v.cssProps[a]=Qt(e.style,a)),u=v.cssHooks[n]||v.cssHooks[a],u&&"get"in u&&(s=u.get(e,!0,i)),s===t&&(s=Dt(e,n)),s==="normal"&&n in Vt&&(s=Vt[n]),r||i!==t?(o=parseFloat(s),r||v.isNumeric(o)?o||0:s):s},swap:function(e,t,n){var r,i,s={};for(i in t)s[i]=e.style[i],e.style[i]=t[i];r=n.call(e);for(i in t)e.style[i]=s[i];return r}}),e.getComputedStyle?Dt=function(t,n){var r,i,s,o,u=e.getComputedStyle(t,null),a=t.style;return u&&(r=u.getPropertyValue(n)||u[n],r===""&&!v.contains(t.ownerDocument,t)&&(r=v.style(t,n)),Ut.test(r)&&qt.test(n)&&(i=a.width,s=a.minWidth,o=a.maxWidth,a.minWidth=a.maxWidth=a.width=r,r=u.width,a.width=i,a.minWidth=s,a.maxWidth=o)),r}:i.documentElement.currentStyle&&(Dt=function(e,t){var n,r,i=e.currentStyle&&e.currentStyle[t],s=e.style;return i==null&&s&&s[t]&&(i=s[t]),Ut.test(i)&&!Ft.test(t)&&(n=s.left,r=e.runtimeStyle&&e.runtimeStyle.left,r&&(e.runtimeStyle.left=e.currentStyle.left),s.left=t==="fontSize"?"1em":i,i=s.pixelLeft+"px",s.left=n,r&&(e.runtimeStyle.left=r)),i===""?"auto":i}),v.each(["height","width"],function(e,t){v.cssHooks[t]={get:function(e,n,r){if(n)return e.offsetWidth===0&&It.test(Dt(e,"display"))?v.swap(e,Xt,function(){return tn(e,t,r)}):tn(e,t,r)},set:function(e,n,r){return Zt(e,n,r?en(e,t,r,v.support.boxSizing&&v.css(e,"boxSizing")==="border-box"):0)}}}),v.support.opacity||(v.cssHooks.opacity={get:function(e,t){return jt.test((t&&e.currentStyle?e.currentStyle.filter:e.style.filter)||"")?.01*parseFloat(RegExp.$1)+"":t?"1":""},set:function(e,t){var 
n=e.style,r=e.currentStyle,i=v.isNumeric(t)?"alpha(opacity="+t*100+")":"",s=r&&r.filter||n.filter||"";n.zoom=1;if(t>=1&&v.trim(s.replace(Bt,""))===""&&n.removeAttribute){n.removeAttribute("filter");if(r&&!r.filter)return}n.filter=Bt.test(s)?s.replace(Bt,i):s+" "+i}}),v(function(){v.support.reliableMarginRight||(v.cssHooks.marginRight={get:function(e,t){return v.swap(e,{display:"inline-block"},function(){if(t)return Dt(e,"marginRight")})}}),!v.support.pixelPosition&&v.fn.position&&v.each(["top","left"],function(e,t){v.cssHooks[t]={get:function(e,n){if(n){var r=Dt(e,t);return Ut.test(r)?v(e).position()[t]+"px":r}}}})}),v.expr&&v.expr.filters&&(v.expr.filters.hidden=function(e){return e.offsetWidth===0&&e.offsetHeight===0||!v.support.reliableHiddenOffsets&&(e.style&&e.style.display||Dt(e,"display"))==="none"},v.expr.filters.visible=function(e){return!v.expr.filters.hidden(e)}),v.each({margin:"",padding:"",border:"Width"},function(e,t){v.cssHooks[e+t]={expand:function(n){var r,i=typeof n=="string"?n.split(" "):[n],s={};for(r=0;r<4;r++)s[e+$t[r]+t]=i[r]||i[r-2]||i[0];return s}},qt.test(e)||(v.cssHooks[e+t].set=Zt)});var rn=/%20/g,sn=/\[\]$/,on=/\r?\n/g,un=/^(?:color|date|datetime|datetime-local|email|hidden|month|number|password|range|search|tel|text|time|url|week)$/i,an=/^(?:select|textarea)/i;v.fn.extend({serialize:function(){return v.param(this.serializeArray())},serializeArray:function(){return this.map(function(){return this.elements?v.makeArray(this.elements):this}).filter(function(){return this.name&&!this.disabled&&(this.checked||an.test(this.nodeName)||un.test(this.type))}).map(function(e,t){var n=v(this).val();return n==null?null:v.isArray(n)?v.map(n,function(e,n){return{name:t.name,value:e.replace(on,"\r\n")}}):{name:t.name,value:n.replace(on,"\r\n")}}).get()}}),v.param=function(e,n){var 
r,i=[],s=function(e,t){t=v.isFunction(t)?t():t==null?"":t,i[i.length]=encodeURIComponent(e)+"="+encodeURIComponent(t)};n===t&&(n=v.ajaxSettings&&v.ajaxSettings.traditional);if(v.isArray(e)||e.jquery&&!v.isPlainObject(e))v.each(e,function(){s(this.name,this.value)});else for(r in e)fn(r,e[r],n,s);return i.join("&").replace(rn,"+")};var ln,cn,hn=/#.*$/,pn=/^(.*?):[ \t]*([^\r\n]*)\r?$/mg,dn=/^(?:about|app|app\-storage|.+\-extension|file|res|widget):$/,vn=/^(?:GET|HEAD)$/,mn=/^\/\//,gn=/\?/,yn=/)<[^<]*)*<\/script>/gi,bn=/([?&])_=[^&]*/,wn=/^([\w\+\.\-]+:)(?:\/\/([^\/?#:]*)(?::(\d+)|)|)/,En=v.fn.load,Sn={},xn={},Tn=["*/"]+["*"];try{cn=s.href}catch(Nn){cn=i.createElement("a"),cn.href="",cn=cn.href}ln=wn.exec(cn.toLowerCase())||[],v.fn.load=function(e,n,r){if(typeof e!="string"&&En)return En.apply(this,arguments);if(!this.length)return this;var i,s,o,u=this,a=e.indexOf(" ");return a>=0&&(i=e.slice(a,e.length),e=e.slice(0,a)),v.isFunction(n)?(r=n,n=t):n&&typeof n=="object"&&(s="POST"),v.ajax({url:e,type:s,dataType:"html",data:n,complete:function(e,t){r&&u.each(r,o||[e.responseText,t,e])}}).done(function(e){o=arguments,u.html(i?v("
").append(e.replace(yn,"")).find(i):e)}),this},v.each("ajaxStart ajaxStop ajaxComplete ajaxError ajaxSuccess ajaxSend".split(" "),function(e,t){v.fn[t]=function(e){return this.on(t,e)}}),v.each(["get","post"],function(e,n){v[n]=function(e,r,i,s){return v.isFunction(r)&&(s=s||i,i=r,r=t),v.ajax({type:n,url:e,data:r,success:i,dataType:s})}}),v.extend({getScript:function(e,n){return v.get(e,t,n,"script")},getJSON:function(e,t,n){return v.get(e,t,n,"json")},ajaxSetup:function(e,t){return t?Ln(e,v.ajaxSettings):(t=e,e=v.ajaxSettings),Ln(e,t),e},ajaxSettings:{url:cn,isLocal:dn.test(ln[1]),global:!0,type:"GET",contentType:"application/x-www-form-urlencoded; charset=UTF-8",processData:!0,async:!0,accepts:{xml:"application/xml, text/xml",html:"text/html",text:"text/plain",json:"application/json, text/javascript","*":Tn},contents:{xml:/xml/,html:/html/,json:/json/},responseFields:{xml:"responseXML",text:"responseText"},converters:{"* text":e.String,"text html":!0,"text json":v.parseJSON,"text xml":v.parseXML},flatOptions:{context:!0,url:!0}},ajaxPrefilter:Cn(Sn),ajaxTransport:Cn(xn),ajax:function(e,n){function T(e,n,s,a){var l,y,b,w,S,T=n;if(E===2)return;E=2,u&&clearTimeout(u),o=t,i=a||"",x.readyState=e>0?4:0,s&&(w=An(c,x,s));if(e>=200&&e<300||e===304)c.ifModified&&(S=x.getResponseHeader("Last-Modified"),S&&(v.lastModified[r]=S),S=x.getResponseHeader("Etag"),S&&(v.etag[r]=S)),e===304?(T="notmodified",l=!0):(l=On(c,w),T=l.state,y=l.data,b=l.error,l=!b);else{b=T;if(!T||e)T="error",e<0&&(e=0)}x.status=e,x.statusText=(n||T)+"",l?d.resolveWith(h,[y,T,x]):d.rejectWith(h,[x,T,b]),x.statusCode(g),g=t,f&&p.trigger("ajax"+(l?"Success":"Error"),[x,c,l?y:b]),m.fireWith(h,[x,T]),f&&(p.trigger("ajaxComplete",[x,c]),--v.active||v.event.trigger("ajaxStop"))}typeof e=="object"&&(n=e,e=t),n=n||{};var r,i,s,o,u,a,f,l,c=v.ajaxSetup({},n),h=c.context||c,p=h!==c&&(h.nodeType||h instanceof v)?v(h):v.event,d=v.Deferred(),m=v.Callbacks("once 
memory"),g=c.statusCode||{},b={},w={},E=0,S="canceled",x={readyState:0,setRequestHeader:function(e,t){if(!E){var n=e.toLowerCase();e=w[n]=w[n]||e,b[e]=t}return this},getAllResponseHeaders:function(){return E===2?i:null},getResponseHeader:function(e){var n;if(E===2){if(!s){s={};while(n=pn.exec(i))s[n[1].toLowerCase()]=n[2]}n=s[e.toLowerCase()]}return n===t?null:n},overrideMimeType:function(e){return E||(c.mimeType=e),this},abort:function(e){return e=e||S,o&&o.abort(e),T(0,e),this}};d.promise(x),x.success=x.done,x.error=x.fail,x.complete=m.add,x.statusCode=function(e){if(e){var t;if(E<2)for(t in e)g[t]=[g[t],e[t]];else t=e[x.status],x.always(t)}return this},c.url=((e||c.url)+"").replace(hn,"").replace(mn,ln[1]+"//"),c.dataTypes=v.trim(c.dataType||"*").toLowerCase().split(y),c.crossDomain==null&&(a=wn.exec(c.url.toLowerCase()),c.crossDomain=!(!a||a[1]===ln[1]&&a[2]===ln[2]&&(a[3]||(a[1]==="http:"?80:443))==(ln[3]||(ln[1]==="http:"?80:443)))),c.data&&c.processData&&typeof c.data!="string"&&(c.data=v.param(c.data,c.traditional)),kn(Sn,c,n,x);if(E===2)return x;f=c.global,c.type=c.type.toUpperCase(),c.hasContent=!vn.test(c.type),f&&v.active++===0&&v.event.trigger("ajaxStart");if(!c.hasContent){c.data&&(c.url+=(gn.test(c.url)?"&":"?")+c.data,delete c.data),r=c.url;if(c.cache===!1){var N=v.now(),C=c.url.replace(bn,"$1_="+N);c.url=C+(C===c.url?(gn.test(c.url)?"&":"?")+"_="+N:"")}}(c.data&&c.hasContent&&c.contentType!==!1||n.contentType)&&x.setRequestHeader("Content-Type",c.contentType),c.ifModified&&(r=r||c.url,v.lastModified[r]&&x.setRequestHeader("If-Modified-Since",v.lastModified[r]),v.etag[r]&&x.setRequestHeader("If-None-Match",v.etag[r])),x.setRequestHeader("Accept",c.dataTypes[0]&&c.accepts[c.dataTypes[0]]?c.accepts[c.dataTypes[0]]+(c.dataTypes[0]!=="*"?", "+Tn+"; q=0.01":""):c.accepts["*"]);for(l in c.headers)x.setRequestHeader(l,c.headers[l]);if(!c.beforeSend||c.beforeSend.call(h,x,c)!==!1&&E!==2){S="abort";for(l 
in{success:1,error:1,complete:1})x[l](c[l]);o=kn(xn,c,n,x);if(!o)T(-1,"No Transport");else{x.readyState=1,f&&p.trigger("ajaxSend",[x,c]),c.async&&c.timeout>0&&(u=setTimeout(function(){x.abort("timeout")},c.timeout));try{E=1,o.send(b,T)}catch(k){if(!(E<2))throw k;T(-1,k)}}return x}return x.abort()},active:0,lastModified:{},etag:{}});var Mn=[],_n=/\?/,Dn=/(=)\?(?=&|$)|\?\?/,Pn=v.now();v.ajaxSetup({jsonp:"callback",jsonpCallback:function(){var e=Mn.pop()||v.expando+"_"+Pn++;return this[e]=!0,e}}),v.ajaxPrefilter("json jsonp",function(n,r,i){var s,o,u,a=n.data,f=n.url,l=n.jsonp!==!1,c=l&&Dn.test(f),h=l&&!c&&typeof a=="string"&&!(n.contentType||"").indexOf("application/x-www-form-urlencoded")&&Dn.test(a);if(n.dataTypes[0]==="jsonp"||c||h)return s=n.jsonpCallback=v.isFunction(n.jsonpCallback)?n.jsonpCallback():n.jsonpCallback,o=e[s],c?n.url=f.replace(Dn,"$1"+s):h?n.data=a.replace(Dn,"$1"+s):l&&(n.url+=(_n.test(f)?"&":"?")+n.jsonp+"="+s),n.converters["script json"]=function(){return u||v.error(s+" was not called"),u[0]},n.dataTypes[0]="json",e[s]=function(){u=arguments},i.always(function(){e[s]=o,n[s]&&(n.jsonpCallback=r.jsonpCallback,Mn.push(s)),u&&v.isFunction(o)&&o(u[0]),u=o=t}),"script"}),v.ajaxSetup({accepts:{script:"text/javascript, application/javascript, application/ecmascript, application/x-ecmascript"},contents:{script:/javascript|ecmascript/},converters:{"text script":function(e){return v.globalEval(e),e}}}),v.ajaxPrefilter("script",function(e){e.cache===t&&(e.cache=!1),e.crossDomain&&(e.type="GET",e.global=!1)}),v.ajaxTransport("script",function(e){if(e.crossDomain){var 
n,r=i.head||i.getElementsByTagName("head")[0]||i.documentElement;return{send:function(s,o){n=i.createElement("script"),n.async="async",e.scriptCharset&&(n.charset=e.scriptCharset),n.src=e.url,n.onload=n.onreadystatechange=function(e,i){if(i||!n.readyState||/loaded|complete/.test(n.readyState))n.onload=n.onreadystatechange=null,r&&n.parentNode&&r.removeChild(n),n=t,i||o(200,"success")},r.insertBefore(n,r.firstChild)},abort:function(){n&&n.onload(0,1)}}}});var Hn,Bn=e.ActiveXObject?function(){for(var e in Hn)Hn[e](0,1)}:!1,jn=0;v.ajaxSettings.xhr=e.ActiveXObject?function(){return!this.isLocal&&Fn()||In()}:Fn,function(e){v.extend(v.support,{ajax:!!e,cors:!!e&&"withCredentials"in e})}(v.ajaxSettings.xhr()),v.support.ajax&&v.ajaxTransport(function(n){if(!n.crossDomain||v.support.cors){var r;return{send:function(i,s){var o,u,a=n.xhr();n.username?a.open(n.type,n.url,n.async,n.username,n.password):a.open(n.type,n.url,n.async);if(n.xhrFields)for(u in n.xhrFields)a[u]=n.xhrFields[u];n.mimeType&&a.overrideMimeType&&a.overrideMimeType(n.mimeType),!n.crossDomain&&!i["X-Requested-With"]&&(i["X-Requested-With"]="XMLHttpRequest");try{for(u in i)a.setRequestHeader(u,i[u])}catch(f){}a.send(n.hasContent&&n.data||null),r=function(e,i){var u,f,l,c,h;try{if(r&&(i||a.readyState===4)){r=t,o&&(a.onreadystatechange=v.noop,Bn&&delete Hn[o]);if(i)a.readyState!==4&&a.abort();else{u=a.status,l=a.getAllResponseHeaders(),c={},h=a.responseXML,h&&h.documentElement&&(c.xml=h);try{c.text=a.responseText}catch(p){}try{f=a.statusText}catch(p){f=""}!u&&n.isLocal&&!n.crossDomain?u=c.text?200:404:u===1223&&(u=204)}}}catch(d){i||s(-1,d)}c&&s(u,f,c,l)},n.async?a.readyState===4?setTimeout(r,0):(o=++jn,Bn&&(Hn||(Hn={},v(e).unload(Bn)),Hn[o]=r),a.onreadystatechange=r):r()},abort:function(){r&&r(0,1)}}}});var qn,Rn,Un=/^(?:toggle|show|hide)$/,zn=new RegExp("^(?:([-+])=|)("+m+")([a-z%]*)$","i"),Wn=/queueHooks$/,Xn=[Gn],Vn={"*":[function(e,t){var 
n,r,i=this.createTween(e,t),s=zn.exec(t),o=i.cur(),u=+o||0,a=1,f=20;if(s){n=+s[2],r=s[3]||(v.cssNumber[e]?"":"px");if(r!=="px"&&u){u=v.css(i.elem,e,!0)||n||1;do a=a||".5",u/=a,v.style(i.elem,e,u+r);while(a!==(a=i.cur()/o)&&a!==1&&--f)}i.unit=r,i.start=u,i.end=s[1]?u+(s[1]+1)*n:n}return i}]};v.Animation=v.extend(Kn,{tweener:function(e,t){v.isFunction(e)?(t=e,e=["*"]):e=e.split(" ");var n,r=0,i=e.length;for(;r-1,f={},l={},c,h;a?(l=i.position(),c=l.top,h=l.left):(c=parseFloat(o)||0,h=parseFloat(u)||0),v.isFunction(t)&&(t=t.call(e,n,s)),t.top!=null&&(f.top=t.top-s.top+c),t.left!=null&&(f.left=t.left-s.left+h),"using"in t?t.using.call(e,f):i.css(f)}},v.fn.extend({position:function(){if(!this[0])return;var e=this[0],t=this.offsetParent(),n=this.offset(),r=er.test(t[0].nodeName)?{top:0,left:0}:t.offset();return n.top-=parseFloat(v.css(e,"marginTop"))||0,n.left-=parseFloat(v.css(e,"marginLeft"))||0,r.top+=parseFloat(v.css(t[0],"borderTopWidth"))||0,r.left+=parseFloat(v.css(t[0],"borderLeftWidth"))||0,{top:n.top-r.top,left:n.left-r.left}},offsetParent:function(){return this.map(function(){var e=this.offsetParent||i.body;while(e&&!er.test(e.nodeName)&&v.css(e,"position")==="static")e=e.offsetParent;return e||i.body})}}),v.each({scrollLeft:"pageXOffset",scrollTop:"pageYOffset"},function(e,n){var r=/Y/.test(n);v.fn[e]=function(i){return v.access(this,function(e,i,s){var o=tr(e);if(s===t)return o?n in o?o[n]:o.document.documentElement[i]:e[i];o?o.scrollTo(r?v(o).scrollLeft():s,r?s:v(o).scrollTop()):e[i]=s},e,i,arguments.length,null)}}),v.each({Height:"height",Width:"width"},function(e,n){v.each({padding:"inner"+e,content:n,"":"outer"+e},function(r,i){v.fn[i]=function(i,s){var o=arguments.length&&(r||typeof i!="boolean"),u=r||(i===!0||s===!0?"margin":"border");return v.access(this,function(n,r,i){var s;return 
v.isWindow(n)?n.document.documentElement["client"+e]:n.nodeType===9?(s=n.documentElement,Math.max(n.body["scroll"+e],s["scroll"+e],n.body["offset"+e],s["offset"+e],s["client"+e])):i===t?v.css(n,r,i,u):v.style(n,r,i,u)},n,o?i:t,o,null)}})}),e.jQuery=e.$=v,typeof define=="function"&&define.amd&&define.amd.jQuery&&define("jquery",[],function(){return v})})(window); \ No newline at end of file diff --git a/sphinx/themes/bizstyle/static/css3-mediaqueries_src.js b/sphinx/themes/bizstyle/static/css3-mediaqueries_src.js index e5a3bb0bb..65b44825d 100644 --- a/sphinx/themes/bizstyle/static/css3-mediaqueries_src.js +++ b/sphinx/themes/bizstyle/static/css3-mediaqueries_src.js @@ -1,1104 +1,1104 @@ -/* -css3-mediaqueries.js - CSS Helper and CSS3 Media Queries Enabler - -author: Wouter van der Graaf -version: 1.0 (20110330) -license: MIT -website: http://code.google.com/p/css3-mediaqueries-js/ - -W3C spec: http://www.w3.org/TR/css3-mediaqueries/ - -Note: use of embedded