mirror of https://github.com/sphinx-doc/sphinx.git
synced 2025-02-25 18:55:22 -06:00

Merge with birkenfeld/sphinx.

This commit is contained in:
commit 55c7fa44bb
@@ -15,3 +15,7 @@
^env/
\.DS_Store$
~$
^utils/.*3\.py$
^distribute-
^tests/root/_build/*
^tests/root/generated/*

12 AUTHORS
@@ -8,26 +8,36 @@ Other contributors, listed alphabetically, are:

* Andi Albrecht -- agogo theme
* Henrique Bastos -- SVG support for graphviz extension
* Daniel Bültmann -- todo extension
* Etienne Desautels -- apidoc module
* Michael Droettboom -- inheritance_diagram extension
* Charles Duffy -- original graphviz extension
* Kevin Dunn -- MathJax extension
* Josip Dzolonga -- coverage builder
* Horst Gutmann -- internationalization support
* Martin Hans -- autodoc improvements
* Doug Hellmann -- graphviz improvements
* Dave Kuhlman -- original LaTeX writer
* Blaise Laflamme -- pyramid theme
* Thomas Lamb -- linkcheck builder
* Łukasz Langa -- partial support for autodoc
* Robert Lehmann -- gettext builder (GSOC project)
* Dan MacKinlay -- metadata fixes
* Martin Mahner -- nature theme
* Will Maier -- directory HTML builder
* Jacob Mason -- websupport library (GSOC project)
* Roland Meister -- epub builder
* Ezio Melotti -- collapsible sidebar JavaScript
* Daniel Neuhäuser -- JavaScript domain
* Daniel Neuhäuser -- JavaScript domain, Python 3 support (GSOC)
* Christopher Perkins -- autosummary integration
* Benjamin Peterson -- unittests
* T. Powers -- HTML output improvements
* Stefan Seefeld -- toctree improvements
* Shibukawa Yoshiki -- pluggable search API and Japanese search
* Antonio Valentino -- qthelp builder
* Pauli Virtanen -- autodoc improvements, autosummary extension
* Stefan van der Walt -- autosummary extension
* Thomas Waldmann -- apidoc module fixes
* John Waltman -- Texinfo builder
* Barry Warsaw -- setup command improvements
* Sebastian Wiesner -- image handling, distutils support

311 CHANGES
@@ -1,10 +1,306 @@
Release 1.1 (in development)
============================

Incompatible changes
--------------------

Release 1.0.1 (in development)
* The :rst:dir:`py:module` directive doesn't output its ``platform`` option
  value anymore.  (It was the only thing that the directive did output, and
  therefore quite inconsistent.)

Features added
--------------

* Added Python 3.x support.

* New builders and subsystems:

  - Added a Texinfo builder.
  - Added i18n support for content, a ``gettext`` builder and related
    utilities.
  - Added the ``websupport`` library and builder.
  - #98: Added a ``sphinx-apidoc`` script that autogenerates a hierarchy
    of source files containing autodoc directives to document modules
    and packages.
  - #273: Add an API for adding full-text search support for languages
    other than English.  Add support for Japanese.

* Markup:

  - #138: Add an :rst:role:`index` role, to make inline index entries.
  - #454: Add more index markup capabilities: marking see/seealso entries,
    and main entries for a given key.
  - #460: Allow limiting the depth of section numbers for HTML using the
    :rst:dir:`toctree`\'s ``numbered`` option.
  - #586: Implemented improved :rst:dir:`glossary` markup which allows
    multiple terms per definition.
  - #478: Added :rst:dir:`py:decorator` directive to describe decorators.
  - C++ domain now supports array definitions.
  - Section headings in :rst:dir:`only` directives are now correctly
    handled.

* HTML builder:

  - Added ``pyramid`` theme.
  - #559: :confval:`html_add_permalinks` is now a string giving the
    text to display in permalinks.
  - #259: HTML table rows now have even/odd CSS classes to enable
    "Zebra styling".
  - #554: Add theme option ``sidebarwidth`` to the basic theme.

* Other builders:

  - #516: Added new value of the :confval:`latex_show_urls` option to
    show the URLs in footnotes.
  - #209: Added :confval:`text_newlines` and :confval:`text_sectionchars`
    config values.
  - Added :confval:`man_show_urls` config value.
  - #472: linkcheck builder: Check links in parallel, use HTTP HEAD
    requests and allow configuring the timeout.  New config values:
    :confval:`linkcheck_timeout` and :confval:`linkcheck_workers`.
  - #521: Added :confval:`linkcheck_ignore` config value.

* Configuration and extensibility:

  - #537: Added :confval:`nitpick_ignore`.
  - #306: Added :event:`env-get-outdated` event.

* Autodoc:

  - #564: Add :confval:`autodoc_docstring_signature`.  When enabled (the
    default), autodoc retrieves the signature from the first line of the
    docstring, if it is found there.
  - #176: Provide ``private-members`` option for autodoc directives.
  - #520: Provide ``special-members`` option for autodoc directives.
  - #431: Doc comments for attributes can now be given on the same line
    as the assignment.
  - #437: autodoc now shows values of class data attributes.
  - autodoc now supports documenting the signatures of
    ``functools.partial`` objects.

* Other extensions:

  - Added the :mod:`sphinx.ext.mathjax` extension.
  - #443: Allow referencing external graphviz files.
  - Added ``inline`` option to graphviz directives, and fixed the
    default (block-style) in LaTeX output.
  - #590: Added ``caption`` option to graphviz directives.
  - #553: Added :rst:dir:`testcleanup` blocks in the doctest extension.
  - #594: :confval:`trim_doctest_flags` now also removes ``<BLANKLINE>``
    indicators.
  - #367: Added automatic exclusion of hidden members in inheritance
    diagrams, and an option to selectively enable it.
  - Added :confval:`pngmath_add_tooltips`.

* New locales:

  - #221: Added Swedish locale.
  - #526: Added Iranian locale.

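Two of the autodoc additions above lend themselves to a small sketch: the signature taken from the first docstring line (#564) and ``#:`` doc comments on the assignment line (#431). The module below is hypothetical — the names ``connect``, ``Server``, ``max_clients``, and ``retries`` are invented for illustration; only the source conventions come from the changelog:

```python
def connect(host, port=80, timeout=None):
    """connect(host, port=80, timeout=None)

    With autodoc_docstring_signature enabled (the default), autodoc reads
    the signature from this first docstring line -- handy for
    C-implemented callables whose signature cannot be introspected.
    """
    return (host, port, timeout)


class Server:
    #: Doc comment given on the line before the assignment.
    max_clients = 10

    retries = 3  #: Doc comment on the same line as the assignment (#431).


# The first docstring line is what autodoc would pick up as the signature:
signature_line = connect.__doc__.splitlines()[0]
print(signature_line)
```

Running a documented build over such a module would then show ``connect(host, port=80, timeout=None)`` as the signature and attach both doc comments to their attributes.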
Release 1.0.8 (in development)
==============================


Release 1.0.7 (Jan 15, 2011)
============================

* #347: Fix wrong generation of directives of static methods in
  autosummary.

* #599: Import PIL as ``from PIL import Image``.

* #558: Fix longtables with captions in LaTeX output.

* Make token references work as hyperlinks again in LaTeX output.

* #572: Show warnings by default when reference labels cannot be
  found.

* #536: Include line number when complaining about missing reference
  targets in nitpicky mode.

* #590: Fix inline display of graphviz diagrams in LaTeX output.

* #589: Build using app.build() in setup command.

* Fix a bug in the inheritance diagram exception that caused base
  classes to be skipped if one of them is a builtin.

* Fix general index links for C++ domain objects.

* #332: Make admonition boundaries in LaTeX output visible.

* #573: Fix KeyErrors occurring on rebuild after removing a file.

* Fix a traceback when removing files with globbed toctrees.

* If an autodoc object cannot be imported, always re-read the
  document containing the directive on next build.

* If an autodoc object cannot be imported, show the full traceback
  of the import error.

* Fix a bug where the removal of download files and images wasn't
  noticed.

* #571: Implement ``~`` cross-reference prefix for the C domain.

* Fix regression of LaTeX output with the fix of #556.

* #568: Fix lookup of class attribute documentation on descriptors
  so that comment documentation now works.

* Fix traceback with ``only`` directives preceded by targets.

* Fix tracebacks occurring for duplicate C++ domain objects.

* Fix JavaScript domain links to objects with ``$`` in their name.


Release 1.0.6 (Jan 04, 2011)
============================

* #581: Fix traceback in Python domain for empty cross-reference
  targets.

* #283: Fix literal block display issues on Chrome browsers.

* #383, #148: Support sorting a limited range of accented characters
  in the general index and the glossary.

* #570: Try decoding ``-D`` and ``-A`` command-line arguments with
  the locale's preferred encoding.

* #528: Observe :confval:`locale_dirs` when looking for the JS
  translations file.

* #574: Add special code for better support of Japanese documents
  in the LaTeX builder.

* Regression of #77: If there is only one parameter given with
  ``:param:`` markup, the bullet list is now suppressed again.

* #556: Fix missing paragraph breaks in LaTeX output in certain
  situations.

* #567: Emit the ``autodoc-process-docstring`` event even for objects
  without a docstring so that it can add content.

* #565: In the LaTeX builder, not only literal blocks require different
  table handling, but also quite a few other list-like block elements.

* #515: Fix tracebacks in the viewcode extension for Python objects
  that do not have a valid signature.

* Fix strange reporting of line numbers for warnings generated from
  autodoc-included docstrings, due to different behavior depending
  on the docutils version.

* Several fixes to the C++ domain.


Release 1.0.5 (Nov 12, 2010)
============================

* #557: Add CSS styles required by docutils 0.7 for aligned images
  and figures.

* In the Makefile generated by LaTeX output, do not delete pdf files
  on clean; they might be required images.

* #535: Fix LaTeX output generated for line blocks.

* #544: Allow ``.pyw`` as a source file extension.


Release 1.0.4 (Sep 17, 2010)
============================

* #524: Open intersphinx inventories in binary mode on Windows,
  since version 2 contains zlib-compressed data.

* #513: Allow giving non-local URIs for JavaScript files, e.g.
  in the JSMath extension.

* #512: Fix traceback when ``intersphinx_mapping`` is empty.


Release 1.0.3 (Aug 23, 2010)
============================

* #495: Fix internal vs. external link distinction for links coming
  from a docutils table of contents.

* #494: Fix the ``maxdepth`` option for the ``toctree()`` template
  callable when used with ``collapse=True``.

* #507: Fix crash when parsing Python argument lists containing brackets
  in string literals.

* #501: Fix regression when building LaTeX docs with figures that
  don't have captions.

* #510: Fix inheritance diagrams for classes that are not picklable.

* #497: Introduce separate background color for the sidebar collapse
  button, making it easier to see.

* #502, #503, #496: Fix small layout bugs in several builtin themes.


Release 1.0.2 (Aug 14, 2010)
============================

* #490: Fix cross-references to objects of types added by the
  :func:`~.Sphinx.add_object_type` API function.

* Fix handling of doc field types for different directive types.

* Allow breaking long signatures, continuing with backslash-escaped
  newlines.

* Fix unwanted styling of C domain references (because of a namespace
  clash with Pygments styles).

* Allow references to PEPs and RFCs with explicit anchors.

* #471: Fix LaTeX references to figures.

* #482: When doing a non-exact search, match only the given type
  of object.

* #481: Apply non-exact search for Python reference targets with
  ``.name`` for modules too.

* #484: Fix crash when duplicating a parameter in an info field list.

* #487: Fix setting the default role to one provided by the
  ``oldcmarkup`` extension.

* #488: Fix crash when json-py is installed, which provides a
  ``json`` module but is incompatible with simplejson.

* #480: Fix handling of target naming in intersphinx.

* #486: Fix removal of ``!`` for all cross-reference roles.


Release 1.0.1 (Jul 27, 2010)
============================

* #470: Fix generated target names for reST domain objects; they
  are not in the same namespace.

* #266: Add Bengali language.

* #473: Fix a bug in parsing JavaScript object names.

* #474: Fix building with SingleHTMLBuilder when there is no toctree.

* Fix display names for objects linked to by intersphinx with
  explicit targets.

* Fix building with the JSON builder.

* Fix hyperrefs in object descriptions for LaTeX.

@@ -170,15 +466,12 @@ Features added

  - Added Danish translation, thanks to Hjorth Larsen.
  - Added Lithuanian translation, thanks to Dalius Dobravolskas.

* Bugs fixed:

Release 0.6.8 (in development)
==============================

* #445: Fix links to result pages when using the search function
  of HTML built with the ``dirhtml`` builder.

* #444: In templates, properly re-escape values treated with the
  "striptags" Jinja filter.
  - #445: Fix links to result pages when using the search function
    of HTML built with the ``dirhtml`` builder.
  - #444: In templates, properly re-escape values treated with the
    "striptags" Jinja filter.


Release 0.6.7 (Jun 05, 2010)

53 EXAMPLES
@@ -12,29 +12,34 @@ interesting examples.

Documentation using the default theme
-------------------------------------

* APSW: http://apsw.googlecode.com/svn/publish/index.html
* APSW: http://apidoc.apsw.googlecode.com/hg/index.html
* ASE: https://wiki.fysik.dtu.dk/ase/
* boostmpi: http://documen.tician.de/boostmpi/
* Calibre: http://calibre.kovidgoyal.net/user_manual/
* Calibre: http://calibre-ebook.com/user_manual/
* CodePy: http://documen.tician.de/codepy/
* Cython: http://docs.cython.org/
* C\\C++ Python language binding project: http://language-binding.net/index.html
* Director: http://packages.python.org/director/
* F2py: http://www.f2py.org/html/
* Dirigible: http://www.projectdirigible.com/documentation/
* F2py: http://f2py.sourceforge.net/docs/
* GeoDjango: http://geodjango.org/docs/
* gevent: http://www.gevent.org/
* Google Wave API: http://wave-robot-python-client.googlecode.com/svn/trunk/pydocs/index.html
* GSL Shell: http://www.nongnu.org/gsl-shell/
* Heapkeeper: http://heapkeeper.org/
* Hands-on Python Tutorial: http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/
* Hedge: http://documen.tician.de/hedge/
* Kaa: http://doc.freevo.org/api/kaa/
* Leo: http://webpages.charter.net/edreamleo/front.html
* Lino: http://lino.saffre-rumma.net/
* MeshPy: http://documen.tician.de/meshpy/
* mpmath: http://mpmath.googlecode.com/svn/trunk/doc/build/index.html
* OpenEXR: http://excamera.com/articles/26/doc/index.html
* OpenGDA: http://www.opengda.org/gdadoc/html/
* openWNS: http://docs.openwns.org/
* Paste: http://pythonpaste.org/script/
* Paver: http://www.blueskyonmars.com/projects/paver/
* Pyccuracy: http://www.pyccuracy.org/
* Paver: http://paver.github.com/paver/
* Pyccuracy: https://github.com/heynemann/pyccuracy/wiki/
* PyCuda: http://documen.tician.de/pycuda/
* Pyevolve: http://pyevolve.sourceforge.net/
* Pylo: http://documen.tician.de/pylo/

@@ -66,16 +71,16 @@ Documentation using a customized version of the default theme

* IFM: http://fluffybunny.memebot.com/ifm-docs/index.html
* LEPL: http://www.acooke.org/lepl/
* Mayavi: http://code.enthought.com/projects/mayavi/docs/development/html/mayavi
* NOC: http://trac.nocproject.org/trac/wiki/NocGuide
* NOC: http://redmine.nocproject.org/projects/noc
* NumPy: http://docs.scipy.org/doc/numpy/reference/
* Peach^3: http://peach3.nl/doc/latest/userdoc/
* Py on Windows: http://timgolden.me.uk/python-on-windows/
* PyLit: http://pylit.berlios.de/
* Sage: http://sagemath.org/doc/
* SciPy: http://docs.scipy.org/doc/scipy/reference/
* simuPOP: http://simupop.sourceforge.net/manual_release/build/userGuide.html
* Sprox: http://sprox.org/
* TurboGears: http://turbogears.org/2.0/docs/
* Zentyal: http://doc.zentyal.org/
* Zope: http://docs.zope.org/zope2/index.html
* zc.async: http://packages.python.org/zc.async/1.5.0/

@@ -83,20 +88,22 @@ Documentation using a customized version of the default theme

Documentation using the sphinxdoc theme
---------------------------------------

* Fityk: http://www.unipress.waw.pl/fityk/
* Fityk: http://fityk.nieto.pl/
* MapServer: http://mapserver.org/
* Matplotlib: http://matplotlib.sourceforge.net/
* Music21: http://mit.edu/music21/doc/html/contents.html
* MyHDL: http://www.myhdl.org/doc/0.6/
* NetworkX: http://networkx.lanl.gov/
* Pweave: http://mpastell.com/pweave/
* Pyre: http://docs.danse.us/pyre/sphinx/
* Pysparse: http://pysparse.sourceforge.net/
* PyTango:
  http://www.tango-controls.org/static/PyTango/latest/doc/html/index.html
* Reteisi: http://docs.argolinux.org/reteisi/
* Satchmo: http://www.satchmoproject.com/docs/svn/
* Reteisi: http://www.reteisi.org/contents.html
* Satchmo: http://www.satchmoproject.com/docs/dev/
* Sphinx: http://sphinx.pocoo.org/
* Sqlkit: http://sqlkit.argolinux.org/
* Tau: http://www.tango-controls.org/static/tau/latest/doc/html/index.html
* Total Open Station: http://tops.berlios.de/
* WebFaction: http://docs.webfaction.com/

@@ -104,12 +111,13 @@ Documentation using the sphinxdoc theme

Documentation using another builtin theme
-----------------------------------------

* C/C++ Development with Eclipse: http://book.dehlia.in/c-cpp-eclipse/ (agogo)
* C/C++ Development with Eclipse: http://eclipsebook.in/ (agogo)
* Distribute: http://packages.python.org/distribute/ (nature)
* Jinja: http://jinja.pocoo.org/2/documentation/ (scrolls)
* Jinja: http://jinja.pocoo.org/ (scrolls)
* pip: http://pip.openplans.org/ (nature)
* Programmieren mit PyGTK und Glade (German):
  http://www.florian-diesch.de/doc/python-und-glade/online/ (agogo)
* pypol: http://pypol.altervista.org/ (nature)
* Spring Python: http://springpython.webfactional.com/current/sphinx/index.html
  (nature)
* sqlparse: http://python-sqlparse.googlecode.com/svn/docs/api/index.html

@@ -134,16 +142,18 @@ Documentation using a custom theme/integrated in a site

* Open ERP: http://doc.openerp.com/
* OpenLayers: http://docs.openlayers.org/
* PyEphem: http://rhodesmill.org/pyephem/
* German Plone 4.0 user manual: http://www.hasecke.com/plone-benutzerhandbuch/4.0/
* Pylons: http://pylonshq.com/docs/en/0.9.7/
* PyMOTW: http://www.doughellmann.com/PyMOTW/
* qooxdoo: http://manual.qooxdoo.org/current
* Roundup: http://www.roundup-tracker.org/
* Selenium: http://seleniumhq.org/docs/
* Self: http://selflanguage.org/
* Tablib: http://tablib.org/
* SQLAlchemy: http://www.sqlalchemy.org/docs/
* tinyTiM: http://tinytim.sourceforge.net/docs/2.0/
* tipfy: http://www.tipfy.org/docs/
* Werkzeug: http://werkzeug.pocoo.org/documentation/dev/
* Werkzeug: http://werkzeug.pocoo.org/docs/
* WFront: http://discorporate.us/projects/WFront/

@@ -152,7 +162,22 @@ Homepages and other non-documentation sites

* Applied Mathematics at the Stellenbosch University: http://dip.sun.ac.za/
* A personal page: http://www.dehlia.in/
* Benoit Boissinot: http://perso.ens-lyon.fr/benoit.boissinot/
* Benoit Boissinot: http://bboissin.appspot.com/
* lunarsite: http://lunaryorn.de/
* The Wine Cellar Book: http://www.thewinecellarbook.com/doc/en/
* VOR: http://www.vor-cycling.be/


Books produced using Sphinx
---------------------------

* "The ``repoze.bfg`` Web Application Framework":
  http://www.amazon.com/repoze-bfg-Web-Application-Framework-Version/dp/0615345379
* A Theoretical Physics Reference book: http://theoretical-physics.net/
* "Simple and Steady Way of Learning for Software Engineering" (in Japanese):
  http://www.amazon.co.jp/dp/477414259X/
* "Expert Python Programming" (Japanese translation):
  http://www.amazon.co.jp/dp/4048686291/
* "Pomodoro Technique Illustrated" (Japanese translation):
  http://www.amazon.co.jp/dp/4048689525/

2 LICENSE
@@ -1,7 +1,7 @@
License for Sphinx
==================

Copyright (c) 2007-2010 by the Sphinx team (see AUTHORS file).
Copyright (c) 2007-2011 by the Sphinx team (see AUTHORS file).
All rights reserved.

Redistribution and use in source and binary forms, with or without

@@ -7,7 +7,7 @@ include TODO

include babel.cfg
include Makefile
include ez_setup.py
include distribute_setup.py
include sphinx-autogen.py
include sphinx-build.py
include sphinx-quickstart.py

54 Makefile
@@ -1,36 +1,64 @@
PYTHON ?= python

export PYTHONPATH = $(shell echo "$$PYTHONPATH"):./sphinx
.PHONY: all check clean clean-pyc clean-patchfiles clean-backupfiles \
        clean-generated pylint reindent test covertest build convert-utils

.PHONY: all check clean clean-pyc clean-patchfiles pylint reindent test
DONT_CHECK = -i build -i dist -i sphinx/style/jquery.js \
             -i sphinx/pycode/pgen2 -i sphinx/util/smartypants.py \
             -i .ropeproject -i doc/_build -i tests/path.py \
             -i tests/coverage.py -i env -i utils/convert.py \
             -i sphinx/search/ja.py \
             -i utils/reindent3.py -i utils/check_sources3.py -i .tox

all: clean-pyc check test
all: clean-pyc clean-backupfiles check test

ifeq ($(PYTHON), python3)
check: convert-utils
        @$(PYTHON) utils/check_sources3.py $(DONT_CHECK) .
else
check:
        @$(PYTHON) utils/check_sources.py -i build -i dist -i sphinx/style/jquery.js \
                -i sphinx/pycode/pgen2 -i sphinx/util/smartypants.py -i .ropeproject \
                -i doc/_build -i ez_setup.py -i tests/path.py -i tests/coverage.py \
                -i env -i .tox .
        @$(PYTHON) utils/check_sources.py $(DONT_CHECK) .
endif

clean: clean-pyc clean-patchfiles
clean: clean-pyc clean-patchfiles clean-backupfiles clean-generated

clean-pyc:
        find . -name '*.pyc' -exec rm -f {} +
        find . -name '*.pyo' -exec rm -f {} +
        find . -name '*~' -exec rm -f {} +

clean-patchfiles:
        find . -name '*.orig' -exec rm -f {} +
        find . -name '*.rej' -exec rm -f {} +

clean-backupfiles:
        find . -name '*~' -exec rm -f {} +
        find . -name '*.bak' -exec rm -f {} +

clean-generated:
        rm -f utils/*3.py*

pylint:
        @pylint --rcfile utils/pylintrc sphinx

ifeq ($(PYTHON), python3)
reindent: convert-utils
        @$(PYTHON) utils/reindent3.py -r -n .
else
reindent:
        @$(PYTHON) utils/reindent.py -r -B .
        @$(PYTHON) utils/reindent.py -r -n .
endif

test:
test: build
        @cd tests; $(PYTHON) run.py -d -m '^[tT]est' $(TEST)

covertest:
        @cd tests; $(PYTHON) run.py -d -m '^[tT]est' --with-coverage --cover-package=sphinx $(TEST)
covertest: build
        @cd tests; $(PYTHON) run.py -d -m '^[tT]est' --with-coverage \
                --cover-package=sphinx $(TEST)

build:
        @$(PYTHON) setup.py build

ifeq ($(PYTHON), python3)
convert-utils:
        @python3 utils/convert.py -i utils/convert.py utils/
endif

12 README
@@ -26,6 +26,18 @@ Then, direct your browser to ``_build/html/index.html``.
Or read them online at <http://sphinx.pocoo.org/>.


Testing
=======

To run the tests with the interpreter available as ``python``, use::

    make test

If you want to use a different interpreter, e.g. ``python3``, use::

    PYTHON=python3 make test


Contributing
============

0 custom_fixers/__init__.py Normal file
12 custom_fixers/fix_alt_unicode.py Normal file
@@ -0,0 +1,12 @@
from lib2to3.fixer_base import BaseFix
from lib2to3.fixer_util import Name


class FixAltUnicode(BaseFix):
    PATTERN = """
    func=funcdef< 'def' name='__unicode__'
                  parameters< '(' NAME ')' > any+ >
    """

    def transform(self, node, results):
        name = results['name']
        name.replace(Name('__str__', prefix=name.prefix))
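The fixer above renames ``__unicode__`` methods to ``__str__`` when the sources are converted with 2to3. As a rough sketch of its effect — a plain textual rename standing in for the lib2to3 parse-tree transformation, with an invented ``ProgressBar`` class for illustration:

```python
# Python 2 source as it exists before conversion (kept as a string here,
# since ``__unicode__`` has no special meaning on Python 3):
before = """\
class ProgressBar(object):
    def __unicode__(self):
        return u'building [%s]' % self.name
"""

# FixAltUnicode matches ``def __unicode__(...)`` in the parse tree and
# replaces only the method name, preserving surrounding whitespace.
# A plain string replacement models that outcome:
after = before.replace("__unicode__", "__str__")
print(after)
```

The real fixer operates on the parse tree rather than on text, so it only touches actual ``def __unicode__`` definitions, not occurrences of the name in strings or comments.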
485 distribute_setup.py Normal file
@@ -0,0 +1,485 @@
#!python
"""Bootstrap distribute installation

If you want to use setuptools in your package's setup.py, just include this
file in the same directory with it, and add this to the top of your setup.py::

    from distribute_setup import use_setuptools
    use_setuptools()

If you want to require a specific version of setuptools, set a download
mirror, or use an alternate download directory, you can do so by supplying
the appropriate options to ``use_setuptools()``.

This file can also be run as a script to install or upgrade setuptools.
"""
import os
import sys
import time
import fnmatch
import tempfile
import tarfile
from distutils import log

try:
    from site import USER_SITE
except ImportError:
    USER_SITE = None

try:
    import subprocess

    def _python_cmd(*args):
        args = (sys.executable,) + args
        return subprocess.call(args) == 0

except ImportError:
    # will be used for python 2.3
    def _python_cmd(*args):
        args = (sys.executable,) + args
        # quoting arguments if windows
        if sys.platform == 'win32':
            def quote(arg):
                if ' ' in arg:
                    return '"%s"' % arg
                return arg
            args = [quote(arg) for arg in args]
        return os.spawnl(os.P_WAIT, sys.executable, *args) == 0

DEFAULT_VERSION = "0.6.13"
DEFAULT_URL = "http://pypi.python.org/packages/source/d/distribute/"
SETUPTOOLS_FAKED_VERSION = "0.6c11"

SETUPTOOLS_PKG_INFO = """\
Metadata-Version: 1.0
Name: setuptools
Version: %s
Summary: xxxx
Home-page: xxx
Author: xxx
Author-email: xxx
License: xxx
Description: xxx
""" % SETUPTOOLS_FAKED_VERSION


def _install(tarball):
    # extracting the tarball
    tmpdir = tempfile.mkdtemp()
    log.warn('Extracting in %s', tmpdir)
    old_wd = os.getcwd()
    try:
        os.chdir(tmpdir)
        tar = tarfile.open(tarball)
        _extractall(tar)
        tar.close()

        # going in the directory
        subdir = os.path.join(tmpdir, os.listdir(tmpdir)[0])
        os.chdir(subdir)
        log.warn('Now working in %s', subdir)

        # installing
        log.warn('Installing Distribute')
        if not _python_cmd('setup.py', 'install'):
            log.warn('Something went wrong during the installation.')
            log.warn('See the error message above.')
    finally:
        os.chdir(old_wd)


def _build_egg(egg, tarball, to_dir):
    # extracting the tarball
    tmpdir = tempfile.mkdtemp()
    log.warn('Extracting in %s', tmpdir)
    old_wd = os.getcwd()
    try:
        os.chdir(tmpdir)
        tar = tarfile.open(tarball)
        _extractall(tar)
        tar.close()

        # going in the directory
        subdir = os.path.join(tmpdir, os.listdir(tmpdir)[0])
        os.chdir(subdir)
        log.warn('Now working in %s', subdir)

        # building an egg
        log.warn('Building a Distribute egg in %s', to_dir)
        _python_cmd('setup.py', '-q', 'bdist_egg', '--dist-dir', to_dir)

    finally:
        os.chdir(old_wd)
    # returning the result
    log.warn(egg)
    if not os.path.exists(egg):
        raise IOError('Could not build the egg.')


def _do_download(version, download_base, to_dir, download_delay):
    egg = os.path.join(to_dir, 'distribute-%s-py%d.%d.egg'
                       % (version, sys.version_info[0], sys.version_info[1]))
    if not os.path.exists(egg):
        tarball = download_setuptools(version, download_base,
                                      to_dir, download_delay)
        _build_egg(egg, tarball, to_dir)
    sys.path.insert(0, egg)
    import setuptools
    setuptools.bootstrap_install_from = egg


def use_setuptools(version=DEFAULT_VERSION, download_base=DEFAULT_URL,
                   to_dir=os.curdir, download_delay=15, no_fake=True):
    # making sure we use the absolute path
    to_dir = os.path.abspath(to_dir)
    was_imported = 'pkg_resources' in sys.modules or \
        'setuptools' in sys.modules
    try:
        try:
            import pkg_resources
            if not hasattr(pkg_resources, '_distribute'):
                if not no_fake:
                    _fake_setuptools()
                raise ImportError
        except ImportError:
            return _do_download(version, download_base, to_dir, download_delay)
        try:
            pkg_resources.require("distribute>=" + version)
            return
        except pkg_resources.VersionConflict:
            e = sys.exc_info()[1]
            if was_imported:
                sys.stderr.write(
                    "The required version of distribute (>=%s) is not available,\n"
                    "and can't be installed while this script is running. Please\n"
                    "install a more recent version first, using\n"
                    "'easy_install -U distribute'."
                    "\n\n(Currently using %r)\n" % (version, e.args[0]))
                sys.exit(2)
            else:
                del pkg_resources, sys.modules['pkg_resources']  # reload ok
                return _do_download(version, download_base, to_dir,
                                    download_delay)
        except pkg_resources.DistributionNotFound:
            return _do_download(version, download_base, to_dir,
                                download_delay)
    finally:
        if not no_fake:
            _create_fake_setuptools_pkg_info(to_dir)


def download_setuptools(version=DEFAULT_VERSION, download_base=DEFAULT_URL,
                        to_dir=os.curdir, delay=15):
    """Download distribute from a specified location and return its filename

    `version` should be a valid distribute version number that is available
    as an egg for download under the `download_base` URL (which should end
    with a '/'). `to_dir` is the directory where the egg will be downloaded.
    `delay` is the number of seconds to pause before an actual download
    attempt.
    """
    # making sure we use the absolute path
    to_dir = os.path.abspath(to_dir)
    try:
        from urllib.request import urlopen
    except ImportError:
        from urllib2 import urlopen
    tgz_name = "distribute-%s.tar.gz" % version
    url = download_base + tgz_name
    saveto = os.path.join(to_dir, tgz_name)
    src = dst = None
    if not os.path.exists(saveto):  # Avoid repeated downloads
        try:
            log.warn("Downloading %s", url)
            src = urlopen(url)
            # Read/write all in one block, so we don't create a corrupt file
            # if the download is interrupted.
            data = src.read()
            dst = open(saveto, "wb")
            dst.write(data)
        finally:
            if src:
                src.close()
            if dst:
                dst.close()
    return os.path.realpath(saveto)


def _no_sandbox(function):
    def __no_sandbox(*args, **kw):
        try:
            from setuptools.sandbox import DirectorySandbox
            if not hasattr(DirectorySandbox, '_old'):
                def violation(*args):
                    pass
                DirectorySandbox._old = DirectorySandbox._violation
                DirectorySandbox._violation = violation
                patched = True
            else:
                patched = False
|
||||
except ImportError:
|
||||
patched = False
|
||||
|
||||
try:
|
||||
return function(*args, **kw)
|
||||
finally:
|
||||
if patched:
|
||||
DirectorySandbox._violation = DirectorySandbox._old
|
||||
del DirectorySandbox._old
|
||||
|
||||
return __no_sandbox
|
||||
|
||||
def _patch_file(path, content):
|
||||
"""Will backup the file then patch it"""
|
||||
existing_content = open(path).read()
|
||||
if existing_content == content:
|
||||
# already patched
|
||||
log.warn('Already patched.')
|
||||
return False
|
||||
log.warn('Patching...')
|
||||
_rename_path(path)
|
||||
f = open(path, 'w')
|
||||
try:
|
||||
f.write(content)
|
||||
finally:
|
||||
f.close()
|
||||
return True
|
||||
|
||||
_patch_file = _no_sandbox(_patch_file)
|
||||
|
||||
def _same_content(path, content):
|
||||
return open(path).read() == content
|
||||
|
||||
def _rename_path(path):
|
||||
new_name = path + '.OLD.%s' % time.time()
|
||||
log.warn('Renaming %s into %s', path, new_name)
|
||||
os.rename(path, new_name)
|
||||
return new_name
|
||||
|
||||
def _remove_flat_installation(placeholder):
|
||||
if not os.path.isdir(placeholder):
|
||||
log.warn('Unkown installation at %s', placeholder)
|
||||
return False
|
||||
found = False
|
||||
for file in os.listdir(placeholder):
|
||||
if fnmatch.fnmatch(file, 'setuptools*.egg-info'):
|
||||
found = True
|
||||
break
|
||||
if not found:
|
||||
log.warn('Could not locate setuptools*.egg-info')
|
||||
return
|
||||
|
||||
log.warn('Removing elements out of the way...')
|
||||
pkg_info = os.path.join(placeholder, file)
|
||||
if os.path.isdir(pkg_info):
|
||||
patched = _patch_egg_dir(pkg_info)
|
||||
else:
|
||||
patched = _patch_file(pkg_info, SETUPTOOLS_PKG_INFO)
|
||||
|
||||
if not patched:
|
||||
log.warn('%s already patched.', pkg_info)
|
||||
return False
|
||||
# now let's move the files out of the way
|
||||
for element in ('setuptools', 'pkg_resources.py', 'site.py'):
|
||||
element = os.path.join(placeholder, element)
|
||||
if os.path.exists(element):
|
||||
_rename_path(element)
|
||||
else:
|
||||
log.warn('Could not find the %s element of the '
|
||||
'Setuptools distribution', element)
|
||||
return True
|
||||
|
||||
_remove_flat_installation = _no_sandbox(_remove_flat_installation)
|
||||
|
||||
def _after_install(dist):
|
||||
log.warn('After install bootstrap.')
|
||||
placeholder = dist.get_command_obj('install').install_purelib
|
||||
_create_fake_setuptools_pkg_info(placeholder)
|
||||
|
||||
def _create_fake_setuptools_pkg_info(placeholder):
|
||||
if not placeholder or not os.path.exists(placeholder):
|
||||
log.warn('Could not find the install location')
|
||||
return
|
||||
pyver = '%s.%s' % (sys.version_info[0], sys.version_info[1])
|
||||
setuptools_file = 'setuptools-%s-py%s.egg-info' % \
|
||||
(SETUPTOOLS_FAKED_VERSION, pyver)
|
||||
pkg_info = os.path.join(placeholder, setuptools_file)
|
||||
if os.path.exists(pkg_info):
|
||||
log.warn('%s already exists', pkg_info)
|
||||
return
|
||||
|
||||
log.warn('Creating %s', pkg_info)
|
||||
f = open(pkg_info, 'w')
|
||||
try:
|
||||
f.write(SETUPTOOLS_PKG_INFO)
|
||||
finally:
|
||||
f.close()
|
||||
|
||||
pth_file = os.path.join(placeholder, 'setuptools.pth')
|
||||
log.warn('Creating %s', pth_file)
|
||||
f = open(pth_file, 'w')
|
||||
try:
|
||||
f.write(os.path.join(os.curdir, setuptools_file))
|
||||
finally:
|
||||
f.close()
|
||||
|
||||
_create_fake_setuptools_pkg_info = _no_sandbox(_create_fake_setuptools_pkg_info)
|
||||
|
||||
def _patch_egg_dir(path):
|
||||
# let's check if it's already patched
|
||||
pkg_info = os.path.join(path, 'EGG-INFO', 'PKG-INFO')
|
||||
if os.path.exists(pkg_info):
|
||||
if _same_content(pkg_info, SETUPTOOLS_PKG_INFO):
|
||||
log.warn('%s already patched.', pkg_info)
|
||||
return False
|
||||
_rename_path(path)
|
||||
os.mkdir(path)
|
||||
os.mkdir(os.path.join(path, 'EGG-INFO'))
|
||||
pkg_info = os.path.join(path, 'EGG-INFO', 'PKG-INFO')
|
||||
f = open(pkg_info, 'w')
|
||||
try:
|
||||
f.write(SETUPTOOLS_PKG_INFO)
|
||||
finally:
|
||||
f.close()
|
||||
return True
|
||||
|
||||
_patch_egg_dir = _no_sandbox(_patch_egg_dir)
|
||||
|
||||
def _before_install():
|
||||
log.warn('Before install bootstrap.')
|
||||
_fake_setuptools()
|
||||
|
||||
|
||||
def _under_prefix(location):
|
||||
if 'install' not in sys.argv:
|
||||
return True
|
||||
args = sys.argv[sys.argv.index('install')+1:]
|
||||
for index, arg in enumerate(args):
|
||||
for option in ('--root', '--prefix'):
|
||||
if arg.startswith('%s=' % option):
|
||||
top_dir = arg.split('root=')[-1]
|
||||
return location.startswith(top_dir)
|
||||
elif arg == option:
|
||||
if len(args) > index:
|
||||
top_dir = args[index+1]
|
||||
return location.startswith(top_dir)
|
||||
if arg == '--user' and USER_SITE is not None:
|
||||
return location.startswith(USER_SITE)
|
||||
return True
|
||||
|
||||
|
||||
def _fake_setuptools():
|
||||
log.warn('Scanning installed packages')
|
||||
try:
|
||||
import pkg_resources
|
||||
except ImportError:
|
||||
# we're cool
|
||||
log.warn('Setuptools or Distribute does not seem to be installed.')
|
||||
return
|
||||
ws = pkg_resources.working_set
|
||||
try:
|
||||
setuptools_dist = ws.find(pkg_resources.Requirement.parse('setuptools',
|
||||
replacement=False))
|
||||
except TypeError:
|
||||
# old distribute API
|
||||
setuptools_dist = ws.find(pkg_resources.Requirement.parse('setuptools'))
|
||||
|
||||
if setuptools_dist is None:
|
||||
log.warn('No setuptools distribution found')
|
||||
return
|
||||
# detecting if it was already faked
|
||||
setuptools_location = setuptools_dist.location
|
||||
log.warn('Setuptools installation detected at %s', setuptools_location)
|
||||
|
||||
# if --root or --preix was provided, and if
|
||||
# setuptools is not located in them, we don't patch it
|
||||
if not _under_prefix(setuptools_location):
|
||||
log.warn('Not patching, --root or --prefix is installing Distribute'
|
||||
' in another location')
|
||||
return
|
||||
|
||||
# let's see if its an egg
|
||||
if not setuptools_location.endswith('.egg'):
|
||||
log.warn('Non-egg installation')
|
||||
res = _remove_flat_installation(setuptools_location)
|
||||
if not res:
|
||||
return
|
||||
else:
|
||||
log.warn('Egg installation')
|
||||
pkg_info = os.path.join(setuptools_location, 'EGG-INFO', 'PKG-INFO')
|
||||
if (os.path.exists(pkg_info) and
|
||||
_same_content(pkg_info, SETUPTOOLS_PKG_INFO)):
|
||||
log.warn('Already patched.')
|
||||
return
|
||||
log.warn('Patching...')
|
||||
# let's create a fake egg replacing setuptools one
|
||||
res = _patch_egg_dir(setuptools_location)
|
||||
if not res:
|
||||
return
|
||||
log.warn('Patched done.')
|
||||
_relaunch()
|
||||
|
||||
|
||||
def _relaunch():
|
||||
log.warn('Relaunching...')
|
||||
# we have to relaunch the process
|
||||
# pip marker to avoid a relaunch bug
|
||||
if sys.argv[:3] == ['-c', 'install', '--single-version-externally-managed']:
|
||||
sys.argv[0] = 'setup.py'
|
||||
args = [sys.executable] + sys.argv
|
||||
sys.exit(subprocess.call(args))
|
||||
|
||||
|
||||
def _extractall(self, path=".", members=None):
|
||||
"""Extract all members from the archive to the current working
|
||||
directory and set owner, modification time and permissions on
|
||||
directories afterwards. `path' specifies a different directory
|
||||
to extract to. `members' is optional and must be a subset of the
|
||||
list returned by getmembers().
|
||||
"""
|
||||
import copy
|
||||
import operator
|
||||
from tarfile import ExtractError
|
||||
directories = []
|
||||
|
||||
if members is None:
|
||||
members = self
|
||||
|
||||
for tarinfo in members:
|
||||
if tarinfo.isdir():
|
||||
# Extract directories with a safe mode.
|
||||
directories.append(tarinfo)
|
||||
tarinfo = copy.copy(tarinfo)
|
||||
tarinfo.mode = 448 # decimal for oct 0700
|
||||
self.extract(tarinfo, path)
|
||||
|
||||
# Reverse sort directories.
|
||||
if sys.version_info < (2, 4):
|
||||
def sorter(dir1, dir2):
|
||||
return cmp(dir1.name, dir2.name)
|
||||
directories.sort(sorter)
|
||||
directories.reverse()
|
||||
else:
|
||||
directories.sort(key=operator.attrgetter('name'), reverse=True)
|
||||
|
||||
# Set correct owner, mtime and filemode on directories.
|
||||
for tarinfo in directories:
|
||||
dirpath = os.path.join(path, tarinfo.name)
|
||||
try:
|
||||
self.chown(tarinfo, dirpath)
|
||||
self.utime(tarinfo, dirpath)
|
||||
self.chmod(tarinfo, dirpath)
|
||||
except ExtractError:
|
||||
e = sys.exc_info()[1]
|
||||
if self.errorlevel > 1:
|
||||
raise
|
||||
else:
|
||||
self._dbg(1, "tarfile: %s" % e)
|
||||
|
||||
|
||||
def main(argv, version=DEFAULT_VERSION):
|
||||
"""Install or upgrade setuptools and EasyInstall"""
|
||||
tarball = download_setuptools()
|
||||
_install(tarball)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main(sys.argv[1:])
|
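The bootstrap caches the built egg under a version- and interpreter-specific name. The sketch below (hypothetical helper, not part of the script) reproduces the naming scheme used by `_do_download`, so you can predict which file the bootstrap will look for:

```python
import os
import sys

def expected_egg_path(version, to_dir):
    # Mirrors the naming used in _do_download: the cached egg encodes both
    # the distribute version and the running Python's major.minor version.
    return os.path.join(to_dir, 'distribute-%s-py%d.%d.egg'
                        % (version, sys.version_info[0], sys.version_info[1]))

print(expected_egg_path('0.6.24', '/tmp'))
```

On Python 2.6, for example, this would name a file like `distribute-0.6.24-py2.6.egg` inside `/tmp`.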
doc/Makefile | 26

@@ -8,8 +8,9 @@ PAPER =
 
 PAPEROPT_a4     = -D latex_paper_size=a4
 PAPEROPT_letter = -D latex_paper_size=letter
-ALLSPHINXOPTS   = -d _build/doctrees $(PAPEROPT_$(PAPER)) \
-                  $(SPHINXOPTS) $(O) .
+ALLSPHINXOPTS   = -d _build/doctrees $(PAPEROPT_$(PAPER)) \
+                  $(SPHINXOPTS) $(O) .
+I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) $(O) .
 
 .PHONY: help clean html dirhtml singlehtml text man pickle json htmlhelp \
         qthelp devhelp epub latex latexpdf changes linkcheck doctest
@@ -29,6 +30,9 @@ help:
 	@echo "  epub       to make an epub file"
 	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
 	@echo "  latexpdf   to make LaTeX files and run pdflatex"
+	@echo "  texinfo    to make Texinfo files"
+	@echo "  info       to make Texinfo files and run them through makeinfo"
+	@echo "  gettext    to make PO message catalogs"
 	@echo "  changes    to make an overview over all changed/added/deprecated items"
 	@echo "  linkcheck  to check all external links for integrity"
@@ -112,6 +116,11 @@ latexpdf:
 	make -C _build/latex all-pdf
 	@echo "pdflatex finished; the PDF files are in _build/latex."
 
+gettext:
+	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) _build/locale
+	@echo
+	@echo "Build finished. The message catalogs are in _build/locale."
+
 changes:
 	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) _build/changes
 	@echo
@@ -125,3 +134,16 @@ linkcheck:
 
 doctest:
 	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) _build/doctest
+
+texinfo:
+	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) _build/texinfo
+	@echo
+	@echo "Build finished. The Texinfo files are in _build/texinfo."
+	@echo "Run \`make' in that directory to run these through makeinfo" \
+	      "(use \`make info' here to do that automatically)."
+
+info:
+	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) _build/texinfo
+	@echo "Running Texinfo files through makeinfo..."
+	make -C _build/texinfo info
+	@echo "makeinfo finished; the Info files are in _build/texinfo."
doc/_static/pocoo.png | BIN (new file, 4.1 KiB; binary content not shown)
doc/_templates/index.html | 2

@@ -12,7 +12,7 @@
 <p>
   Sphinx is a tool that makes it easy to create intelligent and beautiful
   documentation, written by Georg Brandl and licensed under the BSD license.</p>
-<p>It was originally created for <a href="http://docs.python.org/dev/">the
+<p>It was originally created for <a href="http://docs.python.org/">the
 new Python documentation</a>, and it has excellent facilities for the
 documentation of Python projects, but C/C++ is already supported as well,
 and it is planned to add special support for other languages as well. Of
doc/_templates/indexsidebar.html | 5

@@ -1,3 +1,6 @@
+<p class="logo"><a href="http://pocoo.org/">
+  <img src="{{ pathto("_static/pocoo.png", 1) }}" /></a></p>
+
 <h3>Download</h3>
 {% if version.endswith('(hg)') %}
 <p>This documentation is for version <b>{{ version }}</b>, which is
@@ -23,6 +26,6 @@ are also available.</p>
   <input type="text" name="email" value="your@email"/>
   <input type="submit" name="sub" value="Subscribe" />
 </form>
-<p>or come to the <tt>#python-docs</tt> channel on FreeNode.</p>
+<p>or come to the <tt>#pocoo</tt> channel on FreeNode.</p>
 <p>You can also open an issue at the
 <a href="http://www.bitbucket.org/birkenfeld/sphinx/issues/">tracker</a>.</p>
@ -144,6 +144,26 @@ Note that a direct PDF builder using ReportLab is available in `rst2pdf
|
||||
|
||||
.. versionadded:: 1.0
|
||||
|
||||
|
||||
.. module:: sphinx.builders.texinfo
|
||||
.. class:: TexinfoBuilder
|
||||
|
||||
This builder produces Texinfo files that can be processed into Info files by
|
||||
the :program:`makeinfo` program. You have to specify which documents are to
|
||||
be included in which Texinfo files via the :confval:`texinfo_documents`
|
||||
configuration value.
|
||||
|
||||
The Info format is the basis of the on-line help system used by GNU Emacs and
|
||||
the terminal-based program :program:`info`. See :ref:`texinfo-faq` for more
|
||||
details. The Texinfo format is the official documentation system used by the
|
||||
GNU project. More information on Texinfo can be found at
|
||||
`<http://www.gnu.org/software/texinfo/>`_.
|
||||
|
||||
Its name is ``texinfo``.
|
||||
|
||||
.. versionadded:: 1.1
|
||||
|
||||
|
||||
.. currentmodule:: sphinx.builders.html
|
||||
.. class:: SerializingHTMLBuilder
|
||||
|
||||
@ -220,6 +240,18 @@ Note that a direct PDF builder using ReportLab is available in `rst2pdf
|
||||
|
||||
.. versionadded:: 0.5
|
||||
|
||||
.. module:: sphinx.builders.gettext
|
||||
.. class:: MessageCatalogBuilder
|
||||
|
||||
This builder produces gettext-style message catalogs. Each top-level file or
|
||||
subdirectory grows a single ``.pot`` catalog template.
|
||||
|
||||
See the documentation on :ref:`intl` for further reference.
|
||||
|
||||
Its name is ``gettext``.
|
||||
|
||||
.. versionadded:: 1.1
|
||||
|
||||
.. module:: sphinx.builders.changes
|
||||
.. class:: ChangesBuilder
|
||||
|
||||
@ -255,11 +287,11 @@ All serialization builders outputs one file per source file and a few special
|
||||
files. They also copy the reST source files in the directory ``_sources``
|
||||
under the output directory.
|
||||
|
||||
The :class:`PickleHTMLBuilder` is a builtin subclass that implements the pickle
|
||||
The :class:`.PickleHTMLBuilder` is a builtin subclass that implements the pickle
|
||||
serialization interface.
|
||||
|
||||
The files per source file have the extensions of
|
||||
:attr:`~SerializingHTMLBuilder.out_suffix`, and are arranged in directories
|
||||
:attr:`~.SerializingHTMLBuilder.out_suffix`, and are arranged in directories
|
||||
just as the source files are. They unserialize to a dictionary (or dictionary
|
||||
like structure) with these keys:
|
||||
|
||||
@ -290,7 +322,7 @@ like structure) with these keys:
|
||||
|
||||
The special files are located in the root output directory. They are:
|
||||
|
||||
:attr:`SerializingHTMLBuilder.globalcontext_filename`
|
||||
:attr:`.SerializingHTMLBuilder.globalcontext_filename`
|
||||
A pickled dict with these keys:
|
||||
|
||||
``project``, ``copyright``, ``release``, ``version``
|
||||
@ -309,7 +341,7 @@ The special files are located in the root output directory. They are:
|
||||
``titles``
|
||||
A dictionary of all documents' titles, as HTML strings.
|
||||
|
||||
:attr:`SerializingHTMLBuilder.searchindex_filename`
|
||||
:attr:`.SerializingHTMLBuilder.searchindex_filename`
|
||||
An index that can be used for searching the documentation. It is a pickled
|
||||
list with these entries:
|
||||
|
||||
|
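The pickled files described above are plain Python dicts. As a minimal sketch (with made-up values, assuming only the documented keys ``project``, ``copyright``, ``release`` and ``version``), a consumer can round-trip such a dict with the standard `pickle` module:

```python
import pickle

# A stand-in for the globalcontext dict described above; the keys mirror the
# documented ones, but the values here are invented for illustration only.
global_context = {
    'project': 'Demo',
    'copyright': '2011, Someone',
    'release': '1.1',
    'version': '1.1',
}

# The serializing builders write such dicts to disk in pickled form;
# a consumer reads the file back and unpickles it.
blob = pickle.dumps(global_context)
restored = pickle.loads(blob)
print(restored['project'])
```

Reading a real `globalcontext.pickle` produced by the builder would work the same way, with `pickle.load()` on the opened file.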
doc/conf.py | 26

@@ -13,7 +13,7 @@ templates_path = ['_templates']
 exclude_patterns = ['_build']
 
 project = 'Sphinx'
-copyright = '2007-2010, Georg Brandl'
+copyright = '2007-2011, Georg Brandl'
 version = sphinx.__released__
 release = version
 show_authors = True
@@ -21,7 +21,6 @@ show_authors = True
 html_theme = 'sphinxdoc'
 modindex_common_prefix = ['sphinx.']
 html_static_path = ['_static']
-html_index = 'index.html'
 html_sidebars = {'index': ['indexsidebar.html', 'searchbox.html']}
 html_additional_pages = {'index': 'index.html'}
 html_use_opensearch = 'http://sphinx.pocoo.org'
@@ -47,6 +46,7 @@ latex_logo = '_static/sphinx.png'
 latex_elements = {
     'fontpkg': '\\usepackage{palatino}',
 }
+latex_show_urls = 'footnote'
 
 autodoc_member_order = 'groupwise'
 todo_include_todos = True
@@ -66,6 +66,16 @@ man_pages = [
                'template generator', '', 1),
 ]
 
+texinfo_documents = [
+    ('contents', 'sphinx', 'Sphinx Documentation', 'Georg Brandl',
+     'Sphinx', 'The Sphinx documentation builder.', 'Documentation tools',
+     1),
+]
+
+# We're not using intersphinx right now, but if we did, this would be part of
+# the mapping:
+intersphinx_mapping = {'python': ('http://docs.python.org/dev', None)}
+
 
 # -- Extension interface -------------------------------------------------------
 
@@ -91,8 +101,12 @@ def parse_event(env, sig, signode):
 
 def setup(app):
     from sphinx.ext.autodoc import cut_lines
+    from sphinx.util.docfields import GroupedField
     app.connect('autodoc-process-docstring', cut_lines(4, what=['module']))
-    app.add_description_unit('confval', 'confval',
-                             objname='configuration value',
-                             indextemplate='pair: %s; configuration value')
-    app.add_description_unit('event', 'event', 'pair: %s; event', parse_event)
+    app.add_object_type('confval', 'confval',
+                        objname='configuration value',
+                        indextemplate='pair: %s; configuration value')
+    fdesc = GroupedField('parameter', label='Parameters',
+                         names=['param'], can_collapse=True)
+    app.add_object_type('event', 'event', 'pair: %s; event', parse_event,
+                        doc_field_types=[fdesc])
359
doc/config.rst
359
doc/config.rst
@ -98,7 +98,7 @@ General configuration
|
||||
Example patterns:
|
||||
|
||||
- ``'library/xml.rst'`` -- ignores the ``library/xml.rst`` file (replaces
|
||||
entry in :confval:`unused_docs`
|
||||
entry in :confval:`unused_docs`)
|
||||
- ``'library/xml'`` -- ignores the ``library/xml`` directory (replaces entry
|
||||
in :confval:`exclude_trees`)
|
||||
- ``'library/xml*'`` -- ignores all files and directories starting with
|
||||
@ -143,20 +143,6 @@ General configuration
|
||||
.. deprecated:: 1.0
|
||||
Use :confval:`exclude_patterns` instead.
|
||||
|
||||
.. confval:: locale_dirs
|
||||
|
||||
.. versionadded:: 0.5
|
||||
|
||||
Directories in which to search for additional Sphinx message catalogs (see
|
||||
:confval:`language`), relative to the source directory. The directories on
|
||||
this path are searched by the standard :mod:`gettext` module for a text
|
||||
domain of ``sphinx``; so if you add the directory :file:`./locale` to this
|
||||
settting, the message catalogs (compiled from ``.po`` format using
|
||||
:program:`msgfmt`) must be in
|
||||
:file:`./locale/{language}/LC_MESSAGES/sphinx.mo`.
|
||||
|
||||
The default is ``[]``.
|
||||
|
||||
.. confval:: templates_path
|
||||
|
||||
A list of paths that contain extra templates (or templates that overwrite
|
||||
@ -246,6 +232,14 @@ General configuration
|
||||
|
||||
.. versionadded:: 1.0
|
||||
|
||||
.. confval:: nitpick_ignore
|
||||
|
||||
A list of ``(type, target)`` tuples (by default empty) that should be ignored
|
||||
when generating warnings in "nitpicky mode". Note that ``type`` should
|
||||
include the domain name. An example entry would be ``('py:func', 'int')``.
|
||||
|
||||
.. versionadded:: 1.1
|
||||
|
||||
|
||||
Project information
|
||||
-------------------
|
||||
@ -272,38 +266,6 @@ Project information
|
||||
If you don't need the separation provided between :confval:`version` and
|
||||
:confval:`release`, just set them both to the same value.
|
||||
|
||||
.. confval:: language
|
||||
|
||||
The code for the language the docs are written in. Any text automatically
|
||||
generated by Sphinx will be in that language. Also, in the LaTeX builder, a
|
||||
suitable language will be selected as an option for the *Babel* package.
|
||||
Default is ``None``, which means that no translation will be done.
|
||||
|
||||
.. versionadded:: 0.5
|
||||
|
||||
Currently supported languages are:
|
||||
|
||||
* ``ca`` -- Catalan
|
||||
* ``cs`` -- Czech
|
||||
* ``da`` -- Danish
|
||||
* ``de`` -- German
|
||||
* ``en`` -- English
|
||||
* ``es`` -- Spanish
|
||||
* ``fi`` -- Finnish
|
||||
* ``fr`` -- French
|
||||
* ``hr`` -- Croatian
|
||||
* ``it`` -- Italian
|
||||
* ``lt`` -- Lithuanian
|
||||
* ``nl`` -- Dutch
|
||||
* ``pl`` -- Polish
|
||||
* ``pt_BR`` -- Brazilian Portuguese
|
||||
* ``ru`` -- Russian
|
||||
* ``sl`` -- Slovenian
|
||||
* ``tr`` -- Turkish
|
||||
* ``uk_UA`` -- Ukrainian
|
||||
* ``zh_CN`` -- Simplified Chinese
|
||||
* ``zh_TW`` -- Traditional Chinese
|
||||
|
||||
.. confval:: today
|
||||
today_fmt
|
||||
|
||||
@ -345,12 +307,12 @@ Project information
|
||||
|
||||
A boolean that decides whether module names are prepended to all
|
||||
:term:`object` names (for object types where a "module" of some kind is
|
||||
defined), e.g. for :rst:dir:`function` directives. Default is ``True``.
|
||||
defined), e.g. for :rst:dir:`py:function` directives. Default is ``True``.
|
||||
|
||||
.. confval:: show_authors
|
||||
|
||||
A boolean that decides whether :rst:dir:`moduleauthor` and :rst:dir:`sectionauthor`
|
||||
directives produce any output in the built files.
|
||||
A boolean that decides whether :rst:dir:`codeauthor` and
|
||||
:rst:dir:`sectionauthor` directives produce any output in the built files.
|
||||
|
||||
.. confval:: modindex_common_prefix
|
||||
|
||||
@ -372,11 +334,78 @@ Project information
|
||||
.. confval:: trim_doctest_flags
|
||||
|
||||
If true, doctest flags (comments looking like ``# doctest: FLAG, ...``) at
|
||||
the ends of lines are removed for all code blocks showing interactive Python
|
||||
sessions (i.e. doctests). Default is true. See the extension
|
||||
:mod:`~sphinx.ext.doctest` for more possibilities of including doctests.
|
||||
the ends of lines and ``<BLANKLINE>`` markers are removed for all code
|
||||
blocks showing interactive Python sessions (i.e. doctests). Default is
|
||||
true. See the extension :mod:`~sphinx.ext.doctest` for more possibilities
|
||||
of including doctests.
|
||||
|
||||
.. versionadded:: 1.0
|
||||
.. versionchanged:: 1.1
|
||||
Now also removes ``<BLANKLINE>``.
|
||||
|
||||
|
||||
.. _intl-options:
|
||||
|
||||
Options for internationalization
|
||||
--------------------------------
|
||||
|
||||
These options influence Sphinx' *Native Language Support*. See the
|
||||
documentation on :ref:`intl` for details.
|
||||
|
||||
.. confval:: language
|
||||
|
||||
The code for the language the docs are written in. Any text automatically
|
||||
generated by Sphinx will be in that language. Also, Sphinx will try to
|
||||
substitute individual paragraphs from your documents with the translation
|
||||
sets obtained from :confval:`locale_dirs`. In the LaTeX builder, a suitable
|
||||
language will be selected as an option for the *Babel* package. Default is
|
||||
``None``, which means that no translation will be done.
|
||||
|
||||
.. versionadded:: 0.5
|
||||
|
||||
Currently supported languages by Sphinx are:
|
||||
|
||||
* ``bn`` -- Bengali
|
||||
* ``ca`` -- Catalan
|
||||
* ``cs`` -- Czech
|
||||
* ``da`` -- Danish
|
||||
* ``de`` -- German
|
||||
* ``en`` -- English
|
||||
* ``es`` -- Spanish
|
||||
* ``fa`` -- Iranian
|
||||
* ``fi`` -- Finnish
|
||||
* ``fr`` -- French
|
||||
* ``hr`` -- Croatian
|
||||
* ``it`` -- Italian
|
||||
* ``ja`` -- Japanese
|
||||
* ``lt`` -- Lithuanian
|
||||
* ``nl`` -- Dutch
|
||||
* ``pl`` -- Polish
|
||||
* ``pt_BR`` -- Brazilian Portuguese
|
||||
* ``ru`` -- Russian
|
||||
* ``sl`` -- Slovenian
|
||||
* ``sv`` -- Swedish
|
||||
* ``tr`` -- Turkish
|
||||
* ``uk_UA`` -- Ukrainian
|
||||
* ``zh_CN`` -- Simplified Chinese
|
||||
* ``zh_TW`` -- Traditional Chinese
|
||||
|
||||
.. confval:: locale_dirs
|
||||
|
||||
.. versionadded:: 0.5
|
||||
|
||||
Directories in which to search for additional message catalogs (see
|
||||
:confval:`language`), relative to the source directory. The directories on
|
||||
this path are searched by the standard :mod:`gettext` module.
|
||||
|
||||
Internal messages are fetched from a text domain of ``sphinx``; so if you
|
||||
add the directory :file:`./locale` to this settting, the message catalogs
|
||||
(compiled from ``.po`` format using :program:`msgfmt`) must be in
|
||||
:file:`./locale/{language}/LC_MESSAGES/sphinx.mo`. The text domain of
|
||||
individual documents depends on their docname if they are top-level project
|
||||
files and on their base directory otherwise.
|
||||
|
||||
The default is ``[]``.
|
||||
|
||||
|
||||
.. _html-options:
|
||||
@ -434,6 +463,14 @@ that use Sphinx' HTMLWriter class.
|
||||
|
||||
.. versionadded:: 0.4
|
||||
|
||||
.. confval:: html_context
|
||||
|
||||
A dictionary of values to pass into the template engine's context for all
|
||||
pages. Single values can also be put in this dictionary using the
|
||||
:option:`-A` command-line option of ``sphinx-build``.
|
||||
|
||||
.. versionadded:: 0.5
|
||||
|
||||
.. confval:: html_logo
|
||||
|
||||
If given, this must be the name of an image file that is the logo of the
|
||||
@ -480,13 +517,19 @@ that use Sphinx' HTMLWriter class.
|
||||
|
||||
.. confval:: html_add_permalinks
|
||||
|
||||
If true, Sphinx will add "permalinks" for each heading and description
|
||||
environment as paragraph signs that become visible when the mouse hovers over
|
||||
them. Default: ``True``.
|
||||
Sphinx will add "permalinks" for each heading and description environment as
|
||||
paragraph signs that become visible when the mouse hovers over them.
|
||||
|
||||
This value determines the text for the permalink; it defaults to ``"¶"``.
|
||||
Set it to ``None`` or the empty string to disable permalinks.
|
||||
|
||||
.. versionadded:: 0.6
|
||||
Previously, this was always activated.
|
||||
|
||||
.. versionchanged:: 1.1
|
||||
This can now be a string to select the actual text of the link.
|
||||
Previously, only boolean values were accepted.
|
||||
|
||||
.. confval:: html_sidebars
|
||||
|
||||
Custom sidebar templates, must be a dictionary that maps document names to
|
||||
@ -552,19 +595,6 @@ that use Sphinx' HTMLWriter class.
|
||||
This will render the template ``customdownload.html`` as the page
|
||||
``download.html``.
|
||||
|
||||
.. note::
|
||||
|
||||
Earlier versions of Sphinx had a value called :confval:`html_index` which
|
||||
was a clumsy way of controlling the content of the "index" document. If
|
||||
you used this feature, migrate it by adding an ``'index'`` key to this
|
||||
setting, with your custom template as the value, and in your custom
|
||||
template, use ::
|
||||
|
||||
{% extend "defindex.html" %}
|
||||
{% block tables %}
|
||||
... old template content ...
|
||||
{% endblock %}
|
||||
|
||||
.. confval:: html_domain_indices
|
||||
|
||||
If true, generate domain-specific indices in addition to the general index.
|
||||
@ -682,6 +712,38 @@ that use Sphinx' HTMLWriter class.
|
||||
|
||||
.. versionadded:: 1.0
|
||||
|
||||
.. confval:: html_search_language
|
||||
|
||||
Language to be used for generating the HTML full-text search index. This
|
||||
defaults to the global language selected with :confval:`language`. If there
|
||||
is no support for this language, ``"en"`` is used which selects the English
|
||||
language.
|
||||
|
||||
Support is present for these languages:
|
||||
|
||||
* ``en`` -- English
|
||||
* ``ja`` -- Japanese
|
||||
|
||||
.. versionadded:: 1.1
|
||||
|
||||
.. confval:: html_search_options
|
||||
|
||||
A dictionary with options for the search language support, empty by default.
|
||||
The meaning of these options depends on the language selected.
|
||||
|
||||
The English support has no options.
|
||||
|
||||
The Japanese support has these options:
|
||||
|
||||
* ``type`` -- ``'mecab'`` or ``'default'`` (selects either MeCab or
|
||||
TinySegmenter word splitter algorithm)
|
||||
* ``dic_enc`` -- the encoding for the MeCab algorithm
|
||||
* ``dict`` -- the dictionary to use for the MeCab algorithm
|
||||
* ``lib`` -- the library name for finding the MeCab library via ctypes if the
|
||||
Python binding is not installed
|
||||
|
||||
.. versionadded:: 1.1

.. confval:: htmlhelp_basename

   Output file base name for HTML help builder. Default is ``'pydoc'``.
@ -767,6 +829,8 @@ the `Dublin Core metadata <http://dublincore.org/>`_.

   The default value is ``()``.

   .. versionadded:: 1.1

.. confval:: epub_pre_files

   Additional files that should be inserted before the text generated by
@ -804,6 +868,7 @@ the `Dublin Core metadata <http://dublincore.org/>`_.
   a chapter, but can be confusing because it mixes entries of different
   depth in one list. The default value is ``True``.


.. confval:: epub_fix_images

   This flag determines if Sphinx should try to fix image formats that are not
@ -903,10 +968,18 @@ These options influence LaTeX output.

.. confval:: latex_show_urls

   If true, add URL addresses after links. This is very useful for printed
   copies of the manual. Default is ``False``.
   Control whether to display URL addresses. This is very useful for printed
   copies of the manual. The setting can have the following values:

   * ``'no'`` -- do not display URLs (default)
   * ``'footnote'`` -- display URLs in footnotes
   * ``'inline'`` -- display URLs inline in parentheses

   .. versionadded:: 1.0
   .. versionchanged:: 1.1
      This value is now a string; previously it was a boolean value, and a true
      value selected the ``'inline'`` display. For backwards compatibility,
      ``True`` is still accepted.
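In ``conf.py`` this is now given as a string, for example:

```python
# conf.py
latex_show_urls = 'footnote'  # one of 'no', 'footnote', 'inline'; True still means 'inline'
```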

.. confval:: latex_elements

@ -1019,6 +1092,37 @@ These options influence LaTeX output.
   Use the ``'pointsize'`` key in the :confval:`latex_elements` value.


.. _text-options:

Options for text output
-----------------------

These options influence text output.

.. confval:: text_newlines

   Determines which end-of-line character(s) are used in text output.

   * ``'unix'``: use Unix-style line endings (``\n``)
   * ``'windows'``: use Windows-style line endings (``\r\n``)
   * ``'native'``: use the line ending style of the platform the documentation
     is built on

   Default: ``'unix'``.

   .. versionadded:: 1.1

.. confval:: text_sectionchars

   A string of 7 characters that should be used for underlining sections.
   The first character is used for first-level headings, the second for
   second-level headings and so on.

   The default is ``'*=-~"+`'``.

   .. versionadded:: 1.1
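Both text-builder options can be set together in ``conf.py``, e.g.:

```python
# conf.py
text_newlines = 'native'       # '\n' on Unix, '\r\n' on Windows
text_sectionchars = '*=-~"+`'  # 7 underline characters, first level first
```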


.. _man-options:

Options for manual page output
@ -1041,14 +1145,121 @@ These options influence manual page output.
     well as the name of the manual page (in the NAME section).
   * *description*: description of the manual page. This is used in the NAME
     section.
   * *authors*: A list of strings with authors, or a single string. Can be
     an empty string or list if you do not want to automatically generate
     an AUTHORS section in the manual page.
   * *authors*: A list of strings with authors, or a single string. Can be an
     empty string or list if you do not want to automatically generate an
     AUTHORS section in the manual page.
   * *section*: The manual page section. Used for the output file name as well
     as in the manual page header.

   .. versionadded:: 1.0
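A ``man_pages`` entry matching the tuple items above might look like this in ``conf.py`` (the document, project, and author names are placeholders):

```python
# conf.py -- placeholder names for illustration
man_pages = [
    ('index',                     # startdocname
     'myproject',                 # name of the manual page
     'MyProject Documentation',   # description (used in the NAME section)
     ['Author Name'],             # authors; [] suppresses the AUTHORS section
     1),                          # manual page section
]
```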

.. confval:: man_show_urls

   If true, add URL addresses after links. Default is ``False``.

   .. versionadded:: 1.1


.. _texinfo-options:

Options for Texinfo output
--------------------------

These options influence Texinfo output.

.. confval:: texinfo_documents

   This value determines how to group the document tree into Texinfo source
   files. It must be a list of tuples ``(startdocname, targetname, title,
   author, dir_entry, description, category, toctree_only)``, where the items
   are:

   * *startdocname*: document name that is the "root" of the Texinfo file. All
     documents referenced by it in TOC trees will be included in the Texinfo
     file too. (If you want only one Texinfo file, use your
     :confval:`master_doc` here.)
   * *targetname*: file name (no extension) of the Texinfo file in the output
     directory.
   * *title*: Texinfo document title. Can be empty to use the title of the
     *startdoc*.
   * *author*: Author for the Texinfo document. Use ``\and`` to separate
     multiple authors, as in: ``'John \and Sarah'``.
   * *dir_entry*: The name that will appear in the top-level ``DIR`` menu file.
   * *description*: Descriptive text to appear in the top-level ``DIR`` menu
     file.
   * *category*: Specifies the section in which this entry will appear in the
     top-level ``DIR`` menu file.
   * *toctree_only*: Must be ``True`` or ``False``. If ``True``, the *startdoc*
     document itself is not included in the output, only the documents
     referenced by it via TOC trees. With this option, you can put extra stuff
     in the master document that shows up in the HTML, but not the Texinfo
     output.

   .. versionadded:: 1.1
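For example, a single-file configuration in ``conf.py`` could look like this (project and author names are placeholders):

```python
# conf.py -- placeholder names for illustration
texinfo_documents = [
    ('index',                          # startdocname (the master document)
     'myproject',                      # targetname: writes myproject.texi
     'MyProject Documentation',        # title
     'Author Name',                    # author
     'myproject',                      # dir_entry
     'One-line project description.',  # description
     'Miscellaneous',                  # category in the DIR menu
     False),                           # toctree_only
]
```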


.. confval:: texinfo_appendices

   A list of document names to append as an appendix to all manuals.

   .. versionadded:: 1.1


.. confval:: texinfo_elements

   A dictionary that contains Texinfo snippets that override those that Sphinx
   usually puts into the generated ``.texi`` files.

   * Keys that you may want to override include:

     ``'paragraphindent'``
        Number of spaces to indent the first line of each paragraph, default
        ``2``. Specify ``0`` for no indentation.

     ``'exampleindent'``
        Number of spaces to indent the lines for examples or literal blocks,
        default ``4``. Specify ``0`` for no indentation.

     ``'preamble'``
        Text inserted as is near the beginning of the file.

   * Keys that are set by other options and therefore should not be overridden
     are:

     ``'filename'``
     ``'title'``
     ``'direntry'``

   .. versionadded:: 1.1
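A minimal override in ``conf.py``:

```python
# conf.py
texinfo_elements = {
    'paragraphindent': 0,  # no first-line indentation
    'exampleindent': 4,    # keep the default example indent
}
```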


Options for the linkcheck builder
---------------------------------

.. confval:: linkcheck_ignore

   A list of regular expressions that match URIs that should not be checked
   when doing a ``linkcheck`` build. Example::

      linkcheck_ignore = [r'http://localhost:\d+/']

   .. versionadded:: 1.1

.. confval:: linkcheck_timeout

   A timeout value, in seconds, for the linkcheck builder. **Only works in
   Python 2.6 and higher.** The default is to use Python's global socket
   timeout.

   .. versionadded:: 1.1

.. confval:: linkcheck_workers

   The number of worker threads to use when checking links. Default is 5
   threads.

   .. versionadded:: 1.1
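The three linkcheck settings combine naturally in ``conf.py``:

```python
# conf.py
linkcheck_ignore = [r'http://localhost:\d+/']  # skip local-only URLs
linkcheck_timeout = 10                         # seconds per request (Python 2.6+)
linkcheck_workers = 5                          # number of checker threads
```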


.. rubric:: Footnotes

@ -14,9 +14,11 @@ Sphinx documentation contents
   domains
   builders
   config
   intl
   theming
   templating
   extensions
   websupport

   faq
   glossary

@ -52,10 +52,19 @@ flag ``:noindex:``. An example using a Python domain directive::

   .. py:function:: spam(eggs)
                    ham(eggs)
      :noindex:

      Spam or ham the foo.

This describes the two Python functions ``spam`` and ``ham``. (Note that when
signatures become too long, you can break them if you add a backslash to lines
that are continued in the next line. Example::

   .. py:function:: filterwarnings(action, message='', category=Warning, \
                                   module='', lineno=0, append=False)
      :noindex:

(This example also shows how to use the ``:noindex:`` flag.)

The domains also provide roles that link back to these object descriptions. For
example, to link to one of the functions described in the example above, you
could say ::
@ -138,11 +147,12 @@ declarations:
.. rst:directive:: .. py:currentmodule:: name

   This directive tells Sphinx that the classes, functions etc. documented from
   here are in the given module (like :rst:dir:`py:module`), but it will not create
   index entries, an entry in the Global Module Index, or a link target for
   :rst:role:`mod`. This is helpful in situations where documentation for things in
   a module is spread over multiple files or sections -- one location has the
   :rst:dir:`py:module` directive, the others only :rst:dir:`py:currentmodule`.
   here are in the given module (like :rst:dir:`py:module`), but it will not
   create index entries, an entry in the Global Module Index, or a link target
   for :rst:role:`py:mod`. This is helpful in situations where documentation
   for things in a module is spread over multiple files or sections -- one
   location has the :rst:dir:`py:module` directive, the others only
   :rst:dir:`py:currentmodule`.


The following directives are provided for module and class contents:
@ -221,6 +231,45 @@ The following directives are provided for module and class contents:

   .. versionadded:: 0.6

.. rst:directive:: .. py:decorator:: name
                   .. py:decorator:: name(signature)

   Describes a decorator function. The signature should *not* represent the
   signature of the actual function, but the usage as a decorator. For example,
   given the functions

   .. code-block:: python

      def removename(func):
          func.__name__ = ''
          return func

      def setnewname(name):
          def decorator(func):
              func.__name__ = name
              return func
          return decorator

   the descriptions should look like this::

      .. py:decorator:: removename

         Remove name of the decorated function.

      .. py:decorator:: setnewname(name)

         Set name of the decorated function to *name*.

   There is no ``py:deco`` role to link to a decorator that is marked up with
   this directive; rather, use the :rst:role:`py:func` role.

.. rst:directive:: .. py:decoratormethod:: name
                   .. py:decoratormethod:: name(signature)

   Same as :rst:dir:`py:decorator`, but for decorators that are methods.

   Refer to a decorator method using the :rst:role:`py:meth` role.


.. _signatures:

@ -278,11 +327,6 @@ explained by an example::
   :type limit: integer or None
   :rtype: list of strings

It is also possible to combine parameter type and description, if the type is a
single word, like this::

   :param integer limit: maximum number of stack frames to show

This will render like this:

.. py:function:: format_exception(etype, value, tb[, limit=None])
@ -297,6 +341,13 @@ This will render like this:
   :type limit: integer or None
   :rtype: list of strings

It is also possible to combine parameter type and description, if the type is a
single word, like this::

   :param integer limit: maximum number of stack frames to show


.. _python-roles:

Cross-referencing Python objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -363,6 +414,9 @@ dot, this order is reversed. For example, in the documentation of Python's
:mod:`codecs` module, ``:py:func:`open``` always refers to the built-in
function, while ``:py:func:`.open``` refers to :func:`codecs.open`.

A similar heuristic is used to determine whether the name is an attribute of the
currently documented class.

Also, if the name is prefixed with a dot, and no exact match is found, the
target is taken as a suffix and all object names with that suffix are
searched. For example, ``:py:meth:`.TarFile.close``` references the
@ -370,8 +424,9 @@ searched. For example, ``:py:meth:`.TarFile.close``` references the
``tarfile``. Since this can get ambiguous, if there is more than one possible
match, you will get a warning from Sphinx.

A similar heuristic is used to determine whether the name is an attribute of the
currently documented class.
Note that you can combine the ``~`` and ``.`` prefixes:
``:py:meth:`~.TarFile.close``` will reference the ``tarfile.TarFile.close()``
method, but the visible link caption will only be ``close()``.

.. _c-domain:
@ -424,6 +479,8 @@ The C domain (name **c**) is suited for documentation of C API.
   .. c:var:: PyObject* PyClass_Type


.. _c-roles:

Cross-referencing C constructs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@ -508,6 +565,9 @@ The following directives are available:

   Select the current C++ namespace for the following objects.


.. _cpp-roles:

These roles link to the given object types:

.. rst:role:: cpp:class
@ -675,6 +735,8 @@ The JavaScript domain (name **js**) provides the following directives:

   Describes the attribute *name* of *object*.

.. _js-roles:

These roles are provided to refer to the described objects:

.. rst:role:: js:func
@ -726,6 +788,8 @@ The reStructuredText domain (name **rst**) provides the following directives:

   Foo description.

.. _rst-roles:

These roles are provided to refer to the described objects:

.. rst:role:: rst:dir
@ -739,4 +803,4 @@ The sphinx-contrib_ repository contains more domains available as extensions;
currently a Ruby and an Erlang domain.


.. _sphinx-contrib: http://bitbucket.org/birkenfeld/sphinx-contrib/
.. _sphinx-contrib: https://bitbucket.org/birkenfeld/sphinx-contrib/
@ -76,9 +76,9 @@ the following public API:

   Node visitor functions for the Sphinx HTML, LaTeX, text and manpage writers
   can be given as keyword arguments: the keyword must be one or more of
   ``'html'``, ``'latex'``, ``'text'``, ``'man'``, the value a 2-tuple of
   ``(visit, depart)`` methods. ``depart`` can be ``None`` if the ``visit``
   function raises :exc:`docutils.nodes.SkipNode`. Example:
   ``'html'``, ``'latex'``, ``'text'``, ``'man'``, ``'texinfo'``, the value a
   2-tuple of ``(visit, depart)`` methods. ``depart`` can be ``None`` if the
   ``visit`` function raises :exc:`docutils.nodes.SkipNode`. Example:

   .. code-block:: python

@ -163,7 +163,8 @@ the following public API:

   .. versionadded:: 0.6

.. method:: Sphinx.add_object_type(directivename, rolename, indextemplate='', parse_node=None, ref_nodeclass=None, objname='')
.. method:: Sphinx.add_object_type(directivename, rolename, indextemplate='', parse_node=None, \
                                   ref_nodeclass=None, objname='', doc_field_types=[])

   This method is a very convenient way to add a new :term:`object` type that
   can be cross-referenced. It will do this:
@ -210,7 +211,7 @@ the following public API:
   standard Sphinx roles (see :ref:`xref-syntax`).

   This method is also available under the deprecated alias
   :meth:`add_description_unit`.
   ``add_description_unit``.

.. method:: Sphinx.add_crossref_type(directivename, rolename, indextemplate='', ref_nodeclass=None, objname='')

@ -246,7 +247,8 @@ the following public API:

   Add *filename* to the list of JavaScript files that the default HTML template
   will include. The filename must be relative to the HTML static path, see
   :confval:`the docs for the config value <html_static_path>`.
   :confval:`the docs for the config value <html_static_path>`. A full URI with
   scheme, like ``http://example.org/foo.js``, is also supported.

   .. versionadded:: 0.5

@ -272,6 +274,8 @@ the following public API:
   This allows you to auto-document new types of objects. See the source of the
   autodoc module for examples on how to subclass :class:`Documenter`.

   .. XXX add real docs for Documenter and subclassing

   .. versionadded:: 0.6

.. method:: Sphinx.add_autodoc_attrgetter(type, getter)
@ -283,6 +287,15 @@ the following public API:

   .. versionadded:: 0.6

.. method:: Sphinx.add_search_language(cls)

   Add *cls*, which must be a subclass of :class:`sphinx.search.SearchLanguage`,
   as a support language for building the HTML full-text search index. The
   class must have a *lang* attribute that indicates the language it should be
   used for. See :confval:`html_search_language`.

   .. versionadded:: 1.1

.. method:: Sphinx.connect(event, callback)

   Register *callback* to be called when *event* is emitted. For details on
@ -340,6 +353,15 @@ registered event handlers.
   Emitted when the builder object has been created. It is available as
   ``app.builder``.

.. event:: env-get-outdated (app, env, added, changed, removed)

   Emitted when the environment determines which source files have changed and
   should be re-read. *added*, *changed* and *removed* are sets of docnames
   that the environment has determined. You can return a list of docnames to
   re-read in addition to these.

   .. versionadded:: 1.1

.. event:: env-purge-doc (app, env, docname)

   Emitted when all traces of a source file should be cleaned from the
@ -27,20 +27,21 @@ two locations for documentation, while at the same time avoiding
auto-generated-looking pure API documentation.

:mod:`autodoc` provides several directives that are versions of the usual
:rst:dir:`module`, :rst:dir:`class` and so forth. On parsing time, they import the
corresponding module and extract the docstring of the given objects, inserting
them into the page source under a suitable :rst:dir:`module`, :rst:dir:`class` etc.
directive.
:rst:dir:`py:module`, :rst:dir:`py:class` and so forth. At parsing time, they
import the corresponding module and extract the docstring of the given objects,
inserting them into the page source under a suitable :rst:dir:`py:module`,
:rst:dir:`py:class` etc. directive.

.. note::

   Just as :rst:dir:`class` respects the current :rst:dir:`module`, :rst:dir:`autoclass`
   will also do so, and likewise with :rst:dir:`method` and :rst:dir:`class`.
   Just as :rst:dir:`py:class` respects the current :rst:dir:`py:module`,
   :rst:dir:`autoclass` will also do so. Likewise, :rst:dir:`automethod` will
   respect the current :rst:dir:`py:class`.


.. rst:directive:: automodule
                   autoclass
                   autoexception

   Document a module, class or exception. All three directives will by default
   only insert the docstring of the object itself::
@ -83,14 +84,17 @@ directive.
   will document all non-private member functions and properties (that is,
   those whose name doesn't start with ``_``).

   For modules, ``__all__`` will be respected when looking for members; the
   order of the members will also be the order in ``__all__``.

   You can also give an explicit list of members; only these will then be
   documented::

      .. autoclass:: Noodle
         :members: eat, slurp

   * If you want to make the ``members`` option the default, see
     :confval:`autodoc_default_flags`.
   * If you want to make the ``members`` option (or other flag options described
     below) the default, see :confval:`autodoc_default_flags`.

   * Members without docstrings will be left out, unless you give the
     ``undoc-members`` flag option::
@ -99,9 +103,26 @@ directive.
           :members:
           :undoc-members:

   * "Private" members (that is, those named like ``_private`` or ``__private``)
     will be included if the ``private-members`` flag option is given.

     .. versionadded:: 1.1

   * Python "special" members (that is, those named like ``__special__``) will
     be included if the ``special-members`` flag option is given::

        .. autoclass:: my.Class
           :members:
           :private-members:
           :special-members:

     would document both "private" and "special" members of the class.

     .. versionadded:: 1.1

   * For classes and exceptions, members inherited from base classes will be
     left out, unless you give the ``inherited-members`` flag option, in
     addition to ``members``::
     left out when documenting all members, unless you give the
     ``inherited-members`` flag option, in addition to ``members``::

        .. autoclass:: Noodle
           :members:
@ -127,28 +148,29 @@ directive.

     .. versionadded:: 0.4

   * The :rst:dir:`automodule`, :rst:dir:`autoclass` and :rst:dir:`autoexception` directives
     also support a flag option called ``show-inheritance``. When given, a list
     of base classes will be inserted just below the class signature (when used
     with :rst:dir:`automodule`, this will be inserted for every class that is
     documented in the module).
   * The :rst:dir:`automodule`, :rst:dir:`autoclass` and
     :rst:dir:`autoexception` directives also support a flag option called
     ``show-inheritance``. When given, a list of base classes will be inserted
     just below the class signature (when used with :rst:dir:`automodule`, this
     will be inserted for every class that is documented in the module).

     .. versionadded:: 0.4

   * All autodoc directives support the ``noindex`` flag option that has the
     same effect as for standard :rst:dir:`function` etc. directives: no index
     entries are generated for the documented object (and all autodocumented
     members).
     same effect as for standard :rst:dir:`py:function` etc. directives: no
     index entries are generated for the documented object (and all
     autodocumented members).

     .. versionadded:: 0.4

   * :rst:dir:`automodule` also recognizes the ``synopsis``, ``platform`` and
     ``deprecated`` options that the standard :rst:dir:`module` directive supports.
     ``deprecated`` options that the standard :rst:dir:`py:module` directive
     supports.

     .. versionadded:: 0.5

   * :rst:dir:`automodule` and :rst:dir:`autoclass` also have a ``member-order`` option
     that can be used to override the global value of
   * :rst:dir:`automodule` and :rst:dir:`autoclass` also have a ``member-order``
     option that can be used to override the global value of
     :confval:`autodoc_member_order` for one directive.

     .. versionadded:: 0.6
@ -168,29 +190,45 @@ directive.

.. rst:directive:: autofunction
                   autodata
                   automethod
                   autoattribute

   These work exactly like :rst:dir:`autoclass` etc., but do not offer the options
   used for automatic member documentation.

   For module data members and class attributes, documentation can either be put
   into a special-formatted comment *before* the attribute definition, or in a
   docstring *after* the definition. This means that in the following class
   definition, both attributes can be autodocumented::
   into a special-formatted comment, or in a docstring *after* the definition.
   Comments need to be either on a line of their own *before* the definition, or
   immediately after the assignment *on the same line*. The latter form is
   restricted to one line only.

   This means that in the following class definition, all attributes can be
   autodocumented::

      class Foo:
          """Docstring for class Foo."""

          #: Doc comment for attribute Foo.bar.
          #: Doc comment for class attribute Foo.bar.
          #: It can have multiple lines.
          bar = 1

          flox = 1.5   #: Doc comment for Foo.flox. One line only.

          baz = 2
          """Docstring for attribute Foo.baz."""
          """Docstring for class attribute Foo.baz."""

          def __init__(self):
              #: Doc comment for instance attribute qux.
              self.qux = 3

              self.spam = 4
              """Docstring for instance attribute spam."""

   .. versionchanged:: 0.6
      :rst:dir:`autodata` and :rst:dir:`autoattribute` can now extract docstrings.
   .. versionchanged:: 1.1
      Comment docs are now allowed on the same line after an assignment.

   .. note::

@ -213,8 +251,8 @@ There are also new config values that you can set:

   ``"class"``
      Only the class' docstring is inserted. This is the default. You can
      still document ``__init__`` as a separate method using :rst:dir:`automethod`
      or the ``members`` option to :rst:dir:`autoclass`.
      still document ``__init__`` as a separate method using
      :rst:dir:`automethod` or the ``members`` option to :rst:dir:`autoclass`.
   ``"both"``
      Both the class' and the ``__init__`` method's docstring are concatenated
      and inserted.
@ -241,7 +279,8 @@ There are also new config values that you can set:

   This value is a list of autodoc directive flags that should be automatically
   applied to all autodoc directives. The supported flags are ``'members'``,
   ``'undoc-members'``, ``'inherited-members'`` and ``'show-inheritance'``.
   ``'undoc-members'``, ``'private-members'``, ``'special-members'``,
   ``'inherited-members'`` and ``'show-inheritance'``.

   If you set one of these flags in this config value, you can use a negated
   form, :samp:`'no-{flag}'`, in an autodoc directive, to disable it once.
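For example, to apply ``members`` and ``undoc-members`` everywhere:

```python
# conf.py
autodoc_default_flags = ['members', 'undoc-members']
```

An individual directive can then opt out with the negated form, e.g. ``:no-undoc-members:``.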
@ -255,6 +294,20 @@ There are also new config values that you can set:

   .. versionadded:: 1.0

.. confval:: autodoc_docstring_signature

   Functions imported from C modules cannot be introspected, and therefore the
   signature for such functions cannot be automatically determined. However, it
   is an often-used convention to put the signature into the first line of the
   function's docstring.

   If this boolean value is set to ``True`` (which is the default), autodoc will
   look at the first line of the docstring for functions and methods, and if it
   looks like a signature, use the line as the signature and remove it from the
   docstring content.

   .. versionadded:: 1.1
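The convention looks like this (a plain-Python sketch; in practice it matters for C functions, where the docstring is the only place the signature survives, and autodoc does the extraction at build time):

```python
# Sketch: the kind of docstring autodoc_docstring_signature recognizes.
def compress(data, level=9):
    """compress(data, level=9) -> bytes

    Compress *data* with the given *level*.
    """
    return data  # placeholder body for illustration

# autodoc would treat this first line as the signature:
first_line = compress.__doc__.splitlines()[0]
```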


Docstring preprocessing
-----------------------
@ -14,6 +14,7 @@ This extension features one additional builder, the :class:`CoverageBuilder`.

.. todo:: Write this section.


Several new configuration values can be used to specify what the builder
should check:

@ -28,3 +29,16 @@ should check:
.. confval:: coverage_c_regexes

.. confval:: coverage_ignore_c_items

.. confval:: coverage_write_headline

   Set to ``False`` to not write headlines.

   .. versionadded:: 1.1

.. confval:: coverage_skip_undoc_in_source

   Skip objects that are not documented in the source with a docstring.
   ``False`` by default.

   .. versionadded:: 1.1
@ -45,6 +45,14 @@ names.
   but executed before the doctests of the group(s) it belongs to.


.. rst:directive:: .. testcleanup:: [group]

   A cleanup code block. This code is not shown in the output for other
   builders, but executed after the doctests of the group(s) it belongs to.

   .. versionadded:: 1.1


.. rst:directive:: .. doctest:: [group]

   A doctest-style code block. You can use standard :mod:`doctest` flags for
@ -181,6 +189,14 @@ There are also these config values for customizing the doctest extension:

   .. versionadded:: 0.6

.. confval:: doctest_global_cleanup

   Python code that is treated as if it were put in a ``testcleanup`` directive
   for *every* file that is tested, and for every group. You can use this to
   e.g. remove any temporary files that the tests leave behind.

   .. versionadded:: 1.1
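A ``conf.py`` sketch that removes a scratch file after every tested file and group (the file name is an illustrative placeholder):

```python
# conf.py -- the file name is an illustrative placeholder
doctest_global_cleanup = '''
import os
if os.path.exists('scratch.tmp'):
    os.remove('scratch.tmp')
'''
```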

.. confval:: doctest_test_doctest_blocks

   If this is a nonempty string (the default is ``'default'``), standard reST
@ -221,4 +237,4 @@ There are also these config values for customizing the doctest extension:
   will be interpreted as one block ending and another one starting. Also,
   removal of ``<BLANKLINE>`` and ``# doctest:`` options only works in
   :rst:dir:`doctest` blocks, though you may set :confval:`trim_doctest_flags` to
   achieve the latter in all code blocks with Python console content.
   achieve that in all code blocks with Python console content.
@ -27,11 +27,11 @@ The extension adds one new config value:
   short alias names to a base URL and a *prefix*. For example, to create an
   alias for the above mentioned issues, you would add ::

      extlinks = {'issue': ('http://bitbucket.org/birkenfeld/sphinx/issue/%s',
      extlinks = {'issue': ('https://bitbucket.org/birkenfeld/sphinx/issue/%s',
                            'issue ')}

   Now, you can use the alias name as a new role, e.g. ``:issue:`123```. This
   then inserts a link to http://bitbucket.org/birkenfeld/sphinx/issue/123.
   then inserts a link to https://bitbucket.org/birkenfeld/sphinx/issue/123.
   As you can see, the target given in the role is substituted in the base URL
   in the place of ``%s``.
@ -29,6 +29,17 @@ It adds these directives:
   :confval:`graphviz_output_format`). In LaTeX output, the code will be
   rendered to an embeddable PDF file.

   You can also embed external dot files, by giving the file name as an
   argument to :rst:dir:`graphviz` and no additional content::

      .. graphviz:: external.dot

   As for all file references in Sphinx, if the filename is absolute, it is
   taken as relative to the source directory.

   .. versionchanged:: 1.1
      Added support for external files.


.. rst:directive:: graph

@ -61,6 +72,16 @@ It adds these directives:
   alternate text for HTML output. If not given, the alternate text defaults to
   the graphviz code.

   .. versionadded:: 1.1
      All three directives support an ``inline`` flag that controls paragraph
      breaks in the output. When set, the graph is inserted into the current
      paragraph. If the flag is not given, paragraph breaks are introduced
      before and after the image (the default).

   .. versionadded:: 1.1
      All three directives support a ``caption`` option that can be used to give
      a caption to the diagram. Naturally, diagrams marked as "inline" cannot
      have a caption.
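For example, a captioned (non-inline) diagram could be marked up as follows (a sketch using the option described above):

```rst
.. graphviz::
   :caption: A tiny example graph

   digraph { a -> b }
```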

There are also these new config values:

@ -17,7 +17,7 @@ It adds this directive:

   This directive has one or more arguments, each giving a module or class
   name. Class names can be unqualified; in that case they are taken to exist
   in the currently described module (see :rst:dir:`module`).
   in the currently described module (see :rst:dir:`py:module`).

   For each given class, and each class in each given module, the base classes
   are determined. Then, from all classes and their base classes, a graph is
@ -30,6 +30,13 @@ It adds this directive:
   ``lib.``, you can give ``:parts: 1`` to remove that prefix from the displayed
   node names.)

   It also supports a ``private-bases`` flag option; if given, private base
   classes (those whose name starts with ``_``) will be included.

   .. versionchanged:: 1.1
      Added ``private-bases`` option; previously, all bases were always
      included.


New config values are:

@ -84,7 +84,7 @@ linking:
|
||||
To add links to modules and objects in the Python standard library
|
||||
documentation, use::
|
||||
|
||||
intersphinx_mapping = {'python': ('http://docs.python.org/', None)}
|
||||
intersphinx_mapping = {'python': ('http://docs.python.org/3.2', None)}
|
||||
|
||||
This will download the corresponding :file:`objects.inv` file from the
|
||||
Internet and generate links to the pages under the given URI. The downloaded
|
||||
@ -94,12 +94,12 @@ linking:
|
||||
A second example, showing the meaning of a non-``None`` value of the second
|
||||
tuple item::
|
||||
|
||||
intersphinx_mapping = {'python': ('http://docs.python.org/',
|
||||
intersphinx_mapping = {'python': ('http://docs.python.org/3.2',
|
||||
'python-inv.txt')}
|
||||
|
||||
This will read the inventory from :file:`python-inv.txt` in the source
|
||||
directory, but still generate links to the pages under
|
||||
``http://docs.python.org/``. It is up to you to update the inventory file as
|
||||
``http://docs.python.org/3.2``. It is up to you to update the inventory file as
|
||||
new objects are added to the Python documentation.
|
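The two forms of the mapping value can be sketched together in a single ``conf.py`` fragment. The ``python`` entry mirrors the example above; the second project name, URI, and inventory filename are purely illustrative assumptions:

```python
# conf.py (sketch): each value is a (base URI, inventory) pair.
# None means "download objects.inv from the base URI"; a string names a
# local inventory file read from the source directory instead.
intersphinx_mapping = {
    'python': ('http://docs.python.org/3.2', None),
    # Hypothetical second project resolved from a locally saved inventory:
    'other': ('http://example.org/docs/', 'other-inv.txt'),
}
```
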
||||
|
||||
.. confval:: intersphinx_cache_limit
|
||||
|
@ -17,15 +17,15 @@ if possible, reuse that support too.
|
||||
|
||||
.. note::
|
||||
|
||||
:mod:`sphinx.ext.mathbase` is not meant to be added to the
|
||||
:confval:`extensions` config value, instead, use either
|
||||
:mod:`sphinx.ext.pngmath` or :mod:`sphinx.ext.jsmath` as described below.
|
||||
:mod:`.mathbase` is not meant to be added to the :confval:`extensions` config
|
||||
value, instead, use either :mod:`sphinx.ext.pngmath` or
|
||||
:mod:`sphinx.ext.jsmath` as described below.
|
||||
|
||||
The input language for mathematics is LaTeX markup. This is the de-facto
|
||||
standard for plain-text math notation and has the added advantage that no
|
||||
further translation is necessary when building LaTeX output.
|
||||
|
||||
:mod:`mathbase` defines these new markup elements:
|
||||
:mod:`.mathbase` defines these new markup elements:
|
||||
|
||||
.. rst:role:: math
|
||||
|
||||
@ -170,20 +170,58 @@ There are various config values you can set to influence how the images are buil
|
||||
Unfortunately, this only works when the `preview-latex package`_ is
|
||||
installed. Therefore, the default for this option is ``False``.
|
||||
|
||||
.. confval:: pngmath_add_tooltips
|
||||
|
||||
Default: true. If false, do not add the LaTeX code as an "alt" attribute for
|
||||
math images.
|
||||
|
||||
.. versionadded:: 1.1
|
||||
|
||||
|
||||
:mod:`sphinx.ext.mathjax` -- Render math via JavaScript
|
||||
-------------------------------------------------------
|
||||
|
||||
.. module:: sphinx.ext.mathjax
|
||||
:synopsis: Render math using JavaScript via MathJax.
|
||||
|
||||
.. versionadded:: 1.1
|
||||
|
||||
This extension puts math as-is into the HTML files. The JavaScript package
|
||||
MathJax_ is then loaded and transforms the LaTeX markup to readable math live in
|
||||
the browser.
|
||||
|
||||
Because MathJax (and the necessary fonts) is very large, it is not included in
|
||||
Sphinx. You must install it yourself, and give Sphinx its path in this config
|
||||
value:
|
||||
|
||||
.. confval:: mathjax_path
|
||||
|
||||
The path to the JavaScript file to include in the HTML files in order to load
|
||||
MathJax. There is no default.
|
||||
|
||||
The path can be absolute or relative; if it is relative, it is relative to
|
||||
the ``_static`` directory of the built docs.
|
||||
|
||||
For example, if you put MathJax into the static path of the Sphinx docs, this
|
||||
value would be ``MathJax/MathJax.js``. If you host more than one Sphinx
|
||||
documentation set on one server, it is advisable to install MathJax in a
|
||||
shared location.
|
||||
|
||||
You can also give a full ``http://`` URL. Kevin Dunn maintains a MathJax
|
||||
installation on a public server, which he offers for use by development and
|
||||
production servers::
|
||||
|
||||
mathjax_path = 'http://mathjax.connectmv.com/MathJax.js'
|
||||
|
||||
|
||||
:mod:`sphinx.ext.jsmath` -- Render math via JavaScript
|
||||
------------------------------------------------------
|
||||
|
||||
.. module:: sphinx.ext.jsmath
|
||||
:synopsis: Render math via JavaScript.
|
||||
:synopsis: Render math using JavaScript via JSMath.
|
||||
|
||||
This extension puts math as-is into the HTML files. The JavaScript package
|
||||
jsMath_ is then loaded and transforms the LaTeX markup to readable math live in
|
||||
the browser.
|
||||
|
||||
Because jsMath (and the necessary fonts) is very large, it is not included in
|
||||
Sphinx. You must install it yourself, and give Sphinx its path in this config
|
||||
value:
|
||||
This extension works just as the MathJax extension does, but uses the older
|
||||
package jsMath_. It provides this config value:
|
||||
|
||||
.. confval:: jsmath_path
|
||||
|
||||
@ -200,6 +238,7 @@ value:
|
||||
|
||||
|
||||
.. _dvipng: http://savannah.nongnu.org/projects/dvipng/
|
||||
.. _MathJax: http://www.mathjax.org/
|
||||
.. _jsMath: http://www.math.union.edu/~dpvc/jsmath/
|
||||
.. _preview-latex package: http://www.gnu.org/software/auctex/preview-latex.html
|
||||
.. _AmSMath LaTeX package: http://www.ams.org/tex/amslatex.html
|
||||
.. _AmSMath LaTeX package: http://www.ams.org/publications/authors/tex/amslatex
|
||||
|
@ -59,14 +59,19 @@ These extensions are built in and can be activated by respective entries in the
|
||||
Third-party extensions
|
||||
----------------------
|
||||
|
||||
There are several extensions that are not (yet) maintained in the Sphinx
|
||||
distribution. The `Wiki at BitBucket`_ maintains a list of those.
|
||||
You can find several extensions contributed by users in the `Sphinx Contrib`_
|
||||
repository. It is open for anyone who wants to maintain an extension
|
||||
publicly; just send a short message asking for write permissions.
|
||||
|
||||
If you write an extension that you think others will find useful, please write
|
||||
to the project mailing list (`join here <http://groups.google.com/group/sphinx-dev>`_)
|
||||
and we'll find the proper way of including or hosting it for the public.
|
||||
There are also several extensions hosted elsewhere. The `Wiki at BitBucket`_
|
||||
maintains a list of those.
|
||||
|
||||
.. _Wiki at BitBucket: http://www.bitbucket.org/birkenfeld/sphinx/wiki/Home
|
||||
If you write an extension that you think others will find useful or you think
|
||||
should be included as a part of Sphinx, please write to the project mailing
|
||||
list (`join here <http://groups.google.com/group/sphinx-dev>`_).
|
||||
|
||||
.. _Wiki at BitBucket: https://www.bitbucket.org/birkenfeld/sphinx/wiki/Home
|
||||
.. _Sphinx Contrib: https://www.bitbucket.org/birkenfeld/sphinx-contrib
|
||||
|
||||
|
||||
Where to put your own extensions?
|
||||
|
120
doc/faq.rst
@ -52,7 +52,7 @@ Doxygen
|
||||
|
||||
SCons
|
||||
Glenn Hutchings has written a SCons build script to build Sphinx
|
||||
documentation; it is hosted here: http://bitbucket.org/zondo/sphinx-scons
|
||||
documentation; it is hosted here: https://bitbucket.org/zondo/sphinx-scons
|
||||
|
||||
PyPI
|
||||
Jannis Leidel wrote a `setuptools command
|
||||
@ -60,10 +60,14 @@ PyPI
|
||||
Sphinx documentation to the PyPI package documentation area at
|
||||
http://packages.python.org/.
|
||||
|
||||
github pages
|
||||
You can use `Michael Jones' sphinx-to-github tool
|
||||
<http://github.com/michaeljones/sphinx-to-github/tree/master>`_ to prepare
|
||||
Sphinx HTML output.
|
||||
GitHub Pages
|
||||
Directories starting with underscores are ignored by default, which breaks
|
||||
static files in Sphinx. GitHub's preprocessor can be `disabled
|
||||
<https://github.com/blog/572-bypassing-jekyll-on-github-pages>`_ to support
|
||||
Sphinx HTML output properly.
|
||||
|
||||
MediaWiki
|
||||
See https://bitbucket.org/kevindunn/sphinx-wiki, a project by Kevin Dunn.
|
||||
|
||||
Google Analytics
|
||||
You can use a custom ``layout.html`` template, like this:
|
||||
@ -148,3 +152,109 @@ some notes:
|
||||
.. _Calibre: http://calibre-ebook.com/
|
||||
.. _FBreader: http://www.fbreader.org/
|
||||
.. _Bookworm: http://bookworm.oreilly.com/
|
||||
|
||||
|
||||
.. _texinfo-faq:
|
||||
|
||||
Texinfo info
|
||||
------------
|
||||
|
||||
The Texinfo builder is currently in an experimental stage but has successfully
|
||||
been used to build the documentation for both Sphinx and Python. The intended
|
||||
use of this builder is to generate Texinfo that is then processed into Info
|
||||
files.
|
||||
|
||||
There are two main programs for reading Info files, ``info`` and GNU Emacs. The
|
||||
``info`` program has fewer features but is available in most Unix environments
|
||||
and can be quickly accessed from the terminal. Emacs provides better font and
|
||||
color display and supports extensive customization (of course).
|
||||
|
||||
|
||||
.. _texinfo-links:
|
||||
|
||||
Displaying Links
|
||||
~~~~~~~~~~~~~~~~
|
||||
|
||||
One noticeable problem you may encounter with the generated Info files is how
|
||||
references are displayed. If you read the source of an Info file, a reference
|
||||
to this section would look like::
|
||||
|
||||
* note Displaying Links: target-id
|
||||
|
||||
In the stand-alone reader, ``info``, references are displayed just as they
|
||||
appear in the source. Emacs, on the other hand, will by default replace
|
||||
``\*note:`` with ``see`` and hide the ``target-id``. For example:
|
||||
|
||||
:ref:`texinfo-links`
|
||||
|
||||
The exact behavior of how Emacs displays references is dependent on the variable
|
||||
``Info-hide-note-references``. If set to the value of ``hide``, Emacs will hide
|
||||
both the ``\*note:`` part and the ``target-id``. This is generally the best way
|
||||
to view Sphinx-based documents since they often make frequent use of links and
|
||||
do not take this limitation into account. However, changing this variable
|
||||
affects how all Info documents are displayed, and most do take this behavior
|
||||
into account.
|
||||
|
||||
If you want Emacs to display Info files produced by Sphinx using the value
|
||||
``hide`` for ``Info-hide-note-references`` and the default value for all other
|
||||
Info files, try adding the following Emacs Lisp code to your start-up file,
|
||||
``~/.emacs.d/init.el``.
|
||||
|
||||
::
|
||||
|
||||
(defadvice info-insert-file-contents (after
|
||||
sphinx-info-insert-file-contents
|
||||
activate)
|
||||
"Hack to make `Info-hide-note-references' buffer-local and
|
||||
automatically set to `hide' iff it can be determined that this file
|
||||
was created from a Texinfo file generated by Docutils or Sphinx."
|
||||
(set (make-local-variable 'Info-hide-note-references)
|
||||
(default-value 'Info-hide-note-references))
|
||||
(save-excursion
|
||||
(save-restriction
|
||||
(widen) (goto-char (point-min))
|
||||
(when (re-search-forward
|
||||
"^Generated by \\(Sphinx\\|Docutils\\)"
|
||||
(save-excursion (search-forward "^_" nil t)) t)
|
||||
(set (make-local-variable 'Info-hide-note-references)
|
||||
'hide)))))
|
||||
|
||||
|
||||
Notes
|
||||
~~~~~
|
||||
|
||||
The following notes may be helpful if you want to create Texinfo files:
|
||||
|
||||
- Each section corresponds to a different ``node`` in the Info file.
|
||||
|
||||
- Some characters cannot be properly escaped in menu entries and xrefs. The
|
||||
following characters are replaced by spaces in these contexts: ``@``, ``{``,
|
||||
``}``, ``.``, ``,``, and ``:``.
|
||||
|
||||
- In the HTML and TeX output, the word ``see`` is automatically inserted before
|
||||
all xrefs.
|
||||
|
||||
- Links to external Info files can be created using the somewhat official URI
|
||||
scheme ``info``. For example::
|
||||
|
||||
info:Texinfo#makeinfo_options
|
||||
|
||||
which produces:
|
||||
|
||||
info:Texinfo#makeinfo_options
|
||||
|
||||
- Inline markup appears as follows in Info:
|
||||
|
||||
* strong -- \*strong\*
|
||||
* emphasis -- _emphasis_
|
||||
* literal -- \`literal'
|
||||
|
||||
It is possible to change this behavior using the Texinfo command
|
||||
``@definfoenclose``. For example, to make inline markup more closely resemble
|
||||
reST, add the following to your :file:`conf.py`::
|
||||
|
||||
texinfo_elements = {'preamble': """\
|
||||
@definfoenclose strong,**,**
|
||||
@definfoenclose emph,*,*
|
||||
@definfoenclose code,`@w{}`,`@w{}`
|
||||
"""}
|
||||
|
@ -23,7 +23,9 @@ Glossary
|
||||
A reStructuredText markup element that allows marking a block of content
|
||||
with special meaning. Directives are supplied not only by docutils, but
|
||||
Sphinx and custom extensions can add their own. The basic directive
|
||||
syntax looks like this::
|
||||
syntax looks like this:
|
||||
|
||||
.. sourcecode:: rst
|
||||
|
||||
.. directivename:: argument ...
|
||||
:option: value
|
||||
|
69
doc/intl.rst
Normal file
@ -0,0 +1,69 @@
|
||||
.. _intl:
|
||||
|
||||
Internationalization
|
||||
====================
|
||||
|
||||
.. versionadded:: 1.1
|
||||
|
||||
Complementary to translations provided for Sphinx-generated messages such as
|
||||
navigation bars, Sphinx provides mechanisms facilitating *document* translations
|
||||
in itself. See :ref:`intl-options` for details on configuration.
|
||||
|
||||
.. figure:: translation.png
|
||||
:width: 100%
|
||||
|
||||
Workflow visualization of translations in Sphinx. (The stick-figure is taken
|
||||
from an `XKCD comic <http://xkcd.com/779/>`_.)
|
||||
|
||||
**gettext** [1]_ is an established standard for internationalization and
|
||||
localization. It naïvely maps messages in a program to a translated string.
|
||||
Sphinx uses these facilities to translate whole documents.
|
||||
|
||||
Initially project maintainers have to collect all translatable strings (also
|
||||
referred to as *messages*) to make them known to translators. Sphinx extracts
|
||||
these through invocation of ``sphinx-build -b gettext``.
|
||||
|
||||
Every single element in the doctree will end up in a single message which
|
||||
results in lists being equally split into different chunks while large
|
||||
paragraphs will remain as coarsely-grained as they were in the original
|
||||
document. This grants seamless document updates while still providing a little
|
||||
bit of context for translators in free-text passages. It is the maintainer's
|
||||
task to split up paragraphs which are too large as there is no sane automated
|
||||
way to do that.
|
||||
|
||||
After Sphinx has successfully run the
|
||||
:class:`~sphinx.builders.gettext.MessageCatalogBuilder` you will find a collection
|
||||
of ``.pot`` files in your output directory. These are **catalog templates**
|
||||
and contain messages in your original language *only*.
|
||||
|
||||
They can be delivered to translators, who will transform them to ``.po`` files
|
||||
--- so called **message catalogs** --- containing a mapping from the original
|
||||
messages to foreign-language strings.
|
||||
|
||||
Gettext compiles them into a binary format known as **binary catalogs** through
|
||||
:program:`msgfmt` for efficiency reasons. If you make these files discoverable
|
||||
with :confval:`locale_dirs` for your :confval:`language`, Sphinx will pick them
|
||||
up automatically.
|
||||
|
||||
An example: you have a document ``usage.rst`` in your Sphinx project. The
|
||||
gettext builder will put its messages into ``usage.pot``. Imagine you have
|
||||
Spanish translations [2]_ at hand in ``usage.po`` --- for your builds to
|
||||
be translated you need to follow these instructions:
|
||||
|
||||
* Compile your message catalog to a locale directory, say ``translated``, so it
|
||||
ends up in ``./translated/es/LC_MESSAGES/usage.mo`` in your source directory
|
||||
(where ``es`` is the language code for Spanish.) ::
|
||||
|
||||
msgfmt "usage.po" -o "translated/es/LC_MESSAGES/usage.mo"
|
||||
|
||||
* Set :confval:`locale_dirs` to ``["translated/"]``.
|
||||
* Set :confval:`language` to ``es`` (also possible via :option:`-D`).
|
||||
* Run your desired build.
|
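The layout that these steps produce can be sketched as a small helper; the ``<locale_dir>/<language>/LC_MESSAGES/<docname>.mo`` pattern is taken from the instructions above, and the helper itself is only an illustration, not part of Sphinx:

```python
from pathlib import Path

# Sketch: with locale_dirs = ["translated/"] and language = "es", the
# compiled catalog for document "usage" is expected at
# <locale_dir>/<language>/LC_MESSAGES/<docname>.mo
def catalog_path(locale_dir, language, docname):
    return Path(locale_dir) / language / "LC_MESSAGES" / (docname + ".mo")

print(catalog_path("translated", "es", "usage").as_posix())
# translated/es/LC_MESSAGES/usage.mo
```
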
||||
|
||||
|
||||
.. rubric:: Footnotes
|
||||
|
||||
.. [1] See the `GNU gettext utilities
|
||||
<http://www.gnu.org/software/gettext/manual/gettext.html#Introduction>`_
|
||||
for details on that software suite.
|
||||
.. [2] Because nobody expects the Spanish Inquisition!
|
@ -25,7 +25,7 @@ to reStructuredText/Sphinx from other documentation systems.
|
||||
|
||||
* Gerard Flanagan has written a script to convert pure HTML to reST; it can be
|
||||
found at `BitBucket
|
||||
<http://bitbucket.org/djerdo/musette/src/tip/musette/html/html2rest.py>`_.
|
||||
<https://bitbucket.org/djerdo/musette/src/tip/musette/html/html2rest.py>`_.
|
||||
|
||||
* For converting the old Python docs to Sphinx, a converter was written which
|
||||
can be found at `the Python SVN repository
|
||||
@ -45,15 +45,19 @@ See the :ref:`pertinent section in the FAQ list <usingwith>`.
|
||||
Prerequisites
|
||||
-------------
|
||||
|
||||
Sphinx needs at least **Python 2.4** to run, as well as the docutils_ and
|
||||
Jinja2_ libraries. Sphinx should work with docutils version 0.5 or some
|
||||
(not broken) SVN trunk snapshot. If you like to have source code highlighting
|
||||
support, you must also install the Pygments_ library.
|
||||
Sphinx needs at least **Python 2.4** or **Python 3.1** to run, as well as the
|
||||
docutils_ and Jinja2_ libraries. Sphinx should work with docutils version 0.5
|
||||
or some (not broken) SVN trunk snapshot. If you would like source code
|
||||
highlighting support, you must also install the Pygments_ library.
|
||||
|
||||
If you use **Python 2.4** you also need uuid_.
|
||||
|
||||
.. _reStructuredText: http://docutils.sf.net/rst.html
|
||||
.. _docutils: http://docutils.sf.net/
|
||||
.. _Jinja2: http://jinja.pocoo.org/2/
|
||||
.. _Jinja2: http://jinja.pocoo.org/
|
||||
.. _Pygments: http://pygments.org/
|
||||
.. The given homepage is only a directory listing so I'm using the pypi site.
|
||||
.. _uuid: http://pypi.python.org/pypi/uuid/
|
||||
|
||||
|
||||
Usage
|
||||
|
@ -40,9 +40,16 @@ The :program:`sphinx-build` script has several options:
|
||||
**man**
|
||||
Build manual pages in groff format for UNIX systems.
|
||||
|
||||
**texinfo**
|
||||
Build Texinfo files that can be processed into Info files using
|
||||
:program:`makeinfo`.
|
||||
|
||||
**text**
|
||||
Build plain text files.
|
||||
|
||||
**gettext**
|
||||
Build gettext-style message catalogs (``.pot`` files).
|
||||
|
||||
**doctest**
|
||||
Run all doctests in the documentation, if the :mod:`~sphinx.ext.doctest`
|
||||
extension is enabled.
|
||||
|
@ -3,8 +3,8 @@
|
||||
Sphinx Markup Constructs
|
||||
========================
|
||||
|
||||
Sphinx adds a lot of new directives and interpreted text roles to standard reST
|
||||
markup. This section contains the reference material for these facilities.
|
||||
Sphinx adds a lot of new directives and interpreted text roles to `standard reST
|
||||
markup`_. This section contains the reference material for these facilities.
|
||||
|
||||
.. toctree::
|
||||
|
||||
@ -15,3 +15,5 @@ markup. This section contains the reference material for these facilities.
|
||||
misc
|
||||
|
||||
More markup is added by :ref:`domains`.
|
||||
|
||||
.. _standard reST markup: http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html
|
||||
|
@ -44,6 +44,18 @@ more versatile:
|
||||
tool-tip on mouse-hover) will always be the full target name.
|
||||
|
||||
|
||||
Cross-referencing objects
|
||||
-------------------------
|
||||
|
||||
These roles are described with their respective domains:
|
||||
|
||||
* :ref:`Python <python-roles>`
|
||||
* :ref:`C <c-roles>`
|
||||
* :ref:`C++ <cpp-roles>`
|
||||
* :ref:`JavaScript <js-roles>`
|
||||
* :ref:`ReST <rst-roles>`
|
||||
|
||||
|
||||
.. _ref-role:
|
||||
|
||||
Cross-referencing arbitrary locations
|
||||
@ -141,8 +153,50 @@ Referencing downloadable files
|
||||
suitable link generated to it.
|
||||
|
||||
|
||||
Cross-referencing other items of interest
|
||||
-----------------------------------------
|
||||
|
||||
The following roles may create a cross-reference, but do not refer to
|
||||
objects:
|
||||
|
||||
.. rst:role:: envvar
|
||||
|
||||
An environment variable. Index entries are generated. Also generates a link
|
||||
to the matching :rst:dir:`envvar` directive, if it exists.
|
||||
|
||||
.. rst:role:: token
|
||||
|
||||
The name of a grammar token (used to create links between
|
||||
:rst:dir:`productionlist` directives).
|
||||
|
||||
.. rst:role:: keyword
|
||||
|
||||
The name of a keyword in Python. This creates a link to a reference label
|
||||
with that name, if it exists.
|
||||
|
||||
.. rst:role:: option
|
||||
|
||||
A command-line option to an executable program. The leading hyphen(s) must
|
||||
be included. This generates a link to a :rst:dir:`option` directive, if it
|
||||
exists.
|
||||
|
||||
|
||||
The following role creates a cross-reference to the term in the glossary:
|
||||
|
||||
.. rst:role:: term
|
||||
|
||||
Reference to a term in the glossary. The glossary is created using the
|
||||
``glossary`` directive containing a definition list with terms and
|
||||
definitions. It does not have to be in the same file as the ``term`` markup,
|
||||
for example the Python docs have one global glossary in the ``glossary.rst``
|
||||
file.
|
||||
|
||||
If you use a term that's not explained in a glossary, you'll get a warning
|
||||
during build.
|
||||
|
||||
|
||||
Other semantic markup
|
||||
---------------------
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The following roles don't do anything special except formatting the text
|
||||
in a different style:
|
||||
@ -260,13 +314,14 @@ in a different style:
|
||||
.. rst:role:: samp
|
||||
|
||||
A piece of literal text, such as code. Within the contents, you can use
|
||||
curly braces to indicate a "variable" part, as in :rst:dir:`file`. For
|
||||
curly braces to indicate a "variable" part, as in :rst:role:`file`. For
|
||||
example, in ``:samp:`print 1+{variable}```, the part ``variable`` would be
|
||||
emphasized.
|
||||
|
||||
If you don't need the "variable part" indication, use the standard
|
||||
````code```` instead.
|
||||
|
||||
There is also an :rst:role:`index` role to generate index entries.
|
||||
|
||||
The following roles generate external links:
|
||||
|
||||
@ -274,65 +329,25 @@ The following roles generate external links:
|
||||
|
||||
A reference to a Python Enhancement Proposal. This generates appropriate
|
||||
index entries. The text "PEP *number*\ " is generated; in the HTML output,
|
||||
this text is a hyperlink to an online copy of the specified PEP.
|
||||
this text is a hyperlink to an online copy of the specified PEP. You can
|
||||
link to a specific section by saying ``:pep:`number#anchor```.
|
||||
|
||||
.. rst:role:: rfc
|
||||
|
||||
A reference to an Internet Request for Comments. This generates appropriate
|
||||
index entries. The text "RFC *number*\ " is generated; in the HTML output,
|
||||
this text is a hyperlink to an online copy of the specified RFC.
|
||||
this text is a hyperlink to an online copy of the specified RFC. You can
|
||||
link to a specific section by saying ``:rfc:`number#anchor```.
|
||||
|
||||
|
||||
Note that there are no special roles for including hyperlinks as you can use
|
||||
the standard reST markup for that purpose.
|
||||
|
||||
|
||||
Cross-referencing other items of interest
|
||||
-----------------------------------------
|
||||
|
||||
The following roles may create a cross-reference, but do not refer to
|
||||
objects:
|
||||
|
||||
.. rst:role:: envvar
|
||||
|
||||
An environment variable. Index entries are generated. Also generates a link
|
||||
to the matching :rst:dir:`envvar` directive, if it exists.
|
||||
|
||||
.. rst:role:: token
|
||||
|
||||
The name of a grammar token (used to create links between
|
||||
:rst:dir:`productionlist` directives).
|
||||
|
||||
.. rst:role:: keyword
|
||||
|
||||
The name of a keyword in Python. This creates a link to a reference label
|
||||
with that name, if it exists.
|
||||
|
||||
.. rst:role:: option
|
||||
|
||||
A command-line option to an executable program. The leading hyphen(s) must
|
||||
be included. This generates a link to a :rst:dir:`option` directive, if it
|
||||
exists.
|
||||
|
||||
|
||||
The following role creates a cross-reference to the term in the glossary:
|
||||
|
||||
.. rst:role:: term
|
||||
|
||||
Reference to a term in the glossary. The glossary is created using the
|
||||
``glossary`` directive containing a definition list with terms and
|
||||
definitions. It does not have to be in the same file as the ``term`` markup,
|
||||
for example the Python docs have one global glossary in the ``glossary.rst``
|
||||
file.
|
||||
|
||||
If you use a term that's not explained in a glossary, you'll get a warning
|
||||
during build.
|
||||
|
||||
|
||||
.. _default-substitutions:
|
||||
|
||||
Substitutions
|
||||
-------------
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
The documentation system provides three substitutions that are defined by default.
|
||||
They are set in the build configuration file.
|
||||
|
@ -13,10 +13,12 @@ like this::
|
||||
|
||||
:fieldname: Field content
|
||||
|
||||
A field list at the very top of a file is parsed by docutils as the "docinfo",
|
||||
A field list near the top of a file is parsed by docutils as the "docinfo"
|
||||
which is normally used to record the author, date of publication and other
|
||||
metadata. *In Sphinx*, the docinfo is used as metadata, too, but not displayed
|
||||
in the output.
|
||||
metadata. *In Sphinx*, a field list preceding any other markup is moved from
|
||||
the docinfo to the Sphinx environment as document metadata and is not displayed
|
||||
in the output; a field list appearing after the document title will be part of
|
||||
the docinfo as normal and will be displayed in the output.
|
||||
|
||||
At the moment, these metadata fields are recognized:
|
||||
|
||||
@ -62,6 +64,105 @@ Meta-information markup
|
||||
:confval:`show_authors` configuration value is True.
|
||||
|
||||
|
||||
Index-generating markup
|
||||
-----------------------
|
||||
|
||||
Sphinx automatically creates index entries from all object descriptions (like
|
||||
functions, classes or attributes) like discussed in :ref:`domains`.
|
||||
|
||||
However, there is also explicit markup available, to make the index more
|
||||
comprehensive and enable index entries in documents where information is not
|
||||
mainly contained in information units, such as the language reference.
|
||||
|
||||
.. rst:directive:: .. index:: <entries>
|
||||
|
||||
This directive contains one or more index entries. Each entry consists of a
|
||||
type and a value, separated by a colon.
|
||||
|
||||
For example::
|
||||
|
||||
.. index::
|
||||
single: execution; context
|
||||
module: __main__
|
||||
module: sys
|
||||
triple: module; search; path
|
||||
|
||||
The execution context
|
||||
---------------------
|
||||
|
||||
...
|
||||
|
||||
This directive contains five entries, which will be converted to entries in
|
||||
the generated index which link to the exact location of the index statement
|
||||
(or, in case of offline media, the corresponding page number).
|
||||
|
||||
Since index directives generate cross-reference targets at their location in
|
||||
the source, it makes sense to put them *before* the thing they refer to --
|
||||
e.g. a heading, as in the example above.
|
||||
|
||||
The possible entry types are:
|
||||
|
||||
single
|
||||
Creates a single index entry. Can be made a subentry by separating the
|
||||
subentry text with a semicolon (this notation is also used below to
|
||||
describe what entries are created).
|
||||
pair
|
||||
``pair: loop; statement`` is a shortcut that creates two index entries,
|
||||
namely ``loop; statement`` and ``statement; loop``.
|
||||
triple
|
||||
Likewise, ``triple: module; search; path`` is a shortcut that creates
|
||||
three index entries, which are ``module; search path``, ``search; path,
|
||||
module`` and ``path; module search``.
|
||||
see
|
||||
``see: entry; other`` creates an index entry that refers from ``entry`` to
|
||||
``other``.
|
||||
seealso
|
||||
Like ``see``, but inserts "see also" instead of "see".
|
||||
module, keyword, operator, object, exception, statement, builtin
|
||||
These all create two index entries. For example, ``module: hashlib``
|
||||
creates the entries ``module; hashlib`` and ``hashlib; module``. (These
|
||||
are Python-specific and therefore deprecated.)
|
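The expansion rules for ``pair`` and ``triple`` described above can be sketched as two small functions (an illustration of the documented behavior, not Sphinx's implementation):

```python
# "pair: a; b" creates two index entries.
def pair(a, b):
    return [f"{a}; {b}", f"{b}; {a}"]

# "triple: a; b; c" creates three rotated index entries.
def triple(a, b, c):
    return [f"{a}; {b} {c}", f"{b}; {c}, {a}", f"{c}; {a} {b}"]

print(pair("loop", "statement"))
print(triple("module", "search", "path"))
```
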
||||
|
||||
You can mark up "main" index entries by prefixing them with an exclamation
|
||||
mark. The references to "main" entries are emphasized in the generated
|
||||
index. For example, if two pages contain ::
|
||||
|
||||
.. index:: Python
|
||||
|
||||
and one page contains ::
|
||||
|
||||
.. index:: ! Python
|
||||
|
||||
then the backlink to the latter page is emphasized among the three backlinks.
|
||||
|
||||
For index directives containing only "single" entries, there is a shorthand
|
||||
notation::
|
||||
|
||||
.. index:: BNF, grammar, syntax, notation
|
||||
|
||||
This creates four index entries.
|
||||
|
||||
.. versionchanged:: 1.1
|
||||
Added ``see`` and ``seealso`` types, as well as marking main entries.
|
||||
|
||||
.. rst:role:: index
|
||||
|
||||
While the :rst:dir:`index` directive is a block-level markup and links to the
|
||||
beginning of the next paragraph, there is also a corresponding role that sets
|
||||
the link target directly where it is used.
|
||||
|
||||
The content of the role can be a simple phrase, which is then kept in the
|
||||
text and used as an index entry. It can also be a combination of text and
|
||||
index entry, styled like with explicit targets of cross-references. In that
|
||||
case, the "target" part can be a full entry as described for the directive
|
||||
above. For example::
|
||||
|
||||
This is a normal reST :index:`paragraph` that contains several
|
||||
:index:`index entries <pair: index; entry>`.
|
||||
|
||||
.. versionadded:: 1.1
|
||||
|
||||
|
||||
.. _tags:
|
||||
|
||||
Including content based on tags
|
||||
@ -76,7 +177,7 @@ Including content based on tags
|
||||
|
||||
Undefined tags are false, defined tags (via the ``-t`` command-line option or
|
||||
within :file:`conf.py`) are true. Boolean expressions, also using
|
||||
parentheses (like ``html and (latex or draft)`` are supported.
|
||||
parentheses (like ``html and (latex or draft)``) are supported.
|
||||
|
||||
The format of the current builder (``html``, ``latex`` or ``text``) is always
|
||||
set as a tag.
|
||||
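The semantics just described (undefined tags are false, defined tags are true, combined with boolean operators and parentheses) can be sketched with a tiny evaluator; this is an illustration only, not Sphinx's actual expression parser:

```python
import re

def eval_tag_expr(expr, defined):
    # Map every tag name in the expression to True iff it is defined,
    # then let Python evaluate the and/or/not structure.
    names = set(re.findall(r"\w+", expr)) - {"and", "or", "not"}
    env = {name: name in defined for name in names}
    return eval(expr, {"__builtins__": {}}, env)

print(eval_tag_expr("html and (latex or draft)", {"html", "draft"}))  # True
```
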
@ -124,9 +225,16 @@ following directive exists:
|
||||
|
||||
.. warning::
|
||||
|
||||
Tables that contain literal blocks cannot be set with ``tabulary``. They are
|
||||
therefore set with the standard LaTeX ``tabular`` environment. Also, the
|
||||
verbatim environment used for literal blocks only works in ``p{width}``
|
||||
columns, which means that by default, Sphinx generates such column specs for
|
||||
such tables. Use the :rst:dir:`tabularcolumns` directive to get finer control
|
||||
over such tables.
|
||||
Tables that contain list-like elements such as object descriptions,
|
||||
blockquotes or any kind of lists cannot be set out of the box with
|
||||
``tabulary``. They are therefore set with the standard LaTeX ``tabular``
|
||||
environment if you don't give a ``tabularcolumns`` directive. If you do, the
|
||||
table will be set with ``tabulary``, but you must use the ``p{width}``
|
||||
construct for the columns that contain these elements.
|
||||
|
||||
Literal blocks do not work with ``tabulary`` at all, so tables containing a
|
||||
literal block are always set with ``tabular``. Also, the verbatim
|
||||
environment used for literal blocks only works in ``p{width}`` columns, which
|
||||
means that by default, Sphinx generates such column specs for such tables.
|
||||
Use the :rst:dir:`tabularcolumns` directive to get finer control over such
|
||||
tables.
|
||||
|
@ -42,15 +42,25 @@ units as well as normal text:
|
||||
Example::
|
||||
|
||||
.. versionadded:: 2.5
|
||||
The `spam` parameter.
|
||||
The *spam* parameter.
|
||||
|
||||
Note that there must be no blank line between the directive head and the
|
||||
explanation; this is to make these blocks visually continuous in the markup.
|
||||
|
||||
.. rst:directive:: .. versionchanged:: version
|
||||
|
||||
Similar to :rst:dir:`versionadded`, but describes when and what changed in the named
|
||||
feature in some way (new parameters, changed side effects, etc.).
|
||||
Similar to :rst:dir:`versionadded`, but describes when and what changed in
|
||||
the named feature in some way (new parameters, changed side effects, etc.).
|
||||
|
||||
.. rst:directive:: .. deprecated:: version
|
||||
|
||||
Similar to :rst:dir:`versionchanged`, but describes when the feature was
|
||||
deprecated. An explanation can also be given, for example to inform the
|
||||
reader what should be used instead. Example::
|
||||
|
||||
.. deprecated:: 3.1
|
||||
Use :func:`spam` instead.
|
||||
|
||||
|
||||
--------------
|
||||
|
||||
@ -102,6 +112,10 @@ units as well as normal text:
|
||||
|
||||
.. centered:: LICENSE AGREEMENT
|
||||
|
||||
.. deprecated:: 1.1
|
||||
This presentation-only directive is a legacy from older versions. Use a
|
||||
:rst:dir:`rst-class` directive instead and add an appropriate style.
|
||||
|
||||
|
||||
.. rst:directive:: hlist
|
||||
|
||||
@ -134,76 +148,14 @@ For local tables of contents, use the standard reST :dudir:`contents directive
|
||||
<contents>`.
|
||||
|
||||
|
||||
Index-generating markup
|
||||
-----------------------
|
||||
|
||||
Sphinx automatically creates index entries from all object descriptions (like
|
||||
functions, classes or attributes) like discussed in :ref:`domains`.
|
||||
|
||||
However, there is also an explicit directive available, to make the index more
|
||||
comprehensive and enable index entries in documents where information is not
|
||||
mainly contained in information units, such as the language reference.
|
||||
|
||||
.. rst:directive:: .. index:: <entries>
|
||||
|
||||
This directive contains one or more index entries. Each entry consists of a
|
||||
type and a value, separated by a colon.
|
||||
|
||||
For example::
|
||||
|
||||
.. index::
|
||||
single: execution; context
|
||||
module: __main__
|
||||
module: sys
|
||||
triple: module; search; path
|
||||
|
||||
The execution context
|
||||
---------------------
|
||||
|
||||
...
|
||||
|
||||
This directive contains five entries, which will be converted to entries in
|
||||
the generated index which link to the exact location of the index statement
|
||||
(or, in case of offline media, the corresponding page number).
|
||||
|
||||
Since index directives generate cross-reference targets at their location in
|
||||
the source, it makes sense to put them *before* the thing they refer to --
|
||||
e.g. a heading, as in the example above.
|
||||
|
||||
The possible entry types are:
|
||||
|
||||
single
|
||||
Creates a single index entry. Can be made a subentry by separating the
|
||||
subentry text with a semicolon (this notation is also used below to
|
||||
describe what entries are created).
|
||||
pair
|
||||
``pair: loop; statement`` is a shortcut that creates two index entries,
|
||||
namely ``loop; statement`` and ``statement; loop``.
|
||||
triple
|
||||
Likewise, ``triple: module; search; path`` is a shortcut that creates
|
||||
three index entries, which are ``module; search path``, ``search; path,
|
||||
module`` and ``path; module search``.
|
||||
module, keyword, operator, object, exception, statement, builtin
|
||||
These all create two index entries. For example, ``module: hashlib``
|
||||
creates the entries ``module; hashlib`` and ``hashlib; module``. (These
|
||||
are Python-specific and therefore deprecated.)
|
||||
|
||||
For index directives containing only "single" entries, there is a shorthand
|
||||
notation::
|
||||
|
||||
.. index:: BNF, grammar, syntax, notation
|
||||
|
||||
This creates four index entries.
|

Glossary
--------

.. rst:directive:: .. glossary::

-   This directive must contain a reST definition list with terms and
-   definitions. The definitions will then be referencable with the :rst:role:`term`
-   role. Example::
+   This directive must contain a reST definition-list-like markup with terms and
+   definitions. The definitions will then be referencable with the
+   :rst:role:`term` role. Example::

      .. glossary::

@@ -217,10 +169,25 @@ Glossary

         The directory which, including its subdirectories, contains all
         source files for one Sphinx project.

+   In contrast to regular definition lists, *multiple* terms per entry are
+   allowed, and inline markup is allowed in terms. You can link to all of the
+   terms. For example::
+
+      .. glossary::
+
+         term 1
+         term 2
+            Definition of both terms.
+
+   (When the glossary is sorted, the first term determines the sort order.)
+
   .. versionadded:: 0.6
      You can now give the glossary directive a ``:sorted:`` flag that will
      automatically sort the entries alphabetically.

+   .. versionchanged:: 1.1
+      Now supports multiple terms and inline markup in terms.
+

Grammar production displays
---------------------------
@@ -41,6 +41,8 @@ tables of contents. The ``toctree`` directive is the central element.

   document, the library index. From this information it generates "next
   chapter", "previous chapter" and "parent chapter" links.

+   **Entries**
+
   Document titles in the :rst:dir:`toctree` will be automatically read from the
   title of the referenced document. If that isn't what you want, you can
   specify an explicit title and target using a similar syntax to reST

@@ -59,8 +61,10 @@ tables of contents. The ``toctree`` directive is the central element.

   You can also add external links, by giving an HTTP URL instead of a document
   name.

+   **Section numbering**
+
   If you want to have section numbers even in HTML output, give the toctree a
-   ``numbered`` flag option. For example::
+   ``numbered`` option. For example::

      .. toctree::
         :numbered:

@@ -71,6 +75,11 @@ tables of contents. The ``toctree`` directive is the central element.

   Numbering then starts at the heading of ``foo``. Sub-toctrees are
   automatically numbered (don't give the ``numbered`` flag to those).

+   Numbering up to a specific depth is also possible, by giving the depth as a
+   numeric argument to ``numbered``.
+
+   **Additional options**
+
   If you want only the titles of documents in the tree to show up, not other
   headings of the same level, you can use the ``titlesonly`` option::

@@ -133,6 +142,9 @@ tables of contents. The ``toctree`` directive is the central element.

   .. versionchanged:: 1.0
      Added "titlesonly" option.

+   .. versionchanged:: 1.1
+      Added numeric argument to "numbered".
+

Special names
-------------

@@ -151,7 +163,7 @@ The special document names (and pages generated for them) are:

  :ref:`object descriptions <basic-domain-markup>`, and from :rst:dir:`index`
  directives.

-  The module index contains one entry per :rst:dir:`module` directive.
+  The Python module index contains one entry per :rst:dir:`py:module` directive.

  The search page contains a form that uses the generated JSON search index and
  JavaScript to full-text search the generated documents for search words; it

@@ -325,14 +325,15 @@ directives.) Looking at this example, ::

   .. function:: foo(x)
                 foo(y, z)
-      :bar: no
+      :module: some.module.name

      Return a line of text input from the user.

   ``function`` is the directive name. It is given two arguments here, the
-   remainder of the first line and the second line, as well as one option ``bar``
-   (as you can see, options are given in the lines immediately following the
-   arguments and indicated by the colons).
+   remainder of the first line and the second line, as well as one option
+   ``module`` (as you can see, options are given in the lines immediately following
+   the arguments and indicated by the colons). Options must be indented to the
+   same level as the directive content.

   The directive content follows after a blank line and is indented relative to the
   directive start.

@@ -21,10 +21,10 @@ No. You have several other options:

  configuration value accordingly.

* You can :ref:`write a custom builder <writing-builders>` that derives from
-  :class:`~sphinx.builders.StandaloneHTMLBuilder` and calls your template engine
-  of choice.
+  :class:`~sphinx.builders.html.StandaloneHTMLBuilder` and calls your template
+  engine of choice.

-* You can use the :class:`~sphinx.builders.PickleHTMLBuilder` that produces
+* You can use the :class:`~sphinx.builders.html.PickleHTMLBuilder` that produces
  pickle files with the page contents, and postprocess them using a custom tool,
  or use them in your Web application.

@@ -261,9 +261,9 @@ in the future.

.. data:: file_suffix

-   The value of the builder's :attr:`out_suffix` attribute, i.e. the file name
-   extension that the output files will get. For a standard HTML builder, this
-   is usually ``.html``.
+   The value of the builder's :attr:`~.SerializingHTMLBuilder.out_suffix`
+   attribute, i.e. the file name extension that the output files will get. For
+   a standard HTML builder, this is usually ``.html``.

.. data:: has_source
BIN  doc/themes/fullsize/pyramid.png  (vendored, new file; binary file not shown; 129 KiB)

BIN  doc/themes/pyramid.png  (vendored, new file; binary file not shown; 48 KiB)

@@ -69,9 +69,9 @@ Builtin themes

| | |
| *traditional* | *nature* |
+--------------------+--------------------+
-| |haiku| | |
+| |haiku| | |pyramid| |
| | |
-| *haiku* | |
+| *haiku* | *pyramid* |
+--------------------+--------------------+

.. |default| image:: themes/default.png

@@ -81,6 +82,7 @@ Builtin themes

.. |traditional| image:: themes/traditional.png
.. |nature| image:: themes/nature.png
.. |haiku| image:: themes/haiku.png
+.. |pyramid| image:: themes/pyramid.png

Sphinx comes with a selection of themes to choose from.

@@ -88,12 +89,15 @@ These themes are:

* **basic** -- This is a basically unstyled layout used as the base for the
  other themes, and usable as the base for custom themes as well. The HTML
-  contains all important elements like sidebar and relation bar. There is one
-  option (which is inherited by the other themes):
+  contains all important elements like sidebar and relation bar. There are
+  these options (which are inherited by the other themes):

  - **nosidebar** (true or false): Don't include the sidebar. Defaults to
    false.

+  - **sidebarwidth** (an integer): Width of the sidebar in pixels. (Do not
+    include ``px`` in the value.) Defaults to 230 pixels.
+
* **default** -- This is the default theme, which looks like `the Python
  documentation <http://docs.python.org/>`_. It can be customized via these
  options:

@@ -119,6 +123,8 @@ These themes are:

  - **footerbgcolor** (CSS color): Background color for the footer line.
  - **footertextcolor** (CSS color): Text color for the footer line.
  - **sidebarbgcolor** (CSS color): Background color for the sidebar.
+  - **sidebarbtncolor** (CSS color): Background color for the sidebar collapse
+    button (used when *collapsiblesidebar* is true).
  - **sidebartextcolor** (CSS color): Text color for the sidebar.
  - **sidebarlinkcolor** (CSS color): Link color for the sidebar.
  - **relbarbgcolor** (CSS color): Background color for the relation bar.

@@ -139,11 +145,11 @@ These themes are:

  - **headfont** (CSS font-family): Font for headings.

* **sphinxdoc** -- The theme used for this documentation. It features a sidebar
-  on the right side. There are currently no options beyond *nosidebar*.
+  on the right side. There are currently no options beyond *nosidebar* and
+  *sidebarwidth*.

* **scrolls** -- A more lightweight theme, based on `the Jinja documentation
-  <http://jinja.pocoo.org/2/documentation/>`_. The following color options are
-  available:
+  <http://jinja.pocoo.org/>`_. The following color options are available:

  - **headerbordercolor**
  - **subheadlinecolor**

@@ -174,7 +180,11 @@ These themes are:

  is ``justify``.

* **nature** -- A greenish theme. There are currently no options beyond
-  *nosidebar*.
+  *nosidebar* and *sidebarwidth*.

+* **pyramid** -- A theme from the Pyramid web framework project, designed by
+  Blaise Laflamme. There are currently no options beyond *nosidebar* and
+  *sidebarwidth*.
+
* **haiku** -- A theme without sidebar inspired by the `Haiku OS user guide
  <http://www.haiku-os.org/docs/userguide/en/contents.html>`_. The following

@@ -188,7 +198,7 @@ These themes are:

  **hoverlinkcolor** (CSS colors): Colors for various body elements.

* **traditional** -- A theme resembling the old Python documentation. There are
-  currently no options beyond *nosidebar*.
+  currently no options beyond *nosidebar* and *sidebarwidth*.

* **epub** -- A theme for the epub builder. There are currently no options.
  This theme tries to save visual space which is a sparse resource on ebook

@@ -204,7 +214,7 @@ name), containing the following:

* A :file:`theme.conf` file, see below.
* HTML templates, if needed.
* A ``static/`` directory containing any static files that will be copied to the
-  output statid directory on build. These can be images, styles, script files.
+  output static directory on build. These can be images, styles, script files.

The :file:`theme.conf` file is in INI format [1]_ (readable by the standard
Python :mod:`ConfigParser` module) and has the following structure:
BIN  doc/translation.png  (new file; binary file not shown; 40 KiB)

doc/web/api.rst  (new file, 65 lines)
@@ -0,0 +1,65 @@

.. _websupportapi:

.. currentmodule:: sphinx.websupport

The WebSupport Class
====================

.. class:: WebSupport

   The main API class for the web support package. All interactions with the
   web support package should occur through this class.

   The class takes the following keyword arguments:

   srcdir
      The directory containing reStructuredText source files.

   builddir
      The directory that build data and static files should be placed in. This
      should be used when creating a :class:`WebSupport` object that will be
      used to build data.

   datadir
      The directory that the web support data is in. This should be used when
      creating a :class:`WebSupport` object that will be used to retrieve data.

   search
      This may contain either a string (e.g. 'xapian') referencing a built-in
      search adapter to use, or an instance of a subclass of
      :class:`~.search.BaseSearch`.

   storage
      This may contain either a string representing a database uri, or an
      instance of a subclass of :class:`~.storage.StorageBackend`. If this is
      not provided, a new sqlite database will be created.

   moderation_callback
      A callable to be called when a new comment is added that is not
      displayed. It must accept one argument: a dictionary representing the
      comment that was added.

   staticdir
      If static files are served from a location besides ``'/static'``, this
      should be a string with the name of that location
      (e.g. ``'/static_files'``).

   docroot
      If the documentation is not served from the base path of a URL, this
      should be a string specifying that path (e.g. ``'docs'``).


Methods
~~~~~~~

.. automethod:: sphinx.websupport.WebSupport.build

.. automethod:: sphinx.websupport.WebSupport.get_document

.. automethod:: sphinx.websupport.WebSupport.get_data

.. automethod:: sphinx.websupport.WebSupport.add_comment

.. automethod:: sphinx.websupport.WebSupport.process_vote

.. automethod:: sphinx.websupport.WebSupport.get_search_results
doc/web/quickstart.rst  (new file, 254 lines)
@@ -0,0 +1,254 @@

.. _websupportquickstart:

Web Support Quick Start
=======================

Building Documentation Data
~~~~~~~~~~~~~~~~~~~~~~~~~~~

To make use of the web support package in your application you'll need to build
the data it uses. This data includes pickle files representing documents,
search indices, and node data that is used to track where comments and other
things are in a document. To do this you will need to create an instance of the
:class:`~.WebSupport` class and call its :meth:`~.WebSupport.build` method::

   from sphinx.websupport import WebSupport

   support = WebSupport(srcdir='/path/to/rst/sources/',
                        builddir='/path/to/build/outdir',
                        search='xapian')

   support.build()

This will read reStructuredText sources from `srcdir` and place the necessary
data in `builddir`. The `builddir` will contain two sub-directories: one named
"data" that contains all the data needed to display documents, search through
documents, and add comments to documents. The other directory will be called
"static" and contains static files that should be served from "/static".

.. note::

   If you wish to serve static files from a path other than "/static", you can
   do so by providing the *staticdir* keyword argument when creating the
   :class:`~.WebSupport` object.


Integrating Sphinx Documents Into Your Webapp
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Now that the data is built, it's time to do something useful with it. Start off
by creating a :class:`~.WebSupport` object for your application::

   from sphinx.websupport import WebSupport

   support = WebSupport(datadir='/path/to/the/data',
                        search='xapian')

You'll only need one of these for each set of documentation you will be working
with. You can then call its :meth:`~.WebSupport.get_document` method to access
individual documents::

   contents = support.get_document('contents')

This will return a dictionary containing the following items:

* **body**: The main body of the document as HTML
* **sidebar**: The sidebar of the document as HTML
* **relbar**: A div containing links to related documents
* **title**: The title of the document
* **css**: Links to css files used by Sphinx
* **js**: Javascript containing comment options

This dict can then be used as context for templates. The goal is to be easy to
integrate with your existing templating system. An example using `Jinja2
<http://jinja.pocoo.org/>`_ is:

.. sourcecode:: html+jinja

   {%- extends "layout.html" %}

   {%- block title %}
      {{ document.title }}
   {%- endblock %}

   {% block css %}
      {{ super() }}
      {{ document.css|safe }}
      <link rel="stylesheet" href="/static/websupport-custom.css" type="text/css">
   {% endblock %}

   {%- block js %}
      {{ super() }}
      {{ document.js|safe }}
   {%- endblock %}

   {%- block relbar %}
      {{ document.relbar|safe }}
   {%- endblock %}

   {%- block body %}
      {{ document.body|safe }}
   {%- endblock %}

   {%- block sidebar %}
      {{ document.sidebar|safe }}
   {%- endblock %}
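The same context dict works with any template engine, not just Jinja2. As a minimal stand-in sketch using only the standard library's `string.Template` (the `document` values below are placeholders, not real Sphinx output):

```python
from string import Template

# A toy page layout with one $slot per key that get_document() is
# described as returning above.
page = Template("""<html>
<head><title>$title</title>$css</head>
<body>$relbar$body$sidebar$js</body>
</html>""")

# Placeholder values standing in for the dict returned by
# support.get_document('contents').
document = {
    'title': 'Contents',
    'css': '<link rel="stylesheet" href="/static/default.css">',
    'js': '<script>var COMMENT_OPTIONS = {};</script>',
    'relbar': '<div class="relbar"></div>',
    'body': '<div class="body">...</div>',
    'sidebar': '<div class="sidebar"></div>',
}

html = page.substitute(document)
```

Because the keys are plain strings of HTML, the dict drops into Django, Mako, or any other engine the same way.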

Authentication
--------------

To use certain features such as voting, it must be possible to authenticate
users. The details of the authentication are left to your application. Once a
user has been authenticated you can pass the user's details to certain
:class:`~.WebSupport` methods using the *username* and *moderator* keyword
arguments. The web support package will store the username with comments and
votes. The only caveat is that if you allow users to change their username you
must update the websupport package's data::

   support.update_username(old_username, new_username)

*username* should be a unique string which identifies a user, and *moderator*
should be a boolean representing whether the user has moderation privileges.
The default value for *moderator* is *False*.

An example `Flask <http://flask.pocoo.org/>`_ function that checks whether a
user is logged in and then retrieves a document is::

   from sphinx.websupport.errors import *

   @app.route('/<path:docname>')
   def doc(docname):
       username = g.user.name if g.user else ''
       moderator = g.user.moderator if g.user else False
       try:
           document = support.get_document(docname, username, moderator)
       except DocumentNotFoundError:
           abort(404)
       return render_template('doc.html', document=document)

The first thing to notice is that the *docname* is just the request path. This
makes accessing the correct document easy from a single view. If the user is
authenticated, then the username and moderation status are passed along with the
docname to :meth:`~.WebSupport.get_document`. The web support package will then
add this data to the ``COMMENT_OPTIONS`` that are used in the template.

.. note::

   This only works if your documentation is served from your
   document root. If it is served from another directory, you will
   need to prefix the url route with that directory, and give the `docroot`
   keyword argument when creating the web support object::

      support = WebSupport(..., docroot='docs')

      @app.route('/docs/<path:docname>')


Performing Searches
~~~~~~~~~~~~~~~~~~~

To use the search form built-in to the Sphinx sidebar, create a function to
handle requests to the url 'search' relative to the documentation root. The
user's search query will be in the GET parameters, with the key `q`. Then use
the :meth:`~sphinx.websupport.WebSupport.get_search_results` method to retrieve
search results. In `Flask <http://flask.pocoo.org/>`_ that would be like this::

   @app.route('/search')
   def search():
       q = request.args.get('q')
       document = support.get_search_results(q)
       return render_template('doc.html', document=document)

Note that we used the same template to render our search results as we did to
render our documents. That's because :meth:`~.WebSupport.get_search_results`
returns a context dict in the same format that :meth:`~.WebSupport.get_document`
does.


Comments & Proposals
~~~~~~~~~~~~~~~~~~~~

Now that this is done it's time to define the functions that handle the AJAX
calls from the script. You will need three functions. The first function is
used to add a new comment, and will call the web support method
:meth:`~.WebSupport.add_comment`::

   @app.route('/docs/add_comment', methods=['POST'])
   def add_comment():
       parent_id = request.form.get('parent', '')
       node_id = request.form.get('node', '')
       text = request.form.get('text', '')
       proposal = request.form.get('proposal', '')
       username = g.user.name if g.user is not None else 'Anonymous'
       comment = support.add_comment(text, node_id=node_id,
                                     parent_id=parent_id,
                                     username=username, proposal=proposal)
       return jsonify(comment=comment)

You'll notice that both a `parent_id` and `node_id` are sent with the
request. If the comment is being attached directly to a node, `parent_id`
will be empty. If the comment is a child of another comment, then `node_id`
will be empty. The next function handles the retrieval of comments for a
specific node, and is aptly named
:meth:`~sphinx.websupport.WebSupport.get_data`::

   @app.route('/docs/get_comments')
   def get_comments():
       username = g.user.name if g.user else None
       moderator = g.user.moderator if g.user else False
       node_id = request.args.get('node', '')
       data = support.get_data(node_id, username, moderator)
       return jsonify(**data)

The final function that is needed will call :meth:`~.WebSupport.process_vote`,
and will handle user votes on comments::

   @app.route('/docs/process_vote', methods=['POST'])
   def process_vote():
       if g.user is None:
           abort(401)
       comment_id = request.form.get('comment_id')
       value = request.form.get('value')
       if value is None or comment_id is None:
           abort(400)
       support.process_vote(comment_id, g.user.id, value)
       return "success"


Comment Moderation
~~~~~~~~~~~~~~~~~~

By default, all comments added through :meth:`~.WebSupport.add_comment` are
automatically displayed. If you wish to have some form of moderation, you can
pass the `displayed` keyword argument::

   comment = support.add_comment(text, node_id=node_id,
                                 parent_id=parent_id,
                                 username=username, proposal=proposal,
                                 displayed=False)

You can then create a new view to handle the moderation of comments. It
will be called when a moderator decides a comment should be accepted and
displayed::

   @app.route('/docs/accept_comment', methods=['POST'])
   def accept_comment():
       moderator = g.user.moderator if g.user else False
       comment_id = request.form.get('id')
       support.accept_comment(comment_id, moderator=moderator)
       return 'OK'

Rejecting comments happens via comment deletion.

To perform a custom action (such as emailing a moderator) when a new comment is
added but not displayed, you can pass a callable to the :class:`~.WebSupport`
class when instantiating your support object::

   def moderation_callback(comment):
       """Do something..."""

   support = WebSupport(..., moderation_callback=moderation_callback)

The moderation callback must take one argument, which will be the same comment
dict that is returned by :meth:`add_comment`.
doc/web/searchadapters.rst  (new file, 45 lines)
@@ -0,0 +1,45 @@

.. _searchadapters:

.. currentmodule:: sphinx.websupport.search

Search Adapters
===============

To create a custom search adapter you will need to subclass the
:class:`BaseSearch` class. Then create an instance of the new class and pass
that as the `search` keyword argument when you create the :class:`~.WebSupport`
object::

   support = WebSupport(srcdir=srcdir,
                        builddir=builddir,
                        search=MySearch())

For more information about creating a custom search adapter, please see the
documentation of the :class:`BaseSearch` class below.

.. class:: BaseSearch

   Defines an interface for search adapters.


BaseSearch Methods
~~~~~~~~~~~~~~~~~~

The following methods are defined in the BaseSearch class. Some methods do
not need to be overridden, but some (:meth:`~BaseSearch.add_document` and
:meth:`~BaseSearch.handle_query`) must be overridden in your subclass. For a
working example, look at the built-in adapter for whoosh.

.. automethod:: BaseSearch.init_indexing

.. automethod:: BaseSearch.finish_indexing

.. automethod:: BaseSearch.feed

.. automethod:: BaseSearch.add_document

.. automethod:: BaseSearch.query

.. automethod:: BaseSearch.handle_query

.. automethod:: BaseSearch.extract_context
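To give a feel for what a context-extraction helper in a search adapter does, here is a naive, hypothetical sketch. It only mimics the general idea (return a snippet of text around the first hit); the real `BaseSearch.extract_context` signature and behaviour may differ:

```python
def extract_context(text, query, length=60):
    """Return a short snippet of ``text`` surrounding the first occurrence
    of ``query`` (case-insensitive).

    Naive sketch only: a real search adapter would typically also
    highlight the hit and respect tokenisation/stemming.
    """
    pos = text.lower().find(query.lower())
    if pos == -1:
        # no hit: fall back to the start of the document text
        return text[:length]
    start = max(0, pos - length // 2)
    end = min(len(text), pos + len(query) + length // 2)
    prefix = '...' if start > 0 else ''
    suffix = '...' if end < len(text) else ''
    return prefix + text[start:end] + suffix
```

A whoosh- or Xapian-backed adapter would compute something similar per result when building the hit list that `handle_query` returns.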

doc/web/storagebackends.rst  (new file, 44 lines)
@@ -0,0 +1,44 @@

.. _storagebackends:

.. currentmodule:: sphinx.websupport.storage

Storage Backends
================

To create a custom storage backend you will need to subclass the
:class:`StorageBackend` class. Then create an instance of the new class and
pass that as the `storage` keyword argument when you create the
:class:`~.WebSupport` object::

   support = WebSupport(srcdir=srcdir,
                        builddir=builddir,
                        storage=MyStorage())

For more information about creating a custom storage backend, please see the
documentation of the :class:`StorageBackend` class below.

.. class:: StorageBackend

   Defines an interface for storage backends.


StorageBackend Methods
~~~~~~~~~~~~~~~~~~~~~~

.. automethod:: StorageBackend.pre_build

.. automethod:: StorageBackend.add_node

.. automethod:: StorageBackend.post_build

.. automethod:: StorageBackend.add_comment

.. automethod:: StorageBackend.delete_comment

.. automethod:: StorageBackend.get_data

.. automethod:: StorageBackend.process_vote

.. automethod:: StorageBackend.update_username

.. automethod:: StorageBackend.accept_comment
|
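The backend interface can be illustrated with a minimal in-memory sketch. This is not the real ``StorageBackend`` class, and the ``add_comment``/``get_data`` signatures below are simplified stand-ins (the real methods take more arguments, e.g. moderation flags and proposals); the point is only to show which build-time and serve-time hooks a subclass fills in.

```python
class MemoryStorage:
    """Toy stand-in for a StorageBackend subclass, keeping data in dicts."""

    def __init__(self):
        self.nodes = {}     # node id -> (document, source)
        self.comments = {}  # node id -> list of comment dicts

    def add_node(self, id, document, source):
        # Called once per commentable node while building; a real backend
        # would insert a database row here.
        self.nodes[id] = (document, source)

    def add_comment(self, text, username, node_id):
        # The real method takes more arguments (proposal, moderation flags,
        # parent id); this illustrative version stores only the essentials.
        comment = {'text': text, 'username': username, 'node': node_id}
        self.comments.setdefault(node_id, []).append(comment)
        return comment

    def get_data(self, node_id, username, moderator):
        # Mirrors the idea of StorageBackend.get_data: everything attached
        # to one node, for rendering in the web application.
        return {'source': self.nodes[node_id][1],
                'comments': self.comments.get(node_id, [])}


storage = MemoryStorage()
storage.add_node('n1', 'index', '<p>some paragraph</p>')
storage.add_comment('Nice docs!', 'alice', 'n1')
print(len(storage.get_data('n1', 'alice', False)['comments']))
```

A production backend would back these dicts with a database, but the node-registration (build time) versus comment/query (serve time) split is the same.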
16 doc/websupport.rst Normal file
@@ -0,0 +1,16 @@
.. _websupport:

Sphinx Web Support
==================

.. versionadded:: 1.1

Sphinx provides a Python API to easily integrate Sphinx documentation into your
web application. To learn more, read the :ref:`websupportquickstart`.

.. toctree::

   web/quickstart
   web/api
   web/searchadapters
   web/storagebackends
276 ez_setup.py
@@ -1,276 +0,0 @@
#!python
"""Bootstrap setuptools installation

If you want to use setuptools in your package's setup.py, just include this
file in the same directory with it, and add this to the top of your setup.py::

    from ez_setup import use_setuptools
    use_setuptools()

If you want to require a specific version of setuptools, set a download
mirror, or use an alternate download directory, you can do so by supplying
the appropriate options to ``use_setuptools()``.

This file can also be run as a script to install or upgrade setuptools.
"""
import sys
DEFAULT_VERSION = "0.6c9"
DEFAULT_URL = "http://pypi.python.org/packages/%s/s/setuptools/" % sys.version[:3]

md5_data = {
    'setuptools-0.6b1-py2.3.egg': '8822caf901250d848b996b7f25c6e6ca',
    'setuptools-0.6b1-py2.4.egg': 'b79a8a403e4502fbb85ee3f1941735cb',
    'setuptools-0.6b2-py2.3.egg': '5657759d8a6d8fc44070a9d07272d99b',
    'setuptools-0.6b2-py2.4.egg': '4996a8d169d2be661fa32a6e52e4f82a',
    'setuptools-0.6b3-py2.3.egg': 'bb31c0fc7399a63579975cad9f5a0618',
    'setuptools-0.6b3-py2.4.egg': '38a8c6b3d6ecd22247f179f7da669fac',
    'setuptools-0.6b4-py2.3.egg': '62045a24ed4e1ebc77fe039aa4e6f7e5',
    'setuptools-0.6b4-py2.4.egg': '4cb2a185d228dacffb2d17f103b3b1c4',
    'setuptools-0.6c1-py2.3.egg': 'b3f2b5539d65cb7f74ad79127f1a908c',
    'setuptools-0.6c1-py2.4.egg': 'b45adeda0667d2d2ffe14009364f2a4b',
    'setuptools-0.6c2-py2.3.egg': 'f0064bf6aa2b7d0f3ba0b43f20817c27',
    'setuptools-0.6c2-py2.4.egg': '616192eec35f47e8ea16cd6a122b7277',
    'setuptools-0.6c3-py2.3.egg': 'f181fa125dfe85a259c9cd6f1d7b78fa',
    'setuptools-0.6c3-py2.4.egg': 'e0ed74682c998bfb73bf803a50e7b71e',
    'setuptools-0.6c3-py2.5.egg': 'abef16fdd61955514841c7c6bd98965e',
    'setuptools-0.6c4-py2.3.egg': 'b0b9131acab32022bfac7f44c5d7971f',
    'setuptools-0.6c4-py2.4.egg': '2a1f9656d4fbf3c97bf946c0a124e6e2',
    'setuptools-0.6c4-py2.5.egg': '8f5a052e32cdb9c72bcf4b5526f28afc',
    'setuptools-0.6c5-py2.3.egg': 'ee9fd80965da04f2f3e6b3576e9d8167',
    'setuptools-0.6c5-py2.4.egg': 'afe2adf1c01701ee841761f5bcd8aa64',
    'setuptools-0.6c5-py2.5.egg': 'a8d3f61494ccaa8714dfed37bccd3d5d',
    'setuptools-0.6c6-py2.3.egg': '35686b78116a668847237b69d549ec20',
    'setuptools-0.6c6-py2.4.egg': '3c56af57be3225019260a644430065ab',
    'setuptools-0.6c6-py2.5.egg': 'b2f8a7520709a5b34f80946de5f02f53',
    'setuptools-0.6c7-py2.3.egg': '209fdf9adc3a615e5115b725658e13e2',
    'setuptools-0.6c7-py2.4.egg': '5a8f954807d46a0fb67cf1f26c55a82e',
    'setuptools-0.6c7-py2.5.egg': '45d2ad28f9750e7434111fde831e8372',
    'setuptools-0.6c8-py2.3.egg': '50759d29b349db8cfd807ba8303f1902',
    'setuptools-0.6c8-py2.4.egg': 'cba38d74f7d483c06e9daa6070cce6de',
    'setuptools-0.6c8-py2.5.egg': '1721747ee329dc150590a58b3e1ac95b',
    'setuptools-0.6c9-py2.3.egg': 'a83c4020414807b496e4cfbe08507c03',
    'setuptools-0.6c9-py2.4.egg': '260a2be2e5388d66bdaee06abec6342a',
    'setuptools-0.6c9-py2.5.egg': 'fe67c3e5a17b12c0e7c541b7ea43a8e6',
    'setuptools-0.6c9-py2.6.egg': 'ca37b1ff16fa2ede6e19383e7b59245a',
}

import sys, os
try: from hashlib import md5
except ImportError: from md5 import md5

def _validate_md5(egg_name, data):
    if egg_name in md5_data:
        digest = md5(data).hexdigest()
        if digest != md5_data[egg_name]:
            print >>sys.stderr, (
                "md5 validation of %s failed! (Possible download problem?)"
                % egg_name
            )
            sys.exit(2)
    return data

def use_setuptools(
    version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
    download_delay=15
):
    """Automatically find/download setuptools and make it available on sys.path

    `version` should be a valid setuptools version number that is available
    as an egg for download under the `download_base` URL (which should end with
    a '/'). `to_dir` is the directory where setuptools will be downloaded, if
    it is not already available. If `download_delay` is specified, it should
    be the number of seconds that will be paused before initiating a download,
    should one be required. If an older version of setuptools is installed,
    this routine will print a message to ``sys.stderr`` and raise SystemExit in
    an attempt to abort the calling script.
    """
    was_imported = 'pkg_resources' in sys.modules or 'setuptools' in sys.modules
    def do_download():
        egg = download_setuptools(version, download_base, to_dir, download_delay)
        sys.path.insert(0, egg)
        import setuptools; setuptools.bootstrap_install_from = egg
    try:
        import pkg_resources
    except ImportError:
        return do_download()
    try:
        pkg_resources.require("setuptools>="+version); return
    except pkg_resources.VersionConflict, e:
        if was_imported:
            print >>sys.stderr, (
            "The required version of setuptools (>=%s) is not available, and\n"
            "can't be installed while this script is running. Please install\n"
            " a more recent version first, using 'easy_install -U setuptools'."
            "\n\n(Currently using %r)"
            ) % (version, e.args[0])
            sys.exit(2)
        else:
            del pkg_resources, sys.modules['pkg_resources'] # reload ok
            return do_download()
    except pkg_resources.DistributionNotFound:
        return do_download()

def download_setuptools(
    version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
    delay = 15
):
    """Download setuptools from a specified location and return its filename

    `version` should be a valid setuptools version number that is available
    as an egg for download under the `download_base` URL (which should end
    with a '/'). `to_dir` is the directory where the egg will be downloaded.
    `delay` is the number of seconds to pause before an actual download attempt.
    """
    import urllib2, shutil
    egg_name = "setuptools-%s-py%s.egg" % (version,sys.version[:3])
    url = download_base + egg_name
    saveto = os.path.join(to_dir, egg_name)
    src = dst = None
    if not os.path.exists(saveto): # Avoid repeated downloads
        try:
            from distutils import log
            if delay:
                log.warn("""
---------------------------------------------------------------------------
This script requires setuptools version %s to run (even to display
help). I will attempt to download it for you (from
%s), but
you may need to enable firewall access for this script first.
I will start the download in %d seconds.

(Note: if this machine does not have network access, please obtain the file

   %s

and place it in this directory before rerunning this script.)
---------------------------------------------------------------------------""",
                    version, download_base, delay, url
                ); from time import sleep; sleep(delay)
            log.warn("Downloading %s", url)
            src = urllib2.urlopen(url)
            # Read/write all in one block, so we don't create a corrupt file
            # if the download is interrupted.
            data = _validate_md5(egg_name, src.read())
            dst = open(saveto,"wb"); dst.write(data)
        finally:
            if src: src.close()
            if dst: dst.close()
    return os.path.realpath(saveto)



def main(argv, version=DEFAULT_VERSION):
    """Install or upgrade setuptools and EasyInstall"""
    try:
        import setuptools
    except ImportError:
        egg = None
        try:
            egg = download_setuptools(version, delay=0)
            sys.path.insert(0,egg)
            from setuptools.command.easy_install import main
            return main(list(argv)+[egg]) # we're done here
        finally:
            if egg and os.path.exists(egg):
                os.unlink(egg)
    else:
        if setuptools.__version__ == '0.0.1':
            print >>sys.stderr, (
            "You have an obsolete version of setuptools installed. Please\n"
            "remove it from your system entirely before rerunning this script."
            )
            sys.exit(2)

    req = "setuptools>="+version
    import pkg_resources
    try:
        pkg_resources.require(req)
    except pkg_resources.VersionConflict:
        try:
            from setuptools.command.easy_install import main
        except ImportError:
            from easy_install import main
        main(list(argv)+[download_setuptools(delay=0)])
        sys.exit(0) # try to force an exit
    else:
        if argv:
            from setuptools.command.easy_install import main
            main(argv)
        else:
            print "Setuptools version",version,"or greater has been installed."
            print '(Run "ez_setup.py -U setuptools" to reinstall or upgrade.)'

def update_md5(filenames):
    """Update our built-in md5 registry"""

    import re

    for name in filenames:
        base = os.path.basename(name)
        f = open(name,'rb')
        md5_data[base] = md5(f.read()).hexdigest()
        f.close()

    data = ["    %r: %r,\n" % it for it in md5_data.items()]
    data.sort()
    repl = "".join(data)

    import inspect
    srcfile = inspect.getsourcefile(sys.modules[__name__])
    f = open(srcfile, 'rb'); src = f.read(); f.close()

    match = re.search("\nmd5_data = {\n([^}]+)}", src)
    if not match:
        print >>sys.stderr, "Internal error!"
        sys.exit(2)

    src = src[:match.start(1)] + repl + src[match.end(1):]
    f = open(srcfile,'w')
    f.write(src)
    f.close()


if __name__=='__main__':
    if len(sys.argv)>2 and sys.argv[1]=='--md5update':
        update_md5(sys.argv[2:])
    else:
        main(sys.argv[1:])
12 setup.py
@@ -2,8 +2,8 @@
 try:
     from setuptools import setup, find_packages
 except ImportError:
-    import ez_setup
-    ez_setup.use_setuptools()
+    import distribute_setup
+    distribute_setup.use_setuptools()
     from setuptools import setup, find_packages
 
 import os
@@ -47,7 +47,7 @@ A development egg can be found `here
 requires = ['Pygments>=0.8', 'Jinja2>=2.2', 'docutils>=0.5']
 
 if sys.version_info < (2, 4):
-    print 'ERROR: Sphinx requires at least Python 2.4 to run.'
+    print('ERROR: Sphinx requires at least Python 2.4 to run.')
     sys.exit(1)
 
 if sys.version_info < (2, 5):
@@ -63,6 +63,9 @@ if sys.version_info < (2, 5):
 else:
     del requires[-1]
 
+    # The uuid module is new in the stdlib in 2.5
+    requires.append('uuid>=1.30')
+
 
 # Provide a "compile_catalog" command that also creates the translated
 # JavaScript files if Babel is available.
@@ -190,6 +193,7 @@ setup(
         'console_scripts': [
             'sphinx-build = sphinx:main',
             'sphinx-quickstart = sphinx.quickstart:main',
+            'sphinx-apidoc = sphinx.apidoc:main',
             'sphinx-autogen = sphinx.ext.autosummary.generate:main',
         ],
         'distutils.commands': [
@@ -198,4 +202,6 @@ setup(
     },
     install_requires=requires,
     cmdclass=cmdclass,
+    use_2to3=True,
+    use_2to3_fixers=['custom_fixers'],
 )
15 sphinx-apidoc.py Executable file
@@ -0,0 +1,15 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
    Sphinx - Python documentation toolchain
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import sys

if __name__ == '__main__':
    from sphinx.apidoc import main
    sys.exit(main(sys.argv))
@@ -4,8 +4,8 @@
     Sphinx - Python documentation toolchain
     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-    :copyright: 2007-2010 by Georg Brandl.
-    :license: BSD.
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
 """
 
 import sys
@@ -4,7 +4,7 @@
     Sphinx - Python documentation toolchain
     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
     :license: BSD, see LICENSE for details.
 """
 
@@ -4,7 +4,7 @@
     Sphinx - Python documentation toolchain
     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
     :license: BSD, see LICENSE for details.
 """
 
@@ -5,14 +5,17 @@
 
     The Sphinx documentation toolchain.
 
-    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
     :license: BSD, see LICENSE for details.
 """
 
+# Keep this file executable as-is in Python 3!
+# (Otherwise getting the version out of it from setup.py is impossible.)
+
 import sys
 from os import path
 
 __version__ = '1.1pre'
 __released__ = '1.1 (hg)'  # used when Sphinx builds its own docs
 
 package_dir = path.abspath(path.dirname(__file__))
@@ -34,14 +37,16 @@ if '+' in __version__ or 'pre' in __version__:
 
 
 def main(argv=sys.argv):
+    """Sphinx build "main" command-line entry."""
     if sys.version_info[:3] < (2, 4, 0):
-        print >>sys.stderr, \
-            'Error: Sphinx requires at least Python 2.4 to run.'
+        sys.stderr.write('Error: Sphinx requires at least '
+                         'Python 2.4 to run.\n')
         return 1
 
     try:
         from sphinx import cmdline
-    except ImportError, err:
+    except ImportError:
+        err = sys.exc_info()[1]
         errstr = str(err)
         if errstr.lower().startswith('no module named'):
             whichmod = errstr[16:]
@@ -54,14 +59,14 @@ def main(argv=sys.argv):
                 whichmod = 'roman module (which is distributed with Docutils)'
                 hint = ('This can happen if you upgraded docutils using\n'
                         'easy_install without uninstalling the old version'
-                        'first.')
+                        'first.\n')
             else:
                 whichmod += ' module'
-            print >>sys.stderr, ('Error: The %s cannot be found. '
-                                 'Did you install Sphinx and its dependencies '
-                                 'correctly?' % whichmod)
+            sys.stderr.write('Error: The %s cannot be found. '
+                             'Did you install Sphinx and its dependencies '
+                             'correctly?\n' % whichmod)
             if hint:
-                print >> sys.stderr, hint
+                sys.stderr.write(hint)
             return 1
         raise
     return cmdline.main(argv)
@@ -5,109 +5,177 @@
 
     Additional docutils nodes.
 
-    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
     :license: BSD, see LICENSE for details.
 """
 
 from docutils import nodes
 
-# index markup
-class index(nodes.Invisible, nodes.Inline, nodes.TextElement): pass
+class toctree(nodes.General, nodes.Element):
+    """Node for inserting a "TOC tree"."""
 
 
 # domain-specific object descriptions (class, function etc.)
 
-# parent node for signature and content
-class desc(nodes.Admonition, nodes.Element): pass
+class desc(nodes.Admonition, nodes.Element):
+    """Node for object descriptions.
 
-# additional name parts (module name, class name)
-class desc_addname(nodes.Part, nodes.Inline, nodes.TextElement): pass
+    This node is similar to a "definition list" with one definition. It
+    contains one or more ``desc_signature`` and a ``desc_content``.
+    """
 
+class desc_signature(nodes.Part, nodes.Inline, nodes.TextElement):
+    """Node for object signatures.
+
+    The "term" part of the custom Sphinx definition list.
+    """
+
+
+# nodes to use within a desc_signature
+
+class desc_addname(nodes.Part, nodes.Inline, nodes.TextElement):
+    """Node for additional name parts (module name, class name)."""
 # compatibility alias
 desc_classname = desc_addname
-# return type (C); object type
-class desc_type(nodes.Part, nodes.Inline, nodes.TextElement): pass
-# -> annotation (Python)
 
+class desc_type(nodes.Part, nodes.Inline, nodes.TextElement):
+    """Node for return types or object type names."""
 
 class desc_returns(desc_type):
+    """Node for a "returns" annotation (a la -> in Python)."""
     def astext(self):
         return ' -> ' + nodes.TextElement.astext(self)
-# main name of object
-class desc_name(nodes.Part, nodes.Inline, nodes.TextElement): pass
-# argument list
-class desc_signature(nodes.Part, nodes.Inline, nodes.TextElement): pass
 
+class desc_name(nodes.Part, nodes.Inline, nodes.TextElement):
+    """Node for the main object name."""
 
 class desc_parameterlist(nodes.Part, nodes.Inline, nodes.TextElement):
+    """Node for a general parameter list."""
     child_text_separator = ', '
-class desc_parameter(nodes.Part, nodes.Inline, nodes.TextElement): pass
 
+class desc_parameter(nodes.Part, nodes.Inline, nodes.TextElement):
+    """Node for a single parameter."""
 
 class desc_optional(nodes.Part, nodes.Inline, nodes.TextElement):
+    """Node for marking optional parts of the parameter list."""
     child_text_separator = ', '
     def astext(self):
         return '[' + nodes.TextElement.astext(self) + ']'
-# annotation (not Python 3-style annotations)
-class desc_annotation(nodes.Part, nodes.Inline, nodes.TextElement): pass
 
-# node for content
-class desc_content(nodes.General, nodes.Element): pass
+class desc_annotation(nodes.Part, nodes.Inline, nodes.TextElement):
+    """Node for signature annotations (not Python 3-style annotations)."""
 
-# \versionadded, \versionchanged, \deprecated
-class versionmodified(nodes.Admonition, nodes.TextElement): pass
+class desc_content(nodes.General, nodes.Element):
+    """Node for object description content.
 
-# seealso
-class seealso(nodes.Admonition, nodes.Element): pass
+    This is the "definition" part of the custom Sphinx definition list.
+    """
 
-# productionlist
-class productionlist(nodes.Admonition, nodes.Element): pass
-class production(nodes.Part, nodes.Inline, nodes.TextElement): pass
 
-# toc tree
-class toctree(nodes.General, nodes.Element): pass
+# new admonition-like constructs
 
-# centered
-class centered(nodes.Part, nodes.Element): pass
+class versionmodified(nodes.Admonition, nodes.TextElement):
+    """Node for version change entries.
 
-# pending xref
-class pending_xref(nodes.Inline, nodes.Element): pass
+    Currently used for "versionadded", "versionchanged" and "deprecated"
+    directives.
+    """
 
-# compact paragraph -- never makes a <p>
-class compact_paragraph(nodes.paragraph): pass
+class seealso(nodes.Admonition, nodes.Element):
+    """Custom "see also" admonition."""
 
-# reference to a file to download
-class download_reference(nodes.reference): pass
+class productionlist(nodes.Admonition, nodes.Element):
+    """Node for grammar production lists.
 
-# for the ACKS list
-class acks(nodes.Element): pass
+    Contains ``production`` nodes.
+    """
 
-# for horizontal lists
-class hlist(nodes.Element): pass
-class hlistcol(nodes.Element): pass
+class production(nodes.Part, nodes.Inline, nodes.TextElement):
+    """Node for a single grammar production rule."""
 
-# sets the highlighting language for literal blocks
-class highlightlang(nodes.Element): pass
 
-# like emphasis, but doesn't apply further text processors, e.g. smartypants
-class literal_emphasis(nodes.emphasis): pass
+# other directive-level nodes
 
-# for abbreviations (with explanations)
-class abbreviation(nodes.Inline, nodes.TextElement): pass
+class index(nodes.Invisible, nodes.Inline, nodes.TextElement):
+    """Node for index entries.
 
-# glossary
-class glossary(nodes.Element): pass
+    This node is created by the ``index`` directive and has one attribute,
+    ``entries``. Its value is a list of 4-tuples of ``(entrytype, entryname,
+    target, ignored)``.
 
-# start of a file, used in the LaTeX builder only
-class start_of_file(nodes.Element): pass
+    *entrytype* is one of "single", "pair", "double", "triple".
+    """
 
-# tabular column specification, used for the LaTeX writer
-class tabular_col_spec(nodes.Element): pass
+class centered(nodes.Part, nodes.Element):
+    """Deprecated."""
 
-# only (in/exclusion based on tags)
-class only(nodes.Element): pass
+class acks(nodes.Element):
+    """Special node for "acks" lists."""
 
-# meta directive -- same as docutils' standard meta node, but pickleable
-class meta(nodes.Special, nodes.PreBibliographic, nodes.Element): pass
+class hlist(nodes.Element):
+    """Node for "horizontal lists", i.e. lists that should be compressed to
+    take up less vertical space.
+    """
 
-# make them known to docutils. this is needed, because the HTML writer
-# will choke at some point if these are not added
-nodes._add_node_class_names("""index desc desc_content desc_signature
-    desc_type desc_returns desc_addname desc_name desc_parameterlist
-    desc_parameter desc_optional download_reference hlist hlistcol
-    centered versionmodified seealso productionlist production toctree
-    pending_xref compact_paragraph highlightlang literal_emphasis
-    abbreviation glossary acks module start_of_file tabular_col_spec
-    meta""".split())
+class hlistcol(nodes.Element):
+    """Node for one column in a horizontal list."""
+
+class compact_paragraph(nodes.paragraph):
+    """Node for a compact paragraph (which never makes a <p> node)."""
+
+class glossary(nodes.Element):
+    """Node to insert a glossary."""
+
+class only(nodes.Element):
+    """Node for "only" directives (conditional inclusion based on tags)."""
+
+
+# meta-information nodes
+
+class start_of_file(nodes.Element):
+    """Node to mark start of a new file, used in the LaTeX builder only."""
+
+class highlightlang(nodes.Element):
+    """Inserted to set the highlight language and line number options for
+    subsequent code blocks.
+    """
+
+class tabular_col_spec(nodes.Element):
+    """Node for specifying tabular columns, used for LaTeX output."""
+
+class meta(nodes.Special, nodes.PreBibliographic, nodes.Element):
+    """Node for meta directive -- same as docutils' standard meta node,
+    but pickleable.
+    """
+
+
+# inline nodes
+
+class pending_xref(nodes.Inline, nodes.Element):
+    """Node for cross-references that cannot be resolved without complete
+    information about all documents.
+
+    These nodes are resolved before writing output, in
+    BuildEnvironment.resolve_references.
+    """
+
+class download_reference(nodes.reference):
+    """Node for download references, similar to pending_xref."""
+
+class literal_emphasis(nodes.emphasis):
+    """Node that behaves like `emphasis`, but further text processors are not
+    applied (e.g. smartypants for HTML output).
+    """
+
+class abbreviation(nodes.Inline, nodes.TextElement):
+    """Node for abbreviations with explanations."""
+
+class termsep(nodes.Structural, nodes.Element):
+    """Separates two terms within a <term> node."""
+
+
+# make the new nodes known to docutils; needed because the HTML writer will
+# choke at some point if these are not added
+nodes._add_node_class_names(k for k in globals().keys()
+                            if k != 'nodes' and k[0] != '_')
263 sphinx/apidoc.py Normal file
@@ -0,0 +1,263 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
"""
|
||||
sphinx.apidoc
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Parses a directory tree looking for Python modules and packages and creates
|
||||
ReST files appropriately to create code documentation with Sphinx. It also
|
||||
creates a modules index (named modules.<suffix>).
|
||||
|
||||
:copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
|
||||
:license: BSD, see LICENSE for details.
|
||||
"""
|
||||
import os
|
||||
import sys
|
||||
import optparse
|
||||
from os import path
|
||||
|
||||
# automodule options
|
||||
OPTIONS = [
|
||||
'members',
|
||||
'undoc-members',
|
||||
# 'inherited-members', # disabled because there's a bug in sphinx
|
||||
'show-inheritance',
|
||||
]
|
||||
|
||||
INITPY = '__init__.py'
|
||||
|
||||
def makename(package, module):
|
||||
"""Join package and module with a dot."""
|
||||
# Both package and module can be None/empty.
|
||||
if package:
|
||||
name = package
|
||||
if module:
|
||||
name += '.' + module
|
||||
else:
|
||||
name = module
|
||||
return name
|
||||
|
||||
def write_file(name, text, opts):
|
||||
"""Write the output file for module/package <name>."""
|
||||
fname = path.join(opts.destdir, "%s.%s" % (name, opts.suffix))
|
||||
if opts.dryrun:
|
||||
print 'Would create file %s.' % fname
|
||||
return
|
||||
if not opts.force and path.isfile(fname):
|
||||
print 'File %s already exists, skipping.' % fname
|
||||
else:
|
||||
print 'Creating file %s.' % fname
|
||||
f = open(fname, 'w')
|
||||
try:
|
||||
f.write(text)
|
||||
finally:
|
||||
f.close()
|
||||
|
||||
def format_heading(level, text):
|
||||
"""Create a heading of <level> [1, 2 or 3 supported]."""
|
||||
underlining = ['=', '-', '~', ][level-1] * len(text)
|
||||
return '%s\n%s\n\n' % (text, underlining)
|
||||
|
||||
def format_directive(module, package=None):
|
||||
"""Create the automodule directive and add the options."""
|
||||
directive = '.. automodule:: %s\n' % makename(package, module)
|
||||
for option in OPTIONS:
|
||||
directive += ' :%s:\n' % option
|
||||
return directive
|
||||
|
||||
def create_module_file(package, module, opts):
|
||||
"""Build the text of the file and write the file."""
|
||||
text = format_heading(1, '%s Module' % module)
|
||||
#text += format_heading(2, ':mod:`%s` Module' % module)
|
||||
text += format_directive(module, package)
|
||||
write_file(makename(package, module), text, opts)
|
||||
|
||||
def create_package_file(root, master_package, subroot, py_files, opts, subs):
|
||||
"""Build the text of the file and write the file."""
|
||||
package = path.split(root)[-1]
|
||||
text = format_heading(1, '%s Package' % package)
|
||||
# add each module in the package
|
||||
for py_file in py_files:
|
||||
if shall_skip(path.join(root, py_file)):
|
||||
continue
|
||||
is_package = py_file == INITPY
|
||||
py_file = path.splitext(py_file)[0]
|
||||
py_path = makename(subroot, py_file)
|
||||
if is_package:
|
||||
heading = ':mod:`%s` Package' % package
|
||||
else:
|
||||
heading = ':mod:`%s` Module' % py_file
|
||||
text += format_heading(2, heading)
|
||||
text += format_directive(is_package and subroot or py_path,
|
||||
master_package)
|
||||
text += '\n'
|
||||
|
||||
# build a list of directories that are packages (contain an INITPY file)
|
||||
subs = [sub for sub in subs if path.isfile(path.join(root, sub, INITPY))]
|
||||
# if there are some package directories, add a TOC for theses subpackages
|
||||
if subs:
|
||||
text += format_heading(2, 'Subpackages')
|
||||
text += '.. toctree::\n\n'
|
||||
for sub in subs:
|
||||
text += ' %s.%s\n' % (makename(master_package, subroot), sub)
|
||||
text += '\n'
|
||||
|
||||
write_file(makename(master_package, subroot), text, opts)
|
||||
|
||||
def create_modules_toc_file(master_package, modules, opts, name='modules'):
|
||||
"""
|
||||
Create the module's index.
|
||||
"""
|
||||
text = format_heading(1, '%s Modules' % opts.header)
|
||||
text += '.. toctree::\n'
|
||||
text += ' :maxdepth: %s\n\n' % opts.maxdepth
|
||||
|
||||
modules.sort()
|
||||
prev_module = ''
|
||||
for module in modules:
|
||||
# look if the module is a subpackage and, if yes, ignore it
|
||||
if module.startswith(prev_module + '.'):
|
||||
continue
|
||||
prev_module = module
|
||||
text += ' %s\n' % module
|
||||
|
||||
write_file(name, text, opts)
|
||||
|
||||
def shall_skip(module):
|
||||
"""
|
||||
Check if we want to skip this module.
|
||||
"""
|
||||
# skip it, if there is nothing (or just \n or \r\n) in the file
|
||||
return path.getsize(module) < 3
|
||||
|

def recurse_tree(rootpath, excludes, opts):
    """
    Look for every file in the directory tree and create the corresponding
    ReST files.
    """
    # check if the base directory is a package and get its name
    if INITPY in os.listdir(rootpath):
        package_name = path.abspath(rootpath).split(path.sep)[-1]
    else:
        package_name = None

    toc = []
    tree = os.walk(rootpath, False)
    for root, subs, files in tree:
        # keep only the Python script files
        py_files = sorted([f for f in files if path.splitext(f)[1] == '.py'])
        if INITPY in py_files:
            py_files.remove(INITPY)
            py_files.insert(0, INITPY)
        # remove hidden ('.') and private ('_') directories
        subs = sorted([sub for sub in subs if sub[0] not in ['.', '_']])
        # check if there are valid files to process
        # TODO: could add check for windows hidden files
        if "/." in root or "/_" in root \
               or not py_files \
               or is_excluded(root, excludes):
            continue
        if INITPY in py_files:
            # we are in a package ...
            if (# ... with subpackage(s)
                subs
                or
                # ... with some module(s)
                len(py_files) > 1
                or
                # ... with a not-to-be-skipped INITPY file
                not shall_skip(path.join(root, INITPY))
               ):
                subroot = root[len(rootpath):].lstrip(path.sep).\
                          replace(path.sep, '.')
                create_package_file(root, package_name, subroot,
                                    py_files, opts, subs)
                toc.append(makename(package_name, subroot))
        elif root == rootpath:
            # if we are at the root level, we don't require it to be a package
            for py_file in py_files:
                if not shall_skip(path.join(rootpath, py_file)):
                    module = path.splitext(py_file)[0]
                    create_module_file(package_name, module, opts)
                    toc.append(makename(package_name, module))

    # create the module's index
    if not opts.notoc:
        create_modules_toc_file(package_name, toc, opts)

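recurse_tree sorts the file list and then forces INITPY to the front so the package file is handled before its modules; the reordering in isolation (file names are made up):

```python
from os import path

INITPY = '__init__.py'
files = ['zmod.py', 'Amod.py', '__init__.py', 'README.txt']
# keep only the Python script files, sorted
py_files = sorted([f for f in files if path.splitext(f)[1] == '.py'])
if INITPY in py_files:
    py_files.remove(INITPY)
    py_files.insert(0, INITPY)
print(py_files)
```

`Amod.py` sorts before `__init__.py` ('A' < '_'), so the explicit move is what guarantees INITPY comes first.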

def normalize_excludes(rootpath, excludes):
    """
    Normalize the excluded directory list:
    * must be either an absolute path or start with rootpath,
    * otherwise it is joined with rootpath
    * with trailing slash
    """
    sep = path.sep
    f_excludes = []
    for exclude in excludes:
        if not path.isabs(exclude) and not exclude.startswith(rootpath):
            exclude = path.join(rootpath, exclude)
        if not exclude.endswith(sep):
            exclude += sep
        f_excludes.append(exclude)
    return f_excludes


def is_excluded(root, excludes):
    """
    Check if the directory is in the exclude list.

    Note: by having trailing slashes, we avoid common prefix issues, like
    e.g. an exclude "foo" also accidentally excluding "foobar".
    """
    sep = path.sep
    if not root.endswith(sep):
        root += sep
    for exclude in excludes:
        if root.startswith(exclude):
            return True
    return False

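The docstring's point about trailing slashes can be exercised directly; a sketch with '/' hardcoded in place of os.path.sep, on hypothetical paths:

```python
def is_excluded(root, excludes):
    # excludes are assumed normalized to end with a slash
    if not root.endswith('/'):
        root += '/'
    for exclude in excludes:
        if root.startswith(exclude):
            return True
    return False

excludes = ['/src/foo/']
print(is_excluded('/src/foo', excludes))      # the directory itself
print(is_excluded('/src/foo/bar', excludes))  # a subdirectory
print(is_excluded('/src/foobar', excludes))   # common prefix only
```

Without the appended slash, `/src/foobar` would wrongly match the `/src/foo` prefix.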

def main(argv=sys.argv):
    """
    Parse and check the command line arguments.
    """
    parser = optparse.OptionParser(
        usage="""\
usage: %prog [options] -o <output_path> <module_path> [exclude_paths, ...]

Look recursively in <module_path> for Python modules and packages and create
a reST file with automodule directives per package in the <output_path>.

Note: By default this script will not overwrite already created files.""")
    parser.add_option('-o', '--output-dir', action='store', dest='destdir',
                      help='Directory to place all output', default='')
    parser.add_option('-d', '--maxdepth', action='store', dest='maxdepth',
                      help='Maximum depth of submodules to show in the TOC '
                      '(default: 4)', type='int', default=4)
    parser.add_option('-f', '--force', action='store_true', dest='force',
                      help='Overwrite all the files')
    parser.add_option('-n', '--dry-run', action='store_true', dest='dryrun',
                      help='Run the script without creating the files')
    parser.add_option('-T', '--no-toc', action='store_true', dest='notoc',
                      help='Don\'t create the table of contents file')
    parser.add_option('-H', '--doc-header', action='store', dest='header',
                      help='Documentation Header (default: Project)',
                      default='Project')
    parser.add_option('-s', '--suffix', action='store', dest='suffix',
                      help='file suffix (default: rst)', default='rst')

    (opts, args) = parser.parse_args(argv[1:])

    if not args:
        parser.error('A package path is required.')
    if not opts.destdir:
        parser.error('An output directory is required.')
    rootpath, excludes = args[0], args[1:]
    if not path.isdir(rootpath):
        print >>sys.stderr, '%s is not a directory.' % rootpath
        sys.exit(1)
    if not path.isdir(opts.destdir):
        print >>sys.stderr, '%s is not a valid output directory.' % opts.destdir
        sys.exit(1)
    excludes = normalize_excludes(rootpath, excludes)
    recurse_tree(rootpath, excludes, opts)
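main() leaves the positional arguments in args, so the first is the package root and the rest are exclude paths; a reduced optparse round-trip showing that split (option set trimmed to -o, paths invented):

```python
import optparse

parser = optparse.OptionParser()
parser.add_option('-o', '--output-dir', action='store', dest='destdir',
                  default='')
# simulate: python apidoc.py -o docs/api mypackage mypackage/tests
opts, args = parser.parse_args(
    ['-o', 'docs/api', 'mypackage', 'mypackage/tests'])
print(opts.destdir, args[0], args[1:])
```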
@@ -7,7 +7,7 @@

     Gracefully adapted from the TextPress system by Armin.

-    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
     :license: BSD, see LICENSE for details.
 """

@@ -37,12 +37,10 @@ from sphinx.util.osutil import ENOENT
 from sphinx.util.console import bold


-# Directive is either new-style or old-style
-clstypes = (type, types.ClassType)
-
 # List of all known core events. Maps name to arguments description.
 events = {
     'builder-inited': '',
     'env-get-outdated': 'env, added, changed, removed',
     'env-purge-doc': 'env, docname',
     'source-read': 'docname, source text',
     'doctree-read': 'the doctree before being pickled',
@@ -136,9 +134,8 @@ class Sphinx(object):
         self._init_builder(buildername)

     def _init_i18n(self):
-        """
-        Load translated strings from the configured localedirs if
-        enabled in the configuration.
+        """Load translated strings from the configured localedirs if enabled in
+        the configuration.
         """
         if self.config.language is not None:
             self.info(bold('loading translations [%s]... ' %
@@ -213,6 +210,12 @@ class Sphinx(object):
         self.builder.cleanup()

     def warn(self, message, location=None, prefix='WARNING: '):
+        if isinstance(location, tuple):
+            docname, lineno = location
+            if docname:
+                location = '%s:%s' % (self.env.doc2path(docname), lineno or '')
+            else:
+                location = None
         warntext = location and '%s: %s%s\n' % (location, prefix, message) or \
                    '%s%s\n' % (prefix, message)
         if self.warningiserror:
@@ -362,6 +365,9 @@ class Sphinx(object):
             elif key == 'man':
                 from sphinx.writers.manpage import ManualPageTranslator \
                     as translator
+            elif key == 'texinfo':
+                from sphinx.writers.texinfo import TexinfoTranslator \
+                    as translator
             else:
                 # ignore invalid keys for compatibility
                 continue
@@ -426,13 +432,15 @@ class Sphinx(object):
         setattr(self.domains[domain], 'get_%s_index' % name, func)

     def add_object_type(self, directivename, rolename, indextemplate='',
-                        parse_node=None, ref_nodeclass=None, objname=''):
+                        parse_node=None, ref_nodeclass=None, objname='',
+                        doc_field_types=[]):
         StandardDomain.object_types[directivename] = \
             ObjType(objname or directivename, rolename)
         # create a subclass of GenericObject as the new directive
         new_directive = type(directivename, (GenericObject, object),
                              {'indextemplate': indextemplate,
-                              'parse_node': staticmethod(parse_node)})
+                              'parse_node': staticmethod(parse_node),
+                              'doc_field_types': doc_field_types})
         StandardDomain.directives[directivename] = new_directive
         # XXX support more options?
         StandardDomain.roles[rolename] = XRefRole(innernodeclass=ref_nodeclass)
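add_object_type builds the new directive class on the fly with type(); a standalone sketch of that pattern, with a stub standing in for Sphinx's GenericObject base class:

```python
class GenericObject(object):  # stub for sphinx.directives.GenericObject
    pass

def make_object_type(directivename, indextemplate='', parse_node=None,
                     doc_field_types=[]):
    # mirror of the type() call above: a fresh subclass carrying the
    # per-object-type attributes as class members
    return type(directivename, (GenericObject, object),
                {'indextemplate': indextemplate,
                 'parse_node': staticmethod(parse_node),
                 'doc_field_types': doc_field_types})

envvar = make_object_type('envvar', 'pair: %s; environment variable')
print(envvar.__name__)
print(envvar.indextemplate)
```

The new doc_field_types parameter simply rides along into the class dict, which is why the change is a two-line diff.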
@@ -456,8 +464,11 @@ class Sphinx(object):

     def add_javascript(self, filename):
         from sphinx.builders.html import StandaloneHTMLBuilder
-        StandaloneHTMLBuilder.script_files.append(
-            posixpath.join('_static', filename))
+        if '://' in filename:
+            StandaloneHTMLBuilder.script_files.append(filename)
+        else:
+            StandaloneHTMLBuilder.script_files.append(
+                posixpath.join('_static', filename))

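The new add_javascript branch keeps full URLs untouched and maps bare filenames into _static/; the dispatch in isolation (a module-level list stands in for StandaloneHTMLBuilder.script_files):

```python
import posixpath

script_files = []

def add_javascript(filename):
    # full URLs are referenced as-is; bare names are served from _static/
    if '://' in filename:
        script_files.append(filename)
    else:
        script_files.append(posixpath.join('_static', filename))

add_javascript('custom.js')
add_javascript('https://example.org/lib.js')
print(script_files)
```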
     def add_stylesheet(self, filename):
         from sphinx.builders.html import StandaloneHTMLBuilder
@@ -479,6 +490,11 @@ class Sphinx(object):
         from sphinx.ext import autodoc
         autodoc.AutoDirective._special_attrgetters[type] = getter

+    def add_search_language(self, cls):
+        from sphinx.search import languages, SearchLanguage
+        assert isinstance(cls, SearchLanguage)
+        languages[cls.lang] = cls
+

 class TemplateBridge(object):
     """
@@ -487,8 +503,7 @@ class TemplateBridge(object):
     """

     def init(self, builder, theme=None, dirs=None):
-        """
-        Called by the builder to initialize the template system.
+        """Called by the builder to initialize the template system.

         *builder* is the builder object; you'll probably want to look at the
         value of ``builder.config.templates_path``.
@@ -499,23 +514,20 @@ class TemplateBridge(object):
         raise NotImplementedError('must be implemented in subclasses')

     def newest_template_mtime(self):
-        """
-        Called by the builder to determine if output files are outdated
+        """Called by the builder to determine if output files are outdated
         because of template changes.  Return the mtime of the newest template
         file that was changed.  The default implementation returns ``0``.
         """
         return 0

     def render(self, template, context):
-        """
-        Called by the builder to render a template given as a filename with a
-        specified context (a Python dictionary).
+        """Called by the builder to render a template given as a filename with
+        a specified context (a Python dictionary).
         """
         raise NotImplementedError('must be implemented in subclasses')

     def render_string(self, template, context):
-        """
-        Called by the builder to render a template given as a string with a
+        """Called by the builder to render a template given as a string with a
         specified context (a Python dictionary).
         """
         raise NotImplementedError('must be implemented in subclasses')
@@ -5,7 +5,7 @@

     Builder superclass for all builders.

-    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
     :license: BSD, see LICENSE for details.
 """

@@ -31,9 +31,12 @@ class Builder(object):
     name = ''
     # builder's output format, or '' if no document output is produced
     format = ''
+    # doctree versioning method
+    versioning_method = 'none'

     def __init__(self, app):
         self.env = app.env
+        self.env.set_versioning_method(self.versioning_method)
         self.srcdir = app.srcdir
         self.confdir = app.confdir
         self.outdir = app.outdir
@@ -55,16 +58,13 @@ class Builder(object):

     # helper methods
     def init(self):
-        """
-        Load necessary templates and perform initialization.  The default
+        """Load necessary templates and perform initialization.  The default
         implementation does nothing.
         """
         pass

     def create_template_bridge(self):
-        """
-        Return the template bridge configured.
-        """
+        """Return the template bridge configured."""
         if self.config.template_bridge:
             self.templates = self.app.import_object(
                 self.config.template_bridge, 'template_bridge setting')()
@@ -73,23 +73,23 @@ class Builder(object):
             self.templates = BuiltinTemplateLoader()

     def get_target_uri(self, docname, typ=None):
-        """
-        Return the target URI for a document name (*typ* can be used to qualify
-        the link characteristic for individual builders).
+        """Return the target URI for a document name.
+
+        *typ* can be used to qualify the link characteristic for individual
+        builders.
         """
         raise NotImplementedError

     def get_relative_uri(self, from_, to, typ=None):
-        """
-        Return a relative URI between two source filenames.  May raise
-        environment.NoUri if there's no way to return a sensible URI.
+        """Return a relative URI between two source filenames.
+
+        May raise environment.NoUri if there's no way to return a sensible URI.
         """
         return relative_uri(self.get_target_uri(from_),
                             self.get_target_uri(to, typ))

     def get_outdated_docs(self):
-        """
-        Return an iterable of output files that are outdated, or a string
+        """Return an iterable of output files that are outdated, or a string
         describing what an update build will build.

         If the builder does not output individual files corresponding to
@@ -129,9 +129,7 @@ class Builder(object):
     supported_image_types = []

     def post_process_images(self, doctree):
-        """
-        Pick the best candidate for all image URIs.
-        """
+        """Pick the best candidate for all image URIs."""
         for node in doctree.traverse(nodes.image):
             if '?' in node['candidates']:
                 # don't rewrite nonlocal image URIs
@@ -198,9 +196,9 @@ class Builder(object):
                       'out of date' % len(to_build))

     def build(self, docnames, summary=None, method='update'):
-        """
-        Main build method.  First updates the environment, and then
-        calls :meth:`write`.
+        """Main build method.
+
+        First updates the environment, and then calls :meth:`write`.
         """
         if summary:
             self.info(bold('building [%s]: ' % self.name), nonl=1)
@@ -277,7 +275,8 @@ class Builder(object):
             # add all toctree-containing files that may have changed
             for docname in list(docnames):
                 for tocdocname in self.env.files_to_rebuild.get(docname, []):
-                    docnames.add(tocdocname)
+                    if tocdocname in self.env.found_docs:
+                        docnames.add(tocdocname)
             docnames.add(self.config.master_doc)

         self.info(bold('preparing documents... '), nonl=True)
@@ -302,15 +301,18 @@ class Builder(object):
         raise NotImplementedError

     def finish(self):
-        """
-        Finish the building process.  The default implementation does nothing.
+        """Finish the building process.
+
+        The default implementation does nothing.
         """
         pass

     def cleanup(self):
-        """
-        Cleanup any resources.  The default implementation does nothing.
-        """
+        """Cleanup any resources.
+
+        The default implementation does nothing.
+        """
         pass


 BUILTIN_BUILDERS = {
@@ -327,6 +329,9 @@ BUILTIN_BUILDERS = {
     'latex': ('latex', 'LaTeXBuilder'),
     'text': ('text', 'TextBuilder'),
     'man': ('manpage', 'ManualPageBuilder'),
+    'texinfo': ('texinfo', 'TexinfoBuilder'),
     'changes': ('changes', 'ChangesBuilder'),
     'linkcheck': ('linkcheck', 'CheckExternalLinksBuilder'),
+    'websupport': ('websupport', 'WebSupportBuilder'),
+    'gettext': ('gettext', 'MessageCatalogBuilder'),
 }
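get_relative_uri links two output documents by delegating to relative_uri from sphinx.util.osutil; a hypothetical reimplementation of what such a helper computes, for illustration only (not Sphinx's actual code):

```python
def relative_uri(base, to):
    """Compute a URI relative to *base* that points at *to* (sketch)."""
    b2 = base.split('/')
    t2 = to.split('/')
    # drop the common leading path segments
    while b2 and t2 and b2[0] == t2[0]:
        b2.pop(0)
        t2.pop(0)
    # climb out of base's remaining directories, then descend into to's
    return '../' * (len(b2) - 1) + '/'.join(t2)

print(relative_uri('a/b/c.html', 'a/d.html'))
print(relative_uri('index.html', 'sub/page.html'))
```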
@@ -5,7 +5,7 @@

     Changelog builder.

-    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
     :license: BSD, see LICENSE for details.
 """

@@ -30,7 +30,8 @@ class ChangesBuilder(Builder):

     def init(self):
         self.create_template_bridge()
-        Theme.init_themes(self)
+        Theme.init_themes(self.confdir, self.config.html_theme_path,
+                          warn=self.warn)
         self.theme = Theme('default')
         self.templates.init(self, self.theme)

@@ -7,7 +7,7 @@

     .. _Devhelp: http://live.gnome.org/devhelp

-    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
     :license: BSD, see LICENSE for details.
 """

@@ -42,7 +42,6 @@ except ImportError:
 class DevhelpBuilder(StandaloneHTMLBuilder):
     """
     Builder that also outputs GNOME Devhelp file.
-
     """
     name = 'devhelp'

@@ -6,14 +6,15 @@
     Build epub files.
     Originally derived from qthelp.py.

-    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
     :license: BSD, see LICENSE for details.
 """

 import os
 import re
-import codecs
+import sys
 import time
+import codecs
 import zipfile
 from os import path

@@ -147,7 +148,8 @@ _refuri_re = re.compile("([^#:]*#)(.*)")
 # The epub publisher

 class EpubBuilder(StandaloneHTMLBuilder):
-    """Builder that outputs epub files.
+    """
+    Builder that outputs epub files.

     It creates the metainfo files container.opf, toc.ncx, mimetype, and
     META-INF/container.xml.  Afterwards, all necessary files are zipped to an
@@ -186,7 +188,7 @@ class EpubBuilder(StandaloneHTMLBuilder):
         name = name.replace('<', '&lt;')
         name = name.replace('>', '&gt;')
         name = name.replace('"', '&quot;')
-        name = name.replace('\'', '&#39;')
+        name = name.replace('\'', '&#39;')
         return name

     def get_refnodes(self, doctree, result):
@@ -244,12 +246,12 @@ class EpubBuilder(StandaloneHTMLBuilder):
             })

     def fix_fragment(self, prefix, fragment):
-        """Return a href/id attribute with colons replaced by hyphens.
-        """
+        """Return a href/id attribute with colons replaced by hyphens."""
         return prefix + fragment.replace(':', '-')

     def fix_ids(self, tree):
         """Replace colons with hyphens in href and id attributes.
+
         Some readers crash because they interpret the part as a
         transport protocol specification.
         """
@@ -268,8 +270,7 @@ class EpubBuilder(StandaloneHTMLBuilder):
                 node.attributes['ids'] = newids

     def add_visible_links(self, tree):
-        """Append visible link targets after external links.
-        """
+        """Append visible link targets after external links."""
         for node in tree.traverse(nodes.reference):
             uri = node.get('refuri', '')
             if (uri.startswith('http:') or uri.startswith('https:') or
@@ -283,6 +284,7 @@ class EpubBuilder(StandaloneHTMLBuilder):

     def write_doc(self, docname, doctree):
         """Write one document file.
+
         This method is overwritten in order to fix fragment identifiers
         and to add visible external links.
         """
@@ -291,22 +293,22 @@ class EpubBuilder(StandaloneHTMLBuilder):
         return StandaloneHTMLBuilder.write_doc(self, docname, doctree)

     def fix_genindex(self, tree):
-        """Fix href attributes for genindex pages.
-        """
+        """Fix href attributes for genindex pages."""
         # XXX: modifies tree inline
         # Logic modeled from themes/basic/genindex.html
         for key, columns in tree:
             for entryname, (links, subitems) in columns:
-                for (i, link) in enumerate(links):
+                for (i, (ismain, link)) in enumerate(links):
                     m = _refuri_re.match(link)
                     if m:
-                        links[i] = self.fix_fragment(m.group(1), m.group(2))
+                        links[i] = (ismain,
+                                    self.fix_fragment(m.group(1), m.group(2)))
                 for subentryname, subentrylinks in subitems:
-                    for (i, link) in enumerate(subentrylinks):
+                    for (i, (ismain, link)) in enumerate(subentrylinks):
                         m = _refuri_re.match(link)
                         if m:
-                            subentrylinks[i] = \
-                                self.fix_fragment(m.group(1), m.group(2))
+                            subentrylinks[i] = (ismain,
+                                self.fix_fragment(m.group(1), m.group(2)))

     def copy_image_files_pil(self):
         """Copy images using the PIL.
@@ -362,8 +364,9 @@ class EpubBuilder(StandaloneHTMLBuilder):
     def handle_page(self, pagename, addctx, templatename='page.html',
                     outfilename=None, event_arg=None):
         """Create a rendered page.
-        This method is overwritten for genindex pages in order to fix
-        href link attributes.
+
+        This method is overwritten for genindex pages in order to fix href link
+        attributes.
         """
         if pagename.startswith('genindex'):
             self.fix_genindex(addctx['genindexentries'])
@@ -465,6 +468,14 @@ class EpubBuilder(StandaloneHTMLBuilder):
             spine.append(_spine_template % {
                 'idref': self.esc(self.make_id(item['refuri']))
             })
+        for info in self.domain_indices:
+            spine.append(_spine_template % {
+                'idref': self.esc(self.make_id(info[0] + self.out_suffix))
+            })
+        if self.config.html_use_index:
+            spine.append(_spine_template % {
+                'idref': self.esc(self.make_id('genindex' + self.out_suffix))
+            })

         # add the optional cover
         content_tmpl = _content_template
@@ -513,6 +524,7 @@ class EpubBuilder(StandaloneHTMLBuilder):

     def insert_subnav(self, node, subnav):
         """Insert nested navpoints for given node.
+
         The node and subnav are already rendered to text.
         """
         nlist = node.rsplit('\n', 1)
@@ -522,8 +534,8 @@ class EpubBuilder(StandaloneHTMLBuilder):
     def build_navpoints(self, nodes):
         """Create the toc navigation structure.

-        Subelements of a node are nested inside the navpoint.
-        For nested nodes the parent node is reinserted in the subnav.
+        Subelements of a node are nested inside the navpoint.  For nested nodes
+        the parent node is reinserted in the subnav.
         """
         navstack = []
         navlist = []
@@ -563,8 +575,8 @@ class EpubBuilder(StandaloneHTMLBuilder):
         return '\n'.join(navlist)

     def toc_metadata(self, level, navpoints):
-        """Create a dictionary with all metadata for the toc.ncx
-        file properly escaped.
+        """Create a dictionary with all metadata for the toc.ncx file
+        properly escaped.
         """
         metadata = {}
         metadata['uid'] = self.config.epub_uid
@@ -589,8 +601,8 @@ class EpubBuilder(StandaloneHTMLBuilder):
     def build_epub(self, outdir, outname):
         """Write the epub file.

-        It is a zip file with the mimetype file stored uncompressed
-        as the first entry.
+        It is a zip file with the mimetype file stored uncompressed as the first
+        entry.
         """
         self.info('writing %s file...' % outname)
         projectfiles = ['META-INF/container.xml', 'content.opf', 'toc.ncx'] \
@@ -600,7 +612,8 @@ class EpubBuilder(StandaloneHTMLBuilder):
         epub.write(path.join(outdir, 'mimetype'), 'mimetype', \
                    zipfile.ZIP_STORED)
         for file in projectfiles:
-            if isinstance(file, unicode):
-                file = file.encode('utf-8')
-            epub.write(path.join(outdir, file), file, zipfile.ZIP_DEFLATED)
+            fp = path.join(outdir, file)
+            if isinstance(fp, unicode):
+                fp = fp.encode(sys.getfilesystemencoding())
+            epub.write(fp, file, zipfile.ZIP_DEFLATED)
         epub.close()
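fix_fragment and fix_ids exist because a colon inside a fragment (e.g. from a Python domain target) can be misread by some epub readers as a URI scheme separator; the replacement itself is tiny:

```python
def fix_fragment(prefix, fragment):
    """Return a href/id attribute with colons replaced by hyphens."""
    return prefix + fragment.replace(':', '-')

# hypothetical fragment of the kind a domain index generates
print(fix_fragment('page.xhtml#', 'term:config-value'))
```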
 103  sphinx/builders/gettext.py  Normal file
@@ -0,0 +1,103 @@
+# -*- coding: utf-8 -*-
+"""
+    sphinx.builders.gettext
+    ~~~~~~~~~~~~~~~~~~~~~~~
+
+    The MessageCatalogBuilder class.
+
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
+    :license: BSD, see LICENSE for details.
+"""
+
+from os import path
+from codecs import open
+from datetime import datetime
+from collections import defaultdict
+
+from docutils import nodes
+
+from sphinx.builders import Builder
+from sphinx.util.nodes import extract_messages
+from sphinx.util.osutil import SEP, copyfile
+from sphinx.util.console import darkgreen
+
+POHEADER = ur"""
+# SOME DESCRIPTIVE TITLE.
+# Copyright (C) %(copyright)s
+# This file is distributed under the same license as the %(project)s package.
+# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
+#
+#, fuzzy
+msgid ""
+msgstr ""
+"Project-Id-Version: %(version)s\n"
+"Report-Msgid-Bugs-To: \n"
+"POT-Creation-Date: %(ctime)s\n"
+"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
+"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
+"Language-Team: LANGUAGE <LL@li.org>\n"
+"MIME-Version: 1.0\n"
+"Content-Type: text/plain; charset=UTF-8\n"
+"Content-Transfer-Encoding: 8bit\n"
+
+"""[1:]
+
+
+class I18nBuilder(Builder):
+    """
+    General i18n builder.
+    """
+    name = 'i18n'
+    versioning_method = 'text'
+
+    def init(self):
+        Builder.init(self)
+        self.catalogs = defaultdict(dict)
+
+    def get_target_uri(self, docname, typ=None):
+        return ''
+
+    def get_outdated_docs(self):
+        return self.env.found_docs
+
+    def prepare_writing(self, docnames):
+        return
+
+    def write_doc(self, docname, doctree):
+        catalog = self.catalogs[docname.split(SEP, 1)[0]]
+
+        for node, msg in extract_messages(doctree):
+            catalog.setdefault(node.uid, msg)
+
+
+class MessageCatalogBuilder(I18nBuilder):
+    """
+    Builds gettext-style message catalogs (.pot files).
+    """
+    name = 'gettext'
+
+    def finish(self):
+        I18nBuilder.finish(self)
+        data = dict(
+            version = self.config.version,
+            copyright = self.config.copyright,
+            project = self.config.project,
+            # XXX should supply tz
+            ctime = datetime.now().strftime('%Y-%m-%d %H:%M%z'),
+        )
+        for section, messages in self.status_iterator(
+                self.catalogs.iteritems(), "writing message catalogs... ",
+                lambda (section, _): darkgreen(section), len(self.catalogs)):
+
+            pofn = path.join(self.outdir, section + '.pot')
+            pofile = open(pofn, 'w', encoding='utf-8')
+            try:
+                pofile.write(POHEADER % data)
+                for uid, message in messages.iteritems():
+                    # message contains *one* line of text ready for translation
+                    message = message.replace(u'\\', ur'\\'). \
+                              replace(u'"', ur'\"')
+                    pomsg = u'#%s\nmsgid "%s"\nmsgstr ""\n\n' % (uid, message)
+                    pofile.write(pomsg)
+            finally:
+                pofile.close()
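MessageCatalogBuilder escapes backslashes before double quotes when emitting each msgid; the same two-step escape in a Python-3-friendly sketch (the uid and message are made up):

```python
uid = 'abc123'  # hypothetical node uid
message = 'a "quoted" \\ path'
# backslashes first, then quotes, matching the builder's replace chain;
# doing it in the other order would double-escape the added backslashes
escaped = message.replace('\\', '\\\\').replace('"', '\\"')
pomsg = '#%s\nmsgid "%s"\nmsgstr ""\n\n' % (uid, escaped)
print(pomsg)
```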
@@ -5,7 +5,7 @@

     Several HTML builders.

-    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
     :license: BSD, see LICENSE for details.
 """

@@ -35,7 +35,7 @@ from sphinx.util.osutil import SEP, os_path, relative_uri, ensuredir, \
     movefile, ustrftime, copyfile
 from sphinx.util.nodes import inline_all_toctrees
 from sphinx.util.matching import patmatch, compile_matchers
-from sphinx.util.pycompat import any
+from sphinx.util.pycompat import any, b
 from sphinx.errors import SphinxError
 from sphinx.locale import _
 from sphinx.search import js_index
@@ -63,6 +63,7 @@ class StandaloneHTMLBuilder(Builder):
     out_suffix = '.html'
     link_suffix = '.html'  # defaults to matching out_suffix
     indexer_format = js_index
+    indexer_dumps_unicode = True
     supported_image_types = ['image/svg+xml', 'image/png',
                              'image/gif', 'image/jpeg']
     searchindex_filename = 'searchindex.js'
@@ -87,6 +88,8 @@ class StandaloneHTMLBuilder(Builder):
         self.tags_hash = ''
         # section numbers for headings in the currently visited document
         self.secnumbers = {}
+        # currently written docname
+        self.current_docname = None

         self.init_templates()
         self.init_highlighter()
@@ -100,21 +103,28 @@ class StandaloneHTMLBuilder(Builder):
             self.link_suffix = self.out_suffix

         if self.config.language is not None:
-            jsfile_list = [path.join(package_dir, 'locale',
-                self.config.language, 'LC_MESSAGES', 'sphinx.js'),
-                path.join(sys.prefix, 'share/sphinx/locale',
-                    self.config.language, 'sphinx.js')]
-
-            for jsfile in jsfile_list:
-                if path.isfile(jsfile):
-                    self.script_files.append('_static/translations.js')
-                    break
+            if self._get_translations_js():
+                self.script_files.append('_static/translations.js')
+
+    def _get_translations_js(self):
+        candidates = [path.join(package_dir, 'locale', self.config.language,
+                                'LC_MESSAGES', 'sphinx.js'),
+                      path.join(sys.prefix, 'share/sphinx/locale',
+                                self.config.language, 'sphinx.js')] + \
+                     [path.join(dir, self.config.language,
+                                'LC_MESSAGES', 'sphinx.js')
+                      for dir in self.config.locale_dirs]
+        for jsfile in candidates:
+            if path.isfile(jsfile):
+                return jsfile
+        return None

     def get_theme_config(self):
         return self.config.html_theme, self.config.html_theme_options

     def init_templates(self):
-        Theme.init_themes(self)
+        Theme.init_themes(self.confdir, self.config.html_theme_path,
+                          warn=self.warn)
         themename, themeoptions = self.get_theme_config()
         self.theme = Theme(themename)
         self.theme_options = themeoptions.copy()
@@ -146,8 +156,9 @@ class StandaloneHTMLBuilder(Builder):
         cfgdict = dict((name, self.config[name])
                        for (name, desc) in self.config.values.iteritems()
                        if desc[1] == 'html')
-        self.config_hash = md5(str(cfgdict)).hexdigest()
-        self.tags_hash = md5(str(sorted(self.tags))).hexdigest()
+        self.config_hash = md5(unicode(cfgdict).encode('utf-8')).hexdigest()
+        self.tags_hash = md5(unicode(sorted(self.tags)).encode('utf-8')) \
+                             .hexdigest()
         old_config_hash = old_tags_hash = ''
         try:
             fp = open(path.join(self.outdir, '.buildinfo'))
@@ -199,7 +210,7 @@ class StandaloneHTMLBuilder(Builder):
         """Utility: Render a lone doctree node."""
         if node is None:
             return {'fragment': ''}
-        doc = new_document('<partial node>')
+        doc = new_document(b('<partial node>'))
         doc.append(node)

         if self._publisher is None:
@@ -221,10 +232,15 @@ class StandaloneHTMLBuilder(Builder):
         return pub.writer.parts

     def prepare_writing(self, docnames):
-        from sphinx.search import IndexBuilder
-
-        self.indexer = IndexBuilder(self.env)
+        # create the search indexer
+        from sphinx.search import IndexBuilder, languages
+        lang = self.config.html_search_language or self.config.language
+        if not lang or lang not in languages:
+            lang = 'en'
+        self.indexer = IndexBuilder(self.env, lang,
+                                    self.config.html_search_options)
         self.load_indexer(docnames)

         self.docwriter = HTMLWriter(self)
         self.docsettings = OptionParser(
             defaults=self.env.settings,
@@ -371,7 +387,8 @@ class StandaloneHTMLBuilder(Builder):
         meta = self.env.metadata.get(docname)

         # local TOC and global TOC tree
-        toc = self.render_partial(self.env.get_toc_for(docname))['fragment']
+        self_toc = self.env.get_toc_for(docname, self)
+        toc = self.render_partial(self_toc)['fragment']

         return dict(
             parents = parents,
@@ -396,6 +413,7 @@ class StandaloneHTMLBuilder(Builder):
         self.imgpath = relative_uri(self.get_target_uri(docname), '_images')
         self.post_process_images(doctree)
         self.dlpath = relative_uri(self.get_target_uri(docname), '_downloads')
+        self.current_docname = docname
         self.docwriter.write(doctree, destination)
         self.docwriter.assemble_parts()
         body = self.docwriter.parts['fragment']
@@ -523,22 +541,22 @@ class StandaloneHTMLBuilder(Builder):
             f.close()
         # then, copy translations JavaScript file
         if self.config.language is not None:
-            jsfile_list = [path.join(package_dir, 'locale',
-                self.config.language, 'LC_MESSAGES', 'sphinx.js'),
-                path.join(sys.prefix, 'share/sphinx/locale',
-                    self.config.language, 'sphinx.js')]
-            for jsfile in jsfile_list:
-                if path.isfile(jsfile):
-                    copyfile(jsfile, path.join(self.outdir, '_static',
-                                               'translations.js'))
-                    break
+            jsfile = self._get_translations_js()
+            if jsfile:
+                copyfile(jsfile, path.join(self.outdir, '_static',
+                                           'translations.js'))
+
+        # add context items for search function used in searchtools.js_t
+        ctx = self.globalcontext.copy()
+        ctx.update(self.indexer.context_for_searchtool())

         # then, copy over theme-supplied static files
         if self.theme:
             themeentries = [path.join(themepath, 'static')
                             for themepath in self.theme.get_dirchain()[::-1]]
             for entry in themeentries:
                 copy_static_entry(entry, path.join(self.outdir, '_static'),
-                                  self, self.globalcontext)
+                                  self, ctx)
         # then, copy over all user-supplied static files
         staticentries = [path.join(self.confdir, spath)
                         for spath in self.config.html_static_path]
@@ -551,7 +569,7 @@ class StandaloneHTMLBuilder(Builder):
                 self.warn('html_static_path entry %r does not exist' % entry)
                 continue
             copy_static_entry(entry, path.join(self.outdir, '_static'), self,
-                              self.globalcontext, exclude_matchers=matchers)
+                              ctx, exclude_matchers=matchers)
         # copy logo and favicon files if not already in static path
         if self.config.html_logo:
             logobase = path.basename(self.config.html_logo)
@@ -585,8 +603,7 @@ class StandaloneHTMLBuilder(Builder):
             self.theme.cleanup()

     def post_process_images(self, doctree):
-        """
-        Pick the best candidate for an image and link down-scaled images to
+        """Pick the best candidate for an image and link down-scaled images to
         their high res version.
         """
         Builder.post_process_images(self, doctree)
@@ -610,7 +627,11 @@ class StandaloneHTMLBuilder(Builder):
     def load_indexer(self, docnames):
         keep = set(self.env.all_docs) - set(docnames)
         try:
-            f = open(path.join(self.outdir, self.searchindex_filename), 'rb')
+            searchindexfn = path.join(self.outdir, self.searchindex_filename)
+            if self.indexer_dumps_unicode:
+                f = codecs.open(searchindexfn, 'r', encoding='utf-8')
+            else:
+                f = open(searchindexfn, 'rb')
             try:
                 self.indexer.load(f, self.indexer_format)
|
||||
finally:
|
||||
@ -678,13 +699,19 @@ class StandaloneHTMLBuilder(Builder):
|
||||
|
||||
def pathto(otheruri, resource=False,
|
||||
baseuri=self.get_target_uri(pagename)):
|
||||
if not resource:
|
||||
if resource and '://' in otheruri:
|
||||
# allow non-local resources given by scheme
|
||||
return otheruri
|
||||
elif not resource:
|
||||
otheruri = self.get_target_uri(otheruri)
|
||||
uri = relative_uri(baseuri, otheruri) or '#'
|
||||
return uri
|
||||
ctx['pathto'] = pathto
|
||||
ctx['hasdoc'] = lambda name: name in self.env.all_docs
|
||||
ctx['encoding'] = encoding = self.config.html_output_encoding
|
||||
if self.name != 'htmlhelp':
|
||||
ctx['encoding'] = encoding = self.config.html_output_encoding
|
||||
else:
|
||||
ctx['encoding'] = encoding = self.encoding
|
||||
ctx['toctree'] = lambda **kw: self._get_local_toctree(pagename, **kw)
|
||||
self.add_sidebars(pagename, ctx)
|
||||
ctx.update(addctx)
|
||||
@ -705,7 +732,7 @@ class StandaloneHTMLBuilder(Builder):
|
||||
# outfilename's path is in general different from self.outdir
|
||||
ensuredir(path.dirname(outfilename))
|
||||
try:
|
||||
f = codecs.open(outfilename, 'w', encoding)
|
||||
f = codecs.open(outfilename, 'w', encoding, 'xmlcharrefreplace')
|
||||
try:
|
||||
f.write(output)
|
||||
finally:
|
||||
@ -727,10 +754,12 @@ class StandaloneHTMLBuilder(Builder):
|
||||
self.info(bold('dumping object inventory... '), nonl=True)
|
||||
f = open(path.join(self.outdir, INVENTORY_FILENAME), 'wb')
|
||||
try:
|
||||
f.write('# Sphinx inventory version 2\n')
|
||||
f.write('# Project: %s\n' % self.config.project.encode('utf-8'))
|
||||
f.write('# Version: %s\n' % self.config.version.encode('utf-8'))
|
||||
f.write('# The remainder of this file is compressed using zlib.\n')
|
||||
f.write((u'# Sphinx inventory version 2\n'
|
||||
u'# Project: %s\n'
|
||||
u'# Version: %s\n'
|
||||
u'# The remainder of this file is compressed using zlib.\n'
|
||||
% (self.config.project, self.config.version)
|
||||
).encode('utf-8'))
|
||||
compressor = zlib.compressobj(9)
|
||||
for domainname, domain in self.env.domains.iteritems():
|
||||
for name, dispname, type, docname, anchor, prio in \
|
||||
@ -742,11 +771,9 @@ class StandaloneHTMLBuilder(Builder):
|
||||
if dispname == name:
|
||||
dispname = u'-'
|
||||
f.write(compressor.compress(
|
||||
'%s %s:%s %s %s %s\n' % (name.encode('utf-8'),
|
||||
domainname.encode('utf-8'),
|
||||
type.encode('utf-8'), prio,
|
||||
uri.encode('utf-8'),
|
||||
dispname.encode('utf-8'))))
|
||||
(u'%s %s:%s %s %s %s\n' % (name, domainname, type,
|
||||
prio, uri, dispname)
|
||||
).encode('utf-8')))
|
||||
f.write(compressor.flush())
|
||||
finally:
|
||||
f.close()
|
||||
@ -758,7 +785,10 @@ class StandaloneHTMLBuilder(Builder):
|
||||
searchindexfn = path.join(self.outdir, self.searchindex_filename)
|
||||
# first write to a temporary file, so that if dumping fails,
|
||||
# the existing index won't be overwritten
|
||||
f = open(searchindexfn + '.tmp', 'wb')
|
||||
if self.indexer_dumps_unicode:
|
||||
f = codecs.open(searchindexfn + '.tmp', 'w', encoding='utf-8')
|
||||
else:
|
||||
f = open(searchindexfn + '.tmp', 'wb')
|
||||
try:
|
||||
self.indexer.dump(f, self.indexer_format)
|
||||
finally:
|
||||
@ -847,8 +877,14 @@ class SingleFileHTMLBuilder(StandaloneHTMLBuilder):
|
||||
def get_doc_context(self, docname, body, metatags):
|
||||
# no relation links...
|
||||
toc = self.env.get_toctree_for(self.config.master_doc, self, False)
|
||||
self.fix_refuris(toc)
|
||||
toc = self.render_partial(toc)['fragment']
|
||||
# if there is no toctree, toc is None
|
||||
if toc:
|
||||
self.fix_refuris(toc)
|
||||
toc = self.render_partial(toc)['fragment']
|
||||
display_toc = True
|
||||
else:
|
||||
toc = ''
|
||||
display_toc = False
|
||||
return dict(
|
||||
parents = [],
|
||||
prev = None,
|
||||
@ -861,7 +897,7 @@ class SingleFileHTMLBuilder(StandaloneHTMLBuilder):
|
||||
rellinks = [],
|
||||
sourcename = '',
|
||||
toc = toc,
|
||||
display_toc = True,
|
||||
display_toc = display_toc,
|
||||
)
|
||||
|
||||
def write(self, *ignored):
|
||||
@ -909,6 +945,9 @@ class SerializingHTMLBuilder(StandaloneHTMLBuilder):
|
||||
#: implements a `dump`, `load`, `dumps` and `loads` functions
|
||||
#: (pickle, simplejson etc.)
|
||||
implementation = None
|
||||
implementation_dumps_unicode = False
|
||||
#: additional arguments for dump()
|
||||
additional_dump_args = ()
|
||||
|
||||
#: the filename for the global context file
|
||||
globalcontext_filename = None
|
||||
@ -931,6 +970,16 @@ class SerializingHTMLBuilder(StandaloneHTMLBuilder):
|
||||
return docname[:-5] # up to sep
|
||||
return docname + SEP
|
||||
|
||||
def dump_context(self, context, filename):
|
||||
if self.implementation_dumps_unicode:
|
||||
f = codecs.open(filename, 'w', encoding='utf-8')
|
||||
else:
|
||||
f = open(filename, 'wb')
|
||||
try:
|
||||
self.implementation.dump(context, f, *self.additional_dump_args)
|
||||
finally:
|
||||
f.close()
|
||||
|
||||
def handle_page(self, pagename, ctx, templatename='page.html',
|
||||
outfilename=None, event_arg=None):
|
||||
ctx['current_page_name'] = pagename
|
||||
@ -944,11 +993,7 @@ class SerializingHTMLBuilder(StandaloneHTMLBuilder):
|
||||
ctx, event_arg)
|
||||
|
||||
ensuredir(path.dirname(outfilename))
|
||||
f = open(outfilename, 'wb')
|
||||
try:
|
||||
self.implementation.dump(ctx, f, 2)
|
||||
finally:
|
||||
f.close()
|
||||
self.dump_context(ctx, outfilename)
|
||||
|
||||
# if there is a source file, copy the source file for the
|
||||
# "show source" link
|
||||
@ -961,11 +1006,7 @@ class SerializingHTMLBuilder(StandaloneHTMLBuilder):
|
||||
def handle_finish(self):
|
||||
# dump the global context
|
||||
outfilename = path.join(self.outdir, self.globalcontext_filename)
|
||||
f = open(outfilename, 'wb')
|
||||
try:
|
||||
self.implementation.dump(self.globalcontext, f, 2)
|
||||
finally:
|
||||
f.close()
|
||||
self.dump_context(self.globalcontext, outfilename)
|
||||
|
||||
# super here to dump the search index
|
||||
StandaloneHTMLBuilder.handle_finish(self)
|
||||
@ -985,7 +1026,10 @@ class PickleHTMLBuilder(SerializingHTMLBuilder):
|
||||
A Builder that dumps the generated HTML into pickle files.
|
||||
"""
|
||||
implementation = pickle
|
||||
implementation_dumps_unicode = False
|
||||
additional_dump_args = (pickle.HIGHEST_PROTOCOL,)
|
||||
indexer_format = pickle
|
||||
indexer_dumps_unicode = False
|
||||
name = 'pickle'
|
||||
out_suffix = '.fpickle'
|
||||
globalcontext_filename = 'globalcontext.pickle'
|
||||
@ -1000,7 +1044,9 @@ class JSONHTMLBuilder(SerializingHTMLBuilder):
|
||||
A builder that dumps the generated HTML into JSON files.
|
||||
"""
|
||||
implementation = jsonimpl
|
||||
implementation_dumps_unicode = True
|
||||
indexer_format = jsonimpl
|
||||
indexer_dumps_unicode = True
|
||||
name = 'json'
|
||||
out_suffix = '.fjson'
|
||||
globalcontext_filename = 'globalcontext.json'
|
||||
|
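The SerializingHTMLBuilder hunks above introduce a `dump_context()` helper plus `implementation_dumps_unicode`/`indexer_dumps_unicode` flags: JSON dumpers emit unicode text while pickle emits bytes, so each output file must be opened in the matching mode. A minimal, modernized Python 3 sketch of that dispatch (the standalone helper and the in-memory buffers are illustrative, not Sphinx's actual API):

```python
# Sketch of serializer dispatch: text mode for unicode dumpers (json),
# binary mode for byte dumpers (pickle). Buffers stand in for real files.
import io
import json
import pickle

def dump_context(context, implementation, dumps_unicode):
    """Serialize *context* with *implementation*, honoring its output type."""
    buf = io.StringIO() if dumps_unicode else io.BytesIO()
    implementation.dump(context, buf)
    return buf.getvalue()

ctx = {'title': 'Index', 'body': '<p>hi</p>'}
as_text = dump_context(ctx, json, dumps_unicode=True)      # str, as in .fjson
as_bytes = dump_context(ctx, pickle, dumps_unicode=False)  # bytes, as in .fpickle
```

Both `json` and `pickle` expose the same `dump(obj, fp)` shape, which is exactly what the builder's pluggable `implementation` attribute relies on.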
@@ -6,7 +6,7 @@
Build HTML help support files.
Parts adapted from Python's Doc/tools/prechm.py.

:copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
:copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""

@@ -200,7 +200,7 @@ class HTMLHelpBuilder(StandaloneHTMLBuilder):
outdir += os.sep
olen = len(outdir)
for root, dirs, files in os.walk(outdir):
staticdir = (root == path.join(outdir, '_static'))
staticdir = root.startswith(path.join(outdir, '_static'))
for fn in files:
if (staticdir and not fn.endswith('.js')) or \
fn.endswith('.html'):
@@ -258,7 +258,8 @@ class HTMLHelpBuilder(StandaloneHTMLBuilder):
def write_index(title, refs, subitems):
def write_param(name, value):
item = '    <param name="%s" value="%s">\n' % (name, value)
f.write(item.encode('ascii', 'xmlcharrefreplace'))
f.write(item.encode(self.encoding, 'xmlcharrefreplace')
.decode(self.encoding))
title = cgi.escape(title)
f.write('<LI> <OBJECT type="text/sitemap">\n')
write_param('Keyword', title)
@@ -5,7 +5,7 @@

LaTeX builder.

:copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
:copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""

@@ -46,10 +46,6 @@ class LaTeXBuilder(Builder):
return 'all documents' # for now

def get_target_uri(self, docname, typ=None):
if typ == 'token':
# token references are always inside production lists and must be
# replaced by \token{} in LaTeX
return '@token'
if docname not in self.docnames:
raise NoUri
else:
@@ -5,13 +5,17 @@

The CheckExternalLinksBuilder class.

:copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
:copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""

import re
import sys
import Queue
import socket
import threading
from os import path
from urllib2 import build_opener, HTTPError
from urllib2 import build_opener, Request

from docutils import nodes

@@ -23,6 +27,12 @@ opener = build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]


class HeadRequest(Request):
"""Subclass of urllib2.Request that sends a HEAD request."""
def get_method(self):
return 'HEAD'


class CheckExternalLinksBuilder(Builder):
"""
Checks for broken external links.
@@ -30,6 +40,7 @@ class CheckExternalLinksBuilder(Builder):
name = 'linkcheck'

def init(self):
self.to_ignore = map(re.compile, self.app.config.linkcheck_ignore)
self.good = set()
self.broken = {}
self.redirected = {}
@@ -38,6 +49,83 @@ class CheckExternalLinksBuilder(Builder):
# create output file
open(path.join(self.outdir, 'output.txt'), 'w').close()

# create queues and worker threads
self.wqueue = Queue.Queue()
self.rqueue = Queue.Queue()
self.workers = []
for i in range(self.app.config.linkcheck_workers):
thread = threading.Thread(target=self.check_thread)
thread.setDaemon(True)
thread.start()
self.workers.append(thread)

def check_thread(self):
kwargs = {}
if sys.version_info > (2, 5) and self.app.config.linkcheck_timeout:
kwargs['timeout'] = self.app.config.linkcheck_timeout

def check():
# check for various conditions without bothering the network
if len(uri) == 0 or uri[0:7] == 'mailto:' or uri[0:4] == 'ftp:':
return 'unchecked', ''
elif not (uri[0:5] == 'http:' or uri[0:6] == 'https:'):
return 'local', ''
elif uri in self.good:
return 'working', ''
elif uri in self.broken:
return 'broken', self.broken[uri]
elif uri in self.redirected:
return 'redirected', self.redirected[uri]
for rex in self.to_ignore:
if rex.match(uri):
return 'ignored', ''

# need to actually check the URI
try:
f = opener.open(HeadRequest(uri), **kwargs)
f.close()
except Exception, err:
self.broken[uri] = str(err)
return 'broken', str(err)
if f.url.rstrip('/') == uri.rstrip('/'):
self.good.add(uri)
return 'working', 'new'
else:
self.redirected[uri] = f.url
return 'redirected', f.url

while True:
uri, docname, lineno = self.wqueue.get()
if uri is None:
break
status, info = check()
self.rqueue.put((uri, docname, lineno, status, info))

def process_result(self, result):
uri, docname, lineno, status, info = result
if status == 'unchecked':
return
if status == 'working' and info != 'new':
return
if lineno:
self.info('(line %3d) ' % lineno, nonl=1)
if status == 'ignored':
self.info(uri + ' - ' + darkgray('ignored'))
elif status == 'local':
self.info(uri + ' - ' + darkgray('local'))
self.write_entry('local', docname, lineno, uri)
elif status == 'working':
self.info(uri + ' - ' + darkgreen('working'))
elif status == 'broken':
self.info(uri + ' - ' + red('broken: ') + info)
self.write_entry('broken', docname, lineno, uri + ': ' + info)
if self.app.quiet:
self.warn('broken link: %s' % uri,
'%s:%s' % (self.env.doc2path(docname), lineno))
elif status == 'redirected':
self.info(uri + ' - ' + purple('redirected') + ' to ' + info)
self.write_entry('redirected', docname, lineno, uri + ' to ' + info)

def get_target_uri(self, docname, typ=None):
return ''

@@ -49,61 +137,25 @@ class CheckExternalLinksBuilder(Builder):

def write_doc(self, docname, doctree):
self.info()
n = 0
for node in doctree.traverse(nodes.reference):
try:
self.check(node, docname)
except KeyError:
if 'refuri' not in node:
continue

def check(self, node, docname):
uri = node['refuri']

if '#' in uri:
uri = uri.split('#')[0]

if uri in self.good:
return

lineno = None
while lineno is None:
node = node.parent
if node is None:
break
lineno = node.line

if len(uri) == 0 or uri[0:7] == 'mailto:' or uri[0:4] == 'ftp:':
return

if lineno:
self.info('(line %3d) ' % lineno, nonl=1)
if uri[0:5] == 'http:' or uri[0:6] == 'https:':
self.info(uri, nonl=1)

if uri in self.broken:
(r, s) = self.broken[uri]
elif uri in self.redirected:
(r, s) = self.redirected[uri]
else:
(r, s) = self.resolve(uri)

if r == 0:
self.info(' - ' + darkgreen('working'))
self.good.add(uri)
elif r == 2:
self.info(' - ' + red('broken: ') + s)
self.write_entry('broken', docname, lineno, uri + ': ' + s)
self.broken[uri] = (r, s)
if self.app.quiet:
self.warn('broken link: %s' % uri,
'%s:%s' % (self.env.doc2path(docname), lineno))
else:
self.info(' - ' + purple('redirected') + ' to ' + s)
self.write_entry('redirected', docname,
lineno, uri + ' to ' + s)
self.redirected[uri] = (r, s)
else:
self.info(uri + ' - ' + darkgray('local'))
self.write_entry('local', docname, lineno, uri)
uri = node['refuri']
if '#' in uri:
uri = uri.split('#')[0]
lineno = None
while lineno is None:
node = node.parent
if node is None:
break
lineno = node.line
self.wqueue.put((uri, docname, lineno), False)
n += 1
done = 0
while done < n:
self.process_result(self.rqueue.get())
done += 1

if self.broken:
self.app.statuscode = 1
@@ -114,21 +166,6 @@ class CheckExternalLinksBuilder(Builder):
line, what, uri))
output.close()

def resolve(self, uri):
try:
f = opener.open(uri)
f.close()
except HTTPError, err:
#if err.code == 403 and uri.startswith('http://en.wikipedia.org/'):
#    # Wikipedia blocks requests from urllib User-Agent
#    return (0, 0)
return (2, str(err))
except Exception, err:
return (2, str(err))
if f.url.rstrip('/') == uri.rstrip('/'):
return (0, 0)
else:
return (1, f.url)

def finish(self):
return
for worker in self.workers:
self.wqueue.put((None, None, None), False)
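The linkcheck rewrite above moves URI checking into worker threads: cheap classifications happen before any network I/O, and only http(s) URIs go on to an actual HEAD request. A hedged, modernized (Python 3) sketch of that classify-then-queue pattern, with no real network access (`classify` and `'needs-check'` are illustrative names, not the builder's API):

```python
# Worker-queue sketch of the threaded linkcheck flow; classify() mirrors
# the cheap pre-network checks, and 'needs-check' marks URIs that would
# receive a HEAD request in the real builder.
import queue
import threading

def classify(uri, good=(), broken=(), ignore_patterns=()):
    if not uri or uri.startswith(('mailto:', 'ftp:')):
        return 'unchecked'
    if not uri.startswith(('http:', 'https:')):
        return 'local'
    if uri in good:
        return 'working'
    if uri in broken:
        return 'broken'
    for rex in ignore_patterns:
        if rex.match(uri):
            return 'ignored'
    return 'needs-check'

wqueue, rqueue = queue.Queue(), queue.Queue()

def worker():
    while True:
        uri = wqueue.get()
        if uri is None:          # sentinel shuts the worker down
            break
        rqueue.put((uri, classify(uri)))

t = threading.Thread(target=worker, daemon=True)
t.start()
for u in ['mailto:me@example.org', 'doc.html', 'http://example.org/']:
    wqueue.put(u)
wqueue.put(None)
t.join()
results = dict(rqueue.get() for _ in range(3))
```

The `None` sentinel mirrors the `(None, None, None)` tuples that the builder's `finish()` enqueues to stop its workers.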
@@ -5,7 +5,7 @@

Manual pages builder.

:copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
:copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
@@ -5,7 +5,7 @@

Build input files for the Qt collection generator.

:copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
:copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""

@@ -23,7 +23,7 @@ from sphinx.builders.html import StandaloneHTMLBuilder


_idpattern = re.compile(
r'(?P<title>.+) (\((?P<id>[\w\.]+)( (?P<descr>\w+))?\))$')
r'(?P<title>.+) (\((class in )?(?P<id>[\w\.]+)( (?P<descr>\w+))?\))$')


# Qt Help Collection Project (.qhcp).
@@ -130,12 +130,20 @@ class QtHelpBuilder(StandaloneHTMLBuilder):
for indexname, indexcls, content, collapse in self.domain_indices:
item = section_template % {'title': indexcls.localname,
'ref': '%s.html' % indexname}
sections.append(' '*4*4 + item)
sections = '\n'.join(sections)
sections.append((' ' * 4 * 4 + item).encode('utf-8'))
# sections may be unicode strings or byte strings, we have to make sure
# they are all byte strings before joining them
new_sections = []
for section in sections:
if isinstance(section, unicode):
new_sections.append(section.encode('utf-8'))
else:
new_sections.append(section)
sections = u'\n'.encode('utf-8').join(new_sections)

# keywords
keywords = []
index = self.env.create_index(self)
index = self.env.create_index(self, group_entries=False)
for (key, group) in index:
for title, (refs, subitems) in group:
keywords.extend(self.build_keywords(title, refs, subitems))
@@ -165,6 +173,7 @@ class QtHelpBuilder(StandaloneHTMLBuilder):
nspace = 'org.sphinx.%s.%s' % (outname, self.config.version)
nspace = re.sub('[^a-zA-Z0-9.]', '', nspace)
nspace = re.sub(r'\.+', '.', nspace).strip('.')
nspace = nspace.lower()

# write the project file
f = codecs.open(path.join(outdir, outname+'.qhp'), 'w', 'utf-8')
@@ -230,7 +239,7 @@ class QtHelpBuilder(StandaloneHTMLBuilder):
link = node['refuri']
title = escape(node.astext()).replace('"', '&quot;')
item = section_template % {'title': title, 'ref': link}
item = ' '*4*indentlevel + item.encode('ascii', 'xmlcharrefreplace')
item = u' ' * 4 * indentlevel + item
parts.append(item.encode('ascii', 'xmlcharrefreplace'))
elif isinstance(node, nodes.bullet_list):
for subnode in node:
239
sphinx/builders/texinfo.py
Normal file
@@ -0,0 +1,239 @@
# -*- coding: utf-8 -*-
"""
sphinx.builders.texinfo
~~~~~~~~~~~~~~~~~~~~~~~

Texinfo builder.

:copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""

from os import path

from docutils import nodes
from docutils.io import FileOutput
from docutils.utils import new_document
from docutils.frontend import OptionParser

from sphinx import addnodes
from sphinx.locale import _
from sphinx.builders import Builder
from sphinx.environment import NoUri
from sphinx.util.nodes import inline_all_toctrees
from sphinx.util.osutil import SEP, copyfile
from sphinx.util.console import bold, darkgreen
from sphinx.writers.texinfo import TexinfoWriter


TEXINFO_MAKEFILE = '''\
# Makefile for Sphinx Texinfo output

infodir ?= /usr/share/info

MAKEINFO = makeinfo --no-split
MAKEINFO_html = makeinfo --no-split --html
MAKEINFO_plaintext = makeinfo --no-split --plaintext
TEXI2PDF = texi2pdf --batch --expand
INSTALL_INFO = install-info

ALLDOCS = $(basename $(wildcard *.texi))

all: info
info: $(addsuffix .info,$(ALLDOCS))
plaintext: $(addsuffix .txt,$(ALLDOCS))
html: $(addsuffix .html,$(ALLDOCS))
pdf: $(addsuffix .pdf,$(ALLDOCS))

install-info: info
\tfor f in *.info; do \\
\t  cp -t $(infodir) "$$f" && \\
\t  $(INSTALL_INFO) --info-dir=$(infodir) "$$f" ; \\
\tdone

uninstall-info: info
\tfor f in *.info; do \\
\t  rm -f "$(infodir)/$$f" ; \\
\t  $(INSTALL_INFO) --delete --info-dir=$(infodir) "$$f" ; \\
\tdone

%.info: %.texi
\t$(MAKEINFO) -o '$@' '$<'

%.txt: %.texi
\t$(MAKEINFO_plaintext) -o '$@' '$<'

%.html: %.texi
\t$(MAKEINFO_html) -o '$@' '$<'

%.pdf: %.texi
\t-$(TEXI2PDF) '$<'
\t-$(TEXI2PDF) '$<'
\t-$(TEXI2PDF) '$<'

clean:
\t-rm -f *.info *.pdf *.txt *.html
\t-rm -f *.log *.ind *.aux *.toc *.syn *.idx *.out *.ilg *.pla *.ky *.pg
\t-rm -f *.vr *.tp *.fn *.fns *.def *.defs *.cp *.cps *.ge *.ges *.mo

.PHONY: all info plaintext html pdf install-info uninstall-info clean
'''


class TexinfoBuilder(Builder):
"""
Builds Texinfo output to create Info documentation.
"""
name = 'texinfo'
format = 'texinfo'
supported_image_types = ['application/pdf', 'image/png',
'image/gif', 'image/jpeg']

def init(self):
self.docnames = []
self.document_data = []

def get_outdated_docs(self):
return 'all documents' # for now

def get_target_uri(self, docname, typ=None):
if docname not in self.docnames:
raise NoUri
else:
return '%' + docname

def get_relative_uri(self, from_, to, typ=None):
# ignore source path
return self.get_target_uri(to, typ)

def init_document_data(self):
preliminary_document_data = map(list, self.config.texinfo_documents)
if not preliminary_document_data:
self.warn('no "texinfo_documents" config value found; no documents '
'will be written')
return
# assign subdirs to titles
self.titles = []
for entry in preliminary_document_data:
docname = entry[0]
if docname not in self.env.all_docs:
self.warn('"texinfo_documents" config value references unknown '
'document %s' % docname)
continue
self.document_data.append(entry)
if docname.endswith(SEP+'index'):
docname = docname[:-5]
self.titles.append((docname, entry[2]))

def write(self, *ignored):
self.init_document_data()
for entry in self.document_data:
docname, targetname, title, author = entry[:4]
targetname += '.texi'
direntry = description = category = ''
if len(entry) > 6:
direntry, description, category = entry[4:7]
toctree_only = False
if len(entry) > 7:
toctree_only = entry[7]
destination = FileOutput(
destination_path=path.join(self.outdir, targetname),
encoding='utf-8')
self.info("processing " + targetname + "... ", nonl=1)
doctree = self.assemble_doctree(docname, toctree_only,
appendices=(self.config.texinfo_appendices or []))
self.info("writing... ", nonl=1)

# Add an Index section
if self.config.texinfo_domain_indices:
doctree.append(
nodes.section('',
nodes.title(_("Index"),
nodes.Text(_('Index'),
_('Index'))),
nodes.raw('@printindex ge\n',
nodes.Text('@printindex ge\n',
'@printindex ge\n'),
format="texinfo")))
self.post_process_images(doctree)
docwriter = TexinfoWriter(self)
settings = OptionParser(
defaults=self.env.settings,
components=(docwriter,)).get_default_values()
settings.author = author
settings.title = title
settings.texinfo_filename = targetname[:-5] + '.info'
settings.texinfo_elements = self.config.texinfo_elements
settings.texinfo_dir_entry = direntry or ''
settings.texinfo_dir_category = category or ''
settings.texinfo_dir_description = description or ''
settings.docname = docname
doctree.settings = settings
docwriter.write(doctree, destination)
self.info("done")

def assemble_doctree(self, indexfile, toctree_only, appendices):
self.docnames = set([indexfile] + appendices)
self.info(darkgreen(indexfile) + " ", nonl=1)
tree = self.env.get_doctree(indexfile)
tree['docname'] = indexfile
if toctree_only:
# extract toctree nodes from the tree and put them in a
# fresh document
new_tree = new_document('<texinfo output>')
new_sect = nodes.section()
new_sect += nodes.title(u'<Set title in conf.py>',
u'<Set title in conf.py>')
new_tree += new_sect
for node in tree.traverse(addnodes.toctree):
new_sect += node
tree = new_tree
largetree = inline_all_toctrees(self, self.docnames, indexfile, tree,
darkgreen)
largetree['docname'] = indexfile
for docname in appendices:
appendix = self.env.get_doctree(docname)
appendix['docname'] = docname
largetree.append(appendix)
self.info()
self.info("resolving references...")
self.env.resolve_references(largetree, indexfile, self)
# TODO: add support for external :ref:s
for pendingnode in largetree.traverse(addnodes.pending_xref):
docname = pendingnode['refdocname']
sectname = pendingnode['refsectname']
newnodes = [nodes.emphasis(sectname, sectname)]
for subdir, title in self.titles:
if docname.startswith(subdir):
newnodes.append(nodes.Text(_(' (in '), _(' (in ')))
newnodes.append(nodes.emphasis(title, title))
newnodes.append(nodes.Text(')', ')'))
break
else:
pass
pendingnode.replace_self(newnodes)
return largetree

def finish(self):
# copy image files
if self.images:
self.info(bold('copying images...'), nonl=1)
for src, dest in self.images.iteritems():
self.info(' '+src, nonl=1)
copyfile(path.join(self.srcdir, src),
path.join(self.outdir, dest))
self.info()

self.info(bold('copying Texinfo support files... '), nonl=True)
# copy Makefile
fn = path.join(self.outdir, 'Makefile')
self.info(fn, nonl=1)
try:
mkfile = open(fn, 'w')
try:
mkfile.write(TEXINFO_MAKEFILE)
finally:
mkfile.close()
except (IOError, OSError), err:
self.warn("error writing file %s: %s" % (fn, err))
self.info(' done')
@@ -5,7 +5,7 @@

Plain-text Sphinx builder.

:copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
:copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
155
sphinx/builders/websupport.py
Normal file
@@ -0,0 +1,155 @@
# -*- coding: utf-8 -*-
"""
sphinx.builders.websupport
~~~~~~~~~~~~~~~~~~~~~~~~~~

Builder for the web support package.

:copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""

from os import path
import posixpath
import shutil

from docutils.io import StringOutput

from sphinx.jinja2glue import BuiltinTemplateLoader
from sphinx.util.osutil import os_path, relative_uri, ensuredir, copyfile
from sphinx.builders.html import PickleHTMLBuilder
from sphinx.writers.websupport import WebSupportTranslator


class WebSupportBuilder(PickleHTMLBuilder):
"""
Builds documents for the web support package.
"""
name = 'websupport'
versioning_method = 'commentable'

def init(self):
PickleHTMLBuilder.init(self)
# templates are needed for this builder, but the serializing
# builder does not initialize them
self.init_templates()
if not isinstance(self.templates, BuiltinTemplateLoader):
raise RuntimeError('websupport builder must be used with '
'the builtin templates')
# add our custom JS
self.script_files.append('_static/websupport.js')

def set_webinfo(self, staticdir, virtual_staticdir, search, storage):
self.staticdir = staticdir
self.virtual_staticdir = virtual_staticdir
self.search = search
self.storage = storage

def init_translator_class(self):
self.translator_class = WebSupportTranslator

def prepare_writing(self, docnames):
PickleHTMLBuilder.prepare_writing(self, docnames)
self.globalcontext['no_search_suffix'] = True

def write_doc(self, docname, doctree):
destination = StringOutput(encoding='utf-8')
doctree.settings = self.docsettings

self.cur_docname = docname
self.secnumbers = self.env.toc_secnumbers.get(docname, {})
self.imgpath = '/' + posixpath.join(self.virtual_staticdir, '_images')
self.post_process_images(doctree)
self.dlpath = '/' + posixpath.join(self.virtual_staticdir, '_downloads')
self.docwriter.write(doctree, destination)
self.docwriter.assemble_parts()
body = self.docwriter.parts['fragment']
metatags = self.docwriter.clean_meta

ctx = self.get_doc_context(docname, body, metatags)
self.index_page(docname, doctree, ctx.get('title', ''))
self.handle_page(docname, ctx, event_arg=doctree)

def load_indexer(self, docnames):
self.indexer = self.search
self.indexer.init_indexing(changed=docnames)

def _render_page(self, pagename, addctx, templatename, event_arg=None):
# This is mostly copied from StandaloneHTMLBuilder. However, instead
# of rendering the template and saving the html, create a context
# dict and pickle it.
ctx = self.globalcontext.copy()
ctx['pagename'] = pagename

def pathto(otheruri, resource=False,
baseuri=self.get_target_uri(pagename)):
if resource and '://' in otheruri:
return otheruri
elif not resource:
otheruri = self.get_target_uri(otheruri)
return relative_uri(baseuri, otheruri) or '#'
else:
return '/' + posixpath.join(self.virtual_staticdir, otheruri)
ctx['pathto'] = pathto
ctx['hasdoc'] = lambda name: name in self.env.all_docs
ctx['encoding'] = self.config.html_output_encoding
ctx['toctree'] = lambda **kw: self._get_local_toctree(pagename, **kw)
self.add_sidebars(pagename, ctx)
ctx.update(addctx)

self.app.emit('html-page-context', pagename, templatename,
ctx, event_arg)

# create a dict that will be pickled and used by webapps
doc_ctx = {
'body': ctx.get('body', ''),
'title': ctx.get('title', ''),
}
# partially render the html template to get at interesting macros
template = self.templates.environment.get_template(templatename)
template_module = template.make_module(ctx)
for item in ['sidebar', 'relbar', 'script', 'css']:
if hasattr(template_module, item):
doc_ctx[item] = getattr(template_module, item)()
return ctx, doc_ctx
|
||||
|
||||
def handle_page(self, pagename, addctx, templatename='page.html',
|
||||
outfilename=None, event_arg=None):
|
||||
ctx, doc_ctx = self._render_page(pagename, addctx,
|
||||
templatename, event_arg)
|
||||
|
||||
if not outfilename:
|
||||
outfilename = path.join(self.outdir, 'pickles',
|
||||
os_path(pagename) + self.out_suffix)
|
||||
ensuredir(path.dirname(outfilename))
|
||||
self.dump_context(doc_ctx, outfilename)
|
||||
|
||||
# if there is a source file, copy the source file for the
|
||||
# "show source" link
|
||||
if ctx.get('sourcename'):
|
||||
source_name = path.join(self.staticdir,
|
||||
'_sources', os_path(ctx['sourcename']))
|
||||
ensuredir(path.dirname(source_name))
|
||||
copyfile(self.env.doc2path(pagename), source_name)
|
||||
|
||||
def handle_finish(self):
|
||||
# get global values for css and script files
|
||||
_, doc_ctx = self._render_page('tmp', {}, 'page.html')
|
||||
self.globalcontext['css'] = doc_ctx['css']
|
||||
self.globalcontext['script'] = doc_ctx['script']
|
||||
|
||||
PickleHTMLBuilder.handle_finish(self)
|
||||
|
||||
# move static stuff over to separate directory
|
||||
directories = ['_images', '_static']
|
||||
for directory in directories:
|
||||
src = path.join(self.outdir, directory)
|
||||
dst = path.join(self.staticdir, directory)
|
||||
if path.isdir(src):
|
||||
if path.isdir(dst):
|
||||
shutil.rmtree(dst)
|
||||
shutil.move(src, dst)
|
||||
|
||||
def dump_search_index(self):
|
||||
self.indexer.finish_indexing()
|
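The `pathto` closure above computes either a relative URI between two documents or an absolute path into the virtual static directory. A minimal standalone sketch of the same branching, with a simplified `relative_uri` helper (names and defaults here are illustrative, not Sphinx's actual API):

```python
import posixpath

def relative_uri(base, to):
    """Return a relative URL from ``base`` to ``to`` (simplified)."""
    b, t = base.split('/'), to.split('/')
    # strip common leading path components
    while b and t and b[0] == t[0]:
        b.pop(0)
        t.pop(0)
    return '/'.join(['..'] * (len(b) - 1) + t)

def pathto(otheruri, resource=False, baseuri='ref/index',
           virtual_staticdir='static'):
    if resource and '://' in otheruri:
        # fully-qualified resource URLs pass through unchanged
        return otheruri
    elif not resource:
        # document links become relative to the current page
        return relative_uri(baseuri, otheruri) or '#'
    else:
        # local resources are served out of the virtual static dir
        return '/' + posixpath.join(virtual_staticdir, otheruri)
```

The real builder additionally maps document names through `get_target_uri()` before computing the relative URI; that step is omitted here.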
@ -5,7 +5,7 @@

    sphinx-build command-line handling.

    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

@ -90,6 +90,13 @@ def main(argv):
        if err:
            return 1

    # likely encoding used for command-line arguments
    try:
        locale = __import__('locale')  # due to submodule of the same name
        likely_encoding = locale.getpreferredencoding()
    except Exception:
        likely_encoding = None

    buildername = None
    force_all = freshenv = warningiserror = use_pdb = False
    status = sys.stdout
@ -129,7 +136,11 @@ def main(argv):
            try:
                val = int(val)
            except ValueError:
                pass
                if likely_encoding:
                    try:
                        val = val.decode(likely_encoding)
                    except UnicodeError:
                        pass
            confoverrides[key] = val
        elif opt == '-A':
            try:
@ -141,7 +152,11 @@ def main(argv):
            try:
                val = int(val)
            except ValueError:
                pass
                if likely_encoding:
                    try:
                        val = val.decode(likely_encoding)
                    except UnicodeError:
                        pass
            confoverrides['html_context.%s' % key] = val
        elif opt == '-n':
            confoverrides['nitpicky'] = True
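The `-D`/`-A` handling above first tries to interpret an override value as an integer, and only on failure falls back to decoding it with the locale's preferred encoding (a Python 2 concern, where `sys.argv` holds bytestrings). The same logic as a standalone sketch; the function name is illustrative, not part of Sphinx:

```python
def convert_override(val, likely_encoding=None):
    """Convert a command-line override value the way the diff above does."""
    try:
        return int(val)
    except ValueError:
        # not a number: decode bytes using the locale encoding, if known
        if likely_encoding and isinstance(val, bytes):
            try:
                return val.decode(likely_encoding)
            except UnicodeError:
                pass
    return val
```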
@ -5,22 +5,29 @@

    Build configuration file handling.

    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import os
import re
import sys
from os import path

from sphinx.errors import ConfigError
from sphinx.util.osutil import make_filename
from sphinx.util.pycompat import bytes, b, convert_with_2to3

nonascii_re = re.compile(r'[\x80-\xff]')
nonascii_re = re.compile(b(r'[\x80-\xff]'))

CONFIG_SYNTAX_ERROR = "There is a syntax error in your configuration file: %s"
if sys.version_info >= (3, 0):
    CONFIG_SYNTAX_ERROR += "\nDid you change the syntax from 2.x to 3.x?"

class Config(object):
    """Configuration file abstraction."""
    """
    Configuration file abstraction.
    """

    # the values are: (default, what needs to be rebuilt if changed)

@ -64,12 +71,13 @@ class Config(object):
        primary_domain = ('py', 'env'),
        needs_sphinx = (None, None),
        nitpicky = (False, 'env'),
        nitpick_ignore = ([], 'env'),

        # HTML options
        html_theme = ('default', 'html'),
        html_theme_path = ([], 'html'),
        html_theme_options = ({}, 'html'),
        html_title = (lambda self: '%s v%s documentation' %
        html_title = (lambda self: '%s %s documentation' %
                      (self.project, self.release),
                      'html'),
        html_short_title = (lambda self: self.html_title, 'html'),
@ -85,7 +93,7 @@ class Config(object):
        html_additional_pages = ({}, 'html'),
        html_use_modindex = (True, 'html'),  # deprecated
        html_domain_indices = (True, 'html'),
        html_add_permalinks = (True, 'html'),
        html_add_permalinks = (u'\u00B6', 'html'),
        html_use_index = (True, 'html'),
        html_split_index = (False, 'html'),
        html_copy_source = (True, 'html'),
@ -99,6 +107,8 @@ class Config(object):
        html_output_encoding = ('utf-8', 'html'),
        html_compact_lists = (True, 'html'),
        html_secnumber_suffix = ('. ', 'html'),
        html_search_language = (None, 'html'),
        html_search_options = ({}, 'html'),

        # HTML help only options
        htmlhelp_basename = (lambda self: make_filename(self.project), None),
@ -136,7 +146,7 @@ class Config(object):
        latex_use_parts = (False, None),
        latex_use_modindex = (True, None),  # deprecated
        latex_domain_indices = (True, None),
        latex_show_urls = (False, None),
        latex_show_urls = ('no', None),
        latex_show_pagerefs = (False, None),
        # paper_size and font_size are still separate values
        # so that you can give them easily on the command line
@ -149,11 +159,23 @@ class Config(object):
        latex_preamble = ('', None),

        # text options
        text_sectionchars = ('*=-~"+`', 'text'),
        text_windows_newlines = (False, 'text'),
        text_sectionchars = ('*=-~"+`', 'env'),
        text_newlines = ('unix', 'env'),

        # manpage options
        man_pages = ([], None),
        man_show_urls = (False, None),

        # Texinfo options
        texinfo_documents = ([], None),
        texinfo_appendices = ([], None),
        texinfo_elements = ({}, None),
        texinfo_domain_indices = (True, None),

        # linkcheck options
        linkcheck_ignore = ([], None),
        linkcheck_timeout = (None, None),
        linkcheck_workers = (5, None),
    )

    def __init__(self, dirname, filename, overrides, tags):
@ -166,12 +188,30 @@ class Config(object):
        config['tags'] = tags
        olddir = os.getcwd()
        try:
            # we promise to have the config dir as current dir while the
            # config file is executed
            os.chdir(dirname)
            # get config source
            f = open(config_file, 'rb')
            try:
                os.chdir(dirname)
                execfile(config['__file__'], config)
                source = f.read()
            finally:
                f.close()
            try:
                # compile to a code object, handle syntax errors
                try:
                    code = compile(source, config_file, 'exec')
                except SyntaxError:
                    if convert_with_2to3:
                        # maybe the file uses 2.x syntax; try to refactor to
                        # 3.x syntax using 2to3
                        source = convert_with_2to3(config_file)
                        code = compile(source, config_file, 'exec')
                    else:
                        raise
                exec code in config
            except SyntaxError, err:
                raise ConfigError('There is a syntax error in your '
                                  'configuration file: ' + str(err))
                raise ConfigError(CONFIG_SYNTAX_ERROR % err)
            finally:
                os.chdir(olddir)

@ -185,10 +225,11 @@ class Config(object):
        # check all string values for non-ASCII characters in bytestrings,
        # since that can result in UnicodeErrors all over the place
        for name, value in self._raw_config.iteritems():
            if isinstance(value, str) and nonascii_re.search(value):
            if isinstance(value, bytes) and nonascii_re.search(value):
                warn('the config value %r is set to a string with non-ASCII '
                     'characters; this can lead to Unicode errors occurring. '
                     'Please use Unicode strings, e.g. u"Content".' % name)
                     'Please use Unicode strings, e.g. %r.' % (name, u'Content')
                     )

    def init_values(self):
        config = self._raw_config
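The config-reading change above replaces `execfile()` with an explicit read/compile/exec sequence, so that a 2to3 conversion can be attempted when `compile()` raises `SyntaxError`. A minimal Python 3 rendering of the same flow, without the 2to3 branch (`eval_config` is an illustrative name; `CONFIG_SYNTAX_ERROR` is copied from the diff):

```python
CONFIG_SYNTAX_ERROR = "There is a syntax error in your configuration file: %s"

def eval_config(source, filename='conf.py'):
    """Compile and execute configuration source, returning its namespace."""
    namespace = {'__file__': filename}
    try:
        # compile to a code object, handle syntax errors
        code = compile(source, filename, 'exec')
    except SyntaxError as err:
        # here the real code would try a 2to3 refactoring before giving up
        raise ValueError(CONFIG_SYNTAX_ERROR % err)
    exec(code, namespace)
    return namespace
```

Compiling before executing is what makes the fallback possible: a syntax error is caught while the file is still just text, so an alternative source can be substituted and compiled again.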
@ -5,7 +5,7 @@

    Handlers for additional ReST directives.

    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

@ -32,7 +32,8 @@ except AttributeError:


# RE to strip backslash escapes
strip_backslash_re = re.compile(r'\\(?=[^\\])')
nl_escape_re = re.compile(r'\\\n')
strip_backslash_re = re.compile(r'\\(.)')


class ObjectDescription(Directive):
@ -57,10 +58,12 @@ class ObjectDescription(Directive):
        """
        Retrieve the signatures to document from the directive arguments.  By
        default, signatures are given as arguments, one per line.

        Backslash-escaping of newlines is supported.
        """
        lines = nl_escape_re.sub('', self.arguments[0]).split('\n')
        # remove backslashes to support (dummy) escapes; helps Vim highlighting
        return [strip_backslash_re.sub('', sig.strip())
                for sig in self.arguments[0].split('\n')]
        return [strip_backslash_re.sub(r'\1', line.strip()) for line in lines]

    def handle_signature(self, sig, signode):
        """
@ -159,7 +162,6 @@ class ObjectDescription(Directive):
        self.env.temp_data['object'] = self.names[0]
        self.before_content()
        self.state.nested_parse(self.content, self.content_offset, contentnode)
        #self.handle_doc_fields(contentnode)
        DocFieldTransformer(self).transform_all(contentnode)
        self.env.temp_data['object'] = None
        self.after_content()
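The two regexes above work in sequence: `nl_escape_re` joins lines whose newline was backslash-escaped, then `strip_backslash_re` replaces every `\x` escape with the bare character. The new signature parsing, extracted as a standalone function:

```python
import re

nl_escape_re = re.compile(r'\\\n')          # backslash at end of line
strip_backslash_re = re.compile(r'\\(.)')   # any single escaped character

def get_signatures(argument):
    """Split a directive argument into one signature per line, unescaping."""
    lines = nl_escape_re.sub('', argument).split('\n')
    return [strip_backslash_re.sub(r'\1', line.strip()) for line in lines]
```

Note the behavioral change from the old regex: `\\(.)` with the `\1` replacement keeps the escaped character, whereas the old pattern simply deleted the backslashes.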
@ -3,14 +3,12 @@
    sphinx.directives.code
    ~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import os
import sys
import codecs
from os import path

from docutils import nodes
from docutils.parsers.rst import Directive, directives
@ -64,6 +62,7 @@ class CodeBlock(Directive):
        literal = nodes.literal_block(code, code)
        literal['language'] = self.arguments[0]
        literal['linenos'] = 'linenos' in self.options
        literal.line = self.lineno
        return [literal]


@ -93,23 +92,11 @@ class LiteralInclude(Directive):

    def run(self):
        document = self.state.document
        filename = self.arguments[0]
        if not document.settings.file_insertion_enabled:
            return [document.reporter.warning('File insertion disabled',
                                              line=self.lineno)]
        env = document.settings.env
        if filename.startswith('/') or filename.startswith(os.sep):
            rel_fn = filename[1:]
        else:
            docdir = path.dirname(env.doc2path(env.docname, base=None))
            rel_fn = path.join(docdir, filename)
        try:
            fn = path.join(env.srcdir, rel_fn)
        except UnicodeDecodeError:
            # the source directory is a bytestring with non-ASCII characters;
            # let's try to encode the rel_fn in the file system encoding
            rel_fn = rel_fn.encode(sys.getfilesystemencoding())
            fn = path.join(env.srcdir, rel_fn)
        rel_filename, filename = env.relfn2path(self.arguments[0])

        if 'pyobject' in self.options and 'lines' in self.options:
            return [document.reporter.warning(
@ -119,7 +106,7 @@ class LiteralInclude(Directive):
        encoding = self.options.get('encoding', env.config.source_encoding)
        codec_info = codecs.lookup(encoding)
        try:
            f = codecs.StreamReaderWriter(open(fn, 'U'),
            f = codecs.StreamReaderWriter(open(filename, 'rb'),
                                          codec_info[2], codec_info[3], 'strict')
            lines = f.readlines()
            f.close()
@ -136,7 +123,7 @@ class LiteralInclude(Directive):
        objectname = self.options.get('pyobject')
        if objectname is not None:
            from sphinx.pycode import ModuleAnalyzer
            analyzer = ModuleAnalyzer.for_file(fn, '')
            analyzer = ModuleAnalyzer.for_file(filename, '')
            tags = analyzer.find_tags()
            if objectname not in tags:
                return [document.reporter.warning(
@ -178,13 +165,14 @@ class LiteralInclude(Directive):
        text = ''.join(lines)
        if self.options.get('tab-width'):
            text = text.expandtabs(self.options['tab-width'])
        retnode = nodes.literal_block(text, text, source=fn)
        retnode = nodes.literal_block(text, text, source=filename)
        retnode.line = 1
        retnode.attributes['line_number'] = self.lineno
        if self.options.get('language', ''):
            retnode['language'] = self.options['language']
        if 'linenos' in self.options:
            retnode['linenos'] = True
        document.settings.env.note_dependency(rel_fn)
        env.note_dependency(rel_filename)
        return [retnode]
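The change above replaces the hand-rolled path juggling in `LiteralInclude.run()` with a single call to `env.relfn2path()`: a filename starting with `/` is resolved against the source directory, anything else against the directory of the current document. A simplified sketch of that contract (not the actual Sphinx implementation; `srcdir` and `docname` defaults are illustrative):

```python
import posixpath

def relfn2path(filename, srcdir='/project/source', docname='chapter/intro'):
    """Return (source-relative filename, absolute filename)."""
    if filename.startswith('/'):
        # "absolute" include: rooted at the source directory
        rel_fn = filename[1:]
    else:
        # relative include: rooted at the current document's directory
        docdir = posixpath.dirname(docname)
        rel_fn = posixpath.join(docdir, filename)
    return rel_fn, posixpath.normpath(posixpath.join(srcdir, rel_fn))
```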
@ -3,7 +3,7 @@
    sphinx.directives.other
    ~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

@ -11,21 +11,28 @@ import os

from docutils import nodes
from docutils.parsers.rst import Directive, directives
from docutils.parsers.rst.directives.misc import Class
from docutils.parsers.rst.directives.misc import Include as BaseInclude

from sphinx import addnodes
from sphinx.locale import pairindextypes, _
from sphinx.locale import _
from sphinx.util import url_re, docname_join
from sphinx.util.nodes import explicit_title_re
from sphinx.util.nodes import explicit_title_re, process_index_entry
from sphinx.util.compat import make_admonition
from sphinx.util.matching import patfilter


def int_or_nothing(argument):
    if not argument:
        return 999
    return int(argument)


class TocTree(Directive):
    """
    Directive to notify Sphinx about the hierarchical structure of the docs,
    and to include a table-of-contents like tree in the current document.
    """

    has_content = True
    required_arguments = 0
    optional_arguments = 0
@ -34,7 +41,7 @@ class TocTree(Directive):
        'maxdepth': int,
        'glob': directives.flag,
        'hidden': directives.flag,
        'numbered': directives.flag,
        'numbered': int_or_nothing,
        'titlesonly': directives.flag,
    }

@ -73,8 +80,9 @@ class TocTree(Directive):
                    entries.append((title, ref))
                elif docname not in env.found_docs:
                    ret.append(self.state.document.reporter.warning(
                        'toctree references unknown document %r' % docname,
                        line=self.lineno))
                        'toctree contains reference to nonexisting '
                        'document %r' % docname, line=self.lineno))
                    env.note_reread()
                else:
                    entries.append((title, docname))
                    includefiles.append(docname)
@ -98,7 +106,7 @@ class TocTree(Directive):
        subnode['maxdepth'] = self.options.get('maxdepth', -1)
        subnode['glob'] = glob
        subnode['hidden'] = 'hidden' in self.options
        subnode['numbered'] = 'numbered' in self.options
        subnode['numbered'] = self.options.get('numbered', 0)
        subnode['titlesonly'] = 'titlesonly' in self.options
        wrappernode = nodes.compound(classes=['toctree-wrapper'])
        wrappernode.append(subnode)
@ -111,7 +119,6 @@ class Author(Directive):
    Directive to give the name of the author of the current document
    or section.  Shown in the output only if the show_authors option is on.
    """

    has_content = False
    required_arguments = 1
    optional_arguments = 0
@ -144,17 +151,12 @@ class Index(Directive):
    """
    Directive to add entries to the index.
    """

    has_content = False
    required_arguments = 1
    optional_arguments = 0
    final_argument_whitespace = True
    option_spec = {}

    indextypes = [
        'single', 'pair', 'double', 'triple',
    ]

    def run(self):
        arguments = self.arguments[0].split('\n')
        env = self.state.document.settings.env
@ -163,29 +165,9 @@ class Index(Directive):
        self.state.document.note_explicit_target(targetnode)
        indexnode = addnodes.index()
        indexnode['entries'] = ne = []
        indexnode['inline'] = False
        for entry in arguments:
            entry = entry.strip()
            for type in pairindextypes:
                if entry.startswith(type+':'):
                    value = entry[len(type)+1:].strip()
                    value = pairindextypes[type] + '; ' + value
                    ne.append(('pair', value, targetid, value))
                    break
            else:
                for type in self.indextypes:
                    if entry.startswith(type+':'):
                        value = entry[len(type)+1:].strip()
                        if type == 'double':
                            type = 'pair'
                        ne.append((type, value, targetid, value))
                        break
                # shorthand notation for single entries
                else:
                    for value in entry.split(','):
                        value = value.strip()
                        if not value:
                            continue
                        ne.append(('single', value, targetid, value))
            ne.extend(process_index_entry(entry, targetid))
        return [indexnode, targetnode]


@ -193,7 +175,6 @@ class VersionChange(Directive):
    """
    Directive to describe a change/addition/deprecation in a specific version.
    """

    has_content = True
    required_arguments = 1
    optional_arguments = 1
@ -223,7 +204,6 @@ class SeeAlso(Directive):
    """
    An admonition mentioning things to look at as reference.
    """

    has_content = True
    required_arguments = 0
    optional_arguments = 1
@ -249,7 +229,6 @@ class TabularColumns(Directive):
    """
    Directive to give an explicit tabulary column definition to LaTeX.
    """

    has_content = False
    required_arguments = 1
    optional_arguments = 0
@ -259,6 +238,7 @@ class TabularColumns(Directive):
    def run(self):
        node = addnodes.tabular_col_spec()
        node['spec'] = self.arguments[0]
        node.line = self.lineno
        return [node]


@ -266,7 +246,6 @@ class Centered(Directive):
    """
    Directive to create a centered line of bold text.
    """

    has_content = False
    required_arguments = 1
    optional_arguments = 0
@ -283,12 +262,10 @@ class Centered(Directive):
        return [subnode] + messages



class Acks(Directive):
    """
    Directive for a list of names.
    """

    has_content = True
    required_arguments = 0
    optional_arguments = 0
@ -310,7 +287,6 @@ class HList(Directive):
    """
    Directive for a list that gets compacted horizontally.
    """

    has_content = True
    required_arguments = 0
    optional_arguments = 0
@ -347,7 +323,6 @@ class Only(Directive):
    """
    Directive to only include text if the given tag(s) are enabled.
    """

    has_content = True
    required_arguments = 1
    optional_arguments = 0
@ -364,19 +339,16 @@ class Only(Directive):
        return [node]


from docutils.parsers.rst.directives.misc import Include as BaseInclude

class Include(BaseInclude):
    """
    Like the standard "Include" directive, but interprets absolute paths
    correctly.
    "correctly", i.e. relative to source directory.
    """

    def run(self):
        if self.arguments[0].startswith('/') or \
           self.arguments[0].startswith(os.sep):
            env = self.state.document.settings.env
            self.arguments[0] = os.path.join(env.srcdir, self.arguments[0][1:])
        env = self.state.document.settings.env
        rel_filename, filename = env.relfn2path(self.arguments[0])
        self.arguments[0] = filename
        return BaseInclude.run(self)


@ -397,7 +369,6 @@ directives.register_directive('only', Only)
directives.register_directive('include', Include)

# register the standard rst class directive under a different name
from docutils.parsers.rst.directives.misc import Class
# only for backwards compatibility now
directives.register_directive('cssclass', Class)
# new standard name when default-domain with "class" is in effect
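The `Index` directive above now delegates entry parsing to `process_index_entry()` instead of the inlined loops. A much-reduced model of the shorthand that code handles (type prefixes like `pair:`, plus bare comma-separated values becoming `single` entries); this is an illustrative sketch, not `sphinx.util.nodes.process_index_entry` itself:

```python
def parse_index_entry(entry, targetid):
    """Reduced model of the index-entry shorthand parsing."""
    entry = entry.strip()
    for etype in ('single', 'pair', 'triple'):
        prefix = etype + ':'
        if entry.startswith(prefix):
            value = entry[len(prefix):].strip()
            return [(etype, value, targetid, value)]
    # shorthand: bare comma-separated values become 'single' entries
    return [('single', v.strip(), targetid, v.strip())
            for v in entry.split(',') if v.strip()]
```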
@ -6,7 +6,7 @@
    Support for domains, which are groupings of description directives
    and roles describing e.g. constructs of one programming language.

    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

@ -66,9 +66,8 @@ class Index(object):
        self.domain = domain

    def generate(self, docnames=None):
        """
        Return entries for the index given by *name*.  If *docnames* is given,
        restrict to entries referring to these docnames.
        """Return entries for the index given by *name*.  If *docnames* is
        given, restrict to entries referring to these docnames.

        The return value is a tuple of ``(content, collapse)``, where *collapse*
        is a boolean that determines if sub-entries should start collapsed (for
@ -132,6 +131,8 @@ class Domain(object):
    roles = {}
    #: a list of Index subclasses
    indices = []
    #: role name -> a warning message if reference is missing
    dangling_warnings = {}

    #: data value for a fresh environment
    initial_data = {}
@ -158,8 +159,7 @@ class Domain(object):
        self.objtypes_for_role = self._role2type.get

    def role(self, name):
        """
        Return a role adapter function that always gives the registered
        """Return a role adapter function that always gives the registered
        role its full name ('domain:name') as the first argument.
        """
        if name in self._role_cache:
@ -175,8 +175,7 @@ class Domain(object):
        return role_adapter

    def directive(self, name):
        """
        Return a directive adapter class that always gives the registered
        """Return a directive adapter class that always gives the registered
        directive its full name ('domain:name') as ``self.name``.
        """
        if name in self._directive_cache:
@ -195,21 +194,16 @@ class Domain(object):
    # methods that should be overwritten

    def clear_doc(self, docname):
        """
        Remove traces of a document in the domain-specific inventories.
        """
        """Remove traces of a document in the domain-specific inventories."""
        pass

    def process_doc(self, env, docname, document):
        """
        Process a document after it is read by the environment.
        """
        """Process a document after it is read by the environment."""
        pass

    def resolve_xref(self, env, fromdocname, builder,
                     typ, target, node, contnode):
        """
        Resolve the ``pending_xref`` *node* with the given *typ* and *target*.
        """Resolve the pending_xref *node* with the given *typ* and *target*.

        This method should return a new node, to replace the xref node,
        containing the *contnode* which is the markup content of the
@ -225,8 +219,7 @@ class Domain(object):
        pass

    def get_objects(self):
        """
        Return an iterable of "object descriptions", which are tuples with
        """Return an iterable of "object descriptions", which are tuples with
        five items:

        * `name` -- fully qualified name
@ -245,9 +238,7 @@ class Domain(object):
        return []

    def get_type_name(self, type, primary=False):
        """
        Return full name for given ObjType.
        """
        """Return full name for given ObjType."""
        if primary:
            return type.lname
        return _('%s %s') % (self.label, type.lname)
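The `role()` adapter described in the docstring above wraps a docutils role function so that it always receives its fully-qualified `'domain:name'` as the first argument, regardless of the short name it was invoked under. The pattern, reduced to plain functions (names here are illustrative):

```python
def make_role_adapter(domain, name, role_fn):
    """Return a wrapper that passes 'domain:name' as the role's first arg."""
    fullname = '%s:%s' % (domain, name)
    def role_adapter(*args, **kwargs):
        # drop the short name the caller used, substitute the full one
        return role_fn(fullname, *args[1:], **kwargs)
    return role_adapter
```

In the real class the adapters are additionally memoized in `self._role_cache`, so each role is wrapped only once.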
@ -5,7 +5,7 @@

    The C language domain.

    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

@ -99,13 +99,20 @@ class CObject(ObjectDescription):
        m = c_funcptr_name_re.match(name)
        if m:
            name = m.group(1)

        typename = self.env.temp_data.get('c:type')
        if self.name == 'c:member' and typename:
            fullname = typename + '.' + name
        else:
            fullname = name

        if not arglist:
            if self.objtype == 'function':
                # for functions, add an empty parameter list
                signode += addnodes.desc_parameterlist()
            if const:
                signode += addnodes.desc_addname(const, const)
            return name
            return fullname

        paramlist = addnodes.desc_parameterlist()
        arglist = arglist.replace('`', '').replace('\\ ', '')  # remove markup
@ -121,12 +128,13 @@ class CObject(ObjectDescription):
                    self._parse_type(param, arg)
                else:
                    self._parse_type(param, ctype)
                param += nodes.emphasis(' '+argname, ' '+argname)
                # separate by non-breaking space in the output
                param += nodes.emphasis(' '+argname, u'\xa0'+argname)
            paramlist += param
        signode += paramlist
        if const:
            signode += addnodes.desc_addname(const, const)
        return name
        return fullname

    def get_index_text(self, name):
        if self.objtype == 'function':
@ -160,7 +168,32 @@ class CObject(ObjectDescription):

        indextext = self.get_index_text(name)
        if indextext:
            self.indexnode['entries'].append(('single', indextext, name, name))
            self.indexnode['entries'].append(('single', indextext, name, ''))

    def before_content(self):
        self.typename_set = False
        if self.name == 'c:type':
            if self.names:
                self.env.temp_data['c:type'] = self.names[0]
                self.typename_set = True

    def after_content(self):
        if self.typename_set:
            self.env.temp_data['c:type'] = None


class CXRefRole(XRefRole):
    def process_link(self, env, refnode, has_explicit_title, title, target):
        if not has_explicit_title:
            target = target.lstrip('~')  # only has a meaning for the title
            # if the first character is a tilde, don't display the module/class
            # parts of the contents
            if title[0:1] == '~':
                title = title[1:]
                dot = title.rfind('.')
                if dot != -1:
                    title = title[dot+1:]
        return title, target


class CDomain(Domain):
@ -183,11 +216,11 @@ class CDomain(Domain):
        'var': CObject,
    }
    roles = {
        'func' : XRefRole(fix_parens=True),
        'member': XRefRole(),
        'macro': XRefRole(),
        'data': XRefRole(),
        'type': XRefRole(),
        'func' : CXRefRole(fix_parens=True),
        'member': CXRefRole(),
        'macro': CXRefRole(),
        'data': CXRefRole(),
        'type': CXRefRole(),
    }
    initial_data = {
        'objects': {},  # fullname -> docname, objtype
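The new `CXRefRole.process_link` above implements the `~` shorthand for C cross-references: a leading tilde is stripped from the target, and in the displayed title everything up to the last dot is hidden. The same logic extracted as a standalone function (the real method only runs this branch when no explicit title was given):

```python
def process_link(title, target):
    """Mimic the tilde handling in CXRefRole.process_link."""
    target = target.lstrip('~')  # the tilde only has a meaning for the title
    # a leading tilde hides the owning type's name in the displayed title
    if title[0:1] == '~':
        title = title[1:]
        dot = title.rfind('.')
        if dot != -1:
            title = title[dot+1:]
    return title, target
```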
@ -5,7 +5,7 @@

    The C++ language domain.

    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

@ -21,14 +21,14 @@ from sphinx.domains import Domain, ObjType
from sphinx.directives import ObjectDescription
from sphinx.util.nodes import make_refnode
from sphinx.util.compat import Directive
from sphinx.util.docfields import TypedField


_identifier_re = re.compile(r'\b(~?[a-zA-Z_][a-zA-Z0-9_]*)\b')
_identifier_re = re.compile(r'(~?\b[a-zA-Z_][a-zA-Z0-9_]*)\b')
_whitespace_re = re.compile(r'\s+(?u)')
_string_re = re.compile(r"[LuU8]?('([^'\\]*(?:\\.[^'\\]*)*)'"
                        r'|"([^"\\]*(?:\\.[^"\\]*)*)")', re.S)
_visibility_re = re.compile(r'\b(public|private|protected)\b')
_array_def_re = re.compile(r'\[\s*(.+?)?\s*\]')
_operator_re = re.compile(r'''(?x)
        \[\s*\]
    |   \(\s*\)
@ -110,7 +110,7 @@ class DefinitionError(Exception):
        return self.description

    def __str__(self):
        return unicode(self.encode('utf-8'))
        return unicode(self).encode('utf-8')


class DefExpr(object):
@ -132,17 +132,21 @@ class DefExpr(object):
    def __ne__(self, other):
        return not self.__eq__(other)

    __hash__ = None

    def clone(self):
        """Close a definition expression node"""
        """Clone a definition expression node."""
        return deepcopy(self)

    def get_id(self):
        """Returns the id for the node"""
        """Return the id for the node."""
        return u''

    def get_name(self):
        """Returns the name.  Returns either `None` or a node with
        a name you might call :meth:`split_owner` on.
        """Return the name.

        Returns either `None` or a node with a name you might call
        :meth:`split_owner` on.
        """
        return None

@ -150,19 +154,19 @@ class DefExpr(object):
        """Nodes returned by :meth:`get_name` can split off their
        owning parent.  This function returns the owner and the
        name as a tuple of two items.  If a node does not support
        it, :exc:`NotImplementedError` is raised.
        it, it returns None as owner and self as name.
        """
        raise NotImplementedError()
        return None, self

    def prefix(self, prefix):
        """Prefixes a name node (a node returned by :meth:`get_name`)."""
        """Prefix a name node (a node returned by :meth:`get_name`)."""
        raise NotImplementedError()

    def __str__(self):
        return unicode(self).encode('utf-8')

    def __repr__(self):
        return '<defexpr %s>' % self
        return '<%s %s>' % (self.__class__.__name__, self)


class PrimaryDefExpr(DefExpr):
@ -170,9 +174,6 @@ class PrimaryDefExpr(DefExpr):
    def get_name(self):
        return self

    def split_owner(self):
        return None, self

    def prefix(self, prefix):
        if isinstance(prefix, PathDefExpr):
            prefix = prefix.clone()
@ -273,6 +274,22 @@ class PtrDefExpr(WrappingDefExpr):
        return u'%s*' % self.typename


class ArrayDefExpr(WrappingDefExpr):

    def __init__(self, typename, size_hint=None):
        WrappingDefExpr.__init__(self, typename)
        self.size_hint = size_hint

    def get_id(self):
        return self.typename.get_id() + u'A'

    def __unicode__(self):
        return u'%s[%s]' % (
            self.typename,
            self.size_hint is not None and unicode(self.size_hint) or u''
        )


class RefDefExpr(WrappingDefExpr):

    def get_id(self):
@ -323,9 +340,8 @@ class ArgumentDefExpr(DefExpr):
|
||||
return self.type.get_id()
|
||||
|
||||
def __unicode__(self):
|
||||
return (self.type is not None and u'%s %s' % (self.type, self.name)
|
||||
or unicode(self.name)) + (self.default is not None and
|
||||
u'=%s' % self.default or u'')
|
||||
return (u'%s %s' % (self.type or u'', self.name or u'')).strip() + \
|
||||
(self.default is not None and u'=%s' % self.default or u'')
|
||||
|
||||
|
||||
class NamedDefExpr(DefExpr):
|
||||
@ -445,9 +461,9 @@ class DefinitionParser(object):
|
||||
'mutable': None,
|
||||
'const': None,
|
||||
'typename': None,
|
||||
'unsigned': set(('char', 'int', 'long')),
|
||||
'signed': set(('char', 'int', 'long')),
|
||||
'short': set(('int', 'short')),
|
||||
'unsigned': set(('char', 'short', 'int', 'long')),
|
||||
'signed': set(('char', 'short', 'int', 'long')),
|
||||
'short': set(('int',)),
|
||||
'long': set(('int', 'long', 'double'))
|
||||
}
|
||||
|
||||
@ -563,6 +579,8 @@ class DefinitionParser(object):
|
||||
expr = ConstDefExpr(expr)
|
||||
elif self.skip_string('*'):
|
||||
expr = PtrDefExpr(expr)
|
||||
elif self.match(_array_def_re):
|
||||
expr = ArrayDefExpr(expr, self.last_match.group(1))
|
||||
elif self.skip_string('&'):
|
||||
expr = RefDefExpr(expr)
|
||||
else:
|
||||
@ -694,14 +712,13 @@ class DefinitionParser(object):
|
||||
self.fail('expected comma between arguments')
|
||||
self.skip_ws()
|
||||
|
||||
argname = self._parse_type()
|
||||
argtype = default = None
|
||||
argtype = self._parse_type()
|
||||
argname = default = None
|
||||
self.skip_ws()
|
||||
if self.skip_string('='):
|
||||
self.pos += 1
|
||||
default = self._parse_default_expr()
|
||||
elif self.current_char not in ',)':
|
||||
argtype = argname
|
||||
argname = self._parse_name()
|
||||
self.skip_ws()
|
||||
if self.skip_string('='):
|
||||
@ -824,17 +841,18 @@ class CPPObject(ObjectDescription):
|
||||
def add_target_and_index(self, sigobj, sig, signode):
|
||||
theid = sigobj.get_id()
|
||||
name = unicode(sigobj.name)
|
||||
signode['names'].append(theid)
|
||||
signode['ids'].append(theid)
|
||||
signode['first'] = (not self.names)
|
||||
self.state.document.note_explicit_target(signode)
|
||||
if theid not in self.state.document.ids:
|
||||
signode['names'].append(theid)
|
||||
signode['ids'].append(theid)
|
||||
signode['first'] = (not self.names)
|
||||
self.state.document.note_explicit_target(signode)
|
||||
|
||||
self.env.domaindata['cpp']['objects'].setdefault(name,
|
||||
(self.env.docname, self.objtype, theid))
|
||||
self.env.domaindata['cpp']['objects'].setdefault(name,
|
||||
(self.env.docname, self.objtype, theid))
|
||||
|
||||
indextext = self.get_index_text(name)
|
||||
if indextext:
|
||||
self.indexnode['entries'].append(('single', indextext, name, name))
|
||||
self.indexnode['entries'].append(('single', indextext, theid, ''))
|
||||
|
||||
def before_content(self):
|
||||
lastname = self.names and self.names[-1]
|
||||
@ -982,8 +1000,9 @@ class CPPFunctionObject(CPPObject):
|
||||
|
||||
|
||||
class CPPCurrentNamespace(Directive):
|
||||
"""This directive is just to tell Sphinx that we're documenting
|
||||
stuff in namespace foo.
|
||||
"""
|
||||
This directive is just to tell Sphinx that we're documenting stuff in
|
||||
namespace foo.
|
||||
"""
|
||||
|
||||
has_content = False
|
||||
@ -1071,13 +1090,15 @@ class CPPDomain(Domain):
|
||||
contnode, name)
|
||||
|
||||
parser = DefinitionParser(target)
|
||||
# XXX: warn?
|
||||
try:
|
||||
expr = parser.parse_type().get_name()
|
||||
parser.skip_ws()
|
||||
if not parser.eof or expr is None:
|
||||
return None
|
||||
raise DefinitionError('')
|
||||
except DefinitionError:
|
||||
refdoc = node.get('refdoc', fromdocname)
|
||||
env.warn(refdoc, 'unparseable C++ definition: %r' % target,
|
||||
node.line)
|
||||
return None
|
||||
|
||||
parent = node['cpp:parent']
|
||||
|
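The `_identifier_re` change in the C++ domain above moves the `\b` word boundary inside the optional `~`. The point of the fix: `\b` can never match between start-of-string and a non-word character like `~`, so the old pattern could not anchor a destructor name at its tilde. A small sketch of the before/after behavior (regexes copied from the diff):

```python
import re

# old pattern: word boundary required *before* the optional '~'
old_re = re.compile(r'\b(~?[a-zA-Z_][a-zA-Z0-9_]*)\b')
# new pattern: consume the '~' first, then require the boundary
new_re = re.compile(r'(~?\b[a-zA-Z_][a-zA-Z0-9_]*)\b')

old_re.match('~MyClass')            # no match: '\b' fails before '~'
new_re.match('~MyClass').group(1)   # '~MyClass'
```

Plain identifiers still match either way; only names starting with the destructor tilde are affected.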
@@ -5,7 +5,7 @@

    The JavaScript domain.

    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

@@ -13,8 +13,8 @@ from sphinx import addnodes
from sphinx.domains import Domain, ObjType
from sphinx.locale import l_, _
from sphinx.directives import ObjectDescription
from sphinx.domains.python import py_paramlist_re as js_paramlist_re
from sphinx.roles import XRefRole
from sphinx.domains.python import _pseudo_parse_arglist
from sphinx.util.nodes import make_refnode
from sphinx.util.docfields import Field, GroupedField, TypedField

@@ -56,7 +56,7 @@ class JSObject(ObjectDescription):
        else:
            # just a function or constructor
            objectname = ''
            fullname = ''
            fullname = name

        signode['object'] = objectname
        signode['fullname'] = fullname
@@ -68,28 +68,10 @@ class JSObject(ObjectDescription):
            signode += addnodes.desc_addname(nameprefix + '.', nameprefix + '.')
        signode += addnodes.desc_name(name, name)
        if self.has_arguments:
            signode += addnodes.desc_parameterlist()
            if not arglist:
                return fullname, nameprefix

            stack = [signode[-1]]
            for token in js_paramlist_re.split(arglist):
                if token == '[':
                    opt = addnodes.desc_optional()
                    stack[-1] += opt
                    stack.append(opt)
                elif token == ']':
                    try:
                        stack.pop()
                    except IndexError:
                        raise ValueError()
                elif not token or token == ',' or token.isspace():
                    pass
            if not arglist:
                signode += addnodes.desc_parameterlist()
                else:
                    token = token.strip()
                    stack[-1] += addnodes.desc_parameter(token, token)
            if len(stack) != 1:
                raise ValueError()
                _pseudo_parse_arglist(signode, arglist)
        return fullname, nameprefix

    def add_target_and_index(self, name_obj, sig, signode):
@@ -114,7 +96,8 @@ class JSObject(ObjectDescription):
        indextext = self.get_index_text(objectname, name_obj)
        if indextext:
            self.indexnode['entries'].append(('single', indextext,
                                              fullname, fullname))
                                              fullname.replace('$', '_S_'),
                                              ''))

    def get_index_text(self, objectname, name_obj):
        name, obj = name_obj
@@ -148,7 +131,7 @@ class JSCallable(JSObject):


class JSConstructor(JSCallable):
    """Like a callable but with a different prefix"""
    """Like a callable but with a different prefix."""
    display_prefix = 'class '


@@ -226,8 +209,10 @@ class JavaScriptDomain(Domain):
        name, obj = self.find_obj(env, objectname, target, typ, searchorder)
        if not obj:
            return None
        return make_refnode(builder, fromdocname, obj[0], name, contnode, name)
        return make_refnode(builder, fromdocname, obj[0],
                            name.replace('$', '_S_'), contnode, name)

    def get_objects(self):
        for refname, (docname, type) in self.data['objects'].iteritems():
            yield refname, refname, type, docname, refname, 1
            yield refname, refname, type, docname, \
                  refname.replace('$', '_S_'), 1
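The JavaScript domain changes above route every index target and anchor through `.replace('$', '_S_')`. The apparent motivation: `$` is a legal character in JavaScript identifiers (e.g. jQuery's `$`), but it is not valid in HTML 4 `id` attributes, so a placeholder keeps the generated anchors well-formed. A trivial sketch of that mapping (`anchor_id` is a hypothetical helper name, not part of the patch):

```python
def anchor_id(fullname):
    # '$' is allowed in JavaScript names but not in HTML 4 id attributes,
    # so the domain substitutes the placeholder '_S_' in anchors.
    return fullname.replace('$', '_S_')
```

The same substitution has to be applied on both sides, when the target is registered and when a cross-reference is resolved, which is why the diff touches `add_target_and_index`, `resolve_xref`, and `get_objects` together.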
@@ -5,7 +5,7 @@

    The Python domain.

    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

@@ -33,7 +33,52 @@ py_sig_re = re.compile(
          )? $                   # and nothing more
          ''', re.VERBOSE)

py_paramlist_re = re.compile(r'([\[\],])')  # split at '[', ']' and ','

def _pseudo_parse_arglist(signode, arglist):
    """"Parse" a list of arguments separated by commas.

    Arguments can have "optional" annotations given by enclosing them in
    brackets. Currently, this will split at any comma, even if it's inside a
    string literal (e.g. default argument value).
    """
    paramlist = addnodes.desc_parameterlist()
    stack = [paramlist]
    try:
        for argument in arglist.split(','):
            argument = argument.strip()
            ends_open = ends_close = 0
            while argument.startswith('['):
                stack.append(addnodes.desc_optional())
                stack[-2] += stack[-1]
                argument = argument[1:].strip()
            while argument.startswith(']'):
                stack.pop()
                argument = argument[1:].strip()
            while argument.endswith(']'):
                ends_close += 1
                argument = argument[:-1].strip()
            while argument.endswith('['):
                ends_open += 1
                argument = argument[:-1].strip()
            if argument:
                stack[-1] += addnodes.desc_parameter(argument, argument)
            while ends_open:
                stack.append(addnodes.desc_optional())
                stack[-2] += stack[-1]
                ends_open -= 1
            while ends_close:
                stack.pop()
                ends_close -= 1
        if len(stack) != 1:
            raise IndexError
    except IndexError:
        # if there are too few or too many elements on the stack, just give up
        # and treat the whole argument list as one argument, discarding the
        # already partially populated paramlist node
        signode += addnodes.desc_parameterlist()
        signode[-1] += addnodes.desc_parameter(arglist, arglist)
    else:
        signode += paramlist


class PyObject(ObjectDescription):
@@ -43,16 +88,19 @@ class PyObject(ObjectDescription):
    option_spec = {
        'noindex': directives.flag,
        'module': directives.unchanged,
        'annotation': directives.unchanged,
    }

    doc_field_types = [
        TypedField('parameter', label=l_('Parameters'),
                   names=('param', 'parameter', 'arg', 'argument',
                          'keyword', 'kwarg', 'kwparam'),
                   typerolename='obj', typenames=('paramtype', 'type')),
                   typerolename='obj', typenames=('paramtype', 'type'),
                   can_collapse=True),
        TypedField('variable', label=l_('Variables'), rolename='obj',
                   names=('var', 'ivar', 'cvar'),
                   typerolename='obj', typenames=('vartype',)),
                   typerolename='obj', typenames=('vartype',),
                   can_collapse=True),
        GroupedField('exceptions', label=l_('Raises'), rolename='exc',
                     names=('raises', 'raise', 'exception', 'except'),
                     can_collapse=True),
@@ -63,22 +111,21 @@ class PyObject(ObjectDescription):
    ]

    def get_signature_prefix(self, sig):
        """
        May return a prefix to put before the object name in the signature.
        """May return a prefix to put before the object name in the
        signature.
        """
        return ''

    def needs_arglist(self):
        """
        May return true if an empty argument list is to be generated even if
        """May return true if an empty argument list is to be generated even if
        the document contains none.
        """
        return False

    def handle_signature(self, sig, signode):
        """
        Transform a Python signature into RST nodes.
        Returns (fully qualified name of the thing, classname if any).
        """Transform a Python signature into RST nodes.

        Return (fully qualified name of the thing, classname if any).

        If inside a class, the current class name is handled intelligently:
        * it is stripped from the displayed name if present
@@ -134,6 +181,8 @@ class PyObject(ObjectDescription):
                nodetext = modname + '.'
            signode += addnodes.desc_addname(nodetext, nodetext)

        anno = self.options.get('annotation')

        signode += addnodes.desc_name(name, name)
        if not arglist:
            if self.needs_arglist():
@@ -141,35 +190,19 @@ class PyObject(ObjectDescription):
                signode += addnodes.desc_parameterlist()
            if retann:
                signode += addnodes.desc_returns(retann, retann)
            if anno:
                signode += addnodes.desc_annotation(' ' + anno, ' ' + anno)
            return fullname, name_prefix
        signode += addnodes.desc_parameterlist()

        stack = [signode[-1]]
        for token in py_paramlist_re.split(arglist):
            if token == '[':
                opt = addnodes.desc_optional()
                stack[-1] += opt
                stack.append(opt)
            elif token == ']':
                try:
                    stack.pop()
                except IndexError:
                    raise ValueError
            elif not token or token == ',' or token.isspace():
                pass
            else:
                token = token.strip()
                stack[-1] += addnodes.desc_parameter(token, token)
        if len(stack) != 1:
            raise ValueError
        _pseudo_parse_arglist(signode, arglist)
        if retann:
            signode += addnodes.desc_returns(retann, retann)
        if anno:
            signode += addnodes.desc_annotation(' ' + anno, ' ' + anno)
        return fullname, name_prefix

    def get_index_text(self, modname, name):
        """
        Return the text for the index entry of the object.
        """
        """Return the text for the index entry of the object."""
        raise NotImplementedError('must be implemented in subclasses')

    def add_target_and_index(self, name_cls, sig, signode):
@@ -196,7 +229,7 @@ class PyObject(ObjectDescription):
        indextext = self.get_index_text(modname, name_cls)
        if indextext:
            self.indexnode['entries'].append(('single', indextext,
                                              fullname, fullname))
                                              fullname, ''))

    def before_content(self):
        # needed for automatic qualification of members (reset in subclasses)
@@ -332,6 +365,38 @@ class PyClassmember(PyObject):
            self.clsname_set = True


class PyDecoratorMixin(object):
    """
    Mixin for decorator directives.
    """
    def handle_signature(self, sig, signode):
        ret = super(PyDecoratorMixin, self).handle_signature(sig, signode)
        signode.insert(0, addnodes.desc_addname('@', '@'))
        return ret

    def needs_arglist(self):
        return False


class PyDecoratorFunction(PyDecoratorMixin, PyModulelevel):
    """
    Directive to mark functions meant to be used as decorators.
    """
    def run(self):
        # a decorator function is a function after all
        self.name = 'py:function'
        return PyModulelevel.run(self)


class PyDecoratorMethod(PyDecoratorMixin, PyClassmember):
    """
    Directive to mark methods meant to be used as decorators.
    """
    def run(self):
        self.name = 'py:method'
        return PyClassmember.run(self)


class PyModule(Directive):
    """
    Directive to mark description of a new module.
@@ -356,22 +421,18 @@ class PyModule(Directive):
            env.domaindata['py']['modules'][modname] = \
                (env.docname, self.options.get('synopsis', ''),
                 self.options.get('platform', ''), 'deprecated' in self.options)
            # make a duplicate entry in 'objects' to facilitate searching for the
            # module in PythonDomain.find_obj()
            env.domaindata['py']['objects'][modname] = (env.docname, 'module')
            targetnode = nodes.target('', '', ids=['module-' + modname], ismod=True)
            self.state.document.note_explicit_target(targetnode)
            ret = [targetnode]
        # XXX this behavior of the module directive is a mess...
        if 'platform' in self.options:
            platform = self.options['platform']
            node = nodes.paragraph()
            node += nodes.emphasis('', _('Platforms: '))
            node += nodes.Text(platform, platform)
            ret.append(node)
        # the synopsis isn't printed; in fact, it is only used in the
        # modindex currently
        # the platform and synopsis aren't printed; in fact, they are only used
        # in the modindex currently
        if not noindex:
            indextext = _('%s (module)') % modname
            inode = addnodes.index(entries=[('single', indextext,
                                             'module-' + modname, modname)])
                                             'module-' + modname, '')])
            ret.append(inode)
        return ret

@@ -506,16 +567,18 @@ class PythonDomain(Domain):
    }

    directives = {
        'function': PyModulelevel,
        'data': PyModulelevel,
        'class': PyClasslike,
        'exception': PyClasslike,
        'method': PyClassmember,
        'classmethod': PyClassmember,
        'staticmethod': PyClassmember,
        'attribute': PyClassmember,
        'module': PyModule,
        'currentmodule': PyCurrentModule,
        'function': PyModulelevel,
        'data': PyModulelevel,
        'class': PyClasslike,
        'exception': PyClasslike,
        'method': PyClassmember,
        'classmethod': PyClassmember,
        'staticmethod': PyClassmember,
        'attribute': PyClassmember,
        'module': PyModule,
        'currentmodule': PyCurrentModule,
        'decorator': PyDecoratorFunction,
        'decoratormethod': PyDecoratorMethod,
    }
    roles = {
        'data': PyXRefRole(),
@@ -544,38 +607,46 @@ class PythonDomain(Domain):
            if fn == docname:
                del self.data['modules'][modname]

    def find_obj(self, env, modname, classname, name, type, searchorder=0):
        """
        Find a Python object for "name", perhaps using the given module and/or
        classname. Returns a list of (name, object entry) tuples.
    def find_obj(self, env, modname, classname, name, type, searchmode=0):
        """Find a Python object for "name", perhaps using the given module
        and/or classname. Returns a list of (name, object entry) tuples.
        """
        # skip parens
        if name[-2:] == '()':
            name = name[:-2]

        if not name:
            return None, None
            return []

        objects = self.data['objects']
        matches = []

        newname = None
        if searchorder == 1:
            if modname and classname and \
                   modname + '.' + classname + '.' + name in objects:
                newname = modname + '.' + classname + '.' + name
            elif modname and modname + '.' + name in objects:
                newname = modname + '.' + name
            elif name in objects:
                newname = name
            else:
                # "fuzzy" searching mode
                searchname = '.' + name
                matches = [(name, objects[name]) for name in objects
                           if name.endswith(searchname)]
        if searchmode == 1:
            objtypes = self.objtypes_for_role(type)
            if modname and classname:
                fullname = modname + '.' + classname + '.' + name
                if fullname in objects and objects[fullname][1] in objtypes:
                    newname = fullname
            if not newname:
                if modname and modname + '.' + name in objects and \
                   objects[modname + '.' + name][1] in objtypes:
                    newname = modname + '.' + name
                elif name in objects and objects[name][1] in objtypes:
                    newname = name
                else:
                    # "fuzzy" searching mode
                    searchname = '.' + name
                    matches = [(name, objects[name]) for name in objects
                               if name.endswith(searchname)
                               and objects[name][1] in objtypes]
        else:
            # NOTE: searching for exact match, object type is not considered
            if name in objects:
                newname = name
            elif type == 'mod':
                # only exact matches allowed for modules
                return []
            elif classname and classname + '.' + name in objects:
                newname = classname + '.' + name
            elif modname and modname + '.' + name in objects:
@@ -597,33 +668,35 @@ class PythonDomain(Domain):

    def resolve_xref(self, env, fromdocname, builder,
                     type, target, node, contnode):
        if (type == 'mod' or
            type == 'obj' and target in self.data['modules']):
            docname, synopsis, platform, deprecated = \
                self.data['modules'].get(target, ('','','', ''))
            if not docname:
                return None
            else:
                title = '%s%s%s' % ((platform and '(%s) ' % platform),
                                    synopsis,
                                    (deprecated and ' (deprecated)' or ''))
                return make_refnode(builder, fromdocname, docname,
                                    'module-' + target, contnode, title)
        modname = node.get('py:module')
        clsname = node.get('py:class')
        searchmode = node.hasattr('refspecific') and 1 or 0
        matches = self.find_obj(env, modname, clsname, target,
                                type, searchmode)
        if not matches:
            return None
        elif len(matches) > 1:
            env.warn(fromdocname,
                     'more than one target found for cross-reference '
                     '%r: %s' % (target,
                                 ', '.join(match[0] for match in matches)),
                     node.line)
        name, obj = matches[0]

        if obj[1] == 'module':
            # get additional info for modules
            docname, synopsis, platform, deprecated = self.data['modules'][name]
            assert docname == obj[0]
            title = name
            if synopsis:
                title += ': ' + synopsis
            if deprecated:
                title += _(' (deprecated)')
            if platform:
                title += ' (' + platform + ')'
            return make_refnode(builder, fromdocname, docname,
                                'module-' + name, contnode, title)
        else:
            modname = node.get('py:module')
            clsname = node.get('py:class')
            searchorder = node.hasattr('refspecific') and 1 or 0
            matches = self.find_obj(env, modname, clsname, target,
                                    type, searchorder)
            if not matches:
                return None
            elif len(matches) > 1:
                env.warn(fromdocname,
                         'more than one target found for cross-reference '
                         '%r: %s' % (target,
                                     ', '.join(match[0] for match in matches)),
                         node.line)
            name, obj = matches[0]
            return make_refnode(builder, fromdocname, obj[0], name,
                                contnode, name)
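The new `_pseudo_parse_arglist` above tracks `[`/`]` optional-argument brackets while splitting on commas, and falls back to treating the whole string as one argument if the brackets are unbalanced. A rough standalone sketch of that bracket-tracking idea, using plain tuples instead of docutils nodes (`parse_arglist` is a hypothetical name, not part of the patch):

```python
def parse_arglist(arglist):
    """Split 'a[, b[, c]]' into (argument, optional-nesting-depth) pairs."""
    result, depth = [], 0
    for argument in arglist.split(','):
        argument = argument.strip()
        # brackets before the argument open/close groups immediately
        while argument.startswith('['):
            depth += 1
            argument = argument[1:].strip()
        while argument.startswith(']'):
            depth -= 1
            argument = argument[1:].strip()
        # trailing brackets take effect only after this argument
        opens = closes = 0
        while argument.endswith(']'):
            closes += 1
            argument = argument[:-1].strip()
        while argument.endswith('['):
            opens += 1
            argument = argument[:-1].strip()
        if argument:
            result.append((argument, depth))
        depth += opens - closes
    if depth != 0:
        raise ValueError('unbalanced brackets in %r' % arglist)
    return result
```

The real function builds nested `desc_optional` nodes instead of counting depth, but the control flow (strip leading brackets first, defer trailing ones until after the argument is emitted) is the same.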
@@ -5,7 +5,7 @@

    The reStructuredText domain.

    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

@@ -28,9 +28,10 @@ class ReSTMarkup(ObjectDescription):
    """

    def add_target_and_index(self, name, sig, signode):
        if name not in self.state.document.ids:
            signode['names'].append(name)
            signode['ids'].append(name)
        targetname = self.objtype + '-' + name
        if targetname not in self.state.document.ids:
            signode['names'].append(targetname)
            signode['ids'].append(targetname)
            signode['first'] = (not self.names)
            self.state.document.note_explicit_target(signode)

@@ -47,7 +48,7 @@ class ReSTMarkup(ObjectDescription):
        indextext = self.get_index_text(self.objtype, name)
        if indextext:
            self.indexnode['entries'].append(('single', indextext,
                                              name, name))
                                              targetname, ''))

    def get_index_text(self, objectname, name):
        if self.objtype == 'directive':
@@ -58,9 +59,10 @@ class ReSTMarkup(ObjectDescription):


def parse_directive(d):
    """
    Parses a directive signature. Returns (directive, arguments) string tuple.
    if no arguments are given, returns (directive, '').
    """Parse a directive signature.

    Returns (directive, arguments) string tuple. If no arguments are given,
    returns (directive, '').
    """
    dir = d.strip()
    if not dir.startswith('.'):
@@ -129,8 +131,9 @@ class ReSTDomain(Domain):
        if (objtype, target) in objects:
            return make_refnode(builder, fromdocname,
                                objects[objtype, target],
                                target, contnode, target)
                                objtype + '-' + target,
                                contnode, target + ' ' + objtype)

    def get_objects(self):
        for (typ, name), docname in self.data['objects'].iteritems():
            yield name, name, typ, docname, name, 1
            yield name, name, typ, docname, typ + '-' + name, 1
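The reST-domain change above stops using the bare object name as the HTML target and prefixes it with the object type instead. The apparent reason: a directive and a role may share a name, and without the prefix their anchors on the same page would collide. A one-line sketch of the scheme (`target_id` is a hypothetical helper name):

```python
def target_id(objtype, name):
    # e.g. the 'toctree' directive and a hypothetical 'toctree' role get
    # distinct anchors: 'directive-toctree' vs. 'role-toctree'
    return objtype + '-' + name
```

As with the JavaScript change, the prefix must be applied consistently in `add_target_and_index`, `resolve_xref`, and `get_objects` so that registration and resolution agree.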
@@ -5,14 +5,16 @@

    The standard domain.

    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import re
import unicodedata

from docutils import nodes
from docutils.parsers.rst import directives
from docutils.statemachine import ViewList

from sphinx import addnodes
from sphinx.roles import XRefRole
@@ -59,7 +61,7 @@ class GenericObject(ObjectDescription):
            indextype = 'single'
            indexentry = self.indextemplate % (name,)
        self.indexnode['entries'].append((indextype, indexentry,
                                          targetname, targetname))
                                          targetname, ''))
        self.env.domaindata['std']['objects'][self.objtype, name] = \
            self.env.docname, targetname

@@ -80,8 +82,8 @@ class EnvVarXRefRole(XRefRole):
        tgtid = 'index-%s' % env.new_serialno('index')
        indexnode = addnodes.index()
        indexnode['entries'] = [
            ('single', varname, tgtid, varname),
            ('single', _('environment variable; %s') % varname, tgtid, varname)
            ('single', varname, tgtid, ''),
            ('single', _('environment variable; %s') % varname, tgtid, '')
        ]
        targetnode = nodes.target('', '', ids=[tgtid])
        document.note_explicit_target(targetnode)
@@ -116,7 +118,7 @@ class Target(Directive):
            indextype = indexentry[:colon].strip()
            indexentry = indexentry[colon+1:].strip()
            inode = addnodes.index(entries=[(indextype, indexentry,
                                             targetname, targetname)])
                                             targetname, '')])
            ret.insert(0, inode)
        name = self.name
        if ':' in self.name:
@@ -159,7 +161,7 @@ class Cmdoption(ObjectDescription):
        self.indexnode['entries'].append(
            ('pair', _('%scommand line option; %s') %
             ((currprogram and currprogram + ' ' or ''), sig),
             targetname, targetname))
             targetname, ''))
        self.env.domaindata['std']['progoptions'][currprogram, name] = \
            self.env.docname, targetname

@@ -205,8 +207,8 @@ class OptionXRefRole(XRefRole):

class Glossary(Directive):
    """
    Directive to create a glossary with cross-reference targets
    for :term: roles.
    Directive to create a glossary with cross-reference targets for :term:
    roles.
    """

    has_content = True
@@ -223,37 +225,100 @@ class Glossary(Directive):
        gloss_entries = env.temp_data.setdefault('gloss_entries', set())
        node = addnodes.glossary()
        node.document = self.state.document
        self.state.nested_parse(self.content, self.content_offset, node)

        # the content should be definition lists
        dls = [child for child in node
               if isinstance(child, nodes.definition_list)]
        # now, extract definition terms to enable cross-reference creation
        new_dl = nodes.definition_list()
        new_dl['classes'].append('glossary')
        # This directive implements a custom format of the reST definition list
        # that allows multiple lines of terms before the definition. This is
        # easy to parse since we know that the contents of the glossary *must
        # be* a definition list.

        # first, collect single entries
        entries = []
        in_definition = True
        was_empty = True
        messages = []
        for line, (source, lineno) in zip(self.content, self.content.items):
            # empty line -> add to last definition
            if not line:
                if in_definition and entries:
                    entries[-1][1].append('', source, lineno)
                was_empty = True
                continue
            # unindented line -> a term
            if line and not line[0].isspace():
                # first term of definition
                if in_definition:
                    if not was_empty:
                        messages.append(self.state.reporter.system_message(
                            2, 'glossary term must be preceded by empty line',
                            source=source, line=lineno))
                    entries.append(([(line, source, lineno)], ViewList()))
                    in_definition = False
                # second term and following
                else:
                    if was_empty:
                        messages.append(self.state.reporter.system_message(
                            2, 'glossary terms must not be separated by empty '
                            'lines', source=source, line=lineno))
                    entries[-1][0].append((line, source, lineno))
            else:
                if not in_definition:
                    # first line of definition, determines indentation
                    in_definition = True
                    indent_len = len(line) - len(line.lstrip())
                entries[-1][1].append(line[indent_len:], source, lineno)
                was_empty = False

        # now, parse all the entries into a big definition list
        items = []
        for dl in dls:
            for li in dl.children:
                if not li.children or not isinstance(li[0], nodes.term):
                    continue
                termtext = li.children[0].astext()
        for terms, definition in entries:
            termtexts = []
            termnodes = []
            system_messages = []
            ids = []
            for line, source, lineno in terms:
                # parse the term with inline markup
                res = self.state.inline_text(line, lineno)
                system_messages.extend(res[1])

                # get a text-only representation of the term and register it
                # as a cross-reference target
                tmp = nodes.paragraph('', '', *res[0])
                termtext = tmp.astext()
                new_id = 'term-' + nodes.make_id(termtext)
                if new_id in gloss_entries:
                    new_id = 'term-' + str(len(gloss_entries))
                gloss_entries.add(new_id)
                li[0]['names'].append(new_id)
                li[0]['ids'].append(new_id)
                ids.append(new_id)
                objects['term', termtext.lower()] = env.docname, new_id
                termtexts.append(termtext)
                # add an index entry too
                indexnode = addnodes.index()
                indexnode['entries'] = [('single', termtext, new_id, termtext)]
                li.insert(0, indexnode)
                items.append((termtext, li))
                indexnode['entries'] = [('single', termtext, new_id, 'main')]
                termnodes.append(indexnode)
                termnodes.extend(res[0])
                termnodes.append(addnodes.termsep())
            # make a single "term" node with all the terms, separated by termsep
            # nodes (remove the dangling trailing separator)
            term = nodes.term('', '', *termnodes[:-1])
            term['ids'].extend(ids)
            term['names'].extend(ids)
            term += system_messages

            defnode = nodes.definition()
            self.state.nested_parse(definition, definition.items[0][1], defnode)

            items.append((termtexts,
                          nodes.definition_list_item('', term, defnode)))

        if 'sorted' in self.options:
            items.sort(key=lambda x: x[0].lower())
        new_dl.extend(item[1] for item in items)
        node.children = [new_dl]
        return [node]
            items.sort(key=lambda x:
                       unicodedata.normalize('NFD', x[0][0].lower()))

        dlist = nodes.definition_list()
        dlist['classes'].append('glossary')
        dlist.extend(item[1] for item in items)
        node += dlist
        return messages + [node]

token_re = re.compile('`([a-z_][a-z0-9_]*)`')
|
||||
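The new sort key runs each term through `unicodedata.normalize('NFD', ...)` so that accented terms are decomposed and compared by their base letters rather than by raw code points. A minimal sketch of the same idea in plain Python, outside Sphinx (term list is illustrative):

```python
import unicodedata

def glossary_sort_key(term):
    # NFD decomposition splits "É" into "E" plus a combining accent,
    # so accented terms sort near their unaccented base letters
    return unicodedata.normalize('NFD', term.lower())

terms = ['Zebra', 'Éclair', 'apple', 'eclipse']
print(sorted(terms, key=glossary_sort_key))
```

Without the normalization, `'Éclair'.lower()` starts with the code point U+00E9 and would sort after every plain ASCII term.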
@@ -346,11 +411,13 @@ class StandardDomain(Domain):
        # links to tokens in grammar productions
        'token': XRefRole(),
        # links to terms in glossary
        'term': XRefRole(lowercase=True, innernodeclass=nodes.emphasis),
        'term': XRefRole(lowercase=True, innernodeclass=nodes.emphasis,
                         warn_dangling=True),
        # links to headings or arbitrary labels
        'ref': XRefRole(lowercase=True, innernodeclass=nodes.emphasis),
        'ref': XRefRole(lowercase=True, innernodeclass=nodes.emphasis,
                        warn_dangling=True),
        # links to labels, without a different title
        'keyword': XRefRole(),
        'keyword': XRefRole(warn_dangling=True),
    }

    initial_data = {
@@ -368,6 +435,13 @@ class StandardDomain(Domain):
        },
    }

    dangling_warnings = {
        'term': 'term not in glossary: %(target)s',
        'ref': 'undefined label: %(target)s (if the link has no caption '
               'the label must precede a section header)',
        'keyword': 'unknown keyword: %(target)s',
    }

    def clear_doc(self, docname):
        for key, (fn, _) in self.data['progoptions'].items():
            if fn == docname:
@@ -425,27 +499,16 @@ class StandardDomain(Domain):
    def resolve_xref(self, env, fromdocname, builder,
                     typ, target, node, contnode):
        if typ == 'ref':
            #refdoc = node.get('refdoc', fromdocname)
            if node['refexplicit']:
                # reference to anonymous label; the reference uses
                # the supplied link caption
                docname, labelid = self.data['anonlabels'].get(target, ('',''))
                sectname = node.astext()
                # XXX warn somehow if not resolved by intersphinx
                #if not docname:
                #    env.warn(refdoc, 'undefined label: %s' %
                #             target, node.line)
            else:
                # reference to named label; the final node will
                # contain the section name after the label
                docname, labelid, sectname = self.data['labels'].get(target,
                                                                     ('','',''))
                # XXX warn somehow if not resolved by intersphinx
                #if not docname:
                #    env.warn(refdoc,
                #        'undefined label: %s' % target + ' -- if you '
                #        'don\'t give a link caption the label must '
                #        'precede a section header.', node.line)
            if not docname:
                return None
            newnode = nodes.reference('', '', internal=True)
@@ -469,30 +532,29 @@ class StandardDomain(Domain):
            # keywords are oddballs: they are referenced by named labels
            docname, labelid, _ = self.data['labels'].get(target, ('','',''))
            if not docname:
                #env.warn(refdoc, 'unknown keyword: %s' % target)
                return None
            else:
                return make_refnode(builder, fromdocname, docname,
                                    labelid, contnode)
            return make_refnode(builder, fromdocname, docname,
                                labelid, contnode)
        elif typ == 'option':
            progname = node['refprogram']
            docname, labelid = self.data['progoptions'].get((progname, target),
                                                            ('', ''))
            if not docname:
                return None
            else:
                return make_refnode(builder, fromdocname, docname,
                                    labelid, contnode)
            return make_refnode(builder, fromdocname, docname,
                                labelid, contnode)
        else:
            docname, labelid = self.data['objects'].get((typ, target), ('', ''))
            if not docname:
                if typ == 'term':
                    env.warn(node.get('refdoc', fromdocname),
                             'term not in glossary: %s' % target, node.line)
                return None
                objtypes = self.objtypes_for_role(typ) or []
                for objtype in objtypes:
                    if (objtype, target) in self.data['objects']:
                        docname, labelid = self.data['objects'][objtype, target]
                        break
                else:
            return make_refnode(builder, fromdocname, docname,
                                labelid, contnode)
                    docname, labelid = '', ''
            if not docname:
                return None
            return make_refnode(builder, fromdocname, docname,
                                labelid, contnode)

    def get_objects(self):
        for (prog, option), info in self.data['progoptions'].iteritems():
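The reworked `resolve_xref` no longer gives up when the exact `(typ, target)` key is missing: it tries every object type registered for the role before reporting a dangling reference. The lookup pattern, sketched with a plain dict (the inventory contents and helper name here are illustrative, not Sphinx API):

```python
# hypothetical inventory keyed by (objtype, name), in the spirit of
# StandardDomain.data['objects']
objects = {
    ('envvar', 'PATH'): ('intro', 'envvar-PATH'),
    ('cmdoption', '-v'): ('cli', 'option-v'),
}

def resolve(role_objtypes, target):
    # try each object type the role may refer to, in order
    for objtype in role_objtypes:
        if (objtype, target) in objects:
            return objects[objtype, target]
    return None  # dangling reference -> caller emits a warning

print(resolve(['envvar', 'cmdoption'], 'PATH'))
```

The `warn_dangling=True` roles above pair with this: when `None` comes back, the message template from `dangling_warnings` is formatted with the target name.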
File diff suppressed because it is too large
@@ -6,7 +6,7 @@
    Contains SphinxError and a few subclasses (in an extra module to avoid
    circular import problems).

    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

@@ -5,6 +5,6 @@

    Contains Sphinx features not activated by default.

    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""
@@ -7,14 +7,15 @@
    the doctree, thus avoiding duplication between docstrings and documentation
    for those who like elaborate docstrings.

    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import re
import sys
import inspect
from types import FunctionType, BuiltinFunctionType, MethodType, ClassType
import traceback
from types import FunctionType, BuiltinFunctionType, MethodType

from docutils import nodes
from docutils.utils import assemble_option_dict
@@ -26,16 +27,12 @@ from sphinx.pycode import ModuleAnalyzer, PycodeError
from sphinx.application import ExtensionError
from sphinx.util.nodes import nested_parse_with_titles
from sphinx.util.compat import Directive
from sphinx.util.inspect import isdescriptor, safe_getmembers, safe_getattr
from sphinx.util.inspect import getargspec, isdescriptor, safe_getmembers, \
     safe_getattr, safe_repr
from sphinx.util.pycompat import base_exception, class_types
from sphinx.util.docstrings import prepare_docstring


try:
    base_exception = BaseException
except NameError:
    base_exception = Exception


#: extended signature RE: with explicit module name separated by ::
py_ext_sig_re = re.compile(
    r'''^ ([\w.]+::)?            # explicit module name
@@ -90,7 +87,8 @@ def members_set_option(arg):

def bool_option(arg):
    """Used to convert flag options to auto directives.  (Instead of
    directives.flag(), which returns None.)"""
    directives.flag(), which returns None).
    """
    return True


@@ -107,7 +105,7 @@ class AutodocReporter(object):
        return getattr(self.reporter, name)

    def system_message(self, level, message, *children, **kwargs):
        if 'line' in kwargs:
        if 'line' in kwargs and 'source' not in kwargs:
            try:
                source, line = self.viewlist.items[kwargs['line']]
            except IndexError:
@@ -138,8 +136,7 @@ class AutodocReporter(object):
# Some useful event listener factories for autodoc-process-docstring.

def cut_lines(pre, post=0, what=None):
    """
    Return a listener that removes the first *pre* and last *post*
    """Return a listener that removes the first *pre* and last *post*
    lines of every docstring.  If *what* is a sequence of strings,
    only docstrings of a type in *what* will be processed.

@@ -165,9 +162,8 @@ def cut_lines(pre, post=0, what=None):
    return process

def between(marker, what=None, keepempty=False, exclude=False):
    """
    Return a listener that either keeps, or if *exclude* is True excludes, lines
    between lines that match the *marker* regular expression.  If no line
    """Return a listener that either keeps, or if *exclude* is True excludes,
    lines between lines that match the *marker* regular expression.  If no line
    matches, the resulting docstring would be empty, so no change will be made
    unless *keepempty* is true.

@@ -222,6 +218,8 @@ class Documenter(object):
    priority = 0
    #: order if autodoc_member_order is set to 'groupwise'
    member_order = 0
    #: true if the generated content may contain titles
    titles_allowed = False

    option_spec = {'noindex': bool_option}

@@ -256,6 +254,9 @@ class Documenter(object):
        self.retann = None
        # the object to document (set after import_object succeeds)
        self.object = None
        self.object_name = None
        # the parent/owner of the object to document
        self.parent = None
        # the module analyzer to get at attribute docs, or None
        self.analyzer = None

@@ -264,8 +265,7 @@ class Documenter(object):
        self.directive.result.append(self.indent + line, source, *lineno)

    def resolve_name(self, modname, parents, path, base):
        """
        Resolve the module and name of the object to document given by the
        """Resolve the module and name of the object to document given by the
        arguments and the current module/class.

        Must return a pair of the module name and a chain of attributes; for
@@ -275,8 +275,7 @@ class Documenter(object):
        raise NotImplementedError('must be implemented in subclasses')

    def parse_name(self):
        """
        Determine what module to import and what attribute to document.
        """Determine what module to import and what attribute to document.

        Returns True and sets *self.modname*, *self.objpath*, *self.fullname*,
        *self.args* and *self.retann* if parsing and resolving was successful.
@@ -313,38 +312,44 @@ class Documenter(object):
        return True

    def import_object(self):
        """
        Import the object given by *self.modname* and *self.objpath* and sets
        """Import the object given by *self.modname* and *self.objpath* and set
        it as *self.object*.

        Returns True if successful, False if an error occurred.
        """
        try:
            __import__(self.modname)
            parent = None
            obj = self.module = sys.modules[self.modname]
            for part in self.objpath:
                parent = obj
                obj = self.get_attr(obj, part)
                self.object_name = part
            self.parent = parent
            self.object = obj
            return True
        # this used to only catch SyntaxError, ImportError and AttributeError,
        # but importing modules with side effects can raise all kinds of errors
        except Exception, err:
            if self.env.app and not self.env.app.quiet:
                self.env.app.info(traceback.format_exc().rstrip())
            self.directive.warn(
                'autodoc can\'t import/find %s %r, it reported error: '
                '"%s", please check your spelling and sys.path' %
                (self.objtype, str(self.fullname), err))
            self.env.note_reread()
            return False

    def get_real_modname(self):
        """
        Get the real module name of an object to document. (It can differ
        from the name of the module through which the object was imported.)
        """Get the real module name of an object to document.

        It can differ from the name of the module through which the object was
        imported.
        """
        return self.get_attr(self.object, '__module__', None) or self.modname

    def check_module(self):
        """
        Check if *self.object* is really defined in the module given by
        """Check if *self.object* is really defined in the module given by
        *self.modname*.
        """
        modname = self.get_attr(self.object, '__module__', None)
@@ -353,25 +358,26 @@ class Documenter(object):
        return True

    def format_args(self):
        """
        Format the argument signature of *self.object*. Should return None if
        the object does not have a signature.
        """Format the argument signature of *self.object*.

        Should return None if the object does not have a signature.
        """
        return None

    def format_name(self):
        """
        Format the name of *self.object*. This normally should be something
        that can be parsed by the generated directive, but doesn't need to be
        (Sphinx will display it unparsed then).
        """Format the name of *self.object*.

        This normally should be something that can be parsed by the generated
        directive, but doesn't need to be (Sphinx will display it unparsed
        then).
        """
        # normally the name doesn't contain the module (except for module
        # directives of course)
        return '.'.join(self.objpath) or self.modname

    def format_signature(self):
        """
        Format the signature (arguments and return annotation) of the object.
        """Format the signature (arguments and return annotation) of the object.

        Let the user process it via the ``autodoc-process-signature`` event.
        """
        if self.args is not None:
@@ -413,13 +419,16 @@ class Documenter(object):
        # etc. don't support a prepended module name
        self.add_line(u'   :module: %s' % self.modname, '<autodoc>')

    def get_doc(self, encoding=None):
    def get_doc(self, encoding=None, ignore=1):
        """Decode and return lines of the docstring(s) for the object."""
        docstring = self.get_attr(self.object, '__doc__', None)
        if docstring:
            # make sure we have Unicode docstrings, then sanitize and split
            # into lines
            return [prepare_docstring(force_decode(docstring, encoding))]
        # make sure we have Unicode docstrings, then sanitize and split
        # into lines
        if isinstance(docstring, unicode):
            return [prepare_docstring(docstring, ignore)]
        elif docstring:
            return [prepare_docstring(force_decode(docstring, encoding),
                                      ignore)]
        return []

    def process_doc(self, docstrings):
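The new `get_doc` passes already-Unicode docstrings straight to `prepare_docstring` and only decodes byte strings via `force_decode`. The byte-vs-text branch can be sketched in isolation like this (a simplification written in Python 3 syntax, where every `str` is Unicode; the helper name and `'replace'` error handling are illustrative stand-ins, not the Sphinx API):

```python
def ensure_text(docstring, encoding='utf-8'):
    # pass Unicode docstrings through untouched, decode byte strings;
    # 'replace' stands in for force_decode's error handling
    if isinstance(docstring, str):
        return docstring
    return docstring.decode(encoding, 'replace')

print(ensure_text(b'G\xc3\xbcltig'))   # UTF-8 bytes get decoded
print(ensure_text('already text'))     # text passes through unchanged
```

Skipping the decode for text avoids double-decoding errors when a docstring is already Unicode, which is exactly the case the diff adds an `isinstance(docstring, unicode)` check for.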
@@ -438,8 +447,11 @@ class Documenter(object):
        # set sourcename and add content from attribute documentation
        if self.analyzer:
            # prevent encoding errors when the file name is non-ASCII
            filename = unicode(self.analyzer.srcname,
                               sys.getfilesystemencoding(), 'replace')
            if not isinstance(self.analyzer.srcname, unicode):
                filename = unicode(self.analyzer.srcname,
                                   sys.getfilesystemencoding(), 'replace')
            else:
                filename = self.analyzer.srcname
            sourcename = u'%s:docstring of %s' % (filename, self.fullname)

            attr_docs = self.analyzer.find_attr_docs()
@@ -457,6 +469,11 @@ class Documenter(object):
        if not no_docstring:
            encoding = self.analyzer and self.analyzer.encoding
            docstrings = self.get_doc(encoding)
            if not docstrings:
                # append at least a dummy docstring, so that the event
                # autodoc-process-docstring is fired and can add some
                # content if desired
                docstrings.append([])
            for i, line in enumerate(self.process_doc(docstrings)):
                self.add_line(line, sourcename, i)

@@ -466,8 +483,7 @@ class Documenter(object):
            self.add_line(line, src[0], src[1])

    def get_object_members(self, want_all):
        """
        Return `(members_check_module, members)` where `members` is a
        """Return `(members_check_module, members)` where `members` is a
        list of `(membername, member)` pairs of the members of *self.object*.

        If *want_all* is True, return all members.  Else, only return those
@@ -511,11 +527,15 @@ class Documenter(object):
        return False, sorted(members)

    def filter_members(self, members, want_all):
        """
        Filter the given member list: members are skipped if
        """Filter the given member list.

        - they are private (except if given explicitly)
        - they are undocumented (except if undoc-members is given)
        Members are skipped if

        - they are private (except if given explicitly or the private-members
          option is set)
        - they are special methods (except if given explicitly or the
          special-members option is set)
        - they are undocumented (except if the undoc-members option is set)

        The user can override the skipping decision by connecting to the
        ``autodoc-skip-member`` event.
@@ -535,17 +555,27 @@ class Documenter(object):
            # if isattr is True, the member is documented as an attribute
            isattr = False

            if want_all and membername.startswith('_'):
            if want_all and membername.startswith('__') and \
                   membername.endswith('__') and len(membername) > 4:
                # special __methods__
                skip = not self.options.special_members
            elif want_all and membername.startswith('_'):
                # ignore members whose name starts with _ by default
                skip = True
                skip = not self.options.private_members
            elif (namespace, membername) in attr_docs:
                # keep documented attributes
                skip = False
                isattr = True
            else:
                # ignore undocumented members if :undoc-members:
                # is not given
                # ignore undocumented members if :undoc-members: is not given
                doc = self.get_attr(member, '__doc__', None)
                # if the member __doc__ is the same as self's __doc__, it's just
                # inherited and therefore not the member's doc
                cls = self.get_attr(member, '__class__', None)
                if cls:
                    cls_doc = self.get_attr(cls, '__doc__', None)
                    if cls_doc == doc:
                        doc = None
                skip = not self.options.undoc_members and not doc

            # give the user a chance to decide whether this member
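The new filter distinguishes special methods from merely private members by their names: double underscores on both ends and more than four characters in total. That name test can be checked on its own (the function name here is illustrative, not part of autodoc):

```python
def is_special_member(name):
    # '__init__' qualifies; '_private' does not; '____' is all
    # underscores but too short to have a real name between the dunders
    return name.startswith('__') and name.endswith('__') and len(name) > 4

print([n for n in ['__init__', '_private', '____'] if is_special_member(n)])
```

The `len(name) > 4` guard is what keeps a bare run of four underscores from being treated as a special method.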
@@ -565,9 +595,10 @@ class Documenter(object):
        return ret

    def document_members(self, all_members=False):
        """
        Generate reST for member documentation. If *all_members* is True,
        do all members, else those given by *self.options.members*.
        """Generate reST for member documentation.

        If *all_members* is True, do all members, else those given by
        *self.options.members*.
        """
        # set current namespace for finding members
        self.env.temp_data['autodoc:module'] = self.modname
@@ -625,8 +656,8 @@ class Documenter(object):

    def generate(self, more_content=None, real_modname=None,
                 check_module=False, all_members=False):
        """
        Generate reST for the object given by *self.name*, and possibly members.
        """Generate reST for the object given by *self.name*, and possibly for
        its members.

        If *more_content* is given, include that content. If *real_modname* is
        given, use that module name to find attribute docs. If *check_module* is
@@ -659,7 +690,7 @@ class Documenter(object):
                # parse right now, to get PycodeErrors on parsing (results will
                # be cached anyway)
                self.analyzer.find_attr_docs()
            except PycodeError, err:
            except PycodeError:
                # no source file -- e.g. for builtin and C modules
                self.analyzer = None
                # at least add the module.__file__ as a dependency
@@ -676,7 +707,7 @@ class Documenter(object):
        # make sure that the result starts with an empty line.  This is
        # necessary for some situations where another directive preprocesses
        # reST and no starting newline is present
        self.add_line(u'', '')
        self.add_line(u'', '<autodoc>')

        # format the object's signature, if any
        sig = self.format_signature()
@@ -701,6 +732,7 @@ class ModuleDocumenter(Documenter):
    """
    objtype = 'module'
    content_indent = u''
    titles_allowed = True

    option_spec = {
        'members': members_option, 'undoc-members': bool_option,
@@ -708,6 +740,7 @@ class ModuleDocumenter(Documenter):
        'show-inheritance': bool_option, 'synopsis': identity,
        'platform': identity, 'deprecated': bool_option,
        'member-order': identity, 'exclude-members': members_set_option,
        'private-members': bool_option, 'special-members': bool_option,
    }

    @classmethod
@@ -814,7 +847,53 @@ class ClassLevelDocumenter(Documenter):
        return modname, parents + [base]


class FunctionDocumenter(ModuleLevelDocumenter):
class DocstringSignatureMixin(object):
    """
    Mixin for FunctionDocumenter and MethodDocumenter to provide the
    feature of reading the signature from the docstring.
    """

    def _find_signature(self, encoding=None):
        docstrings = Documenter.get_doc(self, encoding, 2)
        if len(docstrings) != 1:
            return
        doclines = docstrings[0]
        setattr(self, '__new_doclines', doclines)
        if not doclines:
            return
        # match first line of docstring against signature RE
        match = py_ext_sig_re.match(doclines[0])
        if not match:
            return
        exmod, path, base, args, retann = match.groups()
        # the base name must match ours
        if not self.objpath or base != self.objpath[-1]:
            return
        # ok, now jump over remaining empty lines and set the remaining
        # lines as the new doclines
        i = 1
        while i < len(doclines) and not doclines[i].strip():
            i += 1
        setattr(self, '__new_doclines', doclines[i:])
        return args, retann

    def get_doc(self, encoding=None, ignore=1):
        lines = getattr(self, '__new_doclines', None)
        if lines is not None:
            return [lines]
        return Documenter.get_doc(self, encoding, ignore)

    def format_signature(self):
        if self.args is None and self.env.config.autodoc_docstring_signature:
            # only act if a signature is not explicitly given already, and if
            # the feature is enabled
            result = self._find_signature()
            if result is not None:
                self.args, self.retann = result
        return Documenter.format_signature(self)


class FunctionDocumenter(DocstringSignatureMixin, ModuleLevelDocumenter):
    """
    Specialized Documenter subclass for functions.
    """
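`DocstringSignatureMixin` pulls a signature such as `name(arg1, arg2) -> ret` off the first docstring line and drops that line from the documented text. A cut-down version of the matching step, using a deliberately simplified regular expression (it omits the explicit-module and dotted-path handling that `py_ext_sig_re` has; the names here are illustrative):

```python
import re

# simplified signature RE: name, argument list, optional return annotation
sig_re = re.compile(r'^(\w+)\s*\(([^)]*)\)(?:\s*->\s*(.+))?$')

def find_signature(doclines):
    # like _find_signature, inspect only the first docstring line
    if not doclines:
        return None
    match = sig_re.match(doclines[0].strip())
    if not match:
        return None
    name, args, retann = match.groups()
    return name, args, retann

print(find_signature(['frobnicate(a, b=1) -> int', '', 'Frobnicates.']))
```

If the first line does not look like a signature, the docstring is left alone, which is why `format_signature` only overrides `self.args` when a match is actually found.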
@@ -828,18 +907,18 @@ class FunctionDocumenter(ModuleLevelDocumenter):
    def format_args(self):
        if inspect.isbuiltin(self.object) or \
               inspect.ismethoddescriptor(self.object):
            # can never get arguments of a C function or method
            return None
            # cannot introspect arguments of a C function or method
            pass
        try:
            argspec = inspect.getargspec(self.object)
            argspec = getargspec(self.object)
        except TypeError:
            # if a class should be documented as function (yay duck
            # typing) we try to use the constructor signature as function
            # signature without the first argument.
            try:
                argspec = inspect.getargspec(self.object.__new__)
                argspec = getargspec(self.object.__new__)
            except TypeError:
                argspec = inspect.getargspec(self.object.__init__)
                argspec = getargspec(self.object.__init__)
                if argspec[0]:
                    del argspec[0][0]
        args = inspect.formatargspec(*argspec)
@@ -862,11 +941,12 @@ class ClassDocumenter(ModuleLevelDocumenter):
        'noindex': bool_option, 'inherited-members': bool_option,
        'show-inheritance': bool_option, 'member-order': identity,
        'exclude-members': members_set_option,
        'private-members': bool_option, 'special-members': bool_option,
    }

    @classmethod
    def can_document_member(cls, member, membername, isattr, parent):
        return isinstance(member, (type, ClassType))
        return isinstance(member, class_types)

    def import_object(self):
        ret = ModuleLevelDocumenter.import_object(self)
@@ -888,7 +968,7 @@ class ClassDocumenter(ModuleLevelDocumenter):
                (inspect.ismethod(initmeth) or inspect.isfunction(initmeth)):
            return None
        try:
            argspec = inspect.getargspec(initmeth)
            argspec = getargspec(initmeth)
        except TypeError:
            # still not possible: happens e.g. for old-style classes
            # with __init__ in C
@@ -918,13 +998,13 @@ class ClassDocumenter(ModuleLevelDocumenter):
            self.add_line(_(u'   Bases: %s') % ', '.join(bases),
                          '<autodoc>')

    def get_doc(self, encoding=None):
    def get_doc(self, encoding=None, ignore=1):
        content = self.env.config.autoclass_content

        docstrings = []
        docstring = self.get_attr(self.object, '__doc__', None)
        if docstring:
            docstrings.append(docstring)
        attrdocstring = self.get_attr(self.object, '__doc__', None)
        if attrdocstring:
            docstrings.append(attrdocstring)

        # for classes, what the "docstring" is can be controlled via a
        # config value; the default is only the class docstring
@@ -939,9 +1019,12 @@ class ClassDocumenter(ModuleLevelDocumenter):
                docstrings = [initdocstring]
            else:
                docstrings.append(initdocstring)

        return [prepare_docstring(force_decode(docstring, encoding))
                for docstring in docstrings]
        doc = []
        for docstring in docstrings:
            if not isinstance(docstring, unicode):
                docstring = force_decode(docstring, encoding)
            doc.append(prepare_docstring(docstring))
        return doc

    def add_content(self, more_content, no_docstring=False):
        if self.doc_as_attr:
@@ -972,7 +1055,7 @@ class ExceptionDocumenter(ClassDocumenter):

    @classmethod
    def can_document_member(cls, member, membername, isattr, parent):
        return isinstance(member, (type, ClassType)) and \
        return isinstance(member, class_types) and \
               issubclass(member, base_exception)


@@ -982,16 +1065,26 @@ class DataDocumenter(ModuleLevelDocumenter):
    """
    objtype = 'data'
    member_order = 40
    priority = -10

    @classmethod
    def can_document_member(cls, member, membername, isattr, parent):
        return isinstance(parent, ModuleDocumenter) and isattr

    def add_directive_header(self, sig):
        ModuleLevelDocumenter.add_directive_header(self, sig)
        try:
            objrepr = safe_repr(self.object)
        except ValueError:
            pass
        else:
            self.add_line(u'   :annotation: = ' + objrepr, '<autodoc>')

    def document_members(self, all_members=False):
        pass


class MethodDocumenter(ClassLevelDocumenter):
class MethodDocumenter(DocstringSignatureMixin, ClassLevelDocumenter):
    """
    Specialized Documenter subclass for methods (normal, static and class).
    """
@@ -1004,31 +1097,45 @@ class MethodDocumenter(ClassLevelDocumenter):
        return inspect.isroutine(member) and \
               not isinstance(parent, ModuleDocumenter)

    def import_object(self):
        ret = ClassLevelDocumenter.import_object(self)
        if isinstance(self.object, classmethod) or \
               (isinstance(self.object, MethodType) and
                self.object.im_self is not None):
            self.directivetype = 'classmethod'
            # document class and static members before ordinary ones
            self.member_order = self.member_order - 1
        elif isinstance(self.object, FunctionType) or \
             (isinstance(self.object, BuiltinFunctionType) and
              hasattr(self.object, '__self__') and
              self.object.__self__ is not None):
            self.directivetype = 'staticmethod'
            # document class and static members before ordinary ones
            self.member_order = self.member_order - 1
        else:
            self.directivetype = 'method'
        return ret
    if sys.version_info >= (3, 0):
        def import_object(self):
            ret = ClassLevelDocumenter.import_object(self)
            obj_from_parent = self.parent.__dict__.get(self.object_name)
            if isinstance(obj_from_parent, classmethod):
                self.directivetype = 'classmethod'
                self.member_order = self.member_order - 1
            elif isinstance(obj_from_parent, staticmethod):
                self.directivetype = 'staticmethod'
                self.member_order = self.member_order - 1
            else:
                self.directivetype = 'method'
            return ret
    else:
        def import_object(self):
            ret = ClassLevelDocumenter.import_object(self)
            if isinstance(self.object, classmethod) or \
                   (isinstance(self.object, MethodType) and
                    self.object.im_self is not None):
                self.directivetype = 'classmethod'
                # document class and static members before ordinary ones
                self.member_order = self.member_order - 1
            elif isinstance(self.object, FunctionType) or \
                 (isinstance(self.object, BuiltinFunctionType) and
                  hasattr(self.object, '__self__') and
                  self.object.__self__ is not None):
                self.directivetype = 'staticmethod'
                # document class and static members before ordinary ones
                self.member_order = self.member_order - 1
            else:
                self.directivetype = 'method'
            return ret

    def format_args(self):
        if inspect.isbuiltin(self.object) or \
               inspect.ismethoddescriptor(self.object):
            # can never get arguments of a C function or method
            return None
        argspec = inspect.getargspec(self.object)
        argspec = getargspec(self.object)
        if argspec[0] and argspec[0][0] in ('cls', 'self'):
            del argspec[0][0]
        return inspect.formatargspec(*argspec)
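The Python 3 branch of `import_object` has to look at the class `__dict__` because, unlike on Python 2, an attribute fetched from the class via `getattr` is already unwrapped: a classmethod comes back as a bound method and a staticmethod as a plain function. The raw `classmethod`/`staticmethod` wrapper objects are only visible in `__dict__`. A standalone sketch of that detection (class and helper names are illustrative):

```python
class Widget(object):
    def method(self):
        pass

    @classmethod
    def create(cls):
        pass

    @staticmethod
    def helper():
        pass

def directive_type(klass, name):
    # inspect the raw attribute in the class __dict__, as the
    # Python 3 branch of MethodDocumenter.import_object does
    raw = klass.__dict__.get(name)
    if isinstance(raw, classmethod):
        return 'classmethod'
    if isinstance(raw, staticmethod):
        return 'staticmethod'
    return 'method'

print(directive_type(Widget, 'create'))  # prints 'classmethod'
```

Testing `isinstance(Widget.create, classmethod)` would fail here, since `Widget.create` is already the bound method; that is exactly why the diff switches to `self.parent.__dict__.get(self.object_name)`.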
@@ -1054,12 +1161,44 @@ class AttributeDocumenter(ClassLevelDocumenter):
    def can_document_member(cls, member, membername, isattr, parent):
        isdatadesc = isdescriptor(member) and not \
                     isinstance(member, cls.method_types)
        return isdatadesc or \
               (isattr and not isinstance(parent, ModuleDocumenter))
        return isdatadesc or (not isinstance(parent, ModuleDocumenter)
                              and not inspect.isroutine(member)
                              and not isinstance(member, class_types))

    def document_members(self, all_members=False):
        pass

    def import_object(self):
        ret = ClassLevelDocumenter.import_object(self)
        if isdescriptor(self.object) and \
               not isinstance(self.object, self.method_types):
            self._datadescriptor = True
        else:
            # if it's not a data descriptor
            self._datadescriptor = False
        return ret

    def get_real_modname(self):
        return self.get_attr(self.parent or self.object, '__module__', None) \
               or self.modname

    def add_directive_header(self, sig):
        ClassLevelDocumenter.add_directive_header(self, sig)
        if not self._datadescriptor:
            try:
                objrepr = safe_repr(self.object)
            except ValueError:
                pass
            else:
                self.add_line(u'   :annotation: = ' + objrepr, '<autodoc>')

    def add_content(self, more_content, no_docstring=False):
        if not self._datadescriptor:
            # if it's not a data descriptor, its docstring is very probably the
            # wrong thing to display
            no_docstring = True
        ClassLevelDocumenter.add_content(self, more_content, no_docstring)


class InstanceAttributeDocumenter(AttributeDocumenter):
    """
@@ -1082,6 +1221,7 @@ class InstanceAttributeDocumenter(AttributeDocumenter):
        """Never import anything."""
        # disguise as an attribute
        self.objtype = 'attribute'
        self._datadescriptor = False
        return True

    def add_content(self, more_content, no_docstring=False):
@@ -1111,8 +1251,10 @@ class AutoDirective(Directive):
    _special_attrgetters = {}

    # flags that can be given in autodoc_default_flags
    _default_flags = set(['members', 'undoc-members', 'inherited-members',
                          'show-inheritance'])
    _default_flags = set([
        'members', 'undoc-members', 'inherited-members', 'show-inheritance',
        'private-members', 'special-members',
    ])

    # standard docutils directive settings
    has_content = True
@@ -1164,7 +1306,7 @@ class AutoDirective(Directive):
        self.state.memo.reporter = AutodocReporter(self.result,
                                                   self.state.memo.reporter)

        if self.name == 'automodule':
        if documenter.titles_allowed:
            node = nodes.section()
            # necessary so that the child nodes get the right source/line set
            node.document = self.state.document
@@ -1202,6 +1344,7 @@ def setup(app):
    app.add_config_value('autoclass_content', 'class', True)
    app.add_config_value('autodoc_member_order', 'alphabetic', True)
    app.add_config_value('autodoc_default_flags', [], True)
    app.add_config_value('autodoc_docstring_signature', True, True)
|
||||
app.add_event('autodoc-process-docstring')
|
||||
app.add_event('autodoc-process-signature')
|
||||
app.add_event('autodoc-skip-member')
|
||||
|
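The `format_args` hunk above drops a bound method's leading `cls`/`self` parameter before formatting the signature. A rough standalone sketch of that trimming, written against the modern `inspect.signature` API as an assumption (the patch itself targets the Python 2-era `getargspec` helper):

```python
import inspect

def format_args(func):
    """Format a callable's signature, dropping a leading cls/self."""
    params = list(inspect.signature(func).parameters.values())
    if params and params[0].name in ('cls', 'self'):
        del params[0]
    return '(%s)' % ', '.join(str(p) for p in params)

class Greeter:
    def greet(self, name, punct='!'):
        return 'hello ' + name + punct

sig = format_args(Greeter.greet)  # 'self' is removed from the display
```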
@@ -49,7 +49,7 @@
     resolved to a Python object, and otherwise it becomes simple emphasis.
     This can be used as the default role to make links 'smart'.

-    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
     :license: BSD, see LICENSE for details.
 """

@@ -73,8 +73,7 @@ class autosummary_toc(nodes.comment):
     pass

 def process_autosummary_toc(app, doctree):
-    """
-    Insert items described in autosummary:: to the TOC tree, but do
+    """Insert items described in autosummary:: to the TOC tree, but do
     not generate the toctree:: list.
     """
     env = app.builder.env
@@ -134,27 +133,19 @@ except AttributeError:
         return False
     isgetsetdescriptor = ismemberdescriptor

-def get_documenter(obj):
+def get_documenter(obj, parent):
+    """Get an autodoc.Documenter class suitable for documenting the given
+    object.
-    """
-    Get an autodoc.Documenter class suitable for documenting the given object
     """
-    import sphinx.ext.autodoc as autodoc
+    from sphinx.ext.autodoc import AutoDirective, DataDocumenter

-    if inspect.isclass(obj):
-        if issubclass(obj, Exception):
-            return autodoc.ExceptionDocumenter
-        return autodoc.ClassDocumenter
-    elif inspect.ismodule(obj):
-        return autodoc.ModuleDocumenter
-    elif inspect.ismethod(obj) or inspect.ismethoddescriptor(obj):
-        return autodoc.MethodDocumenter
-    elif (ismemberdescriptor(obj) or isgetsetdescriptor(obj)
-          or inspect.isdatadescriptor(obj)):
-        return autodoc.AttributeDocumenter
-    elif inspect.isroutine(obj):
-        return autodoc.FunctionDocumenter
+    classes = [cls for cls in AutoDirective._registry.values()
+               if cls.can_document_member(obj, '', False, parent)]
+    if classes:
+        classes.sort(key=lambda cls: cls.priority)
+        return classes[-1]
     else:
-        return autodoc.DataDocumenter
+        return DataDocumenter


 # -- .. autosummary:: ----------------------------------------------------------
@@ -218,8 +209,7 @@ class Autosummary(Directive):
         return self.warnings + nodes

     def get_items(self, names):
-        """
-        Try to import the given names, and return a list of
+        """Try to import the given names, and return a list of
         ``[(name, signature, summary_string, real_name), ...]``.
         """
         env = self.state.document.settings.env
@@ -240,7 +230,7 @@ class Autosummary(Directive):
             display_name = name.split('.')[-1]

             try:
-                obj, real_name = import_by_name(name, prefixes=prefixes)
+                real_name, obj, parent = import_by_name(name, prefixes=prefixes)
             except ImportError:
                 self.warn('failed to import %s' % name)
                 items.append((name, '', '', name))
@@ -248,7 +238,7 @@ class Autosummary(Directive):

             # NB. using real_name here is important, since Documenters
             # handle module prefixes slightly differently
-            documenter = get_documenter(obj)(self, real_name)
+            documenter = get_documenter(obj, parent)(self, real_name)
             if not documenter.parse_name():
                 self.warn('failed to parse name %s' % real_name)
                 items.append((display_name, '', '', real_name))
@@ -287,8 +277,7 @@ class Autosummary(Directive):
         return items

     def get_table(self, items):
-        """
-        Generate a proper list of table nodes for autosummary:: directive.
+        """Generate a proper list of table nodes for autosummary:: directive.

         *items* is a list produced by :meth:`get_items`.
         """
@@ -351,8 +340,7 @@ def mangle_signature(sig, max_chars=30):
     return u"(%s)" % sig

 def limited_join(sep, items, max_chars=30, overflow_marker="..."):
-    """
-    Join a number of strings to one, limiting the length to *max_chars*.
+    """Join a number of strings to one, limiting the length to *max_chars*.

     If the string overflows this limit, replace the last fitting item by
     *overflow_marker*.
@@ -377,8 +365,7 @@ def limited_join(sep, items, max_chars=30, overflow_marker="..."):
 # -- Importing items -----------------------------------------------------------

 def import_by_name(name, prefixes=[None]):
-    """
-    Import a Python object that has the given *name*, under one of the
+    """Import a Python object that has the given *name*, under one of the
     *prefixes*.  The first name that succeeds is used.
     """
     tried = []
@@ -388,7 +375,8 @@ def import_by_name(name, prefixes=[None]):
                 prefixed_name = '.'.join([prefix, name])
             else:
                 prefixed_name = name
-            return _import_by_name(prefixed_name), prefixed_name
+            obj, parent = _import_by_name(prefixed_name)
+            return prefixed_name, obj, parent
         except ImportError:
             tried.append(prefixed_name)
     raise ImportError('no module named %s' % ' or '.join(tried))
@@ -403,7 +391,8 @@ def _import_by_name(name):
         if modname:
             try:
                 __import__(modname)
-                return getattr(sys.modules[modname], name_parts[-1])
+                mod = sys.modules[modname]
+                return getattr(mod, name_parts[-1]), mod
             except (ImportError, IndexError, AttributeError):
                 pass

@@ -421,12 +410,14 @@ def _import_by_name(name):
                 break

         if last_j < len(name_parts):
+            parent = None
             obj = sys.modules[modname]
             for obj_name in name_parts[last_j:]:
+                parent = obj
                 obj = getattr(obj, obj_name)
-            return obj
+            return obj, parent
         else:
-            return sys.modules[modname]
+            return sys.modules[modname], None
     except (ValueError, ImportError, AttributeError, KeyError), e:
         raise ImportError(*e.args)

@@ -435,8 +426,7 @@ def _import_by_name(name):

 def autolink_role(typ, rawtext, etext, lineno, inliner,
                   options={}, content=[]):
-    """
-    Smart linking role.
+    """Smart linking role.

     Expands to ':obj:`text`' if `text` is an object that can be imported;
     otherwise expands to '*text*'.
@@ -449,7 +439,7 @@ def autolink_role(typ, rawtext, etext, lineno, inliner,
     prefixes = [None]
     #prefixes.insert(0, inliner.document.settings.env.currmodule)
     try:
-        obj, name = import_by_name(pnode['reftarget'], prefixes)
+        name, obj, parent = import_by_name(pnode['reftarget'], prefixes)
     except ImportError:
         content = pnode[0]
         r[0][0] = nodes.emphasis(rawtext, content[0].astext(),
@@ -487,12 +477,14 @@ def setup(app):
                  html=(autosummary_toc_visit_html, autosummary_noop),
                  latex=(autosummary_noop, autosummary_noop),
                  text=(autosummary_noop, autosummary_noop),
-                 man=(autosummary_noop, autosummary_noop))
+                 man=(autosummary_noop, autosummary_noop),
+                 texinfo=(autosummary_noop, autosummary_noop))
     app.add_node(autosummary_table,
                  html=(autosummary_table_visit_html, autosummary_noop),
                  latex=(autosummary_noop, autosummary_noop),
                  text=(autosummary_noop, autosummary_noop),
-                 man=(autosummary_noop, autosummary_noop))
+                 man=(autosummary_noop, autosummary_noop),
+                 texinfo=(autosummary_noop, autosummary_noop))
     app.add_directive('autosummary', Autosummary)
     app.add_role('autolink', autolink_role)
     app.connect('doctree-read', process_autosummary_toc)
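The recurring change in this file is the new `import_by_name` contract: it now returns `(real_name, obj, parent)` so the attribute's owning object can be threaded through to `get_documenter`. A minimal sketch of that three-tuple contract, using a hypothetical simplified resolver (not Sphinx's actual implementation, which also retries progressively shorter module prefixes):

```python
import sys

def import_by_name(name):
    """Resolve 'module.attr' (or 'module') to (real_name, object, parent)."""
    modname, _, attrname = name.rpartition('.')
    if modname:
        __import__(modname)
        mod = sys.modules[modname]
        # the parent is the module that owns the attribute
        return name, getattr(mod, attrname), mod
    __import__(name)
    return name, sys.modules[name], None

real_name, obj, parent = import_by_name('os.getcwd')
```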
@@ -12,11 +12,12 @@
     Example Makefile rule::

         generate:
-                sphinx-autogen source/*.rst source/generated
+                sphinx-autogen -o source/generated source/*.rst

-    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
     :license: BSD, see LICENSE for details.
 """

 import os
 import re
 import sys
@@ -107,7 +108,7 @@ def generate_autosummary_docs(sources, output_dir=None, suffix='.rst',
             ensuredir(path)

         try:
-            obj, name = import_by_name(name)
+            name, obj, parent = import_by_name(name)
         except ImportError, e:
             warn('[autosummary] failed to import %r: %s' % (name, e))
             continue
@@ -123,7 +124,7 @@ def generate_autosummary_docs(sources, output_dir=None, suffix='.rst',
         f = open(fn, 'w')

         try:
-            doc = get_documenter(obj)
+            doc = get_documenter(obj, parent)

             if template_name is not None:
                 template = template_env.get_template(template_name)
@@ -137,7 +138,7 @@ def generate_autosummary_docs(sources, output_dir=None, suffix='.rst',
             def get_members(obj, typ, include_public=[]):
                 items = [
                     name for name in dir(obj)
-                    if get_documenter(getattr(obj, name)).objtype == typ
+                    if get_documenter(getattr(obj, name), obj).objtype == typ
                 ]
                 public = [x for x in items
                           if x in include_public or not x.startswith('_')]
@@ -193,8 +194,8 @@ def generate_autosummary_docs(sources, output_dir=None, suffix='.rst',
 # -- Finding documented entries in files ---------------------------------------

 def find_autosummary_in_files(filenames):
-    """
-    Find out what items are documented in source/*.rst.
+    """Find out what items are documented in source/*.rst.

     See `find_autosummary_in_lines`.
     """
     documented = []
@@ -206,12 +207,12 @@ def find_autosummary_in_files(filenames):
     return documented

 def find_autosummary_in_docstring(name, module=None, filename=None):
-    """
-    Find out what items are documented in the given object's docstring.
+    """Find out what items are documented in the given object's docstring.

     See `find_autosummary_in_lines`.
     """
     try:
-        obj, real_name = import_by_name(name)
+        real_name, obj, parent = import_by_name(name)
         lines = pydoc.getdoc(obj).splitlines()
         return find_autosummary_in_lines(lines, module=name, filename=filename)
     except AttributeError:
@@ -221,8 +222,8 @@ def find_autosummary_in_docstring(name, module=None, filename=None):
     return []

 def find_autosummary_in_lines(lines, module=None, filename=None):
-    """
-    Find out what items appear in autosummary:: directives in the given lines.
+    """Find out what items appear in autosummary:: directives in the
+    given lines.

     Returns a list of (name, toctree, template) where *name* is a name
     of an object and *toctree* the :toctree: path of the corresponding
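The `get_members` helper patched above classifies names from `dir()` and then splits out the public subset. A rough sketch of that filter pattern, with a plain type predicate standing in for `get_documenter(...).objtype` (an assumption for illustration):

```python
import inspect

def get_members(obj, predicate, include_public=()):
    """Collect member names matching predicate; also split out public ones."""
    items = [name for name in dir(obj)
             if predicate(getattr(obj, name))]
    # names in include_public survive even with a leading underscore
    public = [x for x in items
              if x in include_public or not x.startswith('_')]
    return public, items

class Sample:
    def visible(self):
        pass
    def _hidden(self):
        pass

public, items = get_members(Sample, inspect.isfunction)
```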
@@ -6,7 +6,7 @@
     Check Python modules and C API for coverage.  Mostly written by Josip
     Dzolonga for the Google Highly Open Participation contest.

-    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
     :license: BSD, see LICENSE for details.
 """

@@ -105,7 +105,8 @@ class CoverageBuilder(Builder):
         output_file = path.join(self.outdir, 'c.txt')
         op = open(output_file, 'w')
         try:
-            write_header(op, 'Undocumented C API elements', '=')
+            if self.config.coverage_write_headline:
+                write_header(op, 'Undocumented C API elements', '=')
             op.write('\n')

             for filename, undoc in self.c_undoc.iteritems():
@@ -120,6 +121,8 @@ class CoverageBuilder(Builder):
         objects = self.env.domaindata['py']['objects']
         modules = self.env.domaindata['py']['modules']

+        skip_undoc = self.config.coverage_skip_undoc_in_source
+
         for mod_name in modules:
             ignore = False
             for exp in self.mod_ignorexps:
@@ -160,6 +163,8 @@ class CoverageBuilder(Builder):
                     if exp.match(name):
                         break
                 else:
+                    if skip_undoc and not obj.__doc__:
+                        continue
                     funcs.append(name)
             elif inspect.isclass(obj):
                 for exp in self.cls_ignorexps:
@@ -167,17 +172,27 @@ class CoverageBuilder(Builder):
                         break
                 else:
                     if full_name not in objects:
+                        if skip_undoc and not obj.__doc__:
+                            continue
                         # not documented at all
                         classes[name] = []
                         continue

                     attrs = []

-                    for attr_name, attr in inspect.getmembers(
-                        obj, inspect.ismethod):
+                    for attr_name in dir(obj):
+                        if attr_name not in obj.__dict__:
+                            continue
+                        attr = getattr(obj, attr_name)
+                        if not (inspect.ismethod(attr) or
+                                inspect.isfunction(attr)):
+                            continue
                         if attr_name[0] == '_':
                             # starts with an underscore, ignore it
                             continue
+                        if skip_undoc and not attr.__doc__:
+                            # skip methods without docstring if wished
+                            continue

                         full_attr_name = '%s.%s' % (full_name, attr_name)
                         if full_attr_name not in objects:
@@ -194,8 +209,8 @@ class CoverageBuilder(Builder):
         op = open(output_file, 'w')
         failed = []
         try:
-            write_header(op, 'Undocumented Python objects', '=')
+            if self.config.coverage_write_headline:
+                write_header(op, 'Undocumented Python objects', '=')
             keys = self.py_undoc.keys()
             keys.sort()
             for name in keys:
@@ -245,3 +260,5 @@ def setup(app):
     app.add_config_value('coverage_c_path', [], False)
     app.add_config_value('coverage_c_regexes', {}, False)
     app.add_config_value('coverage_ignore_c_items', {}, False)
+    app.add_config_value('coverage_write_headline', True, False)
+    app.add_config_value('coverage_skip_undoc_in_source', False, False)
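The coverage builder now walks `dir(obj)` but keeps only names present in the class's own `__dict__`, so inherited methods are not re-reported, and optionally skips members without docstrings. A small standalone sketch of that ownership-plus-docstring filter (hypothetical class names, same idea as the patched loop):

```python
import inspect

def own_methods(cls, skip_undoc=False):
    """Yield public methods defined directly on cls."""
    for attr_name in dir(cls):
        if attr_name not in cls.__dict__:
            continue  # inherited, not defined on this class
        attr = getattr(cls, attr_name)
        if not (inspect.ismethod(attr) or inspect.isfunction(attr)):
            continue
        if attr_name.startswith('_'):
            continue
        if skip_undoc and not attr.__doc__:
            continue  # undocumented in source; skip if wished
        yield attr_name

class Base:
    def inherited(self):
        "documented"

class Child(Base):
    def fresh(self):
        "documented"
    def bare(self):
        pass
```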
@@ -6,7 +6,7 @@
     Mimic doctest by automatically executing code snippets and checking
     their results.

-    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
     :license: BSD, see LICENSE for details.
 """

@@ -56,7 +56,7 @@ class TestDirective(Directive):
             test = code
             code = doctestopt_re.sub('', code)
         nodetype = nodes.literal_block
-        if self.name == 'testsetup' or 'hide' in self.options:
+        if self.name in ('testsetup', 'testcleanup') or 'hide' in self.options:
             nodetype = nodes.comment
         if self.arguments:
             groups = [x.strip() for x in self.arguments[0].split(',')]
@@ -86,6 +86,9 @@ class TestDirective(Directive):
 class TestsetupDirective(TestDirective):
     option_spec = {}

+class TestcleanupDirective(TestDirective):
+    option_spec = {}
+
 class DoctestDirective(TestDirective):
     option_spec = {
         'hide': directives.flag,
@@ -113,6 +116,7 @@ class TestGroup(object):
         self.name = name
         self.setup = []
         self.tests = []
+        self.cleanup = []

     def add_code(self, code, prepend=False):
         if code.type == 'testsetup':
@@ -120,6 +124,8 @@ class TestGroup(object):
                 self.setup.insert(0, code)
             else:
                 self.setup.append(code)
+        elif code.type == 'testcleanup':
+            self.cleanup.append(code)
         elif code.type == 'doctest':
             self.tests.append([code])
         elif code.type == 'testcode':
@@ -131,8 +137,8 @@ class TestGroup(object):
             raise RuntimeError('invalid TestCode type')

     def __repr__(self):
-        return 'TestGroup(name=%r, setup=%r, tests=%r)' % (
-            self.name, self.setup, self.tests)
+        return 'TestGroup(name=%r, setup=%r, cleanup=%r, tests=%r)' % (
+            self.name, self.setup, self.cleanup, self.tests)


 class TestCode(object):
@@ -149,14 +155,14 @@ class TestCode(object):

 class SphinxDocTestRunner(doctest.DocTestRunner):
     def summarize(self, out, verbose=None):
-        io = StringIO.StringIO()
+        string_io = StringIO.StringIO()
         old_stdout = sys.stdout
-        sys.stdout = io
+        sys.stdout = string_io
         try:
             res = doctest.DocTestRunner.summarize(self, verbose)
         finally:
             sys.stdout = old_stdout
-        out(io.getvalue())
+        out(string_io.getvalue())
         return res

     def _DocTestRunner__patched_linecache_getlines(self, filename,
@@ -204,6 +210,8 @@ class DocTestBuilder(Builder):
         self.total_tries = 0
         self.setup_failures = 0
         self.setup_tries = 0
+        self.cleanup_failures = 0
+        self.cleanup_tries = 0

         date = time.strftime('%Y-%m-%d %H:%M:%S')

@@ -240,12 +248,14 @@ Doctest summary
 %5d test%s
 %5d failure%s in tests
 %5d failure%s in setup code
+%5d failure%s in cleanup code
 ''' % (self.total_tries, s(self.total_tries),
        self.total_failures, s(self.total_failures),
-       self.setup_failures, s(self.setup_failures)))
+       self.setup_failures, s(self.setup_failures),
+       self.cleanup_failures, s(self.cleanup_failures)))
         self.outfile.close()

-        if self.total_failures or self.setup_failures:
+        if self.total_failures or self.setup_failures or self.cleanup_failures:
             self.app.statuscode = 1

     def write(self, build_docnames, updated_docnames, method='update'):
@@ -265,6 +275,12 @@ Doctest summary
                                                optionflags=self.opt)
         self.test_runner = SphinxDocTestRunner(verbose=False,
                                                optionflags=self.opt)
+        self.cleanup_runner = SphinxDocTestRunner(verbose=False,
+                                                  optionflags=self.opt)
+
+        self.test_runner._fakeout = self.setup_runner._fakeout
+        self.cleanup_runner._fakeout = self.setup_runner._fakeout

         if self.config.doctest_test_doctest_blocks:
             def condition(node):
                 return (isinstance(node, (nodes.literal_block, nodes.comment))
@@ -298,6 +314,11 @@ Doctest summary
                             'testsetup', lineno=0)
             for group in groups.itervalues():
                 group.add_code(code, prepend=True)
+        if self.config.doctest_global_cleanup:
+            code = TestCode(self.config.doctest_global_cleanup,
+                            'testcleanup', lineno=0)
+            for group in groups.itervalues():
+                group.add_code(code)
         if not groups:
             return

@@ -313,29 +334,43 @@ Doctest summary
             res_f, res_t = self.test_runner.summarize(self._out, verbose=True)
             self.total_failures += res_f
             self.total_tries += res_t
+        if self.cleanup_runner.tries:
+            res_f, res_t = self.cleanup_runner.summarize(self._out,
+                                                         verbose=True)
+            self.cleanup_failures += res_f
+            self.cleanup_tries += res_t

     def compile(self, code, name, type, flags, dont_inherit):
         return compile(code, name, self.type, flags, dont_inherit)

     def test_group(self, group, filename):
         ns = {}
-        setup_examples = []
-        for setup in group.setup:
-            setup_examples.append(doctest.Example(setup.code, '',
-                                                  lineno=setup.lineno))
-        if setup_examples:
-            # simulate a doctest with the setup code
-            setup_doctest = doctest.DocTest(setup_examples, {},
-                                            '%s (setup code)' % group.name,
-                                            filename, 0, None)
-            setup_doctest.globs = ns
-            old_f = self.setup_runner.failures

+        def run_setup_cleanup(runner, testcodes, what):
+            examples = []
+            for testcode in testcodes:
+                examples.append(doctest.Example(testcode.code, '',
+                                                lineno=testcode.lineno))
+            if not examples:
+                return True
+            # simulate a doctest with the code
+            sim_doctest = doctest.DocTest(examples, {},
+                                          '%s (%s code)' % (group.name, what),
+                                          filename, 0, None)
+            sim_doctest.globs = ns
+            old_f = runner.failures
             self.type = 'exec' # the snippet may contain multiple statements
-            self.setup_runner.run(setup_doctest, out=self._warn_out,
-                                  clear_globs=False)
-            if self.setup_runner.failures > old_f:
-                # don't run the group
-                return
+            runner.run(sim_doctest, out=self._warn_out, clear_globs=False)
+            if runner.failures > old_f:
+                return False
+            return True
+
+        # run the setup code
+        if not run_setup_cleanup(self.setup_runner, group.setup, 'setup'):
+            # if setup failed, don't run the group
+            return

         # run the tests
         for code in group.tests:
             if len(code) == 1:
                 # ordinary doctests (code/output interleaved)
@@ -373,9 +408,13 @@ Doctest summary
         # also don't clear the globs namespace after running the doctest
         self.test_runner.run(test, out=self._warn_out, clear_globs=False)

+        # run the cleanup
+        run_setup_cleanup(self.cleanup_runner, group.cleanup, 'cleanup')


 def setup(app):
     app.add_directive('testsetup', TestsetupDirective)
+    app.add_directive('testcleanup', TestcleanupDirective)
     app.add_directive('doctest', DoctestDirective)
     app.add_directive('testcode', TestcodeDirective)
     app.add_directive('testoutput', TestoutputDirective)
@@ -384,3 +423,4 @@ def setup(app):
     app.add_config_value('doctest_path', [], False)
     app.add_config_value('doctest_test_doctest_blocks', 'default', False)
     app.add_config_value('doctest_global_setup', '', False)
+    app.add_config_value('doctest_global_cleanup', '', False)
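The refactor above folds setup and cleanup execution into one `run_setup_cleanup` helper that runs snippets against a shared namespace and reports whether any failed, so cleanup can reuse the setup machinery. A stripped-down sketch of that pattern using plain `exec` instead of doctest runners (an assumption made to keep the example self-contained):

```python
def run_snippets(snippets, ns, what):
    """Run code snippets in the shared namespace ns; True if all succeeded."""
    if not snippets:
        return True  # nothing to do counts as success
    for code in snippets:
        try:
            exec(code, ns)
        except Exception as exc:
            print('%s code failed: %s' % (what, exc))
            return False
    return True

ns = {}
# setup runs first; a failure would skip the group, mirroring the patch
ok = run_snippets(["x = 2", "y = x * 21"], ns, 'setup')
```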
@@ -20,7 +20,7 @@

     You can also give an explicit caption, e.g. :exmpl:`Foo <foo>`.

-    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
     :license: BSD, see LICENSE for details.
 """

@@ -6,11 +6,12 @@
     Allow graphviz-formatted graphs to be included in Sphinx-generated
     documents inline.

-    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
     :license: BSD, see LICENSE for details.
 """

 import re
+import codecs
 import posixpath
 from os import path
 from math import ceil
@@ -46,23 +47,48 @@ class Graphviz(Directive):
     """
     has_content = True
     required_arguments = 0
-    optional_arguments = 0
+    optional_arguments = 1
     final_argument_whitespace = False
     option_spec = {
         'alt': directives.unchanged,
+        'inline': directives.flag,
+        'caption': directives.unchanged,
     }

     def run(self):
-        dotcode = '\n'.join(self.content)
-        if not dotcode.strip():
-            return [self.state_machine.reporter.warning(
-                'Ignoring "graphviz" directive without content.',
-                line=self.lineno)]
+        if self.arguments:
+            document = self.state.document
+            if self.content:
+                return [document.reporter.warning(
+                    'Graphviz directive cannot have both content and '
+                    'a filename argument', line=self.lineno)]
+            env = self.state.document.settings.env
+            rel_filename, filename = env.relfn2path(self.arguments[0])
+            env.note_dependency(rel_filename)
+            try:
+                fp = codecs.open(filename, 'r', 'utf-8')
+                try:
+                    dotcode = fp.read()
+                finally:
+                    fp.close()
+            except (IOError, OSError):
+                return [document.reporter.warning(
+                    'External Graphviz file %r not found or reading '
+                    'it failed' % filename, line=self.lineno)]
+        else:
+            dotcode = '\n'.join(self.content)
+            if not dotcode.strip():
+                return [self.state_machine.reporter.warning(
+                    'Ignoring "graphviz" directive without content.',
+                    line=self.lineno)]
         node = graphviz()
         node['code'] = dotcode
         node['options'] = []
         if 'alt' in self.options:
             node['alt'] = self.options['alt']
+        if 'caption' in self.options:
+            node['caption'] = self.options['caption']
+        node['inline'] = 'inline' in self.options
         return [node]


@@ -76,6 +102,8 @@ class GraphvizSimple(Directive):
     final_argument_whitespace = False
     option_spec = {
         'alt': directives.unchanged,
+        'inline': directives.flag,
+        'caption': directives.unchanged,
     }

     def run(self):
@@ -85,14 +113,16 @@ class GraphvizSimple(Directive):
         node['options'] = []
         if 'alt' in self.options:
             node['alt'] = self.options['alt']
+        if 'caption' in self.options:
+            node['caption'] = self.options['caption']
+        node['inline'] = 'inline' in self.options
         return [node]


 def render_dot(self, code, options, format, prefix='graphviz'):
-    """
-    Render graphviz code into a PNG or PDF output file.
-    """
+    """Render graphviz code into a PNG or PDF output file."""
     hashkey = code.encode('utf-8') + str(options) + \
               str(self.builder.config.graphviz_dot) + \
               str(self.builder.config.graphviz_dot_args)
     fname = '%s-%s.%s' % (prefix, sha(hashkey).hexdigest(), format)
     if hasattr(self.builder, 'imgpath'):
@@ -193,7 +223,13 @@ def render_dot_html(self, node, code, options, prefix='graphviz',
         self.builder.warn('dot code %r: ' % code + str(exc))
         raise nodes.SkipNode

-    self.body.append(self.starttag(node, 'p', CLASS='graphviz'))
+    inline = node.get('inline', False)
+    if inline:
+        wrapper = 'span'
+    else:
+        wrapper = 'p'
+
+    self.body.append(self.starttag(node, wrapper, CLASS='graphviz'))
     if fname is None:
         self.body.append(self.encode(code))
     else:
@@ -219,8 +255,11 @@ def render_dot_html(self, node, code, options, prefix='graphviz',
             self.body.append('<img src="%s" alt="%s" usemap="#%s" %s/>\n' %
                              (fname, alt, mapname, imgcss))
             self.body.extend(imgmap)
+        if node.get('caption') and not inline:
+            self.body.append('</p>\n<p class="caption">')
+            self.body.append(self.encode(node['caption']))

-    self.body.append('</p>\n')
+    self.body.append('</%s>\n' % wrapper)
     raise nodes.SkipNode


@@ -235,8 +274,25 @@ def render_dot_latex(self, node, code, options, prefix='graphviz'):
         self.builder.warn('dot code %r: ' % code + str(exc))
         raise nodes.SkipNode

+    inline = node.get('inline', False)
+    if inline:
+        para_separator = ''
+    else:
+        para_separator = '\n'
+
     if fname is not None:
-        self.body.append('\\includegraphics{%s}' % fname)
+        caption = node.get('caption')
+        # XXX add ids from previous target node
+        if caption and not inline:
+            self.body.append('\n\\begin{figure}[h!]')
+            self.body.append('\n\\begin{center}')
+            self.body.append('\n\\caption{%s}' % self.encode(caption))
+            self.body.append('\n\\includegraphics{%s}' % fname)
+            self.body.append('\n\\end{center}')
+            self.body.append('\n\\end{figure}\n')
+        else:
+            self.body.append('%s\\includegraphics{%s}%s' %
+                             (para_separator, fname, para_separator))
     raise nodes.SkipNode

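`render_dot` above derives the output filename from a hash of the dot code plus the relevant configuration, so identical graphs are rendered only once and any change to the dot command or its arguments invalidates the cache. A minimal sketch of that keying scheme (sha1 here, matching the `sha` helper the module imports; the parameter names are illustrative):

```python
from hashlib import sha1

def dot_filename(code, options, dot='dot', dot_args=(), fmt='png',
                 prefix='graphviz'):
    """Build a cache filename keyed on the code and rendering configuration."""
    hashkey = (code + str(options) + str(dot)
               + str(list(dot_args))).encode('utf-8')
    return '%s-%s.%s' % (prefix, sha1(hashkey).hexdigest(), fmt)

a = dot_filename('digraph { a -> b }', [])
b = dot_filename('digraph { a -> b }', [])   # same inputs, same file
c = dot_filename('digraph { a -> c }', [])   # different code, new file
```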
@@ -16,7 +16,7 @@
     namespace of the project configuration (that is, all variables from
     ``conf.py`` are available.)

-    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
+    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
     :license: BSD, see LICENSE for details.
 """

|
||||
# -*- coding: utf-8 -*-
|
||||
"""
|
||||
r"""
|
||||
sphinx.ext.inheritance_diagram
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
@ -32,7 +32,7 @@
|
||||
The graph is inserted as a PNG+image map into HTML and a PDF in
|
||||
LaTeX.
|
||||
|
||||
:copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
|
||||
:copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
|
||||
:license: BSD, see LICENSE for details.
|
||||
"""
|
||||
|
||||
@ -66,28 +66,26 @@ class InheritanceGraph(object):
|
||||
from all the way to the root "object", and then is able to generate a
|
||||
graphviz dot graph from them.
|
||||
"""
|
||||
def __init__(self, class_names, currmodule, show_builtins=False):
|
||||
"""
|
||||
*class_names* is a list of child classes to show bases from.
|
||||
def __init__(self, class_names, currmodule, show_builtins=False,
|
||||
private_bases=False, parts=0):
|
||||
"""*class_names* is a list of child classes to show bases from.
|
||||
|
||||
If *show_builtins* is True, then Python builtins will be shown
|
||||
in the graph.
|
||||
"""
|
||||
self.class_names = class_names
|
||||
self.classes = self._import_classes(class_names, currmodule)
|
||||
self.all_classes = self._all_classes(self.classes)
|
||||
if len(self.all_classes) == 0:
|
||||
classes = self._import_classes(class_names, currmodule)
|
||||
self.class_info = self._class_info(classes, show_builtins,
|
||||
private_bases, parts)
|
||||
if not self.class_info:
|
||||
raise InheritanceException('No classes found for '
|
||||
'inheritance diagram')
|
||||
self.show_builtins = show_builtins
|
||||
|
||||
def _import_class_or_module(self, name, currmodule):
|
||||
"""
|
||||
Import a class using its fully-qualified *name*.
|
||||
"""
|
||||
"""Import a class using its fully-qualified *name*."""
|
||||
try:
|
||||
path, base = class_sig_re.match(name).groups()
|
||||
except ValueError:
|
||||
except (AttributeError, ValueError):
|
||||
raise InheritanceException('Invalid class or module %r specified '
|
||||
'for inheritance diagram' % name)
|
||||
|
||||
@ -129,36 +127,52 @@ class InheritanceGraph(object):
|
||||
'not a class or module' % name)
|
||||
|
||||
def _import_classes(self, class_names, currmodule):
|
||||
"""
|
||||
Import a list of classes.
|
||||
"""
|
||||
"""Import a list of classes."""
|
||||
classes = []
|
||||
for name in class_names:
|
||||
classes.extend(self._import_class_or_module(name, currmodule))
|
||||
return classes
|
||||
|
||||
def _all_classes(self, classes):
|
||||
"""
|
||||
Return a list of all classes that are ancestors of *classes*.
|
||||
def _class_info(self, classes, show_builtins, private_bases, parts):
|
||||
"""Return name and bases for all classes that are ancestors of
|
||||
*classes*.
|
||||
|
||||
*parts* gives the number of dotted name parts that is removed from the
|
||||
displayed node names.
|
||||
"""
|
||||
all_classes = {}
|
||||
builtins = __builtins__.values()
|
||||
|
||||
def recurse(cls):
|
||||
all_classes[cls] = None
|
||||
for c in cls.__bases__:
|
||||
if c not in all_classes:
|
||||
recurse(c)
|
||||
if not show_builtins and cls in builtins:
|
||||
return
|
||||
if not private_bases and cls.__name__.startswith('_'):
|
||||
return
|
||||
|
||||
nodename = self.class_name(cls, parts)
|
||||
fullname = self.class_name(cls, 0)
|
||||
|
||||
baselist = []
|
||||
all_classes[cls] = (nodename, fullname, baselist)
|
||||
for base in cls.__bases__:
|
||||
if not show_builtins and base in builtins:
|
||||
continue
|
||||
if not private_bases and base.__name__.startswith('_'):
|
||||
continue
|
||||
baselist.append(self.class_name(base, parts))
|
||||
if base not in all_classes:
|
||||
recurse(base)
|
||||
|
||||
for cls in classes:
|
||||
recurse(cls)
|
||||
|
||||
return all_classes.keys()
|
||||
return all_classes.values()
|
||||
|
||||
def class_name(self, cls, parts=0):
|
||||
"""
|
||||
Given a class object, return a fully-qualified name. This
|
||||
works for things I've tested in matplotlib so far, but may not
|
||||
be completely general.
|
||||
"""Given a class object, return a fully-qualified name.
|
||||
|
||||
This works for things I've tested in matplotlib so far, but may not be
|
||||
completely general.
|
||||
"""
|
||||
module = cls.__module__
|
||||
if module == '__builtin__':
|
||||
@ -171,10 +185,8 @@ class InheritanceGraph(object):
|
||||
return '.'.join(name_parts[-parts:])
|
||||
|
||||
def get_all_class_names(self):
|
||||
"""
|
||||
Get all of the class names involved in the graph.
|
||||
"""
|
||||
return [self.class_name(x) for x in self.all_classes]
|
||||
"""Get all of the class names involved in the graph."""
|
||||
return [fullname for (_, fullname, _) in self.class_info]
|
||||
|
||||
# These are the default attrs for graphviz
|
||||
default_graph_attrs = {
|
||||
@ -200,11 +212,10 @@ class InheritanceGraph(object):
|
||||
def _format_graph_attrs(self, attrs):
|
||||
return ''.join(['%s=%s;\n' % x for x in attrs.items()])
|
||||
|
||||
def generate_dot(self, name, parts=0, urls={}, env=None,
|
||||
def generate_dot(self, name, urls={}, env=None,
|
||||
graph_attrs={}, node_attrs={}, edge_attrs={}):
|
||||
"""
|
||||
Generate a graphviz dot graph from the classes that
|
||||
were passed in to __init__.
|
||||
"""Generate a graphviz dot graph from the classes that were passed in
|
||||
to __init__.
|
||||
|
||||
*name* is the name of the graph.
|
||||
|
||||
@ -228,26 +239,17 @@ class InheritanceGraph(object):
|
||||
res.append('digraph %s {\n' % name)
|
||||
res.append(self._format_graph_attrs(g_attrs))
|
||||
|
||||
for cls in self.all_classes:
|
||||
if not self.show_builtins and cls in __builtins__.values():
|
||||
continue
|
||||
|
||||
name = self.class_name(cls, parts)
|
||||
|
||||
for name, fullname, bases in self.class_info:
|
||||
# Write the node
|
||||
this_node_attrs = n_attrs.copy()
|
||||
url = urls.get(self.class_name(cls))
|
||||
url = urls.get(fullname)
|
||||
if url is not None:
|
||||
this_node_attrs['URL'] = '"%s"' % url
|
||||
res.append(' "%s" [%s];\n' %
|
||||
(name, self._format_node_attrs(this_node_attrs)))
|
||||
|
||||
# Write the edges
|
||||
for base in cls.__bases__:
|
||||
if not self.show_builtins and base in __builtins__.values():
|
||||
continue
|
||||
|
||||
base_name = self.class_name(base, parts)
|
||||
for base_name in bases:
|
||||
res.append(' "%s" -> "%s" [%s];\n' %
|
||||
(base_name, name,
|
||||
self._format_node_attrs(e_attrs)))
|
||||
@ -272,6 +274,7 @@ class InheritanceDiagram(Directive):
|
||||
final_argument_whitespace = True
|
||||
option_spec = {
|
||||
'parts': directives.nonnegative_int,
|
||||
'private-bases': directives.flag,
|
||||
}
|
||||
|
||||
def run(self):
|
||||
@ -280,11 +283,16 @@ class InheritanceDiagram(Directive):
|
||||
env = self.state.document.settings.env
|
||||
class_names = self.arguments[0].split()
|
||||
class_role = env.get_domain('py').role('class')
|
||||
# Store the original content for use as a hash
|
||||
node['parts'] = self.options.get('parts', 0)
|
||||
node['content'] = ', '.join(class_names)
|
||||
|
||||
# Create a graph starting with the list of classes
|
||||
try:
|
||||
graph = InheritanceGraph(class_names,
|
||||
env.temp_data.get('py:module'))
|
||||
graph = InheritanceGraph(
|
||||
class_names, env.temp_data.get('py:module'),
|
||||
parts=node['parts'],
|
||||
private_bases='private-bases' in self.options)
|
||||
except InheritanceException, err:
|
||||
return [node.document.reporter.warning(err.args[0],
|
||||
line=self.lineno)]
|
||||
@ -300,9 +308,6 @@ class InheritanceDiagram(Directive):
|
||||
# Store the graph object so we can use it to generate the
|
||||
# dot file later
|
||||
node['graph'] = graph
|
||||
# Store the original content for use as a hash
|
||||
node['parts'] = self.options.get('parts', 0)
|
||||
node['content'] = ', '.join(class_names)
|
||||
return [node]
|
||||
|
||||
|
||||
@ -316,7 +321,6 @@ def html_visit_inheritance_diagram(self, node):
|
||||
image map.
|
||||
"""
|
||||
graph = node['graph']
|
||||
parts = node['parts']
|
||||
|
||||
graph_hash = get_graph_hash(node)
|
||||
name = 'inheritance%s' % graph_hash
|
||||
@ -329,7 +333,7 @@ def html_visit_inheritance_diagram(self, node):
|
||||
elif child.get('refid') is not None:
|
||||
urls[child['reftitle']] = '#' + child.get('refid')
|
||||
|
||||
dotcode = graph.generate_dot(name, parts, urls, env=self.builder.env)
|
||||
dotcode = graph.generate_dot(name, urls, env=self.builder.env)
|
||||
render_dot_html(self, node, dotcode, [], 'inheritance', 'inheritance',
|
||||
alt='Inheritance diagram of ' + node['content'])
|
||||
raise nodes.SkipNode
|
||||
@ -340,12 +344,11 @@ def latex_visit_inheritance_diagram(self, node):
|
||||
Output the graph for LaTeX. This will insert a PDF.
|
||||
"""
|
||||
graph = node['graph']
|
||||
parts = node['parts']
|
||||
|
||||
graph_hash = get_graph_hash(node)
|
||||
name = 'inheritance%s' % graph_hash
|
||||
|
||||
dotcode = graph.generate_dot(name, parts, env=self.builder.env,
|
||||
dotcode = graph.generate_dot(name, env=self.builder.env,
|
||||
graph_attrs={'size': '"6.0,6.0"'})
|
||||
render_dot_latex(self, node, dotcode, [], 'inheritance')
|
||||
raise nodes.SkipNode
|
||||
@ -362,7 +365,8 @@ def setup(app):
|
||||
latex=(latex_visit_inheritance_diagram, None),
|
||||
html=(html_visit_inheritance_diagram, None),
|
||||
text=(skip, None),
|
||||
man=(skip, None))
|
||||
man=(skip, None),
|
||||
texinfo=(skip, None))
|
||||
app.add_directive('inheritance-diagram', InheritanceDiagram)
|
||||
app.add_config_value('inheritance_graph_attrs', {}, False),
|
||||
app.add_config_value('inheritance_node_attrs', {}, False),
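For readers following the reworked `_class_info` collection above, a minimal standalone sketch (Python 3, no Sphinx imports; `class_name`, `class_info`, and the example classes are illustrative, not part of the patch) behaves the same way: builtins and private-named classes are pruned, and each surviving class carries its trimmed node name, full name, and base list.

```python
import builtins

def class_name(cls, parts=0):
    # Fully-qualified dotted name, optionally trimmed to the last *parts* components.
    module = cls.__module__
    fullname = cls.__name__ if module == 'builtins' else '%s.%s' % (module, cls.__name__)
    if parts == 0:
        return fullname
    return '.'.join(fullname.split('.')[-parts:])

def class_info(classes, show_builtins=False, private_bases=False, parts=0):
    # Collect (nodename, fullname, baselist) for *classes* and their ancestors,
    # mirroring the patched InheritanceGraph._class_info.
    all_classes = {}
    builtin_values = list(vars(builtins).values())

    def recurse(cls):
        if not show_builtins and cls in builtin_values:
            return
        if not private_bases and cls.__name__.startswith('_'):
            return
        baselist = []
        all_classes[cls] = (class_name(cls, parts), class_name(cls, 0), baselist)
        for base in cls.__bases__:
            if not show_builtins and base in builtin_values:
                continue
            if not private_bases and base.__name__.startswith('_'):
                continue
            baselist.append(class_name(base, parts))
            if base not in all_classes:
                recurse(base)

    for cls in classes:
        recurse(cls)
    return list(all_classes.values())

class A(object):
    pass

class B(A):
    pass

info = class_info([B], parts=1)
# info holds (nodename, fullname, baselist) for B and A; 'object' is omitted as a builtin
```

Note how `continue` inside the base loop also skips the recursion, so nothing reachable only through a pruned base is collected.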

@@ -20,12 +20,13 @@
    also be specified individually, e.g. if the docs should be buildable
    without Internet access.

    :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
    :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""

import time
import zlib
import codecs
import urllib2
import posixpath
from os import path
@@ -33,19 +34,26 @@ from os import path
from docutils import nodes

from sphinx.builders.html import INVENTORY_FILENAME
from sphinx.util.pycompat import b


handlers = [urllib2.ProxyHandler(), urllib2.HTTPRedirectHandler(),
            urllib2.HTTPHandler()]
if hasattr(urllib2, 'HTTPSHandler'):
try:
    handlers.append(urllib2.HTTPSHandler)
except NameError:
    pass

urllib2.install_opener(urllib2.build_opener(*handlers))

UTF8StreamReader = codecs.lookup('utf-8')[2]


def read_inventory_v1(f, uri, join):
    f = UTF8StreamReader(f)
    invdata = {}
    line = f.next()
    projname = line.rstrip()[11:].decode('utf-8')
    projname = line.rstrip()[11:]
    line = f.next()
    version = line.rstrip()[11:]
    for line in f:
@@ -68,25 +76,25 @@ def read_inventory_v2(f, uri, join, bufsize=16*1024):
    projname = line.rstrip()[11:].decode('utf-8')
    line = f.readline()
    version = line.rstrip()[11:].decode('utf-8')
    line = f.readline()
    line = f.readline().decode('utf-8')
    if 'zlib' not in line:
        raise ValueError

    def read_chunks():
        decompressor = zlib.decompressobj()
        for chunk in iter(lambda: f.read(bufsize), ''):
        for chunk in iter(lambda: f.read(bufsize), b('')):
            yield decompressor.decompress(chunk)
        yield decompressor.flush()

    def split_lines(iter):
        buf = ''
        buf = b('')
        for chunk in iter:
            buf += chunk
            lineend = buf.find('\n')
            lineend = buf.find(b('\n'))
            while lineend != -1:
                yield buf[:lineend].decode('utf-8')
                buf = buf[lineend+1:]
                lineend = buf.find('\n')
                lineend = buf.find(b('\n'))
        assert not buf

    for line in split_lines(read_chunks()):
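The byte-safe streaming changes in this hunk can be exercised standalone. In the sketch below (Python 3, where the `b('')` compatibility helper from `sphinx.util.pycompat` is just a bytes literal; `io.BytesIO` stands in for the fetched inventory file), `read_chunks` and `split_lines` mirror the patched helpers:

```python
import io
import zlib

def read_chunks(f, bufsize=16 * 1024):
    # Incrementally decompress the zlib stream, never holding it all in memory.
    decompressor = zlib.decompressobj()
    for chunk in iter(lambda: f.read(bufsize), b''):
        yield decompressor.decompress(chunk)
    yield decompressor.flush()

def split_lines(chunks):
    # Accumulate decompressed bytes and yield complete, decoded text lines.
    buf = b''
    for chunk in chunks:
        buf += chunk
        lineend = buf.find(b'\n')
        while lineend != -1:
            yield buf[:lineend].decode('utf-8')
            buf = buf[lineend + 1:]
            lineend = buf.find(b'\n')
    assert not buf  # a well-formed payload ends with a newline

payload = zlib.compress(b'py:module\nsphinx.ext.intersphinx\n')
lines = list(split_lines(read_chunks(io.BytesIO(payload))))
# lines == ['py:module', 'sphinx.ext.intersphinx']
```

Keeping the buffer as `bytes` until a full line is found is what makes this work identically whether `f.read` returns `str` (Python 2) or `bytes` (Python 3).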
@@ -109,13 +117,13 @@ def fetch_inventory(app, uri, inv):
        if inv.find('://') != -1:
            f = urllib2.urlopen(inv)
        else:
            f = open(path.join(app.srcdir, inv))
            f = open(path.join(app.srcdir, inv), 'rb')
    except Exception, err:
        app.warn('intersphinx inventory %r not fetchable due to '
                 '%s: %s' % (inv, err.__class__, err))
        return
    try:
        line = f.readline().rstrip()
        line = f.readline().rstrip().decode('utf-8')
        try:
            if line == '# Sphinx inventory version 1':
                invdata = read_inventory_v1(f, uri, join)
@@ -141,6 +149,8 @@ def load_mappings(app):
    env = app.builder.env
    if not hasattr(env, 'intersphinx_cache'):
        env.intersphinx_cache = {}
        env.intersphinx_inventory = {}
        env.intersphinx_named_inventory = {}
    cache = env.intersphinx_cache
    update = False
    for key, value in app.config.intersphinx_mapping.iteritems():
@@ -191,10 +201,12 @@ def missing_reference(app, env, node, contnode):
        return
    objtypes = ['%s:%s' % (domain, objtype) for objtype in objtypes]
    to_try = [(env.intersphinx_inventory, target)]
    in_set = None
    if ':' in target:
        # first part may be the foreign doc set name
        setname, newtarget = target.split(':', 1)
        if setname in env.intersphinx_named_inventory:
            in_set = setname
            to_try.append((env.intersphinx_named_inventory[setname], newtarget))
    for inventory, target in to_try:
        for objtype in objtypes:
@@ -203,11 +215,25 @@ def missing_reference(app, env, node, contnode):
            proj, version, uri, dispname = inventory[objtype][target]
            newnode = nodes.reference('', '', internal=False, refuri=uri,
                                      reftitle='(in %s v%s)' % (proj, version))
            if dispname == '-':
            if node.get('refexplicit'):
                # use whatever title was given
                newnode.append(contnode)
            elif dispname == '-':
                # use whatever title was given, but strip prefix
                title = contnode.astext()
                if in_set and title.startswith(in_set+':'):
                    newnode.append(contnode.__class__(title[len(in_set)+1:],
                                                      title[len(in_set)+1:]))
                else:
                    newnode.append(contnode)
            else:
                # else use the given display name (used for :ref:)
                newnode.append(contnode.__class__(dispname, dispname))
            return newnode
    # at least get rid of the ':' in the target if no explicit title given
    if in_set is not None and not node.get('refexplicit', True):
        if len(contnode) and isinstance(contnode[0], nodes.Text):
            contnode[0] = nodes.Text(newtarget, contnode[0].rawsource)
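The prefix-stripping rule introduced above is small enough to state on its own. A hypothetical helper (`strip_set_prefix` is illustrative, not in the patch) showing the intended behavior:

```python
def strip_set_prefix(title, in_set):
    # Drop a leading "setname:" that the user typed only to pick the
    # foreign doc set, so it does not leak into the rendered title.
    if in_set and title.startswith(in_set + ':'):
        return title[len(in_set) + 1:]
    return title

result = strip_set_prefix('py:json.dumps', 'py')
# result == 'json.dumps'
```

Titles with no recognized set name pass through unchanged.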


def setup(app):