Initial import of the doc tools.
Makefile | 24 lines (new file)
@@ -0,0 +1,24 @@

PYTHON ?= python

export PYTHONPATH = $(shell echo "$$PYTHONPATH"):./sphinx

.PHONY: all check clean clean-pyc pylint reindent testserver

all: clean-pyc check

check:
	@$(PYTHON) utils/check_sources.py -i sphinx/style/jquery.js sphinx
	@$(PYTHON) utils/check_sources.py converter

clean: clean-pyc

clean-pyc:
	find . -name '*.pyc' -exec rm -f {} +
	find . -name '*.pyo' -exec rm -f {} +
	find . -name '*~' -exec rm -f {} +

pylint:
	@pylint --rcfile utils/pylintrc sphinx converter

reindent:
	@$(PYTHON) utils/reindent.py -r -B .
README | 79 lines (new file)
@@ -0,0 +1,79 @@

py-rest-doc
===========

This sandbox project is about moving the official Python documentation
to reStructuredText.


What you need to know
---------------------

This project uses Python 2.5 features, so you'll need a working Python
2.5 setup.

If you want code highlighting, you need Pygments >= 0.8, easily
installable from PyPI. Jinja, the template engine, is included as an
SVN external.

For the rest of this document, let's assume that you have a Python
checkout (you need the 2.6 line, i.e. the trunk) in ~/devel/python and
this checkout in the current directory.

To convert the LaTeX doc to reST, you first have to apply the patch in
``etc/inst.diff`` to the ``inst/inst.tex`` LaTeX file in the Python
checkout::

   patch -d ~/devel/python/Doc -p0 < etc/inst.diff

Then, create a target directory for the reST sources and run the
converter script::

   mkdir sources
   python convert.py ~/devel/python/Doc sources

This will convert all LaTeX sources to reST files in the ``sources``
directory.

The ``sources`` directory contains a ``conf.py`` file which contains
general configuration for the build process, such as the Python
version that should be shown, or the date format for "last updated on"
notes.


Building the HTML version
-------------------------

Create a target directory and run ::

   mkdir build-html
   python sphinx-build.py -b html sources build-html

This will create HTML files in the ``build-html`` directory.

The ``build-html`` directory will also contain a ``.doctrees``
directory, which caches pickles containing the docutils doctrees for
all source files, as well as an ``environment.pickle`` file that
collects all meta-information and data that's needed to
cross-reference the sources and generate indices.


Running the online (web) version
--------------------------------

First, you need to build the sources with the "web" builder::

   mkdir build-web
   python sphinx-build.py -b web sources build-web

This will create files with pickled contents for the web application
in the target directory.

Then, you can run ::

   python sphinx-web.py build-web

which will start a webserver using wsgiref on ``localhost:3000``. The
web application has a configuration file, ``build-web/webconf.py``,
where you can configure the server and port for the application as
well as various other settings specific to the web app.
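The README notes that ``sphinx-web.py`` serves the built files through Python's built-in ``wsgiref`` server on ``localhost:3000``. A minimal, self-contained Python 3 sketch of that serving pattern; the trivial ``app`` below is a stand-in for illustration, not the real Sphinx web application:

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # stand-in WSGI application; the real one serves the pickled build
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello from the docs server']

# port 0 binds a free ephemeral port; sphinx-web.py uses 3000 by default
httpd = make_server('localhost', 0, app)
host, port = httpd.server_address
```

Calling ``httpd.serve_forever()`` would then serve requests until interrupted.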
TODO | 13 lines (new file)
@@ -0,0 +1,13 @@

Global TODO
===========

- discuss and debug the comments system
- write a new Makefile, handle automatic version info and checkout
- write a "printable" builder (export to LaTeX, most probably)
- discuss the default role
- discuss the lib -> ref section move
- prepare for databases other than SQLite for comments
- look at the old tools/ scripts; decide what functionality should be rewritten
- add search via Xapian?
- optionally have a contents tree view in the sidebar (AJAX based)?
convert.py | 25 lines (new file)
@@ -0,0 +1,25 @@

# -*- coding: utf-8 -*-
"""
    Convert the Python documentation to Sphinx
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""

import sys
import os

from converter import convert_dir

if __name__ == '__main__':
    try:
        rootdir = sys.argv[1]
        destdir = os.path.abspath(sys.argv[2])
    except IndexError:
        print "usage: convert.py docrootdir destdir"
        sys.exit(1)

    assert os.path.isdir(os.path.join(rootdir, 'texinputs'))
    os.chdir(rootdir)
    convert_dir(destdir, *sys.argv[3:])
converter/__init__.py | 144 lines (new file)
@@ -0,0 +1,144 @@

# -*- coding: utf-8 -*-
"""
    Documentation converter - high level functions
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""

import sys
import os
import glob
import shutil
import codecs
from os import path

from .tokenizer import Tokenizer
from .latexparser import DocParser
from .restwriter import RestWriter
from .filenamemap import (fn_mapping, copyfiles_mapping, newfiles_mapping,
                          rename_mapping, dirs_to_make, toctree_mapping,
                          amendments_mapping)
from .console import red, green

def convert_file(infile, outfile, doraise=True, splitchap=False,
                 toctree=None, deflang=None, labelprefix=''):
    inf = codecs.open(infile, 'r', 'latin1')
    p = DocParser(Tokenizer(inf.read()).tokenize(), infile)
    if not splitchap:
        outf = codecs.open(outfile, 'w', 'utf-8')
    else:
        outf = None
    r = RestWriter(outf, splitchap, toctree, deflang, labelprefix)
    try:
        r.write_document(p.parse())
        if splitchap:
            for i, chapter in enumerate(r.chapters[1:]):
                coutf = codecs.open('%s/%d_%s' % (
                    path.dirname(outfile), i+1, path.basename(outfile)),
                    'w', 'utf-8')
                coutf.write(chapter.getvalue())
                coutf.close()
        else:
            outf.close()
        return 1, r.warnings
    except Exception, err:
        if doraise:
            raise
        return 0, str(err)


def convert_dir(outdirname, *args):
    # make directories
    for dirname in dirs_to_make:
        try:
            os.mkdir(path.join(outdirname, dirname))
        except OSError:
            pass

    # copy files (currently only non-tex includes)
    for oldfn, newfn in copyfiles_mapping.iteritems():
        newpathfn = path.join(outdirname, newfn)
        globfns = glob.glob(oldfn)
        if len(globfns) == 1 and not path.isdir(newpathfn):
            shutil.copyfile(globfns[0], newpathfn)
        else:
            for globfn in globfns:
                shutil.copyfile(globfn, path.join(newpathfn,
                                                  path.basename(globfn)))

    # convert tex files
    # "doc" is not converted. It must be rewritten anyway.
    for subdir in ('api', 'dist', 'ext', 'inst', 'commontex',
                   'lib', 'mac', 'ref', 'tut', 'whatsnew'):
        if args and subdir not in args:
            continue
        if subdir not in fn_mapping:
            continue
        newsubdir = fn_mapping[subdir]['__newname__']
        deflang = fn_mapping[subdir].get('__defaulthighlightlang__')
        labelprefix = fn_mapping[subdir].get('__labelprefix__', '')
        for filename in sorted(os.listdir(subdir)):
            if not filename.endswith('.tex'):
                continue
            filename = filename[:-4]  # strip extension
            newname = fn_mapping[subdir][filename]
            if newname is None:
                continue
            if newname.endswith(':split'):
                newname = newname[:-6]
                splitchap = True
            else:
                splitchap = False
            if '/' not in newname:
                outfilename = path.join(outdirname, newsubdir, newname + '.rst')
            else:
                outfilename = path.join(outdirname, newname + '.rst')
            toctree = toctree_mapping.get(path.join(subdir, filename))
            infilename = path.join(subdir, filename + '.tex')
            print green(infilename),
            success, state = convert_file(infilename, outfilename, False,
                                          splitchap, toctree, deflang,
                                          labelprefix)
            if not success:
                print red("ERROR:")
                print red("  " + state)
            else:
                if state:
                    print "warnings:"
                    for warning in state:
                        print "  " + warning

    # rename files, e.g. split ones
    for oldfn, newfn in rename_mapping.iteritems():
        try:
            if newfn is None:
                os.unlink(path.join(outdirname, oldfn))
            else:
                os.rename(path.join(outdirname, oldfn),
                          path.join(outdirname, newfn))
        except OSError, err:
            if err.errno == 2:
                continue
            raise

    # copy new files
    srcdirname = path.join(path.dirname(__file__), 'newfiles')
    for fn, newfn in newfiles_mapping.iteritems():
        shutil.copyfile(path.join(srcdirname, fn),
                        path.join(outdirname, newfn))

    # make amendments
    for newfn, (pre, post) in amendments_mapping.iteritems():
        fn = path.join(outdirname, newfn)
        try:
            ft = open(fn).read()
        except Exception, err:
            print "Error making amendments to %s: %s" % (newfn, err)
            continue
        else:
            fw = open(fn, 'w')
            fw.write(pre)
            fw.write(ft)
            fw.write(post)
            fw.close()
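The ``:split`` suffix handling inside ``convert_dir`` is easy to miss: a mapping value like ``'tutorial:split'`` means "write this file, one output file per chapter". A small Python 3 sketch of just that naming convention (the function name is mine, not in the source):

```python
def resolve_newname(newname):
    # Replicate convert_dir's ':split' convention: strip the 6-character
    # suffix and signal that the output should be split per chapter.
    if newname.endswith(':split'):
        return newname[:-6], True
    return newname, False
```

For example, ``resolve_newname('tutorial:split')`` yields ``('tutorial', True)`` while plain names pass through unchanged.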
converter/console.py | 101 lines (new file)
@@ -0,0 +1,101 @@

# -*- coding: utf-8 -*-
"""
    Console utils
    ~~~~~~~~~~~~~

    Format colored console output.

    :copyright: 1998-2004 by the Gentoo Foundation.
    :copyright: 2006-2007 by Georg Brandl.
    :license: GNU GPL.
"""

esc_seq = "\x1b["

codes = {}
codes["reset"] = esc_seq + "39;49;00m"

codes["bold"] = esc_seq + "01m"
codes["faint"] = esc_seq + "02m"
codes["standout"] = esc_seq + "03m"
codes["underline"] = esc_seq + "04m"
codes["blink"] = esc_seq + "05m"
codes["overline"] = esc_seq + "06m"  # Who made this up? Seriously.

ansi_color_codes = []
for x in xrange(30, 38):
    ansi_color_codes.append("%im" % x)
    ansi_color_codes.append("%i;01m" % x)

rgb_ansi_colors = [
    '0x000000', '0x555555', '0xAA0000', '0xFF5555',
    '0x00AA00', '0x55FF55', '0xAA5500', '0xFFFF55',
    '0x0000AA', '0x5555FF', '0xAA00AA', '0xFF55FF',
    '0x00AAAA', '0x55FFFF', '0xAAAAAA', '0xFFFFFF'
]

for x in xrange(len(rgb_ansi_colors)):
    codes[rgb_ansi_colors[x]] = esc_seq + ansi_color_codes[x]

del x

codes["black"] = codes["0x000000"]
codes["darkgray"] = codes["0x555555"]

codes["red"] = codes["0xFF5555"]
codes["darkred"] = codes["0xAA0000"]

codes["green"] = codes["0x55FF55"]
codes["darkgreen"] = codes["0x00AA00"]

codes["yellow"] = codes["0xFFFF55"]
codes["brown"] = codes["0xAA5500"]

codes["blue"] = codes["0x5555FF"]
codes["darkblue"] = codes["0x0000AA"]

codes["fuchsia"] = codes["0xFF55FF"]
codes["purple"] = codes["0xAA00AA"]

codes["teal"] = codes["0x00AAAA"]
codes["turquoise"] = codes["0x55FFFF"]

codes["white"] = codes["0xFFFFFF"]
codes["lightgray"] = codes["0xAAAAAA"]

codes["darkteal"] = codes["turquoise"]
codes["darkyellow"] = codes["brown"]
codes["fuscia"] = codes["fuchsia"]
codes["white"] = codes["bold"]

def nocolor():
    "turn off colorization"
    for code in codes:
        codes[code] = ""

def reset_color():
    return codes["reset"]

def colorize(color_key, text):
    return codes[color_key] + text + codes["reset"]

functions_colors = [
    "bold", "white", "teal", "turquoise", "darkteal",
    "fuscia", "fuchsia", "purple", "blue", "darkblue",
    "green", "darkgreen", "yellow", "brown",
    "darkyellow", "red", "darkred"
]

def create_color_func(color_key):
    """
    Return a function that formats its argument in the given color.
    """
    def derived_func(text):
        return colorize(color_key, text)
    return derived_func

ns = locals()
for c in functions_colors:
    ns[c] = create_color_func(c)

del c, ns
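``create_color_func`` in ``converter/console.py`` is a classic closure factory: each generated function captures one color key. A self-contained Python 3 sketch of the same pattern, parameterized on an explicit ``codes`` table instead of the module global (the two-entry table here is illustrative only):

```python
def create_color_func(color_key, codes):
    # Same closure pattern as converter/console.py: derived_func captures
    # color_key and wraps its argument in the escape codes for that color.
    def derived_func(text):
        return codes[color_key] + text + codes["reset"]
    return derived_func

codes = {"green": "\x1b[32m", "reset": "\x1b[39;49;00m"}
green = create_color_func("green", codes)
```

``green("ok")`` then returns ``"ok"`` wrapped in the green and reset escape sequences.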
converter/docnodes.py | 297 lines (new file)
@@ -0,0 +1,297 @@

# -*- coding: utf-8 -*-
"""
    Python documentation LaTeX parser - document nodes
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""


class DocNode(object):
    """ A node in the document tree. """
    def __repr__(self):
        return '%s()' % self.__class__.__name__

    def __str__(self):
        raise RuntimeError('cannot stringify docnodes')

    def walk(self):
        return []


class CommentNode(DocNode):
    """ A comment. """
    def __init__(self, comment):
        assert isinstance(comment, basestring)
        self.comment = comment

    def __repr__(self):
        return 'CommentNode(%r)' % self.comment


class RootNode(DocNode):
    """ A whole document. """
    def __init__(self, filename, children):
        self.filename = filename
        self.children = children
        self.params = {}
        self.labels = {}

    def __repr__(self):
        return 'RootNode(%r, %r)' % (self.filename, self.children)

    def walk(self):
        return self.children

    def transform(self):
        """ Do restructurings not possible during parsing. """
        def do_descenvs(node):
            r""" Make \xxxlines an attribute of the parent xxxdesc node. """
            for subnode in node.walk():
                do_descenvs(subnode)
            if isinstance(node, DescEnvironmentNode):
                for subnode in node.content.walk():
                    if isinstance(subnode, DescLineCommandNode):
                        node.additional.append((subnode.cmdname, subnode.args))

        do_descenvs(self)


class NodeList(DocNode, list):
    """ A list of subnodes. """
    def __init__(self, children=None):
        list.__init__(self, children or [])

    def __repr__(self):
        return 'NL%s' % list.__repr__(self)

    def walk(self):
        return self

    def append(self, node):
        assert isinstance(node, DocNode)
        if type(node) is EmptyNode:
            return
        elif self and isinstance(node, TextNode) and \
             type(self[-1]) is TextNode:
            self[-1].text += node.text
        elif type(node) is NodeList:
            list.extend(self, node)
        elif type(node) is VerbatimNode and self and \
             isinstance(self[-1], ParaSepNode):
            # don't allow a ParaSepNode before VerbatimNode
            # because this breaks ReST's '::'
            self[-1] = node
        else:
            list.append(self, node)

    def flatten(self):
        if len(self) > 1:
            return self
        elif len(self) == 1:
            return self[0]
        else:
            return EmptyNode()


class ParaSepNode(DocNode):
    """ A node for paragraph separator. """
    def __repr__(self):
        return 'Para'


class TextNode(DocNode):
    """ A node containing text. """
    def __init__(self, text):
        assert isinstance(text, basestring)
        self.text = text

    def __repr__(self):
        if type(self) is TextNode:
            return 'T%r' % self.text
        else:
            return '%s(%r)' % (self.__class__.__name__, self.text)


class EmptyNode(TextNode):
    """ An empty node. """
    def __init__(self, *args):
        self.text = ''


class NbspNode(TextNode):
    """ A non-breaking space. """
    def __init__(self, *args):
        # this breaks ReST markup (!)
        #self.text = u'\N{NO-BREAK SPACE}'
        self.text = ' '

    def __repr__(self):
        return 'NBSP'


simplecmd_mapping = {
    'ldots': u'...',
    'moreargs': '...',
    'unspecified': '...',
    'ASCII': 'ASCII',
    'UNIX': 'Unix',
    'Unix': 'Unix',
    'POSIX': 'POSIX',
    'LaTeX': 'LaTeX',
    'EOF': 'EOF',
    'Cpp': 'C++',
    'C': 'C',
    'sub': u'--> ',
    'textbackslash': '\\\\',
    'textunderscore': '_',
    'texteuro': u'\N{EURO SIGN}',
    'textasciicircum': u'^',
    'textasciitilde': u'~',
    'textgreater': '>',
    'textless': '<',
    'textbar': '|',
    'backslash': '\\\\',
    'tilde': '~',
    'copyright': u'\N{COPYRIGHT SIGN}',
    # \e is mostly inside \code and therefore not escaped.
    'e': '\\',
    'infinity': u'\N{INFINITY}',
    'plusminus': u'\N{PLUS-MINUS SIGN}',
    'leq': u'\N{LESS-THAN OR EQUAL TO}',
    'geq': u'\N{GREATER-THAN OR EQUAL TO}',
    'pi': u'\N{GREEK SMALL LETTER PI}',
    'AA': u'\N{LATIN CAPITAL LETTER A WITH RING ABOVE}',
}

class SimpleCmdNode(TextNode):
    """ A command resulting in simple text. """
    def __init__(self, cmdname, args):
        self.text = simplecmd_mapping[cmdname]


class BreakNode(DocNode):
    """ A line break. """
    def __repr__(self):
        return 'BR'


class CommandNode(DocNode):
    """ A general command. """
    def __init__(self, cmdname, args):
        self.cmdname = cmdname
        self.args = args

    def __repr__(self):
        return '%s(%r, %r)' % (self.__class__.__name__, self.cmdname, self.args)

    def walk(self):
        return self.args


class DescLineCommandNode(CommandNode):
    """ A \\xxxline command. """


class InlineNode(CommandNode):
    """ A node with inline markup. """
    def walk(self):
        return []


class IndexNode(InlineNode):
    """ An index-generating command. """
    def __init__(self, cmdname, args):
        self.cmdname = cmdname
        # tricky -- this is to make this silent in paragraphs
        # while still generating index entries for textonly()
        self.args = []
        self.indexargs = args


class SectioningNode(CommandNode):
    """ A heading node. """


class EnvironmentNode(DocNode):
    """ An environment. """
    def __init__(self, envname, args, content):
        self.envname = envname
        self.args = args
        self.content = content

    def __repr__(self):
        return 'EnvironmentNode(%r, %r, %r)' % (self.envname,
                                                self.args, self.content)

    def walk(self):
        return [self.content]


class DescEnvironmentNode(EnvironmentNode):
    """ An xxxdesc environment. """
    def __init__(self, envname, args, content):
        self.envname = envname
        self.args = args
        self.additional = []
        self.content = content

    def __repr__(self):
        return 'DescEnvironmentNode(%r, %r, %r)' % (self.envname,
                                                    self.args, self.content)


class TableNode(EnvironmentNode):
    def __init__(self, numcols, headings, lines):
        self.numcols = numcols
        self.headings = headings
        self.lines = lines

    def __repr__(self):
        return 'TableNode(%r, %r, %r)' % (self.numcols,
                                          self.headings, self.lines)

    def walk(self):
        return []


class VerbatimNode(DocNode):
    """ A verbatim code block. """
    def __init__(self, content):
        self.content = content

    def __repr__(self):
        return 'VerbatimNode(%r)' % self.content


class ListNode(DocNode):
    """ A list. """
    def __init__(self, items):
        self.items = items

    def __repr__(self):
        return '%s(%r)' % (self.__class__.__name__, self.items)

    def walk(self):
        return [item[1] for item in self.items]


class ItemizeNode(ListNode):
    """ An enumeration with bullets. """


class EnumerateNode(ListNode):
    """ An enumeration with numbers. """


class DescriptionNode(ListNode):
    """ A description list. """


class DefinitionsNode(ListNode):
    """ A definition list. """


class ProductionListNode(ListNode):
    """ A grammar production list. """
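The interesting part of ``docnodes.py`` is ``NodeList.append``, which normalizes the tree as it is built: empty nodes are dropped and adjacent plain-text nodes are merged. A simplified, self-contained Python 3 sketch of just those two branches (classes cut down to a bare ``text`` attribute; the ``NodeList``-extend and ``VerbatimNode`` branches are omitted):

```python
class DocNode(object):
    """Base class, as in docnodes.py."""

class TextNode(DocNode):
    def __init__(self, text):
        self.text = text

class EmptyNode(TextNode):
    def __init__(self, *args):
        self.text = ''

class NodeList(DocNode, list):
    def append(self, node):
        if type(node) is EmptyNode:
            return  # empty nodes are dropped outright
        if self and isinstance(node, TextNode) and type(self[-1]) is TextNode:
            self[-1].text += node.text  # adjacent plain text is coalesced
        else:
            list.append(self, node)

nl = NodeList()
for n in (TextNode('Hello, '), TextNode('world'), EmptyNode()):
    nl.append(n)
```

After the loop, ``nl`` holds a single ``TextNode`` with text ``'Hello, world'``, which keeps the writer from emitting fragmented text runs.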
632
converter/filenamemap.py
Normal file
@@ -0,0 +1,632 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
"""
|
||||
Map LaTeX filenames to ReST filenames
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
:copyright: 2007 by Georg Brandl.
|
||||
:license: Python license.
|
||||
"""
|
||||
|
||||
# '' means: use same name, strip prefix if applicable.
|
||||
# None means: don't translate at all.
|
||||
|
||||
_mapping = {
|
||||
'lib': {
|
||||
'__newname__' : 'modules',
|
||||
|
||||
'asttable': '',
|
||||
'compiler': '',
|
||||
'distutils': '',
|
||||
'email': '',
|
||||
'emailcharsets': 'email.charset',
|
||||
'emailencoders': 'email.encoders',
|
||||
'emailexc': 'email.errors',
|
||||
'emailgenerator': 'email.generator',
|
||||
'emailheaders': 'email.header',
|
||||
'emailiter': 'email.iterators',
|
||||
'emailmessage': 'email.message',
|
||||
'emailmimebase': 'email.mime',
|
||||
'emailparser': 'email.parser',
|
||||
'emailutil': 'email.util',
|
||||
'libaifc': '',
|
||||
'libanydbm': '',
|
||||
'libarray': '',
|
||||
'libascii': 'curses.ascii',
|
||||
'libast': '',
|
||||
'libasynchat': '',
|
||||
'libasyncore': '',
|
||||
'libatexit': '',
|
||||
'libaudioop': '',
|
||||
'libbase64': '',
|
||||
'libbasehttp': 'basehttpserver',
|
||||
'libbastion': '',
|
||||
'libbinascii': '',
|
||||
'libbinhex': '',
|
||||
'libbisect': '',
|
||||
'libbltin': '__builtin__',
|
||||
'libbsddb': '',
|
||||
'libbz2': '',
|
||||
'libcalendar': '',
|
||||
'libcfgparser': 'configparser',
|
||||
'libcgihttp': 'cgihttpserver',
|
||||
'libcgi': '',
|
||||
'libcgitb': '',
|
||||
'libchunk': '',
|
||||
'libcmath': '',
|
||||
'libcmd': '',
|
||||
'libcodecs': '',
|
||||
'libcodeop': '',
|
||||
'libcode': '',
|
||||
'libcollections': '',
|
||||
'libcolorsys': '',
|
||||
'libcommands': '',
|
||||
'libcompileall': '',
|
||||
'libcontextlib': '',
|
||||
'libcookielib': '',
|
||||
'libcookie': '',
|
||||
'libcopyreg': 'copy_reg',
|
||||
'libcopy': '',
|
||||
'libcrypt': '',
|
||||
'libcsv': '',
|
||||
'libctypes': '',
|
||||
'libcursespanel': 'curses.panel',
|
||||
'libcurses': '',
|
||||
'libdatetime': '',
|
||||
'libdbhash': '',
|
||||
'libdbm': '',
|
||||
'libdecimal': '',
|
||||
'libdifflib': '',
|
||||
'libdircache': '',
|
||||
'libdis': '',
|
||||
'libdl': '',
|
||||
'libdoctest': '',
|
||||
'libdocxmlrpc': 'docxmlrpcserver',
|
||||
'libdumbdbm': '',
|
||||
'libdummythreading': 'dummy_threading',
|
||||
'libdummythread': 'dummy_thread',
|
||||
'liberrno': '',
|
||||
'libetree': 'xml.etree.elementtree',
|
||||
'libfcntl': '',
|
||||
'libfilecmp': '',
|
||||
'libfileinput': '',
|
||||
'libfnmatch': '',
|
||||
'libformatter': '',
|
||||
'libfpectl': '',
|
||||
'libfpformat': '',
|
||||
'libftplib': '',
|
||||
'libfunctools': '',
|
||||
'libfuture': '__future__',
|
||||
'libgc': '',
|
||||
'libgdbm': '',
|
||||
'libgetopt': '',
|
||||
'libgetpass': '',
|
||||
'libgettext': '',
|
||||
'libglob': '',
|
||||
'libgrp': '',
|
||||
'libgzip': '',
|
||||
'libhashlib': '',
|
||||
'libheapq': '',
|
||||
'libhmac': '',
|
||||
'libhotshot': '',
|
||||
'libhtmllib': '',
|
||||
'libhtmlparser': '',
|
||||
'libhttplib': '',
|
||||
'libimageop': '',
|
||||
'libimaplib': '',
|
||||
'libimgfile': '',
|
||||
'libimghdr': '',
|
||||
'libimp': '',
|
||||
'libinspect': '',
|
||||
'libitertools': '',
|
||||
'libjpeg': '',
|
||||
'libkeyword': '',
|
||||
'liblinecache': '',
|
||||
'liblocale': '',
|
||||
'liblogging': '',
|
||||
'libmailbox': '',
|
||||
'libmailcap': '',
|
||||
'libmain': '__main__',
|
||||
'libmarshal': '',
|
||||
'libmath': '',
|
||||
'libmd5': '',
|
||||
'libmhlib': '',
|
||||
'libmimetools': '',
|
||||
'libmimetypes': '',
|
||||
'libmimewriter': '',
|
||||
'libmimify': '',
|
||||
'libmmap': '',
|
||||
'libmodulefinder': '',
|
||||
'libmsilib': '',
|
||||
'libmsvcrt': '',
|
||||
'libmultifile': '',
|
||||
'libmutex': '',
|
||||
'libnetrc': '',
|
||||
'libnew': '',
|
||||
'libnis': '',
|
||||
'libnntplib': '',
|
||||
'liboperator': '',
|
||||
'liboptparse': '',
|
||||
'libos': '',
|
||||
'libossaudiodev': '',
|
||||
'libparser': '',
|
||||
'libpdb': '',
|
||||
'libpickle': '',
|
||||
'libpickletools': '',
|
||||
'libpipes': '',
|
||||
'libpkgutil': '',
|
||||
'libplatform': '',
|
||||
'libpopen2': '',
|
||||
'libpoplib': '',
|
||||
'libposixpath': 'os.path',
|
||||
'libposix': '',
|
||||
'libpprint': '',
|
||||
'libprofile': '',
|
||||
'libpty': '',
|
||||
'libpwd': '',
|
||||
'libpyclbr': '',
|
||||
'libpycompile': 'py_compile',
|
||||
'libpydoc': '',
|
||||
'libpyexpat': '',
|
||||
'libqueue': '',
|
||||
'libquopri': '',
|
||||
'librandom': '',
|
||||
'libreadline': '',
|
||||
'librepr': '',
|
||||
'libre': '',
|
||||
'libresource': '',
|
||||
'librexec': '',
|
||||
'librfc822': '',
|
||||
'librlcompleter': '',
|
||||
'librobotparser': '',
|
||||
'librunpy': '',
|
||||
'libsched': '',
|
||||
'libselect': '',
|
||||
'libsets': '',
|
||||
'libsgmllib': '',
|
||||
'libsha': '',
|
||||
'libshelve': '',
|
||||
'libshlex': '',
|
||||
'libshutil': '',
|
||||
'libsignal': '',
|
||||
'libsimplehttp': 'simplehttpserver',
|
||||
'libsimplexmlrpc': 'simplexmlrpcserver',
|
||||
'libsite': '',
|
||||
'libsmtpd': '',
|
||||
'libsmtplib': '',
|
||||
'libsndhdr': '',
|
||||
'libsocket': '',
|
||||
'libsocksvr': 'socketserver',
|
||||
'libspwd': '',
|
||||
'libsqlite3': '',
|
||||
'libstat': '',
|
||||
'libstatvfs': '',
|
||||
'libstringio': '',
|
||||
'libstringprep': '',
|
||||
'libstring': '',
|
||||
'libstruct': '',
|
||||
'libsunaudio': '',
|
||||
'libsunau': '',
|
||||
'libsubprocess': '',
|
||||
'libsymbol': '',
|
||||
'libsyslog': '',
|
||||
'libsys': '',
|
||||
'libtabnanny': '',
|
||||
'libtarfile': '',
|
||||
'libtelnetlib': '',
|
||||
'libtempfile': '',
|
||||
'libtermios': '',
|
||||
'libtest': '',
|
||||
'libtextwrap': '',
|
||||
'libthreading': '',
|
||||
'libthread': '',
|
||||
'libtimeit': '',
|
||||
'libtime': '',
|
||||
'libtokenize': '',
|
||||
'libtoken': '',
|
||||
'libtraceback': '',
|
||||
'libtrace': '',
|
||||
'libtty': '',
|
||||
'libturtle': '',
|
||||
'libtypes': '',
|
||||
'libunicodedata': '',
|
||||
'libunittest': '',
|
||||
'liburllib2': '',
|
||||
'liburllib': '',
|
||||
'liburlparse': '',
|
||||
'libuserdict': '',
|
||||
'libuser': '',
|
||||
'libuuid': '',
|
||||
'libuu': '',
|
||||
'libwarnings': '',
|
||||
'libwave': '',
|
||||
'libweakref': '',
|
||||
'libwebbrowser': '',
|
||||
'libwhichdb': '',
|
||||
'libwinreg': '_winreg',
|
||||
'libwinsound': '',
|
||||
'libwsgiref': '',
|
||||
'libxdrlib': '',
|
||||
'libxmllib': '',
|
||||
'libxmlrpclib': '',
|
||||
'libzipfile': '',
|
||||
'libzipimport': '',
|
||||
'libzlib': '',
|
||||
'tkinter': '',
|
||||
'xmldomminidom': 'xml.dom.minidom',
|
||||
'xmldompulldom': 'xml.dom.pulldom',
|
||||
'xmldom': 'xml.dom',
|
||||
'xmletree': 'xml.etree',
|
||||
'xmlsaxhandler': 'xml.sax.handler',
|
||||
'xmlsaxreader': 'xml.sax.reader',
|
||||
'xmlsax': 'xml.sax',
|
||||
'xmlsaxutils': 'xml.sax.utils',
|
||||
'libal': '',
|
||||
'libcd': '',
|
||||
'libfl': '',
|
||||
'libfm': '',
|
||||
'libgl': '',
|
||||
'libposixfile': '',
|
||||
|
||||
# specials
|
||||
'libundoc': '',
|
||||
'libintro': '',
|
||||
|
||||
# -> ref
|
||||
'libconsts': 'reference/consts',
|
||||
'libexcs': 'reference/exceptions',
|
||||
'libfuncs': 'reference/functions',
|
||||
'libobjs': 'reference/objects',
|
||||
'libstdtypes': 'reference/stdtypes',
|
||||
|
||||
# mainfiles
|
||||
'lib': None,
|
||||
'mimelib': None,
|
||||
|
||||
# obsolete
|
||||
'libni': None,
|
||||
'libcmpcache': None,
|
||||
'libcmp': None,
|
||||
|
||||
# chapter overviews
|
||||
'fileformats': '',
|
||||
'filesys': '',
|
||||
'frameworks': '',
|
||||
'i18n': '',
|
||||
'internet': '',
|
||||
'ipc': '',
|
||||
'language': '',
|
||||
'archiving': '',
|
||||
'custominterp': '',
|
||||
'datatypes': '',
|
||||
'development': '',
|
||||
'markup': '',
|
||||
'modules': '',
|
||||
'netdata': '',
|
||||
'numeric': '',
|
||||
'persistence': '',
|
||||
'windows': '',
|
||||
'libsun': '',
|
||||
'libmm': '',
|
||||
'liballos': '',
|
||||
'libcrypto': '',
|
||||
'libsomeos': '',
|
||||
'libsgi': '',
|
||||
'libmisc': '',
|
||||
'libpython': '',
|
||||
'librestricted': '',
|
||||
'libstrings': '',
|
||||
'libunix': '',
|
||||
},
|
||||
|
||||
'ref': {
|
||||
'__newname__': 'reference',
|
||||
'ref': None,
|
||||
'ref1': 'introduction',
|
||||
'ref2': 'lexical_analysis',
|
||||
'ref3': 'datamodel',
|
||||
'ref4': 'executionmodel',
|
||||
'ref5': 'expressions',
|
||||
'ref6': 'simple_stmts',
|
||||
'ref7': 'compound_stmts',
|
||||
'ref8': 'toplevel_components',
|
||||
},
|
||||
|
||||
'tut': {
|
||||
'__newname__': 'tutorial',
|
||||
'__labelprefix__': 'tut-',
|
||||
'tut': 'tutorial:split',
|
||||
'glossary': 'glossary',
|
||||
},
|
||||
|
||||
'api': {
|
||||
'__newname__': 'c-api',
|
||||
'__defaulthighlightlang__': 'c',
|
||||
'api': None,
|
||||
|
||||
'abstract': '',
|
||||
'concrete': '',
|
||||
'exceptions': '',
|
||||
'init': '',
|
||||
'intro': '',
|
||||
'memory': '',
|
||||
'newtypes': '',
|
||||
'refcounting': '',
|
||||
'utilities': '',
|
||||
'veryhigh': '',
|
||||
},
|
||||
|
||||
'ext': {
|
||||
'__newname__': 'extending',
|
||||
'__defaulthighlightlang__': 'c',
|
||||
'ext': None,
|
||||
|
||||
'building': '',
|
||||
'embedding': '',
|
||||
'extending': 'extending',
|
||||
'newtypes': '',
|
||||
'windows': '',
|
||||
},
|
||||
|
||||
'dist': {
|
||||
'__newname__': 'distutils',
|
||||
'dist': 'distutils:split',
|
||||
'sysconfig': '',
|
||||
},
|
||||
|
||||
'mac': {
|
||||
'__newname__': 'macmodules',
|
||||
'mac': None,
|
||||
|
||||
'libaepack': 'aepack',
|
||||
'libaetools': 'aetools',
|
||||
'libaetypes': 'aetypes',
|
||||
'libautogil': 'autogil',
|
||||
'libcolorpicker': 'colorpicker',
|
||||
'libframework': 'framework',
|
||||
'libgensuitemodule': 'gensuitemodule',
|
||||
'libmacic': 'macic',
|
||||
'libmacos': 'macos',
|
||||
'libmacostools': 'macostools',
|
||||
'libmac': 'mac',
|
||||
'libmacui': 'macui',
|
||||
'libminiae': 'miniae',
|
||||
'libscrap': 'scrap',
|
||||
'scripting': '',
|
||||
'toolbox': '',
|
||||
'undoc': '',
|
||||
'using': '',
|
||||
|
||||
},
|
||||
|
||||
'inst': {
|
||||
'__newname__': 'install',
|
||||
'__defaulthighlightlang__': 'none',
|
||||
'inst': 'index',
|
||||
},
|
||||
|
||||
'whatsnew': {
|
||||
'__newname__': 'whatsnew',
|
||||
'whatsnew20': '2.0',
|
||||
'whatsnew21': '2.1',
|
||||
'whatsnew22': '2.2',
|
||||
'whatsnew23': '2.3',
|
||||
'whatsnew24': '2.4',
|
||||
'whatsnew25': '2.5',
|
||||
'whatsnew26': '2.6',
|
||||
},
|
||||
|
||||
'commontex': {
|
||||
'__newname__': '',
|
||||
'boilerplate': None,
|
||||
'patchlevel': None,
|
||||
'copyright': '',
|
||||
'license': '',
|
||||
'reportingbugs': 'bugs',
|
||||
},
|
||||
}
|
||||
|
||||
fn_mapping = {}

for dir, files in _mapping.iteritems():
    newmap = fn_mapping[dir] = {}
    for fn in files:
        if not fn.startswith('_') and files[fn] == '':
            if fn.startswith(dir):
                newmap[fn] = fn[len(dir):]
            else:
                newmap[fn] = fn
        else:
            newmap[fn] = files[fn]


# new directories to create
dirs_to_make = [
    'c-api',
    'data',
    'distutils',
    'documenting',
    'extending',
    'includes',
    'includes/sqlite3',
    'install',
    'macmodules',
    'modules',
    'reference',
    'tutorial',
    'whatsnew',
]

# includefiles for \verbatiminput and \input
includes_mapping = {
    '../../Parser/Python.asdl': None,  # XXX
    '../../Lib/test/exception_hierarchy.txt': None,
    'emailmessage': 'email.message.rst',
    'emailparser': 'email.parser.rst',
    'emailgenerator': 'email.generator.rst',
    'emailmimebase': 'email.mime.rst',
    'emailheaders': 'email.header.rst',
    'emailcharsets': 'email.charset.rst',
    'emailencoders': 'email.encoders.rst',
    'emailexc': 'email.errors.rst',
    'emailutil': 'email.util.rst',
    'emailiter': 'email.iterators.rst',
}

# new files to copy from converter/newfiles
newfiles_mapping = {
    'conf.py': 'conf.py',
    'TODO': 'TODO',

    'ref_index.rst': 'reference/index.rst',
    'tutorial_index.rst': 'tutorial/index.rst',
    'modules_index.rst': 'modules/index.rst',
    'mac_index.rst': 'macmodules/index.rst',
    'ext_index.rst': 'extending/index.rst',
    'api_index.rst': 'c-api/index.rst',
    'dist_index.rst': 'distutils/index.rst',
    'contents.rst': 'contents.rst',
    'about.rst': 'about.rst',

    'doc.rst': 'documenting/index.rst',
    'doc_intro.rst': 'documenting/intro.rst',
    'doc_style.rst': 'documenting/style.rst',
    'doc_sphinx.rst': 'documenting/sphinx.rst',
    'doc_rest.rst': 'documenting/rest.rst',
    'doc_markup.rst': 'documenting/markup.rst',
}

# copy files from the old doc tree
copyfiles_mapping = {
    'api/refcounts.dat': 'data',
    'lib/email-*.py': 'includes',
    'lib/minidom-example.py': 'includes',
    'lib/tzinfo-examples.py': 'includes',
    'lib/sqlite3/*.py': 'includes/sqlite3',
    'ext/*.c': 'includes',
    'ext/*.py': 'includes',
    'commontex/typestruct.h': 'includes',
}

# files to rename
rename_mapping = {
    'tutorial/1_tutorial.rst': None,  # delete
    'tutorial/2_tutorial.rst': 'tutorial/appetite.rst',
    'tutorial/3_tutorial.rst': 'tutorial/interpreter.rst',
    'tutorial/4_tutorial.rst': 'tutorial/introduction.rst',
    'tutorial/5_tutorial.rst': 'tutorial/controlflow.rst',
    'tutorial/6_tutorial.rst': 'tutorial/datastructures.rst',
    'tutorial/7_tutorial.rst': 'tutorial/modules.rst',
    'tutorial/8_tutorial.rst': 'tutorial/inputoutput.rst',
    'tutorial/9_tutorial.rst': 'tutorial/errors.rst',
    'tutorial/10_tutorial.rst': 'tutorial/classes.rst',
    'tutorial/11_tutorial.rst': 'tutorial/stdlib.rst',
    'tutorial/12_tutorial.rst': 'tutorial/stdlib2.rst',
    'tutorial/13_tutorial.rst': 'tutorial/whatnow.rst',
    'tutorial/14_tutorial.rst': 'tutorial/interactive.rst',
    'tutorial/15_tutorial.rst': 'tutorial/floatingpoint.rst',
    'tutorial/16_tutorial.rst': None,  # delete

    'distutils/1_distutils.rst': 'distutils/introduction.rst',
    'distutils/2_distutils.rst': 'distutils/setupscript.rst',
    'distutils/3_distutils.rst': 'distutils/configfile.rst',
    'distutils/4_distutils.rst': 'distutils/sourcedist.rst',
    'distutils/5_distutils.rst': 'distutils/builtdist.rst',
    'distutils/6_distutils.rst': 'distutils/packageindex.rst',
    'distutils/7_distutils.rst': 'distutils/uploading.rst',
    'distutils/8_distutils.rst': 'distutils/examples.rst',
    'distutils/9_distutils.rst': 'distutils/extending.rst',
    'distutils/10_distutils.rst': 'distutils/commandref.rst',
    'distutils/11_distutils.rst': 'distutils/apiref.rst',
}

# toctree entries
toctree_mapping = {
    'mac/scripting': ['gensuitemodule', 'aetools', 'aepack', 'aetypes', 'miniae'],
    'mac/toolbox': ['colorpicker'],
    'lib/libstrings': ['string', 're', 'struct', 'difflib', 'stringio', 'textwrap',
                       'codecs', 'unicodedata', 'stringprep', 'fpformat'],
    'lib/datatypes': ['datetime', 'calendar', 'collections', 'heapq', 'bisect',
                      'array', 'sets', 'sched', 'mutex', 'queue', 'weakref',
                      'userdict', 'types', 'new', 'copy', 'pprint', 'repr'],
    'lib/numeric': ['math', 'cmath', 'decimal', 'random', 'itertools', 'functools',
                    'operator'],
    'lib/netdata': ['email', 'mailcap', 'mailbox', 'mhlib', 'mimetools', 'mimetypes',
                    'mimewriter', 'mimify', 'multifile', 'rfc822',
                    'base64', 'binhex', 'binascii', 'quopri', 'uu'],
    'lib/markup': ['htmlparser', 'sgmllib', 'htmllib', 'pyexpat', 'xml.dom',
                   'xml.dom.minidom', 'xml.dom.pulldom', 'xml.sax', 'xml.sax.handler',
                   'xml.sax.utils', 'xml.sax.reader', 'xml.etree.elementtree'],
    'lib/fileformats': ['csv', 'configparser', 'robotparser', 'netrc', 'xdrlib'],
    'lib/libcrypto': ['hashlib', 'hmac', 'md5', 'sha'],
    'lib/filesys': ['os.path', 'fileinput', 'stat', 'statvfs', 'filecmp',
                    'tempfile', 'glob', 'fnmatch', 'linecache', 'shutil', 'dircache'],
    'lib/archiving': ['zlib', 'gzip', 'bz2', 'zipfile', 'tarfile'],
    'lib/persistence': ['pickle', 'copy_reg', 'shelve', 'marshal', 'anydbm',
                        'whichdb', 'dbm', 'gdbm', 'dbhash', 'bsddb', 'dumbdbm',
                        'sqlite3'],
    'lib/liballos': ['os', 'time', 'optparse', 'getopt', 'logging', 'getpass',
                     'curses', 'curses.ascii', 'curses.panel', 'platform',
                     'errno', 'ctypes'],
    'lib/libsomeos': ['select', 'thread', 'threading', 'dummy_thread', 'dummy_threading',
                      'mmap', 'readline', 'rlcompleter'],
    'lib/libunix': ['posix', 'pwd', 'spwd', 'grp', 'crypt', 'dl', 'termios', 'tty',
                    'pty', 'fcntl', 'pipes', 'posixfile', 'resource', 'nis',
                    'syslog', 'commands'],
    'lib/ipc': ['subprocess', 'socket', 'signal', 'popen2', 'asyncore', 'asynchat'],
    'lib/internet': ['webbrowser', 'cgi', 'cgitb', 'wsgiref', 'urllib', 'urllib2',
                     'httplib', 'ftplib', 'poplib', 'imaplib',
                     'nntplib', 'smtplib', 'smtpd', 'telnetlib', 'uuid', 'urlparse',
                     'socketserver', 'basehttpserver', 'simplehttpserver',
                     'cgihttpserver', 'cookielib', 'cookie', 'xmlrpclib',
                     'simplexmlrpcserver', 'docxmlrpcserver'],
    'lib/libmm': ['audioop', 'imageop', 'aifc', 'sunau', 'wave', 'chunk',
                  'colorsys', 'imghdr', 'sndhdr', 'ossaudiodev'],
    'lib/i18n': ['gettext', 'locale'],
    'lib/frameworks': ['cmd', 'shlex'],
    'lib/development': ['pydoc', 'doctest', 'unittest', 'test'],
    'lib/libpython': ['sys', '__builtin__', '__main__', 'warnings', 'contextlib',
                      'atexit', 'traceback', '__future__', 'gc', 'inspect',
                      'site', 'user', 'fpectl'],
    'lib/custominterp': ['code', 'codeop'],
    'lib/librestricted': ['rexec', 'bastion'],
    'lib/modules': ['imp', 'zipimport', 'pkgutil', 'modulefinder', 'runpy'],
    'lib/language': ['parser', 'symbol', 'token', 'keyword', 'tokenize',
                     'tabnanny', 'pyclbr', 'py_compile', 'compileall', 'dis',
                     'pickletools', 'distutils'],
    'lib/compiler': ['ast'],
    'lib/libmisc': ['formatter'],
    'lib/libsgi': ['al', 'cd', 'fl', 'fm', 'gl', 'imgfile', 'jpeg'],
    'lib/libsun': ['sunaudio'],
    'lib/windows': ['msilib', 'msvcrt', '_winreg', 'winsound'],
}

# map sourcefilename to [pre, post]
amendments_mapping = {
    'license.rst': ['''\
.. highlightlang:: none

*******************
History and License
*******************

''', ''],

    'bugs.rst': ['''\
**************
Reporting Bugs
**************

''', ''],

    'copyright.rst': ['''\
*********
Copyright
*********

''', ''],

    'install/index.rst': ['''\
.. _install-index:

''', ''],
}
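The fn_mapping loop above derives the new file name from each `_mapping` entry: an empty-string target means "strip the chapter prefix if the file name carries it", an explicit target is used verbatim, and underscore-prefixed keys are metadata. A standalone sketch of that rule (a toy re-implementation for illustration, not an import of the converter; written for modern Python, hence `items()` rather than `iteritems()`, with an invented 'api' sub-dict as input):

```python
# Standalone sketch of the renaming rule used by the fn_mapping loop:
# an empty-string target is renamed by stripping the chapter prefix;
# explicit targets and metadata keys are kept as-is.
_mapping = {
    'api': {'__newname__': 'c-api', 'apiabstract': '', 'intro': '',
            'apiinit': 'init'},
}

fn_mapping = {}
for dirname, files in _mapping.items():
    newmap = fn_mapping[dirname] = {}
    for fn, target in files.items():
        if not fn.startswith('_') and target == '':
            # empty target: strip the chapter prefix if the file name
            # starts with it, otherwise keep the name unchanged
            newmap[fn] = fn[len(dirname):] if fn.startswith(dirname) else fn
        else:
            newmap[fn] = target

print(fn_mapping['api'])
```

So 'apiabstract' becomes 'abstract', while 'intro' (no prefix) and 'apiinit' (explicit target) pass through.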
697
converter/latexparser.py
Normal file
@@ -0,0 +1,697 @@
# -*- coding: utf-8 -*-
"""
    Python documentation LaTeX file parser
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    For more documentation, look into the ``restwriter.py`` file.

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""

from .docnodes import CommentNode, RootNode, NodeList, ParaSepNode, \
     TextNode, EmptyNode, NbspNode, SimpleCmdNode, BreakNode, CommandNode, \
     DescLineCommandNode, InlineNode, IndexNode, SectioningNode, \
     EnvironmentNode, DescEnvironmentNode, TableNode, VerbatimNode, \
     ListNode, ItemizeNode, EnumerateNode, DescriptionNode, \
     DefinitionsNode, ProductionListNode

from .util import umlaut, empty


class ParserError(Exception):
    def __init__(self, msg, lineno):
        Exception.__init__(self, msg, lineno)

    def __str__(self):
        return '%s, line %s' % self.args


def generic_command(name, argspec, nodetype=CommandNode):
    def handle(self):
        args = self.parse_args('\\'+name, argspec)
        return nodetype(name, args)
    return handle

def sectioning_command(name):
    """ Special handling for sectioning commands: move labels directly following
        a sectioning command before it, as required by reST. """
    def handle(self):
        args = self.parse_args('\\'+name, 'M')
        snode = SectioningNode(name, args)
        for l, t, v, r in self.tokens:
            if t == 'command' and v == 'label':
                largs = self.parse_args('\\label', 'T')
                snode.args[0] = NodeList([snode.args[0], CommandNode('label', largs)])
                break
            if t == 'text':
                if not v.strip():
                    # discard whitespace; after a section that's no problem
                    continue
            self.tokens.push((l, t, v, r))
            break
        # no label followed
        return snode
    return handle

def generic_environment(name, argspec, nodetype=EnvironmentNode):
    def handle(self):
        args = self.parse_args(name, argspec)
        return nodetype(name, args, self.parse_until(self.environment_end))
    return handle


class DocParserMeta(type):
    def __init__(cls, name, bases, dict):
        for nodetype, commands in cls.generic_commands.iteritems():
            for cmdname, argspec in commands.iteritems():
                setattr(cls, 'handle_' + cmdname,
                        generic_command(cmdname, argspec, nodetype))

        for cmdname in cls.sectioning_commands:
            setattr(cls, 'handle_' + cmdname, sectioning_command(cmdname))

        for nodetype, envs in cls.generic_envs.iteritems():
            for envname, argspec in envs.iteritems():
                setattr(cls, 'handle_%s_env' % envname,
                        generic_environment(envname, argspec, nodetype))


class DocParser(object):
    """ Parse a Python documentation LaTeX file. """
    __metaclass__ = DocParserMeta

    def __init__(self, tokenstream, filename):
        self.tokens = tokenstream
        self.filename = filename

    def parse(self):
        self.rootnode = RootNode(self.filename, None)
        self.rootnode.children = self.parse_until(None)
        self.rootnode.transform()
        return self.rootnode

    def parse_until(self, condition=None, endatbrace=False):
        nodelist = NodeList()
        bracelevel = 0
        for l, t, v, r in self.tokens:
            if condition and condition(t, v, bracelevel):
                return nodelist.flatten()
            if t == 'command':
                if len(v) == 1 and not v.isalpha():
                    nodelist.append(self.handle_special_command(v))
                    continue
                handler = getattr(self, 'handle_' + v, None)
                if not handler:
                    raise ParserError('no handler for \\%s command' % v, l)
                nodelist.append(handler())
            elif t == 'bgroup':
                bracelevel += 1
            elif t == 'egroup':
                if bracelevel == 0 and endatbrace:
                    return nodelist.flatten()
                bracelevel -= 1
            elif t == 'comment':
                nodelist.append(CommentNode(v))
            elif t == 'tilde':
                nodelist.append(NbspNode())
            elif t == 'mathmode':
                pass  # ignore math mode
            elif t == 'parasep':
                nodelist.append(ParaSepNode())
            else:
                # includes 'boptional' and 'eoptional' which don't have a
                # special meaning in text
                nodelist.append(TextNode(v))
        return nodelist.flatten()

    def parse_args(self, cmdname, argspec):
        """ Helper to parse arguments of a command. """
        # argspec: M = mandatory, T = mandatory, check text-only,
        #          O = optional, Q = optional, check text-only
        args = []
        def optional_end(type, value, bracelevel):
            return type == 'eoptional' and bracelevel == 0

        for i, c in enumerate(argspec):
            assert c in 'OMTQ'
            nextl, nextt, nextv, nextr = self.tokens.pop()
            while nextt == 'comment' or (nextt == 'text' and nextv.isspace()):
                nextl, nextt, nextv, nextr = self.tokens.pop()

            if c in 'OQ':
                if nextt == 'boptional':
                    arg = self.parse_until(optional_end)
                    if c == 'Q' and not isinstance(arg, TextNode):
                        raise ParserError('%s: argument %d must be text only' %
                                          (cmdname, i), nextl)
                    args.append(arg)
                else:
                    # not given
                    args.append(EmptyNode())
                    self.tokens.push((nextl, nextt, nextv, nextr))
                continue

            if nextt == 'bgroup':
                arg = self.parse_until(None, endatbrace=True)
                if c == 'T' and not isinstance(arg, TextNode):
                    raise ParserError('%s: argument %d must be text only' %
                                      (cmdname, i), nextl)
                args.append(arg)
            else:
                if nextt != 'text':
                    raise ParserError('%s: non-grouped non-text arguments not '
                                      'supported' % cmdname, nextl)
                args.append(TextNode(nextv[0]))
                self.tokens.push((nextl, nextt, nextv[1:], nextr[1:]))
        return args

    sectioning_commands = [
        'chapter',
        'chapter*',
        'section',
        'subsection',
        'subsubsection',
        'paragraph',
    ]

    generic_commands = {
        CommandNode: {
            'label': 'T',

            'localmoduletable': '',
            'verbatiminput': 'T',
            'input': 'T',
            'centerline': 'M',

            # Pydoc specific commands
            'versionadded': 'OT',
            'versionchanged': 'OT',
            'deprecated': 'TM',
            'XX' 'X': 'M',  # used in dist.tex ;)

            # module-specific
            'declaremodule': 'QTT',
            'platform': 'T',
            'modulesynopsis': 'M',
            'moduleauthor': 'TT',
            'sectionauthor': 'TT',

            # reference lists
            'seelink': 'TMM',
            'seemodule': 'QTM',
            'seepep': 'TMM',
            'seerfc': 'TTM',
            'seetext': 'M',
            'seetitle': 'OMM',
            'seeurl': 'MM',
        },

        DescLineCommandNode: {
            # additional items for ...desc
            'funcline': 'TM',
            'funclineni': 'TM',
            'methodline': 'QTM',
            'methodlineni': 'QTM',
            'memberline': 'QT',
            'memberlineni': 'QT',
            'dataline': 'T',
            'datalineni': 'T',
            'cfuncline': 'MTM',
            'cmemberline': 'TTT',
            'csimplemacroline': 'T',
            'ctypeline': 'QT',
            'cvarline': 'TT',
        },

        InlineNode: {
            # specials
            'footnote': 'M',
            'frac': 'TT',
            'refmodule': 'QT',
            'citetitle': 'QT',
            'ulink': 'MT',
            'url': 'M',

            # mapped to normal
            'textrm': 'M',
            'b': 'M',
            'email': 'M',  # email addresses are recognized by ReST

            # mapped to **strong**
            'textbf': 'M',
            'strong': 'M',

            # mapped to *emphasized*
            'textit': 'M',
            'emph': 'M',

            # mapped to ``code``
            'bfcode': 'M',
            'code': 'M',
            'samp': 'M',
            'character': 'M',
            'texttt': 'M',

            # mapped to `default role`
            'var': 'M',

            # mapped to [brackets]
            'optional': 'M',

            # mapped to :role:`text`
            'cdata': 'M',
            'cfunction': 'M',  # -> :cfunc:
            'class': 'M',
            'command': 'M',
            'constant': 'M',  # -> :const:
            'csimplemacro': 'M',  # -> :cmacro:
            'ctype': 'M',
            'data': 'M',  # NEW
            'dfn': 'M',
            'envvar': 'M',
            'exception': 'M',  # -> :exc:
            'file': 'M',
            'filenq': 'M',
            'filevar': 'M',
            'function': 'M',  # -> :func:
            'grammartoken': 'M',  # -> :token:
            'guilabel': 'M',
            'kbd': 'M',
            'keyword': 'M',
            'mailheader': 'M',
            'makevar': 'M',
            'manpage': 'MM',
            'member': 'M',
            'menuselection': 'M',
            'method': 'M',  # -> :meth:
            'mimetype': 'M',
            'module': 'M',  # -> :mod:
            'newsgroup': 'M',
            'option': 'M',
            'pep': 'M',
            'program': 'M',
            'programopt': 'M',  # -> :option:
            'longprogramopt': 'M',  # -> :option:
            'ref': 'T',
            'regexp': 'M',
            'rfc': 'M',
            'token': 'M',

            'NULL': '',
            # these are defined via substitutions
            'shortversion': '',
            'version': '',
            'today': '',
        },

        SimpleCmdNode: {
            # these are directly mapped to text
            'AA': '',  # A as in Angstrom
            'ASCII': '',
            'C': '',
            'Cpp': '',
            'EOF': '',
            'LaTeX': '',
            'POSIX': '',
            'UNIX': '',
            'Unix': '',
            'backslash': '',
            'copyright': '',
            'e': '',  # backslash
            'geq': '',
            'infinity': '',
            'ldots': '',
            'leq': '',
            'moreargs': '',
            'pi': '',
            'plusminus': '',
            'sub': '',  # menu separator
            'textbackslash': '',
            'textunderscore': '',
            'texteuro': '',
            'textasciicircum': '',
            'textasciitilde': '',
            'textgreater': '',
            'textless': '',
            'textbar': '',
            'tilde': '',
            'unspecified': '',
        },

        IndexNode: {
            'bifuncindex': 'T',
            'exindex': 'T',
            'kwindex': 'T',
            'obindex': 'T',
            'opindex': 'T',
            'refmodindex': 'T',
            'refexmodindex': 'T',
            'refbimodindex': 'T',
            'refstmodindex': 'T',
            'stindex': 'T',
            'index': 'M',
            'indexii': 'TT',
            'indexiii': 'TTT',
            'indexiv': 'TTTT',
            'ttindex': 'T',
            'withsubitem': 'TM',
        },

        # These can be safely ignored
        EmptyNode: {
            'setindexsubitem': 'T',
            'tableofcontents': '',
            'makeindex': '',
            'makemodindex': '',
            'maketitle': '',
            'appendix': '',
            'documentclass': 'OM',
            'usepackage': 'OM',
            'noindent': '',
            'protect': '',
            'ifhtml': '',
            'fi': '',
        },
    }

    generic_envs = {
        EnvironmentNode: {
            # generic LaTeX environments
            'abstract': '',
            'quote': '',
            'quotation': '',

            'notice': 'Q',
            'seealso': '',
            'seealso*': '',
        },

        DescEnvironmentNode: {
            # information units
            'datadesc': 'T',
            'datadescni': 'T',
            'excclassdesc': 'TM',
            'excdesc': 'T',
            'funcdesc': 'TM',
            'funcdescni': 'TM',
            'classdesc': 'TM',
            'classdesc*': 'T',
            'memberdesc': 'QT',
            'memberdescni': 'QT',
            'methoddesc': 'QMM',
            'methoddescni': 'QMM',
            'opcodedesc': 'TT',

            'cfuncdesc': 'MTM',
            'cmemberdesc': 'TTT',
            'csimplemacrodesc': 'T',
            'ctypedesc': 'QT',
            'cvardesc': 'TT',
        },
    }

    # ------------------------- special handlers -----------------------------

    def handle_special_command(self, cmdname):
        if cmdname in '{}%$^#&_ ':
            # these are just escapes for special LaTeX commands
            return TextNode(cmdname)
        elif cmdname in '\'`~"c':
            # accents and umlauts
            nextl, nextt, nextv, nextr = self.tokens.next()
            if nextt == 'bgroup':
                _, nextt, _, _ = self.tokens.next()
                if nextt != 'egroup':
                    raise ParserError('wrong argtype for \\%s' % cmdname, nextl)
                return TextNode(cmdname)
            if nextt != 'text':
                # not nice, but {\~} = ~
                self.tokens.push((nextl, nextt, nextv, nextr))
                return TextNode(cmdname)
            c = umlaut(cmdname, nextv[0])
            self.tokens.push((nextl, nextt, nextv[1:], nextr[1:]))
            return TextNode(c)
        elif cmdname == '\\':
            return BreakNode()
        raise ParserError('no handler for \\%s command' % cmdname,
                          self.tokens.peek()[0])

    def handle_begin(self):
        envname, = self.parse_args('begin', 'T')
        handler = getattr(self, 'handle_%s_env' % envname.text, None)
        if not handler:
            raise ParserError('no handler for %s environment' % envname.text,
                              self.tokens.peek()[0])
        return handler()

    # ------------------------- command handlers -----------------------------

    def mk_metadata_handler(self, name, mdname=None):
        if mdname is None:
            mdname = name
        def handler(self):
            data, = self.parse_args('\\'+name, 'M')
            self.rootnode.params[mdname] = data
            return EmptyNode()
        return handler

    handle_title = mk_metadata_handler(None, 'title')
    handle_author = mk_metadata_handler(None, 'author')
    handle_authoraddress = mk_metadata_handler(None, 'authoraddress')
    handle_date = mk_metadata_handler(None, 'date')
    handle_release = mk_metadata_handler(None, 'release')
    handle_setshortversion = mk_metadata_handler(None, 'setshortversion',
                                                 'shortversion')
    handle_setreleaseinfo = mk_metadata_handler(None, 'setreleaseinfo',
                                                'releaseinfo')

    def handle_note(self):
        note = self.parse_args('\\note', 'M')[0]
        return EnvironmentNode('notice', [TextNode('note')], note)

    def handle_warning(self):
        warning = self.parse_args('\\warning', 'M')[0]
        return EnvironmentNode('notice', [TextNode('warning')], warning)

    def handle_ifx(self):
        for l, t, v, r in self.tokens:
            if t == 'command' and v == 'fi':
                break
        return EmptyNode()

    def handle_c(self):
        return self.handle_special_command('c')

    def handle_mbox(self):
        return self.parse_args('\\mbox', 'M')[0]

    def handle_leftline(self):
        return self.parse_args('\\leftline', 'M')[0]

    def handle_Large(self):
        return self.parse_args('\\Large', 'M')[0]

    def handle_pytype(self):
        # \pytype{x} is synonymous to \class{x} now
        return self.handle_class()

    def handle_nodename(self):
        return self.handle_label()

    def handle_verb(self):
        # skip delimiter
        l, t, v, r = self.tokens.next()
        l, t, v, r = self.tokens.next()
        assert t == 'text'
        node = InlineNode('code', [TextNode(r)])
        # skip delimiter
        l, t, v, r = self.tokens.next()
        return node

    def handle_locallinewidth(self):
        return EmptyNode()

    def handle_linewidth(self):
        return EmptyNode()

    def handle_setlength(self):
        self.parse_args('\\setlength', 'MM')
        return EmptyNode()

    def handle_stmodindex(self):
        arg, = self.parse_args('\\stmodindex', 'T')
        return CommandNode('declaremodule', [EmptyNode(),
                                             TextNode(u'standard'),
                                             arg])

    def handle_indexname(self):
        return EmptyNode()

    def handle_renewcommand(self):
        self.parse_args('\\renewcommand', 'MM')
        return EmptyNode()

    # ------------------------- environment handlers -------------------------

    def handle_document_env(self):
        return self.parse_until(self.environment_end)

    handle_sloppypar_env = handle_document_env
    handle_flushleft_env = handle_document_env
    handle_math_env = handle_document_env

    def handle_verbatim_env(self):
        text = []
        for l, t, v, r in self.tokens:
            if t == 'command' and v == 'end':
                tok = self.tokens.peekmany(3)
                if tok[0][1] == 'bgroup' and \
                   tok[1][1] == 'text' and \
                   tok[1][2] == 'verbatim' and \
                   tok[2][1] == 'egroup':
                    self.tokens.popmany(3)
                    break
            text.append(r)
        return VerbatimNode(TextNode(''.join(text)))

    # involved math markup must be corrected manually
    def handle_displaymath_env(self):
        text = ['XXX: translate this math']
        for l, t, v, r in self.tokens:
            if t == 'command' and v == 'end':
                tok = self.tokens.peekmany(3)
                if tok[0][1] == 'bgroup' and \
                   tok[1][1] == 'text' and \
                   tok[1][2] == 'displaymath' and \
                   tok[2][1] == 'egroup':
                    self.tokens.popmany(3)
                    break
            text.append(r)
        return VerbatimNode(TextNode(''.join(text)))

    # alltt is different from verbatim because it allows markup
    def handle_alltt_env(self):
        nodelist = NodeList()
        for l, t, v, r in self.tokens:
            if self.environment_end(t, v):
                break
            if t == 'command':
                if len(v) == 1 and not v.isalpha():
                    nodelist.append(self.handle_special_command(v))
                    continue
                handler = getattr(self, 'handle_' + v, None)
                if not handler:
                    raise ParserError('no handler for \\%s command' % v, l)
                nodelist.append(handler())
            elif t == 'comment':
                nodelist.append(CommentNode(v))
            else:
                # all else is appended raw
                nodelist.append(TextNode(r))
        return VerbatimNode(nodelist.flatten())

    def handle_itemize_env(self, nodetype=ItemizeNode):
        items = []
        # a usecase for nonlocal :)
        running = [False]

        def item_condition(t, v, bracelevel):
            if self.environment_end(t, v):
                del running[:]
                return True
            if t == 'command' and v == 'item':
                return True
            return False

        # the text until the first \item is discarded
        self.parse_until(item_condition)
        while running:
            itemname, = self.parse_args('\\item', 'O')
            itemcontent = self.parse_until(item_condition)
            items.append([itemname, itemcontent])
        return nodetype(items)

    def handle_enumerate_env(self):
        return self.handle_itemize_env(EnumerateNode)

    def handle_description_env(self):
        return self.handle_itemize_env(DescriptionNode)

    def handle_definitions_env(self):
        items = []
        running = [False]

        def item_condition(t, v, bracelevel):
            if self.environment_end(t, v):
                del running[:]
                return True
            if t == 'command' and v == 'term':
                return True
            return False

        # the text until the first \term is discarded
        self.parse_until(item_condition)
        while running:
            itemname, = self.parse_args('\\term', 'M')
            itemcontent = self.parse_until(item_condition)
            items.append([itemname, itemcontent])
        return DefinitionsNode(items)

    def mk_table_handler(self, envname, numcols):
        def handle_table(self):
            args = self.parse_args('table'+envname, 'TT' + 'M'*numcols)
            firstcolformat = args[1].text
            headings = args[2:]
            lines = []
            for l, t, v, r in self.tokens:
                # XXX: everything outside of \linexxx is lost here
                if t == 'command':
                    if v == 'line'+envname:
                        lines.append(self.parse_args('\\line'+envname,
                                                     'M'*numcols))
                    elif v == 'end':
                        arg = self.parse_args('\\end', 'T')
                        assert arg[0].text.endswith('table'+envname), arg[0].text
                        break
            for line in lines:
                if not empty(line[0]):
                    line[0] = InlineNode(firstcolformat, [line[0]])
            return TableNode(numcols, headings, lines)
        return handle_table

    handle_tableii_env = mk_table_handler(None, 'ii', 2)
    handle_longtableii_env = handle_tableii_env
    handle_tableiii_env = mk_table_handler(None, 'iii', 3)
    handle_longtableiii_env = handle_tableiii_env
    handle_tableiv_env = mk_table_handler(None, 'iv', 4)
    handle_longtableiv_env = handle_tableiv_env
    handle_tablev_env = mk_table_handler(None, 'v', 5)
    handle_longtablev_env = handle_tablev_env

    def handle_productionlist_env(self):
        env_args = self.parse_args('productionlist', 'Q')
        items = []
        for l, t, v, r in self.tokens:
            # XXX: everything outside of \production is lost here
            if t == 'command':
                if v == 'production':
                    items.append(self.parse_args('\\production', 'TM'))
                elif v == 'productioncont':
                    args = self.parse_args('\\productioncont', 'M')
                    args.insert(0, EmptyNode())
                    items.append(args)
                elif v == 'end':
                    arg = self.parse_args('\\end', 'T')
                    assert arg[0].text == 'productionlist'
                    break
        node = ProductionListNode(items)
        # the argument specifies a production group
        node.arg = env_args[0]
        return node

    def environment_end(self, t, v, bracelevel=0):
        if t == 'command' and v == 'end':
            self.parse_args('\\end', 'T')
            return True
        return False
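The table-driven design in latexparser.py (generic_commands/generic_envs plus DocParserMeta) means each LaTeX command gets a generated handle_* method instead of a hand-written one. A toy, self-contained sketch of that idea (not an import of the converter; it uses a plain class-level loop with `setattr` rather than a metaclass, and the Node class and command table are invented for illustration):

```python
# Toy sketch of the handler-generation idea: each (command name, argspec)
# pair in a table becomes a handle_* method created once, up front.
class Node:
    def __init__(self, name, argspec):
        self.name = name
        self.argspec = argspec

def generic_command(name, argspec):
    def handle(self):
        # a real handler would consume `argspec` worth of arguments from
        # self.tokens; the sketch only records what it would parse
        return Node(name, argspec)
    return handle

class ToyParser:
    generic_commands = {'label': 'T', 'versionadded': 'OT'}

for cmdname, argspec in ToyParser.generic_commands.items():
    setattr(ToyParser, 'handle_' + cmdname, generic_command(cmdname, argspec))

node = ToyParser().handle_versionadded()
print(node.name, node.argspec)  # -> versionadded OT
```

The closure over `name` and `argspec` is what lets one factory serve every entry in the table.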
18
converter/newfiles/TODO
Normal file
@@ -0,0 +1,18 @@
To do after conversion
======================

* fix all references and links marked with `XXX`
* adjust all literal include paths
* remove all non-literal includes
* fix all duplicate labels and undefined label references
* fix the email package docs: add a toctree
* split very large files and add toctrees
* integrate standalone HOWTOs
* find out which files get "comments disabled" metadata
* double backslashes in production lists
* add synopses for each module
* write "About these documents"
* finish "Documenting Python"
* extend copyright.rst
* merge ACKS into about.rst
* fix the "quadruple" index term
16
converter/newfiles/about.rst
Normal file
@@ -0,0 +1,16 @@
=====================
About these documents
=====================

These documents are generated from `reStructuredText
<http://docutils.sf.net/rst.html>`_ sources by *Sphinx*, a document processor
written specifically for the Python documentation.

In the online version of these documents, you can submit comments and suggest
changes directly on the documentation pages.

Development of the documentation and its toolchain takes place on the
docs@python.org mailing list.  We're always looking for volunteers who want
to help with the docs, so feel free to send a mail there!

See :ref:`reporting-bugs` for information on how to report bugs in Python
itself.
33
converter/newfiles/api_index.rst
Normal file
@@ -0,0 +1,33 @@
.. _c-api-index:

##################################
  Python/C API Reference Manual
##################################

:Release: |version|
:Date: |today|

This manual documents the API used by C and C++ programmers who want to write
extension modules or embed Python.  It is a companion to :ref:`extending-index`,
which describes the general principles of extension writing but does not
document the API functions in detail.

.. warning::

   The current version of this document is somewhat incomplete.  However, most
   of the important functions, types and structures are described.


.. toctree::
   :maxdepth: 2

   intro.rst
   veryhigh.rst
   refcounting.rst
   exceptions.rst
   utilities.rst
   abstract.rst
   concrete.rst
   init.rst
   memory.rst
   newtypes.rst
41
converter/newfiles/conf.py
Normal file
@@ -0,0 +1,41 @@
# -*- coding: utf-8 -*-
#
# Python documentation build configuration file
#
# The contents of this file are pickled, so don't put values in the namespace
# that aren't pickleable (module imports are okay, they're removed
# automatically).
#

# The default replacements for |version| and |release|:
# The short X.Y version.
version = '2.6'
# The full version, including alpha/beta/rc tags.
release = '2.6a0'

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
today = ''
# Else, today_fmt is used as the format for a strftime call.
today_fmt = '%B %d, %Y'

# List of files that shouldn't be included in the build.
unused_files = [
    'whatsnew/2.0.rst',
    'whatsnew/2.1.rst',
    'whatsnew/2.2.rst',
    'whatsnew/2.3.rst',
    'whatsnew/2.4.rst',
    'whatsnew/2.5.rst',
    'macmodules/scrap.rst',
    'modules/xmllib.rst',
]

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
last_updated_format = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
use_smartypants = True

# If true, trailing '()' will be stripped from :func: etc. cross-references.
strip_trailing_parentheses = False
21
converter/newfiles/contents.rst
Normal file
@@ -0,0 +1,21 @@
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
  Python Documentation contents
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

.. toctree::

   whatsnew/2.6.rst
   tutorial/index.rst
   reference/index.rst
   modules/index.rst
   macmodules/index.rst
   extending/index.rst
   c-api/index.rst
   distutils/index.rst
   install/index.rst
   documenting/index.rst

   bugs.rst
   about.rst
   license.rst
   copyright.rst
28
converter/newfiles/dist_index.rst
Normal file
@@ -0,0 +1,28 @@
.. _distutils-index:

###############################
  Distributing Python Modules
###############################

:Release: |version|
:Date: |today|

This document describes the Python Distribution Utilities ("Distutils") from
the module developer's point of view, describing how to use the Distutils to
make Python modules and extensions easily available to a wider audience with
very little overhead for build/release/install mechanics.

.. toctree::
   :maxdepth: 2

   introduction.rst
   setupscript.rst
   configfile.rst
   sourcedist.rst
   builtdist.rst
   packageindex.rst
   uploading.rst
   examples.rst
   extending.rst
   commandref.rst
   apiref.rst
33
converter/newfiles/doc.rst
Normal file
@@ -0,0 +1,33 @@
.. _documenting-index:

######################
  Documenting Python
######################


The Python language has a substantial body of documentation, much of it
contributed by various authors.  The markup used for the Python documentation is
`reStructuredText`_, developed by the `docutils`_ project, amended by custom
directives and using a toolset named *Sphinx* to postprocess the HTML output.

This document describes the style guide for our documentation, the custom
reStructuredText markup introduced to support Python documentation and how it
should be used, as well as the Sphinx build system.

.. _reStructuredText: http://docutils.sf.net/rst.html
.. _docutils: http://docutils.sf.net/

If you're interested in contributing to Python's documentation, there's no need
to write reStructuredText if you're not so inclined; plain text contributions
are more than welcome as well.

.. toctree::

   intro.rst
   style.rst
   rest.rst
   markup.rst
   sphinx.rst

.. XXX add credits, thanks etc.
29
converter/newfiles/doc_intro.rst
Normal file
@@ -0,0 +1,29 @@
Introduction
============

Python's documentation has long been considered to be good for a free
programming language.  There are a number of reasons for this, the most
important being the early commitment of Python's creator, Guido van Rossum, to
providing documentation on the language and its libraries, and the continuing
involvement of the user community in providing assistance for creating and
maintaining documentation.

The involvement of the community takes many forms, from authoring to bug reports
to just plain complaining when the documentation could be more complete or
easier to use.

This document is aimed at authors and potential authors of documentation for
Python.  More specifically, it is for people contributing to the standard
documentation and developing additional documents using the same tools as the
standard documents.  This guide will be less useful for authors using the Python
documentation tools for topics other than Python, and less useful still for
authors not using the tools at all.

If your interest is in contributing to the Python documentation, but you don't
have the time or inclination to learn reStructuredText and the markup structures
documented here, there's a welcoming place for you among the Python contributors
as well.  Any time you feel that you can clarify existing documentation or
provide documentation that's missing, the existing documentation team will
gladly work with you to integrate your text, dealing with the markup for you.
Please don't let the material in this document stand between the documentation
and your desire to help out!
738
converter/newfiles/doc_markup.rst
Normal file
@@ -0,0 +1,738 @@
.. highlightlang:: rest

Additional Markup Constructs
============================

Sphinx adds a lot of new directives and interpreted text roles to standard reST
markup.  This section contains the reference material for these facilities.
Documentation for "standard" reST constructs is not included here, though
they are used in the Python documentation.

.. XXX: file-wide metadata

Meta-information markup
-----------------------

.. describe:: sectionauthor

   Identifies the author of the current section.  The argument should include
   the author's name such that it can be used for presentation (though it isn't)
   and email address.  The domain name portion of the address should be lower
   case.  Example::

      .. sectionauthor:: Guido van Rossum <guido@python.org>

   Currently, this markup isn't reflected in the output in any way, but it helps
   keep track of contributions.
Module-specific markup
----------------------

The markup described in this section is used to provide information about a
module being documented.  Each module should be documented in its own file.
Normally this markup appears after the title heading of that file; a typical
file might start like this::

   :mod:`parrot` -- Dead parrot access
   ===================================

   .. module:: parrot
      :platform: Unix, Windows
      :synopsis: Analyze and reanimate dead parrots.
   .. moduleauthor:: Eric Cleese <eric@python.invalid>
   .. moduleauthor:: John Idle <john@python.invalid>

As you can see, the module-specific markup consists of two directives, the
``module`` directive and the ``moduleauthor`` directive.

.. describe:: module

   This directive marks the beginning of the description of a module (or package
   submodule, in which case the name should be fully qualified, including the
   package name).

   The ``platform`` option, if present, is a comma-separated list of the
   platforms on which the module is available (if it is available on all
   platforms, the option should be omitted).  The keys are short identifiers;
   examples that are in use include "IRIX", "Mac", "Windows", and "Unix".  It is
   important to use a key which has already been used when applicable.

   The ``synopsis`` option should consist of one sentence describing the
   module's purpose -- it is currently only used in the Global Module Index.

.. describe:: moduleauthor

   The ``moduleauthor`` directive, which can appear multiple times, names the
   authors of the module code, just like ``sectionauthor`` names the author(s)
   of a piece of documentation.  It too does not result in any output currently.


.. note::

   It is important to make the section title of a module-describing file
   meaningful since that value will be inserted in the table-of-contents trees
   in overview files.
Information units
-----------------

There are a number of directives used to describe specific features provided by
modules.  Each directive requires one or more signatures to provide basic
information about what is being described, and the content should be the
description.  The basic version makes entries in the general index; if no index
entry is desired, you can give the directive option flag ``:noindex:``.  The
following example shows all of the features of this directive type::

   .. function:: spam(eggs)
                 ham(eggs)
      :noindex:

      Spam or ham the foo.

The signatures of object methods or data attributes should always include the
type name (``.. method:: FileInput.input(...)``), even if it is obvious from the
context which type they belong to; this is to enable consistent
cross-references.  If you describe methods belonging to an abstract protocol,
such as "context managers", include a (pseudo-)type name too to make the
index entries more informative.

The directives are:
.. describe:: cfunction

   Describes a C function.  The signature should be given as in C, e.g.::

      .. cfunction:: PyObject* PyType_GenericAlloc(PyTypeObject *type, Py_ssize_t nitems)

   This is also used to describe function-like preprocessor macros.  The names
   of the arguments should be given so they may be used in the description.

   Note that you don't have to backslash-escape asterisks in the signature,
   as it is not parsed by the reST inliner.

.. describe:: cmember

   Describes a C struct member.  Example signature::

      .. cmember:: PyObject* PyTypeObject.tp_bases

   The text of the description should include the range of values allowed, how
   the value should be interpreted, and whether the value can be changed.
   References to structure members in text should use the ``member`` role.

.. describe:: cmacro

   Describes a "simple" C macro.  Simple macros are macros which are used
   for code expansion, but which do not take arguments so cannot be described as
   functions.  This is not to be used for simple constant definitions.  Examples
   of its use in the Python documentation include :cmacro:`PyObject_HEAD` and
   :cmacro:`Py_BEGIN_ALLOW_THREADS`.

.. describe:: ctype

   Describes a C type.  The signature should just be the type name.

.. describe:: cvar

   Describes a global C variable.  The signature should include the type, such
   as::

      .. cvar:: PyObject* PyClass_Type
.. describe:: data

   Describes global data in a module, including both variables and values used
   as "defined constants."  Class and object attributes are not documented
   using this environment.

.. describe:: exception

   Describes an exception class.  The signature can, but need not include
   parentheses with constructor arguments.

.. describe:: function

   Describes a module-level function.  The signature should include the
   parameters, enclosing optional parameters in brackets.  Default values can be
   given if it enhances clarity.  For example::

      .. function:: Timer.repeat([repeat=3[, number=1000000]])

   Object methods are not documented using this directive.  Bound object methods
   placed in the module namespace as part of the public interface of the module
   are documented using this, as they are equivalent to normal functions for
   most purposes.

   The description should include information about the parameters required and
   how they are used (especially whether mutable objects passed as parameters
   are modified), side effects, and possible exceptions.  A small example may be
   provided.

.. describe:: class

   Describes a class.  The signature can include parentheses with parameters
   which will be shown as the constructor arguments.

.. describe:: attribute

   Describes an object data attribute.  The description should include
   information about the type of the data to be expected and whether it may be
   changed directly.

.. describe:: method

   Describes an object method.  The parameters should not include the ``self``
   parameter.  The description should include similar information to that
   described for ``function``.

.. describe:: opcode

   Describes a Python bytecode instruction.
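Taken together, a module file might document a class and its members with these
directives like this (a hypothetical sketch; the names are made up for
illustration)::

   .. class:: Parrot(name)

      Create a parrot called *name*.

   .. method:: Parrot.voom(volts)

      Apply *volts* volts to the parrot.

   .. attribute:: Parrot.plumage

      A string describing the parrot's plumage.

Note how the method and attribute signatures repeat the type name, as required
above.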
There is also a generic version of these directives:

.. describe:: describe

   This directive produces the same formatting as the specific ones explained
   above but does not create index entries or cross-referencing targets.  It is
   used, for example, to describe the directives in this document.  Example::

      .. describe:: opcode

         Describes a Python bytecode instruction.


Showing code examples
---------------------

Examples of Python source code or interactive sessions are represented using
standard reST literal blocks.  They are started by a ``::`` at the end of the
preceding paragraph and delimited by indentation.

Representing an interactive session requires including the prompts and output
along with the Python code.  No special markup is required for interactive
sessions.  After the last line of input or output presented, there should not be
an "unused" primary prompt; this is an example of what *not* to do::

   >>> 1 + 1
   2
   >>>
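The same session, written correctly, simply ends with the last line of
output::

   >>> 1 + 1
   2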
Syntax highlighting is handled in a smart way:

* There is a "highlighting language" for each source file.  Per default,
  this is ``'python'`` as the majority of files will have to highlight Python
  snippets.

* Within Python highlighting mode, interactive sessions are recognized
  automatically and highlighted appropriately.

* The highlighting language can be changed using the ``highlightlang``
  directive, used as follows::

     .. highlightlang:: c

  This language is used until the next ``highlightlang`` directive is
  encountered.

* The valid values for the highlighting language are:

  * ``python`` (the default)
  * ``c``
  * ``rest``
  * ``none`` (no highlighting)

* If highlighting with the current language fails, the block is not highlighted
  in any way.

Longer displays of verbatim text may be included by storing the example text in
an external file containing only plain text.  The file may be included using the
standard ``include`` directive with the ``literal`` option flag.  For example,
to include the Python source file :file:`example.py`, use::

   .. include:: example.py
      :literal:
Inline markup
-------------

As said before, Sphinx uses interpreted text roles to insert semantic markup in
documents.

The default role is ``var``, as that was one of the most common macros used in
the old LaTeX docs.  That means that you can use ```var``` to refer to a
variable named "var".

For all other roles, you have to write ``:rolename:`content```.

The following roles refer to objects in modules and are possibly hyperlinked if
a matching identifier is found:

.. describe:: mod

   The name of a module; a dotted name may be used.  This should also be used
   for package names.

.. describe:: func

   The name of a Python function; dotted names may be used.  The role text
   should include trailing parentheses to enhance readability.  The parentheses
   are stripped when searching for identifiers.

.. describe:: data

   The name of a module-level variable.

.. describe:: const

   The name of a "defined" constant.  This may be a C-language ``#define``
   or a Python variable that is not intended to be changed.

.. describe:: class

   A class name; a dotted name may be used.

.. describe:: meth

   The name of a method of an object.  The role text should include the type
   name, method name and the trailing parentheses.  A dotted name may be used.

.. describe:: attr

   The name of a data attribute of an object.

.. describe:: exc

   The name of an exception.  A dotted name may be used.
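In running text, these roles might be used like this (the module and object
names here are made up for illustration)::

   The :mod:`parrot` module defines the function :func:`parrot.voom()`, which
   raises :exc:`parrot.VoltageError` when the parrot is no more.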
The name enclosed in this markup can include a module name and/or a class name.
For example, ``:func:`filter``` could refer to a function named ``filter`` in
the current module, or the built-in function of that name.  In contrast,
``:func:`foo.filter``` clearly refers to the ``filter`` function in the ``foo``
module.

A similar heuristic is used to determine whether the name is an attribute of
the currently documented class.

The following roles create cross-references to C-language constructs if they
are defined in the API documentation:

.. describe:: cdata

   The name of a C-language variable.

.. describe:: cfunc

   The name of a C-language function.  Should include trailing parentheses.

.. describe:: cmacro

   The name of a "simple" C macro, as defined above.

.. describe:: ctype

   The name of a C-language type.
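A sentence in the C API manual might thus be marked up as follows (the two
identifiers are real C API names, though the sentence itself is only an
illustration)::

   Call :cfunc:`PyObject_CallObject()` to invoke a callable; it returns a new
   reference to a :ctype:`PyObject`.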
The following role does possibly create a cross-reference, but does not refer
to objects:

.. describe:: token

   The name of a grammar token (used in the reference manual to create links
   between production displays).

---------

The following roles don't do anything special except formatting the text
in a different style:

.. describe:: command

   The name of an OS-level command, such as ``rm``.

.. describe:: dfn

   Mark the defining instance of a term in the text.  (No index entries are
   generated.)

.. describe:: envvar

   An environment variable.  Index entries are generated.

.. describe:: file

   The name of a file or directory.

.. XXX: filenq, filevar

.. describe:: guilabel

   Labels presented as part of an interactive user interface should be marked
   using ``guilabel``.  This includes labels from text-based interfaces such as
   those created using :mod:`curses` or other text-based libraries.  Any label
   used in the interface should be marked with this role, including button
   labels, window titles, field names, menu and menu selection names, and even
   values in selection lists.

.. describe:: kbd

   Mark a sequence of keystrokes.  What form the key sequence takes may depend
   on platform- or application-specific conventions.  When there are no relevant
   conventions, the names of modifier keys should be spelled out, to improve
   accessibility for new users and non-native speakers.  For example, an
   *xemacs* key sequence may be marked like ``:kbd:`C-x C-f```, but without
   reference to a specific application or platform, the same sequence should be
   marked as ``:kbd:`Control-x Control-f```.

.. describe:: keyword

   The name of a keyword in a programming language.

.. describe:: mailheader

   The name of an RFC 822-style mail header.  This markup does not imply that
   the header is being used in an email message, but can be used to refer to any
   header of the same "style."  This is also used for headers defined by the
   various MIME specifications.  The header name should be entered in the same
   way it would normally be found in practice, with the camel-casing conventions
   being preferred where there is more than one common usage.  For example:
   ``:mailheader:`Content-Type```.

.. describe:: makevar

   The name of a :command:`make` variable.

.. describe:: manpage

   A reference to a Unix manual page including the section,
   e.g. ``:manpage:`ls(1)```.

.. describe:: menuselection

   Menu selections should be marked using the ``menuselection`` role.  This is
   used to mark a complete sequence of menu selections, including selecting
   submenus and choosing a specific operation, or any subsequence of such a
   sequence.  The names of individual selections should be separated by
   ``-->``.

   For example, to mark the selection "Start > Programs", use this markup::

      :menuselection:`Start --> Programs`

   When including a selection that includes some trailing indicator, such as the
   ellipsis some operating systems use to indicate that the command opens a
   dialog, the indicator should be omitted from the selection name.

.. describe:: mimetype

   The name of a MIME type, or a component of a MIME type (the major or minor
   portion, taken alone).

.. describe:: newsgroup

   The name of a Usenet newsgroup.

.. describe:: option

   A command-line option to an executable program.  The leading hyphen(s) must
   be included.

.. describe:: program

   The name of an executable program.  This may differ from the file name for
   the executable for some platforms.  In particular, the ``.exe`` (or other)
   extension should be omitted for Windows programs.

.. describe:: regexp

   A regular expression.  Quotes should not be included.

.. describe:: var

   A Python or C variable or parameter name.
The following roles generate external links:

.. describe:: pep

   A reference to a Python Enhancement Proposal.  This generates appropriate
   index entries.  The text "PEP *number*\ " is generated; in the HTML output,
   this text is a hyperlink to an online copy of the specified PEP.

.. describe:: rfc

   A reference to an Internet Request for Comments.  This generates appropriate
   index entries.  The text "RFC *number*\ " is generated; in the HTML output,
   this text is a hyperlink to an online copy of the specified RFC.
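Both roles take the bare number as their content; for example::

   The style rules are laid out in :pep:`8`; mail headers are defined by
   :rfc:`822`.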
Note that there are no special roles for including hyperlinks as you can use
the standard reST markup for that purpose.


.. _doc-ref-role:

Cross-linking markup
--------------------

To support cross-referencing to arbitrary sections in the documentation, the
standard reST labels are "abused" a bit: Every label must precede a section
title; and every label name must be unique throughout the entire documentation
source.

You can then refer to these sections using the ``:ref:`label-name``` role.

Example::

   .. _my-reference-label:

   Section to cross-reference
   --------------------------

   This is the text of the section.

   It refers to the section itself, see :ref:`my-reference-label`.

The ``:ref:`` invocation is replaced with the section title.
Paragraph-level markup
----------------------

These directives create short paragraphs and can be used inside information
units as well as normal text:

.. describe:: note

   An especially important bit of information about an API that a user should be
   aware of when using whatever bit of API the note pertains to.  The content of
   the directive should be written in complete sentences and include all
   appropriate punctuation.

   Example::

      .. note::

         This function is not suitable for sending spam e-mails.

.. describe:: warning

   An important bit of information about an API that a user should be very aware
   of when using whatever bit of API the warning pertains to.  The content of
   the directive should be written in complete sentences and include all
   appropriate punctuation.  This differs from ``note`` in that it is
   recommended over ``note`` for information regarding security.

.. describe:: versionadded

   This directive documents the version of Python which added the described
   feature to the library or C API.  When this applies to an entire module, it
   should be placed at the top of the module section before any prose.

   The first argument must be given and is the version in question; you can add
   a second argument consisting of a *brief* explanation of the change.

   Example::

      .. versionadded:: 2.5
         The `spam` parameter.

   Note that there must be no blank line between the directive head and the
   explanation; this is to make these blocks visually continuous in the markup.

.. describe:: versionchanged

   Similar to ``versionadded``, but describes when and what changed in the named
   feature in some way (new parameters, changed side effects, etc.).

--------------

.. describe:: seealso

   Many sections include a list of references to module documentation or
   external documents.  These lists are created using the ``seealso`` directive.

   The ``seealso`` directive is typically placed in a section just before any
   sub-sections.  For the HTML output, it is shown boxed off from the main flow
   of the text.

   The content of the ``seealso`` directive should be a reST definition list.
   Example::

      .. seealso::

         Module :mod:`zipfile`
            Documentation of the :mod:`zipfile` standard module.

         `GNU tar manual, Basic Tar Format <http://link>`_
            Documentation for tar archive files, including GNU tar extensions.

.. describe:: rubric

   This directive creates a paragraph heading that is not used to create a
   table of contents node.  It is currently used for the "Footnotes" caption.

.. describe:: centered

   This directive creates a centered boldfaced paragraph.  Use it as follows::

      .. centered::

         Paragraph contents.
Table-of-contents markup
------------------------

Since reST does not have facilities to interconnect several documents, or split
documents into multiple output files, Sphinx uses a custom directive to add
relations between the single files the documentation is made of, as well as
tables of contents.  The ``toctree`` directive is the central element.

.. describe:: toctree

   This directive inserts a "TOC tree" at the current location, using the
   individual TOCs (including "sub-TOC trees") of the files given in the
   directive body.  A numeric ``maxdepth`` option may be given to indicate the
   depth of the tree; by default, all levels are included.

   Consider this example (taken from the library reference index)::

      .. toctree::
         :maxdepth: 2

         intro.rst
         strings.rst
         datatypes.rst
         numeric.rst
         (many more files listed here)

   This accomplishes two things:

   * Tables of contents from all those files are inserted, with a maximum depth
     of two, which means one nested heading.  ``toctree`` directives in those
     files are also taken into account.
   * Sphinx knows the relative order of the files ``intro.rst``,
     ``strings.rst`` and so forth, and it knows that they are children of the
     shown file, the library index.  From this information it generates "next
     chapter", "previous chapter" and "parent chapter" links.

   In the end, all files included in the build process must occur in one
   ``toctree`` directive; Sphinx will emit a warning if it finds a file that is
   not included, because that means that this file will not be reachable through
   standard navigation.

   The special file ``contents.rst`` at the root of the source directory is the
   "root" of the TOC tree hierarchy; from it the "Contents" page is generated.
Index-generating markup
|
||||
-----------------------
|
||||
|
||||
Sphinx automatically creates index entries from all information units (like
|
||||
functions, classes or attributes) like discussed before.
|
||||
|
||||
However, there is also an explicit directive available, to make the index more
|
||||
comprehensive and enable index entries in documents where information is not
|
||||
mainly contained in information units, such as the language reference.
|
||||
|
||||
The directive is ``index`` and contains one or more index entries. Each entry
|
||||
consists of a type and a value, separated by a colon.
|
||||
|
||||
For example::
|
||||
|
||||
.. index::
|
||||
single: execution!context
|
||||
module: __main__
|
||||
module: sys
|
||||
triple: module; search; path
|
||||
|
||||
This directive contains five entries, which will be converted to entries in the
|
||||
generated index which link to the exact location of the index statement (or, in
|
||||
case of offline media, the corresponding page number).
|
||||
|
||||
The possible entry types are:
|
||||
|
||||
single
|
||||
Creates a single index entry. Can be made a subentry by separating the
|
||||
subentry text with a semicolon (this is also used below to describe what
|
||||
entries are created).
|
||||
pair
|
||||
``pair: loop; statement`` is a shortcut that creates two index entries,
|
||||
namely ``loop; statement`` and ``statement; loop``.
|
||||
triple
|
||||
Likewise, ``triple: module; search; path`` is a shortcut that creates three
|
||||
index entries, which are ``module; search path``, ``search; path, module`` and
|
||||
``path; module search``.
|
||||
module, keyword, operator, object, exception, statement, builtin
|
||||
These all create two index entries. For example, ``module: hashlib`` creates
|
||||
the entries ``module; hashlib`` and ``hashlib; module``.
|
||||
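The expansion rules above can be sketched in a few lines of Python.  This is
an illustrative helper, not part of the converter; the function name
``expand_entry`` is made up for this example.

```python
def expand_entry(entrytype, value):
    # Split "a; b; c" style values into their parts.
    parts = [p.strip() for p in value.split(';')]
    if entrytype == 'single':
        # A single entry is used as-is (subentries keep their separator).
        return [value.strip()]
    if entrytype == 'pair':
        first, second = parts
        return ['%s; %s' % (first, second), '%s; %s' % (second, first)]
    if entrytype == 'triple':
        a, b, c = parts
        # Three rotated entries, as described in the text above.
        return ['%s; %s %s' % (a, b, c),
                '%s; %s, %s' % (b, c, a),
                '%s; %s %s' % (c, a, b)]
    # module, keyword, operator, ... behave like a pair with the type name.
    return ['%s; %s' % (entrytype, value.strip()),
            '%s; %s' % (value.strip(), entrytype)]
```

For instance, ``expand_entry('module', 'hashlib')`` yields the two entries
listed above for the ``module`` type.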

Grammar production displays
---------------------------

Special markup is available for displaying the productions of a formal grammar.
The markup is simple and does not attempt to model all aspects of BNF (or any
derived forms), but provides enough to allow context-free grammars to be
displayed in a way that causes uses of a symbol to be rendered as hyperlinks to
the definition of the symbol.  There is this directive:

.. describe:: productionlist

   This directive is used to enclose a group of productions.  Each production
   is given on a single line and consists of a name, separated by a colon from
   the following definition.  If the definition spans multiple lines, each
   continuation line must begin with a colon placed at the same column as in
   the first line.

   Blank lines are not allowed within ``productionlist`` directive arguments.

   The definition can contain token names which are marked as interpreted text
   (e.g. ``sum ::= `integer` "+" `integer```) -- this generates
   cross-references to the productions of these tokens.  Note that vertical
   bars used to indicate alternatives must be escaped with backslashes because
   they would otherwise indicate a substitution reference to the reST parser.

.. XXX describe optional first parameter

The following is an example taken from the Python Reference Manual::

   .. productionlist::
      try_stmt: try1_stmt \| try2_stmt
      try1_stmt: "try" ":" :token:`suite`
               : ("except" [:token:`expression` ["," :token:`target`]] ":" :token:`suite`)+
               : ["else" ":" :token:`suite`]
               : ["finally" ":" :token:`suite`]
      try2_stmt: "try" ":" :token:`suite`
               : "finally" ":" :token:`suite`


Substitutions
-------------

The documentation system provides three substitutions that are defined by
default.  They are set in the build configuration file; see
:ref:`doc-build-config`.

.. describe:: |release|

   Replaced by the Python release the documentation refers to.  This is the
   full version string including alpha/beta/release candidate tags, e.g.
   ``2.5.2b3``.

.. describe:: |version|

   Replaced by the Python version the documentation refers to.  This consists
   only of the major and minor version parts, e.g. ``2.5``, even for version
   2.5.1.

.. describe:: |today|

   Replaced by either today's date, or the date set in the build configuration
   file.  Normally has the format ``April 14, 2007``.
229
converter/newfiles/doc_rest.rst
Normal file
@@ -0,0 +1,229 @@
.. highlightlang:: rest

reStructuredText Primer
=======================

This section is a brief introduction to reStructuredText (reST) concepts and
syntax, to provide authors with enough information to author documents
productively.  Since reST was designed to be a simple, unobtrusive markup
language, this will not take too long.

.. seealso::

   The authoritative `reStructuredText User
   Documentation <http://docutils.sourceforge.net/rst.html>`_.


Paragraphs
----------

The paragraph is the most basic block a reST document is made of.  Paragraphs
are chunks of text separated by one or more blank lines.  As in Python,
indentation is significant in reST, so all lines of a paragraph must be
left-aligned.


Inline markup
-------------

The standard reST inline markup is quite simple: use

* one asterisk: ``*text*`` for emphasis (italics),
* two asterisks: ``**text**`` for strong emphasis (boldface), and
* backquotes: ````text```` for code samples.

If asterisks or backquotes appear in running text and could be confused with
inline markup delimiters, they have to be escaped with a backslash.

Be aware of some restrictions of this markup:

* it may not be nested,
* content may not start or end with whitespace: ``* text*`` is wrong,
* it must be separated from surrounding text by non-word characters.  Use a
  backslash-escaped space to work around that: ``thisis\ *one*\ word``.

These restrictions may be lifted in future versions of the docutils.
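The escaping rule can be sketched as a small helper; ``escape_markup`` is a
hypothetical name for illustration, not a docutils API.

```python
def escape_markup(text):
    # Backslashes must be escaped first, then the inline markup delimiters.
    return (text.replace('\\', '\\\\')
                .replace('*', '\\*')
                .replace('`', '\\`'))
```

So a literal ``2 * 3`` in running text becomes ``2 \* 3`` in the reST source.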

reST also allows for custom "interpreted text roles", which signify that the
enclosed text should be interpreted in a specific way.  Sphinx uses this to
provide semantic markup and cross-referencing of identifiers, as described in
the appropriate section.  The general syntax is ``:rolename:`content```.


Lists and Quotes
----------------

List markup is natural: just place an asterisk at the start of a paragraph and
indent properly.  The same goes for numbered lists; they can also be
autonumbered using a ``#`` sign::

   * This is a bulleted list.
   * It has two items, the second
     item uses two lines.

   #. This is a numbered list.
   #. It has two items too.

Nested lists are possible, but be aware that they must be separated from the
parent list items by blank lines::

   * this is
   * a list

     * with a nested list
     * and some subitems

   * and here the parent list continues

Definition lists are created as follows::

   term (up to a line of text)
      Definition of the term, which must be indented

      and can even consist of multiple paragraphs

   next term
      Description.

Paragraphs are quoted by just indenting them more than the surrounding
paragraphs.


Source Code
-----------

Literal code blocks are introduced by ending a paragraph with the special
marker ``::``.  The literal block must be indented to be able to include blank
lines::

   This is a normal text paragraph. The next paragraph is a code sample::

      It is not processed in any way, except
      that the indentation is removed.

      It can span multiple lines.

   This is a normal text paragraph again.

The handling of the ``::`` marker is smart:

* If it occurs as a paragraph of its own, that paragraph is completely left
  out of the document.
* If it is preceded by whitespace, the marker is removed.
* If it is preceded by non-whitespace, the marker is replaced by a single
  colon.

That way, the second sentence in the above example's first paragraph would be
rendered as "The next paragraph is a code sample:".
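The three rules above can be sketched as follows (this mirrors the docutils
behavior described in the text; the function name ``handle_marker`` is made up
for illustration):

```python
def handle_marker(paragraph):
    if paragraph.strip() == '::':
        return None                  # paragraph of its own: dropped entirely
    if paragraph.endswith(' ::'):
        return paragraph[:-3]        # preceded by whitespace: marker removed
    if paragraph.endswith('::'):
        return paragraph[:-2] + ':'  # preceded by text: replaced by a colon
    return paragraph
```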

Hyperlinks
----------

External links
^^^^^^^^^^^^^^

Use ```Link text <http://target>`_`` for inline web links.  If the link text
should be the web address, you don't need special markup at all; the parser
finds links and mail addresses in ordinary text.

Internal links
^^^^^^^^^^^^^^

Internal linking is done via a special reST role; see the section on specific
markup, :ref:`doc-ref-role`.


Sections
--------

Section headers are created by underlining (and optionally overlining) the
section title with a punctuation character, at least as long as the text::

   =================
   This is a heading
   =================

Normally, there are no heading levels assigned to certain characters, as the
structure is determined from the succession of headings.  However, for the
Python documentation, we use this convention:

* ``#`` with overline, for parts
* ``*`` with overline, for chapters
* ``=``, for sections
* ``-``, for subsections
* ``^``, for subsubsections
* ``"``, for paragraphs
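Generating a correctly sized underline is trivial to automate.  A minimal
sketch (``make_heading`` is a hypothetical helper, not a Sphinx function):

```python
def make_heading(title, char='=', overline=False):
    # The underline (and overline) must be at least as long as the title.
    line = char * len(title)
    if overline:
        return '\n'.join([line, title, line])
    return '\n'.join([title, line])
```

For example, ``make_heading('Sections', '-')`` produces a subsection header in
the convention listed above.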

Explicit Markup
---------------

"Explicit markup" is used in reST for most constructs that need special
handling, such as footnotes, specially-highlighted paragraphs, comments, and
generic directives.

An explicit markup block begins with a line starting with ``..`` followed by
whitespace and is terminated by the next paragraph at the same level of
indentation.  (There needs to be a blank line between explicit markup and
normal paragraphs.  This may all sound a bit complicated, but it is intuitive
enough when you write it.)


Directives
----------

A directive is a generic block of explicit markup.  Besides roles, it is one
of the extension mechanisms of reST, and Sphinx makes heavy use of it.

Basically, a directive consists of a name, arguments, options and content.
(Keep this terminology in mind, it is used in the next chapter describing
custom directives.)  Looking at this example, ::

   .. function:: foo(x)
                 foo(y, z)
      :bar: no

      Return a line of text input from the user.

``function`` is the directive name.  It is given two arguments here, the
remainder of the first line and the second line, as well as one option ``bar``
(as you can see, options are given in the lines immediately following the
arguments and indicated by the colons).

The directive content follows after a blank line and is indented relative to
the directive start.


Footnotes
---------

For footnotes, use ``[#]_`` to mark the footnote location, and add the
footnote body at the bottom of the document after a "Footnotes" rubric
heading, like so::

   Lorem ipsum [#]_ dolor sit amet ... [#]_

   .. rubric:: Footnotes

   .. [#] Text of the first footnote.
   .. [#] Text of the second footnote.
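Auto-numbering works by assigning each ``[#]_`` reference the next number in
document order.  A rough sketch of that numbering step (illustrative only;
docutils does this internally):

```python
import re

def number_footnotes(text):
    # Hand out 1, 2, 3, ... to each auto-numbered reference in order.
    counter = iter(range(1, text.count('[#]_') + 1))
    return re.sub(r'\[#\]_', lambda m: '[%d]_' % next(counter), text)
```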

Comments
--------

Every explicit markup block which isn't a valid markup construct (like the
footnotes above) is regarded as a comment.


Source encoding
---------------

Since the easiest way to include special characters like em dashes or
copyright signs in reST is to directly write them as Unicode characters, one
has to specify an encoding:

All Python documentation source files must be in UTF-8 encoding, and the HTML
documents written from them will be in that encoding as well.


XXX: Gotchas
55
converter/newfiles/doc_sphinx.rst
Normal file
@@ -0,0 +1,55 @@
.. highlightlang:: rest

The Sphinx build system
=======================

XXX: intro...

.. _doc-build-config:

The build configuration file
----------------------------

The documentation root, that is, the ``Doc`` subdirectory of the source
distribution, contains a file named ``conf.py``.  This file is called the
"build configuration file", and it contains several variables that are read
and used during a build run.

These variables are:

release : string
   A string that is used as a replacement for the ``|release|`` reST
   substitution.  It should be the full version string including
   alpha/beta/release candidate tags, e.g. ``2.5.2b3``.

version : string
   A string that is used as a replacement for the ``|version|`` reST
   substitution.  It should be the Python version the documentation refers
   to.  This consists only of the major and minor version parts, e.g.
   ``2.5``, even for version 2.5.1.

today_fmt : string
   A ``strftime`` format that is used to format a replacement for the
   ``|today|`` reST substitution.

today : string
   A string that can contain a date that should be written to the
   documentation output literally.  If this is not empty, it is used instead
   of ``strftime(today_fmt)``.
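The interplay of ``today`` and ``today_fmt`` can be sketched as follows (an
illustrative helper, not the build system's actual code; the default format
shown matches the ``April 14, 2007`` style mentioned earlier):

```python
import time

def format_today(today='', today_fmt='%B %d, %Y'):
    # A literal "today" value overrides the strftime-based one.
    if today:
        return today
    return time.strftime(today_fmt)
```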

unused_file : list of strings
   A list of reST filenames that are to be disregarded during building.  This
   could be docs for temporarily disabled modules or documentation that's not
   yet ready for public consumption.

last_updated_format : string
   If this is not an empty string, it will be given to ``time.strftime()``
   and written to each generated output file after "last updated on:".

use_smartypants : bool
   If true, use SmartyPants to convert quotes and dashes to the
   typographically correct entities.

strip_trailing_parentheses : bool
   If true, trailing parentheses will be stripped from ``:func:`` etc.
   cross-references.
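The ``strip_trailing_parentheses`` behavior amounts to a one-line transform,
sketched here for illustration (the helper name is made up):

```python
def strip_parens(target):
    # Turn cross-reference text like "open()" into "open".
    if target.endswith('()'):
        return target[:-2]
    return target
```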
57
converter/newfiles/doc_style.rst
Normal file
@@ -0,0 +1,57 @@
.. highlightlang:: rest

Style Guide
===========

The Python documentation should follow the `Apple Publications Style Guide`_
wherever possible.  This particular style guide was selected mostly because it
seems reasonable and is easy to get online.

.. _Apple Publications Style Guide: http://developer.apple.com/documentation/UserExperience/Conceptual/APStyleGuide/AppleStyleGuide2003.pdf

Topics which are not covered in Apple's style guide will be discussed in this
document if necessary.

Footnotes are generally discouraged, though they may be used when they are the
best way to present specific information.  When a footnote reference is added
at the end of a sentence, it should follow the sentence-ending punctuation.
The reST markup should appear something like this::

   This sentence has a footnote reference. [#]_ This is the next sentence.

Footnotes should be gathered at the end of a file, or if the file is very
long, at the end of a section.  The docutils will automatically create
backlinks to the footnote reference.

Footnotes may appear in the middle of sentences where appropriate.

Many special names are used in the Python documentation, including the names
of operating systems, programming languages, standards bodies, and the like.
Most of these entities are not assigned any special markup, but the preferred
spellings are given here to aid authors in maintaining the consistency of
presentation in the Python documentation.

Other terms and words deserve special mention as well; these conventions
should be used to ensure consistency throughout the documentation:

CPU
   For "central processing unit."  Many style guides say this should be
   spelled out on the first use (and if you must use it, do so!).  For the
   Python documentation, this abbreviation should be avoided since there's no
   reasonable way to predict which occurrence will be the first seen by the
   reader.  It is better to use the word "processor" instead.

POSIX
   The name assigned to a particular group of standards.  This is always
   uppercase.

Python
   The name of our favorite programming language is always capitalized.

Unicode
   The name of a character set and matching encoding.  This is always written
   capitalized.

Unix
   The name of the operating system developed at AT&T Bell Labs in the early
   1970s.
34
converter/newfiles/ext_index.rst
Normal file
@@ -0,0 +1,34 @@
.. _extending-index:

##################################################
Extending and Embedding the Python Interpreter
##################################################

:Release: |version|
:Date: |today|

This document describes how to write modules in C or C++ to extend the Python
interpreter with new modules.  Those modules can not only define new functions
but also new object types and their methods.  The document also describes how
to embed the Python interpreter in another application, for use as an
extension language.  Finally, it shows how to compile and link extension
modules so that they can be loaded dynamically (at run time) into the
interpreter, if the underlying operating system supports this feature.

This document assumes basic knowledge about Python.  For an informal
introduction to the language, see :ref:`tutorial-index`.  :ref:`reference-index`
gives a more formal definition of the language.  :ref:`modules-index` documents
the existing object types, functions and modules (both built-in and written in
Python) that give the language its wide application range.

For a detailed description of the whole Python/C API, see the separate
:ref:`c-api-index`.

.. toctree::
   :maxdepth: 2

   extending.rst
   newtypes.rst
   building.rst
   windows.rst
   embedding.rst
34
converter/newfiles/mac_index.rst
Normal file
@@ -0,0 +1,34 @@
.. _macmodules-index:

##############################
Macintosh Library Modules
##############################

:Release: |version|
:Date: |today|

This library reference manual documents Python's extensions for the Macintosh.
It should be used in conjunction with :ref:`modules-index`, which documents
the standard library and built-in types.

This manual assumes basic knowledge about the Python language.  For an
informal introduction to Python, see :ref:`tutorial-index`;
:ref:`reference-index` remains the highest authority on syntactic and semantic
questions.  Finally, the manual entitled :ref:`extending-index` describes how
to add new extensions to Python and how to embed it in other applications.

.. toctree::
   :maxdepth: 2

   using.rst
   mac.rst
   macic.rst
   macos.rst
   macostools.rst
   macui.rst
   framework.rst
   autogil.rst
   scripting.rst
   toolbox.rst
   colorpicker.rst
   undoc.rst
67
converter/newfiles/modules_index.rst
Normal file
@@ -0,0 +1,67 @@
.. _modules-index:

###############################
The Python standard library
###############################

:Release: |version|
:Date: |today|

While :ref:`reference-index` describes the exact syntax and semantics of the
language, it does not describe the standard library that is distributed with
the language, and which greatly enhances its immediate usability.  This
library contains built-in modules (written in C) that provide access to system
functionality such as file I/O that would otherwise be inaccessible to Python
programmers, as well as modules written in Python that provide standardized
solutions for many problems that occur in everyday programming.  Some of these
modules are explicitly designed to encourage and enhance the portability of
Python programs.

This library reference manual documents Python's standard library, as well as
many optional library modules (which may or may not be available, depending on
whether the underlying platform supports them and on the configuration choices
made at compile time).  It also documents the standard types of the language
and its built-in functions and exceptions, many of which are not documented,
or are only incompletely documented, in the Reference Manual.


.. toctree::
   :maxdepth: 2

   intro.rst
   strings.rst
   datatypes.rst
   numeric.rst
   netdata.rst
   markup.rst
   fileformats.rst
   crypto.rst
   filesys.rst
   archiving.rst
   persistence.rst
   allos.rst
   someos.rst
   unix.rst
   ipc.rst
   internet.rst
   mm.rst
   tkinter.rst
   i18n.rst
   frameworks.rst
   development.rst
   pdb.rst
   profile.rst
   hotshot.rst
   timeit.rst
   trace.rst
   python.rst
   custominterp.rst
   restricted.rst
   modules.rst
   language.rst
   compiler.rst
   misc.rst
   sgi.rst
   sun.rst
   windows.rst
   undoc.rst
34
converter/newfiles/ref_index.rst
Normal file
@@ -0,0 +1,34 @@
.. _reference-index:

#################################
The Python language reference
#################################

:Release: |version|
:Date: |today|

This reference manual describes the syntax and "core semantics" of the
language.  It is terse, but attempts to be exact and complete.  The semantics
of non-essential built-in object types and of the built-in functions and
modules are described in :ref:`modules-index`.  For an informal introduction
to the language, see :ref:`tutorial-index`.  For C or C++ programmers, two
additional manuals exist: :ref:`extending-index` describes the high-level
picture of how to write a Python extension module, and the :ref:`c-api-index`
describes the interfaces available to C/C++ programmers in detail.

.. toctree::
   :maxdepth: 2

   introduction.rst
   lexical_analysis.rst
   datamodel.rst
   executionmodel.rst
   expressions.rst
   simple_stmts.rst
   compound_stmts.rst
   toplevel_components.rst
   functions.rst
   consts.rst
   objects.rst
   stdtypes.rst
   exceptions.rst
60
converter/newfiles/tutorial_index.rst
Normal file
@@ -0,0 +1,60 @@
.. _tutorial-index:

######################
The Python tutorial
######################

:Release: |version|
:Date: |today|

Python is an easy to learn, powerful programming language.  It has efficient
high-level data structures and a simple but effective approach to
object-oriented programming.  Python's elegant syntax and dynamic typing,
together with its interpreted nature, make it an ideal language for scripting
and rapid application development in many areas on most platforms.

The Python interpreter and the extensive standard library are freely available
in source or binary form for all major platforms from the Python Web site,
http://www.python.org/, and may be freely distributed.  The same site also
contains distributions of and pointers to many free third party Python
modules, programs and tools, and additional documentation.

The Python interpreter is easily extended with new functions and data types
implemented in C or C++ (or other languages callable from C).  Python is also
suitable as an extension language for customizable applications.

This tutorial introduces the reader informally to the basic concepts and
features of the Python language and system.  It helps to have a Python
interpreter handy for hands-on experience, but all examples are
self-contained, so the tutorial can be read off-line as well.

For a description of standard objects and modules, see the Python Library
Reference document.  The Python Reference Manual gives a more formal
definition of the language.  To write extensions in C or C++, read Extending
and Embedding the Python Interpreter and Python/C API Reference.  There are
also several books covering Python in depth.

This tutorial does not attempt to be comprehensive and cover every single
feature, or even every commonly used feature.  Instead, it introduces many of
Python's most noteworthy features, and will give you a good idea of the
language's flavor and style.  After reading it, you will be able to read and
write Python modules and programs, and you will be ready to learn more about
the various Python library modules described in the Python Library Reference.

.. toctree::

   appetite.rst
   interpreter.rst
   introduction.rst
   controlflow.rst
   datastructures.rst
   modules.rst
   inputoutput.rst
   errors.rst
   classes.rst
   stdlib.rst
   stdlib2.rst
   whatnow.rst
   interactive.rst
   floatingpoint.rst
   glossary.rst
959
converter/restwriter.py
Normal file
@@ -0,0 +1,959 @@
# -*- coding: utf-8 -*-
"""
    Python documentation ReST writer
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    How the converter works
    =======================

    A LaTeX document is tokenized by a `Tokenizer`.  The tokens are processed
    by the `DocParser` class which emits a tree of `DocNode`\s.  The
    `RestWriter` then walks this node tree and generates ReST from it.

    There are some intricacies while writing ReST:

    - Paragraph text must be rewrapped in order to avoid ragged lines.  The
      `textwrap` module does that nicely, but it must obviously operate on a
      whole paragraph at a time.  Therefore the contents of the current
      paragraph are cached in `self.curpar`.  Every time a block level element
      is encountered, its node handler calls `self.flush_par()` which writes
      out a paragraph.  Because this can be detrimental for the markup at
      several stages, the `self.noflush` context manager can be used to forbid
      paragraph flushing temporarily, which means that no block level nodes
      can be processed.

    - There are no inline comments in ReST.  Therefore comments are stored in
      `self.comments` and written out every time the paragraph is flushed.

    - A similar thing goes for footnotes: `self.footnotes`.

    - Some inline markup cannot contain nested markup.  Therefore the function
      `textonly()` exists, which returns a node similar to its argument, but
      stripped of inline markup.

    - Some constructs need to format non-block-level nodes, but without
      writing the result to the current paragraph.  These use
      `self.get_node_text()`, which writes to a temporary paragraph and
      returns the resulting markup.

    - Indentation is important.  The `self.indent` context manager helps
      keeping track of indentation levels.

    - Some blocks, like lists, need to prevent the first line from being
      indented because the indentation space is already filled (e.g. by a
      bullet).  Therefore the `self.indent` context manager accepts a
      `firstline` flag which can be set to ``False``, resulting in the first
      line not being indented.


    There are some restrictions on markup compared to LaTeX:

    - Table cells may not contain blocks.

    - Hard line breaks don't exist.

    - Block level markup inside "alltt" environments doesn't work.

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""

# yay!
from __future__ import with_statement

import re
import StringIO
import textwrap

WIDTH = 80
INDENT = 3

new_wordsep_re = re.compile(
        r'(\s+|'                                  # any whitespace
        r'(?<=\s)(?::[a-z-]+:)?`\S+|'             # interpreted text start
        r'[^\s\w]*\w+[a-zA-Z]-(?=\w+[a-zA-Z])|'   # hyphenated words
        r'(?<=[\w\!\"\'\&\.\,\?])-{2,}(?=\w))')   # em-dash

# monkey-patch...
textwrap.TextWrapper.wordsep_re = new_wordsep_re
wrapper = textwrap.TextWrapper(width=WIDTH, break_long_words=False)

from .docnodes import RootNode, TextNode, NodeList, InlineNode, \
     CommentNode, EmptyNode
from .util import fixup_text, empty, text, my_make_id, \
     repair_bad_inline_markup
from .filenamemap import includes_mapping


class WriterError(Exception):
    pass

class Indenter(object):
|
||||
""" Context manager factory for indentation. """
|
||||
def __init__(self, writer):
|
||||
class IndenterManager(object):
|
||||
def __init__(self, indentlevel, flush, firstline):
|
||||
self.indentlevel = indentlevel
|
||||
self.flush = flush
|
||||
self.firstline = firstline
|
||||
|
||||
def __enter__(self):
|
||||
writer.indentation += (self.indentlevel * ' ')
|
||||
writer.indentfirstline = self.firstline
|
||||
return self
|
||||
|
||||
def __exit__(self, *ignored):
|
||||
if self.flush:
|
||||
writer.flush_par()
|
||||
writer.indentation = writer.indentation[:-self.indentlevel]
|
||||
|
||||
self.manager = IndenterManager
|
||||
|
||||
def __call__(self, indentlevel=INDENT, flush=True, firstline=True):
|
||||
return self.manager(indentlevel, flush, firstline)
|
||||
|
||||
|
||||
class NoFlush(object):
|
||||
""" Convenience context manager. """
|
||||
def __init__(self, writer):
|
||||
self.writer = writer
|
||||
|
||||
def __enter__(self):
|
||||
self.writer.no_flushing += 1
|
||||
|
||||
def __exit__(self, *ignored):
|
||||
self.writer.no_flushing -= 1
|
||||
|
||||
|
||||
class SectionMeta(object):
|
||||
def __init__(self):
|
||||
self.modname = ''
|
||||
self.platform = ''
|
||||
self.synopsis = []
|
||||
self.modauthors = []
|
||||
self.sectauthors = []


class RestWriter(object):
    """ Write ReST from a node tree. """

    def __init__(self, fp, splitchap=False, toctree=None, deflang=None, labelprefix=''):
        self.splitchap = splitchap      # split output at chapters?
        if splitchap:
            self.fp = StringIO.StringIO()   # dummy one
            self.chapters = [self.fp]
        else:
            self.fp = fp                # file pointer
        self.toctree = toctree          # entries for the TOC tree
        self.deflang = deflang          # default highlighting language
        self.labelprefix = labelprefix  # prefix for all label names

        # indentation tools
        self.indentation = ''           # current indentation string
        self.indentfirstline = True     # indent the first line of next paragraph?
        self.indented = Indenter(self)  # convenience context manager

        # paragraph flushing tools
        self.flush_cb = None            # callback run on next paragraph flush, used
                                        # for properly separating field lists from
                                        # the following paragraph
        self.no_flushing = 0            # raise an error on paragraph flush?
        self.noflush = NoFlush(self)    # convenience context manager

        # collected items to output later
        self.curpar = []                # text in current paragraph
        self.comments = []              # comments to be output after flushing
        self.indexentries = []          # index entries to be output before flushing
        self.footnotes = []             # footnotes to be output at document end
        self.warnings = []              # warnings while writing

        # specials
        self.sectionlabel = ''          # most recent \label command
        self.thisclass = ''             # most recent classdesc name
        self.sectionmeta = None         # current section metadata
        self.noescape = 0               # don't escape text nodes
        self.indexsubitem = ''          # current \withsubitem text

    def write_document(self, rootnode):
        """ Write a document, represented by a RootNode. """
        assert type(rootnode) is RootNode

        if self.deflang:
            self.write_directive('highlightlang', self.deflang)

        self.visit_node(rootnode)
        self.write_footnotes()

    def new_chapter(self):
        """ Called if self.splitchap is True. Create a new file pointer
        and set self.fp to it. """
        new_fp = StringIO.StringIO()
        self.chapters.append(new_fp)
        self.fp = new_fp

    def write(self, text='', nl=True, first=False):
        """ Write a string to the output file. """
        if first:
            self.fp.write((self.indentation if self.indentfirstline else '') + text)
            self.indentfirstline = True
        elif text:  # don't write indentation only
            self.fp.write(self.indentation + text)
        if nl:
            self.fp.write('\n')

    def write_footnotes(self):
        """ Write the current footnotes, if any. """
        self.flush_par()
        if self.footnotes:
            self.write('.. rubric:: Footnotes\n')
            footnotes = self.footnotes
            self.footnotes = []  # first clear, since indented() will flush
            for footnode in footnotes:
                self.write('.. [#] ', nl=False)
                with self.indented(3, firstline=False):
                    self.visit_node(footnode)

    def write_directive(self, name, args='', node=None, spabove=False, spbelow=True):
        """ Helper to write a ReST directive. """
        if spabove:
            self.write()
        self.write('.. %s::%s' % (name, args and ' '+args))
        if spbelow:
            self.write()
        with self.indented():
            if node is not None:
                self.visit_node(node)

    def write_sectionmeta(self):
        mod = self.sectionmeta
        self.sectionmeta = None
        if not mod:
            return
        if mod.modname:
            self.write('.. module:: %s' % mod.modname)
            if mod.platform:
                self.write('   :platform: %s' % mod.platform)
            if mod.synopsis:
                self.write('   :synopsis: %s' % mod.synopsis[0])
                for line in mod.synopsis[1:]:
                    self.write('              %s' % line)
        if mod.modauthors:
            for author in mod.modauthors:
                self.write('.. moduleauthor:: %s' % author)
        if mod.sectauthors:
            for author in mod.sectauthors:
                self.write('.. sectionauthor:: %s' % author)
        self.write()
        self.write()

    indexentry_mapping = {
        'index': 'single',
        'indexii': 'pair',
        'indexiii': 'triple',
        'indexiv': 'quadruple',
        'stindex': 'statement',
        'ttindex': 'single',
        'obindex': 'object',
        'opindex': 'operator',
        'kwindex': 'keyword',
        'exindex': 'exception',
        'bifuncindex': 'builtin',
        'refmodindex': 'module',
        'refbimodindex': 'module',
        'refexmodindex': 'module',
        'refstmodindex': 'module',
    }

    def get_indexentries(self, entries):
        """ Return a list of lines for the index entries. """
        def format_entry(cmdname, args, subitem):
            textargs = []
            for arg in args:
                if isinstance(arg, TextNode):
                    textarg = text(arg)
                else:
                    textarg = self.get_node_text(self.get_textonly_node(arg, warn=0))
                if ';' in textarg:
                    raise WriterError("semicolon in index args: " + textarg)
                textarg += subitem
                textarg = textarg.replace('!', '; ')
                textargs.append(textarg)
            return '%s: %s' % (self.indexentry_mapping[cmdname],
                               '; '.join(textarg for textarg in textargs
                                         if not empty(arg)))

        ret = []
        if len(entries) == 1:
            ret.append('.. index:: %s' % format_entry(*entries[0]))
        else:
            ret.append('.. index::')
            for entry in entries:
                ret.append('   %s' % format_entry(*entry))
        return ret

    def get_par(self, wrap, width=None):
        """ Get the contents of the current paragraph.
        Returns a list if wrap and not indent, else a string. """
        if not self.curpar:
            if wrap:
                return []
            else:
                return ''
        text = ''.join(self.curpar).lstrip()
        text = repair_bad_inline_markup(text)
        self.curpar = []
        if wrap:
            # returns a list!
            wrapper.width = width or WIDTH
            return wrapper.wrap(text)
        else:
            return text

    no_warn_textonly = set((
        'var', 'code', 'textrm', 'emph', 'keyword', 'textit', 'programopt',
        'cfunction', 'texttt', 'email', 'constant',
    ))

    def get_textonly_node(self, node, cmd='', warn=1):
        """ Return a similar Node or NodeList that only has TextNode subnodes.

        Warning values:
        - 0: never warn
        - 1: warn for markup losing information
        """
        if cmd == 'code':
            warn = 0
        def do(subnode):
            if isinstance(subnode, TextNode):
                return subnode
            if isinstance(subnode, NodeList):
                return NodeList(do(subsubnode) for subsubnode in subnode)
            if isinstance(subnode, CommentNode):
                # loses comments, but huh
                return EmptyNode()
            if isinstance(subnode, InlineNode):
                if subnode.cmdname == 'optional':
                    # this is not mapped to ReST markup
                    return subnode
                if len(subnode.args) == 1:
                    if warn == 1 and subnode.cmdname not in self.no_warn_textonly:
                        self.warnings.append('%r: Discarding %s markup in %r' %
                                             (cmd, subnode.cmdname, node))
                    return do(subnode.args[0])
                elif len(subnode.args) == 0:
                    # should only happen for IndexNodes, which stay in
                    return subnode
                elif len(subnode.args) == 2 and subnode.cmdname == 'refmodule':
                    if not warn:
                        return do(subnode.args[1])
            raise WriterError('get_textonly_node() failed for %r' % subnode)
        return do(node)

    def get_node_text(self, node, wrap=False, width=None):
        """ Write the node to a temporary paragraph and return the result
        as a string. """
        with self.noflush:
            self._old_curpar = self.curpar
            self.curpar = []
            self.visit_node(node)
            ret = self.get_par(wrap, width=width)
            self.curpar = self._old_curpar
        return ret

    def flush_par(self, nocb=False, nocomments=False):
        """ Write the current paragraph to the output file.
        Prepend index entries, append comments and footnotes. """
        if self.no_flushing:
            raise WriterError('called flush_par() while noflush active')
        if self.indexentries:
            for line in self.get_indexentries(self.indexentries):
                self.write(line)
            self.write()
            self.indexentries = []
        if self.flush_cb and not nocb:
            self.flush_cb()
            self.flush_cb = None
        par = self.get_par(wrap=True)
        if par:
            for i, line in enumerate(par):
                self.write(line, first=(i == 0))
            self.write()
        if self.comments and not nocomments:
            for comment in self.comments:
                self.write('.. % ' + comment)
            self.write()
            self.comments = []
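The writer above accumulates inline fragments in `curpar` and only wraps and emits them when a paragraph boundary forces a `flush_par()`. A minimal standalone sketch of that accumulate-then-flush pattern (hypothetical names, written for modern Python, not part of this file):

```python
import textwrap

class ParBuffer:
    """Collect inline fragments; wrap and emit them as one paragraph."""
    def __init__(self, width=72):
        self.curpar = []   # pending inline fragments
        self.out = []      # emitted output lines
        self.width = width

    def add(self, fragment):
        self.curpar.append(fragment)

    def flush_par(self):
        text = ''.join(self.curpar).strip()
        self.curpar = []
        if text:
            self.out.extend(textwrap.wrap(text, self.width))
            self.out.append('')  # blank line terminates the paragraph

buf = ParBuffer(width=20)
for piece in ('ReST is written ', 'one paragraph ', 'at a time.'):
    buf.add(piece)
buf.flush_par()
# buf.out is now the wrapped paragraph followed by a blank line
```

Deferring the wrap until flush time is what lets inline markup handlers append prefix/suffix strings without worrying about line breaks.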

    def visit_wrapped(self, pre, node, post, noescape=False):
        """ Write a node within a paragraph, wrapped with pre and post strings. """
        if noescape:
            self.noescape += 1
        self.curpar.append(pre)
        with self.noflush:
            self.visit_node(node)
        self.curpar.append(post)
        if noescape:
            self.noescape -= 1

    def visit_node(self, node):
        """ "Write" a node (appends to curpar or writes something). """
        visitfunc = getattr(self, 'visit_' + node.__class__.__name__, None)
        if not visitfunc:
            raise WriterError('no visit function for %s node' % node.__class__)
        visitfunc(node)

    # ------------------------- node handlers -----------------------------

    def visit_RootNode(self, node):
        if node.params.get('title'):
            title = self.get_node_text(node.params['title'])
            hl = len(title)
            self.write('*' * (hl+4))
            self.write(' %s ' % title)
            self.write('*' * (hl+4))
            self.write()

        if node.params.get('author'):
            self.write(':Author: %s%s' %
                       (self.get_node_text(node.params['author']),
                        (' <%s>' % self.get_node_text(node.params['authoremail'])
                         if 'authoremail' in node.params else '')))
            self.write()

        if node.params.get('date'):
            self.write(':Date: %s' % self.get_node_text(node.params['date']))
            self.write()

        if node.params.get('release'):
            self.write('.. |release| replace:: %s' %
                       self.get_node_text(node.params['release']))
            self.write()

        self.visit_NodeList(node.children)

    def visit_NodeList(self, nodelist):
        for node in nodelist:
            self.visit_node(node)

    def visit_CommentNode(self, node):
        # no inline comments -> they are all output at the start of a new paragraph
        self.comments.append(node.comment.strip())

    sectchars = {
        'chapter': '*',
        'chapter*': '*',
        'section': '=',
        'subsection': '-',
        'subsubsection': '^',
        'paragraph': '"',
    }

    sectdoubleline = [
        'chapter',
        'chapter*',
    ]

    def visit_SectioningNode(self, node):
        self.flush_par()
        self.sectionlabel = ''
        self.thisclass = ''
        self.write()

        if self.splitchap and node.cmdname.startswith('chapter'):
            self.write_footnotes()
            self.new_chapter()

        heading = self.get_node_text(node.args[0]).strip()
        if self.sectionlabel:
            self.write('.. _%s:\n' % self.sectionlabel)
        hl = len(heading)
        if node.cmdname in self.sectdoubleline:
            self.write(self.sectchars[node.cmdname] * hl)
        self.write(heading)
        self.write(self.sectchars[node.cmdname] * hl)
        self.write()

    def visit_EnvironmentNode(self, node):
        self.flush_par()
        envname = node.envname
        if envname == 'notice':
            type = text(node.args[0]) or 'note'
            self.write_directive(type, '', node.content)
        elif envname in ('seealso', 'seealso*'):
            self.write_directive('seealso', '', node.content, spabove=True)
        elif envname == 'abstract':
            self.write_directive('topic', 'Abstract', node.content, spabove=True)
        elif envname == 'quote':
            with self.indented():
                self.visit_node(node.content)
            self.write()
        elif envname == 'quotation':
            self.write_directive('epigraph', '', node.content, spabove=True)
        else:
            raise WriterError('no handler for %s environment' % envname)

    descmap = {
        'funcdesc': ('function', '0(1)'),
        'funcdescni': ('function', '0(1)'),
        'classdesc': ('class', '0(1)'),
        'classdesc*': ('class', '0'),
        'methoddesc': ('method', '0.1(2)'),
        'methoddescni': ('method', '0.1(2)'),
        'excdesc': ('exception', '0'),
        'excclassdesc': ('exception', '0(1)'),
        'datadesc': ('data', '0'),
        'datadescni': ('data', '0'),
        'memberdesc': ('attribute', '0.1'),
        'memberdescni': ('attribute', '0.1'),
        'opcodedesc': ('opcode', '0 (1)'),

        'cfuncdesc': ('cfunction', '0 1(2)'),
        'cmemberdesc': ('cmember', '1 0.2'),
        'csimplemacrodesc': ('cmacro', '0'),
        'ctypedesc': ('ctype', '1'),
        'cvardesc': ('cvar', '0 1'),
    }

    def _write_sig(self, spec, args):
        # don't escape "*" in signatures
        self.noescape += 1
        for c in spec:
            if c.isdigit():
                self.visit_node(self.get_textonly_node(args[int(c)]))
            else:
                self.curpar.append(c)
        self.noescape -= 1

    def visit_DescEnvironmentNode(self, node):
        envname = node.envname
        if envname not in self.descmap:
            raise WriterError('no handler for %s environment' % envname)

        self.flush_par()
        # automatically fill in the class name if not given
        if envname[:9] == 'classdesc' or envname[:12] == 'excclassdesc':
            self.thisclass = text(node.args[0])
        elif envname[:10] in ('methoddesc', 'memberdesc') and not \
                text(node.args[0]):
            if not self.thisclass:
                raise WriterError('No current class for %s member' %
                                  text(node.args[1]))
            node.args[0] = TextNode(self.thisclass)
        directivename, sigspec = self.descmap[envname]
        self._write_sig(sigspec, node.args)
        signature = self.get_par(wrap=False)
        self.write()
        self.write('.. %s:: %s' % (directivename, signature))
        if node.additional:
            for cmdname, add in node.additional:
                entry = self.descmap[cmdname.replace('line', 'desc')]
                if envname[:10] in ('methoddesc', 'memberdesc') and not \
                        text(add[0]):
                    if not self.thisclass:
                        raise WriterError('No current class for %s member' %
                                          text(add[1]))
                    add[0] = TextNode(self.thisclass)
                self._write_sig(entry[1], add)
                signature = self.get_par(wrap=False)
                self.write('        %s%s' % (' ' * (len(directivename) - 2),
                                             signature))
        if envname.endswith('ni'):
            self.write('   :noindex:')
        self.write()
        with self.indented():
            self.visit_node(node.content)


    def visit_CommandNode(self, node):
        cmdname = node.cmdname
        if cmdname == 'label':
            labelname = self.labelprefix + text(node.args[0]).lower()
            if self.no_flushing:
                # in section
                self.sectionlabel = labelname
            else:
                self.flush_par()
                self.write('.. _%s:\n' % labelname)
            return

        elif cmdname in ('declaremodule', 'modulesynopsis',
                         'moduleauthor', 'sectionauthor', 'platform'):
            self.flush_par(nocb=True, nocomments=True)
            if not self.sectionmeta:
                self.sectionmeta = SectionMeta()
            if cmdname == 'declaremodule':
                self.sectionmeta.modname = text(node.args[2])
            elif cmdname == 'modulesynopsis':
                self.sectionmeta.synopsis = self.get_node_text(
                    self.get_textonly_node(node.args[0], warn=0), wrap=True)
            elif cmdname == 'moduleauthor':
                email = text(node.args[1])
                self.sectionmeta.modauthors.append(
                    '%s%s' % (text(node.args[0]), (email and ' <%s>' % email)))
            elif cmdname == 'sectionauthor':
                email = text(node.args[1])
                self.sectionmeta.sectauthors.append(
                    '%s%s' % (text(node.args[0]), (email and ' <%s>' % email)))
            elif cmdname == 'platform':
                self.sectionmeta.platform = text(node.args[0])
            self.flush_cb = lambda: self.write_sectionmeta()
            return

        self.flush_par()
        if cmdname.startswith('see'):
            i = 2
            if cmdname == 'seemodule':
                self.write('Module :mod:`%s`' % text(node.args[1]))
            elif cmdname == 'seelink':
                linktext = self.get_node_text(node.args[1])
                self.write('`%s <%s>`_' % (linktext, text(node.args[0])))
            elif cmdname == 'seepep':
                self.write(':pep:`%s` - %s' % (text(node.args[0]),
                                               self.get_node_text(node.args[1])))
            elif cmdname == 'seerfc':
                self.write(':rfc:`%s` - %s' % (text(node.args[0]),
                                               text(node.args[1])))
            elif cmdname == 'seetitle':
                if empty(node.args[0]):
                    self.write('%s' % text(node.args[1]))
                else:
                    self.write('`%s <%s>`_' % (text(node.args[1]),
                                               text(node.args[0])))
            elif cmdname == 'seeurl':
                i = 1
                self.write('%s' % text(node.args[0]))
            elif cmdname == 'seetext':
                self.visit_node(node.args[0])
                return
            with self.indented():
                self.visit_node(node.args[i])
        elif cmdname in ('versionchanged', 'versionadded'):
            self.write('.. %s:: %s' % (cmdname, text(node.args[1])))
            if not empty(node.args[0]):
                with self.indented():
                    self.visit_node(node.args[0])
                    self.curpar.append('.')
            else:
                self.write()
        elif cmdname == 'deprecated':
            self.write_directive('deprecated', text(node.args[0]), node.args[1],
                                 spbelow=False)
        elif cmdname == 'localmoduletable':
            if self.toctree:
                self.write_directive('toctree', '', spbelow=True, spabove=True)
                with self.indented():
                    for entry in self.toctree:
                        self.write(entry + '.rst')
            else:
                self.warnings.append('no toctree given, but \\localmoduletable in file')
        elif cmdname == 'verbatiminput':
            inclname = text(node.args[0])
            newname = includes_mapping.get(inclname, '../includes/' + inclname)
            if newname is None:
                self.write()
                self.write('.. XXX includefile %s' % inclname)
                return
            self.write()
            self.write('.. include:: %s' % newname)
            self.write('   :literal:')
            self.write()
        elif cmdname == 'input':
            inclname = text(node.args[0])
            newname = includes_mapping.get(inclname, None)
            if newname is None:
                self.write('X' 'XX: input{%s} :XX' 'X' % inclname)
                return
            self.write_directive('include', newname, spabove=True)
        elif cmdname == 'centerline':
            self.write_directive('centered', self.get_node_text(node.args[0]),
                                 spabove=True, spbelow=True)
        elif cmdname == 'XX' 'X':
            self.visit_wrapped(r'**\*\*** ', node.args[0], ' **\*\***')
        else:
            raise WriterError('no handler for %s command' % cmdname)

    def visit_DescLineCommandNode(self, node):
        # these have already been written as arguments of the corresponding
        # DescEnvironmentNode
        pass

    def visit_ParaSepNode(self, node):
        self.flush_par()

    def visit_VerbatimNode(self, node):
        if self.comments:
            # these interfere with the literal block
            self.flush_par()
        if self.curpar:
            last = self.curpar[-1].rstrip(' ')
            if last.endswith(':'):
                self.curpar[-1] = last + ':'
            else:
                self.curpar.append(' ::')
        else:
            self.curpar.append('::')
        self.flush_par()
        with self.indented():
            if isinstance(node.content, TextNode):
                # verbatim
                lines = textwrap.dedent(text(node.content).lstrip('\n')).split('\n')
                if not lines:
                    return
            else:
                # alltt, possibly with inline formats
                lines = self.get_node_text(self.get_textonly_node(
                    node.content, warn=0)).split('\n') + ['']
                # discard leading blank lines
                while not lines[0].strip():
                    del lines[0]
            for line in lines:
                self.write(line)

    note_re = re.compile(r'^\(\d\)$')

    def visit_TableNode(self, node):
        self.flush_par()
        lines = node.lines[:]
        lines.insert(0, node.headings)
        fmted_rows = []
        width = WIDTH - len(self.indentation)
        realwidths = [0] * node.numcols
        colwidth = (width / node.numcols) + 5
        # don't allow paragraphs in table cells for now
        with self.noflush:
            for line in lines:
                cells = []
                for i, cell in enumerate(line):
                    par = self.get_node_text(cell, wrap=True, width=colwidth)
                    if len(par) == 1 and self.note_re.match(par[0].strip()):
                        # special case: escape "(1)" to avoid enumeration
                        par[0] = '\\' + par[0]
                    maxwidth = max(map(len, par)) if par else 0
                    realwidths[i] = max(realwidths[i], maxwidth)
                    cells.append(par)
                fmted_rows.append(cells)

        def writesep(char='-'):
            out = ['+']
            for width in realwidths:
                out.append(char * (width+2))
                out.append('+')
            self.write(''.join(out))

        def writerow(row):
            lines = map(None, *row)
            for line in lines:
                out = ['|']
                for i, cell in enumerate(line):
                    if cell:
                        out.append(' ' + cell.ljust(realwidths[i]+1))
                    else:
                        out.append(' ' * (realwidths[i] + 2))
                    out.append('|')
                self.write(''.join(out))

        writesep('-')
        writerow(fmted_rows[0])
        writesep('=')
        for row in fmted_rows[1:]:
            writerow(row)
        writesep('-')
        self.write()
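`visit_TableNode` renders a reST grid table by first measuring the real width of each column and then drawing `+---+` separator rows and `| cell |` body rows against those widths. A condensed, self-contained sketch of the same drawing logic (hypothetical helper, simplified to single-line cells, written for modern Python):

```python
def draw_grid_table(headings, rows):
    """Render rows of strings as a reST grid table."""
    all_rows = [headings] + rows
    widths = [max(len(r[i]) for r in all_rows) for i in range(len(headings))]

    def sep(char='-'):
        # one '+' per column boundary, padded by two extra chars per cell
        return '+' + '+'.join(char * (w + 2) for w in widths) + '+'

    def row(cells):
        return '|' + '|'.join(' %s ' % c.ljust(w) for c, w in zip(cells, widths)) + '|'

    lines = [sep('-'), row(headings), sep('=')]  # '=' separates the header
    for r in rows:
        lines.append(row(r))
        lines.append(sep('-'))
    return '\n'.join(lines)

table = draw_grid_table(['Name', 'Meaning'], [['index', 'single'], ['indexii', 'pair']])
print(table)
```

Measuring widths before drawing is the essential step: reST grid tables require every `|` and `+` in a column to line up exactly.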

    def visit_ItemizeNode(self, node):
        self.flush_par()
        for title, content in node.items:
            if not empty(title):
                # do it like in a description list
                self.write(self.get_node_text(title))
                with self.indented():
                    self.visit_node(content)
            else:
                self.curpar.append('* ')
                with self.indented(2, firstline=False):
                    self.visit_node(content)

    def visit_EnumerateNode(self, node):
        self.flush_par()
        for title, content in node.items:
            assert empty(title)
            self.curpar.append('#. ')
            with self.indented(3, firstline=False):
                self.visit_node(content)

    def visit_DescriptionNode(self, node):
        self.flush_par()
        for title, content in node.items:
            self.write(self.get_node_text(title))
            with self.indented():
                self.visit_node(content)

    visit_DefinitionsNode = visit_DescriptionNode

    def visit_ProductionListNode(self, node):
        self.flush_par()
        arg = text(node.arg)
        self.write('.. productionlist::%s' % (' '+arg if arg else ''))
        with self.indented():
            for item in node.items:
                if not empty(item[0]):
                    lasttext = text(item[0])
                self.write('%s: %s' % (
                    text(item[0]).ljust(len(lasttext)),
                    self.get_node_text(item[1])))
        self.write()

    def visit_EmptyNode(self, node):
        pass

    def visit_TextNode(self, node):
        if self.noescape:
            self.curpar.append(node.text)
        else:
            self.curpar.append(fixup_text(node.text))

    visit_NbspNode = visit_TextNode
    visit_SimpleCmdNode = visit_TextNode

    def visit_BreakNode(self, node):
        # XXX: linebreaks in ReST?
        self.curpar.append(' --- ')

    def visit_IndexNode(self, node):
        if node.cmdname == 'withsubitem':
            self.indexsubitem = ' ' + text(node.indexargs[0])
            self.visit_node(node.indexargs[1])
            self.indexsubitem = ''
        else:
            self.indexentries.append((node.cmdname, node.indexargs,
                                      self.indexsubitem))

    # maps argumentless commands to text
    simplecmd_mapping = {
        'NULL': '`NULL`',
        'shortversion': '|version|',
        'version': '|release|',
        'today': '|today|',
    }

    # map LaTeX command names to roles: shorter names!
    role_mapping = {
        'cfunction': 'cfunc',
        'constant': 'const',
        'csimplemacro': 'cmacro',
        'exception': 'exc',
        'function': 'func',
        'grammartoken': 'token',
        'member': 'attr',
        'method': 'meth',
        'module': 'mod',
        'programopt': 'option',
        # these mean: no change
        'cdata': '',
        'class': '',
        'command': '',
        'ctype': '',
        'data': '',  # NEW
        'dfn': '',
        'envvar': '',
        'file': '',
        'filenq': '',
        'filevar': '',
        'guilabel': '',
        'kbd': '',
        'keyword': '',
        'mailheader': '',
        'makevar': '',
        'menuselection': '',
        'mimetype': '',
        'newsgroup': '',
        'option': '',
        'pep': '',
        'program': '',
        'ref': '',
        'rfc': '',
    }

    # do not warn about nested inline markup in these roles
    role_no_warn = set((
        'cdata', 'cfunction', 'class', 'constant', 'csimplemacro', 'ctype',
        'data', 'exception', 'function', 'member', 'method', 'module',
    ))

    def visit_InlineNode(self, node):
        # XXX: no nested markup -- docutils doesn't support it
        cmdname = node.cmdname
        if not node.args:
            self.curpar.append(self.simplecmd_mapping[cmdname])
            return
        content = node.args[0]
        if cmdname in ('code', 'bfcode', 'samp', 'texttt', 'regexp'):
            self.visit_wrapped('``', self.get_textonly_node(content, 'code',
                                                            warn=1), '``', noescape=True)
        elif cmdname in ('emph', 'textit'):
            self.visit_wrapped('*', self.get_textonly_node(content, 'emph',
                                                           warn=1), '*')
        elif cmdname in ('strong', 'textbf'):
            self.visit_wrapped('**', self.get_textonly_node(content, 'strong',
                                                            warn=1), '**')
        elif cmdname in ('b', 'textrm', 'email'):
            self.visit_node(content)
        elif cmdname in ('var', 'token'):
            # \token appears in productionlists only
            self.visit_wrapped('`', self.get_textonly_node(content, 'var',
                                                           warn=1), '`')
        elif cmdname == 'ref':
            self.curpar.append(':ref:`%s%s`' % (self.labelprefix,
                                                text(node.args[0]).lower()))
        elif cmdname == 'refmodule':
            self.visit_wrapped(':mod:`', node.args[1], '`', noescape=True)
        elif cmdname == 'optional':
            self.visit_wrapped('[', content, ']')
        elif cmdname == 'url':
            self.visit_node(content)
        elif cmdname == 'ulink':
            target = text(node.args[1])
            if target.startswith('..'):
                self.visit_wrapped('', content, ' (X' + 'XX reference: %s)' % target)
            elif not target.startswith(('http:', 'mailto:')):
                #self.warnings.append('Local \\ulink to %s, use \\ref instead' % target)
                self.visit_wrapped('', content, ' (X' 'XX reference: %s)' % target)
            else:
                self.visit_wrapped('`', self.get_textonly_node(content, 'ulink', warn=1),
                                   ' <%s>`_' % target)
        elif cmdname == 'citetitle':
            target = text(content)
            if not target:
                self.visit_node(node.args[1])
            elif target.startswith('..'):
                self.visit_wrapped('', node.args[1],
                                   ' (X' + 'XX reference: %s)' % target)
            else:
                self.visit_wrapped('`', self.get_textonly_node(node.args[1],
                                                               'citetitle', warn=1),
                                   ' <%s>`_' % target)
        elif cmdname == 'character':
            # ``'a'`` is not longer than :character:`a`
            self.visit_wrapped("``'", content, "'``", noescape=True)
        elif cmdname == 'manpage':
            self.curpar.append(':manpage:`')
            self.visit_node(self.get_textonly_node(content, warn=0))
            self.visit_wrapped('(', self.get_textonly_node(node.args[1], warn=0), ')')
            self.curpar.append('`')
        elif cmdname == 'footnote':
            self.curpar.append(' [#]_')
            self.footnotes.append(content)
        elif cmdname == 'frac':
            self.visit_wrapped('(', node.args[0], ')/')
            self.visit_wrapped('(', node.args[1], ')')
        elif cmdname == 'longprogramopt':
            self.visit_wrapped(':option:`--', content, '`')
        elif cmdname == '':
            self.visit_node(content)
        # stray commands from distutils
        elif cmdname in ('argument name', 'value', 'attribute', 'option name'):
            self.visit_wrapped('`', content, '`')
        else:
            self.visit_wrapped(':%s:`' % (self.role_mapping[cmdname] or cmdname),
                               self.get_textonly_node(
                content, cmdname, warn=(cmdname not in self.role_no_warn)), '`')
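`RestWriter.visit_node` dispatches on the node's class name via `getattr`, so supporting a new node type is just a matter of adding a `visit_<ClassName>` method. A minimal standalone sketch of that dispatch pattern (hypothetical node classes, not part of the converter, written for modern Python):

```python
class TextNode:
    def __init__(self, text):
        self.text = text

class BreakNode:
    pass

class Writer:
    def __init__(self):
        self.curpar = []   # current paragraph fragments

    def visit_node(self, node):
        # look up visit_TextNode, visit_BreakNode, ... by class name
        visitfunc = getattr(self, 'visit_' + node.__class__.__name__, None)
        if visitfunc is None:
            raise RuntimeError('no visit function for %s node' % node.__class__)
        visitfunc(node)

    def visit_TextNode(self, node):
        self.curpar.append(node.text)

    def visit_BreakNode(self, node):
        self.curpar.append(' --- ')

w = Writer()
for n in (TextNode('one'), BreakNode(), TextNode('two')):
    w.visit_node(n)
print(''.join(w.curpar))  # prints "one --- two"
```

The `getattr` lookup with a `None` default is what turns a missing handler into a clear error instead of an `AttributeError` deep inside the traversal.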
97
converter/scanner.py
Normal file
@@ -0,0 +1,97 @@

# -*- coding: utf-8 -*-
"""
    scanner
    ~~~~~~~

    This library implements a regex based scanner.

    :copyright: 2006-2007 by Armin Ronacher, Georg Brandl.
    :license: BSD license.
"""
import re


class EndOfText(RuntimeError):
    """
    Raise if end of text is reached and the user
    tried to call a match function.
    """


class Scanner(object):
    """
    Simple scanner

    All method patterns are regular expression strings (not
    compiled expressions!)
    """

    def __init__(self, text, flags=0):
        """
        :param text: The text which should be scanned
        :param flags: default regular expression flags
        """
        self.data = text
        self.data_length = len(text)
        self.start_pos = 0
        self.pos = 0
        self.flags = flags
        self.last = None
        self.match = None
        self._re_cache = {}

    def eos(self):
        """`True` if the scanner reached the end of text."""
        return self.pos >= self.data_length
    eos = property(eos, eos.__doc__)

    def check(self, pattern):
        """
        Apply `pattern` on the current position and return
        the match object. (Doesn't touch pos). Use this for
        lookahead.
        """
        if self.eos:
            raise EndOfText()
        if pattern not in self._re_cache:
            self._re_cache[pattern] = re.compile(pattern, self.flags)
        return self._re_cache[pattern].match(self.data, self.pos)

    def test(self, pattern):
        """Apply a pattern on the current position and check
        if it matches. Doesn't touch pos."""
        return self.check(pattern) is not None

    def scan(self, pattern):
        """
        Scan the text for the given pattern and update pos/match
        and related fields. The return value is a boolean that
        indicates if the pattern matched. The matched value is
        stored on the instance as ``match``, the last value is
        stored as ``last``. ``start_pos`` is the position of the
        pointer before the pattern was matched, ``pos`` is the
        end position.
        """
        if self.eos:
            raise EndOfText()
        if pattern not in self._re_cache:
            self._re_cache[pattern] = re.compile(pattern, self.flags)
        self.last = self.match
        m = self._re_cache[pattern].match(self.data, self.pos)
        if m is None:
            return False
        self.start_pos = m.start()
        self.pos = m.end()
        self.match = m
        return True

    def get_char(self):
        """Scan exactly one char."""
        self.scan('.')

    def __repr__(self):
        return '<%s %d/%d>' % (
            self.__class__.__name__,
            self.pos,
            self.data_length
        )
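In use, `check` peeks at the current position without consuming input, while `scan` consumes by advancing `pos` past the match. A short demonstration against a condensed copy of the class above (trimmed to `eos`/`check`/`scan`, written for modern Python so it runs standalone):

```python
import re

class Scanner(object):
    """Condensed copy of the regex scanner above (eos/check/scan only)."""
    def __init__(self, text, flags=0):
        self.data = text
        self.data_length = len(text)
        self.pos = 0
        self.flags = flags
        self.match = None
        self._re_cache = {}

    @property
    def eos(self):
        return self.pos >= self.data_length

    def check(self, pattern):
        # lookahead: match at pos without consuming
        if pattern not in self._re_cache:
            self._re_cache[pattern] = re.compile(pattern, self.flags)
        return self._re_cache[pattern].match(self.data, self.pos)

    def scan(self, pattern):
        # consume: on success, advance pos past the match
        m = self.check(pattern)
        if m is None:
            return False
        self.pos = m.end()
        self.match = m
        return True

s = Scanner(r'\section{Intro}')
s.scan(r'\\([a-zA-Z]+)')             # consumes '\section'
command = s.match.group(1)
peeked = s.check(r'\{') is not None  # pos unchanged by check
s.scan(r'\{([^}]*)\}')               # consumes '{Intro}'
argument = s.match.group(1)
```

Caching compiled patterns in `_re_cache` matters here because the tokenizer below applies the same handful of patterns at every position of the input.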
124
converter/tokenizer.py
Normal file
@@ -0,0 +1,124 @@

# -*- coding: utf-8 -*-
"""
    Python documentation LaTeX file tokenizer
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    For more documentation, look into the ``restwriter.py`` file.

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""

import re

from .scanner import Scanner


class Tokenizer(Scanner):
    """ Lex a Python doc LaTeX document. """

    specials = {
        '{': 'bgroup',
        '}': 'egroup',
        '[': 'boptional',
        ']': 'eoptional',
        '~': 'tilde',
        '$': 'mathmode',
    }

    @property
    def mtext(self):
        return self.match.group()

    def tokenize(self):
        return TokenStream(self._tokenize())

    def _tokenize(self):
        lineno = 1
        while not self.eos:
            if self.scan(r'\\verb([^a-zA-Z])(.*?)(\1)'):
                # special-case \verb here
                yield lineno, 'command', 'verb', '\\verb'
                yield lineno, 'text', self.match.group(1), self.match.group(1)
                yield lineno, 'text', self.match.group(2), self.match.group(2)
                yield lineno, 'text', self.match.group(3), self.match.group(3)
            elif self.scan(r'\\([a-zA-Z]+\*?)[ \t]*'):
                yield lineno, 'command', self.match.group(1), self.mtext
            elif self.scan(r'\\.'):
                yield lineno, 'command', self.mtext[1], self.mtext
            elif self.scan(r'\\\n'):
                yield lineno, 'text', self.mtext, self.mtext
                lineno += 1
            elif self.scan(r'%(.*)\n[ \t]*'):
                yield lineno, 'comment', self.match.group(1), self.mtext
                lineno += 1
            elif self.scan(r'[{}\[\]~$]'):
                yield lineno, self.specials[self.mtext], self.mtext, self.mtext
            elif self.scan(r'(\n[ \t]*){2,}'):
                lines = self.mtext.count('\n')
                yield lineno, 'parasep', '\n' * lines, self.mtext
                lineno += lines
            elif self.scan(r'\n[ \t]*'):
                yield lineno, 'text', ' ', self.mtext
                lineno += 1
            elif self.scan(r'[^\\%}{\[\]~\n]+'):
                yield lineno, 'text', self.mtext, self.mtext
            else:
                raise RuntimeError('unexpected text on line %d: %r' %
                                   (lineno, self.data[self.pos:self.pos+100]))
|
||||
|
||||
|
||||
class TokenStream(object):
|
||||
"""
|
||||
A token stream works like a normal generator just that
|
||||
it supports peeking and pushing tokens back to the stream.
|
||||
"""
|
||||
|
||||
def __init__(self, generator):
|
||||
self._generator = generator
|
||||
self._pushed = []
|
||||
self.last = (1, 'initial', '')
|
||||
|
||||
def __iter__(self):
|
||||
return self
|
||||
|
||||
def __nonzero__(self):
|
||||
""" Are we at the end of the tokenstream? """
|
||||
if self._pushed:
|
||||
return True
|
||||
try:
|
||||
self.push(self.next())
|
||||
except StopIteration:
|
||||
return False
|
||||
return True
|
||||
|
||||
def pop(self):
|
||||
""" Return the next token from the stream. """
|
||||
if self._pushed:
|
||||
rv = self._pushed.pop()
|
||||
else:
|
||||
rv = self._generator.next()
|
||||
self.last = rv
|
||||
return rv
|
||||
|
||||
next = pop
|
||||
|
||||
def popmany(self, num=1):
|
||||
""" Pop a list of tokens. """
|
||||
return [self.next() for i in range(num)]
|
||||
|
||||
def peek(self):
|
||||
""" Pop and push a token, return it. """
|
||||
token = self.next()
|
||||
self.push(token)
|
||||
return token
|
||||
|
||||
def peekmany(self, num=1):
|
||||
""" Pop and push a list of tokens. """
|
||||
tokens = self.popmany(num)
|
||||
for tok in tokens:
|
||||
self.push(tok)
|
||||
return tokens
|
||||
|
||||
def push(self, item):
|
||||
""" Push a token back to the stream. """
|
||||
self._pushed.append(item)
|
||||
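The pushback mechanism in `TokenStream` is a plain list used as a stack in front of the generator; `peek()` is just pop-then-push. A minimal sketch of the same idea (class and names are illustrative):

```python
# A pushback stream like TokenStream in miniature: pops come from the
# pushback stack first, then from the underlying iterator.
class PushbackStream(object):
    def __init__(self, iterable):
        self._it = iter(iterable)
        self._pushed = []

    def pop(self):
        if self._pushed:
            return self._pushed.pop()
        return next(self._it)

    def push(self, item):
        self._pushed.append(item)

    def peek(self):
        # look at the next item without consuming it
        item = self.pop()
        self.push(item)
        return item

s = PushbackStream([1, 2, 3])
s.peek()   # 1, still in the stream
s.pop()    # 1
```

Because the pushback buffer is a stack, multi-token pushbacks must happen in reverse order if pop order is to be preserved; that is why `peekmany()` pushes its tokens back with `reversed()`.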
96
converter/util.py
Normal file
@@ -0,0 +1,96 @@
# -*- coding: utf-8 -*-
"""
    Python documentation conversion utils
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""

import re

from docutils.nodes import make_id

from .docnodes import TextNode, EmptyNode, NodeList


def umlaut(cmd, c):
    try:
        if cmd == '"':
            return {'o': u'ö',
                    'a': u'ä',
                    'u': u'ü',
                    'i': u'ï',
                    'O': u'Ö',
                    'A': u'Ä',
                    'U': u'Ü'}[c]
        elif cmd == "'":
            return {'a': u'á',
                    'e': u'é'}[c]
        elif cmd == '~':
            return {'n': u'ñ'}[c]
        elif cmd == 'c':
            return {'c': u'ç'}[c]
        elif cmd == '`':
            return {'o': u'ò'}[c]
        else:
            from .latexparser import ParserError
            raise ParserError('invalid umlaut \\%s' % cmd, 0)
    except KeyError:
        from .latexparser import ParserError
        raise ParserError('unsupported umlaut \\%s%s' % (cmd, c), 0)


def fixup_text(text):
    return text.replace('``', '"').replace("''", '"').replace('`', "'").\
           replace('|', '\\|').replace('*', '\\*')


def empty(node):
    return type(node) is EmptyNode


def text(node):
    """ Return the text for a TextNode or a NodeList of TextNodes,
    or raise an error. """
    if isinstance(node, TextNode):
        return node.text
    elif isinstance(node, NodeList):
        restext = ''
        for subnode in node:
            restext += text(subnode)
        return restext
    from .restwriter import WriterError
    raise WriterError('text() failed for %r' % node)


markup_re = re.compile(r'(:[a-zA-Z0-9_-]+:)?`(.*?)`')

def my_make_id(name):
    """ Like make_id(), but strip roles first. """
    return make_id(markup_re.sub(r'\2', name))


alphanum = u'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
wordchars_s = alphanum + u'_.-'
wordchars_e = alphanum + u'+`(-'
bad_markup_re = re.compile(r'(:[a-zA-Z0-9_-]+:)?(`{1,2})[ ]*(.+?)[ ]*(\2)')
quoted_code_re = re.compile(r'\\`(``.+?``)\'')


def repair_bad_inline_markup(text):
    # remove quoting from `\code{x}'
    xtext = quoted_code_re.sub(r'\1', text)

    # special: the literal backslash
    xtext = xtext.replace('``\\``', '\x03')
    # special: literal backquotes
    xtext = xtext.replace('``````', '\x02')

    ntext = []
    lasti = 0
    l = len(xtext)
    for m in bad_markup_re.finditer(xtext):
        ntext.append(xtext[lasti:m.start()])
        s, e = m.start(), m.end()
        if s != 0 and xtext[s-1:s] in wordchars_s:
            ntext.append('\\ ')
        ntext.append((m.group(1) or '') + m.group(2) + m.group(3) + m.group(4))
        if e != l and xtext[e:e+1] in wordchars_e:
            ntext.append('\\ ')
        lasti = m.end()
    ntext.append(xtext[lasti:])
    return ''.join(ntext).replace('\x02', '``````').replace('\x03', '``\\``')
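The role-stripping step of `my_make_id()` can be seen in isolation: the regex rewrites `:role:`target`` (or plain `` `target` ``) to just `target` before an anchor ID is generated. A docutils-free sketch of that step (the helper name is hypothetical):

```python
# Strip reST roles from a heading before turning it into an ID, as
# my_make_id() does before calling docutils' make_id().
import re

markup_re = re.compile(r'(:[a-zA-Z0-9_-]+:)?`(.*?)`')

def strip_roles(name):
    # keep only group 2, the role's target text
    return markup_re.sub(r'\2', name)

strip_roles(':mod:`os.path` helpers')   # -> 'os.path helpers'
```

Without this pass, the colons and backquotes of the role markup would leak into the generated HTML anchor.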
122
etc/inst.diff
Normal file
@@ -0,0 +1,122 @@
Index: inst/inst.tex
===================================================================
--- inst/inst.tex	(Revision 54633)
+++ inst/inst.tex	(working copy)
@@ -324,32 +324,6 @@
 section~\ref{custom-install} on custom installations.
 
 
-% This rather nasty macro is used to generate the tables that describe
-% each installation scheme. It's nasty because it takes two arguments
-% for each "slot" in an installation scheme, there will soon be more
-% than five of these slots, and TeX has a limit of 10 arguments to a
-% macro. Uh-oh.
-
-\newcommand{\installscheme}[8]
-  {\begin{tableiii}{l|l|l}{textrm}
-    {Type of file}
-    {Installation Directory}
-    {Override option}
-  \lineiii{pure module distribution}
-          {\filevar{#1}\filenq{#2}}
-          {\longprogramopt{install-purelib}}
-  \lineiii{non-pure module distribution}
-          {\filevar{#3}\filenq{#4}}
-          {\longprogramopt{install-platlib}}
-  \lineiii{scripts}
-          {\filevar{#5}\filenq{#6}}
-          {\longprogramopt{install-scripts}}
-  \lineiii{data}
-          {\filevar{#7}\filenq{#8}}
-          {\longprogramopt{install-data}}
-  \end{tableiii}}
-
-
 \section{Alternate Installation}
 \label{alt-install}
 
@@ -399,10 +373,23 @@
 The \longprogramopt{home} option defines the installation base
 directory. Files are installed to the following directories under the
 installation base as follows:
-\installscheme{home}{/lib/python}
-              {home}{/lib/python}
-              {home}{/bin}
-              {home}{/share}
+\begin{tableiii}{l|l|l}{textrm}
+  {Type of file}
+  {Installation Directory}
+  {Override option}
+  \lineiii{pure module distribution}
+          {\filevar{home}\filenq{/lib/python}}
+          {\longprogramopt{install-purelib}}
+  \lineiii{non-pure module distribution}
+          {\filevar{home}\filenq{/lib/python}}
+          {\longprogramopt{install-platlib}}
+  \lineiii{scripts}
+          {\filevar{home}\filenq{/bin}}
+          {\longprogramopt{install-scripts}}
+  \lineiii{data}
+          {\filevar{home}\filenq{/share}}
+          {\longprogramopt{install-data}}
+\end{tableiii}
 
 
 \versionchanged[The \longprogramopt{home} option used to be supported
@@ -452,10 +439,23 @@
 etc.) If \longprogramopt{exec-prefix} is not supplied, it defaults to
 \longprogramopt{prefix}. Files are installed as follows:
 
-\installscheme{prefix}{/lib/python2.\filevar{X}/site-packages}
-              {exec-prefix}{/lib/python2.\filevar{X}/site-packages}
-              {prefix}{/bin}
-              {prefix}{/share}
+\begin{tableiii}{l|l|l}{textrm}
+  {Type of file}
+  {Installation Directory}
+  {Override option}
+  \lineiii{pure module distribution}
+          {\filevar{prefix}\filenq{/lib/python2.\filevar{X}/site-packages}}
+          {\longprogramopt{install-purelib}}
+  \lineiii{non-pure module distribution}
+          {\filevar{exec-prefix}\filenq{/lib/python2.\filevar{X}/site-packages}}
+          {\longprogramopt{install-platlib}}
+  \lineiii{scripts}
+          {\filevar{prefix}\filenq{/bin}}
+          {\longprogramopt{install-scripts}}
+  \lineiii{data}
+          {\filevar{prefix}\filenq{/share}}
+          {\longprogramopt{install-data}}
+\end{tableiii}
 
 There is no requirement that \longprogramopt{prefix} or
 \longprogramopt{exec-prefix} actually point to an alternate Python
@@ -502,11 +502,24 @@
 The installation base is defined by the \longprogramopt{prefix} option;
 the \longprogramopt{exec-prefix} option is not supported under Windows.
 Files are installed as follows:
-\installscheme{prefix}{}
-              {prefix}{}
-              {prefix}{\textbackslash{}Scripts}
-              {prefix}{\textbackslash{}Data}
 
+\begin{tableiii}{l|l|l}{textrm}
+  {Type of file}
+  {Installation Directory}
+  {Override option}
+  \lineiii{pure module distribution}
+          {\filevar{prefix}\filenq{}}
+          {\longprogramopt{install-purelib}}
+  \lineiii{non-pure module distribution}
+          {\filevar{prefix}\filenq{}}
+          {\longprogramopt{install-platlib}}
+  \lineiii{scripts}
+          {\filevar{prefix}\filenq{\textbackslash{}Scripts}}
+          {\longprogramopt{install-scripts}}
+  \lineiii{data}
+          {\filevar{prefix}\filenq{\textbackslash{}Data}}
+          {\longprogramopt{install-data}}
+\end{tableiii}
 
 
 \section{Custom Installation}
20
sphinx-build.py
Normal file
@@ -0,0 +1,20 @@
# -*- coding: utf-8 -*-
"""
    Sphinx - Python documentation toolchain
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""

import sys

if __name__ == '__main__':
    from sphinx import main
    try:
        sys.exit(main(sys.argv))
    except Exception:
        import traceback
        traceback.print_exc()
        import pdb
        pdb.post_mortem(sys.exc_info()[2])
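The entry point above follows the run-then-post-mortem pattern: call `main()`, and on any unhandled exception print the traceback and hand the same traceback object to the debugger. The traceback-capturing half can be sketched without `pdb` (the `run` wrapper is illustrative):

```python
# Run a main() callable and, on failure, print the traceback and keep
# the traceback object (which pdb.post_mortem() would then inspect).
import sys
import traceback

def run(main, argv):
    try:
        return ('ok', main(argv))
    except Exception:
        traceback.print_exc()
        # inside the handler, exc_info() still refers to this exception
        return ('error', sys.exc_info()[2])

run(lambda argv: 42, [])        # ('ok', 42)
run(lambda argv: 1 // 0, [])    # ('error', <traceback object>)
```

Keeping the traceback object (rather than re-raising) is what allows the caller to start an interactive post-mortem session at the exact frame that failed.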
59
sphinx-web.py
Normal file
@@ -0,0 +1,59 @@
# -*- coding: utf-8 -*-
"""
    Sphinx - Python documentation webserver
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    :copyright: 2007 by Armin Ronacher, Georg Brandl.
    :license: Python license.
"""
import os
import sys
import getopt

import sphinx
from sphinx.web.application import setup_app
from sphinx.web.serve import run_simple

try:
    from werkzeug.debug import DebuggedApplication
except ImportError:
    DebuggedApplication = lambda x, y: x


def main(argv):
    opts, args = getopt.getopt(argv[1:], "dhf:")
    opts = dict(opts)
    if len(args) != 1 or '-h' in opts:
        print 'usage: %s [-d] [-f cfg.py] <doc_root>' % argv[0]
        print '  -d: debug mode, use werkzeug debugger if installed'
        print '  -f: use "cfg.py" file instead of doc_root/webconf.py'
        return 2

    conffile = opts.get('-f', os.path.join(args[0], 'webconf.py'))
    config = {}
    execfile(conffile, config)

    port = config.get('listen_port', 3000)
    hostname = config.get('listen_addr', 'localhost')
    debug = ('-d' in opts) or (hostname == 'localhost')

    config['data_root_path'] = args[0]
    config['debug'] = debug

    def make_app():
        app = setup_app(config, check_superuser=True)
        if debug:
            app = DebuggedApplication(app, True)
        return app

    if os.environ.get('RUN_MAIN') != 'true':
        print '* Sphinx %s - Python documentation web application' % \
              sphinx.__version__.replace('$', '').replace('Revision:', 'rev.')
        if debug:
            print '* Running in debug mode'

    run_simple(hostname, port, make_app, use_reloader=debug)


if __name__ == '__main__':
    sys.exit(main(sys.argv))
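The `webconf.py` handling above uses the execute-a-Python-file-as-config pattern: `execfile()` runs the file with a plain dict as its namespace, and the dict's entries become the settings. The same idea can be sketched with `exec()` so it also runs on Python 3 (function name and config keys are illustrative):

```python
# Load a Python source file as a configuration dict, the way
# sphinx-web.py loads webconf.py with execfile().
import os
import tempfile

def load_config(path):
    config = {}
    with open(path) as f:
        exec(compile(f.read(), path, 'exec'), config)
    # exec() injects __builtins__ into the namespace; drop it
    config.pop('__builtins__', None)
    return config

# usage with a throwaway config file
fd, cfgpath = tempfile.mkstemp(suffix='.py')
os.write(fd, b"listen_port = 3000\nlisten_addr = 'localhost'\n")
os.close(fd)
cfg = load_config(cfgpath)
os.remove(cfgpath)
```

The upside of this pattern is that config files get full Python (computed values, conditionals); the downside is that loading a config file executes arbitrary code, so it is only suitable for trusted files.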
127
sphinx/__init__.py
Normal file
@@ -0,0 +1,127 @@
# -*- coding: utf-8 -*-
"""
    Sphinx
    ~~~~~~

    The Python documentation toolchain.

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""

import sys
import getopt
from os import path

from .builder import builders
from .console import nocolor

__version__ = '$Revision: 5369 $'


def usage(argv, msg=None):
    if msg:
        print >>sys.stderr, msg
        print >>sys.stderr
    print >>sys.stderr, """\
usage: %s [options] sourcedir outdir [filenames...]
options: -b <builder> -- builder to use (one of %s)
         -a -- write all files; default is to only write new and changed files
         -O <option[=value]> -- give option to the builder (-O help for list)
         -D <setting=value> -- override a setting in sourcedir/conf.py
         -N -- do not do colored output
modes:
* without -a and without filenames, write new and changed files.
* with -a, write all files.
* with filenames, write these.""" % (argv[0], ', '.join(builders))


def main(argv):
    try:
        opts, args = getopt.getopt(argv[1:], 'ab:O:D:N')
        srcdirname = path.abspath(args[0])
        if not path.isdir(srcdirname):
            print >>sys.stderr, 'Error: Cannot find source directory.'
            return 1
        if not path.isfile(path.join(srcdirname, 'conf.py')):
            print >>sys.stderr, 'Error: Source directory doesn\'t contain conf.py file.'
            return 1
        outdirname = path.abspath(args[1])
        if not path.isdir(outdirname):
            print >>sys.stderr, 'Error: Cannot find output directory.'
            return 1
    except (IndexError, getopt.error):
        usage(argv)
        return 1

    filenames = args[2:]
    err = 0
    for filename in filenames:
        if not path.isfile(filename):
            print >>sys.stderr, 'Cannot find file %r.' % filename
            err = 1
    if err:
        return 1

    builder = all_files = None
    opt_help = False
    options = {}
    confoverrides = {}
    for opt, val in opts:
        if opt == '-b':
            if val not in builders:
                usage(argv, 'Invalid builder value specified.')
                return 1
            builder = val
        elif opt == '-a':
            if filenames:
                usage(argv, 'Cannot combine -a option and filenames.')
                return 1
            all_files = True
        elif opt == '-O':
            if val == 'help':
                opt_help = True
                continue
            if '=' in val:
                key, val = val.split('=', 1)
                try:
                    val = int(val)
                except ValueError:
                    pass
            else:
                key, val = val, True
            options[key] = val
        elif opt == '-D':
            key, val = val.split('=', 1)
            try:
                val = int(val)
            except ValueError:
                pass
            confoverrides[key] = val
        elif opt == '-N':
            nocolor()

    if builder is None:
        print 'No builder selected, using default: html'
        builder = 'html'

    builderobj = builders[builder]

    if opt_help:
        print 'Options recognized by the %s builder:' % builder
        for optname, description in builderobj.option_spec.iteritems():
            print ' * %s: %s' % (optname, description)
        return 0

    builderobj = builderobj(srcdirname, outdirname, options,
                            status_stream=sys.stdout,
                            warning_stream=sys.stderr,
                            confoverrides=confoverrides)
    if all_files:
        builderobj.build_all()
    elif filenames:
        builderobj.build_specific(filenames)
    else:
        builderobj.build_update()


if __name__ == '__main__':
    sys.exit(main(sys.argv))
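The option handling in `main()` above is plain `getopt`: each `(flag, value)` pair is dispatched in a loop and `key=value` strings are folded into override dicts. The shape of that loop in miniature (flags and keys are illustrative):

```python
# The getopt dispatch pattern from sphinx's main(): iterate over
# (flag, value) pairs and fold -D key=value options into a dict.
import getopt

def parse(argv):
    opts, args = getopt.getopt(argv, 'ab:D:')
    builder = None
    all_files = False
    overrides = {}
    for opt, val in opts:
        if opt == '-a':
            all_files = True
        elif opt == '-b':
            builder = val
        elif opt == '-D':
            # split only on the first '=' so values may contain '='
            key, val = val.split('=', 1)
            overrides[key] = val
    return builder, all_files, overrides, args

parse(['-b', 'html', '-D', 'release=2.6', 'src', 'out'])
```

Note that `getopt` stops at the first non-option argument, so positional arguments (`sourcedir outdir [filenames...]`) come back untouched in `args`.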
18
sphinx/_jinja.py
Normal file
@@ -0,0 +1,18 @@
# -*- coding: utf-8 -*-
"""
    sphinx._jinja
    ~~~~~~~~~~~~~

    Jinja glue.

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""
from __future__ import absolute_import

import sys
from os import path

sys.path.insert(0, path.dirname(__file__))

from jinja import Environment, FileSystemLoader
58
sphinx/addnodes.py
Normal file
@@ -0,0 +1,58 @@
# -*- coding: utf-8 -*-
"""
    sphinx.addnodes
    ~~~~~~~~~~~~~~~

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""

from docutils import nodes

# index markup
class index(nodes.Invisible, nodes.Inline, nodes.TextElement): pass

# description units (classdesc, funcdesc etc.)
class desc(nodes.Admonition, nodes.Element): pass
class desc_content(nodes.General, nodes.Element): pass
class desc_signature(nodes.Part, nodes.Inline, nodes.TextElement): pass
class desc_classname(nodes.Part, nodes.Inline, nodes.TextElement): pass
class desc_name(nodes.Part, nodes.Inline, nodes.TextElement): pass
class desc_parameterlist(nodes.Part, nodes.Inline, nodes.TextElement): pass
class desc_parameter(nodes.Part, nodes.Inline, nodes.TextElement): pass
class desc_optional(nodes.Part, nodes.Inline, nodes.TextElement): pass

# refcount annotation
class refcount(nodes.emphasis): pass

# \versionadded, \versionchanged, \deprecated
class versionmodified(nodes.Admonition, nodes.TextElement): pass

# seealso
class seealso(nodes.Admonition, nodes.Element): pass

# productionlist
class productionlist(nodes.Admonition, nodes.Element): pass
class production(nodes.Part, nodes.Inline, nodes.TextElement): pass

# toc tree
class toctree(nodes.General, nodes.Element): pass

# centered
class centered(nodes.Part, nodes.Element): pass

# pending xref
class pending_xref(nodes.Element): pass

# compact paragraph -- never makes a <p>
class compact_paragraph(nodes.paragraph): pass

# sets the highlighting language for literal blocks
class highlightlang(nodes.Element): pass

# make them known to docutils; this is needed because the HTML writer
# will choke at some point if these are not added
nodes._add_node_class_names("""index desc desc_content desc_signature
desc_classname desc_name desc_parameterlist desc_parameter desc_optional
centered versionmodified seealso productionlist production toctree
pending_xref compact_paragraph highlightlang""".split())
608
sphinx/builder.py
Normal file
@@ -0,0 +1,608 @@
# -*- coding: utf-8 -*-
"""
    sphinx.builder
    ~~~~~~~~~~~~~~

    Builder classes for different output formats.

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""
from __future__ import with_statement

import os
import sys
import time
import types
import codecs
import shutil
import cPickle as pickle
import cStringIO as StringIO
from os import path

from docutils.io import StringOutput, DocTreeInput
from docutils.core import publish_parts
from docutils.utils import new_document
from docutils.readers import doctree
from docutils.frontend import OptionParser

from .util import (get_matching_files, attrdict, status_iterator,
                   ensuredir, get_category, relative_uri)
from .writer import HTMLWriter
from .console import bold, purple, green
from .htmlhelp import build_hhx
from .environment import BuildEnvironment
from .highlighting import pygments, get_stylesheet

# side effect: registers roles and directives
from . import roles
from . import directives

ENV_PICKLE_FILENAME = 'environment.pickle'
LAST_BUILD_FILENAME = 'last_build'

# Helper objects

class relpath_to(object):
    def __init__(self, builder, filename):
        self.baseuri = builder.get_target_uri(filename)
        self.builder = builder
    def __call__(self, otheruri, resource=False):
        if not resource:
            otheruri = self.builder.get_target_uri(otheruri)
        return relative_uri(self.baseuri, otheruri)


class collect_env_warnings(object):
    def __init__(self, builder):
        self.builder = builder
    def __enter__(self):
        self.stream = StringIO.StringIO()
        self.builder.env.set_warning_stream(self.stream)
    def __exit__(self, *args):
        self.builder.env.set_warning_stream(self.builder.warning_stream)
        warnings = self.stream.getvalue()
        if warnings:
            print >>self.builder.warning_stream, warnings
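`collect_env_warnings` temporarily redirects the environment's warning stream into an in-memory buffer and flushes anything captured to the real stream on exit. The same pattern in isolation, written with `io.StringIO` so it is also Python-3 friendly (the class name and wiring are stand-ins for the builder/env objects):

```python
# Capture writes into a buffer for the duration of a with-block, then
# forward anything captured to the real stream on exit.
import io

class collect_warnings(object):
    def __init__(self, real_stream):
        self.real_stream = real_stream

    def __enter__(self):
        self.buffer = io.StringIO()
        return self.buffer          # callers write warnings here

    def __exit__(self, *args):
        captured = self.buffer.getvalue()
        if captured:
            self.real_stream.write(captured)

real = io.StringIO()
with collect_warnings(real) as stream:
    stream.write('WARNING: unresolved reference\n')
```

Buffering like this lets the builder print its progress output first and emit all collected warnings in one block afterwards, instead of interleaving them.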
class Builder(object):
    """
    Builds target formats from the reST sources.
    """

    option_spec = {
        'freshenv': 'Don\'t use a pickled environment',
    }

    def __init__(self, srcdirname, outdirname, options, env=None,
                 status_stream=None, warning_stream=None,
                 confoverrides=None):
        self.srcdir = srcdirname
        self.outdir = outdirname
        if not path.isdir(path.join(outdirname, '.doctrees')):
            os.mkdir(path.join(outdirname, '.doctrees'))

        self.options = attrdict(options)
        self.validate_options()

        # probably set in load_env()
        self.env = env

        self.config = {}
        execfile(path.join(srcdirname, 'conf.py'), self.config)
        # remove potentially pickling-problematic values
        del self.config['__builtins__']
        for key, val in self.config.items():
            if isinstance(val, types.ModuleType):
                del self.config[key]
        if confoverrides:
            self.config.update(confoverrides)

        self.status_stream = status_stream or sys.stdout
        self.warning_stream = warning_stream or sys.stderr

        self.init()

    # helper methods

    def validate_options(self):
        for option in self.options:
            if option not in self.option_spec:
                raise ValueError('Got unexpected option %s' % option)
        for option in self.option_spec:
            if option not in self.options:
                self.options[option] = False

    def msg(self, message='', nonl=False, nobold=False):
        if not nobold:
            message = bold(message)
        if nonl:
            print >>self.status_stream, message,
        else:
            print >>self.status_stream, message
        self.status_stream.flush()

    def init(self):
        """Load necessary templates and perform initialization."""
        raise NotImplementedError

    def get_target_uri(self, source_filename):
        """Return the target URI for a source filename."""
        raise NotImplementedError

    def get_relative_uri(self, from_, to):
        """Return a relative URI between two source filenames."""
        return relative_uri(self.get_target_uri(from_),
                            self.get_target_uri(to))

    def get_outdated_files(self):
        """Return a list of output files that are outdated."""
        raise NotImplementedError

    # build methods

    def load_env(self):
        """Set up the build environment: load a pickled environment if one
        exists and a fresh one wasn't requested, else create a new one."""
        if self.env:
            return
        if not self.options.freshenv:
            try:
                self.msg('trying to load pickled env...', nonl=True)
                self.env = BuildEnvironment.frompickle(
                    path.join(self.outdir, ENV_PICKLE_FILENAME))
                self.msg('done', nobold=True)
            except Exception, err:
                self.msg('failed: %s' % err, nobold=True)
                self.env = BuildEnvironment(self.srcdir,
                                            path.join(self.outdir, '.doctrees'))
        else:
            self.env = BuildEnvironment(self.srcdir,
                                        path.join(self.outdir, '.doctrees'))

    def build_all(self):
        """Build all source files."""
        self.load_env()
        self.build(None, summary='all source files')

    def build_specific(self, source_filenames):
        """Only rebuild as much as needed for changes in the source_filenames."""
        # bring the filenames to the canonical format, that is,
        # relative to the source directory.
        dirlen = len(self.srcdir) + 1
        to_write = [path.abspath(filename)[dirlen:] for filename in source_filenames]
        self.load_env()
        self.build(to_write,
                   summary='%d source files given on command line' % len(to_write))

    def build_update(self):
        """Only rebuild files changed or added since last build."""
        self.load_env()
        to_build = list(self.get_outdated_files())
        if not to_build:
            self.msg('no files are out of date, exiting.')
            return
        self.build(to_build,
                   summary='%d source files that are out of date' % len(to_build))

    def build(self, filenames, summary=None):
        if summary:
            self.msg('building [%s]:' % self.name, nonl=1)
            self.msg(summary, nobold=1)

        # while reading, collect all warnings from docutils
        with collect_env_warnings(self):
            self.msg('reading, updating environment:', nonl=1)
            iterator = self.env.update(self.config)
            self.msg(iterator.next(), nobold=1)
            for filename in iterator:
                self.msg(purple(filename), nonl=1, nobold=1)
            self.msg()

        # save the environment
        self.msg('pickling the env...', nonl=True)
        self.env.topickle(path.join(self.outdir, ENV_PICKLE_FILENAME))
        self.msg('done', nobold=True)

        # global actions
        self.msg('checking consistency...')
        self.env.check_consistency()
        self.msg('creating index...')
        self.env.create_index(self)

        self.prepare_writing()

        if filenames:
            # add all TOC files that may have changed
            filenames_set = set(filenames)
            for filename in filenames:
                for tocfilename in self.env.files_to_rebuild.get(filename, []):
                    filenames_set.add(tocfilename)
            filenames_set.add('contents.rst')
        else:
            # build all
            filenames_set = set(self.env.all_files)

        # write target files
        with collect_env_warnings(self):
            self.msg('writing output...')
            for filename in status_iterator(sorted(filenames_set), green,
                                            stream=self.status_stream):
                doctree = self.env.get_and_resolve_doctree(filename, self)
                self.write_file(filename, doctree)

        # finish (write style files etc.)
        self.msg('finishing...')
        self.finish()
        self.msg('done!')

    def prepare_writing(self):
        raise NotImplementedError

    def write_file(self, filename, doctree):
        raise NotImplementedError

    def finish(self):
        raise NotImplementedError

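`relative_uri` itself is imported from `.util`, which is not part of this chunk, so the following is an assumption about what it plausibly computes: the relative link from one output page to another, by stripping the common leading path components. A stdlib-only sketch:

```python
# Hypothetical sketch of relative_uri(base, to): the relative link
# from the page at `base` to the page at `to`. The real implementation
# lives in sphinx/util.py and is not shown in this commit chunk.
def relative_uri(base, to):
    b = base.split('/')[:-1]   # directory part of the base page
    t = to.split('/')
    # strip path components shared by both URIs
    while b and t and b[0] == t[0]:
        b.pop(0)
        t.pop(0)
    return '../' * len(b) + '/'.join(t)

relative_uri('lib/os.html', 'lib/sys.html')   # 'sys.html'
relative_uri('lib/os.html', 'genindex.html')  # '../genindex.html'
```

This is why `relpath_to` is handed to the templates as `pathto`: every page can link to any other page (or static resource) without the builder knowing where the output tree will ultimately be served from.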
class StandaloneHTMLBuilder(Builder):
    """
    Builds standalone HTML docs.
    """
    name = 'html'

    # copy, so that the base class's option_spec is not mutated
    option_spec = Builder.option_spec.copy()
    option_spec.update({
        'nostyle': 'Don\'t copy style and script files',
        'nosearchindex': 'Don\'t create a JSON search index for offline search',
    })

    copysource = True

    def init(self):
        """Load templates."""
        # lazily import this, maybe other builders won't need it
        from ._jinja import Environment, FileSystemLoader

        # load templates
        self.templates = {}
        templates_path = path.join(path.dirname(__file__), 'templates')
        jinja_env = Environment(loader=FileSystemLoader(templates_path),
                                # disable traceback, more likely that something in the
                                # application is broken than in the templates
                                friendly_traceback=False)
        for fname in os.listdir(templates_path):
            if fname.endswith('.html'):
                self.templates[fname[:-5]] = jinja_env.get_template(fname)

    def render_partial(self, node):
        """Utility: Render a lone doctree node."""
        doc = new_document('foo')
        doc.append(node)
        return publish_parts(
            doc,
            source_class=DocTreeInput,
            reader=doctree.Reader(),
            writer=HTMLWriter(self.config),
            settings_overrides={'output_encoding': 'unicode'}
        )

    def prepare_writing(self):
        if not self.options.nosearchindex:
            from .search import IndexBuilder
            self.indexer = IndexBuilder()
        else:
            self.indexer = None
        self.docwriter = HTMLWriter(self.config)
        self.docsettings = OptionParser(
            defaults=self.env.settings,
            components=(self.docwriter,)).get_default_values()

        # format the "last updated on" string, only once is enough since it
        # typically doesn't include the time of day
        lufmt = self.config.get('last_updated_format')
        if lufmt:
            self.last_updated = time.strftime(lufmt)
        else:
            self.last_updated = None

        self.globalcontext = dict(
            last_updated = self.last_updated,
            builder = self.name,
            release = self.config['release'],
            parents = [],
            len = len,
            titles = {},
        )

    def write_file(self, filename, doctree):
        destination = StringOutput(encoding='utf-8')
        doctree.settings = self.docsettings

        output = self.docwriter.write(doctree, destination)
        self.docwriter.assemble_parts()

        prev = next = None
        parents = []
        related = self.env.toctree_relations.get(filename)
        if related:
            prev = {'link': self.get_relative_uri(filename, related[1]),
                    'title': self.render_partial(self.env.titles[related[1]])['title']}
            next = {'link': self.get_relative_uri(filename, related[2]),
                    'title': self.render_partial(self.env.titles[related[2]])['title']}
        while related:
            parents.append(
                {'link': self.get_relative_uri(filename, related[0]),
                 'title': self.render_partial(self.env.titles[related[0]])['title']})
            related = self.env.toctree_relations.get(related[0])
        if parents:
            parents.pop()  # remove link to "contents.rst"; we have a generic
                           # "back to index" link already
        parents.reverse()

        title = self.env.titles.get(filename)
        if title:
            title = self.render_partial(title)['title']
        else:
            title = ''
        self.globalcontext['titles'][filename] = title
        sourcename = filename[:-4] + '.txt'
        context = dict(
            title = title,
            sourcename = sourcename,
            pathto = relpath_to(self, self.get_target_uri(filename)),
            body = self.docwriter.parts['fragment'],
            toc = self.render_partial(self.env.get_toc_for(filename))['fragment'],
            # only display a TOC if there's more than one item to show
            display_toc = (self.env.toc_num_entries[filename] > 1),
            parents = parents,
            prev = prev,
            next = next,
        )

        self.index_file(filename, doctree, title)
        self.handle_file(filename, context)

    def finish(self):
        self.msg('writing additional files...')

        # the global general index

        # the total count of lines for each index letter, used to distribute
        # the entries into two columns
        indexcounts = []
        for key, entries in self.env.index:
            indexcounts.append(sum(1 + len(subitems) for _, (_, subitems) in entries))

        genindexcontext = dict(
            genindexentries = self.env.index,
            genindexcounts = indexcounts,
            current_page_name = 'genindex',
            pathto = relpath_to(self, self.get_target_uri('genindex.rst')),
        )
        self.handle_file('genindex.rst', genindexcontext, 'genindex')

        # the global module index

        # the sorted list of all modules, for the global module index
        modules = sorted(((mn, (self.get_relative_uri('modindex.rst', fn) +
                                '#module-' + mn, sy, pl))
                          for (mn, (fn, sy, pl)) in self.env.modules.iteritems()),
                         key=lambda x: x[0].lower())
        # collect all platforms
        platforms = set()
        # sort out collapsible modules
        modindexentries = []
        pmn = ''
        cg = 0   # collapse group
        fl = ''  # first letter
        for mn, (fn, sy, pl) in modules:
            pl = pl.split(', ') if pl else []
            platforms.update(pl)
            if fl != mn[0].lower() and mn[0] != '_':
                modindexentries.append(['', False, 0, False, mn[0].upper(), '', []])
            tn = mn.partition('.')[0]
            if tn != mn:
                # submodule
                if pmn == tn:
                    # first submodule - make parent collapsible
                    modindexentries[-1][1] = True
                elif not pmn.startswith(tn):
                    # submodule without parent in list, add dummy entry
|
||||
cg += 1
|
||||
modindexentries.append([tn, True, cg, False, '', '', []])
|
||||
else:
|
||||
cg += 1
|
||||
modindexentries.append([mn, False, cg, (tn != mn), fn, sy, pl])
|
||||
pmn = mn
|
||||
fl = mn[0].lower()
|
||||
platforms = sorted(platforms)
|
||||
|
||||
modindexcontext = dict(
|
||||
modindexentries = modindexentries,
|
||||
platforms = platforms,
|
||||
current_page_name = 'modindex',
|
||||
pathto = relpath_to(self, self.get_target_uri('modindex.rst')),
|
||||
)
|
||||
self.handle_file('modindex.rst', modindexcontext, 'modindex')
|
||||
|
||||
# the index page
|
||||
indexcontext = dict(
|
||||
pathto = relpath_to(self, self.get_target_uri('index.rst')),
|
||||
current_page_name = 'index',
|
||||
)
|
||||
self.handle_file('index.rst', indexcontext, 'index')
|
||||
|
||||
# the search page
|
||||
searchcontext = dict(
|
||||
pathto = relpath_to(self, self.get_target_uri('search.rst')),
|
||||
current_page_name = 'search',
|
||||
)
|
||||
self.handle_file('search.rst', searchcontext, 'search')
|
||||
|
||||
if not self.options.nostyle:
|
||||
self.msg('copying style files...')
|
||||
# copy style files
|
||||
styledirname = path.join(path.dirname(__file__), 'style')
|
||||
ensuredir(path.join(self.outdir, 'style'))
|
||||
for filename in os.listdir(styledirname):
|
||||
if not filename.startswith('.'):
|
||||
shutil.copyfile(path.join(styledirname, filename),
|
||||
path.join(self.outdir, 'style', filename))
|
||||
# add pygments style file
|
||||
f = open(path.join(self.outdir, 'style', 'pygments.css'), 'w')
|
||||
if pygments:
|
||||
f.write(get_stylesheet())
|
||||
f.close()
|
||||
|
||||
# dump the search index
|
||||
self.handle_finish()
|
||||
|
||||
# --------- these are overwritten by the Web builder
|
||||
|
||||
def get_target_uri(self, source_filename):
|
||||
return source_filename[:-4] + '.html'
|
||||
|
||||
def get_outdated_files(self):
|
||||
for filename in get_matching_files(
|
||||
self.srcdir, '*.rst', exclude=set(self.config.get('unused_files', ()))):
|
||||
try:
|
||||
targetmtime = path.getmtime(path.join(self.outdir,
|
||||
filename[:-4] + '.html'))
|
||||
except:
|
||||
targetmtime = 0
|
||||
if path.getmtime(path.join(self.srcdir, filename)) > targetmtime:
|
||||
yield filename
|
||||
|
||||
def index_file(self, filename, doctree, title):
|
||||
# only index pages with title
|
||||
if self.indexer is not None and title:
|
||||
category = get_category(filename)
|
||||
if category is not None:
|
||||
self.indexer.feed(self.get_target_uri(filename)[:-5], # strip '.html'
|
||||
category, title, doctree)
|
||||
|
||||
def handle_file(self, filename, context, templatename='page'):
|
||||
ctx = self.globalcontext.copy()
|
||||
ctx.update(context)
|
||||
output = self.templates[templatename].render(ctx)
|
||||
outfilename = path.join(self.outdir, filename[:-4] + '.html')
|
||||
ensuredir(path.dirname(outfilename)) # normally different from self.outdir
|
||||
try:
|
||||
with codecs.open(outfilename, 'w', 'utf-8') as fp:
|
||||
fp.write(output)
|
||||
except (IOError, OSError), err:
|
||||
print >>self.warning_stream, "Error writing file %s: %s" % (outfilename, err)
|
||||
if self.copysource and context.get('sourcename'):
|
||||
# copy the source file for the "show source" link
|
||||
shutil.copyfile(path.join(self.srcdir, filename),
|
||||
path.join(self.outdir, context['sourcename']))
|
||||
|
||||
def handle_finish(self):
|
||||
if self.indexer is not None:
|
||||
self.msg('dumping search index...')
|
||||
f = open(path.join(self.outdir, 'searchindex.json'), 'w')
|
||||
self.indexer.dump(f, 'json')
|
||||
f.close()
|
||||
|
||||
|
||||
class WebHTMLBuilder(StandaloneHTMLBuilder):
|
||||
"""
|
||||
Builds HTML docs usable with the web-based doc server.
|
||||
"""
|
||||
name = 'web'
|
||||
|
||||
# doesn't use the standalone specific options
|
||||
option_spec = Builder.option_spec.copy()
|
||||
option_spec.update({
|
||||
'nostyle': 'Don\'t copy style and script files',
|
||||
'nosearchindex': 'Don\'t create a search index for the online search',
|
||||
})
|
||||
|
||||
def init(self):
|
||||
# Nothing to do here.
|
||||
pass
|
||||
|
||||
def get_outdated_files(self):
|
||||
for filename in get_matching_files(
|
||||
self.srcdir, '*.rst', exclude=set(self.config.get('unused_files', ()))):
|
||||
try:
|
||||
targetmtime = path.getmtime(path.join(self.outdir,
|
||||
filename[:-4] + '.fpickle'))
|
||||
except:
|
||||
targetmtime = 0
|
||||
if path.getmtime(path.join(self.srcdir, filename)) > targetmtime:
|
||||
yield filename
|
||||
|
||||
def get_target_uri(self, source_filename):
|
||||
if source_filename == 'index.rst':
|
||||
return ''
|
||||
if source_filename.endswith('/index.rst'):
|
||||
return source_filename[:-9] # up to /
|
||||
return source_filename[:-4] + '/'
|
||||
|
||||
def index_file(self, filename, doctree, title):
|
||||
# only index pages with title and category
|
||||
if self.indexer is not None and title:
|
||||
category = get_category(filename)
|
||||
if category is not None:
|
||||
self.indexer.feed(filename, category, title, doctree)
|
||||
|
||||
def handle_file(self, filename, context, templatename='page'):
|
||||
outfilename = path.join(self.outdir, filename[:-4] + '.fpickle')
|
||||
ensuredir(path.dirname(outfilename))
|
||||
context.pop('pathto', None) # can't be pickled
|
||||
with file(outfilename, 'wb') as fp:
|
||||
pickle.dump(context, fp, 2)
|
||||
|
||||
# if there is a source file, copy the source file for the "show source" link
|
||||
if context.get('sourcename'):
|
||||
source_name = path.join(self.outdir, 'sources', context['sourcename'])
|
||||
ensuredir(path.dirname(source_name))
|
||||
shutil.copyfile(path.join(self.srcdir, filename), source_name)
|
||||
|
||||
def handle_finish(self):
|
||||
# dump the global context
|
||||
outfilename = path.join(self.outdir, 'globalcontext.pickle')
|
||||
with file(outfilename, 'wb') as fp:
|
||||
pickle.dump(self.globalcontext, fp, 2)
|
||||
|
||||
if self.indexer is not None:
|
||||
self.msg('dumping search index...')
|
||||
f = open(path.join(self.outdir, 'searchindex.pickle'), 'w')
|
||||
self.indexer.dump(f, 'pickle')
|
||||
f.close()
|
||||
# touch 'last build' file, used by the web application to determine
|
||||
# when to reload its environment and clear the cache
|
||||
open(path.join(self.outdir, LAST_BUILD_FILENAME), 'w').close()
|
||||
# copy configuration file if not present
|
||||
if not path.isfile(path.join(self.outdir, 'webconf.py')):
|
||||
shutil.copyfile(path.join(path.dirname(__file__), 'web', 'webconf.py'),
|
||||
path.join(self.outdir, 'webconf.py'))
|
||||
|
||||
|
||||
class HTMLHelpBuilder(StandaloneHTMLBuilder):
|
||||
"""
|
||||
Builder that also outputs Windows HTML help project, contents and index files.
|
||||
Adapted from the original Doc/tools/prechm.py.
|
||||
"""
|
||||
name = 'htmlhelp'
|
||||
|
||||
option_spec = Builder.option_spec.copy()
|
||||
option_spec.update({
|
||||
'outname': 'Output file base name (default "pydoc")'
|
||||
})
|
||||
|
||||
# don't copy the reST source
|
||||
copysource = False
|
||||
|
||||
def handle_finish(self):
|
||||
build_hhx(self, self.outdir, self.options.get('outname') or 'pydoc')
|
||||
|
||||
|
||||
builders = {
|
||||
'html': StandaloneHTMLBuilder,
|
||||
'web': WebHTMLBuilder,
|
||||
'htmlhelp': HTMLHelpBuilder,
|
||||
}
|
||||
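Both ``get_outdated_files`` implementations above rebuild a source file whenever its output file is missing or older than the source. A minimal standalone sketch of that mtime comparison (the function name ``outdated`` is illustrative, not part of the codebase):

```python
import os

def outdated(source, target):
    # A source file needs rebuilding when its target is missing
    # (getmtime raises OSError, so we treat it as infinitely old)
    # or when the target is older than the source.
    try:
        target_mtime = os.path.getmtime(target)
    except OSError:
        target_mtime = 0
    return os.path.getmtime(source) > target_mtime
```

Treating a missing target as mtime 0 lets one comparison cover both the "never built" and the "stale" case.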
53
sphinx/console.py
Normal file
@@ -0,0 +1,53 @@
# -*- coding: utf-8 -*-
"""
    sphinx.console
    ~~~~~~~~~~~~~~

    Format colored console output.

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""

codes = {}

def nocolor():
    codes.clear()

def colorize(name, text):
    return codes.get(name, '') + text + codes.get('reset', '')

def create_color_func(name):
    def inner(text):
        return colorize(name, text)
    globals()[name] = inner

_attrs = {
    'reset':     '39;49;00m',
    'bold':      '01m',
    'faint':     '02m',
    'standout':  '03m',
    'underline': '04m',
    'blink':     '05m',
}

for name, value in _attrs.items():
    codes[name] = '\x1b[' + value

_colors = [
    ('black',     'darkgray'),
    ('darkred',   'red'),
    ('darkgreen', 'green'),
    ('brown',     'yellow'),
    ('darkblue',  'blue'),
    ('purple',    'fuchsia'),
    ('turquoise', 'teal'),
    ('lightgray', 'white'),
]

for i, (dark, light) in enumerate(_colors):
    codes[dark] = '\x1b[%im' % (i+30)
    codes[light] = '\x1b[%i;01m' % (i+30)

for name in codes:
    create_color_func(name)
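The final loop in ``sphinx/console.py`` injects one helper per color name into the module namespace via ``globals()``, so callers can write ``red('error')`` instead of ``colorize('red', 'error')``. A self-contained sketch of the same pattern, using a reduced two-entry ``codes`` table (the escape sequences match the ones the module computes for ``red`` and ``reset``):

```python
# reduced code table; 31;01 is bright red, 39;49;00 resets attributes
codes = {'reset': '\x1b[39;49;00m', 'red': '\x1b[31;01m'}

def colorize(name, text):
    # wrap text in the escape code for `name`, then reset;
    # unknown names degrade gracefully to just text + reset
    return codes.get(name, '') + text + codes.get('reset', '')

def create_color_func(name):
    def inner(text):
        return colorize(name, text)
    # bind the closure as a module-level function, e.g. red()
    globals()[name] = inner

for _name in codes:
    create_color_func(_name)
```

After the loop, ``red('error')`` returns the text wrapped in the red escape code and a trailing reset; ``nocolor()`` in the real module simply clears ``codes`` so every helper becomes a no-op wrapper.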
519
sphinx/directives.py
Normal file
@@ -0,0 +1,519 @@
# -*- coding: utf-8 -*-
"""
    sphinx.directives
    ~~~~~~~~~~~~~~~~~

    Handlers for additional ReST directives.

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""

import re
import string
from os import path

from docutils import nodes
from docutils.parsers.rst import directives, roles
from docutils.parsers.rst.directives import admonitions

from . import addnodes

# ------ index markup --------------------------------------------------------------

entrytypes = [
    'single', 'pair', 'triple', 'quadruple',
    'module', 'keyword', 'operator', 'object', 'exception', 'statement', 'builtin',
]

def index_directive(name, arguments, options, content, lineno,
                    content_offset, block_text, state, state_machine):
    arguments = arguments[0].split('\n')
    env = state.document.settings.env
    targetid = 'index-%s' % env.index_num
    env.index_num += 1
    targetnode = nodes.target('', '', ids=[targetid])
    state.document.note_explicit_target(targetnode)
    indexnode = addnodes.index()
    indexnode['entries'] = arguments
    for entry in arguments:
        try:
            type, string = entry.split(':', 1)
            env.note_index_entry(type.strip(), string.strip(),
                                 targetid, string.strip())
        except ValueError:
            continue
    return [indexnode, targetnode]

index_directive.arguments = (1, 0, 1)
directives.register_directive('index', index_directive)

# ------ information units ---------------------------------------------------------

def desc_index_text(desctype, currmodule, name):
    if desctype == 'function':
        if not currmodule:
            return '%s() (built-in function)' % name
        return '%s() (in module %s)' % (name, currmodule)
    elif desctype == 'data':
        if not currmodule:
            return '%s (built-in variable)' % name
        return '%s (in module %s)' % (name, currmodule)
    elif desctype == 'class':
        return '%s (class in %s)' % (name, currmodule)
    elif desctype == 'exception':
        return name
    elif desctype == 'method':
        try:
            clsname, methname = name.rsplit('.', 1)
        except ValueError:
            if currmodule:
                return '%s() (in module %s)' % (name, currmodule)
            else:
                return '%s()' % name
        if currmodule:
            return '%s() (%s.%s method)' % (methname, currmodule, clsname)
        else:
            return '%s() (%s method)' % (methname, clsname)
    elif desctype == 'attribute':
        try:
            clsname, attrname = name.rsplit('.', 1)
        except ValueError:
            if currmodule:
                return '%s (in module %s)' % (name, currmodule)
            else:
                return name
        if currmodule:
            return '%s (%s.%s attribute)' % (attrname, currmodule, clsname)
        else:
            return '%s (%s attribute)' % (attrname, clsname)
    elif desctype == 'opcode':
        return '%s (opcode)' % name
    elif desctype == 'cfunction':
        return '%s (C function)' % name
    elif desctype == 'cmember':
        return '%s (C member)' % name
    elif desctype == 'cmacro':
        return '%s (C macro)' % name
    elif desctype == 'ctype':
        return '%s (C type)' % name
    elif desctype == 'cvar':
        return '%s (C variable)' % name
    else:
        raise ValueError("unhandled desctype: %s" % desctype)


# ------ functions to parse a Python or C signature and create desc_* nodes.

py_sig_re = re.compile(r'''^([\w.]*\.)?        # class names
                           (\w+)  \s*          # thing name
                           (?: \((.*)\) )? $   # optionally arguments
                        ''', re.VERBOSE)

py_paramlist_re = re.compile(r'([\[\],])')  # split at '[', ']' and ','

def parse_py_signature(signode, sig, desctype, currclass):
    """
    Transform a Python signature into RST nodes.
    Return the fully qualified name of the thing.

    If inside a class, the current class name is handled intelligently:
    * it is stripped from the displayed name if present
    * it is added to the full name (return value) if not present
    """
    m = py_sig_re.match(sig)
    if m is None:
        raise ValueError
    classname, name, arglist = m.groups()

    if currclass:
        if classname and classname.startswith(currclass):
            fullname = classname + name
            classname = classname[len(currclass):].lstrip('.')
        elif classname:
            fullname = currclass + '.' + classname + name
        else:
            fullname = currclass + '.' + name
    else:
        fullname = classname + name if classname else name

    if classname:
        signode += addnodes.desc_classname(classname, classname)
    signode += addnodes.desc_name(name, name)
    if not arglist:
        if desctype in ('function', 'method'):
            # for callables, add an empty parameter list
            signode += addnodes.desc_parameterlist()
        return fullname
    signode += addnodes.desc_parameterlist()

    stack = [signode[-1]]
    arglist = arglist.replace('`', '').replace(r'\ ', '')  # remove markup
    for token in py_paramlist_re.split(arglist):
        if token == '[':
            opt = addnodes.desc_optional()
            stack[-1] += opt
            stack.append(opt)
        elif token == ']':
            try:
                stack.pop()
            except IndexError:
                raise ValueError
        elif not token or token == ',' or token.isspace():
            pass
        else:
            token = token.strip()
            stack[-1] += addnodes.desc_parameter(token, token)
    if len(stack) != 1:
        raise ValueError
    return fullname


c_sig_re = re.compile(
    r'''^([^(]*?)          # return type
        (\w+) \s*          # thing name
        (?: \((.*)\) )? $  # optionally arguments
    ''', re.VERBOSE)
c_funcptr_sig_re = re.compile(
    r'''^([^(]+?)          # return type
        (\( [^()]+ \)) \s* # name in parentheses
        \( (.*) \) $       # arguments
    ''', re.VERBOSE)

# RE to split at word boundaries
wsplit_re = re.compile(r'(\W+)')

# These C types aren't described in the reference, so don't try to create
# a cross-reference to them
stopwords = set(('const', 'void', 'char', 'int', 'long', 'FILE', 'struct'))

def parse_c_type(node, ctype):
    # add cross-ref nodes for all words
    for part in filter(None, wsplit_re.split(ctype)):
        tnode = nodes.Text(part, part)
        if part[0] in string.letters+'_' and part not in stopwords:
            pnode = addnodes.pending_xref(
                '', reftype='ctype', reftarget=part, modname=None, classname=None)
            pnode += tnode
            node += pnode
        else:
            node += tnode

def parse_c_signature(signode, sig, desctype):
    """Transform a C-language signature into RST nodes."""
    # first try the function pointer signature regex, it's more specific
    m = c_funcptr_sig_re.match(sig)
    if m is None:
        m = c_sig_re.match(sig)
    if m is None:
        raise ValueError('no match')
    rettype, name, arglist = m.groups()

    parse_c_type(signode, rettype)
    signode += addnodes.desc_name(name, name)
    if not arglist:
        if desctype == 'cfunction':
            # for functions, add an empty parameter list
            signode += addnodes.desc_parameterlist()
        return name

    paramlist = addnodes.desc_parameterlist()
    arglist = arglist.replace('`', '').replace('\\ ', '')  # remove markup
    # this messes up function pointer types, but not too badly ;)
    args = arglist.split(',')
    for arg in args:
        arg = arg.strip()
        param = addnodes.desc_parameter('', '', noemph=True)
        try:
            ctype, argname = arg.rsplit(' ', 1)
        except ValueError:
            # no argument name given, only the type
            parse_c_type(param, arg)
        else:
            parse_c_type(param, ctype)
            param += nodes.emphasis(' '+argname, ' '+argname)
        paramlist += param
    signode += paramlist
    return name


opcode_sig_re = re.compile(r'(\w+(?:\+\d)?)\s*\((.*)\)')

def parse_opcode_signature(signode, sig, desctype):
    """Transform an opcode signature into RST nodes."""
    m = opcode_sig_re.match(sig)
    if m is None:
        raise ValueError
    opname, arglist = m.groups()
    signode += addnodes.desc_name(opname, opname)
    paramlist = addnodes.desc_parameterlist()
    signode += paramlist
    paramlist += addnodes.desc_parameter(arglist, arglist)
    return opname.strip()


def add_refcount_annotation(env, node, name):
    """Add a reference count annotation. Return None."""
    entry = env.refcounts.get(name)
    if not entry:
        return
    elif entry.result_type not in ("PyObject*", "PyVarObject*"):
        return
    rc = 'Return value: '
    if entry.result_refs is None:
        rc += "Always NULL."
    else:
        rc += ("New" if entry.result_refs else "Borrowed") + " reference."
    node += addnodes.refcount(rc, rc)


def desc_directive(desctype, arguments, options, content, lineno,
                   content_offset, block_text, state, state_machine):
    env = state.document.settings.env
    node = addnodes.desc()
    node['desctype'] = desctype

    noindex = ('noindex' in options)
    signatures = map(lambda s: s.strip(), arguments[0].split('\n'))
    names = []
    for i, sig in enumerate(signatures):
        # add a signature node for each signature in the current unit
        # and add a reference target for it
        sig = sig.strip()
        signode = addnodes.desc_signature(sig, '')
        signode['first'] = False
        node.append(signode)
        try:
            if desctype in ('function', 'data', 'class', 'exception',
                            'method', 'attribute'):
                name = parse_py_signature(signode, sig, desctype, env.currclass)
            elif desctype in ('cfunction', 'cmember', 'cmacro', 'ctype', 'cvar'):
                name = parse_c_signature(signode, sig, desctype)
            elif desctype == 'opcode':
                name = parse_opcode_signature(signode, sig, desctype)
            else:
                # describe: use generic fallback
                raise ValueError
        except ValueError, err:
            signode.clear()
            signode += addnodes.desc_name(sig, sig)
            continue  # we don't want an index entry here
        # only add target and index entry if this is the first description of the
        # function name in this desc block
        if not noindex and name not in names:
            fullname = (env.currmodule + '.' if env.currmodule else '') + name
            # note target
            if fullname not in state.document.ids:
                signode['names'].append(fullname)
                signode['ids'].append(fullname)
                signode['first'] = (not names)
                state.document.note_explicit_target(signode)
                env.note_descref(fullname, desctype)
            names.append(name)

            env.note_index_entry('single',
                                 desc_index_text(desctype, env.currmodule, name),
                                 fullname, fullname)

    subnode = addnodes.desc_content()
    if desctype == 'cfunction':
        add_refcount_annotation(env, subnode, name)
    # needed for automatic qualification of members
    if desctype == 'class' and names:
        env.currclass = names[0]
    # needed for association of version{added,changed} directives
    if names:
        env.currdesc = names[0]
    state.nested_parse(content, content_offset, subnode)
    if desctype == 'class':
        env.currclass = None
    env.currdesc = None
    node.append(subnode)
    return [node]

desc_directive.content = 1
desc_directive.arguments = (1, 0, 1)
desc_directive.options = {'noindex': directives.flag}

desctypes = [
    # the Python ones
    'function',
    'data',
    'class',
    'method',
    'attribute',
    'exception',
    # the C ones
    'cfunction',
    'cmember',
    'cmacro',
    'ctype',
    'cvar',
    # the odd one
    'opcode',
    # the generic one
    'describe',
]

for name in desctypes:
    directives.register_directive(name, desc_directive)


# ------ versionadded/versionchanged -----------------------------------------------

def version_directive(name, arguments, options, content, lineno,
                      content_offset, block_text, state, state_machine):
    node = addnodes.versionmodified()
    node['type'] = name
    node['version'] = arguments[0]
    if len(arguments) == 2:
        inodes, messages = state.inline_text(arguments[1], lineno+1)
        node.extend(inodes)
        if content:
            state.nested_parse(content, content_offset, node)
        ret = [node] + messages
    else:
        ret = [node]
    env = state.document.settings.env
    env.note_versionchange(node['type'], node['version'], node)
    return ret

version_directive.arguments = (1, 1, 1)
version_directive.content = 1

directives.register_directive('deprecated', version_directive)
directives.register_directive('versionadded', version_directive)
directives.register_directive('versionchanged', version_directive)


# ------ see also ------------------------------------------------------------------

def seealso_directive(name, arguments, options, content, lineno,
                      content_offset, block_text, state, state_machine):
    rv = admonitions.make_admonition(
        addnodes.seealso, name, ['See also:'], options, content,
        lineno, content_offset, block_text, state, state_machine)
    return rv

seealso_directive.content = 1
seealso_directive.arguments = (0, 0, 0)
directives.register_directive('seealso', seealso_directive)


# ------ production list (for the reference) ---------------------------------------

def productionlist_directive(name, arguments, options, content, lineno,
                             content_offset, block_text, state, state_machine):
    env = state.document.settings.env
    node = addnodes.productionlist()
    messages = []
    i = 0

    # use token as the default role while in production list
    roles._roles[''] = roles._role_registry['token']
    for rule in arguments[0].split('\n'):
        if i == 0 and ':' not in rule:
            # production group
            continue
        i += 1
        try:
            name, tokens = rule.split(':', 1)
        except ValueError:
            break
        subnode = addnodes.production()
        subnode['tokenname'] = name.strip()
        if subnode['tokenname']:
            idname = 'grammar-token-%s' % subnode['tokenname']
            if idname not in state.document.ids:
                subnode['ids'].append(idname)
            state.document.note_implicit_target(subnode, subnode)
            env.note_token(subnode['tokenname'])
        inodes, imessages = state.inline_text(tokens, lineno+i)
        subnode.extend(inodes)
        messages.extend(imessages)
        node.append(subnode)
    del roles._roles['']
    return [node] + messages

productionlist_directive.content = 0
productionlist_directive.arguments = (1, 0, 1)
directives.register_directive('productionlist', productionlist_directive)

# ------ section metadata ----------------------------------------------------------

def module_directive(name, arguments, options, content, lineno,
                     content_offset, block_text, state, state_machine):
    env = state.document.settings.env
    modname = arguments[0].strip()
    env.currmodule = modname
    env.note_module(modname, options.get('synopsis', ''), options.get('platform', ''))
    ret = []
    targetnode = nodes.target('', '', ids=['module-' + modname])
    state.document.note_explicit_target(targetnode)
    ret.append(targetnode)
    if 'platform' in options:
        node = nodes.paragraph()
        node += nodes.emphasis('Platforms: ', 'Platforms: ')
        node += nodes.Text(options['platform'], options['platform'])
        ret.append(node)
    # the synopsis isn't printed; in fact, it is only used in the modindex currently
    env.note_index_entry('single', '%s (module)' % modname, 'module-' + modname,
                         modname)
    return ret

module_directive.arguments = (1, 0, 0)
module_directive.options = {'platform': lambda x: x,
                            'synopsis': lambda x: x}
directives.register_directive('module', module_directive)


def author_directive(name, arguments, options, content, lineno,
                     content_offset, block_text, state, state_machine):
    # The author directives aren't included in the built document
    return []

author_directive.arguments = (1, 0, 1)
directives.register_directive('sectionauthor', author_directive)
directives.register_directive('moduleauthor', author_directive)


# ------ toctree directive ---------------------------------------------------------

def toctree_directive(name, arguments, options, content, lineno,
                      content_offset, block_text, state, state_machine):
    env = state.document.settings.env
    dirname = path.dirname(env.filename)

    subnode = addnodes.toctree()
    includefiles = filter(None, content)
    # absolutize filenames
    includefiles = map(lambda x: path.normpath(path.join(dirname, x)), includefiles)
    subnode['includefiles'] = includefiles
    subnode['maxdepth'] = options.get('maxdepth', -1)
    return [subnode]

toctree_directive.content = 1
toctree_directive.options = {'maxdepth': int}
directives.register_directive('toctree', toctree_directive)


# ------ centered directive ---------------------------------------------------------

def centered_directive(name, arguments, options, content, lineno,
                       content_offset, block_text, state, state_machine):
    if not arguments:
        return []
    subnode = addnodes.centered()
    inodes, messages = state.inline_text(arguments[0], lineno)
    subnode.extend(inodes)
    return [subnode] + messages

centered_directive.arguments = (1, 0, 1)
directives.register_directive('centered', centered_directive)


# ------ highlightlanguage directive ------------------------------------------------

def highlightlang_directive(name, arguments, options, content, lineno,
                            content_offset, block_text, state, state_machine):
    return [addnodes.highlightlang(lang=arguments[0].strip())]

highlightlang_directive.content = 0
highlightlang_directive.arguments = (1, 0, 0)
directives.register_directive('highlightlang', highlightlang_directive)
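The ``py_sig_re`` pattern in ``sphinx/directives.py`` splits a Python signature like ``Queue.Queue.get(block[, timeout])`` into an optional dotted class path, the object name, and an optional argument list. A standalone check of the same pattern:

```python
import re

# Same pattern as py_sig_re above: optional dotted class path,
# the object name, and an optional parenthesized argument list.
py_sig_re = re.compile(r'''^([\w.]*\.)?        # class names
                           (\w+)  \s*          # thing name
                           (?: \((.*)\) )? $   # optionally arguments
                        ''', re.VERBOSE)

m = py_sig_re.match('Queue.Queue.get(block[, timeout])')
classname, name, arglist = m.groups()
# classname includes the trailing dot; arglist keeps the raw
# bracket markup for py_paramlist_re to split later
```

Note that the class-path group keeps its trailing dot, which is why ``parse_py_signature`` can simply concatenate ``classname + name`` to form the full name.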
840
sphinx/environment.py
Normal file
@@ -0,0 +1,840 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
"""
|
||||
sphinx.environment
|
||||
~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Global creation environment.
|
||||
|
||||
:copyright: 2007 by Georg Brandl.
|
||||
:license: Python license.
|
||||
"""
|
||||
from __future__ import with_statement
|
||||
|
||||
import re
|
||||
import os
|
||||
import time
|
||||
import heapq
|
||||
import hashlib
|
||||
import difflib
|
||||
import itertools
|
||||
import cPickle as pickle
|
||||
from os import path
|
||||
from string import uppercase
|
||||
|
||||
from docutils import nodes
|
||||
from docutils.io import FileInput
|
||||
from docutils.core import publish_doctree
|
||||
from docutils.utils import Reporter
|
||||
from docutils.readers import standalone
|
||||
from docutils.transforms import Transform
|
||||
from docutils.transforms.parts import ContentsFilter
|
||||
from docutils.transforms.universal import FilterMessages
|
||||
|
||||
from . import addnodes
|
||||
from .util import get_matching_files
|
||||
from .refcounting import Refcounts
|
||||
|
||||
default_settings = {
|
||||
'embed_stylesheet': False,
|
||||
'cloak_email_addresses': True,
|
||||
'pep_base_url': 'http://www.python.org/dev/peps/',
|
||||
'input_encoding': 'utf-8',
|
||||
'doctitle_xform': False,
|
||||
'sectsubtitle_xform': False,
|
||||
}
|
||||
|
||||
# This is increased every time a new environment attribute is added
|
||||
# to properly invalidate pickle files.
|
||||
ENV_VERSION = 9
|
||||
|
||||
|
||||
def walk_depth(node, depth, maxdepth):
|
||||
"""Utility: Cut a TOC at a specified depth."""
|
||||
for subnode in node.children[:]:
|
||||
if isinstance(subnode, (addnodes.compact_paragraph, nodes.list_item)):
|
||||
walk_depth(subnode, depth, maxdepth)
|
||||
elif isinstance(subnode, nodes.bullet_list):
|
||||
if depth > maxdepth:
|
||||
subnode.parent.replace(subnode, [])
|
||||
else:
|
||||
walk_depth(subnode, depth+1, maxdepth)
|
||||
|
||||
|
||||
default_substitutions = set([
    'version',
    'release',
    'today',
])


class DefaultSubstitutions(Transform):
    """
    Replace some substitutions if they aren't defined in the document.
    """
    # run before the default Substitutions
    default_priority = 210

    def apply(self):
        config = self.document.settings.env.config
        # only handle those not otherwise defined in the document
        to_handle = default_substitutions - set(self.document.substitution_defs)
        for ref in self.document.traverse(nodes.substitution_reference):
            refname = ref['refname']
            if refname in to_handle:
                text = config.get(refname, '')
                if refname == 'today' and not text:
                    # special handling: can also specify a strftime format
                    text = time.strftime(config.get('today_fmt', '%B %d, %Y'))
                ref.replace_self(nodes.Text(text, text))


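The `|today|` handling in `DefaultSubstitutions` falls back to formatting the current date with an optional `today_fmt` config value. A minimal standalone sketch (`today_text` is an illustrative name; `config` is a plain dict standing in for the environment config):

```python
import time

def today_text(config):
    # use an explicitly configured 'today' value if present,
    # otherwise render the current date via strftime
    text = config.get('today', '')
    if not text:
        text = time.strftime(config.get('today_fmt', '%B %d, %Y'))
    return text

print(today_text({'today': '7 August 2007'}))  # -> 7 August 2007
```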
class MoveModuleTargets(Transform):
    """
    Move module targets to their nearest enclosing section title.
    """
    default_priority = 210

    def apply(self):
        for node in self.document.traverse(nodes.target):
            if not node['ids']:
                continue
            if node['ids'][0].startswith('module-') and \
               node.parent.__class__ is nodes.section:
                node.parent['ids'] = node['ids']
                node.parent.remove(node)


class MyStandaloneReader(standalone.Reader):
    """
    Add our own Substitutions transform.
    """
    def get_transforms(self):
        tf = standalone.Reader.get_transforms(self)
        return tf + [DefaultSubstitutions, MoveModuleTargets,
                     FilterMessages]


class MyContentsFilter(ContentsFilter):
    """
    Used with BuildEnvironment.build_toc_from() to discard cross-file links
    within table-of-contents link nodes.
    """
    def visit_pending_xref(self, node):
        self.parent.append(nodes.literal(node['reftarget'], node['reftarget']))
        raise nodes.SkipNode


class BuildEnvironment:
    """
    The environment in which the ReST files are translated.
    Stores an inventory of cross-file targets and provides doctree
    transformations to resolve links to them.

    Not all doctrees are stored in the environment, only those of files
    containing a "toctree" directive, because they have to change if sections
    are edited in other files. This keeps the environment size moderate.
    """

    # --------- ENVIRONMENT PERSISTENCE ----------------------------------------

    @staticmethod
    def frompickle(filename):
        with open(filename, 'rb') as picklefile:
            env = pickle.load(picklefile)
        if env.version != ENV_VERSION:
            raise IOError('env version not current')
        return env

    def topickle(self, filename):
        # remove unpicklable attributes
        wstream = self.warning_stream
        self.set_warning_stream(None)
        with open(filename, 'wb') as picklefile:
            pickle.dump(self, picklefile, pickle.HIGHEST_PROTOCOL)
        # reset stream
        self.set_warning_stream(wstream)

    # --------- ENVIRONMENT INITIALIZATION -------------------------------------

    def __init__(self, srcdir, doctreedir):
        self.doctreedir = doctreedir
        self.srcdir = srcdir
        self.config = {}

        # read the refcounts file
        self.refcounts = Refcounts.fromfile(
            path.join(self.srcdir, 'data', 'refcounts.dat'))

        # the docutils settings for building
        self.settings = default_settings.copy()
        self.settings['env'] = self

        # the stream to write warning messages to
        self.warning_stream = None

        # this is to invalidate old pickles
        self.version = ENV_VERSION

        # Build times -- to determine changed files
        # Also use this as an inventory of all existing and built filenames.
        self.all_files = {}          # filename -> (mtime, md5) at the time of build

        # File metadata
        self.metadata = {}           # filename -> dict of metadata items

        # TOC inventory
        self.titles = {}             # filename -> title node
        self.tocs = {}               # filename -> table of contents nodetree
        self.toc_num_entries = {}    # filename -> number of real entries
                                     # used to determine when to show the TOC in a
                                     # sidebar (don't show if it's only one item)
        self.toctree_relations = {}  # filename -> ["parent", "previous", "next"]
                                     # for navigating in the toctree
        self.files_to_rebuild = {}   # filename -> set of files (containing its TOCs)
                                     # to rebuild too

        # X-ref target inventory
        self.descrefs = {}           # fullname -> filename, desctype
        self.filemodules = {}        # filename -> [modules]
        self.modules = {}            # modname -> filename, synopsis, platform
        self.tokens = {}             # tokenname -> filename
        self.labels = {}             # labelname -> filename, labelid, sectionname

        # Other inventories
        self.indexentries = {}       # filename -> list of
                                     # (type, string, target, aliasname)
        self.versionchanges = {}     # version -> list of
                                     # (type, filename, module, descname, content)

        # These are set while parsing a file
        self.filename = None         # current file name
        self.currmodule = None       # current module name
        self.currclass = None        # current class name
        self.currdesc = None         # current descref name
        self.index_num = 0           # autonumber for index targets

    def set_warning_stream(self, stream):
        self.warning_stream = stream
        self.settings['warning_stream'] = stream

    def clear_file(self, filename):
        """Remove all traces of a source file in the inventory."""
        if filename in self.all_files:
            self.all_files.pop(filename, None)
            self.metadata.pop(filename, None)
            self.titles.pop(filename, None)
            self.tocs.pop(filename, None)
            self.toc_num_entries.pop(filename, None)
            self.files_to_rebuild.pop(filename, None)

            for fullname, (fn, _) in self.descrefs.items():
                if fn == filename:
                    del self.descrefs[fullname]
            self.filemodules.pop(filename, None)
            for modname, (fn, _, _) in self.modules.items():
                if fn == filename:
                    del self.modules[modname]
            for tokenname, fn in self.tokens.items():
                if fn == filename:
                    del self.tokens[tokenname]
            for labelname, (fn, _, _) in self.labels.items():
                if fn == filename:
                    del self.labels[labelname]
            self.indexentries.pop(filename, None)
            for version, changes in self.versionchanges.items():
                new = [change for change in changes if change[1] != filename]
                changes[:] = new

    def get_outdated_files(self, config):
        """
        Return (removed, changed) iterables.
        """
        all_source_files = list(get_matching_files(
            self.srcdir, '*.rst', exclude=set(config.get('unused_files', ()))))

        # clear all files no longer present
        removed = set(self.all_files) - set(all_source_files)

        if config != self.config:
            # config values affect e.g. substitutions
            changed = all_source_files
        else:
            changed = []
            for filename in all_source_files:
                if filename not in self.all_files:
                    changed.append(filename)
                else:
                    # if the doctree file is not there, rebuild
                    if not path.isfile(path.join(self.doctreedir,
                                                 filename[:-3] + 'doctree')):
                        changed.append(filename)
                        continue
                    mtime, md5 = self.all_files[filename]
                    newmtime = path.getmtime(path.join(self.srcdir, filename))
                    if newmtime == mtime:
                        continue
                    # check the MD5
                    with file(path.join(self.srcdir, filename), 'rb') as f:
                        newmd5 = hashlib.md5(f.read()).digest()
                    if newmd5 != md5:
                        changed.append(filename)

        return removed, changed

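The change detection in `get_outdated_files()` compares mtimes first and only hashes the file content when the mtime differs, so a touched-but-unchanged file does not trigger a rebuild. A standalone sketch of that check (`is_changed` is an illustrative name, not part of the environment API):

```python
import hashlib
import os

def is_changed(src_path, recorded_mtime, recorded_md5):
    # cheap check first: identical mtime means no rebuild needed
    new_mtime = os.path.getmtime(src_path)
    if new_mtime == recorded_mtime:
        return False
    # mtime differs: only a content change (different MD5) counts
    with open(src_path, 'rb') as f:
        new_md5 = hashlib.md5(f.read()).digest()
    return new_md5 != recorded_md5
```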
    def update(self, config):
        """
        (Re-)read all files new or changed since the last update.
        Yields a summary and then filenames as it processes them.
        """
        removed, changed = self.get_outdated_files(config)
        msg = '%s removed, %s changed' % (len(removed), len(changed))
        if self.config != config:
            msg = '[config changed] ' + msg
        yield msg

        self.config = config

        # clear all files no longer present
        for filename in removed:
            self.clear_file(filename)

        # re-read the refcount file
        self.refcounts = Refcounts.fromfile(
            path.join(self.srcdir, 'data', 'refcounts.dat'))

        # read all new and changed files
        for filename in changed:
            yield filename
            self.read_file(filename)

    # --------- SINGLE FILE BUILDING -------------------------------------------

    def read_file(self, filename, src_path=None, save_parsed=True):
        """Parse a file and add/update inventory entries for the doctree.
        If src_path is given, read from a different source file."""
        # remove all inventory entries for that file
        self.clear_file(filename)

        if src_path is None:
            src_path = path.join(self.srcdir, filename)

        self.filename = filename
        doctree = publish_doctree(None, src_path, FileInput,
                                  settings_overrides=self.settings,
                                  reader=MyStandaloneReader())
        self.process_metadata(filename, doctree)
        self.create_title_from(filename, doctree)
        self.note_labels_from(filename, doctree)
        self.build_toc_from(filename, doctree)

        # calculate the MD5 of the file at time of build
        with file(src_path, 'rb') as f:
            md5 = hashlib.md5(f.read()).digest()
        self.all_files[filename] = (path.getmtime(src_path), md5)

        # make it picklable
        doctree.reporter = None
        doctree.transformer = None
        doctree.settings.env = None
        doctree.settings.warning_stream = None

        # cleanup
        self.filename = None
        self.currmodule = None
        self.currclass = None

        if save_parsed:
            # save the parsed doctree
            doctree_filename = path.join(self.doctreedir, filename[:-3] + 'doctree')
            dirname = path.dirname(doctree_filename)
            if not path.isdir(dirname):
                os.makedirs(dirname)
            with file(doctree_filename, 'wb') as f:
                pickle.dump(doctree, f, pickle.HIGHEST_PROTOCOL)
        else:
            return doctree

    def process_metadata(self, filename, doctree):
        """
        Process the docinfo part of the doctree as metadata.
        """
        self.metadata[filename] = md = {}
        docinfo = doctree[0]
        if docinfo.__class__ is not nodes.docinfo:
            # nothing to see here
            return
        for node in docinfo:
            if node.__class__ is nodes.author:
                # handled specially by docutils
                md['author'] = node.astext()
            elif node.__class__ is nodes.field:
                name, body = node
                md[name.astext()] = body.astext()
        del doctree[0]

    def create_title_from(self, filename, document):
        """
        Add a title node to the document (just copy the first section title),
        and store that title in the environment.
        """
        for node in document.traverse(nodes.section):
            titlenode = nodes.title()
            visitor = MyContentsFilter(document)
            node[0].walkabout(visitor)
            titlenode += visitor.get_entry_text()
            self.titles[filename] = titlenode
            return

    def note_labels_from(self, filename, document):
        for name, explicit in document.nametypes.iteritems():
            if not explicit:
                continue
            labelid = document.nameids[name]
            node = document.ids[labelid]
            if not isinstance(node, nodes.section):
                # e.g. desc-signatures
                continue
            sectname = node[0].astext()  # node[0] == title node
            if name in self.labels:
                print >>self.warning_stream, \
                    ('WARNING: duplicate label %s, ' % name +
                     'in %s and %s' % (self.labels[name][0], filename))
            self.labels[name] = filename, labelid, sectname

    def note_toctree(self, filename, toctreenode):
        """Note a TOC tree directive in a document and gather information about
        file relations from it."""
        includefiles = toctreenode['includefiles']
        includefiles_len = len(includefiles)
        for i, includefile in enumerate(includefiles):
            # the "previous" file for the first toctree item is the parent
            previous = includefiles[i-1] if i > 0 else filename
            # the "next" file for the last toctree item is the parent again
            next = includefiles[i+1] if i < includefiles_len-1 else filename
            self.toctree_relations[includefile] = [filename, previous, next]
            # note that if the included file is rebuilt, this one must be
            # too (since the TOC of the included file could have changed)
            self.files_to_rebuild.setdefault(includefile, set()).add(filename)

    def build_toc_from(self, filename, document):
        """Build a TOC from the doctree and store it in the inventory."""
        numentries = [0]  # nonlocal again...

        def build_toc(node):
            entries = []
            for subnode in node:
                if isinstance(subnode, addnodes.toctree):
                    # just copy the toctree node which is then resolved
                    # in self.resolve_toctrees
                    item = subnode.copy()
                    entries.append(item)
                    # do the inventory stuff
                    self.note_toctree(filename, subnode)
                    continue
                if not isinstance(subnode, nodes.section):
                    continue
                title = subnode[0]
                # copy the contents of the section title, but without references
                # and unnecessary stuff
                visitor = MyContentsFilter(document)
                title.walkabout(visitor)
                nodetext = visitor.get_entry_text()
                if not numentries[0]:
                    # for the very first toc entry, don't add an anchor
                    # as it is the file's title anyway
                    anchorname = ''
                else:
                    anchorname = '#' + subnode['ids'][0]
                numentries[0] += 1
                reference = nodes.reference('', '', refuri=filename,
                                            anchorname=anchorname,
                                            *nodetext)
                para = addnodes.compact_paragraph('', '', reference)
                item = nodes.list_item('', para)
                item += build_toc(subnode)
                entries.append(item)
            if entries:
                return nodes.bullet_list('', *entries)
            return []
        toc = build_toc(document)
        if toc:
            self.tocs[filename] = toc
        else:
            self.tocs[filename] = nodes.bullet_list('')
        self.toc_num_entries[filename] = numentries[0]

    def get_toc_for(self, filename):
        """Return a TOC nodetree -- for use on the same page only!"""
        toc = self.tocs[filename].deepcopy()
        for node in toc.traverse(nodes.reference):
            node['refuri'] = node['anchorname']
        return toc

    # -------
    # these are called from docutils directives and therefore use self.filename
    #
    def note_descref(self, fullname, desctype):
        if fullname in self.descrefs:
            print >>self.warning_stream, \
                ('WARNING: duplicate canonical description name %s, ' % fullname +
                 'in %s and %s' % (self.descrefs[fullname][0], self.filename))
        self.descrefs[fullname] = (self.filename, desctype)

    def note_module(self, modname, synopsis, platform):
        self.modules[modname] = (self.filename, synopsis, platform)
        self.filemodules.setdefault(self.filename, []).append(modname)

    def note_token(self, tokenname):
        self.tokens[tokenname] = self.filename

    def note_index_entry(self, type, string, targetid, aliasname):
        self.indexentries.setdefault(self.filename, []).append(
            (type, string, targetid, aliasname))

    def note_versionchange(self, type, version, node):
        self.versionchanges.setdefault(version, []).append(
            (type, self.filename, self.currmodule, self.currdesc, node.deepcopy()))
    # -------

    # --------- RESOLVING REFERENCES AND TOCTREES ------------------------------

    def get_doctree(self, filename):
        """Read the doctree for a file from the pickle and return it."""
        doctree_filename = path.join(self.doctreedir, filename[:-3] + 'doctree')
        with file(doctree_filename, 'rb') as f:
            doctree = pickle.load(f)
        doctree.reporter = Reporter(filename, 2, 4, stream=self.warning_stream)
        return doctree

    def get_and_resolve_doctree(self, filename, builder, doctree=None):
        """Read the doctree from the pickle, resolve cross-references and
        toctrees and return it."""
        if doctree is None:
            doctree = self.get_doctree(filename)

        # resolve all pending cross-references
        self.resolve_references(doctree, filename, builder)

        # now, resolve all toctree nodes
        def _entries_from_toctree(toctreenode):
            """Return TOC entries for a toctree node."""
            includefiles = map(str, toctreenode['includefiles'])

            entries = []
            for includefile in includefiles:
                try:
                    toc = self.tocs[includefile].deepcopy()
                except KeyError, err:
                    # this is raised if the included file does not exist
                    print >>self.warning_stream, 'WARNING: %s: toctree contains ' \
                        'ref to nonexisting file %r' % (filename, includefile)
                else:
                    for toctreenode in toc.traverse(addnodes.toctree):
                        toctreenode.parent.replace_self(
                            _entries_from_toctree(toctreenode))
                    entries.append(toc)
            if entries:
                return addnodes.compact_paragraph('', '', *entries)
            return []

        for toctreenode in doctree.traverse(addnodes.toctree):
            maxdepth = toctreenode.get('maxdepth', -1)
            newnode = _entries_from_toctree(toctreenode)
            # prune the tree to maxdepth
            if maxdepth > 0:
                walk_depth(newnode, 1, maxdepth)
            toctreenode.replace_self(newnode)

        # set the target paths in the toctrees (they are not known
        # at TOC generation time)
        for node in doctree.traverse(nodes.reference):
            if node.hasattr('anchorname'):
                # a TOC reference
                node['refuri'] = builder.get_relative_uri(
                    filename, node['refuri']) + node['anchorname']

        return doctree

    def resolve_references(self, doctree, docfilename, builder):
        for node in doctree.traverse(addnodes.pending_xref):
            contnode = node[0].deepcopy()
            newnode = None

            typ = node['reftype']
            target = node['reftarget']
            modname = node['modname']
            clsname = node['classname']

            if typ == 'ref':
                filename, labelid, sectname = self.labels.get(target, ('','',''))
                if not filename:
                    newnode = doctree.reporter.system_message(
                        2, 'undefined label: %s' % target)
                    print >>self.warning_stream, \
                        '%s: undefined label: %s' % (docfilename, target)
                else:
                    newnode = nodes.reference('', '')
                    if filename == docfilename:
                        newnode['refid'] = labelid
                    else:
                        newnode['refuri'] = builder.get_relative_uri(
                            docfilename, filename) + '#' + labelid
                    newnode.append(nodes.emphasis(sectname, sectname))
            elif typ == 'token':
                filename = self.tokens.get(target, '')
                if not filename:
                    newnode = contnode
                else:
                    newnode = nodes.reference('', '')
                    if filename == docfilename:
                        newnode['refid'] = 'grammar-token-' + target
                    else:
                        newnode['refuri'] = builder.get_relative_uri(
                            docfilename, filename) + '#grammar-token-' + target
                    newnode.append(contnode)
            elif typ == 'mod':
                filename, synopsis, platform = self.modules.get(target, ('','',''))
                # just link to an anchor if there are multiple modules in one file
                # because the anchor is generally below the heading which is ugly
                # but can't be helped easily
                anchor = ''
                if not filename or filename == docfilename:
                    # don't link to self
                    newnode = contnode
                else:
                    if len(self.filemodules[filename]) > 1:
                        anchor = '#' + 'module-' + target
                    newnode = nodes.reference('', '')
                    newnode['refuri'] = (
                        builder.get_relative_uri(docfilename, filename) + anchor)
                    newnode.append(contnode)
            else:
                name, desc = self.find_desc(modname, clsname, target, typ)
                if not desc:
                    newnode = contnode
                else:
                    newnode = nodes.reference('', '')
                    if desc[0] == docfilename:
                        newnode['refid'] = name
                    else:
                        newnode['refuri'] = (
                            builder.get_relative_uri(docfilename, desc[0])
                            + '#' + name)
                    newnode.append(contnode)

            if newnode:
                node.replace_self(newnode)

    def create_index(self, builder, _fixre=re.compile(r'(.*) ([(][^()]*[)])')):
        """Create the real index from the collected index entries."""
        new = {}

        def add_entry(word, subword, dic=new):
            entry = dic.get(word)
            if not entry:
                dic[word] = entry = [[], {}]
            if subword:
                add_entry(subword, '', dic=entry[1])
            else:
                entry[0].append(builder.get_relative_uri('genindex.rst', fn)
                                + '#' + tid)

        for fn, entries in self.indexentries.iteritems():
            # new entry types must be listed in directives.py!
            for type, string, tid, alias in entries:
                if type == 'single':
                    entry, _, subentry = string.partition('!')
                    add_entry(entry, subentry)
                elif type == 'pair':
                    first, second = map(lambda x: x.strip(), string.split(';', 1))
                    add_entry(first, second)
                    add_entry(second, first)
                elif type == 'triple':
                    first, second, third = map(lambda x: x.strip(),
                                               string.split(';', 2))
                    add_entry(first, second+' '+third)
                    add_entry(second, third+', '+first)
                    add_entry(third, first+' '+second)
                # this is a bit ridiculous...
                # elif type == 'quadruple':
                #     first, second, third, fourth = \
                #         map(lambda x: x.strip(), string.split(';', 3))
                #     add_entry(first, '%s %s %s' % (second, third, fourth))
                #     add_entry(second, '%s %s, %s' % (third, fourth, first))
                #     add_entry(third, '%s, %s %s' % (fourth, first, second))
                #     add_entry(fourth, '%s %s %s' % (first, second, third))
                elif type in ('module', 'keyword', 'operator', 'object',
                              'exception', 'statement'):
                    add_entry(string, type)
                    add_entry(type, string)
                elif type == 'builtin':
                    add_entry(string, 'built-in function')
                    add_entry('built-in function', string)
                else:
                    print >>self.warning_stream, \
                        "unknown index entry type %r in %s" % (type, fn)

        newlist = new.items()
        newlist.sort(key=lambda t: t[0].lower())

        # fixup entries: transform
        #   func() (in module foo)
        #   func() (in module bar)
        # into
        #   func()
        #     (in module foo)
        #     (in module bar)
        oldkey = ''
        oldsubitems = None
        i = 0
        while i < len(newlist):
            key, (targets, subitems) = newlist[i]
            # cannot move if it has subitems; structure gets too complex
            if not subitems:
                m = _fixre.match(key)
                if m:
                    if oldkey == m.group(1):
                        # prefixes match: add entry as subitem of the
                        # previous entry
                        oldsubitems.setdefault(m.group(2),
                                               [[], {}])[0].extend(targets)
                        del newlist[i]
                        continue
                    oldkey = m.group(1)
                else:
                    oldkey = key
            oldsubitems = subitems
            i += 1

        # group the entries by letter
        def keyfunc((k, v), ltrs=uppercase+'_'):
            # hack: mutate the subitems dicts to a list in the keyfunc
            v[1] = sorted((si, se) for (si, (se, void)) in v[1].iteritems())
            # now calculate the key
            letter = k[0].upper()
            if letter in ltrs:
                return letter
            else:
                # get all other symbols under one heading
                return 'Symbols'
        self.index = [(key, list(group)) for (key, group) in
                      itertools.groupby(newlist, keyfunc)]

    def check_consistency(self):
        """Do consistency checks."""

        for filename in self.all_files:
            if filename not in self.toctree_relations:
                if filename == 'contents.rst':
                    # the master file is not included anywhere ;)
                    continue
                self.warning_stream.write(
                    'WARNING: %s isn\'t included in any toctree\n' % filename)

    # --------- QUERYING -------------------------------------------------------

    def find_desc(self, modname, classname, name, type):
        """Find a description node matching "name", perhaps using
        the given module and/or classname."""
        # skip parens
        if name[-2:] == '()':
            name = name[:-2]

        # don't add module and class names for C things
        if type[0] == 'c' and type not in ('class', 'const'):
            # skip trailing star and whitespace
            name = name.rstrip(' *')
            if name in self.descrefs and self.descrefs[name][1][0] == 'c':
                return name, self.descrefs[name]
            return None, None

        if name in self.descrefs:
            newname = name
        elif modname and modname + '.' + name in self.descrefs:
            newname = modname + '.' + name
        elif modname and classname and \
             modname + '.' + classname + '.' + name in self.descrefs:
            newname = modname + '.' + classname + '.' + name
        # special case: builtin exceptions have module "exceptions" set
        elif type == 'exc' and '.' not in name and \
             'exceptions.' + name in self.descrefs:
            newname = 'exceptions.' + name
        # special case: object methods
        elif type in ('func', 'meth') and '.' not in name and \
             'object.' + name in self.descrefs:
            newname = 'object.' + name
        else:
            return None, None
        return newname, self.descrefs[newname]

    def find_keyword(self, keyword, avoid_fuzzy=False, cutoff=0.6, n=20):
        """
        Find keyword matches for a keyword. If there's an exact match,
        just return it, else return a list of fuzzy matches if avoid_fuzzy
        isn't True.

        Keywords searched are: first modules, then descrefs.

        Returns: None if nothing found
                 (type, filename, anchorname) if exact match found
                 list of (quality, type, filename, anchorname, description)
                 if fuzzy
        """

        if keyword in self.modules:
            filename, title, system = self.modules[keyword]
            return 'module', filename, 'module-' + keyword
        if keyword in self.descrefs:
            filename, ref_type = self.descrefs[keyword]
            return ref_type, filename, keyword
        # special cases
        if '.' not in keyword:
            # exceptions are documented in the exceptions module
            if 'exceptions.'+keyword in self.descrefs:
                filename, ref_type = self.descrefs['exceptions.'+keyword]
                return ref_type, filename, 'exceptions.'+keyword
            # special methods are documented as object methods
            if 'object.'+keyword in self.descrefs:
                filename, ref_type = self.descrefs['object.'+keyword]
                return ref_type, filename, 'object.'+keyword

        if avoid_fuzzy:
            return

        # find fuzzy matches
        s = difflib.SequenceMatcher()
        s.set_seq2(keyword.lower())

        def possibilities():
            for title, (fn, desc, _) in self.modules.iteritems():
                yield ('module', fn, 'module-'+title, desc)
            for title, (fn, desctype) in self.descrefs.iteritems():
                yield (desctype, fn, title, '')

        def dotsearch(string):
            parts = string.lower().split('.')
            for idx in xrange(0, len(parts)):
                yield '.'.join(parts[idx:])

        result = []
        for type, filename, title, desc in possibilities():
            best_res = 0
            for part in dotsearch(title):
                s.set_seq1(part)
                if s.real_quick_ratio() >= cutoff and \
                   s.quick_ratio() >= cutoff and \
                   s.ratio() >= cutoff and \
                   s.ratio() > best_res:
                    best_res = s.ratio()
            if best_res:
                result.append((best_res, type, filename, title, desc))

        return heapq.nlargest(n, result)

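The fuzzy lookup in `find_keyword()` matches every dotted suffix of each candidate title (so `os.path.join` can be found under `path.join` or `join`) and ranks candidates with difflib ratios. A standalone sketch in modern Python (`best_ratio` is an illustrative name; `range` replaces the module's `xrange`, and the cheap `real_quick_ratio`/`quick_ratio` pre-checks are dropped for clarity):

```python
import difflib

def dotsearch(string):
    # yield every dotted suffix: 'a.b.c' -> 'a.b.c', 'b.c', 'c'
    parts = string.lower().split('.')
    for idx in range(len(parts)):
        yield '.'.join(parts[idx:])

def best_ratio(title, keyword, cutoff=0.6):
    # best difflib similarity of the keyword against any suffix of title
    s = difflib.SequenceMatcher()
    s.set_seq2(keyword.lower())
    best = 0
    for part in dotsearch(title):
        s.set_seq1(part)
        if s.ratio() >= cutoff and s.ratio() > best:
            best = s.ratio()
    return best

print(list(dotsearch('os.path.join')))  # -> ['os.path.join', 'path.join', 'join']
```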
    def get_real_filename(self, filename):
        """
        Pass this function a filename without .rst extension to get the real
        filename. This also resolves the special `index.rst` files. If the
        file does not exist the return value will be `None`.
        """
        for rstname in filename + '.rst', filename + path.sep + 'index.rst':
            if rstname in self.all_files:
                return rstname
72
sphinx/highlighting.py
Normal file
@@ -0,0 +1,72 @@
# -*- coding: utf-8 -*-
"""
    sphinx.highlighting
    ~~~~~~~~~~~~~~~~~~~

    Highlight code blocks using Pygments.

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""

import cgi
from collections import defaultdict

try:
    import pygments
    from pygments import highlight
    from pygments.lexers import PythonLexer, PythonConsoleLexer, CLexer, \
         TextLexer, RstLexer
    from pygments.formatters import HtmlFormatter
    from pygments.filters import ErrorToken
    from pygments.style import Style
    from pygments.styles.friendly import FriendlyStyle
    from pygments.token import Generic, Comment
except ImportError:
    pygments = None
else:
    class PythonDocStyle(Style):
        """
        Like friendly, but a bit darker to enhance contrast on the green
        background.
        """

        background_color = '#eeffcc'
        default_style = ''

        styles = FriendlyStyle.styles
        styles.update({
            Generic.Output: 'italic #333',
            Comment: 'italic #408090',
        })

    lexers = defaultdict(TextLexer,
        none = TextLexer(),
        python = PythonLexer(),
        pycon = PythonConsoleLexer(),
        rest = RstLexer(),
        c = CLexer(),
    )
    for _lexer in lexers.values():
        _lexer.add_filter('raiseonerror')

    fmter = HtmlFormatter(style=PythonDocStyle)


def highlight_block(source, lang):
    if not pygments:
        return '<pre>' + cgi.escape(source) + '</pre>\n'
    if lang == 'python':
        if source.startswith('>>>'):
            lexer = lexers['pycon']
        else:
            lexer = lexers['python']
    else:
        lexer = lexers[lang]
    try:
        return highlight(source, lexer, fmter)
    except ErrorToken:
        # this is most probably not Python, so let it pass as plain text
        return '<pre>' + cgi.escape(source) + '</pre>\n'

def get_stylesheet():
    return fmter.get_style_defs()
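`highlight_block()` degrades gracefully: when Pygments is missing, or when the lexer raises `ErrorToken`, the source is emitted as an escaped `<pre>` block. A standalone sketch of that fallback (`fallback_block` is an illustrative name; `html.escape` stands in for the Python 2 `cgi.escape` call used in the module):

```python
import html

def fallback_block(source):
    # escape &, < and > (like cgi.escape with its default quote=False)
    # and wrap the source in a <pre> block
    return '<pre>' + html.escape(source, quote=False) + '</pre>\n'

print(fallback_block('a < b'))  # -> <pre>a &lt; b</pre>
```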
188
sphinx/htmlhelp.py
Normal file
@@ -0,0 +1,188 @@
# -*- coding: utf-8 -*-
"""
    sphinx.htmlhelp
    ~~~~~~~~~~~~~~~

    Build HTML help support files.
    Adapted from the original Doc/tools/prechm.py.

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""
from __future__ import with_statement

import os
import cgi
from os import path

from docutils import nodes

from . import addnodes

# Project file (*.hhp) template.  'outname' is the file basename (like
# the pythlp in pythlp.hhp); 'version' is the doc version number (like
# the 2.2 in Python 2.2).
# The magical numbers in the long line under [WINDOWS] set most of the
# user-visible features (visible buttons, tabs, etc).
# About 0x10384e:  This defines the buttons in the help viewer.  The
# following defns are taken from htmlhelp.h.  Not all possibilities
# actually work, and not all those that work are available from the Help
# Workshop GUI.  In particular, the Zoom/Font button works and is not
# available from the GUI.  The ones we're using are marked with 'x':
#
#    0x000002   Hide/Show   x
#    0x000004   Back        x
#    0x000008   Forward     x
#    0x000010   Stop
#    0x000020   Refresh
#    0x000040   Home        x
#    0x000080   Forward
#    0x000100   Back
#    0x000200   Notes
#    0x000400   Contents
#    0x000800   Locate      x
#    0x001000   Options     x
#    0x002000   Print       x
#    0x004000   Index
#    0x008000   Search
#    0x010000   History
#    0x020000   Favorites
#    0x040000   Jump 1
#    0x080000   Jump 2
#    0x100000   Zoom/Font   x
#    0x200000   TOC Next
#    0x400000   TOC Prev

project_template = '''\
[OPTIONS]
Compiled file=%(outname)s.chm
Contents file=%(outname)s.hhc
Default Window=%(outname)s
Default topic=index.html
Display compile progress=No
Full text search stop list file=%(outname)s.stp
Full-text search=Yes
Index file=%(outname)s.hhk
Language=0x409
Title=Python %(version)s Documentation

[WINDOWS]
%(outname)s="Python %(version)s Documentation","%(outname)s.hhc","%(outname)s.hhk",\
"index.html","index.html",,,,,0x63520,220,0x10384e,[0,0,1024,768],,,,,,,0

[FILES]
'''

contents_header = '''\
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">
<HTML>
<HEAD>
<meta name="GENERATOR" content="Microsoft&reg; HTML Help Workshop 4.1">
<!-- Sitemap 1.0 -->
</HEAD><BODY>
<OBJECT type="text/site properties">
<param name="Window Styles" value="0x801227">
<param name="ImageType" value="Folder">
</OBJECT>
<UL>
'''

contents_footer = '''\
</UL></BODY></HTML>
'''

object_sitemap = '''\
<OBJECT type="text/sitemap">
    <param name="Name" value="%s">
    <param name="Local" value="%s">
</OBJECT>
'''

# List of words the full text search facility shouldn't index.  This
# becomes file outname.stp.  Note that this list must be pretty small!
# Different versions of the MS docs claim the file has a maximum size of
# 256 or 512 bytes (including \r\n at the end of each line).
# Note that "and", "or", "not" and "near" are operators in the search
# language, so no point indexing them even if we wanted to.
stopwords = """
a and are as at
be but by
for
if in into is it
near no not
of on or
such
that the their then there these they this to
was will with
""".split()


def build_hhx(builder, outdir, outname):
    builder.msg('dumping stopword list...')
    with open(path.join(outdir, outname+'.stp'), 'w') as f:
        for word in sorted(stopwords):
            print >>f, word

    builder.msg('writing project file...')
    with open(path.join(outdir, outname+'.hhp'), 'w') as f:
        f.write(project_template % {'outname': outname,
                                    'version': builder.config['version']})
        if not outdir.endswith(os.sep):
            outdir += os.sep
        olen = len(outdir)
        for root, dirs, files in os.walk(outdir):
            for fn in files:
                if fn.endswith(('.html', '.css', '.js')):
                    print >>f, path.join(root, fn)[olen:].replace('/', '\\')

    builder.msg('writing TOC file...')
    with open(path.join(outdir, outname+'.hhc'), 'w') as f:
        f.write(contents_header)
        # special books
        f.write('<LI> ' + object_sitemap % ('Main page', 'index.html'))
        f.write('<LI> ' + object_sitemap % ('Global Module Index', 'modindex.html'))
        # the TOC
        toc = builder.env.get_and_resolve_doctree('contents.rst', builder)
        def write_toc(node, ullevel=0):
            if isinstance(node, nodes.list_item):
                f.write('<LI> ')
                for subnode in node:
                    write_toc(subnode, ullevel)
            elif isinstance(node, nodes.reference):
                f.write(object_sitemap % (cgi.escape(node.astext()),
                                          node['refuri']))
            elif isinstance(node, nodes.bullet_list):
                if ullevel != 0:
                    f.write('<UL>\n')
                for subnode in node:
                    write_toc(subnode, ullevel+1)
                if ullevel != 0:
                    f.write('</UL>\n')
            elif isinstance(node, addnodes.compact_paragraph):
                for subnode in node:
                    write_toc(subnode, ullevel)
            elif isinstance(node, nodes.section):
                write_toc(node[1], ullevel)
            elif isinstance(node, nodes.document):
                write_toc(node[0], ullevel)
        write_toc(toc)
        f.write(contents_footer)

    builder.msg('writing index file...')
    with open(path.join(outdir, outname+'.hhk'), 'w') as f:
        f.write('<UL>\n')
        def write_index(title, refs, subitems):
            if refs:
                f.write('<LI> ')
                f.write(object_sitemap % (cgi.escape(title), refs[0]))
                for ref in refs[1:]:
                    f.write(object_sitemap % ('[Link]', ref))
            if subitems:
                f.write('<UL> ')
                for subitem in subitems:
                    write_index(subitem[0], subitem[1], [])
                f.write('</UL>')
        for (key, group) in builder.env.index:
            for title, (refs, subitems) in group:
                write_index(title, refs, subitems)
        f.write('</UL>\n')
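Every TOC and index entry is just the `object_sitemap` template filled in with a display name and a target file. A standalone sketch of that substitution (the template copied from above, the entry values chosen for illustration):

```python
# Copy of the object_sitemap template used by build_hhx.
object_sitemap = '''\
<OBJECT type="text/sitemap">
    <param name="Name" value="%s">
    <param name="Local" value="%s">
</OBJECT>
'''

# Fill in a single sitemap entry, as the TOC writer does for each node.
entry = object_sitemap % ('Main page', 'index.html')
print(entry)
```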
72
sphinx/json.py
Normal file
@@ -0,0 +1,72 @@
# -*- coding: utf-8 -*-
"""
    sphinx.json
    ~~~~~~~~~~~

    Minimal JSON module that generates small dumps.

    This is not fully JSON compliant but enough for the searchindex.
    And the generated files are smaller than the simplejson ones.

    Uses the basestring encode function from simplejson.

    :copyright: 2007 by Armin Ronacher, Bob Ippolito.
    :license: Python license.
"""

import re

ESCAPE = re.compile(r'[\x00-\x19\\"\b\f\n\r\t]')
ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])')
ESCAPE_DICT = {
    '\\': '\\\\',
    '"': '\\"',
    '\b': '\\b',
    '\f': '\\f',
    '\n': '\\n',
    '\r': '\\r',
    '\t': '\\t',
}
for i in range(0x20):
    ESCAPE_DICT.setdefault(chr(i), '\\u%04x' % (i,))


def encode_basestring_ascii(s):
    def replace(match):
        s = match.group(0)
        try:
            return ESCAPE_DICT[s]
        except KeyError:
            n = ord(s)
            if n < 0x10000:
                return '\\u%04x' % (n,)
            else:
                # surrogate pair
                n -= 0x10000
                s1 = 0xd800 | ((n >> 10) & 0x3ff)
                s2 = 0xdc00 | (n & 0x3ff)
                return '\\u%04x\\u%04x' % (s1, s2)
    return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"'


def dump_json(obj, key=False):
    if key:
        if not isinstance(obj, basestring):
            obj = str(obj)
        return encode_basestring_ascii(obj)
    if obj is None:
        return 'null'
    elif obj is True or obj is False:
        return obj and 'true' or 'false'
    elif isinstance(obj, (int, long, float)):
        return str(obj)
    elif isinstance(obj, dict):
        return '{%s}' % ','.join('%s:%s' % (
            dump_json(key, True),
            dump_json(value)
        ) for key, value in obj.iteritems())
    elif isinstance(obj, (tuple, list, set)):
        return '[%s]' % ','.join(dump_json(x) for x in obj)
    elif isinstance(obj, basestring):
        return encode_basestring_ascii(obj)
    raise TypeError(type(obj))
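The point of `dump_json` is compactness: no spaces after `:` or `,`, and all-ASCII string escaping. Since the module above is Python 2 (`basestring`, `long`, `iteritems`), here is a simplified Python 3 stand-in, only to illustrate the output shape, not the module itself:

```python
def mini_dump(obj):
    # Simplified sketch of dump_json: handles only the cases the
    # search index needs (None, bool, numbers, dict, list, str).
    if obj is None:
        return 'null'
    if obj is True or obj is False:
        return 'true' if obj else 'false'
    if isinstance(obj, (int, float)):
        return str(obj)
    if isinstance(obj, dict):
        return '{%s}' % ','.join('%s:%s' % (mini_dump(str(k)), mini_dump(v))
                                 for k, v in obj.items())
    if isinstance(obj, (tuple, list, set)):
        return '[%s]' % ','.join(mini_dump(x) for x in obj)
    return '"%s"' % obj.replace('\\', '\\\\').replace('"', '\\"')

print(mini_dump({'titles': ['Intro'], 'count': 2}))
```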
52
sphinx/refcounting.py
Normal file
@@ -0,0 +1,52 @@
# -*- coding: utf-8 -*-
"""
    sphinx.refcounting
    ~~~~~~~~~~~~~~~~~~

    Handle reference counting annotations, based on refcount.py
    and anno-api.py.

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""
from __future__ import with_statement


class RCEntry:
    def __init__(self, name):
        self.name = name
        self.args = []
        self.result_type = ''
        self.result_refs = None


class Refcounts(dict):
    @classmethod
    def fromfile(cls, filename):
        d = cls()
        with open(filename, 'r') as fp:
            for line in fp:
                line = line.strip()
                if line[:1] in ("", "#"):
                    # blank lines and comments
                    continue
                parts = line.split(":", 4)
                if len(parts) != 5:
                    raise ValueError("Wrong field count in %r" % line)
                function, type, arg, refcount, comment = parts
                # Get the entry, creating it if needed:
                try:
                    entry = d[function]
                except KeyError:
                    entry = d[function] = RCEntry(function)
                if not refcount or refcount == "null":
                    refcount = None
                else:
                    refcount = int(refcount)
                # Update the entry with the new parameter or the result information.
                if arg:
                    entry.args.append((arg, type, refcount))
                else:
                    entry.result_type = type
                    entry.result_refs = refcount
        return d
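The data file parsed by `Refcounts.fromfile` is colon-separated with exactly five fields per line: function, type, argument name, refcount effect, comment. Because `split(":", 4)` caps the split at four, colons inside the trailing comment are preserved. A small sketch of just the parsing step (the sample line is a made-up entry in the refcounts.dat style):

```python
def parse_line(line):
    # Mirrors the split in Refcounts.fromfile: five colon-separated
    # fields -- function, type, arg name, refcount effect, comment.
    parts = line.split(':', 4)
    if len(parts) != 5:
        raise ValueError('Wrong field count in %r' % line)
    return parts

func, typ, arg, refcount, comment = parse_line('PyList_Append:PyObject*:list:0:')
print(func, typ, arg, repr(refcount))
```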
143
sphinx/roles.py
Normal file
@@ -0,0 +1,143 @@
# -*- coding: utf-8 -*-
"""
    sphinx.roles
    ~~~~~~~~~~~~

    Handlers for additional ReST roles.

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""

import re

from docutils import nodes, utils
from docutils.parsers.rst import roles

from . import addnodes

ws_re = re.compile(r'\s+')

generic_docroles = {
    'command' : nodes.strong,
    'dfn' : nodes.emphasis,
    'file' : nodes.emphasis,
    'filenq' : nodes.emphasis,
    'filevar' : nodes.emphasis,
    'guilabel' : nodes.strong,
    'kbd' : nodes.literal,
    'keyword' : nodes.literal,
    'mailheader' : nodes.emphasis,
    'makevar' : nodes.Text,
    'manpage' : nodes.emphasis,
    'mimetype' : nodes.emphasis,
    'newsgroup' : nodes.emphasis,
    'option' : nodes.emphasis,
    'program' : nodes.strong,
    'regexp' : nodes.literal,
}

for rolename, nodeclass in generic_docroles.iteritems():
    roles.register_generic_role(rolename, nodeclass)


def indexmarkup_role(typ, rawtext, text, lineno, inliner, options={}, content=[]):
    env = inliner.document.settings.env
    text = utils.unescape(text)
    targetid = 'index-%s' % env.index_num
    env.index_num += 1
    targetnode = nodes.target('', '', ids=[targetid])
    inliner.document.note_explicit_target(targetnode)
    if typ == 'envvar':
        env.note_index_entry('single', '%s' % text,
                             targetid, text)
        env.note_index_entry('single', 'environment variables!%s' % text,
                             targetid, text)
        textnode = nodes.strong(text, text)
        return [targetnode, textnode], []
    elif typ == 'pep':
        env.note_index_entry('single', 'Python Enhancement Proposals!PEP %s' % text,
                             targetid, 'PEP %s' % text)
        try:
            pepnum = int(text)
        except ValueError:
            msg = inliner.reporter.error('invalid PEP number %s' % text, line=lineno)
            prb = inliner.problematic(rawtext, rawtext, msg)
            return [prb], [msg]
        ref = inliner.document.settings.pep_base_url + 'pep-%04d' % pepnum
        sn = nodes.strong('PEP '+text, 'PEP '+text)
        rn = nodes.reference('', '', refuri=ref)
        rn += sn
        return [targetnode, rn], []
    elif typ == 'rfc':
        env.note_index_entry('single', 'RFC!RFC %s' % text,
                             targetid, 'RFC %s' % text)
        try:
            rfcnum = int(text)
        except ValueError:
            msg = inliner.reporter.error('invalid RFC number %s' % text, line=lineno)
            prb = inliner.problematic(rawtext, rawtext, msg)
            return [prb], [msg]
        ref = inliner.document.settings.rfc_base_url + inliner.rfc_url % rfcnum
        sn = nodes.strong('RFC '+text, 'RFC '+text)
        rn = nodes.reference('', '', refuri=ref)
        rn += sn
        return [targetnode, rn], []

roles.register_canonical_role('envvar', indexmarkup_role)
roles.register_local_role('pep', indexmarkup_role)
roles.register_local_role('rfc', indexmarkup_role)


# default is `literal`
innernodetypes = {
    'ref': nodes.emphasis,
    'token': nodes.strong,
}

def xfileref_role(typ, rawtext, text, lineno, inliner, options={}, content=[]):
    env = inliner.document.settings.env
    text = utils.unescape(text)
    # 'token' is the default role inside 'productionlist' directives
    if typ == '':
        typ = 'token'
    if env.config.get('strip_trailing_parentheses', False):
        if text[-2:] == '()':
            text = text[:-2]
    pnode = addnodes.pending_xref(rawtext)
    pnode['reftype'] = typ
    pnode['reftarget'] = ws_re.sub('', text)
    pnode['modname'] = env.currmodule
    pnode['classname'] = env.currclass
    pnode += innernodetypes.get(typ, nodes.literal)(rawtext, text, classes=['xref'])
    return [pnode], []


def menusel_role(typ, rawtext, text, lineno, inliner, options={}, content=[]):
    return [nodes.emphasis(rawtext, text.replace('-->', u'\N{TRIANGULAR BULLET}'))], []


specific_docroles = {
    'data': xfileref_role,
    'exc': xfileref_role,
    'func': xfileref_role,
    'class': xfileref_role,
    'const': xfileref_role,
    'attr': xfileref_role,
    'meth': xfileref_role,

    'cfunc' : xfileref_role,
    'cdata' : xfileref_role,
    'ctype' : xfileref_role,
    'cmacro' : xfileref_role,

    'mod' : xfileref_role,

    'ref': xfileref_role,
    'token' : xfileref_role,

    'menuselection' : menusel_role,
}

for rolename, func in specific_docroles.iteritems():
    roles.register_canonical_role(rolename, func)
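The `pep` and `rfc` roles build their link targets with plain `%`-formatting; notably, the PEP number is zero-padded to four digits. A standalone sketch of that target construction (the base URL shown is only an illustrative value, not read from any settings object here):

```python
# Illustrative base URL; the role reads it from the docutils settings
# (settings.pep_base_url) at runtime.
pep_base_url = 'http://www.python.org/peps/'

pepnum = int('8')                      # the role text, validated with int()
ref = pep_base_url + 'pep-%04d' % pepnum
print(ref)
```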
132
sphinx/search.py
Normal file
@@ -0,0 +1,132 @@
# -*- coding: utf-8 -*-
"""
    sphinx.search
    ~~~~~~~~~~~~~

    Create a search index for offline search.

    :copyright: 2007 by Armin Ronacher.
    :license: Python license.
"""
import re
import pickle

from collections import defaultdict
from docutils.nodes import Text, NodeVisitor
from .stemmer import PorterStemmer
from .json import dump_json


word_re = re.compile(r'\w+(?u)')


class Stemmer(PorterStemmer):
    """
    All those Porter stemmer implementations look hideous;
    make at least the stem method nicer.
    """

    def stem(self, word):
        return PorterStemmer.stem(self, word, 0, len(word) - 1)


class WordCollector(NodeVisitor):
    """
    A special visitor that collects words for the `IndexBuilder`.
    """

    def __init__(self, document):
        NodeVisitor.__init__(self, document)
        self.found_words = []

    def dispatch_visit(self, node):
        if node.__class__ is Text:
            self.found_words.extend(word_re.findall(node.astext()))


class IndexBuilder(object):
    """
    Helper class that creates a search index based on the doctrees
    passed to the `feed` method.
    """
    formats = {
        'json':   dump_json,
        'pickle': pickle.dumps
    }

    def __init__(self):
        self._filenames = {}
        self._mapping = {}
        self._titles = {}
        self._categories = {}
        self._stemmer = Stemmer()

    def dump(self, stream, format):
        """Dump the frozen index to a stream."""
        stream.write(self.formats[format](self.freeze()))

    def freeze(self):
        """
        Create a usable data structure. You can pass this output
        to the `SearchFrontend` to search the index.
        """
        return [
            [k for k, v in sorted(self._filenames.items(),
                                  key=lambda x: x[1])],
            dict(item for item in sorted(self._categories.items(),
                                         key=lambda x: x[0])),
            [v for k, v in sorted(self._titles.items(),
                                  key=lambda x: x[0])],
            dict(item for item in sorted(self._mapping.items(),
                                         key=lambda x: x[0])),
        ]

    def feed(self, filename, category, title, doctree):
        """Feed a doctree to the index."""
        file_id = self._filenames.setdefault(filename, len(self._filenames))
        self._titles[file_id] = title
        visitor = WordCollector(doctree)
        doctree.walk(visitor)
        self._categories.setdefault(category, set()).add(file_id)
        for word in word_re.findall(title) + visitor.found_words:
            self._mapping.setdefault(self._stemmer.stem(word.lower()),
                                     set()).add(file_id)


class SearchFrontend(object):
    """
    This class acts as a frontend for the search index. It can search
    a search index as provided by `IndexBuilder`.
    """

    def __init__(self, index):
        self.filenames, self.areas, self.titles, self.words = index
        self._stemmer = Stemmer()

    def query(self, required, excluded, areas):
        file_map = defaultdict(set)
        for word in required:
            if word not in self.words:
                break
            for fid in self.words[word]:
                file_map[fid].add(word)

        return sorted(((self.filenames[fid], self.titles[fid])
                       for fid, words in file_map.iteritems()
                       if len(words) == len(required) and
                          any(fid in self.areas.get(area, ()) for area in areas) and not
                          any(fid in self.words.get(word, ()) for word in excluded)
                      ), key=lambda x: x[1].lower())

    def search(self, searchstring, areas):
        required = set()
        excluded = set()
        for word in searchstring.split():
            if word.startswith('-'):
                storage = excluded
                word = word[1:]
            else:
                storage = required
            storage.add(self._stemmer.stem(word.lower()))

        return self.query(required, excluded, areas)
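`SearchFrontend.search` splits the query string into required and excluded term sets: a leading `-` marks a word as excluded, and every word is lowercased (and then stemmed) before it lands in either set. The parsing step alone, sketched here without the stemmer:

```python
def parse_query(searchstring):
    # Words prefixed with '-' go to the excluded set; everything else
    # is required. (The stemming step is omitted in this sketch.)
    required, excluded = set(), set()
    for word in searchstring.split():
        if word.startswith('-'):
            excluded.add(word[1:].lower())
        else:
            required.add(word.lower())
    return required, excluded

print(parse_query('Unicode -howto'))
```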
263
sphinx/smartypants.py
Normal file
@@ -0,0 +1,263 @@
r"""
This is based on SmartyPants.py by `Chad Miller`_.

Copyright and License
=====================

SmartyPants_ license::

    Copyright (c) 2003 John Gruber
    (http://daringfireball.net/)
    All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions are
    met:

    *   Redistributions of source code must retain the above copyright
        notice, this list of conditions and the following disclaimer.

    *   Redistributions in binary form must reproduce the above copyright
        notice, this list of conditions and the following disclaimer in
        the documentation and/or other materials provided with the
        distribution.

    *   Neither the name "SmartyPants" nor the names of its contributors
        may be used to endorse or promote products derived from this
        software without specific prior written permission.

    This software is provided by the copyright holders and contributors "as
    is" and any express or implied warranties, including, but not limited
    to, the implied warranties of merchantability and fitness for a
    particular purpose are disclaimed. In no event shall the copyright
    owner or contributors be liable for any direct, indirect, incidental,
    special, exemplary, or consequential damages (including, but not
    limited to, procurement of substitute goods or services; loss of use,
    data, or profits; or business interruption) however caused and on any
    theory of liability, whether in contract, strict liability, or tort
    (including negligence or otherwise) arising in any way out of the use
    of this software, even if advised of the possibility of such damage.


smartypants.py license::

    smartypants.py is a derivative work of SmartyPants.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions are
    met:

    *   Redistributions of source code must retain the above copyright
        notice, this list of conditions and the following disclaimer.

    *   Redistributions in binary form must reproduce the above copyright
        notice, this list of conditions and the following disclaimer in
        the documentation and/or other materials provided with the
        distribution.

    This software is provided by the copyright holders and contributors "as
    is" and any express or implied warranties, including, but not limited
    to, the implied warranties of merchantability and fitness for a
    particular purpose are disclaimed. In no event shall the copyright
    owner or contributors be liable for any direct, indirect, incidental,
    special, exemplary, or consequential damages (including, but not
    limited to, procurement of substitute goods or services; loss of use,
    data, or profits; or business interruption) however caused and on any
    theory of liability, whether in contract, strict liability, or tort
    (including negligence or otherwise) arising in any way out of the use
    of this software, even if advised of the possibility of such damage.

.. _Chad Miller: http://web.chad.org/
"""

import re


def sphinx_smarty_pants(t):
    t = t.replace('&quot;', '"')
    t = educateDashesOldSchool(t)
    t = educateQuotes(t)
    t = t.replace('"', '&quot;')
    return t

# Constants for quote education.

punct_class = r"""[!"#\$\%'()*+,-.\/:;<=>?\@\[\\\]\^_`{|}~]"""
close_class = r"""[^\ \t\r\n\[\{\(\-]"""
dec_dashes = r"""&#8211;|&#8212;"""

# Special case if the very first character is a quote
# followed by punctuation at a non-word-break. Close the quotes by brute force:
single_quote_start_re = re.compile(r"""^'(?=%s\\B)""" % (punct_class,))
double_quote_start_re = re.compile(r"""^"(?=%s\\B)""" % (punct_class,))

# Special case for double sets of quotes, e.g.:
#   <p>He said, "'Quoted' words in a larger quote."</p>
double_quote_sets_re = re.compile(r""""'(?=\w)""")
single_quote_sets_re = re.compile(r"""'"(?=\w)""")

# Special case for decade abbreviations (the '80s):
decade_abbr_re = re.compile(r"""\b'(?=\d{2}s)""")

# Get most opening double quotes:
opening_double_quotes_regex = re.compile(r"""
        (
            \s          |   # a whitespace char, or
            &nbsp;      |   # a non-breaking space entity, or
            --          |   # dashes, or
            &[mn]dash;  |   # named dash entities
            %s          |   # or decimal entities
            &\#x201[34];    # or hex
        )
        "                   # the quote
        (?=\w)              # followed by a word character
        """ % (dec_dashes,), re.VERBOSE)

# Double closing quotes:
closing_double_quotes_regex = re.compile(r"""
        #(%s)?  # character that indicates the quote should be closing
        "
        (?=\s)
        """ % (close_class,), re.VERBOSE)

closing_double_quotes_regex_2 = re.compile(r"""
        (%s)    # character that indicates the quote should be closing
        "
        """ % (close_class,), re.VERBOSE)

# Get most opening single quotes:
opening_single_quotes_regex = re.compile(r"""
        (
            \s          |   # a whitespace char, or
            &nbsp;      |   # a non-breaking space entity, or
            --          |   # dashes, or
            &[mn]dash;  |   # named dash entities
            %s          |   # or decimal entities
            &\#x201[34];    # or hex
        )
        '                   # the quote
        (?=\w)              # followed by a word character
        """ % (dec_dashes,), re.VERBOSE)

closing_single_quotes_regex = re.compile(r"""
        (%s)
        '
        (?!\s | s\b | \d)
        """ % (close_class,), re.VERBOSE)

closing_single_quotes_regex_2 = re.compile(r"""
        (%s)
        '
        (\s | s\b)
        """ % (close_class,), re.VERBOSE)

def educateQuotes(str):
    """
    Parameter:  String.

    Returns:    The string, with "educated" curly quote HTML entities.

    Example input:  "Isn't this fun?"
    Example output: &#8220;Isn&#8217;t this fun?&#8221;
    """

    # Special case if the very first character is a quote
    # followed by punctuation at a non-word-break. Close the quotes by brute force:
    str = single_quote_start_re.sub("&#8217;", str)
    str = double_quote_start_re.sub("&#8221;", str)

    # Special case for double sets of quotes, e.g.:
    #   <p>He said, "'Quoted' words in a larger quote."</p>
    str = double_quote_sets_re.sub("&#8220;&#8216;", str)
    str = single_quote_sets_re.sub("&#8216;&#8220;", str)

    # Special case for decade abbreviations (the '80s):
    str = decade_abbr_re.sub("&#8217;", str)

    str = opening_single_quotes_regex.sub(r"\1&#8216;", str)
    str = closing_single_quotes_regex.sub(r"\1&#8217;", str)
    str = closing_single_quotes_regex_2.sub(r"\1&#8217;\2", str)

    # Any remaining single quotes should be opening ones:
    str = str.replace("'", "&#8216;")

    str = opening_double_quotes_regex.sub(r"\1&#8220;", str)
    str = closing_double_quotes_regex.sub(r"&#8221;", str)
    str = closing_double_quotes_regex_2.sub(r"\1&#8221;", str)

    # Any remaining quotes should be opening ones.
    str = str.replace('"', "&#8220;")

    return str


def educateBackticks(str):
    """
    Parameter:  String.
    Returns:    The string, with ``backticks'' -style double quotes
                translated into HTML curly quote entities.
    Example input:  ``Isn't this fun?''
    Example output: &#8220;Isn't this fun?&#8221;
    """
    return str.replace("``", "&#8220;").replace("''", "&#8221;")


def educateSingleBackticks(str):
    """
    Parameter:  String.
    Returns:    The string, with `backticks' -style single quotes
                translated into HTML curly quote entities.

    Example input:  `Isn't this fun?'
    Example output: &#8216;Isn&#8217;t this fun?&#8217;
    """
    return str.replace('`', "&#8216;").replace("'", "&#8217;")


def educateDashesOldSchool(str):
    """
    Parameter:  String.

    Returns:    The string, with each instance of "--" translated to
                an en-dash HTML entity, and each "---" translated to
                an em-dash HTML entity.
    """
    return str.replace('---', "&#8212;").replace('--', "&#8211;")


def educateDashesOldSchoolInverted(str):
    """
    Parameter:  String.

    Returns:    The string, with each instance of "--" translated to
                an em-dash HTML entity, and each "---" translated to
                an en-dash HTML entity. Two reasons why: First, unlike the
                en- and em-dash syntax supported by
                EducateDashesOldSchool(), it's compatible with existing
                entries written before SmartyPants 1.1, back when "--" was
                only used for em-dashes.  Second, em-dashes are more
                common than en-dashes, and so it sort of makes sense that
                the shortcut should be shorter to type. (Thanks to Aaron
                Swartz for the idea.)
    """
    return str.replace('---', "&#8211;").replace('--', "&#8212;")



def educateEllipses(str):
    """
    Parameter:  String.
    Returns:    The string, with each instance of "..." translated to
                an ellipsis HTML entity.

    Example input:  Huh...?
    Example output: Huh&#8230;?
    """
    return str.replace('...', "&#8230;").replace('. . .', "&#8230;")


__author__ = "Chad Miller <smartypantspy@chad.org>"
__version__ = "1.5_1.5: Sat, 13 Aug 2005 15:50:24 -0400"
__url__ = "http://wiki.chad.org/SmartyPantsPy"
__description__ = \
    "Smart-quotes, smart-ellipses, and smart-dashes for weblog entries in pyblosxom"
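`educateDashesOldSchool` is plain string replacement, with `---` handled before `--` so an em-dash is never consumed as an en-dash plus a stray hyphen. A self-contained restatement of just that function:

```python
def educate_dashes(s):
    # Replace '---' first (em-dash entity &#8212;), then the remaining
    # '--' runs (en-dash entity &#8211;). Order matters.
    return s.replace('---', '&#8212;').replace('--', '&#8211;')

print(educate_dashes('pages 12--14 --- see appendix'))
```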
344
sphinx/stemmer.py
Normal file
@@ -0,0 +1,344 @@
|
||||
#!/usr/bin/env python
|
||||
# -*- coding: utf-8 -*-
|
||||
"""
|
||||
sphinx.stemmer
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
Porter Stemming Algorithm
|
||||
|
||||
This is the Porter stemming algorithm, ported to Python from the
|
||||
version coded up in ANSI C by the author. It may be be regarded
|
||||
as canonical, in that it follows the algorithm presented in
|
||||
|
||||
Porter, 1980, An algorithm for suffix stripping, Program, Vol. 14,
|
||||
no. 3, pp 130-137,
|
||||
|
||||
only differing from it at the points maked --DEPARTURE-- below.
|
||||
|
||||
See also http://www.tartarus.org/~martin/PorterStemmer
|
||||
|
||||
The algorithm as described in the paper could be exactly replicated
|
||||
by adjusting the points of DEPARTURE, but this is barely necessary,
|
||||
because (a) the points of DEPARTURE are definitely improvements, and
|
||||
(b) no encoding of the Porter stemmer I have seen is anything like
|
||||
as exact as this version, even with the points of DEPARTURE!
|
||||
|
||||
Release 1: January 2001
|
||||
|
||||
:copyright: 2001 by Vivake Gupta <v@nano.com>.
|
||||
:license: Public Domain (?).
|
||||
"""
|
||||
|
||||
class PorterStemmer(object):
|
||||
|
||||
def __init__(self):
|
||||
"""The main part of the stemming algorithm starts here.
|
||||
b is a buffer holding a word to be stemmed. The letters are in b[k0],
|
||||
b[k0+1] ... ending at b[k]. In fact k0 = 0 in this demo program. k is
|
||||
readjusted downwards as the stemming progresses. Zero termination is
|
||||
not in fact used in the algorithm.
|
||||
|
||||
Note that only lower case sequences are stemmed. Forcing to lower case
|
||||
should be done before stem(...) is called.
|
||||
"""
|
||||
|
||||
self.b = "" # buffer for word to be stemmed
|
||||
self.k = 0
|
||||
self.k0 = 0
|
||||
self.j = 0 # j is a general offset into the string
|
||||
|
||||
def cons(self, i):
|
||||
"""cons(i) is TRUE <=> b[i] is a consonant."""
|
||||
if self.b[i] == 'a' or self.b[i] == 'e' or self.b[i] == 'i' \
|
||||
or self.b[i] == 'o' or self.b[i] == 'u':
|
||||
return 0
|
||||
if self.b[i] == 'y':
|
||||
if i == self.k0:
|
||||
return 1
|
||||
else:
|
||||
return (not self.cons(i - 1))
|
||||
return 1
|
||||
|
||||
def m(self):
|
||||
"""m() measures the number of consonant sequences between k0 and j.
|
||||
if c is a consonant sequence and v a vowel sequence, and <..>
|
||||
indicates arbitrary presence,
|
||||
|
||||
<c><v> gives 0
|
||||
<c>vc<v> gives 1
|
||||
<c>vcvc<v> gives 2
|
||||
<c>vcvcvc<v> gives 3
|
||||
....
|
||||
"""
|
||||
n = 0
|
||||
i = self.k0
|
||||
while 1:
|
||||
if i > self.j:
|
||||
return n
|
||||
if not self.cons(i):
|
||||
break
|
||||
i = i + 1
|
||||
i = i + 1
|
||||
while 1:
|
||||
while 1:
|
||||
if i > self.j:
|
||||
return n
|
||||
if self.cons(i):
|
||||
break
|
||||
i = i + 1
|
||||
i = i + 1
|
||||
n = n + 1
|
||||
while 1:
|
||||
if i > self.j:
|
||||
return n
|
||||
if not self.cons(i):
|
||||
break
|
||||
i = i + 1
|
||||
i = i + 1
|
||||
|
||||
    def vowelinstem(self):
        """vowelinstem() is TRUE <=> k0,...j contains a vowel"""
        for i in range(self.k0, self.j + 1):
            if not self.cons(i):
                return 1
        return 0

    def doublec(self, j):
        """doublec(j) is TRUE <=> j,(j-1) contain a double consonant."""
        if j < (self.k0 + 1):
            return 0
        if (self.b[j] != self.b[j-1]):
            return 0
        return self.cons(j)

    def cvc(self, i):
        """cvc(i) is TRUE <=> i-2,i-1,i has the form consonant - vowel - consonant
        and also if the second c is not w,x or y. this is used when trying to
        restore an e at the end of a short word, e.g.

           cav(e), lov(e), hop(e), crim(e), but
           snow, box, tray.
        """
        if i < (self.k0 + 2) or not self.cons(i) or self.cons(i-1) or not self.cons(i-2):
            return 0
        ch = self.b[i]
        if ch == 'w' or ch == 'x' or ch == 'y':
            return 0
        return 1

    def ends(self, s):
        """ends(s) is TRUE <=> k0,...k ends with the string s."""
        length = len(s)
        if s[length - 1] != self.b[self.k]:  # tiny speed-up
            return 0
        if length > (self.k - self.k0 + 1):
            return 0
        if self.b[self.k-length+1:self.k+1] != s:
            return 0
        self.j = self.k - length
        return 1

    def setto(self, s):
        """setto(s) sets (j+1),...k to the characters in the string s, readjusting k."""
        length = len(s)
        self.b = self.b[:self.j+1] + s + self.b[self.j+length+1:]
        self.k = self.j + length

    def r(self, s):
        """r(s) is used further down."""
        if self.m() > 0:
            self.setto(s)

    def step1ab(self):
        """step1ab() gets rid of plurals and -ed or -ing. e.g.

           caresses  ->  caress
           ponies    ->  poni
           ties      ->  ti
           caress    ->  caress
           cats      ->  cat

           feed      ->  feed
           agreed    ->  agree
           disabled  ->  disable

           matting   ->  mat
           mating    ->  mate
           meeting   ->  meet
           milling   ->  mill
           messing   ->  mess

           meetings  ->  meet
        """
        if self.b[self.k] == 's':
            if self.ends("sses"):
                self.k = self.k - 2
            elif self.ends("ies"):
                self.setto("i")
            elif self.b[self.k - 1] != 's':
                self.k = self.k - 1
        if self.ends("eed"):
            if self.m() > 0:
                self.k = self.k - 1
        elif (self.ends("ed") or self.ends("ing")) and self.vowelinstem():
            self.k = self.j
            if self.ends("at"):   self.setto("ate")
            elif self.ends("bl"): self.setto("ble")
            elif self.ends("iz"): self.setto("ize")
            elif self.doublec(self.k):
                self.k = self.k - 1
                ch = self.b[self.k]
                if ch == 'l' or ch == 's' or ch == 'z':
                    self.k = self.k + 1
            elif (self.m() == 1 and self.cvc(self.k)):
                self.setto("e")

    def step1c(self):
        """step1c() turns terminal y to i when there is another vowel in the stem."""
        if (self.ends("y") and self.vowelinstem()):
            self.b = self.b[:self.k] + 'i' + self.b[self.k+1:]

    def step2(self):
        """step2() maps double suffices to single ones.
        so -ization ( = -ize plus -ation) maps to -ize etc. note that the
        string before the suffix must give m() > 0.
        """
        if self.b[self.k - 1] == 'a':
            if self.ends("ational"):   self.r("ate")
            elif self.ends("tional"):  self.r("tion")
        elif self.b[self.k - 1] == 'c':
            if self.ends("enci"):      self.r("ence")
            elif self.ends("anci"):    self.r("ance")
        elif self.b[self.k - 1] == 'e':
            if self.ends("izer"):      self.r("ize")
        elif self.b[self.k - 1] == 'l':
            if self.ends("bli"):       self.r("ble")  # --DEPARTURE--
            # To match the published algorithm, replace this phrase with
            #   if self.ends("abli"):  self.r("able")
            elif self.ends("alli"):    self.r("al")
            elif self.ends("entli"):   self.r("ent")
            elif self.ends("eli"):     self.r("e")
            elif self.ends("ousli"):   self.r("ous")
        elif self.b[self.k - 1] == 'o':
            if self.ends("ization"):   self.r("ize")
            elif self.ends("ation"):   self.r("ate")
            elif self.ends("ator"):    self.r("ate")
        elif self.b[self.k - 1] == 's':
            if self.ends("alism"):     self.r("al")
            elif self.ends("iveness"): self.r("ive")
            elif self.ends("fulness"): self.r("ful")
            elif self.ends("ousness"): self.r("ous")
        elif self.b[self.k - 1] == 't':
            if self.ends("aliti"):     self.r("al")
            elif self.ends("iviti"):   self.r("ive")
            elif self.ends("biliti"):  self.r("ble")
        elif self.b[self.k - 1] == 'g':  # --DEPARTURE--
            if self.ends("logi"):      self.r("log")
            # To match the published algorithm, delete this phrase

    def step3(self):
        """step3() deals with -ic-, -ful, -ness etc. similar strategy to step2."""
        if self.b[self.k] == 'e':
            if self.ends("icate"):   self.r("ic")
            elif self.ends("ative"): self.r("")
            elif self.ends("alize"): self.r("al")
        elif self.b[self.k] == 'i':
            if self.ends("iciti"):   self.r("ic")
        elif self.b[self.k] == 'l':
            if self.ends("ical"):    self.r("ic")
            elif self.ends("ful"):   self.r("")
        elif self.b[self.k] == 's':
            if self.ends("ness"):    self.r("")

    def step4(self):
        """step4() takes off -ant, -ence etc., in context <c>vcvc<v>."""
        if self.b[self.k - 1] == 'a':
            if self.ends("al"): pass
            else: return
        elif self.b[self.k - 1] == 'c':
            if self.ends("ance"): pass
            elif self.ends("ence"): pass
            else: return
        elif self.b[self.k - 1] == 'e':
            if self.ends("er"): pass
            else: return
        elif self.b[self.k - 1] == 'i':
            if self.ends("ic"): pass
            else: return
        elif self.b[self.k - 1] == 'l':
            if self.ends("able"): pass
            elif self.ends("ible"): pass
            else: return
        elif self.b[self.k - 1] == 'n':
            if self.ends("ant"): pass
            elif self.ends("ement"): pass
            elif self.ends("ment"): pass
            elif self.ends("ent"): pass
            else: return
        elif self.b[self.k - 1] == 'o':
            if self.ends("ion") and (self.b[self.j] == 's'
                                     or self.b[self.j] == 't'): pass
            elif self.ends("ou"): pass
            # takes care of -ous
            else: return
        elif self.b[self.k - 1] == 's':
            if self.ends("ism"): pass
            else: return
        elif self.b[self.k - 1] == 't':
            if self.ends("ate"): pass
            elif self.ends("iti"): pass
            else: return
        elif self.b[self.k - 1] == 'u':
            if self.ends("ous"): pass
            else: return
        elif self.b[self.k - 1] == 'v':
            if self.ends("ive"): pass
            else: return
        elif self.b[self.k - 1] == 'z':
            if self.ends("ize"): pass
            else: return
        else:
            return
        if self.m() > 1:
            self.k = self.j

    def step5(self):
        """step5() removes a final -e if m() > 1, and changes -ll to -l if
        m() > 1.
        """
        self.j = self.k
        if self.b[self.k] == 'e':
            a = self.m()
            if a > 1 or (a == 1 and not self.cvc(self.k-1)):
                self.k = self.k - 1
        if self.b[self.k] == 'l' and self.doublec(self.k) and self.m() > 1:
            self.k = self.k - 1

    def stem(self, p, i, j):
        """In stem(p,i,j), p is a char pointer, and the string to be stemmed
        is from p[i] to p[j] inclusive. Typically i is zero and j is the
        offset to the last character of a string, (p[j+1] == '\0'). The
        stemmer adjusts the characters p[i] ... p[j] and returns the new
        end-point of the string, k. Stemming never increases word length, so
        i <= k <= j. To turn the stemmer into a module, declare 'stem' as
        extern, and delete the remainder of this file.
        """
        # copy the parameters into statics
        self.b = p
        self.k = j
        self.k0 = i
        if self.k <= self.k0 + 1:
            return self.b  # --DEPARTURE--

        # With this line, strings of length 1 or 2 don't go through the
        # stemming process, although no mention is made of this in the
        # published algorithm. Remove the line to match the published
        # algorithm.

        self.step1ab()
        self.step1c()
        self.step2()
        self.step3()
        self.step4()
        self.step5()
        return self.b[self.k0:self.k+1]

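The stem() docstring above keeps the C calling convention of the original: a buffer b with offsets k0, j, and k, where ends() records the suffix start in j and setto() splices in a replacement and readjusts k. A minimal standalone sketch of that discipline (a hypothetical helper written for illustration, not part of this file):

```python
def replace_suffix(b, k, old, new):
    """If the word b[:k+1] ends with `old`, splice in `new`.

    Returns (buffer, new_k), mirroring the ends()/setto() pair:
    j marks where the suffix starts, and k moves to the new word end.
    """
    length = len(old)
    if b[k - length + 1:k + 1] != old:
        return b, k                              # ends() failed: no change
    j = k - length                               # ends() records j
    b = b[:j + 1] + new + b[j + 1 + length:]     # setto() splices the buffer
    return b, j + len(new)                       # k readjusted

word, k = "ponies", 5
word, k = replace_suffix(word, k, "ies", "i")
# word[:k+1] is now "poni", as in the step1ab() example table
```

Note that characters past k are left in the buffer untouched; only the final slice in stem() discards them, which is why the methods above compare and rewrite slices rather than truncating eagerly.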
162
sphinx/style/admin.css
Normal file
@@ -0,0 +1,162 @@
/**
 * Sphinx Admin Panel
 */

div.admin {
    margin: 0 -20px -30px -20px;
    padding: 0 20px 10px 20px;
    background-color: #f2f2f2;
    color: black;
}

div.admin a {
    color: #333;
    text-decoration: underline;
}

div.admin a:hover {
    color: black;
}

div.admin h1,
div.admin h2 {
    background-color: #555;
    border-bottom: 1px solid #222;
    color: white;
}

div.admin form form {
    display: inline;
}

div.admin input, div.admin textarea {
    font-family: 'Bitstream Vera Sans', 'Arial', sans-serif;
    font-size: 13px;
    color: #333;
    padding: 2px;
    background-color: #fff;
    border: 1px solid #aaa;
}

div.admin input[type="reset"],
div.admin input[type="submit"] {
    cursor: pointer;
    font-weight: bold;
    padding: 2px;
}

div.admin input[type="reset"]:hover,
div.admin input[type="submit"]:hover {
    border: 1px solid #333;
}

div.admin div.actions {
    margin: 10px 0 0 0;
    padding: 5px;
    background-color: #aaa;
    border: 1px solid #777;
}

div.admin div.error {
    margin: 10px 0 0 0;
    padding: 5px;
    border: 2px solid #222;
    background-color: #ccc;
    font-weight: bold;
}

div.admin div.dialog {
    background-color: #ccc;
    margin: 10px 0 10px 0;
}

div.admin div.dialog h2 {
    margin: 0;
    font-size: 18px;
    padding: 4px 10px 4px 10px;
}

div.admin div.dialog div.text {
    padding: 10px;
}

div.admin div.dialog div.buttons {
    padding: 5px 10px 5px 10px;
}

div.admin table.mapping {
    width: 100%;
    border: 1px solid #999;
    border-collapse: collapse;
    background-color: #aaa;
}

div.admin table.mapping th {
    background-color: #ddd;
    border-bottom: 1px solid #888;
    padding: 5px;
}

div.admin table.mapping th.recent_comments {
    background-color: #c5cba4;
}

div.admin table.mapping,
div.admin table.mapping a {
    color: black;
}

div.admin table.mapping td {
    border: 1px solid #888;
    border-left: none;
    border-right: none;
    text-align: left;
    line-height: 24px;
    padding: 0 5px 0 5px;
}

div.admin table.mapping tr:hover {
    background-color: #888;
}

div.admin table.mapping td.username {
    width: 180px;
}

div.admin table.mapping td.pub_date {
    font-style: italic;
    text-align: right;
}

div.admin table.mapping td.groups input {
    width: 100%;
}

div.admin table.mapping td.actions input {
    padding: 0;
}

div.admin table.mapping .actions {
    text-align: right;
    width: 70px;
}

div.admin table.mapping span.meta {
    font-size: 11px;
    color: #222;
}

div.admin table.mapping span.meta a {
    color: #222;
}

div.admin div.detail_form dt {
    clear: both;
    float: left;
    width: 110px;
}

div.admin div.detail_form textarea {
    width: 98%;
    height: 160px;
}
BIN
sphinx/style/comment.png
Normal file
After Width: | Height: | Size: 401 B |
764
sphinx/style/default.css
Normal file
@@ -0,0 +1,764 @@
/**
 * Python Doc Design
 */

body {
    font-family: 'Bitstream Vera Sans', 'Arial', sans-serif;
    font-size: 13px;
    background-color: #11303d;
    color: #000;
    margin: 0;
    padding: 0;
}

/* :::: LAYOUT :::: */

div.document {
    background-color: #1c4e63;
}

div.documentwrapper {
    float: left;
    width: 100%;
}

div.bodywrapper {
    margin: 0 0 0 230px;
}

div.body {
    background-color: white;
    padding: 0 20px 30px 20px;
}

div.sidebarwrapper {
    padding: 10px 5px 0 10px;
}

div.sidebar {
    float: left;
    width: 230px;
    margin-left: -100%;
}

div.clearer {
    clear: both;
}

div.footer {
    color: #fff;
    width: 100%;
    padding: 9px 0 9px 0;
    text-align: center;
}

div.footer a {
    color: #fff;
    text-decoration: underline;
}

div.related {
    background-color: #133f52;
    color: #fff;
    width: 100%;
    height: 30px;
    line-height: 30px;
}

div.related h3 {
    display: none;
}

div.related ul {
    margin: 0;
    padding: 0 0 0 10px;
    list-style: none;
}

div.related li {
    display: inline;
}

div.related li.right {
    float: right;
    margin-right: 5px;
}

div.related a {
    color: white;
}

/* ::: TOC :::: */
div.sidebar h3 {
    font-family: 'Trebuchet MS', sans-serif;
    color: white;
    font-size: 24px;
    font-weight: normal;
    margin: 0;
    padding: 0;
}

div.sidebar h4 {
    font-family: 'Trebuchet MS', sans-serif;
    color: white;
    font-size: 16px;
    font-weight: normal;
    margin: 5px 0 0 0;
    padding: 0;
}

div.sidebar p {
    color: white;
}

div.sidebar p.topless {
    margin: 5px 10px 10px 10px;
}

div.sidebar ul {
    margin: 10px;
    padding: 0;
    list-style: none;
    color: white;
}

div.sidebar ul ul,
div.sidebar ul.want-points {
    margin-left: 20px;
    list-style: square;
}

div.sidebar ul ul {
    margin-top: 0;
    margin-bottom: 0;
}

div.sidebar a {
    color: #98dbcc;
}

div.sidebar form {
    margin-top: 10px;
}

div.sidebar input {
    border: 1px solid #98dbcc;
    font-family: 'Bitstream Vera Sans', 'Arial', sans-serif;
    font-size: 1em;
}

/* :::: MODULE CLOUD :::: */
div.modulecloud {
    margin: -5px 10px 5px 10px;
    padding: 10px;
    font-size: 110%;
    line-height: 160%;
    border: 1px solid #cbe7e5;
    background-color: #f2fbfd;
}

div.modulecloud a {
    padding: 0 5px 0 5px;
}

/* :::: SEARCH :::: */
ul.search {
    margin: 10px 0 0 20px;
    padding: 0;
}

ul.search li {
    padding: 5px 0 5px 20px;
    background-image: url(file.png);
    background-repeat: no-repeat;
    background-position: 0 7px;
}

ul.search li a {
    font-weight: bold;
}

ul.search li div.context {
    color: #888;
    margin: 2px 0 0 30px;
    text-align: left;
}

ul.keywordmatches li.goodmatch a {
    font-weight: bold;
}

/* :::: COMMON FORM STYLES :::: */

div.actions {
    padding: 5px 10px 5px 10px;
    border-top: 1px solid #cbe7e5;
    border-bottom: 1px solid #cbe7e5;
    background-color: #e0f6f4;
}

form dl {
    color: #333;
}

form dt {
    clear: both;
    float: left;
    min-width: 110px;
    margin-right: 10px;
    padding-top: 2px;
}

input#homepage {
    display: none;
}

div.error {
    margin: 5px 20px 0 0;
    padding: 5px;
    border: 1px solid #d00;
    font-weight: bold;
}

/* :::: INLINE COMMENTS :::: */

div.inlinecomments {
    position: absolute;
    right: 20px;
}

div.inlinecomments a.bubble {
    display: block;
    float: right;
    background-image: url(style/comment.png);
    background-repeat: no-repeat;
    width: 25px;
    height: 25px;
    text-align: center;
    padding-top: 3px;
    font-size: 12px;
    line-height: 14px;
    font-weight: bold;
    color: black;
}

div.inlinecomments a.bubble span {
    display: none;
}

div.inlinecomments a.emptybubble {
    background-image: url(style/nocomment.png);
}

div.inlinecomments a.bubble:hover {
    background-image: url(style/hovercomment.png);
    text-decoration: none;
    color: #3ca0a4;
}

div.inlinecomments div.comments {
    float: right;
    margin: 25px 5px 0 0;
    max-width: 50em;
    min-width: 30em;
    border: 1px solid #2eabb0;
    background-color: #f2fbfd;
    z-index: 150;
}

div#comments {
    border: 1px solid #2eabb0;
}

div#comments div.nocomments {
    padding: 10px;
    font-weight: bold;
}

div.inlinecomments div.comments h3,
div#comments h3 {
    margin: 0;
    padding: 0;
    background-color: #2eabb0;
    color: white;
    border: none;
    padding: 3px;
}

div.inlinecomments div.comments div.actions {
    padding: 4px;
    margin: 0;
    border-top: none;
}

div#comments div.comment {
    margin: 10px;
    border: 1px solid #2eabb0;
}

div.inlinecomments div.comment h4,
div.commentwindow div.comment h4,
div#comments div.comment h4 {
    margin: 10px 0 0 0;
    background-color: #2eabb0;
    color: white;
    border: none;
    padding: 1px 4px 1px 4px;
}

div#comments div.comment h4 {
    margin: 0;
}

div#comments div.comment h4 a {
    color: #d5f4f4;
}

div.inlinecomments div.comment div.text,
div.commentwindow div.comment div.text,
div#comments div.comment div.text {
    margin: -5px 0 -5px 0;
    padding: 0 10px 0 10px;
}

div.inlinecomments div.comment div.meta,
div.commentwindow div.comment div.meta,
div#comments div.comment div.meta {
    text-align: right;
    padding: 2px 10px 2px 0;
    font-size: 95%;
    color: #538893;
    border-top: 1px solid #cbe7e5;
    background-color: #e0f6f4;
}

div.commentwindow {
    position: absolute;
    width: 500px;
    border: 1px solid #cbe7e5;
    background-color: #f2fbfd;
    display: none;
    z-index: 130;
}

div.commentwindow h3 {
    margin: 0;
    background-color: #2eabb0;
    color: white;
    border: none;
    padding: 5px;
    font-size: 22px;
    cursor: pointer;
}

div.commentwindow div.actions {
    margin: 10px -10px 0 -10px;
    padding: 4px 10px 4px 10px;
    color: #538893;
}

div.commentwindow div.actions input {
    border: 1px solid #2eabb0;
    background-color: white;
    color: #135355;
    cursor: pointer;
}

div.commentwindow div.form {
    padding: 0 10px 0 10px;
}

div.commentwindow div.form input,
div.commentwindow div.form textarea {
    border: 1px solid #3c9ea2;
    background-color: white;
    color: black;
}

div.commentwindow div.error {
    margin: 10px 5px 10px 5px;
    background-color: #fbe5dc;
    display: none;
}

div.commentwindow div.form textarea {
    width: 99%;
}

div.commentwindow div.preview {
    margin: 10px 0 10px 0;
    background-color: #70d0d4;
    padding: 0 1px 1px 25px;
}

div.commentwindow div.preview h4 {
    margin: 0 0 -5px -20px;
    padding: 4px 0 0 4px;
    color: white;
    font-size: 18px;
}

div.commentwindow div.preview div.comment {
    background-color: #f2fbfd;
}

div.commentwindow div.preview div.comment h4 {
    margin: 10px 0 0 0!important;
    padding: 1px 4px 1px 4px!important;
    font-size: 16px;
}

/* :::: SUGGEST CHANGES :::: */
div#suggest-changes-box input, div#suggest-changes-box textarea {
    border: 1px solid #ccc;
    background-color: white;
    color: black;
}

div#suggest-changes-box textarea {
    width: 99%;
    height: 400px;
}


/* :::: PREVIEW :::: */
div.preview {
    background-image: url(style/preview.png);
    padding: 0 20px 20px 20px;
    margin-bottom: 30px;
}


/* :::: INDEX PAGE :::: */

table.contentstable {
    width: 90%;
}

table.contentstable p.biglink {
    line-height: 150%;
}

a.biglink {
    font-size: 1.5em;
}

span.linkdescr {
    font-style: italic;
    padding-top: 5px;
}

/* :::: INDEX STYLES :::: */

table.indextable td {
    text-align: left;
    vertical-align: top;
}

table.indextable dl, table.indextable dd {
    margin-top: 0;
    margin-bottom: 0;
}

table.indextable tr.pcap {
    height: 10px;
}

table.indextable tr.cap {
    margin-top: 10px;
    background-color: #f2f2f2;
}

img.toggler {
    margin-right: 3px;
    margin-top: 3px;
    cursor: pointer;
}

form.pfform {
    margin: 10px 0 20px 0;
}

/* :::: GLOBAL STYLES :::: */

.docwarning {
    background-color: #ffe4e4;
    padding: 10px;
    margin: 0 -20px 0 -20px;
    border-bottom: 1px solid #f66;
}

p.subhead {
    font-weight: bold;
    margin-top: 20px;
}

a {
    color: #355f7c;
    text-decoration: none;
}

a:hover {
    text-decoration: underline;
}

div.body h1,
div.body h2,
div.body h3,
div.body h4,
div.body h5,
div.body h6 {
    font-family: 'Trebuchet MS', sans-serif;
    background-color: #f2f2f2;
    font-weight: normal;
    color: #20435c;
    border-bottom: 1px solid #ccc;
    margin: 20px -20px 10px -20px;
    padding: 3px 0 3px 10px;
}

div.body h1 { margin-top: 0; font-size: 30px; }
div.body h2 { font-size: 25px; }
div.body h3 { font-size: 21px; }
div.body h4 { font-size: 18px; }
div.body h5 { font-size: 14px; }
div.body h6 { font-size: 12px; }

a.headerlink {
    color: #c60f0f;
    font-size: 0.8em;
    padding: 0 4px 0 4px;
    text-decoration: none;
    visibility: hidden;
}

*:hover > a.headerlink {
    visibility: visible;
}

a.headerlink:hover {
    background-color: #c60f0f;
    color: white;
}

div.body p, div.body dd, div.body li {
    text-align: justify;
    line-height: 130%;
}

div.body td {
    text-align: left;
}

ul.fakelist {
    list-style: none;
    margin: 10px 0 10px 20px;
    padding: 0;
}

/* "Footnotes" heading */
p.rubric {
    margin-top: 30px;
    font-weight: bold;
}

/* Admonitions */

div.admonition {
    margin-top: 10px;
    margin-bottom: 10px;
    padding: 10px 10px 0px 10px;
}

div.admonition dt {
    font-weight: bold;
}

div.admonition dd {
    margin-bottom: 10px;
}

div.seealso {
    background-color: #ffc;
    border: 1px solid #ff6;
}

div.warning {
    background-color: #ffe4e4;
    border: 1px solid #f66;
}

div.note {
    background-color: #eee;
    border: 1px solid #ccc;
}

p.admonition-title {
    margin: 0px 0px 5px 0px;
    font-weight: bold;
    font-size: 1.1em;
}

table.docutils {
    border: 0;
}

table.docutils td, table.docutils th {
    margin: 2px;
    border-top: 0;
    border-left: 0;
    border-right: 0;
    border-bottom: 1px solid #aaa;
}

table.field-list td, table.field-list th {
    border: 0 !important;
}

table.footnote td, table.footnote th {
    border: 0 !important;
}

dl {
    margin-bottom: 15px;
    clear: both;
}

dd p {
    margin-top: 0px;
}

dd ul, dd table {
    margin-bottom: 10px;
}

dd {
    margin-top: 3px;
    margin-bottom: 10px;
    margin-left: 30px;
}

.refcount {
    color: #060;
}

dt:target,
.highlight {
    background-color: #fbe54e;
}

th {
    text-align: left;
    padding-right: 5px;
}

pre {
    font-family: 'Bitstream Vera Sans Mono', monospace;
    padding: 5px;
    background-color: #efc;
    color: #333;
    border: 1px solid #ac9;
    border-left: none;
    border-right: none;
}

tt {
    font-family: 'Bitstream Vera Sans Mono', monospace;
    background-color: #ecf0f3;
    padding: 1px;
}

tt.descname {
    background-color: transparent;
    font-weight: bold;
    font-size: 1.2em;
}

tt.descclassname {
    background-color: transparent;
}

tt.xref, a tt {
    background-color: transparent;
    font-weight: bold;
}

.footnote:target { background-color: #ffa }

h1 tt, h2 tt, h3 tt, h4 tt, h5 tt, h6 tt {
    background-color: transparent;
}

.optional {
    font-size: 1.3em;
}

.versionmodified {
    font-style: italic;
}

form.comment {
    margin: 0;
    padding: 10px 30px 10px 30px;
    background-color: #eee;
}

form.comment h3 {
    background-color: #326591;
    color: white;
    margin: -10px -30px 10px -30px;
    padding: 5px;
    font-size: 1.4em;
}

form.comment input,
form.comment textarea {
    border: 1px solid #ccc;
    padding: 2px;
    font-family: 'Bitstream Vera Sans', 'Verdana', sans-serif;
    font-size: 13px;
}

form.comment input[type="text"] {
    width: 240px;
}

form.comment textarea {
    width: 100%;
    height: 200px;
    margin-bottom: 10px;
}

/* :::: PRINT :::: */
@media print {
    div.documentwrapper {
        width: 100%;
    }

    div.body {
        margin: 0;
    }

    div.sidebar,
    div.related,
    div.footer,
    div#comments div.new-comment-box,
    #top-link {
        display: none;
    }
}
349
sphinx/style/doctools.js
Normal file
@@ -0,0 +1,349 @@
|
||||
/// XXX: make it cross browser
|
||||
|
||||
/**
|
||||
* make the code below compatible with browsers without
|
||||
* an installed firebug like debugger
|
||||
*/
|
||||
if (!window.console || !console.firebug) {
|
||||
var names = ["log", "debug", "info", "warn", "error", "assert", "dir", "dirxml",
|
||||
"group", "groupEnd", "time", "timeEnd", "count", "trace", "profile", "profileEnd"];
|
||||
window.console = {};
|
||||
for (var i = 0; i < names.length; ++i)
|
||||
window.console[names[i]] = function() {}
|
||||
}
|
||||
|
||||
/**
|
||||
* small helper function to urldecode strings
|
||||
*/
|
||||
jQuery.urldecode = function(x) {
|
||||
return decodeURIComponent(x).replace(/\+/g, ' ');
|
||||
}
|
||||
|
||||
/**
|
||||
* small helper function to urlencode strings
|
||||
*/
|
||||
jQuery.urlencode = encodeURIComponent;
|
||||
|
||||
/**
|
||||
* This function returns the parsed url parameters of the
|
||||
* current request. Multiple values per key are supported,
|
||||
* it will always return arrays of strings for the value parts.
|
||||
*/
|
||||
jQuery.getQueryParameters = function(s) {
|
||||
if (typeof s == 'undefined')
|
||||
s = document.location.search;
|
||||
var parts = s.substr(s.indexOf('?') + 1).split('&');
|
||||
var result = {};
|
||||
for (var i = 0; i < parts.length; i++) {
|
||||
var tmp = parts[i].split('=', 2);
|
||||
var key = jQuery.urldecode(tmp[0]);
|
||||
var value = jQuery.urldecode(tmp[1]);
|
||||
if (key in result)
|
||||
result[key].push(value);
|
||||
else
|
||||
result[key] = [value];
|
||||
}
|
||||
return result;
|
||||
}
|
||||
|
||||
/**
|
||||
* small function to check if an array contains
|
||||
* a given item.
|
||||
*/
|
||||
jQuery.contains = function(arr, item) {
|
||||
for (var i = 0; i < arr.length; i++) {
|
||||
if (arr[i] == item)
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
/**
|
||||
* highlight a given string on a jquery object by wrapping it in
|
||||
* span elements with the given class name.
|
||||
*/
|
||||
jQuery.fn.highlightText = function(text, className) {
|
||||
function highlight(node) {
|
||||
if (node.nodeType == 3) {
|
||||
var val = node.nodeValue;
|
||||
var pos = val.toLowerCase().indexOf(text);
|
||||
if (pos >= 0 && !jQuery.className.has(node.parentNode, className)) {
|
||||
var span = document.createElement("span");
|
||||
span.className = className;
|
||||
span.appendChild(document.createTextNode(val.substr(pos, text.length)));
|
||||
node.parentNode.insertBefore(span, node.parentNode.insertBefore(
|
||||
document.createTextNode(val.substr(pos + text.length)),
|
||||
node.nextSibling));
|
||||
node.nodeValue = val.substr(0, pos);
|
||||
}
|
||||
}
|
||||
else if (!jQuery(node).is("button, select, textarea")) {
|
||||
jQuery.each(node.childNodes, function() {
|
||||
highlight(this)
|
||||
});
|
||||
}
|
||||
}
|
||||
return this.each(function() {
|
||||
highlight(this);
|
||||
});
|
||||
}
|
||||

/**
 * Small JavaScript module for the documentation.
 */
var Documentation = {

  init : function() {
    this.addContextElements();
    this.fixFirefoxAnchorBug();
    this.highlightSearchWords();
    this.initModIndex();
    this.initComments();
  },

  /**
   * add context elements like header anchor links
   */
  addContextElements : function() {
    for (var i = 1; i <= 6; i++) {
      $('h' + i + '[@id]').each(function() {
        $('<a class="headerlink">\u00B6</a>').
          attr('href', '#' + this.id).
          attr('title', 'Permalink to this headline').
          appendTo(this);
      });
    }
    $('dt[@id]').each(function() {
      $('<a class="headerlink">\u00B6</a>').
        attr('href', '#' + this.id).
        attr('title', 'Permalink to this definition').
        appendTo(this);
    });
  },

  /**
   * workaround a firefox stupidity
   */
  fixFirefoxAnchorBug : function() {
    if (document.location.hash && $.browser.mozilla)
      window.setTimeout(function() {
        document.location.href += '';
      }, 10);
  },

  /**
   * highlight the search words provided in the url in the text
   */
  highlightSearchWords : function() {
    var params = $.getQueryParameters();
    var terms = (params.highlight) ? params.highlight[0].split(/\s+/) : [];
    if (terms.length) {
      var body = $('div.body');
      window.setTimeout(function() {
        $.each(terms, function() {
          body.highlightText(this.toLowerCase(), 'highlight');
        });
      }, 10);
      $('<li class="highlight-link"><a href="javascript:Documentation.' +
        'hideSearchWords()">Hide Search Matches</a></li>')
        .appendTo($('.sidebar .this-page-menu'));
    }
  },

  /**
   * init the modindex toggle buttons
   */
  initModIndex : function() {
    $('img.toggler').click(function() {
      var src = $(this).attr('src');
      var idnum = $(this).attr('id').substr(7);
      $('tr.cg-' + idnum).toggle();
      if (src.substr(-9) == 'minus.png')
        $(this).attr('src', src.substr(0, src.length-9) + 'plus.png');
      else
        $(this).attr('src', src.substr(0, src.length-8) + 'minus.png');
    }).css('display', '').click();
  },

  /**
   * init the inline comments
   */
  initComments : function() {
    $('.inlinecomments div.actions').each(function() {
      this.innerHTML += ' | ';
      $(this).append($('<a href="#">hide comments</a>').click(function() {
        $(this).parent().parent().toggle();
        return false;
      }));
    });
    $('.inlinecomments .comments').hide();
    $('.inlinecomments a.bubble').each(function() {
      $(this).click($(this).is('.emptybubble') ? function() {
        var params = $.getQueryParameters(this.href);
        Documentation.newComment(params.target[0]);
        return false;
      } : function() {
        $('.comments', $(this).parent().parent()[0]).toggle();
        return false;
      });
    });
    $('#comments div.actions a.newcomment').click(function() {
      Documentation.newComment();
      return false;
    });
    if (document.location.hash.match(/^#comment-/))
      $('.inlinecomments .comments ' + document.location.hash)
        .parent().toggle();
  },

  /**
   * helper function to hide the search marks again
   */
  hideSearchWords : function() {
    $('.sidebar .this-page-menu li.highlight-link').fadeOut(300);
    $('span.highlight').removeClass('highlight');
  },

  /**
   * show the comment window for a certain id or the whole page.
   */
  newComment : function(id) {
    Documentation.CommentWindow.openFor(id || '');
  },

  /**
   * write a new comment from within a comment view box
   */
  newCommentFromBox : function(link) {
    var params = $.getQueryParameters(link.href);
    $(link).parent().parent().fadeOut('slow');
    this.newComment(params.target);
  },

  /**
   * make the url absolute
   */
  makeURL : function(relativeURL) {
    return DOCUMENTATION_OPTIONS.URL_ROOT + '/' + relativeURL;
  },

  /**
   * get the current relative url
   */
  getCurrentURL : function() {
    var path = document.location.pathname;
    var parts = path.split(/\//);
    $.each(DOCUMENTATION_OPTIONS.URL_ROOT.split(/\//), function() {
      if (this == '..')
        parts.pop();
    });
    var url = parts.join('/');
    return path.substring(url.lastIndexOf('/') + 1, path.length - 1);
  },

  /**
   * class that represents the comment window
   */
  CommentWindow : (function() {
    var openWindows = {};

    var Window = function(sectionID) {
      this.url = Documentation.makeURL('@comments/' + Documentation.getCurrentURL()
                                       + '/?target=' + $.urlencode(sectionID) + '&mode=ajax');
      this.sectionID = sectionID;

      this.root = $('<div class="commentwindow"></div>');
      this.root.appendTo($('body'));
      this.title = $('<h3>New Comment</h3>').appendTo(this.root);
      this.body = $('<div class="form">please wait...</div>').appendTo(this.root);
      this.resizeHandle = $('<div class="resizehandle"></div>').appendTo(this.root);

      this.root.Draggable({
        handle: this.title[0]
      });

      this.root.css({
        left: window.innerWidth / 2 - $(this.root).width() / 2,
        top: window.scrollY + (window.innerHeight / 2 - 150)
      });
      this.root.fadeIn('slow');
      this.updateView();
    };

    Window.prototype.updateView = function(data) {
      var self = this;
      function update(data) {
        if (data.posted) {
          document.location.hash = '#comment-' + data.commentID;
          document.location.reload();
        }
        else {
          self.body.html(data.body);
          $('div.actions', self.body).append($('<input>')
            .attr('type', 'button')
            .attr('value', 'Close')
            .click(function() { self.close(); })
          );
          $('div.actions input[@name="preview"]')
            .attr('type', 'button')
            .click(function() { self.submitForm($('form', self.body)[0], true); });
          $('form', self.body).bind("submit", function() {
            self.submitForm(this);
            return false;
          });

          if (data.error) {
            self.root.Highlight(1000, '#aadee1');
            $('div.error', self.root).slideDown(500);
          }
        }
      }

      if (typeof data == 'undefined')
        $.getJSON(this.url, function(json) { update(json); });
      else
        $.ajax({
          url: this.url,
          type: 'POST',
          dataType: 'json',
          data: data,
          success: function(json) { update(json); }
        });
    };

    Window.prototype.getFormValue = function(name) {
      return $('*[@name="' + name + '"]', this.body)[0].value;
    };

    Window.prototype.submitForm = function(form, previewMode) {
      this.updateView({
        author: form.author.value,
        author_mail: form.author_mail.value,
        title: form.title.value,
        comment_body: form.comment_body.value,
        preview: previewMode ? 'yes' : ''
      });
    };

    Window.prototype.close = function() {
      var self = this;
      delete openWindows[this.sectionID];
      this.root.fadeOut('slow', function() {
        self.root.remove();
      });
    };

    Window.openFor = function(sectionID) {
      if (sectionID in openWindows)
        return openWindows[sectionID];
      // register the new window so openFor() reuses it and close() can
      // unregister it
      return (openWindows[sectionID] = new Window(sectionID));
    };

    return Window;
  })()
};


$(document).ready(function() {
  Documentation.init();
});
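getCurrentURL above derives the current page's path relative to the documentation root by popping one trailing path segment for each `..` in URL_ROOT. A standalone sketch of that popping step (the sample pathname and the `'../..'` URL_ROOT value are hypothetical):

```javascript
// Hypothetical values; URL_ROOT '../..' means the doc root is two levels up.
var path = '/docs/lib/module.html';
var parts = path.split(/\//);            // ['', 'docs', 'lib', 'module.html']
'../..'.split(/\//).forEach(function(seg) {
  if (seg == '..')
    parts.pop();                         // drop one trailing segment per '..'
});
var root = parts.join('/');              // the resolved root prefix
```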
BIN
sphinx/style/file.png
Normal file
After Width: | Height: | Size: 392 B
BIN
sphinx/style/hovercomment.png
Normal file
After Width: | Height: | Size: 522 B
8
sphinx/style/interface.js
Normal file
2344
sphinx/style/jquery.js
vendored
Normal file
BIN
sphinx/style/minus.png
Normal file
After Width: | Height: | Size: 199 B
BIN
sphinx/style/nocomment.png
Normal file
After Width: | Height: | Size: 415 B
BIN
sphinx/style/plus.png
Normal file
After Width: | Height: | Size: 199 B
BIN
sphinx/style/preview.png
Normal file
After Width: | Height: | Size: 37 KiB
16
sphinx/style/rightsidebar.css
Normal file
@@ -0,0 +1,16 @@
/**
 * Python Doc Design -- Right Side Bar Overrides
 */


div.sidebar {
  float: right;
}

div.bodywrapper {
  margin: 0 230px 0 0;
}

div.inlinecomments {
  right: 250px;
}
428
sphinx/style/searchtools.js
Normal file
@@ -0,0 +1,428 @@

/**
 * helper function to return a node containing the
 * search summary for a given text. keywords is a list
 * of stemmed words, hlwords is the list of normal, unstemmed
 * words. the former are used to find the occurrence, the
 * latter for highlighting it.
 */
jQuery.makeSearchSummary = function(text, keywords, hlwords) {
  var textLower = text.toLowerCase();
  var start = 0;
  $.each(keywords, function() {
    var i = textLower.indexOf(this.toLowerCase());
    if (i > -1) {
      start = i;
    }
  });
  start = Math.max(start - 120, 0);
  var excerpt = ((start > 0) ? '...' : '') +
                $.trim(text.substr(start, 240)) +
                ((start + 240 < text.length) ? '...' : '');
  var rv = $('<div class="context"></div>').text(excerpt);
  $.each(hlwords, function() {
    rv = rv.highlightText(this, 'highlight');
  });
  return rv;
};
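The excerpt logic above can be exercised without jQuery or the DOM: back up 120 characters from the last keyword hit, take a 240-character window, and add '...' only where text was actually cut off. A self-contained sketch (the function name `excerpt` is hypothetical, and a single keyword stands in for the keyword list):

```javascript
// Hypothetical standalone version of the excerpt windowing used above.
function excerpt(text, keyword) {
  var start = text.toLowerCase().indexOf(keyword.toLowerCase());
  if (start < 0) start = 0;
  start = Math.max(start - 120, 0);
  return ((start > 0) ? '...' : '') +
         text.substr(start, 240).replace(/^\s+|\s+$/g, '') +  // $.trim equivalent
         ((start + 240 < text.length) ? '...' : '');
}
```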

/**
 * Porter Stemmer
 */
var PorterStemmer = function() {

  var step2list = {
    ational: 'ate',
    tional: 'tion',
    enci: 'ence',
    anci: 'ance',
    izer: 'ize',
    bli: 'ble',
    alli: 'al',
    entli: 'ent',
    eli: 'e',
    ousli: 'ous',
    ization: 'ize',
    ation: 'ate',
    ator: 'ate',
    alism: 'al',
    iveness: 'ive',
    fulness: 'ful',
    ousness: 'ous',
    aliti: 'al',
    iviti: 'ive',
    biliti: 'ble',
    logi: 'log'
  };

  var step3list = {
    icate: 'ic',
    ative: '',
    alize: 'al',
    iciti: 'ic',
    ical: 'ic',
    ful: '',
    ness: ''
  };

  var c = "[^aeiou]";          // consonant
  var v = "[aeiouy]";          // vowel
  var C = c + "[^aeiouy]*";    // consonant sequence
  var V = v + "[aeiou]*";      // vowel sequence

  var mgr0 = "^(" + C + ")?" + V + C;                      // [C]VC... is m>0
  var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$";    // [C]VC[V] is m=1
  var mgr1 = "^(" + C + ")?" + V + C + V + C;              // [C]VCVC... is m>1
  var s_v = "^(" + C + ")?" + v;                           // vowel in stem

  this.stemWord = function (w) {
    var stem;
    var suffix;
    var firstch;
    var origword = w;

    if (w.length < 3) {
      return w;
    }

    var re;
    var re2;
    var re3;
    var re4;

    firstch = w.substr(0,1);
    if (firstch == "y") {
      w = firstch.toUpperCase() + w.substr(1);
    }

    // Step 1a
    re = /^(.+?)(ss|i)es$/;
    re2 = /^(.+?)([^s])s$/;

    if (re.test(w)) {
      w = w.replace(re,"$1$2");
    }
    else if (re2.test(w)) {
      w = w.replace(re2,"$1$2");
    }

    // Step 1b
    re = /^(.+?)eed$/;
    re2 = /^(.+?)(ed|ing)$/;
    if (re.test(w)) {
      var fp = re.exec(w);
      re = new RegExp(mgr0);
      if (re.test(fp[1])) {
        re = /.$/;
        w = w.replace(re,"");
      }
    }
    else if (re2.test(w)) {
      var fp = re2.exec(w);
      stem = fp[1];
      re2 = new RegExp(s_v);
      if (re2.test(stem)) {
        w = stem;
        re2 = /(at|bl|iz)$/;
        re3 = new RegExp("([^aeiouylsz])\\1$");
        re4 = new RegExp("^" + C + v + "[^aeiouwxy]$");
        if (re2.test(w)) {
          w = w + "e";
        }
        else if (re3.test(w)) {
          re = /.$/; w = w.replace(re,"");
        }
        else if (re4.test(w)) {
          w = w + "e";
        }
      }
    }

    // Step 1c
    re = /^(.+?)y$/;
    if (re.test(w)) {
      var fp = re.exec(w);
      stem = fp[1];
      re = new RegExp(s_v);
      if (re.test(stem)) { w = stem + "i"; }
    }

    // Step 2
    re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/;
    if (re.test(w)) {
      var fp = re.exec(w);
      stem = fp[1];
      suffix = fp[2];
      re = new RegExp(mgr0);
      if (re.test(stem)) {
        w = stem + step2list[suffix];
      }
    }

    // Step 3
    re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/;
    if (re.test(w)) {
      var fp = re.exec(w);
      stem = fp[1];
      suffix = fp[2];
      re = new RegExp(mgr0);
      if (re.test(stem)) {
        w = stem + step3list[suffix];
      }
    }

    // Step 4
    re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/;
    re2 = /^(.+?)(s|t)(ion)$/;
    if (re.test(w)) {
      var fp = re.exec(w);
      stem = fp[1];
      re = new RegExp(mgr1);
      if (re.test(stem)) {
        w = stem;
      }
    }
    else if (re2.test(w)) {
      var fp = re2.exec(w);
      stem = fp[1] + fp[2];
      re2 = new RegExp(mgr1);
      if (re2.test(stem)) {
        w = stem;
      }
    }

    // Step 5
    re = /^(.+?)e$/;
    if (re.test(w)) {
      var fp = re.exec(w);
      stem = fp[1];
      re = new RegExp(mgr1);
      re2 = new RegExp(meq1);
      re3 = new RegExp("^" + C + v + "[^aeiouwxy]$");
      if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) {
        w = stem;
      }
    }
    re = /ll$/;
    re2 = new RegExp(mgr1);
    if (re.test(w) && re2.test(w)) {
      re = /.$/;
      w = w.replace(re,"");
    }

    // and turn initial Y back to y
    if (firstch == "y") {
      w = firstch.toLowerCase() + w.substr(1);
    }
    return w;
  };
};
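Each step of the stemmer above is a suffix rewrite guarded by a regular expression. Step 1a, for example, strips plural endings; a self-contained sketch of just that step (the helper name `step1a` is hypothetical, the rewrites are the ones used above):

```javascript
// Step 1a of the Porter stemmer, extracted for illustration:
// "...sses" -> "...ss", "...ies" -> "...i", other trailing "s" dropped.
function step1a(w) {
  if (/^(.+?)(ss|i)es$/.test(w))  return w.replace(/^(.+?)(ss|i)es$/, "$1$2");
  if (/^(.+?)([^s])s$/.test(w))   return w.replace(/^(.+?)([^s])s$/, "$1$2");
  return w;
}
```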



/**
 * Search Module
 */
var Search = {

  init : function() {
    var params = $.getQueryParameters();
    if (params.q) {
      var query = params.q[0];
      var areas = params.area || [];

      // auto default
      if (areas.length == 1 && areas[0] == 'default') {
        areas = ['tutorial', 'modules', 'install', 'distutils'];
      }

      // update input fields
      $('input[@type="checkbox"]').each(function() {
        this.checked = $.contains(areas, this.value);
      });
      $('input[@name="q"]')[0].value = query;

      this.performSearch(query, areas);
    }
  },

  /**
   * perform a search for something
   */
  performSearch : function(query, areas) {
    // create the required interface elements
    var out = $('#search-results');
    var title = $('<h2>Searching</h2>').appendTo(out);
    var dots = $('<span></span>').appendTo(title);
    var status = $('<p style="display: none"></p>').appendTo(out);
    var output = $('<ul class="search"/>').appendTo(out);

    // spawn a background runner for updating the dots
    // until the search has finished
    var pulseStatus = 0;
    function pulse() {
      pulseStatus = (pulseStatus + 1) % 4;
      var dotString = '';
      for (var i = 0; i < pulseStatus; i++) {
        dotString += '.';
      }
      dots.text(dotString);
      if (pulseStatus > -1) {
        window.setTimeout(pulse, 500);
      }
    }
    pulse();

    // stem the searchwords and add them to the
    // correct list
    var stemmer = new PorterStemmer();
    var searchwords = [];
    var excluded = [];
    var hlwords = [];
    var tmp = query.split(/\s+/);
    for (var i = 0; i < tmp.length; i++) {
      // stem the word
      var word = stemmer.stemWord(tmp[i]).toLowerCase();
      // select the correct list
      if (word[0] == '-') {
        var toAppend = excluded;
        word = word.substr(1);
      }
      else {
        var toAppend = searchwords;
        hlwords.push(tmp[i].toLowerCase());
      }
      // only add if not already in the list
      if (!$.contains(toAppend, word)) {
        toAppend.push(word);
      }
    }
    var highlightstring = '?highlight=' + $.urlencode(hlwords.join(" "));
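The loop above sorts query words into three lists: required (stemmed) search words, '-'-prefixed exclusions, and the raw words used later for highlighting. A standalone distillation with the stemmer omitted (the function name `parseQuery` is hypothetical; the real code stems each word before classifying it):

```javascript
// Hypothetical distillation of the word-classification loop above
// (stemming omitted): '-foo' excludes foo, everything else is required.
function parseQuery(query) {
  var searchwords = [], excluded = [], hlwords = [];
  var tmp = query.split(/\s+/);
  for (var i = 0; i < tmp.length; i++) {
    var word = tmp[i].toLowerCase();
    var toAppend;
    if (word[0] == '-') {
      toAppend = excluded;
      word = word.substr(1);
    }
    else {
      toAppend = searchwords;
      hlwords.push(word);           // highlight words keep duplicates
    }
    if (toAppend.indexOf(word) < 0) {  // only add if not already in the list
      toAppend.push(word);
    }
  }
  return { searchwords: searchwords, excluded: excluded, hlwords: hlwords };
}
```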

    console.debug('SEARCH: searching for:');
    console.info('required: ', searchwords);
    console.info('excluded: ', excluded);
    console.info('areas: ', areas);

    // fetch searchindex and perform search
    $.getJSON('searchindex.json', function(data) {

      // prepare search
      var filenames = data[0];
      var areaMap = data[1];
      var titles = data[2];
      var words = data[3];
      var fileMap = {};
      var files = null;

      // perform the search on the required words
      for (var i = 0; i < searchwords.length; i++) {
        var word = searchwords[i];
        // no match but word was a required one
        if ((files = words[word]) == null) {
          break;
        }
        // create the mapping
        for (var j = 0; j < files.length; j++) {
          var file = files[j];
          if (file in fileMap) {
            fileMap[file].push(word);
          }
          else {
            fileMap[file] = [word];
          }
        }
      }

      // now check if the files are in the correct
      // areas and if they don't contain excluded words
      var results = [];
      for (var file in fileMap) {

        // check if all requirements are matched
        if (fileMap[file].length != searchwords.length) {
          continue;
        }
        var valid = false;

        // check if the file is in one of the searched
        // areas.
        for (var i = 0; i < areas.length; i++) {
          if ($.contains(areaMap[areas[i]] || [], file)) {
            valid = true;
            break;
          }
        }

        // ensure that none of the excluded words is in the
        // search result.
        if (valid) {
          for (var i = 0; i < excluded.length; i++) {
            if ($.contains(words[excluded[i]] || [], file)) {
              valid = false;
              break;
            }
          }

          // if we still have a valid result we can add it
          // to the result list
          if (valid) {
            results.push([filenames[file], titles[file]]);
          }
        }
      }

      // drop the references to the index data in order to not waste
      // memory while the summaries are retrieved
      filenames = areaMap = titles = words = data = null;

      // now sort the results by title, descending; items are popped
      // off the end of the list, so they display in ascending order
      results.sort(function(a, b) {
        var left = a[1].toLowerCase();
        var right = b[1].toLowerCase();
        return (left > right) ? -1 : ((left < right) ? 1 : 0);
      });

      // print the results
      var resultCount = results.length;
      function displayNextItem() {
        // results left, load the summary and display it
        if (results.length) {
          var item = results.pop();
          var listItem = $('<li style="display:none"></li>');
          listItem.append($('<a/>').attr('href', item[0] + '.html' +
                                         highlightstring).html(item[1]));
          $.get(item[0] + '.txt', function(data) {
            listItem.append($.makeSearchSummary(data, searchwords, hlwords));
            output.append(listItem);
            listItem.slideDown(10, function() {
              displayNextItem();
            });
          });
        }
        // search finished, update title and status message
        else {
          pulseStatus = -1;
          title.text('Search Results');
          if (!resultCount) {
            status.text('Your search did not match any documents. ' +
                        'Please make sure that all words are spelled ' +
                        'correctly and that you\'ve selected enough ' +
                        'categories.');
          }
          else {
            status.text('Search finished, found ' + resultCount +
                        ' page' + (resultCount != 1 ? 's' : '') +
                        ' matching the search query.');
          }
          status.fadeIn(500);
        }
      }
      displayNextItem();
    });
  }
};

$(document).ready(function() {
  Search.init();
});
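The fileMap pass in performSearch implements AND semantics: a file survives only if every required word's posting list contains it. A self-contained sketch of that intersection (the name `intersectPostings` and the sample index are hypothetical; the real index maps stemmed words to file numbers, and the real code also records which words matched):

```javascript
// Hypothetical sketch of the required-word intersection used above.
function intersectPostings(words, searchwords) {
  var fileMap = {};
  for (var i = 0; i < searchwords.length; i++) {
    var files = words[searchwords[i]];
    if (files == null) return [];            // a required word matched nothing
    for (var j = 0; j < files.length; j++) {
      fileMap[files[j]] = (fileMap[files[j]] || 0) + 1;
    }
  }
  var results = [];
  for (var file in fileMap) {
    if (fileMap[file] == searchwords.length) {  // file seen for every word
      results.push(Number(file));
    }
  }
  return results;
}
```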
19
sphinx/style/stickysidebar.css
Normal file
@@ -0,0 +1,19 @@

/**
 * Python Doc Design -- Sticky Sidebar Overrides
 */

div.sidebar {
  top: 30px;
  left: 0px;
  position: fixed;
  margin: 0;
  float: none;
}

div.related {
  position: fixed;
}

div.documentwrapper {
  margin-top: 30px;
}
BIN
sphinx/style/top.png
Normal file
After Width: | Height: | Size: 976 B
663
sphinx/style/traditional.css
Normal file
@@ -0,0 +1,663 @@

/**
 * Python Doc Design
 */

body {
  color: #000;
  margin: 0;
  padding: 0;
}

/* :::: LAYOUT :::: */

div.documentwrapper {
  float: left;
  width: 100%;
}

div.bodywrapper {
  margin: 0 230px 0 0;
}

div.body {
  background-color: white;
  padding: 0 20px 30px 20px;
}

div.sidebarwrapper {
  border: 1px solid #99ccff;
  padding: 10px;
  margin: 10px 15px 10px 0;
}

div.sidebar {
  float: right;
  margin-left: -100%;
  width: 230px;
}

div.clearer {
  clear: both;
}

div.footer {
  clear: both;
  width: 100%;
  background-color: #99ccff;
  padding: 9px 0 9px 0;
  text-align: center;
}

div.related {
  background-color: #99ccff;
  color: #333;
  width: 100%;
  height: 30px;
  line-height: 30px;
  border-bottom: 5px solid white;
}

div.related h3 {
  display: none;
}

div.related ul {
  margin: 0;
  padding: 0 0 0 10px;
  list-style: none;
}

div.related li {
  display: inline;
  font-weight: bold;
}

div.related li.right {
  float: right;
  margin-right: 5px;
}

/* :::: SIDEBAR :::: */

div.sidebar h3 {
  margin: 0;
}

div.sidebar h4 {
  margin: 5px 0 0 0;
}

div.sidebar p.topless {
  margin: 5px 10px 10px 10px;
}

div.sidebar ul {
  margin: 10px;
  margin-left: 15px;
  padding: 0;
}

div.sidebar ul ul {
  margin-top: 0;
  margin-bottom: 0;
}

div.sidebar form {
  margin-top: 10px;
}


/* :::: SEARCH :::: */

ul.search {
  margin: 10px 0 0 20px;
  padding: 0;
}

ul.search li {
  padding: 5px 0 5px 20px;
  background-image: url(file.png);
  background-repeat: no-repeat;
  background-position: 0 7px;
}

ul.search li a {
  font-weight: bold;
}

ul.search li div.context {
  color: #888;
  margin: 2px 0 0 30px;
  text-align: left;
}

ul.keywordmatches li.goodmatch a {
  font-weight: bold;
}

/* :::: COMMON FORM STYLES :::: */

div.actions {
  border-top: 1px solid #aaa;
  background-color: #ddd;
  margin: 10px 0 0 -20px;
  padding: 5px 0 5px 20px;
}

form dl {
  color: #333;
}

form dt {
  clear: both;
  float: left;
  min-width: 110px;
  margin-right: 10px;
  padding-top: 2px;
}

input#homepage {
  display: none;
}

div.error {
  margin: 5px 20px 0 0;
  padding: 5px;
  border: 1px solid #d00;
  /*border: 2px solid #05171e;
  background-color: #092835;
  color: white;*/
  font-weight: bold;
}

/* :::: INLINE COMMENTS :::: */

div.inlinecommentswrapper {
  float: right;
  max-width: 40%;
}

div.commentmarker {
  float: right;
  background-image: url(style/comment.png);
  background-repeat: no-repeat;
  width: 25px;
  height: 25px;
  text-align: center;
  padding-top: 3px;
}

div.nocommentmarker {
  float: right;
  background-image: url(style/nocomment.png);
  background-repeat: no-repeat;
  width: 25px;
  height: 25px;
}

div.inlinecomments {
  margin-left: 10px;
  margin-bottom: 5px;
  background-color: #eee;
  border: 1px solid #ccc;
  padding: 5px;
}

div.inlinecomment {
  border-top: 1px solid #ccc;
  padding-top: 5px;
  margin-top: 5px;
}

.inlinecomments p {
  margin: 5px 0 5px 0;
}

.inlinecomments .head {
  font-weight: bold;
}

.inlinecomments .meta {
  font-style: italic;
}


/* :::: COMMENTS :::: */

div#comments h3 {
  border-top: 1px solid #aaa;
  padding: 5px 20px 5px 20px;
  margin: 20px -20px 20px -20px;
  background-color: #ddd;
}

/*
div#comments {
  background-color: #ccc;
  margin: 40px -20px -30px -20px;
  padding: 0 0 1px 0;
}

div#comments h4 {
  margin: 30px 0 20px 0;
  background-color: #aaa;
  border-bottom: 1px solid #09232e;
  color: #333;
}

div#comments form {
  display: block;
  margin: 0 0 0 20px;
}

div#comments textarea {
  width: 98%;
  height: 160px;
}

div#comments div.help {
  margin: 20px 20px 10px 0;
  background-color: #ccc;
  color: #333;
}

div#comments div.help p {
  margin: 0;
  padding: 0 0 10px 0;
}

div#comments input, div#comments textarea {
  font-family: 'Bitstream Vera Sans', 'Arial', sans-serif;
  font-size: 13px;
  color: black;
  background-color: #aaa;
  border: 1px solid #092835;
}

div#comments input[type="reset"],
div#comments input[type="submit"] {
  cursor: pointer;
  font-weight: bold;
  padding: 2px;
  margin: 5px 5px 5px 0;
  background-color: #666;
  color: white;
}

div#comments div.comment {
  margin: 10px 10px 10px 20px;
  padding: 10px;
  border: 1px solid #0f3646;
  background-color: #aaa;
  color: #333;
}

div#comments div.comment p {
  margin: 5px 0 5px 0;
}

div#comments div.comment p.meta {
  font-style: italic;
  color: #444;
  text-align: right;
  margin: -5px 0 -5px 0;
}

div#comments div.comment h4 {
  margin: -10px -10px 5px -10px;
  padding: 3px;
  font-size: 15px;
  background-color: #888;
  color: white;
  border: 0;
}

div#comments div.comment pre,
div#comments div.comment tt {
  background-color: #ddd;
  color: #111;
  border: none;
}

div#comments div.comment a {
  color: #fff;
  text-decoration: underline;
}

div#comments div.comment blockquote {
  margin: 10px;
  padding: 10px;
  border-left: 1px solid #0f3646;
  /*border: 1px solid #0f3646;
  background-color: #071c25;*/
}

div#comments em.important {
  color: #d00;
  font-weight: bold;
  font-style: normal;
}*/

/* :::: SUGGEST CHANGES :::: */

div#suggest-changes-box input, div#suggest-changes-box textarea {
  border: 1px solid #ccc;
  background-color: white;
  color: black;
}

div#suggest-changes-box textarea {
  width: 99%;
  height: 400px;
}


/* :::: PREVIEW :::: */

div.preview {
  background-image: url(style/preview.png);
  padding: 0 20px 20px 20px;
  margin-bottom: 30px;
}


/* :::: INDEX PAGE :::: */

table.contentstable {
  width: 90%;
}

table.contentstable p.biglink {
  line-height: 150%;
}

a.biglink {
  font-size: 1.5em;
}

span.linkdescr {
  font-style: italic;
  padding-top: 5px;
}

/* :::: GENINDEX STYLES :::: */

table.indextable td {
  text-align: left;
  vertical-align: top;
}

table.indextable dl, table.indextable dd {
  margin-top: 0;
  margin-bottom: 0;
}

table.indextable tr.pcap {
  height: 10px;
}

table.indextable tr.cap {
  margin-top: 10px;
  background-color: #f2f2f2;
}

img.toggler {
  margin-right: 3px;
  margin-top: 3px;
  cursor: pointer;
}

/* :::: GLOBAL STYLES :::: */

p.subhead {
  font-weight: bold;
  margin-top: 20px;
}

a:link:active { color: #ff0000; }
a:link:hover { background-color: #bbeeff; }
a:visited:hover { background-color: #bbeeff; }
a:visited { color: #551a8b; }
a:link { color: #0000bb; }

div.body h1,
div.body h2,
div.body h3,
div.body h4,
div.body h5,
div.body h6 {
  font-family: avantgarde, sans-serif;
  font-weight: bold;
}

div.body h1 { font-size: 180%; }
div.body h2 { font-size: 150%; }
div.body h3 { font-size: 120%; }
div.body h4 { font-size: 120%; }

a.headerlink {
  color: #c60f0f;
  font-size: 0.8em;
  padding: 0 4px 0 4px;
  text-decoration: none;
  visibility: hidden;
}

*:hover > a.headerlink {
  visibility: visible;
}

a.headerlink:hover {
  background-color: #c60f0f;
  color: white;
}

div.body p, div.body dd, div.body li {
  text-align: justify;
}

div.body td {
  text-align: left;
}

ul.fakelist {
  list-style: none;
  margin: 10px 0 10px 20px;
  padding: 0;
}

/* "Footnotes" heading */
p.rubric {
  margin-top: 30px;
  font-weight: bold;
}

/* Admonitions */

div.admonition {
  margin-top: 10px;
  margin-bottom: 10px;
  padding: 10px 10px 0px 10px;
}

div.admonition dt {
  font-weight: bold;
}

div.admonition dd {
  margin-bottom: 10px;
}

div.seealso {
  background-color: #ffc;
  border: 1px solid #ff6;
}

div.warning {
  background-color: #ffe4e4;
  border: 1px solid #f66;
}

div.note {
  background-color: #eee;
  border: 1px solid #ccc;
}

p.admonition-title {
  margin: 0px 0px 5px 0px;
  font-weight: bold;
  font-size: 1.1em;
}

table.docutils {
  border: 0;
}

table.docutils td, table.docutils th {
  margin: 2px;
  border-top: 0;
  border-left: 0;
  border-right: 0;
  border-bottom: 1px solid #aaa;
}

table.field-list td, table.field-list th {
  border: 0 !important;
}

table.footnote td, table.footnote th {
  border: 0 !important;
}

dl {
  margin-bottom: 15px;
  clear: both;
}

dd p {
  margin-top: 0px;
}

dd ul, dd table {
  margin-bottom: 10px;
}

dd {
  margin-top: 3px;
  margin-bottom: 10px;
  margin-left: 30px;
}

.refcount {
  color: #060;
}

th {
  text-align: left;
  padding-right: 5px;
}

pre {
  font-family: 'Bitstream Vera Sans Mono', monospace;
  padding: 5px;
  color: #00008b;
  border-left: none;
  border-right: none;
}

tt {
  font-family: 'Bitstream Vera Sans Mono', monospace;
  background-color: #ecf0f3;
  padding: 1px;
}

tt.descname {
  background-color: transparent;
  font-weight: bold;
  font-size: 1.2em;
}

tt.descclassname {
  background-color: transparent;
}

tt.xref, a tt {
  background-color: transparent;
  font-weight: bold;
}

.footnote:target { background-color: #ffa; }

h1 tt, h2 tt, h3 tt, h4 tt, h5 tt, h6 tt {
  background-color: transparent;
}

.optional {
  font-size: 1.3em;
}

.versionmodified {
  font-style: italic;
}

form.comment {
  margin: 0;
  padding: 10px 30px 10px 30px;
  background-color: #eee;
}

form.comment h3 {
  background-color: #326591;
  color: white;
  margin: -10px -30px 10px -30px;
  padding: 5px;
  font-size: 1.4em;
}

form.comment input,
form.comment textarea {
  border: 1px solid #ccc;
  padding: 2px;
  font-family: 'Bitstream Vera Sans', 'Verdana', sans-serif;
  font-size: 13px;
}

form.comment input[type="text"] {
  width: 240px;
}

form.comment textarea {
  width: 100%;
  height: 200px;
  margin-bottom: 10px;
}

/* :::: PRINT :::: */
@media print {
  div.documentwrapper {
    width: 100%;
|
||||
}
|
||||
|
||||
div.body {
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
div.sidebar,
|
||||
div.related,
|
||||
div.footer,
|
||||
div#comments div.new-comment-box,
|
||||
#top-link {
|
||||
display: none;
|
||||
}
|
||||
}
|
||||
34
sphinx/templates/_commentform.html
Normal file
@@ -0,0 +1,34 @@
<form action="?target={{ comments_form.target|e(true) }}" method="post">
{% if comments_form.error %}
<div class="error">{{ comments_form.error|e }}</div>
{% endif %}
<p>Note: you can also <a href="{{ pathto(suggest_url, 1)|e }}">suggest
changes</a> to the official documentation text.</p>
<dl>
<dt>Name:</dt>
<dd><input type="text" size="24" name="author" value="{{ comments_form.author|e(true) }}"></dd>
<dt>E-Mail Address:</dt>
<dd><input type="text" size="24" name="author_mail" value="{{ comments_form.author_mail|e(true) }}"></dd>
<dt>Comment Title:</dt>
<dd><input type="text" size="36" name="title" value="{{ comments_form.title|e(true) }}"></dd>
</dl>
<input type="text" size="12" name="homepage" id="homepage">
<textarea name="comment_body" rows="7" cols="50">{{ comments_form.comment_body|e }}</textarea>
{% if preview %}
<div class="preview">
<h4>Preview</h4>
<div class="comment">
<h4>{{ preview.title|e or ' ' }}</h4>
<div class="text">{{ preview.parsed_comment_body or ' ' }}</div>
<div class="meta">by {{ preview.author|e }}, written on
{{ preview.pub_date|datetimeformat }} |
<a href="#">#</a></div>
</div>
</div>
{% endif %}
<div class="actions">
<input type="submit" value="Submit comment">
<input type="submit" name="preview" value="Preview">
<input type="reset" value="Reset form">
</div>
</form>
23
sphinx/templates/admin/change_password.html
Normal file
@@ -0,0 +1,23 @@
{% extends "admin/layout.html" %}
{% block admin_body %}
<h1>Change Password</h1>
{% if change_failed %}
<p class="error">The two passwords don't match or are empty.</p>
{% elif change_successful %}
<p class="message">Password changed successfully.</p>
{% else %}
<p>Enter the new password twice below.</p>
{% endif %}
<form action="" method="post">
<dl>
<dt>Password</dt>
<dd><input type="password" name="pw1"></dd>
<dt>Repeat</dt>
<dd><input type="password" name="pw2"></dd>
</dl>
<div class="actions">
<input type="submit" value="Change">
<input type="submit" value="Cancel" name="cancel">
</div>
</form>
{% endblock %}
19
sphinx/templates/admin/index.html
Normal file
@@ -0,0 +1,19 @@
{% extends "admin/layout.html" %}
{% block admin_body %}
<h1>Administration Index</h1>
<p>
Welcome to the documentation administration, {{ req.user|e }}.
</p>
<h2>Tasks</h2>
<ul>
<li><a href="moderate_comments/">Moderate Comments</a></li>
{%- if can_change_password %}
<li><a href="change_password/">Change Password</a></li>
{%- endif %}
{%- if is_master_admin %}
<li><a href="manage_users/">Manage Users</a></li>
{%- endif %}
<li><a href="../">Back To Documentation</a></li>
<li><a href="logout/">Logout</a></li>
</ul>
{% endblock %}
8
sphinx/templates/admin/layout.html
Normal file
@@ -0,0 +1,8 @@
{% extends "layout.html" %}
{% set title = 'Documentation Administration' %}
{% set in_admin_panel = true %}
{% block body %}
<div class="admin">
{% block admin_body %}{% endblock %}
</div>
{% endblock %}
19
sphinx/templates/admin/login.html
Normal file
@@ -0,0 +1,19 @@
{% extends "admin/layout.html" %}
{% block admin_body %}
<h1>Login</h1>
<form action="" method="post">
<dl>
<dt>Username</dt>
<dd><input type="text" name="username"></dd>
<dt>Password</dt>
<dd><input type="password" name="password"></dd>
</dl>
{% if login_failed %}
<div class="error">Invalid username and/or password</div>
{% endif %}
<div class="actions">
<input type="submit" value="Login">
<input type="submit" name="cancel" value="Cancel">
</div>
</form>
{% endblock %}
92
sphinx/templates/admin/manage_users.html
Normal file
@@ -0,0 +1,92 @@
{% extends "admin/layout.html" %}
{% block admin_body %}
<h1>Manage Users</h1>
<p>
All users with "master" privileges can give and revoke permissions
from this page. You cannot change the passwords of other users, nor
remove your own user or "master" privilege. Privileges are separated
by commas — optional whitespace is ignored.
</p>
<p><strong>Privileges</strong></p>
<ul>
<li><tt>master</tt> — user can create and edit user accounts.</li>
<li><tt>frozenpassword</tt> — user cannot change their password.</li>
</ul>
<form action="" method="post">
{% if ask_confirmation %}
<div class="dialog">
<h2>Confirm</h2>
<div class="text">
{% trans amount=to_delete|length %}
Do you really want to delete the user?
{% pluralize %}
Do you really want to delete {{ amount }} users?
{% endtrans %}
</div>
<div class="buttons">
<input type="hidden" name="update" value="yes">
<input type="submit" name="confirmed" value="Yes">
<input type="submit" name="aborted" value="No">
</div>
</div>
{% endif %}
{% if generated_user and generated_password %}
<div class="dialog">
<h2>User Generated</h2>
<div class="text">
The user <strong>{{ generated_user|e }}</strong> was generated successfully
with the password <strong>{{ generated_password|e }}</strong>.
</div>
</div>
{% endif %}
{% if user_exists %}
<div class="dialog">
<h2>Username in Use</h2>
<div class="text">
The username {{ user_exists|e }} is already in use. Select a different one.
</div>
</div>
{% endif %}
{% if self_destruction %}
<div class="dialog">
<h2>Error</h2>
<div class="text">
You can't delete your own user or remove your own master privileges.
</div>
</div>
{% endif %}
{% if add_user_mode %}
<div class="dialog detail_form">
<h2>Add User</h2>
<div class="text">
Username <input type="text" size="24" name="username" value="{{
form.username|e(true) }}">
</div>
<div class="buttons">
<input type="submit" name="add_user" value="Add">
<input type="submit" name="aborted" value="Cancel">
</div>
</div>
{% endif %}
<table class="mapping">
<tr>
<th>Username</th>
<th>Privileges</th>
<th class="actions">Delete</th>
</tr>
{%- for user, privileges in users|dictsort %}
<tr>
<td class="username">{{ user|e }}</td>
<td class="groups"><input type="text" name="privileges-{{ user|e }}" value="{{ privileges|join(', ') }}"></td>
<td class="actions"><input type="checkbox" name="delete" value="{{ user|e
}}"{% if user in to_delete %} checked{% endif %}></td>
</tr>
{%- endfor %}
</table>
<div class="actions">
<input type="submit" name="update" value="Update">
<input type="submit" name="add_user" value="Add User">
<input type="submit" name="cancel" value="Cancel">
</div>
</form>
{% endblock %}
104
sphinx/templates/admin/moderate_comments.html
Normal file
@@ -0,0 +1,104 @@
{% extends "admin/layout.html" %}
{% block admin_body %}
<h1>Moderate Comments</h1>
<p>
From here you can delete and edit comments. If you want to be
informed about new comments, you can use the <a href="{{ pathto('index.rst')
}}?feed=recent_comments">feed</a> provided.
</p>
<form action="" method="post">
{% if ask_confirmation %}
<div class="dialog">
<h2>Confirm</h2>
<div class="text">
{% trans amount=to_delete|length %}
Do you really want to delete one comment?
{% pluralize %}
Do you really want to delete {{ amount }} comments?
{% endtrans %}
</div>
<div class="buttons">
<input type="submit" name="confirmed" value="Yes">
<input type="submit" name="aborted" value="No">
</div>
</div>
{% endif %}
{% if edit_detail %}
<div class="dialog detail_form">
<h2>Edit Comment</h2>
<div class="text">
<input type="hidden" name="edit" value="{{ edit_detail.comment_id }}">
<dl>
<dt>Name</dt>
<dd><input type="text" size="24" name="author" value="{{ edit_detail.author|e(true) }}"></dd>
<dt>E-Mail</dt>
<dd><input type="text" size="24" name="author_mail" value="{{ edit_detail.author_mail|e(true) }}"></dd>
<dt>Comment Title</dt>
<dd><input type="text" size="36" name="title" value="{{ edit_detail.title|e(true) }}"></dd>
</dl>
<textarea name="comment_body" rows="7" cols="50">{{ edit_detail.comment_body|e }}</textarea>
</div>
<div class="buttons">
<input type="submit" value="Save">
<input type="submit" name="aborted" value="Cancel">
<input type="submit" name="view" value="View">
<input type="submit" name="delete_this" value="Delete">
</div>
</div>
{% endif %}
{%- macro render_row(comment, include_page=false) %}
<tr>
<td class="title">
<a href="{{ pathto(comment.url, true) }}">{{ comment.title|e }}</a>
<span class="meta">by <a href="mailto:{{ comment.author_mail|e
}}">{{ comment.author|e }}</a>{% if include_page
%} on <a href="{{ pathto('@admin/moderate_comments/' +
comment.associated_page) }}">{{ comment.associated_page }}</a>{%
endif %}</span>
</td>
<td class="pub_date">{{ comment.pub_date|datetimeformat }}</td>
<td class="actions">
<span class="meta"><a href="?edit={{ comment.comment_id }}">edit</a></span>
<input type="checkbox" name="delete" value="{{
comment.comment_id }}"{% if comment.comment_id in to_delete
%} checked{% endif %}>
</td>
</tr>
{%- endmacro %}
<table class="mapping">
{% if pages_with_comments %}
<tr>
<th colspan="4" class="recent_comments">
<a href="{{ pathto('@admin/moderate_comments/recent_comments/', true)
}}">Recent Comments</a>
<span class="meta">(<a href="{{ pathto('index.rst')
}}?feed=recent_comments">feed</a>)</span>
</th>
</tr>
{%- for comment in recent_comments %}
{{- render_row(comment, true) }}
{%- endfor %}
{%- for page in pages_with_comments %}
<tr>
<th colspan="4">
<a href="{{ pathto('@admin/moderate_comments/' + page.page_id) }}">{{ page.title|e }}</a>
<span class="meta">(<a href="{{ pathto(page.page_id) }}">view</a> |
<a href="{{ pathto(page.page_id) }}?feed=comments">feed</a>)</span>
</th>
</tr>
{%- if page.has_details %}
{%- for comment in page.comments %}
{{- render_row(comment) }}
{%- endfor %}
{%- endif %}
{% endfor %}
{%- else %}
<tr><th>no comments submitted so far</th></tr>
{%- endif %}
</table>
<div class="actions">
<input type="submit" value="Delete">
<input type="submit" name="cancel" value="Cancel">
</div>
</form>
{% endblock %}
26
sphinx/templates/commentform.html
Normal file
@@ -0,0 +1,26 @@
{% extends "layout.html" %}
{% block body %}
<div id="new-commment-box">
<h4 id="comments-new-comment">New Comment</h4>
{{ form }}
<div class="help">
<p>
<strong>You can format a comment using the
following syntax elements:</strong>
</p>
<p>
`code` / ``code too`` / **strong** /
*emphasized* / !!!important!!! /
[[link_target Link Title]] /
[[link_target_only]] / <code>code block with
syntax highlighting</code> / <quote>some
quoted text</quote>.
</p>
<p>
HTML is not supported; relative link targets are treated as
quicklinks, and code blocks that start with ">>>" are
highlighted as interactive Python sessions.
</p>
</div>
</div>
{% endblock %}
22
sphinx/templates/comments.html
Normal file
@@ -0,0 +1,22 @@
<div id="comments">
<h3>Comments</h3>
{% for comment in comments %}
<div class="comment" id="comment-{{ comment.comment_id }}">
<h4>{{ comment.title|e }}
{%- if comment.associated_name %} — on
<a href="#{{ comment.associated_name }}">{{-
comment.associated_name }}</a>{% endif %}</h4>
<div class="text">{{ comment.parsed_comment_body }}</div>
<div class="meta">by {{ comment.author|e }}, written on
{{ comment.pub_date|datetimeformat }} |
<a href="#comment-{{ comment.comment_id }}">#</a></div>
</div>
{% else %}
<div class="nocomments">
There are no user contributed notes for this page.
</div>
{% endfor %}
<div class="actions">
<a class="newcomment" href="{{ pathto(comment_url, 1)|e }}">add comment to page</a>
</div>
</div>
53
sphinx/templates/edit.html
Normal file
@@ -0,0 +1,53 @@
{% extends "layout.html" %}
{% if rendered %}{% set title = "Suggest changes - Preview" %}
{% else %}{% set title = "Suggest changes" %}{% endif %}
{% block body %}
{% if rendered %}
<h1>Preview</h1>
<div class="preview">
<div class="previewwrapper">
{{ rendered }}
</div>
</div>
{% if warnings %}
<h1>Warnings</h1>
<p>You must fix these warnings before you can submit your patch.</p>
<ul>
{% for warning in warnings %}
<li>{{ warning }}</li>
{% endfor %}
</ul>
{% endif %}
{% endif %}
<h1 id="suggest-changes-for-this-page">Suggest changes for this page</h1>
{% if not rendered %}
<p>Here you can edit the source of “{{ doctitle|striptags }}” and
submit the results as a patch to the Python documentation team. If you want
to know more about reST, the markup language used, read
<a href="{{ pathto('documenting/index.rst') }}">Documenting Python</a>.</p>
{% endif %}
<form action="{{ submiturl }}" method="post">
<div id="suggest-changes-box">
<textarea name="contents">{{ contents|e }}</textarea>
{# XXX: shortcuts to make the edit area larger/smaller #}
{% if form_error %}
<div class="error">{{ form_error|e }}</div>
{% endif %}
<dl>
<dt>Name:</dt>
<dd><input type="text" size="24" name="name" value="{{ author }}"></dd>
<dt>E-mail Address:</dt>
<dd><input type="text" size="24" name="email" value="{{ email }}"></dd>
<dt>Summary of the change:</dt>
<dd><input type="text" size="48" name="summary" value="{{ summary }}"></dd>
</dl>
<input type="text" name="homepage" size="12" id="homepage">
<div class="actions">
<input type="submit" value="Submit patch for review">
<input type="submit" name="preview" value="Preview changes">
<input type="reset" value="Reset form">
<input type="submit" name="cancel" value="Cancel">
</div>
</div>
</form>
{% endblock %}
46
sphinx/templates/genindex.html
Normal file
@@ -0,0 +1,46 @@
{% extends "layout.html" %}
{% set title = 'Index' %}
{% block body %}

<h1 id="index">Index</h1>

{% for key, dummy in genindexentries -%}
<a href="#{{ key }}"><strong>{{ key }}</strong></a> {% if not loop.last %}| {% endif %}
{%- endfor %}

<hr>

{% for key, entries in genindexentries %}
<h2 id="{{ key }}">{{ key }}</h2>
<table width="100%" class="indextable"><tr><td width="33%" valign="top">
<dl>
{%- set breakat = genindexcounts[loop.index0] // 2 %}
{%- set numcols = 1 %}
{%- set numitems = 0 %}
{% for entryname, (links, subitems) in entries %}
<dt>{%- if links -%}
<a href="{{ links[0] }}">{{ entryname }}</a>
{%- for link in links[1:] %}, <a href="{{ link }}">[Link]</a>{% endfor -%}
{%- else -%}
{{ entryname }}
{%- endif -%}</dt>
{%- if subitems %}
<dd><dl>
{%- for subentryname, subentrylinks in subitems %}
<dt><a href="{{ subentrylinks[0] }}">{{ subentryname }}</a>
{%- for link in subentrylinks[1:] %}, <a href="{{ link }}">[Link]</a>{% endfor -%}
</dt>
{%- endfor %}
</dl></dd>
{%- endif -%}
{%- set numitems = numitems + 1 + len(subitems) -%}
{%- if numcols < 2 and numitems > breakat -%}
{%- set numcols = numcols+1 -%}
</dl></td><td width="33%" valign="top"><dl>
{%- endif -%}
{% endfor %}
</dl></td></tr></table>

{% endfor %}

{% endblock %}
67
sphinx/templates/index.html
Normal file
@@ -0,0 +1,67 @@
{% extends "layout.html" %}
{% set title = 'Overview' %}
{% set current_page_name = 'index' %}
{% set page_links = [
(pathto('@rss/recent'), 'application/rss+xml', 'Recent Comments')
] %}
{% block body %}
<h1>Python Documentation</h1>
<p>
Welcome! This is the documentation for Python
{{ release }}{% if last_updated %}, last updated {{ last_updated }}{% endif %}.
</p>

<p><strong>Parts of the documentation:</strong></p>
<table class="contentstable" align="center"><tr>
<td width="50%">
<p class="biglink"><a class="biglink" href="{{ pathto("whatsnew/2.6.rst") }}">What's new in Python 2.6?</a><br>
<span class="linkdescr">changes since previous major release</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("tutorial/index.rst") }}">Tutorial</a><br>
<span class="linkdescr">start here</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("reference/index.rst") }}">Language Reference</a><br>
<span class="linkdescr">describes syntax, language elements and builtins</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("modules/index.rst") }}">Library Reference</a><br>
<span class="linkdescr">keep this under your pillow</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("macmodules/index.rst") }}">Macintosh Library Modules</a><br>
<span class="linkdescr">this too, if you use a Macintosh</span></p>
</td><td width="50%">
<p class="biglink"><a class="biglink" href="{{ pathto("extending/index.rst") }}">Extending and Embedding</a><br>
<span class="linkdescr">tutorial for C/C++ programmers</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("c-api/index.rst") }}">Python/C API</a><br>
<span class="linkdescr">reference for C/C++ programmers</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("install/index.rst") }}">Installing Python Modules</a><br>
<span class="linkdescr">information for installers & sys-admins</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("distutils/index.rst") }}">Distributing Python Modules</a><br>
<span class="linkdescr">sharing modules with others</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("documenting/index.rst") }}">Documenting Python</a><br>
<span class="linkdescr">guide for documentation authors</span></p>
</td></tr>
</table>

<p><strong>Indices and tables:</strong></p>
<table class="contentstable" align="center"><tr>
<td width="50%">
<p class="biglink"><a class="biglink" href="{{ pathto("modindex.rst") }}">Global Module Index</a><br>
<span class="linkdescr">quick access to all modules</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("genindex.rst") }}">General Index</a><br>
<span class="linkdescr">all functions, classes, terms</span></p>
</td><td width="50%">
<p class="biglink"><a class="biglink" href="{{ pathto("search.rst") }}">Search page</a><br>
<span class="linkdescr">search this documentation</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("contents.rst") }}">Complete Table of Contents</a><br>
<span class="linkdescr">lists all sections and subsections</span></p>
</td></tr>
</table>

<p><strong>Meta information:</strong></p>
<table class="contentstable" align="center"><tr>
<td width="50%">
<p class="biglink"><a class="biglink" href="{{ pathto("bugs.rst") }}">Reporting bugs</a></p>
<p class="biglink"><a class="biglink" href="{{ pathto("about.rst") }}">About the documentation</a></p>
</td><td width="50%">
<p class="biglink"><a class="biglink" href="{{ pathto("license.rst") }}">History and License of Python</a></p>
<p class="biglink"><a class="biglink" href="{{ pathto("copyright.rst") }}">Copyright</a></p>
</td></tr>
</table>

{% endblock %}
36
sphinx/templates/inlinecomments.html
Normal file
@@ -0,0 +1,36 @@
{# rendered for inline comments -#}
<div class="inlinecomments">
{%- if mode == 'bottom' %}
{%- if comments -%}
<a class="bubble" href="#comment-{{ comments[0].comment_id }}"><span>[Read Comments]</span></a>
{%- else -%}
<a class="bubble emptybubble" href="{{ pathto(comment_url, 1) }}?target={{ id }}"><span>[Write Comments]</span></a>
{%- endif %}
{%- else %}
<div>
{%- if comments -%}
<a class="bubble" href="{{ pathto(comment_url, 1) }}?target={{ id
}}"><span>[</span>{{ comments|length }}<span> Comments]</span></a>
{%- else -%}
<a class="bubble emptybubble" href="{{ pathto(comment_url, 1)
}}?target={{ id }}"><span>[Write Comment]</span></a>
{%- endif -%}
</div>
{%- if comments %}
<div class="comments">
<h3>Comments</h3>
<div class="actions"><a href="{{ pathto(comment_url, 1) }}?target={{
id }}" onclick="Documentation.newCommentFromBox(this); return false">write new comment</a></div>
{%- for comment in comments %}
<div class="comment" id="comment-{{ comment.comment_id }}">
<h4>{{ comment.title|e }}</h4>
<div class="text">{{ comment.parsed_comment_body }}</div>
<div class="meta">by {{ comment.author|e }}, written on
{{ comment.pub_date|datetimeformat }} |
<a href="#comment-{{ comment.comment_id }}">#</a></div>
</div>
{%- endfor %}
</div>
{%- endif %}
{%- endif %}
</div>
31
sphinx/templates/keyword_not_found.html
Normal file
@@ -0,0 +1,31 @@
{% extends "layout.html" %}
{% set title = 'Keyword Not Found' %}
{% block body %}
<h1 id="keyword-not-found">Keyword Not Found</h1>
<p>
The keyword <strong>{{ keyword|e }}</strong> is not directly associated with
a page. {% if close_matches %}A similarity search returned {{
close_matches|length }} items that are possible matches.
{% if good_matches_count %}{{ good_matches_count }} of them are particularly
good matches and are emphasized.{% endif %}{% endif %}
</p>
{% if close_matches %}
<ul class="keywordmatches">
{% for item in close_matches %}
<li{% if item.good_match %} class="goodmatch"{% endif
%}><a href="{{ item.href }}">{{ item.title|e }}</a> ({{
item.type }}) {% if item.description
%} — {{ item.description|e }}{% endif %}</li>
{% endfor %}
</ul>
{% endif %}
<p>
If you want to search the entire Python documentation for the string
"{{ keyword|e }}", then <a href="{{ pathto('search.rst') }}?q={{ keyword|e
}}">use the search function</a>.
</p>
<p>
For a quick overview of all documented modules,
<a href="{{ pathto('modules/index.rst') }}">click here</a>.
</p>
{% endblock %}
89
sphinx/templates/layout.html
Normal file
@@ -0,0 +1,89 @@
{% if builder != 'htmlhelp' %}{% set titlesuffix = " — Python Documentation" %}{% endif -%}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>{{ title|striptags }}{{ titlesuffix }}</title>
{%- if builder == 'web' %}
<link rel="stylesheet" href="{{ pathto('index.rst') }}?do=stylesheet{%
if in_admin_panel %}&admin=yes{% endif %}" type="text/css">
{%- for link, type, title in page_links %}
<link rel="alternate" type="{{ type|e(true) }}" title="{{ title|e(true) }}" href="{{ link|e(true) }}">
{%- endfor %}
{%- else %}
<link rel="stylesheet" href="{{ pathto('style/default.css', 1) }}" type="text/css">
<link rel="stylesheet" href="{{ pathto('style/pygments.css', 1) }}" type="text/css">
{%- endif %}
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT: '{{ pathto("", 1) }}',
VERSION: '{{ release }}'
};
</script>
<script type="text/javascript" src="{{ pathto('style/jquery.js', 1) }}"></script>
<script type="text/javascript" src="{{ pathto('style/interface.js', 1) }}"></script>
<script type="text/javascript" src="{{ pathto('style/doctools.js', 1) }}"></script>
<link rel="author" title="About these documents" href="{{ pathto('about.rst') }}">
<link rel="contents" title="Global table of contents" href="{{ pathto('contents.rst') }}">
<link rel="index" title="Global index" href="{{ pathto('genindex.rst') }}">
<link rel="search" title="Search" href="{{ pathto('search.rst') }}">
<link rel="copyright" title="Copyright" href="{{ pathto('copyright.rst') }}">
<link rel="top" title="Python Documentation" href="{{ pathto('index.rst') }}">
{%- if parents %}
<link rel="up" title="{{ parents[-1].title|striptags }}" href="{{ parents[-1].link|e }}">
{%- endif %}
{%- if next %}
<link rel="next" title="{{ next.title|striptags }}" href="{{ next.link|e }}">
{%- endif %}
{%- if prev %}
<link rel="prev" title="{{ prev.title|striptags }}" href="{{ prev.link|e }}">
{%- endif %}
{% block head %}{% endblock %}
</head>
<body>
<div class="related">
<h3>Navigation</h3>
<ul>
<li class="right" style="margin-right: 10px"><a href="{{ pathto('genindex.rst') }}" title="General Index">index</a></li>
<li class="right"><a href="{{ pathto('modindex.rst') }}" title="Global Module Index">modules</a> |</li>
{%- if next %}
<li class="right"><a href="{{ next.link|e }}" title="{{ next.title|striptags }}">next</a> |</li>
{%- endif %}
{%- if prev %}
<li class="right"><a href="{{ prev.link|e }}" title="{{ prev.title|striptags }}">previous</a> |</li>
{%- endif %}
{%- if builder == 'web' %}
<li class="right"><a href="{{ pathto('settings.rst') }}"
title="Customize your viewing settings">settings</a> |</li>
{%- endif %}
<li><a href="{{ pathto('index.rst') }}">Python v{{ release }} Documentation</a> »</li>
{%- for parent in parents %}
<li><a href="{{ parent.link|e }}">{{ parent.title }}</a> »</li>
{%- endfor %}
</ul>
</div>
<div class="document">
<div class="documentwrapper">
{%- if builder != 'htmlhelp' %}
<div class="bodywrapper">
{%- endif %}
<div class="body">
{% block body %}{% endblock %}
</div>
{%- if builder != 'htmlhelp' %}
</div>
{%- endif %}
</div>
{%- if builder != 'htmlhelp' %}
{%- include "sidebar.html" %}
{%- endif %}
<div class="clearer"></div>
</div>
<div class="footer">
© <a href="{{ pathto('copyright.rst') }}">Copyright</a>
1990-2007, Python Software Foundation.
{% if last_updated %}Last updated on {{ last_updated }}.{% endif %}
</div>
</body>
</html>
45
sphinx/templates/modindex.html
Normal file
@@ -0,0 +1,45 @@
{% extends "layout.html" %}
{% set title = 'Global Module Index' %}
{% block body %}

<h1 id="global-module-index">Global Module Index</h1>
{% if builder == 'web' and freqentries %}
<p>Most popular modules:</p>
<div class="modulecloud">
{%- for module in freqentries %}
<a href="../q/{{ module.name|e }}/" style="font-size: {{ module.size }}%">{{ module.name|e }}</a>
{%- endfor %}
</div>
{% endif %}
<form class="pfform" action="" method="get">
Show modules only available on these platforms:<br>
{% for pl in platforms -%}
<input type="checkbox" name="pf" value="{{ pl }}" id="pl-{{ pl }}"
{%- if pl in showpf %} checked="checked"{% endif %}>
<label for="pl-{{ pl }}">{{ pl }}</label>
{% endfor %}
<input type="submit" value="Apply">
</form>

<table width="100%" class="indextable" cellspacing="0" cellpadding="2">
{%- for modname, collapse, cgroup, indent, fname, synops, pform in modindexentries %}
{%- if not modname -%}
<tr class="pcap"><td></td><td> </td><td></td></tr>
<tr class="cap"><td></td><td><strong>{{ fname }}</strong></td><td></td></tr>
{%- else -%}
<tr{% if indent %} class="cg-{{ cgroup }}"{% endif %}>
<td>{% if collapse -%}
<img src="{{ pathto('style/minus.png', 1) }}" id="toggle-{{ cgroup }}"
class="toggler" style="display: none">
{%- endif %}</td>
<td>{% if indent %} {% endif %}
{% if fname %}<a href="{{ fname }}">{% endif -%}
<tt class="xref">{{ modname|e }}</tt>
{%- if fname %}</a>{% endif %}
{%- if pform[0] %} <em>({{ pform|join(', ') }})</em>{% endif -%}
</td><td><em>{{ synops|e }}</em></td></tr>
{%- endif -%}
{% endfor %}
</table>

{% endblock %}
11
sphinx/templates/not_found.html
Normal file
@@ -0,0 +1,11 @@
{% extends "layout.html" %}
{% set title = 'Page Not Found' %}
{% block body %}
<h1 id="page-not-found">Page Not Found</h1>
<p>
  The page {{ req.path|e }} does not exist on this server.
</p>
<p>
  Click here to <a href="{{ pathto('index.rst') }}">return to the index</a>.
</p>
{% endblock %}
14
sphinx/templates/page.html
Normal file
@@ -0,0 +1,14 @@
{% extends "layout.html" %}
{% set page_links = [
  (pathto('@rss/' + sourcename), 'application/rss+xml', 'Page Comments'),
] %}
{% block body %}
{% if oldurl %}
<div class="docwarning">
  <strong>Note:</strong> You requested an out-of-date URL from this server.
  We've tried to redirect you to the new location of this page, but it may not
  be the right one.
</div>
{% endif %}
{{ body }}
{% endblock %}
60
sphinx/templates/search.html
Normal file
@@ -0,0 +1,60 @@
{% extends "layout.html" %}
{% set title = 'Search Documentation' %}
{% block header %}
<script type="text/javascript" src="{{ pathto('style/searchtools.js', 1) }}"></script>
{% endblock %}
{% block body %}
<h1 id="search-documentation">Search Documentation</h1>
<p>
  From here you can search the Python documentation.  Enter your search
  words into the box below and click "search".  Note that the search
  function automatically searches for all of the words; pages containing
  fewer of them won't appear in the result list.
</p>
<p>
  To speed up the search, you can limit it by excluding some of the
  sections listed below.
</p>
<form action="" method="get">
  <input type="text" name="q" value="">
  <input type="submit" value="search">
  <p>
    Sections:
  </p>
  <ul class="fakelist">
  {% for id, name, checked in [
      ('tutorial', 'Python Tutorial', true),
      ('modules', 'Library Reference', true),
      ('macmodules', 'Macintosh Library Modules', false),
      ('extending', 'Extending and Embedding', false),
      ('c-api', 'Python/C API', false),
      ('install', 'Installing Python Modules', true),
      ('distutils', 'Distributing Python Modules', true),
      ('documenting', 'Documenting Python', false),
      ('whatsnew', 'What\'s new in Python?', false),
      ('reference', 'Language Reference', false)
  ] -%}
    <li><input type="checkbox" name="area" id="area-{{ id }}" value="{{ id
      }}"{% if checked %} checked{% endif %}>
      <label for="area-{{ id }}">{{ name }}</label></li>
  {% endfor %}
  </ul>
</form>
{% if search_performed %}
  <h2>Search Results</h2>
  {% if not search_results %}
    <p>Your search did not match any results.</p>
  {% endif %}
{% endif %}
<div id="search-results">
  {% if search_results %}
  <ul>
  {% for href, caption, context in search_results %}
    <li><a href="{{ pathto(href) }}">{{ caption }}</a>
      <div class="context">{{ context|e }}</div>
    </li>
  {% endfor %}
  </ul>
  {% endif %}
</div>
{% endblock %}
37
sphinx/templates/settings.html
Normal file
@@ -0,0 +1,37 @@
{% extends "layout.html" %}
{% set title = 'Settings' %}
{% set current_page_name = 'settings' %}
{% block body %}
<h1>Python Documentation Settings</h1>
<p>
  Here you can customize how you want to view the Python documentation.
  These settings are saved using a cookie on your computer.
</p>

<form action="{{ pathto('settings.rst') }}" method="post">
  <p class="subhead">Select your stylesheet:</p>
  <p>
  {%- for design, (foo, descr) in known_designs %}
    <input type="radio" name="design" value="{{ design }}" id="stylesheet-{{ design }}"
      {% if curdesign == design %}checked="checked"{% endif %}>
    <label for="stylesheet-{{ design }}">{{ design }} — {{ descr }}</label><br>
  {%- endfor %}
  </p>

  <p class="subhead">Select how you want to view comments:</p>
  <p>
  {%- for meth, descr in comments_methods %}
    <input type="radio" name="comments" value="{{ meth }}" id="comments-{{ meth }}"
      {% if curcomments == meth %}checked="checked"{% endif %}>
    <label for="comments-{{ meth }}">{{ descr }}</label><br>
  {%- endfor %}
  </p>
  <input type="hidden" name="referer" value="{{ referer|e }}">
  <p>
    <input type="submit" name="goback" value="Save and back to last page">
    <input type="submit" value="Save">
    <input type="submit" name="cancel" value="Cancel and back to last page">
  </p>
</form>

{% endblock %}
6
sphinx/templates/show_source.html
Normal file
@@ -0,0 +1,6 @@
{% extends "layout.html" %}
{% set title = 'Page Source' %}
{% block body %}
<h1 id="page-source">Page Source</h1>
{{ highlighted_code }}
{% endblock %}
48
sphinx/templates/sidebar.html
Normal file
@@ -0,0 +1,48 @@
{# this file is included by layout.html #}
<div class="sidebar">
  <div class="sidebarwrapper">
    {% if display_toc %}
    <h3>Table Of Contents</h3>
    {{ toc }}
    {% endif %}
    {%- if prev %}
    <h4>Previous topic</h4>
    <p class="topless"><a href="{{ prev.link|e }}" title="previous chapter">{{ prev.title }}</a></p>
    {%- endif %}
    {%- if next %}
    <h4>Next topic</h4>
    <p class="topless"><a href="{{ next.link|e }}" title="next chapter">{{ next.title }}</a></p>
    {%- endif %}
    {% if sourcename %}
    <h3>This Page</h3>
    <ul class="this-page-menu">
    {% if builder == 'web' %}
      <li><a href="#comments">Comments ({{ comments|length }} so far)</a></li>
      <li><a href="{{ pathto('@edit/' + sourcename)|e }}">Suggest Change</a></li>
      <li><a href="{{ pathto('@source/' + sourcename)|e }}">Show Source</a></li>
    {% elif builder == 'html' %}
      <li><a href="{{ pathto(sourcename, true)|e }}">Show Source</a></li>
    {% endif %}
      <li><a href="http://bugs.python.org/XXX?page={{ sourcename|e }}">Report Bug</a></li>
    </ul>
    {% endif %}
    {% if current_page_name == "index" %}
    <h3>Download</h3>
    <p>
      XXX: Add download links here.
    </p>
    <h3>Old docs</h3>
    <p>
      XXX: Add links to old docs/essays/etc. here.
    </p>
    {% endif %}
    {% if current_page_name != "search" %}
    <h3>{{ builder == 'web' and 'Keyword' or 'Quick' }} search</h3>
    <form class="search" action="{{ pathto('search.rst') }}" method="get">
      <input type="text" name="q" size="18"> <input type="submit" value="Go">
      <input type="hidden" name="check_keywords" value="yes">
      <input type="hidden" name="area" value="default">
    </form>
    {% endif %}
  </div>
</div>
12
sphinx/templates/submitted.html
Normal file
@@ -0,0 +1,12 @@
{% extends "layout.html" %}
{% set title = "Patch submitted" %}
{% block head %}
<meta http-equiv="refresh" content="2; URL={{ backlink|e }}">
{% endblock %}
{% block body %}
<h1>Patch submitted</h1>
<p>Your patch has been submitted to the Python documentation team and will be
processed shortly.</p>
<p>You will be redirected to the
<a href="{{ backlink|e }}">original documentation page</a> shortly.</p>
{% endblock %}
109
sphinx/util.py
Normal file
@@ -0,0 +1,109 @@
# -*- coding: utf-8 -*-
"""
    sphinx.util
    ~~~~~~~~~~~

    Utility functions for Sphinx.

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""

import os
import sys
import fnmatch
from os import path


def relative_uri(base, to):
    """Return a relative URL from ``base`` to ``to``."""
    b2 = base.split('/')
    t2 = to.split('/')
    # remove common segments
    for x, y in zip(b2, t2):
        if x != y:
            break
        b2.pop(0)
        t2.pop(0)
    return '../' * (len(b2)-1) + '/'.join(t2)


def ensuredir(path):
    """Ensure that a path exists."""
    try:
        os.makedirs(path)
    except OSError, err:
        if not err.errno == 17:
            raise


def status_iterator(iterable, colorfunc=lambda x: x, stream=sys.stdout):
    """Print out each item before yielding it."""
    for item in iterable:
        print >>stream, colorfunc(item),
        stream.flush()
        yield item
    print >>stream


def get_matching_files(dirname, pattern, exclude=()):
    """Get all files matching a pattern in a directory, recursively."""
    # dirname is a normalized absolute path.
    dirname = path.normpath(path.abspath(dirname))
    dirlen = len(dirname) + 1  # exclude slash
    for root, dirs, files in os.walk(dirname):
        dirs.sort()
        files.sort()
        for sfile in files:
            if not fnmatch.fnmatch(sfile, pattern):
                continue
            qualified_name = path.join(root[dirlen:], sfile)
            if qualified_name in exclude:
                continue
            yield qualified_name


def get_category(filename):
    """Get the "category" part of a RST filename."""
    parts = filename.split('/', 1)
    if len(parts) < 2:
        return
    return parts[0]


def shorten_result(text='', keywords=[], maxlen=240, fuzz=60):
    if not text:
        text = ''
    text_low = text.lower()
    beg = -1
    for k in keywords:
        i = text_low.find(k.lower())
        if (i > -1 and i < beg) or beg == -1:
            beg = i
    excerpt_beg = 0
    if beg > fuzz:
        for sep in ('.', ':', ';', '='):
            eb = text.find(sep, beg - fuzz, beg - 1)
            if eb > -1:
                eb += 1
                break
        else:
            eb = beg - fuzz
        excerpt_beg = eb
    if excerpt_beg < 0:
        excerpt_beg = 0
    msg = text[excerpt_beg:beg+maxlen]
    if beg > fuzz:
        msg = '... ' + msg
    if beg < len(text)-maxlen:
        msg = msg + ' ...'
    return msg


class attrdict(dict):
    def __getattr__(self, key):
        return self[key]
    def __setattr__(self, key, val):
        self[key] = val
    def __delattr__(self, key):
        del self[key]
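The ``relative_uri`` helper above strips the path segments the two URIs share, then climbs out of the remaining base directories. A minimal sketch of the same algorithm, rewritten for Python 3 (the file itself targets Python 2.5):

```python
# Python 3 sketch of sphinx.util.relative_uri: drop the common leading
# segments, then prefix one '../' per remaining base directory.
def relative_uri(base, to):
    b2 = base.split('/')
    t2 = to.split('/')
    # snapshot the pairs first, since we pop from the lists while comparing
    for x, y in zip(list(b2), list(t2)):
        if x != y:
            break
        b2.pop(0)
        t2.pop(0)
    return '../' * (len(b2) - 1) + '/'.join(t2)

print(relative_uri('lib/string.html', 'lib/re.html'))  # re.html
print(relative_uri('a/b/c.html', 'a/d/e.html'))        # ../d/e.html
```

Note the final path component of ``base`` counts as a segment too, which is why the climb uses ``len(b2) - 1``.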
10
sphinx/web/__init__.py
Normal file
@@ -0,0 +1,10 @@
# -*- coding: utf-8 -*-
"""
    sphinx.web
    ~~~~~~~~~~

    A web application to serve the Python docs interactively.

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""
258
sphinx/web/admin.py
Normal file
@@ -0,0 +1,258 @@
# -*- coding: utf-8 -*-
"""
    sphinx.web.admin
    ~~~~~~~~~~~~~~~~

    Admin application parts.

    :copyright: 2007 by Georg Brandl, Armin Ronacher.
    :license: Python license.
"""

from .util import render_template
from .wsgiutil import Response, RedirectResponse, NotFound
from .database import Comment


class AdminPanel(object):
    """
    Provide the admin functionality.
    """

    def __init__(self, app):
        self.app = app
        self.env = app.env
        self.userdb = app.userdb

    def dispatch(self, req, page):
        """
        Dispatch the requests for the current user in the admin panel.
        """
        is_logged_in = req.user is not None
        if is_logged_in:
            privileges = self.userdb.privileges[req.user]
            is_master_admin = 'master' in privileges
            can_change_password = 'frozenpassword' not in privileges
        else:
            privileges = set()
            can_change_password = is_master_admin = False

        # login and logout
        if page == 'login':
            return self.do_login(req)
        elif not is_logged_in:
            return RedirectResponse('@admin/login/')
        elif page == 'logout':
            return self.do_logout(req)

        # account maintenance
        elif page == 'change_password' and can_change_password:
            return self.do_change_password(req)
        elif page == 'manage_users' and is_master_admin:
            return self.do_manage_users(req)

        # moderate comments
        elif page.split('/')[0] == 'moderate_comments':
            return self.do_moderate_comments(req, page[18:])

        # missing page
        elif page != '':
            raise NotFound()
        return Response(render_template(req, 'admin/index.html', {
            'is_master_admin': is_master_admin,
            'can_change_password': can_change_password
        }))

    def do_login(self, req):
        """
        Display the login form and perform the login procedure.
        """
        if req.user is not None:
            return RedirectResponse('@admin/')
        login_failed = False
        if req.method == 'POST':
            if req.form.get('cancel'):
                return RedirectResponse('')
            username = req.form.get('username')
            password = req.form.get('password')
            if self.userdb.check_password(username, password):
                req.login(username)
                return RedirectResponse('@admin/')
            login_failed = True
        return Response(render_template(req, 'admin/login.html', {
            'login_failed': login_failed
        }))

    def do_logout(self, req):
        """
        Log the user out.
        """
        req.logout()
        return RedirectResponse('admin/login/')

    def do_change_password(self, req):
        """
        Allow the user to change their password.
        """
        change_failed = change_successful = False
        if req.method == 'POST':
            if req.form.get('cancel'):
                return RedirectResponse('@admin/')
            pw = req.form.get('pw1')
            if pw and pw == req.form.get('pw2'):
                self.userdb.set_password(req.user, pw)
                self.userdb.save()
                change_successful = True
            else:
                change_failed = True
        return Response(render_template(req, 'admin/change_password.html', {
            'change_failed': change_failed,
            'change_successful': change_successful
        }))

    def do_manage_users(self, req):
        """
        Manage other user accounts.  Requires master privileges.
        """
        add_user_mode = False
        user_privileges = {}
        users = sorted((user, []) for user in self.userdb.users)
        to_delete = set()
        generated_user = generated_password = None
        user_exists = False

        if req.method == 'POST':
            for item in req.form.getlist('delete'):
                try:
                    to_delete.add(item)
                except ValueError:
                    pass
            for name, item in req.form.iteritems():
                if name.startswith('privileges-'):
                    user_privileges[name[11:]] = [x.strip() for x
                                                  in item.split(',')]
            if req.form.get('cancel'):
                return RedirectResponse('@admin/')
            elif req.form.get('add_user'):
                username = req.form.get('username')
                if username:
                    if username in self.userdb.users:
                        user_exists = username
                    else:
                        generated_password = self.userdb.add_user(username)
                        self.userdb.save()
                        generated_user = username
                else:
                    add_user_mode = True
            elif req.form.get('aborted'):
                return RedirectResponse('@admin/manage_users/')

        users = {}
        for user in self.userdb.users:
            if user not in user_privileges:
                users[user] = sorted(self.userdb.privileges[user])
            else:
                users[user] = user_privileges[user]

        new_users = users.copy()
        for user in to_delete:
            new_users.pop(user, None)

        self_destruction = req.user not in new_users or \
                           'master' not in new_users[req.user]

        if req.method == 'POST' and (not to_delete or
           (to_delete and req.form.get('confirmed'))) and \
           req.form.get('update'):
            old_users = self.userdb.users.copy()
            for user in old_users:
                if user not in new_users:
                    del self.userdb.users[user]
                else:
                    self.userdb.privileges[user].clear()
                    self.userdb.privileges[user].update(new_users[user])
            self.userdb.save()
            return RedirectResponse('@admin/manage_users/')

        return Response(render_template(req, 'admin/manage_users.html', {
            'users': users,
            'add_user_mode': add_user_mode,
            'to_delete': to_delete,
            'ask_confirmation': req.method == 'POST' and to_delete \
                                and not self_destruction,
            'generated_user': generated_user,
            'generated_password': generated_password,
            'self_destruction': self_destruction,
            'user_exists': user_exists
        }))

    def do_moderate_comments(self, req, url):
        """
        Comment moderation panel.
        """
        if url == 'recent_comments':
            details_for = None
            recent_comments = Comment.get_recent(20)
        else:
            details_for = url and self.env.get_real_filename(url) or None
            recent_comments = None
        to_delete = set()
        edit_detail = None

        if 'edit' in req.args:
            try:
                edit_detail = Comment.get(int(req.args['edit']))
            except ValueError:
                pass

        if req.method == 'POST':
            for item in req.form.getlist('delete'):
                try:
                    to_delete.add(int(item))
                except ValueError:
                    pass
            if req.form.get('cancel'):
                return RedirectResponse('@admin/')
            elif req.form.get('confirmed'):
                for comment_id in to_delete:
                    try:
                        Comment.get(comment_id).delete()
                    except ValueError:
                        pass
                return RedirectResponse(req.path)
            elif req.form.get('aborted'):
                return RedirectResponse(req.path)
            elif req.form.get('edit') and not to_delete:
                if 'delete_this' in req.form:
                    try:
                        to_delete.add(req.form['delete_this'])
                    except ValueError:
                        pass
                else:
                    try:
                        edit_detail = c = Comment.get(int(req.args['edit']))
                    except ValueError:
                        pass
                    else:
                        if req.form.get('view'):
                            return RedirectResponse(c.url)
                        c.author = req.form.get('author', '')
                        c.author_mail = req.form.get('author_mail', '')
                        c.title = req.form.get('title', '')
                        c.comment_body = req.form.get('comment_body', '')
                        c.save()
                        self.app.cache.pop(edit_detail.associated_page, None)
                        return RedirectResponse(req.path)

        return Response(render_template(req, 'admin/moderate_comments.html', {
            'pages_with_comments': [{
                'page_id': page_id,
                'title': page_id,  # XXX: get title somehow
                'has_details': details_for == page_id,
                'comments': comments
            } for page_id, comments in Comment.get_overview(details_for)],
            'recent_comments': recent_comments,
            'to_delete': to_delete,
            'ask_confirmation': req.method == 'POST' and to_delete,
            'edit_detail': edit_detail
        }))
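``AdminPanel.dispatch`` above routes on the first path segment with an explicit ``if``/``elif`` chain. A toy sketch of the same routing idea, using a ``getattr``-based variant instead of the chain (class and handler names here are made up for illustration):

```python
# Toy dispatcher: map the first path segment to a do_<name> handler,
# falling back to a NotFound error for unknown pages.
class NotFound(Exception):
    pass

class Panel:
    def do_login(self):
        return 'login page'

    def do_logout(self):
        return 'logged out'

    def dispatch(self, page):
        handler = getattr(self, 'do_' + page.split('/')[0], None)
        if handler is None:
            raise NotFound()
        return handler()

panel = Panel()
print(panel.dispatch('login'))  # login page
```

The explicit chain in the real code buys something ``getattr`` does not: per-branch privilege checks (``is_master_admin``, ``can_change_password``) before a handler is reached.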
60
sphinx/web/antispam.py
Normal file
@@ -0,0 +1,60 @@
# -*- coding: utf-8 -*-
"""
    sphinx.web.antispam
    ~~~~~~~~~~~~~~~~~~~

    Small module that performs anti-spam tests based on the bad content
    regex list provided by MoinMoin.

    :copyright: 2007 by Armin Ronacher.
    :license: Python license.
"""
from __future__ import with_statement
import re
import urllib
import time
from os import path

DOWNLOAD_URL = 'http://moinmaster.wikiwikiweb.de/BadContent?action=raw'
UPDATE_INTERVAL = 60 * 60 * 24 * 7


class AntiSpam(object):
    """
    Class that reads a bad content database (flat file that is automatically
    updated from the MoinMoin server) and checks strings against it.
    """

    def __init__(self, bad_content_file):
        self.bad_content_file = bad_content_file
        lines = None

        if not path.exists(self.bad_content_file):
            last_change = 0
        else:
            last_change = path.getmtime(self.bad_content_file)

        if last_change + UPDATE_INTERVAL < time.time():
            try:
                f = urllib.urlopen(DOWNLOAD_URL)
                data = f.read()
            except:
                pass
            else:
                lines = [l.strip() for l in data.splitlines()
                         if not l.startswith('#')]
                f = file(bad_content_file, 'w')
                f.write('\n'.join(lines))
                last_change = int(time.time())

        if lines is None:
            with file(bad_content_file) as f:
                lines = [l.strip() for l in f]
        self.rules = [re.compile(rule) for rule in lines if rule]

    def is_spam(self, fields):
        for regex in self.rules:
            for field in fields:
                if regex.search(field) is not None:
                    return True
        return False
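The core of ``AntiSpam`` is just "compile every pattern once, then search every field against every pattern". A self-contained Python 3 sketch of that check, with made-up stand-in patterns in place of the downloaded MoinMoin BadContent list:

```python
import re

# Stand-in "bad content" patterns; the real list is fetched from the
# MoinMoin server and cached in a flat file.
RULES = [re.compile(r) for r in [r'casino', r'cheap\s+pills']]

def is_spam(fields):
    """Return True if any field matches any bad-content pattern."""
    return any(rule.search(field) for rule in RULES for field in fields)

print(is_spam(['come to my casino tonight']))  # True
print(is_spam(['a perfectly normal comment']))  # False
```

Compiling the patterns once at startup matters here: the list has hundreds of entries, and ``is_spam`` runs on every form submission.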
790
sphinx/web/application.py
Normal file
@@ -0,0 +1,790 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
"""
|
||||
sphinx.web.application
|
||||
~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
A simple WSGI application that serves an interactive version
|
||||
of the python documentation.
|
||||
|
||||
:copyright: 2007 by Georg Brandl, Armin Ronacher.
|
||||
:license: Python license.
|
||||
"""
|
||||
from __future__ import with_statement
|
||||
|
||||
import os
|
||||
import re
|
||||
import copy
|
||||
import time
|
||||
import heapq
|
||||
import math
|
||||
import difflib
|
||||
import tempfile
|
||||
import threading
|
||||
import cPickle as pickle
|
||||
import cStringIO as StringIO
|
||||
from os import path
|
||||
from itertools import groupby
|
||||
from collections import defaultdict
|
||||
|
||||
from .feed import Feed
|
||||
from .mail import Email
|
||||
from .util import render_template, render_simple_template, get_target_uri, \
|
||||
blackhole_dict, striptags
|
||||
from .admin import AdminPanel
|
||||
from .userdb import UserDatabase
|
||||
from .oldurls import handle_html_url
|
||||
from .antispam import AntiSpam
|
||||
from .database import connect, set_connection, Comment
|
||||
from .wsgiutil import Request, Response, RedirectResponse, \
|
||||
JSONResponse, SharedDataMiddleware, NotFound, get_base_uri
|
||||
|
||||
from ..util import relative_uri, shorten_result
|
||||
from ..search import SearchFrontend
|
||||
from ..writer import HTMLWriter
|
||||
from ..builder import LAST_BUILD_FILENAME, ENV_PICKLE_FILENAME
|
||||
|
||||
from docutils.io import StringOutput
|
||||
from docutils.utils import Reporter
|
||||
from docutils.frontend import OptionParser
|
||||
|
||||
_mail_re = re.compile(r'^([a-zA-Z0-9_\.\-])+\@'
|
||||
r'(([a-zA-Z0-9\-])+\.)+([a-zA-Z0-9]{2,})+$')
|
||||
|
||||
env_lock = threading.Lock()
|
||||
|
||||
|
||||
PATCH_MESSAGE = '''\
|
||||
A new documentation patch has been submitted.
|
||||
Author: %(author)s <%(email)s>
|
||||
Date: %(asctime)s
|
||||
Page: %(page_id)s
|
||||
Summary: %(summary)s
|
||||
|
||||
'''
|
||||
|
||||
known_designs = {
|
||||
'default': (['default.css', 'pygments.css'],
|
||||
'The default design, with the sidebar on the left side.'),
|
||||
'rightsidebar': (['default.css', 'rightsidebar.css', 'pygments.css'],
|
||||
'Display the sidebar on the right side.'),
|
||||
'stickysidebar': (['default.css', 'stickysidebar.css', 'pygments.css'],
|
||||
'''\
|
||||
Display the sidebar on the left and don\'t scroll it
|
||||
with the content. This can cause parts of the content to
|
||||
become inaccessible when the table of contents is too long.'''),
|
||||
'traditional': (['traditional.css'],
|
||||
'''\
|
||||
A design similar to the old documentation style.'''),
|
||||
}
|
||||
|
||||
comments_methods = {
|
||||
'inline': 'Show all comments inline.',
|
||||
'bottom': 'Show all comments at the page bottom.',
|
||||
'none': 'Don\'t show comments at all.',
|
||||
}
|
||||
|
||||
|
||||
class MockBuilder(object):
|
||||
def get_relative_uri(self, from_, to):
|
||||
return ''
|
||||
|
||||
|
||||
NoCache = object()
|
||||
|
||||
def cached(inner):
|
||||
"""
|
||||
Response caching system.
|
||||
"""
|
||||
def caching_function(self, *args, **kwds):
|
||||
gen = inner(self, *args, **kwds)
|
||||
cache_id = gen.next()
|
||||
if cache_id is NoCache:
|
||||
response = gen.next()
|
||||
gen.close()
|
||||
# this could also return a RedirectResponse...
|
||||
if isinstance(response, Response):
|
||||
return response
|
||||
else:
|
||||
return Response(response)
|
||||
try:
|
||||
text = self.cache[cache_id]
|
||||
gen.close()
|
||||
except KeyError:
|
||||
text = gen.next()
|
||||
self.cache[cache_id] = text
|
||||
return Response(text)
|
||||
return caching_function
|
||||
|
||||
|
||||
class DocumentationApplication(object):
|
||||
"""
|
||||
Serves the documentation.
|
||||
"""
|
||||
|
||||
def __init__(self, config):
|
||||
self.cache = blackhole_dict() if config['debug'] else {}
|
||||
self.freqmodules = defaultdict(int)
|
||||
self.last_most_frequent = []
|
||||
self.generated_stylesheets = {}
|
||||
self.config = config
|
||||
self.data_root = config['data_root_path']
|
||||
self.buildfile = path.join(self.data_root, LAST_BUILD_FILENAME)
|
||||
self.buildmtime = -1
|
||||
self.load_env(0)
|
||||
self.db_con = connect(path.join(self.data_root, 'sphinx.db'))
|
||||
self.antispam = AntiSpam(path.join(self.data_root, 'bad_content'))
|
||||
self.userdb = UserDatabase(path.join(self.data_root, 'docusers'))
|
||||
self.admin_panel = AdminPanel(self)
|
||||
|
||||
|
||||
def load_env(self, new_mtime):
|
||||
env_lock.acquire()
|
||||
try:
|
||||
if self.buildmtime == new_mtime:
|
||||
# happens if another thread already reloaded the env
|
||||
return
|
||||
print "* Loading the environment..."
|
||||
with file(path.join(self.data_root, ENV_PICKLE_FILENAME)) as f:
|
||||
self.env = pickle.load(f)
|
||||
with file(path.join(self.data_root, 'globalcontext.pickle')) as f:
|
||||
self.globalcontext = pickle.load(f)
|
||||
with file(path.join(self.data_root, 'searchindex.pickle')) as f:
|
||||
self.search_frontend = SearchFrontend(pickle.load(f))
|
||||
self.buildmtime = path.getmtime(self.buildfile)
|
||||
self.cache.clear()
|
||||
finally:
|
||||
env_lock.release()
|
||||
|
||||
|
||||
def search(self, req):
|
||||
"""
|
||||
Search the database. Currently just a keyword based search.
|
||||
"""
|
||||
if not req.args.get('q'):
|
||||
return RedirectResponse('')
|
||||
return RedirectResponse('q/%s/' % req.args['q'])
|
||||
|
||||
|
||||
def get_page_source(self, page):
|
||||
"""
|
||||
Get the reST source of a page.
|
||||
"""
|
||||
page_id = self.env.get_real_filename(page)
|
||||
if page_id is None:
|
||||
raise NotFound()
|
||||
filename = path.join(self.data_root, 'sources', page_id)[:-3] + 'txt'
|
||||
with file(filename) as f:
|
||||
return page_id, f.read()
|
||||
|
||||
|
||||
def show_source(self, req, page):
|
||||
"""
|
||||
Show the highlighted source for a given page.
|
||||
"""
|
||||
return Response(self.get_page_source(page)[1], mimetype='text/plain')
|
||||
|
||||
|
||||
def suggest_changes(self, req, page):
|
||||
"""
|
||||
Show a "suggest changes" form.
|
||||
"""
|
||||
page_id, contents = self.get_page_source(page)
|
||||
|
||||
return Response(render_template(req, 'edit.html', self.globalcontext, dict(
|
||||
contents=contents,
|
||||
pagename=page,
|
||||
doctitle=self.globalcontext['titles'].get(page_id) or 'this page',
|
||||
submiturl=relative_uri('/@edit/'+page+'/', '/@submit/'+page),
|
||||
)))
|
||||
|
||||
def _generate_preview(self, page_id, contents):
|
||||
"""
|
||||
Generate a preview for suggested changes.
|
||||
"""
|
||||
handle, pathname = tempfile.mkstemp()
|
||||
os.write(handle, contents.encode('utf-8'))
|
||||
os.close(handle)
|
||||
|
||||
warning_stream = StringIO.StringIO()
|
||||
env2 = copy.deepcopy(self.env)
|
||||
destination = StringOutput(encoding='utf-8')
|
||||
writer = HTMLWriter(env2.config)
|
||||
doctree = env2.read_file(page_id, pathname, save_parsed=False)
|
||||
doctree = env2.get_and_resolve_doctree(page_id, MockBuilder(), doctree)
|
||||
doctree.settings = OptionParser(defaults=env2.settings,
|
||||
components=(writer,)).get_default_values()
|
||||
doctree.reporter = Reporter(page_id, 2, 4, stream=warning_stream)
|
||||
output = writer.write(doctree, destination)
|
||||
writer.assemble_parts()
|
||||
return writer.parts['fragment']
|
||||
|
||||
|
||||
def submit_changes(self, req, page):
|
||||
"""
|
||||
Submit the suggested changes as a patch.
|
||||
"""
|
||||
if req.method != 'POST':
|
||||
# only available via POST
|
||||
raise NotFound()
|
||||
if req.form.get('cancel'):
            # handle cancel requests directly
            return RedirectResponse(page)
        # raises NotFound if page doesn't exist
        page_id, orig_contents = self.get_page_source(page)
        author = req.form.get('name')
        email = req.form.get('email')
        summary = req.form.get('summary')
        contents = req.form.get('contents')
        fields = (author, email, summary, contents)

        form_error = None
        rendered = None

        if not all(fields):
            form_error = 'You have to fill out all fields.'
        elif not _mail_re.search(email):
            form_error = 'You have to provide a valid e-mail address.'
        elif req.form.get('homepage') or self.antispam.is_spam(fields):
            form_error = 'Your text contains blocked URLs or words.'
        else:
            if req.form.get('preview'):
                rendered = self._generate_preview(page_id, contents)

            else:
                asctime = time.asctime()
                contents = contents.splitlines()
                orig_contents = orig_contents.splitlines()
                diffname = 'suggestion on %s by %s <%s>' % (asctime, author, email)
                diff = difflib.unified_diff(orig_contents, contents, n=3,
                                            fromfile=page_id, tofile=diffname,
                                            lineterm='')
                diff_text = '\n'.join(diff)
                try:
                    mail = Email(
                        self.config['patch_mail_from'], 'Python Documentation Patches',
                        self.config['patch_mail_to'], '',
                        'Patch for %s by %s' % (page_id, author),
                        PATCH_MESSAGE % locals(),
                        self.config['patch_mail_smtp'],
                    )
                    mail.attachments.add_string('patch.diff', diff_text, 'text/x-diff')
                    mail.send()
                except:
                    import traceback
                    traceback.print_exc()
                    # XXX: how to report?
                    pass
                return Response(render_template(req, 'submitted.html',
                                                self.globalcontext, dict(
                    backlink=relative_uri('/@submit/'+page+'/', page+'/')
                )))

        return Response(render_template(req, 'edit.html', self.globalcontext, dict(
            contents=contents,
            author=author,
            email=email,
            summary=summary,
            pagename=page,
            form_error=form_error,
            rendered=rendered,
            submiturl=relative_uri('/@edit/'+page+'/', '/@submit/'+page),
        )))
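The patch mail assembled above hinges on `difflib.unified_diff`. A standalone sketch of the same call shape (the file name and author label are made up):

```python
import difflib

old = 'one\ntwo\nthree'.splitlines()
new = 'one\n2\nthree'.splitlines()

# same call shape as above: n=3 context lines, custom from/to labels,
# lineterm='' so lines come back without trailing newlines
diff = difflib.unified_diff(old, new, n=3,
                            fromfile='doc/example.rst',
                            tofile='suggestion by John <john@example.com>',
                            lineterm='')
diff_text = '\n'.join(diff)
```

`unified_diff` returns a generator of header and hunk lines, which is why the code joins it into a single string before attaching it to the mail.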

    def get_settings_page(self, req):
        """
        Handle the settings page.
        """
        referer = req.environ.get('HTTP_REFERER') or ''
        if referer:
            base = get_base_uri(req.environ)
            if not referer.startswith(base):
                referer = ''
            else:
                referer = referer[len(base):]
                referer = referer.rpartition('?')[0] or referer

        if req.method == 'POST':
            if req.form.get('cancel'):
                if req.form.get('referer'):
                    return RedirectResponse(req.form['referer'])
                return RedirectResponse('')
            new_style = req.form.get('design')
            if new_style and new_style in known_designs:
                req.session['design'] = new_style
            new_comments = req.form.get('comments')
            if new_comments and new_comments in comments_methods:
                req.session['comments'] = new_comments
            if req.form.get('goback') and req.form.get('referer'):
                return RedirectResponse(req.form['referer'])
            # else display the same page again
            referer = ''

        context = {
            'known_designs': sorted(known_designs.iteritems()),
            'comments_methods': comments_methods.items(),
            'curdesign': req.session.get('design') or 'default',
            'curcomments': req.session.get('comments') or 'inline',
            'referer': referer,
        }

        return Response(render_template(req, 'settings.html',
                                        self.globalcontext, context))


    @cached
    def get_module_index(self, req):
        """
        Get the module index or redirect to a module from the module index.
        """
        most_frequent = heapq.nlargest(30, self.freqmodules.iteritems(),
                                       lambda x: x[1])
        most_frequent = [{
            'name': x[0],
            'size': 100 + math.log(x[1] or 1) * 20,
            'count': x[1]
        } for x in sorted(most_frequent)]

        showpf = None
        newpf = req.args.get('pf')
        sesspf = req.session.get('pf')
        if newpf or sesspf:
            yield NoCache
            if newpf:
                req.session['pf'] = showpf = req.args.getlist('pf')
            else:
                showpf = sesspf
        else:
            if most_frequent != self.last_most_frequent:
                self.cache.pop('@modindex', None)
            yield '@modindex'

        filename = path.join(self.data_root, 'modindex.fpickle')
        with open(filename, 'rb') as f:
            context = pickle.load(f)
        if showpf:
            entries = context['modindexentries']
            i = 0
            while i < len(entries):
                if entries[i][6]:
                    for pform in entries[i][6]:
                        if pform in showpf:
                            break
                    else:
                        del entries[i]
                        continue
                i += 1
        context['freqentries'] = most_frequent
        context['showpf'] = showpf or context['platforms']
        self.last_most_frequent = most_frequent
        yield render_template(req, 'modindex.html',
                              self.globalcontext, context)

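The tag-cloud sizing in `get_module_index` combines `heapq.nlargest` with a logarithmic scale. A minimal standalone sketch with hypothetical view counts:

```python
import heapq
import math

# hypothetical view counts standing in for self.freqmodules
freqmodules = {'os': 42, 're': 17, 'sys': 3, 'heapq': 1}

# pick the 30 most viewed modules; items are (name, count) pairs,
# so the key selects the count
most_frequent = heapq.nlargest(30, freqmodules.items(), key=lambda x: x[1])

# scale a "tag cloud" font size logarithmically, as above:
# 100% for a single view, growing slowly with the view count
sizes = dict((name, 100 + math.log(count or 1) * 20)
             for name, count in most_frequent)
```

The `count or 1` guard keeps `math.log` away from zero for modules that were never viewed.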
    def show_comment_form(self, req, page):
        """
        Show the "new comment" form.
        """
        page_id = self.env.get_real_filename(page)
        ajax_mode = req.args.get('mode') == 'ajax'
        target = req.args.get('target')
        page_comment_mode = not target

        form_error = preview = None
        title = req.form.get('title', '').strip()
        if 'author' in req.form:
            author = req.form['author']
        else:
            author = req.session.get('author', '')
        if 'author_mail' in req.form:
            author_mail = req.form['author_mail']
        else:
            author_mail = req.session.get('author_mail', '')
        comment_body = req.form.get('comment_body', '')
        fields = (title, author, author_mail, comment_body)

        if req.method == 'POST':
            if req.form.get('preview'):
                preview = Comment(page_id, target, title, author, author_mail,
                                  comment_body)
            # 'homepage' is a forbidden field to thwart bots
            elif req.form.get('homepage') or self.antispam.is_spam(fields):
                form_error = 'Your text contains blocked URLs or words.'
            else:
                if not all(fields):
                    form_error = 'You have to fill out all fields.'
                elif _mail_re.search(author_mail) is None:
                    form_error = 'You have to provide a valid e-mail address.'
                elif len(comment_body) < 20:
                    form_error = 'Your comment is too short ' \
                                 '(must have at least 20 characters).'
                else:
                    # '|none' can stay since it doesn't include comments
                    self.cache.pop(page_id + '|inline', None)
                    self.cache.pop(page_id + '|bottom', None)
                    comment = Comment(page_id, target,
                                      title, author, author_mail,
                                      comment_body)
                    comment.save()
                    req.session['author'] = author
                    req.session['author_mail'] = author_mail
                    if ajax_mode:
                        return JSONResponse({'posted': True, 'error': False,
                                             'commentID': comment.comment_id})
                    return RedirectResponse(comment.url)

        output = render_template(req, '_commentform.html', {
            'ajax_mode': ajax_mode,
            'preview': preview,
            'suggest_url': '@edit/%s/' % page,
            'comments_form': {
                'target': target,
                'title': title,
                'author': author,
                'author_mail': author_mail,
                'comment_body': comment_body,
                'error': form_error
            }
        })

        if ajax_mode:
            return JSONResponse({
                'body': output,
                'error': bool(form_error),
                'posted': False
            })
        return Response(render_template(req, 'commentform.html', {
            'form': output
        }))

    def _insert_comments(self, req, url, context, mode):
        """
        Insert inline comments into a page context.
        """
        if 'body' not in context:
            return

        comment_url = '@comments/%s/' % url
        page_id = self.env.get_real_filename(url)
        tx = context['body']
        all_comments = Comment.get_for_page(page_id)
        global_comments = []
        for name, comments in groupby(all_comments, lambda x: x.associated_name):
            if not name:
                global_comments.extend(comments)
                continue
            comments = list(comments)
            if not comments:
                continue
            tx = re.sub('<!--#%s#-->' % name,
                        render_template(req, 'inlinecomments.html', {
                            'comments': comments,
                            'id': name,
                            'comment_url': comment_url,
                            'mode': mode}),
                        tx)
            if mode == 'bottom':
                global_comments.extend(comments)
        if mode == 'inline':
            # replace all markers for items without comments
            tx = re.sub('<!--#([^#]*)#-->',
                        (lambda match:
                         render_template(req, 'inlinecomments.html', {
                             'id': match.group(1),
                             'mode': 'inline',
                             'comment_url': comment_url
                         })),
                        tx)
        tx += render_template(req, 'comments.html', {
            'comments': global_comments,
            'comment_url': comment_url
        })
        context['body'] = tx

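The marker replacement in `_insert_comments` relies on `re.sub` accepting a callable replacement, which receives the match object. A self-contained sketch of that pass, with a plain string standing in for `render_template` (marker names are invented):

```python
import re

# markers like the <!--#name#--> placeholders left in the rendered page body
body = 'intro <!--#sec-a#--> middle <!--#sec-b#--> end'

# a function replacement receives the match object; group(1) is the
# marker name captured by ([^#]*)
def expand(match):
    return '[comments for %s]' % match.group(1)

body = re.sub('<!--#([^#]*)#-->', expand, body)
```

Using a callable is what lets each marker be rendered with its own `id`, instead of substituting one fixed string everywhere.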

    @cached
    def get_page(self, req, url):
        """
        Show the requested documentation page or raise a
        `NotFound` exception to display a page with close matches.
        """
        page_id = self.env.get_real_filename(url)
        if page_id is None:
            raise NotFound(show_keyword_matches=True)
        # increment view count of all modules on that page
        for modname in self.env.filemodules.get(page_id, ()):
            self.freqmodules[modname] += 1
        # comments enabled?
        comments = self.env.metadata[page_id].get('comments_enabled', True)

        # how does the user want to view comments?
        commentmode = req.session.get('comments', 'inline') if comments else ''

        # show "old URL" message? -> no caching possible
        oldurl = req.args.get('oldurl')
        if oldurl:
            yield NoCache
        else:
            # there must be different cache entries per comment mode
            yield page_id + '|' + commentmode

        # cache miss; load the page and render it
        filename = path.join(self.data_root, page_id[:-3] + 'fpickle')
        with open(filename, 'rb') as f:
            context = pickle.load(f)

        # add comments to page text
        if commentmode != 'none':
            self._insert_comments(req, url, context, commentmode)

        yield render_template(req, 'page.html', self.globalcontext, context,
                              {'oldurl': oldurl})

    @cached
    def get_special_page(self, req, name):
        yield '@' + name
        filename = path.join(self.data_root, name + '.fpickle')
        with open(filename, 'rb') as f:
            context = pickle.load(f)
        yield render_template(req, name + '.html',
                              self.globalcontext, context)


    def comments_feed(self, req, url):
        if url == 'recent':
            feed = Feed(req, 'Recent Comments', 'Recent Comments', '')
            for comment in Comment.get_recent():
                feed.add_item(comment.title, comment.author, comment.url,
                              comment.parsed_comment_body, comment.pub_date)
        else:
            page_id = self.env.get_real_filename(url)
            doctitle = striptags(self.globalcontext['titles'].get(page_id, url))
            feed = Feed(req, 'Comments for "%s"' % doctitle,
                        'List of comments for the topic "%s"' % doctitle, url)
            for comment in Comment.get_for_page(page_id):
                feed.add_item(comment.title, comment.author, comment.url,
                              comment.parsed_comment_body, comment.pub_date)
        return Response(feed.generate(), mimetype='application/rss+xml')


    def get_error_404(self, req):
        """
        Show a simple error 404 page.
        """
        return Response(render_template(req, 'not_found.html', self.globalcontext))


    pretty_type = {
        'data': 'module data',
        'cfunction': 'C function',
        'cmember': 'C member',
        'cmacro': 'C macro',
        'ctype': 'C type',
        'cvar': 'C variable',
    }

    def get_keyword_matches(self, req, term=None, avoid_fuzzy=False,
                            is_error_page=False):
        """
        Find keyword matches. If there is an exact match, just redirect:
        http://docs.python.org/os.path.exists would automatically
        redirect to http://docs.python.org/modules/os.path/#os.path.exists.
        Else, show a page with close matches.

        Module references are processed first so that "os.path" is handled as
        a module and not as a member of os.
        """
        if term is None:
            term = req.path.strip('/')

        matches = self.env.find_keyword(term, avoid_fuzzy)

        # if avoid_fuzzy is False, matches can be None
        if matches is None:
            return

        if isinstance(matches, tuple):
            url = get_target_uri(matches[1])
            if matches[0] != 'module':
                url += '#' + matches[2]
            return RedirectResponse(url)
        else:
            # get some close matches
            close_matches = []
            good_matches = 0
            for ratio, type, filename, anchorname, desc in matches:
                link = get_target_uri(filename)
                if type != 'module':
                    link += '#' + anchorname
                good_match = ratio > 0.75
                good_matches += good_match
                close_matches.append({
                    'href': relative_uri(req.path, link),
                    'title': anchorname,
                    'good_match': good_match,
                    'type': self.pretty_type.get(type, type),
                    'description': desc,
                })
            return Response(render_template(req, 'keyword_not_found.html', {
                'close_matches': close_matches,
                'good_matches_count': good_matches,
                'keyword': term
            }, self.globalcontext), status=404 if is_error_page else 200)

    def get_user_stylesheet(self, req):
        """
        Stylesheets are exchangeable. Handle them here and
        cache them on the server side until server shuts down
        and on the client side for 1 hour (not in debug mode).
        """
        style = req.session.get('design')
        if style not in known_designs:
            style = 'default'

        if style in self.generated_stylesheets:
            stylesheet = self.generated_stylesheets[style]
        else:
            stylesheet = []
            for filename in known_designs[style][0]:
                with file(path.join(self.data_root, 'style', filename)) as f:
                    stylesheet.append(f.read())
            stylesheet = '\n'.join(stylesheet)
            if not self.config.get('debug'):
                self.generated_stylesheets[style] = stylesheet

        if req.args.get('admin') == 'yes':
            with file(path.join(self.data_root, 'style', 'admin.css')) as f:
                stylesheet += '\n' + f.read()

        # XXX: add timestamp based http caching
        return Response(stylesheet, mimetype='text/css')

    def __call__(self, environ, start_response):
        """
        Dispatch requests.
        """
        set_connection(self.db_con)
        req = Request(environ)
        url = req.path.strip('/') or 'index'

        # check if the environment was updated
        new_mtime = path.getmtime(self.buildfile)
        if self.buildmtime != new_mtime:
            self.load_env(new_mtime)

        try:
            if req.path == 'favicon.ico':
                # TODO: change this to a real favicon?
                resp = self.get_error_404(req)
            elif not req.path.endswith('/') and req.method == 'GET':
                # may be an old URL
                if url.endswith('.html'):
                    resp = handle_html_url(self, url)
                else:
                    # else, require a trailing slash on GET requests
                    # this ensures nice looking urls and working relative
                    # links for cached resources.
                    query = req.environ.get('QUERY_STRING', '')
                    resp = RedirectResponse(req.path + '/' + (query and '?'+query))
            # index page is special
            elif url == 'index':
                # presets for settings
                if req.args.get('design') and req.args['design'] in known_designs:
                    req.session['design'] = req.args['design']
                if req.args.get('comments') and req.args['comments'] in comments_methods:
                    req.session['comments'] = req.args['comments']
                # alias for fuzzy search
                if 'q' in req.args:
                    resp = RedirectResponse('q/%s/' % req.args['q'])
                # stylesheet
                elif req.args.get('do') == 'stylesheet':
                    resp = self.get_user_stylesheet(req)
                else:
                    resp = self.get_special_page(req, 'index')
            # go to the search page
            # XXX: this is currently just a redirect to /q/ which is handled below
            elif url == 'search':
                resp = self.search(req)
            # settings page cannot be cached
            elif url == 'settings':
                resp = self.get_settings_page(req)
            # module index page is special
            elif url == 'modindex':
                resp = self.get_module_index(req)
            # genindex page is special too
            elif url == 'genindex':
                resp = self.get_special_page(req, 'genindex')
            # start the fuzzy search
            elif url[:2] == 'q/':
                resp = self.get_keyword_matches(req, url[2:])
            # special URLs
            elif url[0] == '@':
                # source view
                if url[:8] == '@source/':
                    resp = self.show_source(req, url[8:])
                # suggest changes view
                elif url[:6] == '@edit/':
                    resp = self.suggest_changes(req, url[6:])
                # suggest changes submit
                elif url[:8] == '@submit/':
                    resp = self.submit_changes(req, url[8:])
                # show the comment form
                elif url[:10] == '@comments/':
                    resp = self.show_comment_form(req, url[10:])
                # comments RSS feed
                elif url[:5] == '@rss/':
                    resp = self.comments_feed(req, url[5:])
                # dispatch requests to the admin panel
                elif url == '@admin' or url[:7] == '@admin/':
                    resp = self.admin_panel.dispatch(req, url[7:])
                else:
                    raise NotFound()
            # everything else is handled as a page, or as a fuzzy search
            # if the page does not exist.
            else:
                resp = self.get_page(req, url)
        # views can raise a NotFound exception to show an error page:
        # either a real "not found" page or a page with similar matches.
        except NotFound, e:
            if e.show_keyword_matches:
                resp = self.get_keyword_matches(req, is_error_page=True)
            else:
                resp = self.get_error_404(req)
        return resp(environ, start_response)


def _check_superuser(app):
    """Check if there is a superuser and create one if necessary."""
    if not app.userdb.users:
        print 'Warning: you have no user database or no master "admin" account.'
        create = raw_input('Do you want to create an admin account now? [y/n] ')
        if not create or create.lower().startswith('y'):
            import getpass
            print 'Creating "admin" user.'
            pw1 = getpass.getpass('Enter password: ')
            pw2 = getpass.getpass('Enter password again: ')
            if pw1 != pw2:
                print 'Error: Passwords don\'t match.'
                sys.exit(1)
            app.userdb.set_password('admin', pw1)
            app.userdb.privileges['admin'].add('master')
            app.userdb.save()


def setup_app(config, check_superuser=False):
    """
    Create the WSGI application based on a configuration dict.
    Handled configuration values so far:

    `data_root_path`
        the folder containing the documentation data as generated
        by sphinx with the web builder.
    """
    app = DocumentationApplication(config)
    if check_superuser:
        _check_superuser(app)
    app = SharedDataMiddleware(app, {
        '/style': path.join(config['data_root_path'], 'style')
    })
    return app
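`setup_app` wraps the application in `SharedDataMiddleware` so that `/style` is served as static data while everything else falls through to the documentation app. A toy sketch of that wrapping pattern (app names, prefix, and payload are invented; real WSGI apps return bytes):

```python
# the inner application: answers every request the middleware passes on
def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['page']

# a minimal shared-data middleware: answers prefix-matched paths itself,
# delegates the rest to the wrapped app
def shared_data(app, prefix, payload):
    def wrapped(environ, start_response):
        if environ.get('PATH_INFO', '').startswith(prefix):
            start_response('200 OK', [('Content-Type', 'text/css')])
            return [payload]
        return app(environ, start_response)
    return wrapped

wrapped = shared_data(app, '/style', 'body { }')
```

Because middleware is itself a WSGI callable, wrappers like this can be stacked in any order before the app is handed to the server.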
194
sphinx/web/database.py
Normal file
@@ -0,0 +1,194 @@
# -*- coding: utf-8 -*-
"""
    sphinx.web.database
    ~~~~~~~~~~~~~~~~~~~

    The database connections are thread local. To set the connection
    for a thread, use the `set_connection` function provided. The
    `connect` function automatically sets up new tables and returns a
    usable connection, which is also set as the connection for the
    thread that called it.

    :copyright: 2007 by Georg Brandl, Armin Ronacher.
    :license: Python license.
"""
import time
import sqlite3
from datetime import datetime
from threading import local

from .markup import markup


_thread_local = local()


def connect(path):
    """Connect and create tables if required. Also assigns
    the connection for the current thread."""
    con = sqlite3.connect(path, detect_types=sqlite3.PARSE_DECLTYPES)
    con.isolation_level = None

    # create tables that do not exist.
    for table in tables:
        try:
            con.execute('select * from %s limit 1;' % table)
        except sqlite3.OperationalError:
            con.execute(tables[table])

    set_connection(con)
    return con


def get_cursor():
    """Return a new cursor."""
    return _thread_local.connection.cursor()


def set_connection(con):
    """Call this after thread creation to make this connection
    the connection for this thread."""
    _thread_local.connection = con


#: tables that we use
tables = {
    'comments': '''
        create table comments (
            comment_id integer primary key,
            associated_page varchar(200),
            associated_name varchar(200),
            title varchar(120),
            author varchar(200),
            author_mail varchar(250),
            comment_body text,
            pub_date timestamp
        );'''
}


class Comment(object):
    """
    Represents one comment.
    """

    def __init__(self, associated_page, associated_name, title, author,
                 author_mail, comment_body, pub_date=None):
        self.comment_id = None
        self.associated_page = associated_page
        self.associated_name = associated_name
        self.title = title
        if pub_date is None:
            pub_date = datetime.utcnow()
        self.pub_date = pub_date
        self.author = author
        self.author_mail = author_mail
        self.comment_body = comment_body

    @property
    def url(self):
        return '%s#comment-%s' % (
            self.associated_page[:-4],
            self.comment_id
        )

    @property
    def parsed_comment_body(self):
        from .util import get_target_uri
        from ..util import relative_uri
        uri = get_target_uri(self.associated_page)
        def make_rel_link(keyword):
            return relative_uri(uri, 'q/%s/' % keyword)
        return markup(self.comment_body, make_rel_link)

    def save(self):
        """
        Save the comment, using the thread's cursor.
        """
        cur = get_cursor()
        args = (self.associated_page, self.associated_name, self.title,
                self.author, self.author_mail, self.comment_body, self.pub_date)
        if self.comment_id is None:
            cur.execute('''insert into comments (associated_page, associated_name,
                                                 title, author, author_mail,
                                                 comment_body, pub_date)
                           values (?, ?, ?, ?, ?, ?, ?)''', args)
            self.comment_id = cur.lastrowid
        else:
            args += (self.comment_id,)
            cur.execute('''update comments set associated_page=?,
                               associated_name=?, title=?, author=?,
                               author_mail=?, comment_body=?, pub_date=?
                           where comment_id = ?''', args)
        cur.close()

    def delete(self):
        cur = get_cursor()
        cur.execute('delete from comments where comment_id = ?',
                    (self.comment_id,))
        cur.close()

    @staticmethod
    def _make_comment(row):
        rv = Comment(*row[1:])
        rv.comment_id = row[0]
        return rv

    @staticmethod
    def get(comment_id):
        cur = get_cursor()
        cur.execute('select * from comments where comment_id = ?', (comment_id,))
        row = cur.fetchone()
        if row is None:
            raise ValueError('comment not found')
        try:
            return Comment._make_comment(row)
        finally:
            cur.close()

    @staticmethod
    def get_for_page(associated_page, reverse=False):
        cur = get_cursor()
        cur.execute('''select * from comments where associated_page = ?
                       order by associated_name, comment_id %s''' %
                    ('desc' if reverse else 'asc'),
                    (associated_page,))
        try:
            return [Comment._make_comment(row) for row in cur]
        finally:
            cur.close()

    @staticmethod
    def get_recent(n=10):
        cur = get_cursor()
        cur.execute('select * from comments order by comment_id desc limit ?',
                    (n,))
        try:
            return [Comment._make_comment(row) for row in cur]
        finally:
            cur.close()

    @staticmethod
    def get_overview(detail_for=None):
        cur = get_cursor()
        cur.execute('''select distinct associated_page from comments
                       order by associated_page asc''')
        pages = []
        for row in cur:
            page_id = row[0]
            if page_id == detail_for:
                pages.append((page_id, Comment.get_for_page(page_id, True)))
            else:
                pages.append((page_id, []))
        cur.close()
        return pages

    def __repr__(self):
        return '<Comment by %r on %r:%r (%s)>' % (
            self.author,
            self.associated_page,
            self.associated_name,
            self.comment_id or 'not saved'
        )
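The thread-local connection pattern above can be exercised standalone with an in-memory database (the schema is reduced to two columns for brevity):

```python
import sqlite3
from threading import local

_thread_local = local()

def set_connection(con):
    # make this connection the one used by the current thread
    _thread_local.connection = con

def get_cursor():
    # every helper gets its cursor from the thread's connection
    return _thread_local.connection.cursor()

# in-memory stand-in for the on-disk comments database
con = sqlite3.connect(':memory:')
con.execute('create table comments (comment_id integer primary key, title text)')
set_connection(con)

cur = get_cursor()
cur.execute('insert into comments (title) values (?)', ('hello',))
comment_id = cur.lastrowid   # integer primary key doubles as rowid
cur.close()
```

Since sqlite3 connections must not be shared across threads without care, giving each thread its own connection via `threading.local` sidesteps the problem entirely.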
78
sphinx/web/feed.py
Normal file
@@ -0,0 +1,78 @@
# -*- coding: utf-8 -*-
"""
    sphinx.web.feed
    ~~~~~~~~~~~~~~~

    Nifty module that generates RSS feeds.

    :copyright: 2007 by Armin Ronacher.
    :license: Python license.
"""
import time
from datetime import datetime
from xml.dom.minidom import Document
from email.Utils import formatdate


def format_rss_date(date):
    """
    Pass it a datetime object to receive the string representation
    for RSS date fields.
    """
    return formatdate(time.mktime(date.timetuple()) + date.microsecond / 1e6)


class Feed(object):
    """
    Abstract feed creation class. To generate feeds use one of
    the subclasses `RssFeed` or `AtomFeed`.
    """

    def __init__(self, req, title, description, link):
        self.req = req
        self.title = title
        self.description = description
        self.link = req.make_external_url(link)
        self.items = []
        self._last_update = None

    def add_item(self, title, author, link, description, pub_date):
        if self._last_update is None or pub_date > self._last_update:
            self._last_update = pub_date
        date = pub_date or datetime.utcnow()
        self.items.append({
            'title': title,
            'author': author,
            'link': self.req.make_external_url(link),
            'description': description,
            'pub_date': date
        })

    def generate(self):
        return self.generate_document().toxml('utf-8')

    def generate_document(self):
        doc = Document()
        Element = doc.createElement
        Text = doc.createTextNode

        rss = doc.appendChild(Element('rss'))
        rss.setAttribute('version', '2.0')

        channel = rss.appendChild(Element('channel'))
        for key in ('title', 'description', 'link'):
            value = getattr(self, key)
            channel.appendChild(Element(key)).appendChild(Text(value))
        date = format_rss_date(self._last_update or datetime.utcnow())
        channel.appendChild(Element('pubDate')).appendChild(Text(date))

        for item in self.items:
            d = Element('item')
            for key in ('title', 'author', 'link', 'description'):
                d.appendChild(Element(key)).appendChild(Text(item[key]))
            pub_date = format_rss_date(item['pub_date'])
            d.appendChild(Element('pubDate')).appendChild(Text(pub_date))
            d.appendChild(Element('guid')).appendChild(Text(item['link']))
            channel.appendChild(d)

        return doc
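`generate_document` builds the RSS tree with nothing but `xml.dom.minidom`. A reduced sketch of the same construction, down to one channel title:

```python
from xml.dom.minidom import Document

doc = Document()

# <rss version="2.0"> root element, as in Feed.generate_document
rss = doc.appendChild(doc.createElement('rss'))
rss.setAttribute('version', '2.0')

# appendChild returns the appended node, so calls can be chained
channel = rss.appendChild(doc.createElement('channel'))
channel.appendChild(doc.createElement('title')).appendChild(
    doc.createTextNode('Recent Comments'))

# toxml('utf-8') serializes the whole document with an XML declaration
xml = doc.toxml('utf-8')
```

The chaining style works because `appendChild` returns the child node, which is what lets the code above attach a text node in the same expression.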
278
sphinx/web/mail.py
Normal file
@@ -0,0 +1,278 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
"""
|
||||
sphinx.web.mail
|
||||
~~~~~~~~~~~~~~~
|
||||
|
||||
A simple module for sending e-mails, based on simplemail.py.
|
||||
|
||||
:copyright: 2004-2007 by Gerold Penz.
|
||||
2007 by Georg Brandl.
|
||||
:license: Python license.
|
||||
"""
|
||||
|
||||
import os.path
|
||||
import sys
|
||||
import time
|
||||
import smtplib
|
||||
import mimetypes
|
||||
|
||||
from email import Encoders
|
||||
from email.Header import Header
|
||||
from email.MIMEText import MIMEText
|
||||
from email.MIMEMultipart import MIMEMultipart
|
||||
from email.Utils import formataddr
|
||||
from email.Utils import formatdate
|
||||
from email.Message import Message
|
||||
from email.MIMEAudio import MIMEAudio
|
||||
from email.MIMEBase import MIMEBase
|
||||
from email.MIMEImage import MIMEImage
|
||||
|
||||
|
||||
|
||||
# Exceptions
|
||||
#----------------------------------------------------------------------
|
||||
class SimpleMail_Exception(Exception):
|
||||
def __str__(self):
|
||||
return self.__doc__
|
||||
|
||||
class NoFromAddress_Exception(SimpleMail_Exception):
|
||||
pass
|
||||
|
||||
class NoToAddress_Exception(SimpleMail_Exception):
|
||||
pass
|
||||
|
||||
class NoSubject_Exception(SimpleMail_Exception):
|
||||
pass
|
||||
|
||||
class AttachmentNotFound_Exception(SimpleMail_Exception):
|
||||
pass
|
||||
|
||||
|
||||
class Attachments(object):
|
||||
def __init__(self):
|
||||
self._attachments = []
|
||||
|
||||
def add_filename(self, filename = ''):
|
||||
self._attachments.append(('file', filename))
|
||||
|
||||
def add_string(self, filename, text, mimetype):
|
||||
self._attachments.append(('string', (filename, text, mimetype)))
|
||||
|
||||
def count(self):
|
||||
return len(self._attachments)
|
||||
|
||||
def get_list(self):
|
||||
return self._attachments
|
||||
|
||||
|
||||
class Recipients(object):
|
||||
def __init__(self):
|
||||
self._recipients = []
|
||||
|
||||
def add(self, address, caption = ''):
|
||||
self._recipients.append(formataddr((caption, address)))
|
||||
|
||||
def count(self):
|
||||
return len(self._recipients)
|
||||
|
||||
def __repr__(self):
|
||||
return str(self._recipients)
|
||||
|
||||
def get_list(self):
|
||||
return self._recipients
|
||||
|
||||
|
||||
class CCRecipients(Recipients):
|
||||
pass
|
||||
|
||||
|
||||
class BCCRecipients(Recipients):
|
||||
pass
|
||||
|
||||
|
||||
class Email(object):
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
from_address = "",
|
||||
from_caption = "",
|
||||
to_address = "",
|
||||
to_caption = "",
|
||||
subject = "",
|
||||
message = "",
|
||||
smtp_server = "localhost",
|
||||
smtp_user = "",
|
||||
smtp_password = "",
|
||||
user_agent = "",
|
||||
reply_to_address = "",
|
||||
reply_to_caption = "",
|
||||
use_tls = False,
|
||||
):
|
||||
"""
|
||||
Initialize the email object
|
||||
from_address = the email address of the sender
|
||||
from_caption = the caption (name) of the sender
|
||||
to_address = the email address of the recipient
|
||||
to_caption = the caption (name) of the recipient
|
||||
subject = the subject of the email message
|
||||
message = the body text of the email message
|
||||
smtp_server = the ip-address or the name of the SMTP-server
|
||||
smtp_user = (optional) Login name for the SMTP-Server
|
||||
smtp_password = (optional) Password for the SMTP-Server
|
||||
user_agent = (optional) program identification
|
||||
reply_to_address = (optional) Reply-to email address
|
||||
reply_to_caption = (optional) Reply-to caption (name)
|
||||
use_tls = (optional) True, if the connection should use TLS
|
||||
to encrypt.
|
||||
"""
|
||||
|
||||
self.from_address = from_address
|
||||
self.from_caption = from_caption
|
||||
self.recipients = Recipients()
|
||||
self.cc_recipients = CCRecipients()
|
||||
self.bcc_recipients = BCCRecipients()
|
||||
if to_address:
|
||||
self.recipients.add(to_address, to_caption)
|
||||
self.subject = subject
|
||||
self.message = message
|
||||
self.smtp_server = smtp_server
|
||||
self.smtp_user = smtp_user
|
||||
self.smtp_password = smtp_password
|
||||
self.attachments = Attachments()
|
||||
self.content_subtype = "plain"
|
||||
self.content_charset = "iso-8859-1"
|
||||
self.header_charset = "us-ascii"
|
||||
self.statusdict = None
|
||||
self.user_agent = user_agent
|
||||
self.reply_to_address = reply_to_address
|
||||
self.reply_to_caption = reply_to_caption
|
||||
self.use_tls = use_tls
|
||||
|
||||
|
||||
    def send(self):
        """
        Send the mail. Returns True if successfully sent to at least one
        recipient.
        """
        # validation
        if len(self.from_address.strip()) == 0:
            raise NoFromAddress_Exception
        if self.recipients.count() == 0:
            if ((self.cc_recipients.count() == 0) and
                (self.bcc_recipients.count() == 0)):
                raise NoToAddress_Exception
        if len(self.subject.strip()) == 0:
            raise NoSubject_Exception

        # assemble
        if self.attachments.count() == 0:
            msg = MIMEText(
                _text=self.message,
                _subtype=self.content_subtype,
                _charset=self.content_charset
            )
        else:
            msg = MIMEMultipart()
            if self.message:
                att = MIMEText(
                    _text=self.message,
                    _subtype=self.content_subtype,
                    _charset=self.content_charset
                )
                msg.attach(att)

        # add headers
        from_str = formataddr((self.from_caption, self.from_address))
        msg["From"] = from_str
        if self.reply_to_address:
            reply_to_str = formataddr((self.reply_to_caption,
                                       self.reply_to_address))
            msg["Reply-To"] = reply_to_str
        if self.recipients.count() > 0:
            msg["To"] = ", ".join(self.recipients.get_list())
        if self.cc_recipients.count() > 0:
            msg["Cc"] = ", ".join(self.cc_recipients.get_list())
        msg["Date"] = formatdate(time.time())
        msg["User-Agent"] = self.user_agent
        try:
            msg["Subject"] = Header(self.subject, self.header_charset)
        except UnicodeDecodeError:
            msg["Subject"] = Header(self.subject, self.content_charset)
        msg.preamble = "You will not see this in a MIME-aware mail reader.\n"
        msg.epilogue = ""

        # assemble multipart
        if self.attachments.count() > 0:
            for typ, info in self.attachments.get_list():
                if typ == 'file':
                    filename = info
                    if not os.path.isfile(filename):
                        raise AttachmentNotFound_Exception, filename
                    mimetype, encoding = mimetypes.guess_type(filename)
                    if mimetype is None or encoding is not None:
                        mimetype = 'application/octet-stream'
                    if mimetype.startswith('text/'):
                        fp = file(filename)
                    else:
                        fp = file(filename, 'rb')
                    text = fp.read()
                    fp.close()
                else:
                    filename, text, mimetype = info
                maintype, subtype = mimetype.split('/', 1)
                if maintype == 'text':
                    # Note: we should handle calculating the charset
                    att = MIMEText(text, _subtype=subtype)
                elif maintype == 'image':
                    att = MIMEImage(text, _subtype=subtype)
                elif maintype == 'audio':
                    att = MIMEAudio(text, _subtype=subtype)
                else:
                    att = MIMEBase(maintype, subtype)
                    att.set_payload(text)
                    # Encode the payload using Base64
                    Encoders.encode_base64(att)
                # Set the filename parameter
                att.add_header(
                    'Content-Disposition',
                    'attachment',
                    filename=os.path.basename(filename).strip()
                )
                msg.attach(att)

        # connect to server
        smtp = smtplib.SMTP()
        if self.smtp_server:
            smtp.connect(self.smtp_server)
        else:
            smtp.connect()

        # TLS?
        if self.use_tls:
            smtp.ehlo()
            smtp.starttls()
            smtp.ehlo()

        # authenticate
        if self.smtp_user:
            smtp.login(user=self.smtp_user, password=self.smtp_password)

        # send
        self.statusdict = smtp.sendmail(
            from_str,
            (self.recipients.get_list() +
             self.cc_recipients.get_list() +
             self.bcc_recipients.get_list()),
            msg.as_string()
        )
        smtp.close()

        return True
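For reference, the header-assembly part of ``send()`` maps onto the modern ``email`` API roughly as follows. This is a Python 3 sketch, not part of this commit; the helper name ``build_message`` and the addresses are invented for illustration, and SMTP delivery is left out:

```python
from email.message import EmailMessage
from email.utils import formataddr, formatdate

def build_message(from_addr, to_addrs, subject, body, from_caption=""):
    # Mirrors Email.send()'s From/To/Subject/Date assembly,
    # minus attachments and SMTP delivery.
    msg = EmailMessage()
    msg["From"] = formataddr((from_caption, from_addr))
    msg["To"] = ", ".join(to_addrs)
    msg["Subject"] = subject
    msg["Date"] = formatdate()
    msg.set_content(body)  # text/plain by default
    return msg

msg = build_message("me@example.com", ["you@example.com"],
                    "Hello", "Hi there")
```

Unlike the 2007 code, ``EmailMessage.set_content`` picks a sensible charset automatically, so the explicit ``content_charset``/``header_charset`` juggling above is no longer needed.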
239
sphinx/web/markup.py
Normal file
@@ -0,0 +1,239 @@
# -*- coding: utf-8 -*-
"""
    sphinx.web.markup
    ~~~~~~~~~~~~~~~~~

    Awfully simple markup used in comments. Syntax:

    `this is some <code>`
        like <tt> in HTML

    ``this is like ` just that it can contain backticks``
        like <tt> in HTML

    *emphasized*
        translates to <em class="important">

    **strong**
        translates to <strong>

    !!!very important message!!!
        use this to mark important or dangerous things.
        Translates to <em class="dangerous">

    [[http://www.google.com/]]
        Simple link with the link target as caption. If the
        URL is relative the provided callback is called to get
        the full URL.

    [[http://www.google.com/ go to google]]
        Link with "go to google" as caption.

    <code>preformatted code that could be Python code</code>
        Python code (most of the time), otherwise preformatted.

    <quote>cite someone</quote>
        Like <blockquote> in HTML.

    :copyright: 2007 by Armin Ronacher.
    :license: Python license.
"""
import cgi
import re
from urlparse import urlparse

from ..highlighting import highlight_block


inline_formatting = {
    'escaped_code': ('``', '``'),
    'code': ('`', '`'),
    'strong': ('**', '**'),
    'emphasized': ('*', '*'),
    'important': ('!!!', '!!!'),
    'link': ('[[', ']]'),
    'quote': ('<quote>', '</quote>'),
    'code_block': ('<code>', '</code>'),
    'paragraph': (r'\n{2,}', None),
    'newline': (r'\\$', None)
}

simple_formattings = {
    'strong_begin': '<strong>',
    'strong_end': '</strong>',
    'emphasized_begin': '<em>',
    'emphasized_end': '</em>',
    'important_begin': '<em class="important">',
    'important_end': '</em>',
    'quote_begin': '<blockquote>',
    'quote_end': '</blockquote>'
}

raw_formatting = set(['link', 'code', 'escaped_code', 'code_block'])

formatting_start_re = re.compile('|'.join(
    '(?P<%s>%s)' % (name, end is not None and re.escape(start) or start)
    for name, (start, end)
    in sorted(inline_formatting.items(), key=lambda x: -len(x[1][0]))
), re.S | re.M)

formatting_end_res = dict(
    (name, re.compile(re.escape(end))) for name, (start, end)
    in inline_formatting.iteritems() if end is not None
)

without_end_tag = set(name for name, (_, end) in inline_formatting.iteritems()
                      if end is None)


class StreamProcessor(object):

    def __init__(self, stream):
        self._pushed = []
        self._stream = stream

    def __iter__(self):
        return self

    def next(self):
        if self._pushed:
            return self._pushed.pop()
        return self._stream.next()

    def push(self, token, data):
        self._pushed.append((token, data))

    def get_data(self, drop_needle=False):
        result = []
        try:
            while True:
                token, data = self.next()
                if token != 'text':
                    if not drop_needle:
                        self.push(token, data)
                    break
                result.append(data)
        except StopIteration:
            pass
        return ''.join(result)


class MarkupParser(object):

    def __init__(self, make_rel_url):
        self.make_rel_url = make_rel_url

    def tokenize(self, text):
        text = '\n'.join(text.splitlines())
        last_pos = 0
        pos = 0
        end = len(text)
        stack = []
        text_buffer = []

        while pos < end:
            if stack:
                m = formatting_end_res[stack[-1]].match(text, pos)
                if m is not None:
                    if text_buffer:
                        yield 'text', ''.join(text_buffer)
                        del text_buffer[:]
                    yield stack[-1] + '_end', None
                    stack.pop()
                    pos = m.end()
                    continue

            m = formatting_start_re.match(text, pos)
            if m is not None:
                if text_buffer:
                    yield 'text', ''.join(text_buffer)
                    del text_buffer[:]

                for key, value in m.groupdict().iteritems():
                    if value is not None:
                        if key in without_end_tag:
                            yield key, None
                        else:
                            if key in raw_formatting:
                                regex = formatting_end_res[key]
                                m2 = regex.search(text, m.end())
                                if m2 is None:
                                    yield key, text[m.end():]
                                else:
                                    yield key, text[m.end():m2.start()]
                                m = m2
                            else:
                                yield key + '_begin', None
                                stack.append(key)
                        break

                if m is None:
                    break
                else:
                    pos = m.end()
                    continue

            text_buffer.append(text[pos])
            pos += 1

        yield 'text', ''.join(text_buffer)
        for token in reversed(stack):
            yield token + '_end', None

    def stream_to_html(self, text):
        stream = StreamProcessor(self.tokenize(text))
        paragraph = []
        result = []

        def new_paragraph():
            result.append(paragraph[:])
            del paragraph[:]

        for token, data in stream:
            if token in simple_formattings:
                paragraph.append(simple_formattings[token])
            elif token in ('text', 'escaped_code', 'code'):
                if data:
                    data = cgi.escape(data)
                    if token in ('escaped_code', 'code'):
                        data = '<tt>%s</tt>' % data
                    paragraph.append(data)
            elif token == 'link':
                if ' ' in data:
                    href, caption = data.split(' ', 1)
                else:
                    href = caption = data
                protocol = urlparse(href)[0]
                nofollow = True
                if not protocol:
                    href = self.make_rel_url(href)
                    nofollow = False
                elif protocol == 'javascript':
                    href = href[11:]
                paragraph.append('<a href="%s"%s>%s</a>' % (
                    cgi.escape(href),
                    ' rel="nofollow"' if nofollow else '',
                    cgi.escape(caption)))
            elif token == 'code_block':
                result.append(highlight_block(data, 'python'))
                new_paragraph()
            elif token == 'paragraph':
                new_paragraph()
            elif token == 'newline':
                paragraph.append('<br>')

        if paragraph:
            result.append(paragraph)
        for item in result:
            if isinstance(item, list):
                if item:
                    yield '<p>%s</p>' % ''.join(item)
            else:
                yield item

    def to_html(self, text):
        return ''.join(self.stream_to_html(text))


def markup(text, make_rel_url=lambda x: './' + x):
    return MarkupParser(make_rel_url).to_html(text)
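The tokenizer's trick is building one alternation of named groups, sorted so longer start delimiters win, and dispatching on which group matched. A standalone Python 3 sketch of that technique (the three-entry ``tokens`` table here is illustrative, not the real ``inline_formatting`` dict):

```python
import re

# One alternation of named groups; longer patterns first so that
# '**' is tried before '*', just like formatting_start_re above.
tokens = {'strong': r'\*\*', 'emphasized': r'\*', 'code': r'`'}
start_re = re.compile('|'.join(
    '(?P<%s>%s)' % (name, pattern)
    for name, pattern in sorted(tokens.items(),
                                key=lambda item: -len(item[1]))))

def first_token(text):
    # m.lastgroup names the alternative that matched.
    m = start_re.search(text)
    return m.lastgroup if m else None

print(first_token('**bold**'))   # 'strong' -- the longer delimiter wins
print(first_token('`code`'))     # 'code'
```

Sorting by delimiter length before joining is what keeps ``**strong**`` from being mis-tokenized as two ``emphasized`` markers.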
91
sphinx/web/oldurls.py
Normal file
@@ -0,0 +1,91 @@
# -*- coding: utf-8 -*-
"""
    sphinx.web.oldurls
    ~~~~~~~~~~~~~~~~~~

    Handle old URLs gracefully.

    :copyright: 2007 by Georg Brandl.
    :license: Python license.
"""

import re

from .wsgiutil import RedirectResponse, NotFound


_module_re = re.compile(r'module-(.*)\.html')
_modobj_re = re.compile(r'(.*)-objects\.html')
_modsub_re = re.compile(r'(.*?)-(.*)\.html')


special_module_names = {
    'main': '__main__',
    'builtin': '__builtin__',
    'future': '__future__',
    'pycompile': 'py_compile',
}

tutorial_nodes = [
    '', '', '',
    'appetite',
    'interpreter',
    'introduction',
    'controlflow',
    'datastructures',
    'modules',
    'inputoutput',
    'errors',
    'classes',
    'stdlib',
    'stdlib2',
    'whatnow',
    'interactive',
    'floatingpoint',
    '',
    'glossary',
]


def handle_html_url(req, url):
    def inner():
        # global special pages
        if url.endswith('/contents.html'):
            return 'contents/'
        if url.endswith('/genindex.html'):
            return 'genindex/'
        if url.endswith('/about.html'):
            return 'about/'
        if url.endswith('/reporting-bugs.html'):
            return 'bugs/'
        if url == 'modindex.html' or url.endswith('/modindex.html'):
            return 'modindex/'
        # modules, macmodules
        if url[:4] in ('lib/', 'mac/'):
            p = '' if url[0] == 'l' else 'mac'
            m = _module_re.match(url[4:])
            if m:
                mn = m.group(1)
                return p + 'modules/' + special_module_names.get(mn, mn)
            # module sub-pages
            m = _modsub_re.match(url[4:])
            if m and not _modobj_re.match(url[4:]):
                mn = m.group(1)
                return p + 'modules/' + special_module_names.get(mn, mn)
            # XXX: handle all others
        # tutorial
        elif url[:4] == 'tut/':
            try:
                node = int(url[8:].partition('.html')[0])
            except ValueError:
                pass
            else:
                if tutorial_nodes[node]:
                    return 'tutorial/' + tutorial_nodes[node]
        # installing: all in one (ATM)
        elif url[:5] == 'inst/':
            return 'install/'
        # no mapping for "documenting Python..."
        # nothing found
        raise NotFound()
    return RedirectResponse('%s?oldurl=1' % inner())
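The core of the ``lib/``/``mac/`` branch is normalizing ``module-<name>.html`` pages, with a few legacy spellings remapped. A standalone Python 3 sketch of that mapping (the ``special`` table is abridged and the helper name is invented):

```python
import re

# Same regex as _module_re above; unknown pages yield None
# instead of raising NotFound, to keep the sketch self-contained.
_module_re = re.compile(r'module-(.*)\.html')
special = {'main': '__main__', 'pycompile': 'py_compile'}

def old_module_url(url):
    m = _module_re.match(url)
    if m is None:
        return None
    name = m.group(1)
    return 'modules/' + special.get(name, name)

print(old_module_url('module-os.html'))         # 'modules/os'
print(old_module_url('module-pycompile.html'))  # 'modules/py_compile'
```

The ``special_module_names`` table exists because the old LaTeX docs could not use ``__main__`` or ``py_compile`` verbatim in filenames.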
99
sphinx/web/serve.py
Normal file
@@ -0,0 +1,99 @@
# -*- coding: utf-8 -*-
"""
    sphinx.web.serve
    ~~~~~~~~~~~~~~~~

    This module optionally wraps the `wsgiref` module so that it reloads code
    automatically. Works with any WSGI application but it won't help in non
    `wsgiref` environments. Use it only for development.

    :copyright: 2007 by Armin Ronacher, Georg Brandl.
    :license: Python license.
"""
import os
import sys
import time
import thread


def reloader_loop(extra_files):
    """When this function is run from the main thread, it will force other
    threads to exit when any modules currently loaded change.

    :param extra_files: a list of additional files it should watch.
    """
    mtimes = {}
    while True:
        for filename in filter(None, [getattr(module, '__file__', None)
                                      for module in sys.modules.values()] +
                                     extra_files):
            while not os.path.isfile(filename):
                filename = os.path.dirname(filename)
                if not filename:
                    break
            if not filename:
                continue

            if filename[-4:] in ('.pyc', '.pyo'):
                filename = filename[:-1]

            mtime = os.stat(filename).st_mtime
            if filename not in mtimes:
                mtimes[filename] = mtime
                continue
            if mtime > mtimes[filename]:
                sys.exit(3)
        time.sleep(1)


def restart_with_reloader():
    """Spawn a new Python interpreter with the same arguments as this one,
    but running the reloader thread."""
    while True:
        print '* Restarting with reloader...'
        args = [sys.executable] + sys.argv
        if sys.platform == 'win32':
            args = ['"%s"' % arg for arg in args]
        new_environ = os.environ.copy()
        new_environ['RUN_MAIN'] = 'true'
        exit_code = os.spawnve(os.P_WAIT, sys.executable, args, new_environ)
        if exit_code != 3:
            return exit_code


def run_with_reloader(main_func, extra_watch):
    """
    Run the given function in an independent Python interpreter.
    """
    if os.environ.get('RUN_MAIN') == 'true':
        thread.start_new_thread(main_func, ())
        try:
            reloader_loop(extra_watch)
        except KeyboardInterrupt:
            return
    try:
        sys.exit(restart_with_reloader())
    except KeyboardInterrupt:
        pass


def run_simple(hostname, port, make_app, use_reloader=False,
               extra_files=None):
    """
    Start an application using wsgiref and with an optional reloader.
    """
    from wsgiref.simple_server import make_server
    def inner():
        application = make_app()
        print '* Startup complete.'
        srv = make_server(hostname, port, application)
        try:
            srv.serve_forever()
        except KeyboardInterrupt:
            pass
    if os.environ.get('RUN_MAIN') != 'true':
        print '* Running on http://%s:%d/' % (hostname, port)
    if use_reloader:
        run_with_reloader(inner, extra_files or [])
    else:
        inner()
90
sphinx/web/userdb.py
Normal file
@@ -0,0 +1,90 @@
# -*- coding: utf-8 -*-
"""
    sphinx.web.userdb
    ~~~~~~~~~~~~~~~~~

    A module that provides pythonic access to the `docusers` file
    that stores users and their passwords so that they can gain access
    to the administration system.

    :copyright: 2007 by Armin Ronacher.
    :license: Python license.
"""
from __future__ import with_statement
from os import path
from hashlib import sha1
from random import choice, randrange
from collections import defaultdict


def gen_password(length=8, add_numbers=True, mix_case=True,
                 add_special_char=True):
    """
    Generate a pronounceable password.
    """
    if length <= 0:
        raise ValueError('requested password of length <= 0')
    consonants = 'bcdfghjklmnprstvwz'
    vowels = 'aeiou'
    if mix_case:
        consonants = consonants * 2 + consonants.upper()
        vowels = vowels * 2 + vowels.upper()
    pw = ''.join([choice(consonants) +
                  choice(vowels) +
                  choice(consonants + vowels) for _
                  in xrange(length // 3 + 1)])[:length]
    if add_numbers:
        n = length // 3
        if n > 0:
            pw = pw[:-n]
            for _ in xrange(n):
                pw += choice('0123456789')
    if add_special_char:
        tmp = randrange(0, len(pw))
        l1 = pw[:tmp]
        l2 = pw[tmp:]
        if max(len(l1), len(l2)) == len(l1):
            l1 = l1[:-1]
        else:
            l2 = l2[:-1]
        return l1 + choice('#$&%?!') + l2
    return pw


class UserDatabase(object):

    def __init__(self, filename):
        self.filename = filename
        self.users = {}
        self.privileges = defaultdict(set)
        if path.exists(filename):
            with file(filename) as f:
                for line in f:
                    line = line.strip()
                    if line and line[0] != '#':
                        parts = line.split(':')
                        self.users[parts[0]] = parts[1]
                        self.privileges[parts[0]].update(x for x in
                                                         parts[2].split(',')
                                                         if x)

    def set_password(self, user, password):
        """Encode the password for a user (also adds users)."""
        self.users[user] = sha1('%s|%s' % (user, password)).hexdigest()

    def add_user(self, user):
        """Add a new user and return the generated password."""
        pw = gen_password(8, add_special_char=False)
        self.set_password(user, pw)
        self.privileges[user].clear()
        return pw

    def check_password(self, user, password):
        return user in self.users and \
               self.users[user] == sha1('%s|%s' % (user, password)).hexdigest()

    def save(self):
        with file(self.filename, 'w') as f:
            for username, password in self.users.iteritems():
                privileges = ','.join(self.privileges.get(username, ()))
                f.write('%s:%s:%s\n' % (username, password, privileges))
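``UserDatabase`` stores ``sha1("<user>|<password>")`` hex digests in the ``docusers`` file. The same scheme in Python 3 needs an explicit encode before hashing; a sketch of the store/check round trip (function names are invented, and unsalted SHA-1 is shown only to mirror this file format, not as current practice):

```python
from hashlib import sha1

def encode_password(user, password):
    # Same "user|password" digest format as UserDatabase,
    # with the Python 3 str -> bytes step made explicit.
    return sha1(('%s|%s' % (user, password)).encode('utf-8')).hexdigest()

def check_password(users, user, password):
    return user in users and users[user] == encode_password(user, password)

users = {'georg': encode_password('georg', 'secret')}
print(check_password(users, 'georg', 'secret'))  # True
print(check_password(users, 'georg', 'wrong'))   # False
```

Hashing the username together with the password means two users with the same password still get different digests, a cheap stand-in for a real per-user salt.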