Module documentation

Parser for epytext strings. Epytext is a lightweight markup language whose primary intended application is Python documentation strings. This parser converts epytext strings to a simple DOM-like representation (encoded as a tree of Element objects and strings). Epytext strings can contain the following structural blocks:

  • epytext: The top-level element of the DOM tree.
  • para: A paragraph of text. Paragraphs contain no newlines, and all spaces are soft.
  • section: A section or subsection.
  • field: A tagged field. These fields provide information about specific aspects of a Python object, such as the description of a function's parameter, or the author of a module.
  • literalblock: A block of literal text. This text should be displayed as it would be displayed in plaintext. The parser removes the appropriate amount of leading whitespace from each line in the literal block.
  • doctestblock: A block containing sample python code, formatted according to the specifications of the doctest module.
  • ulist: An unordered list.
  • olist: An ordered list.
  • li: A list item. This tag is used both for unordered list items and for ordered list items.

Additionally, the following inline regions may be used within para blocks:

  • code: Source code and identifiers.
  • math: Mathematical expressions.
  • index: A term which should be included in an index, if one is generated.
  • italic: Italicized text.
  • bold: Bold-faced text.
  • uri: A Uniform Resource Identifier (URI) or Uniform Resource Locator (URL).
  • link: A Python identifier which should be hyperlinked to the named object's documentation, when possible.

The returned DOM tree will conform to the following Document Type Definition:

   <!ENTITY % colorized '(code | math | index | italic |
                          bold | uri | link | symbol)*'>

   <!ELEMENT epytext ((para | literalblock | doctestblock |
                      section | ulist | olist)*, fieldlist?)>

   <!ELEMENT para (#PCDATA | %colorized;)*>

   <!ELEMENT section (para | literalblock | doctestblock |
                      section | ulist | olist)+>

   <!ELEMENT fieldlist (field+)>
   <!ELEMENT field (tag, arg?, (para | literalblock | doctestblock |
                                ulist | olist)+)>
   <!ELEMENT tag (#PCDATA)>
   <!ELEMENT arg (#PCDATA)>

   <!ELEMENT literalblock (#PCDATA | %colorized;)*>
   <!ELEMENT doctestblock (#PCDATA)>

   <!ELEMENT ulist (li+)>
   <!ELEMENT olist (li+)>
   <!ELEMENT li (para | literalblock | doctestblock | ulist | olist)+>
   <!ATTLIST li bullet NMTOKEN #IMPLIED>
   <!ATTLIST olist start NMTOKEN #IMPLIED>

   <!ELEMENT uri     (name, target)>
   <!ELEMENT link    (name, target)>
   <!ELEMENT name    (#PCDATA | %colorized;)*>
   <!ELEMENT target  (#PCDATA)>

   <!ELEMENT code    (#PCDATA | %colorized;)*>
   <!ELEMENT math    (#PCDATA | %colorized;)*>
   <!ELEMENT italic  (#PCDATA | %colorized;)*>
   <!ELEMENT bold    (#PCDATA | %colorized;)*>
   <!ELEMENT indexed (#PCDATA | %colorized;)*>
   <!ATTLIST code style CDATA #IMPLIED>

   <!ELEMENT symbol (#PCDATA)>
   <!ELEMENT wbr EMPTY>
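As an illustration of the tree shape this DTD describes, here is a hand-built tree for a one-paragraph docstring. The Element class below is a simplified stand-in for this module's Element; its exact constructor signature is an assumption, not the real API.

```python
# Simplified stand-in for this module's Element class (assumption: the
# real class stores a tag name, mixed string/Element children, and
# attributes).
class Element:
    def __init__(self, tag, *children, **attribs):
        self.tag = tag
        self.children = list(children)
        self.attribs = attribs

# Tree for the docstring "Hello B{world}!", following the DTD above:
# epytext contains a para, and a para mixes #PCDATA with colorized
# regions such as bold.
tree = Element('epytext',
               Element('para', 'Hello ', Element('bold', 'world'), '!'))

para = tree.children[0]
assert para.tag == 'para'
assert para.children[1].tag == 'bold'
assert para.children[1].children == ['world']
```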
Class ColorizingError An error generated while colorizing a paragraph.
Class Element No summary
Class ParsedEpytextDocstring Undocumented
Class StructuringError An error generated while structuring a formatted documentation string.
Class Token Tokens are an intermediate data structure used while constructing the structuring DOM tree for a formatted docstring. There are five types of Token:
Class TokenizationError An error generated while tokenizing a formatted documentation string.
Function get_parser Get the parse_docstring function.
Function parse Return a DOM tree encoding the contents of an epytext string. Any errors generated during parsing will be stored in errors.
Function parse_docstring Parse the given docstring, which is formatted using epytext; and return a ParsedDocstring representation of its contents.
Constant SYMBOLS A list of the escape symbols that are supported by epydoc. Currently the following symbols are supported:
Variable __doc__ Undocumented
Variable symblist Undocumented
Function _add_list Add a new list item or field to the DOM tree, with the given bullet or field tag. When necessary, create the associated list.
Function _add_para Colorize the given paragraph, and add it to the DOM tree.
Function _add_section Add a new section to the DOM tree, with the given heading.
Function _colorize No summary
Function _colorize_link Undocumented
Function _pop_completed_blocks No summary
Function _tokenize Split a given formatted docstring into an ordered list of Tokens, according to the epytext markup rules.
Function _tokenize_doctest No summary
Function _tokenize_listart No summary
Function _tokenize_literal No summary
Function _tokenize_para No summary
Constant _BRACE_RE Undocumented
Constant _BULLET_RE Undocumented
Constant _COLORIZING_TAGS Undocumented
Constant _ESCAPES Undocumented
Constant _FIELD_BULLET Undocumented
Constant _FIELD_BULLET_RE Undocumented
Constant _HEADING_CHARS Undocumented
Constant _LINK_COLORIZING_TAGS Undocumented
Constant _LIST_BULLET_RE Undocumented
Constant _OLIST_BULLET Undocumented
Constant _SYMBOLS Undocumented
Constant _TARGET_RE Undocumented
def get_parser(obj):
Get the parse_docstring function.
Parameters
obj (Optional[Documentable]): Undocumented
Returns
Callable[[str, List[ParseError], bool], ParsedDocstring]: Undocumented
def parse(text, errors=None):
Return a DOM tree encoding the contents of an epytext string. Any errors generated during parsing will be stored in errors.
Parameters
text (str): The epytext string to parse.
errors (Optional[List[ParseError]]): A list where any errors generated during parsing will be stored. If no list is specified, then fatal errors will generate exceptions, and non-fatal errors will be ignored.
Returns
Optional[Element]: A DOM tree encoding the contents of the epytext string, or None if non-fatal errors were encountered and no errors accumulator was provided.
Raises
ParseError: If errors is None and an error is encountered while parsing.
def parse_docstring(docstring, errors, processtypes=False):
Parse the given docstring, which is formatted using epytext; and return a ParsedDocstring representation of its contents.
Parameters
docstring (str): The docstring to parse.
errors (List[ParseError]): A list where any errors generated during parsing will be stored.
processtypes (bool): Use ParsedTypeDocstring to parse 'type' fields.
Returns
ParsedDocstring: Undocumented
SYMBOLS: list[str] =

A list of the escape symbols that are supported by epydoc. Currently the following symbols are supported:

    # Arrows
    '<-', '->', '^', 'v',

    # Greek letters
    'alpha', 'beta', 'gamma', 'delta', 'epsilon', 'zeta',
    'eta', 'theta', 'iota', 'kappa', 'lambda', 'mu',
    'nu', 'xi', 'omicron', 'pi', 'rho', 'sigma',
    'tau', 'upsilon', 'phi', 'chi', 'psi', 'omega',
    'Alpha', 'Beta', 'Gamma', 'Delta', 'Epsilon', 'Zeta',
    'Eta', 'Theta', 'Iota', 'Kappa', 'Lambda', 'Mu',
    'Nu', 'Xi', 'Omicron', 'Pi', 'Rho', 'Sigma',
    'Tau', 'Upsilon', 'Phi', 'Chi', 'Psi', 'Omega',

    # HTML character entities
    'larr', 'rarr', 'uarr', 'darr', 'harr', 'crarr',
    'lArr', 'rArr', 'uArr', 'dArr', 'hArr',
    'copy', 'times', 'forall', 'exist', 'part',
    'empty', 'isin', 'notin', 'ni', 'prod', 'sum',
    'prop', 'infin', 'ang', 'and', 'or', 'cap', 'cup',
    'int', 'there4', 'sim', 'cong', 'asymp', 'ne',
    'equiv', 'le', 'ge', 'sub', 'sup', 'nsub',
    'sube', 'supe', 'oplus', 'otimes', 'perp',

    # Alternate (long) names
    'infinity', 'integral', 'product',
    '>=', '<=',
Value
['<-',
 '->',
 '^',
 'v',
 'alpha',
 'beta',
 'gamma',
...
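These names are written in markup with the S{...} symbol escape. A sketch, using a small subset of the list, of how a set-based lookup such as _SYMBOLS (defined below as set(SYMBOLS)) validates a symbol name:

```python
# A few entries copied from the SYMBOLS listing above; the real list is
# much longer.
SYMBOLS = ['<-', '->', 'alpha', 'beta', 'larr', 'rarr', 'infinity']
_SYMBOLS = set(SYMBOLS)  # O(1) membership test for the colorizer

def is_valid_symbol(name):
    # e.g. S{rarr} in a docstring names a rightwards arrow
    return name in _SYMBOLS

assert is_valid_symbol('rarr')
assert not is_valid_symbol('zzz')  # an unknown name would be reported as an error
```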
__doc__ =

Undocumented

symblist: str =

Undocumented

def _add_list(bullet_token, stack, indent_stack, errors):
Add a new list item or field to the DOM tree, with the given bullet or field tag. When necessary, create the associated list.
Parameters
bullet_token (Token): Undocumented
stack (List[Element]): Undocumented
indent_stack (List[Optional[int]]): Undocumented
errors (List[ParseError]): Undocumented
def _add_para(para_token, stack, indent_stack, errors):
Colorize the given paragraph, and add it to the DOM tree.
Parameters
para_token (Token): Undocumented
stack (List[Element]): Undocumented
indent_stack (List[Optional[int]]): Undocumented
errors (List[ParseError]): Undocumented
def _add_section(heading_token, stack, indent_stack, errors):
Add a new section to the DOM tree, with the given heading.
Parameters
heading_token (Token): Undocumented
stack (List[Element]): Undocumented
indent_stack (List[Optional[int]]): Undocumented
errors (List[ParseError]): Undocumented
def _colorize(token, errors, tagName='para'):
Given a string containing the contents of a paragraph, produce a DOM Element encoding that paragraph. Colorized regions are represented using DOM Elements, and text is represented using DOM Texts.
Parameters
token (Token): Undocumented
errors (list of string): A list of errors. Any newly generated errors will be appended to this list.
tagName (string): The element tag for the DOM Element that should be generated.
Returns
Element: A DOM Element encoding the given paragraph.
def _colorize_link(link, token, end, errors):

Undocumented

Parameters
link (Element): Undocumented
token (Token): Undocumented
end (int): Undocumented
errors (List[ParseError]): Undocumented
def _pop_completed_blocks(token, stack, indent_stack):
Pop any completed blocks off the stack. This includes any blocks that we have dedented past, as well as any list item blocks that we've dedented to. The top element on the stack should only be a list if we're about to start a new list item (i.e., if the next token is a bullet).
Parameters
token (Token): Undocumented
stack (List[Element]): Undocumented
indent_stack (List[Optional[int]]): Undocumented
def _tokenize(text, errors):
Split a given formatted docstring into an ordered list of Tokens, according to the epytext markup rules.
Parameters
text (str): The epytext string
errors (List[ParseError]): A list where any errors generated during parsing will be stored. If no list is specified, then errors will generate exceptions.
Returns
List[Token]: A list of the Tokens that make up the given string.
def _tokenize_doctest(lines, start, block_indent, tokens, errors):
Construct a Token containing the doctest block starting at lines[start], and append it to tokens. block_indent should be the indentation of the doctest block. Any errors generated while tokenizing the doctest block will be appended to errors.
Parameters
lines (List[str]): The list of lines to be tokenized.
start (int): The index into lines of the first line of the doctest block to be tokenized.
block_indent (int): The indentation of lines[start]. This is the indentation of the doctest block.
tokens (List[Token]): Undocumented
errors (List[ParseError]): A list where any errors generated during parsing will be stored. If no list is specified, then errors will generate exceptions.
Returns
int: The line number of the first line following the doctest block.
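A doctestblock follows the doctest module's interactive-session format, so the standard library's own parser can be used to check what such a block contains:

```python
import doctest

# A doctest block as it would appear (dedented) inside an epytext
# docstring: '>>> ' source lines followed by expected output lines.
block = ">>> 1 + 1\n2\n"
examples = doctest.DocTestParser().get_examples(block)

assert len(examples) == 1
assert examples[0].source == "1 + 1\n"
assert examples[0].want == "2\n"
```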
def _tokenize_listart(lines, start, bullet_indent, tokens, errors):
Construct Tokens for the bullet and the first paragraph of the list item (or field) starting at lines[start], and append them to tokens. bullet_indent should be the indentation of the list item. Any errors generated while tokenizing will be appended to errors.
Parameters
lines (List[str]): The list of lines to be tokenized.
start (int): The index into lines of the first line of the list item to be tokenized.
bullet_indent (int): The indentation of lines[start]. This is the indentation of the list item.
tokens (List[Token]): Undocumented
errors (List[ParseError]): A list of the errors generated by parsing. Any new errors generated while tokenizing this paragraph will be appended to this list.
Returns
int: The line number of the first line following the list item's first paragraph.
def _tokenize_literal(lines, start, block_indent, tokens, errors):
Construct a Token containing the literal block starting at lines[start], and append it to tokens. block_indent should be the indentation of the literal block. Any errors generated while tokenizing the literal block will be appended to errors.
Parameters
lines (List[str]): The list of lines to be tokenized.
start (int): The index into lines of the first line of the literal block to be tokenized.
block_indent (int): The indentation of lines[start]. This is the indentation of the literal block.
tokens (List[Token]): Undocumented
errors (List[ParseError]): A list of the errors generated by parsing. Any new errors generated while tokenizing this paragraph will be appended to this list.
Returns
int: The line number of the first line following the literal block.
def _tokenize_para(lines, start, para_indent, tokens, errors):
Construct a Token containing the paragraph starting at lines[start], and append it to tokens. para_indent should be the indentation of the paragraph. Any errors generated while tokenizing the paragraph will be appended to errors.
Parameters
lines (List[str]): The list of lines to be tokenized.
start (int): The index into lines of the first line of the paragraph to be tokenized.
para_indent (int): The indentation of lines[start]. This is the indentation of the paragraph.
tokens (List[Token]): Undocumented
errors (List[ParseError]): A list of the errors generated by parsing. Any new errors generated while tokenizing this paragraph will be appended to this list.
Returns
int: The line number of the first line following the paragraph.
_BRACE_RE =

Undocumented

Value
re.compile(r'[\{\}]')
_BULLET_RE =

Undocumented

Value
re.compile(_ULIST_BULLET + '|' + _OLIST_BULLET + '|' + _FIELD_BULLET)
_COLORIZING_TAGS: dict[str, str] =

Undocumented

Value
{'C': 'code',
 'M': 'math',
 'I': 'italic',
 'B': 'bold',
 'U': 'uri',
 'L': 'link',
 'E': 'escape',
...
_ESCAPES: dict[str, str] =

Undocumented

Value
{'lb': '{', 'rb': '}'}
_FIELD_BULLET: str =

Undocumented

Value
'@\\w+( [^{}:\\n]+)?:'
_FIELD_BULLET_RE =

Undocumented

Value
re.compile(_FIELD_BULLET)
_HEADING_CHARS: str =

Undocumented

Value
'=-~'
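A plausible reading of this constant (an assumption, not confirmed by this page) is that the index of a heading's underline character in _HEADING_CHARS gives its section depth, with '=' marking top-level sections:

```python
_HEADING_CHARS = '=-~'

# Assumption: a heading's level is the index of its underline character,
# so '=' underlines sections, '-' subsections, and '~' sub-subsections.
def heading_level(underline_char):
    return _HEADING_CHARS.index(underline_char)

assert heading_level('=') == 0
assert heading_level('-') == 1
assert heading_level('~') == 2
```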
_LINK_COLORIZING_TAGS: list[str] =

Undocumented

Value
['link', 'uri']
_LIST_BULLET_RE =

Undocumented

Value
re.compile(_ULIST_BULLET + '|' + _OLIST_BULLET)
_OLIST_BULLET: str =

Undocumented

Value
'(\\d+[.])+( +|$)'
_SYMBOLS =

Undocumented

Value
set(SYMBOLS)
_TARGET_RE =

Undocumented

Value
re.compile(r'^(.*?)\s*<(?:URI:|L:)?([^<>]+)>$')
_ULIST_BULLET: str =

Undocumented

Value
'[-]( +|$)'