Module documentation

Implements a combined Jinja/Python lexer. The Lexer class is used to do some preprocessing: it filters out invalid operators, such as the bitshift operators that are not allowed in templates, and separates template code from Python code in expressions.
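To illustrate the kind of separation the lexer performs, here is a deliberately simplified, self-contained sketch, not the real Jinja lexer: it splits template source into "data" chunks and `{{ ... }}` expression chunks. The `toy_lex` name and the single hard-coded delimiter pair are assumptions for illustration only.

```python
import re

# Toy delimiter pattern: only {{ ... }} expressions, non-greedy, across lines.
_variable_re = re.compile(r"\{\{(.*?)\}\}", re.S)

def toy_lex(source):
    """Split template source into ("data", text) and ("variable", expr) tokens."""
    tokens = []
    pos = 0
    for match in _variable_re.finditer(source):
        if match.start() > pos:
            tokens.append(("data", source[pos:match.start()]))
        tokens.append(("variable", match.group(1).strip()))
        pos = match.end()
    if pos < len(source):
        tokens.append(("data", source[pos:]))
    return tokens

toy_lex("Hello {{ name }}!")
# → [('data', 'Hello '), ('variable', 'name'), ('data', '!')]
```

The real lexer additionally handles block and comment delimiters, line statements, and whitespace control, and tokenizes the Python-like expression inside the delimiters rather than returning it as a raw string.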
Class Token No class docstring; 0/3 class variables, 2/3 methods documented
Class TokenStream A token stream is an iterable that yields Tokens. The parser, however, does not iterate over it but calls next to advance one token ahead. The currently active token is stored as current.
Function count_newlines Count the number of newline characters in the string. This is useful for extensions that filter a stream.
Constant TOKEN_ADD Undocumented
Constant TOKEN_ASSIGN Undocumented
Constant TOKEN_BLOCK_BEGIN Undocumented
Constant TOKEN_BLOCK_END Undocumented
Constant TOKEN_COLON Undocumented
Constant TOKEN_COMMA Undocumented
Constant TOKEN_COMMENT Undocumented
Constant TOKEN_COMMENT_BEGIN Undocumented
Constant TOKEN_COMMENT_END Undocumented
Constant TOKEN_DATA Undocumented
Constant TOKEN_DIV Undocumented
Constant TOKEN_DOT Undocumented
Constant TOKEN_EOF Undocumented
Constant TOKEN_EQ Undocumented
Constant TOKEN_FLOAT Undocumented
Constant TOKEN_FLOORDIV Undocumented
Constant TOKEN_GT Undocumented
Constant TOKEN_GTEQ Undocumented
Constant TOKEN_INITIAL Undocumented
Constant TOKEN_INTEGER Undocumented
Constant TOKEN_LBRACE Undocumented
Constant TOKEN_LBRACKET Undocumented
Constant TOKEN_LINECOMMENT Undocumented
Constant TOKEN_LINECOMMENT_BEGIN Undocumented
Constant TOKEN_LINECOMMENT_END Undocumented
Constant TOKEN_LINESTATEMENT_BEGIN Undocumented
Constant TOKEN_LINESTATEMENT_END Undocumented
Constant TOKEN_LPAREN Undocumented
Constant TOKEN_LT Undocumented
Constant TOKEN_LTEQ Undocumented
Constant TOKEN_MOD Undocumented
Constant TOKEN_MUL Undocumented
Constant TOKEN_NAME Undocumented
Constant TOKEN_NE Undocumented
Constant TOKEN_OPERATOR Undocumented
Constant TOKEN_PIPE Undocumented
Constant TOKEN_POW Undocumented
Constant TOKEN_RAW_BEGIN Undocumented
Constant TOKEN_RAW_END Undocumented
Constant TOKEN_RBRACE Undocumented
Constant TOKEN_RBRACKET Undocumented
Constant TOKEN_RPAREN Undocumented
Constant TOKEN_SEMICOLON Undocumented
Constant TOKEN_STRING Undocumented
Constant TOKEN_SUB Undocumented
Constant TOKEN_TILDE Undocumented
Constant TOKEN_VARIABLE_BEGIN Undocumented
Constant TOKEN_VARIABLE_END Undocumented
Constant TOKEN_WHITESPACE Undocumented
Variable float_re Undocumented
Variable ignore_if_empty Undocumented
Variable ignored_tokens Undocumented
Variable integer_re Undocumented
Variable newline_re Undocumented
Variable operator_re Undocumented
Variable operators Undocumented
Variable reverse_operators Undocumented
Variable string_re Undocumented
Variable whitespace_re Undocumented
Class _Rule Undocumented
Class Failure Class that raises a TemplateSyntaxError if called. Used by the Lexer to specify known errors.
Class Lexer Class that implements a lexer for a given environment. It is created automatically by the environment class; you usually do not have to create it yourself.
Class OptionalLStrip A special tuple for marking a point in the state that can have lstrip applied.
Class TokenStreamIterator The iterator for token streams. Iterates over the stream until the eof token is reached.
Function _describe_token_type Undocumented
Function compile_rules Compiles all the rules from the environment into a list of rules.
Function describe_token Returns a description of the token.
Function describe_token_expr Like describe_token but for token expressions.
Function get_lexer Return a lexer which is probably cached.
Variable _lexer_cache Undocumented
def count_newlines(value):
Count the number of newline characters in the string. This is useful for extensions that filter a stream.
Parameters
value: str (undocumented)
Returns
int (undocumented)
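The counting can be sketched in a couple of lines. The `newline_re` pattern shown here is an assumption matching the common `\r\n`/`\r`/`\n` convention; it is not necessarily the exact pattern this module defines.

```python
import re

# Assumed pattern: "\r\n" is matched first so it counts as one newline,
# then lone "\r" or "\n".
newline_re = re.compile(r"(\r\n|\r|\n)")

def count_newlines(value: str) -> int:
    """Count newline sequences in the string."""
    return len(newline_re.findall(value))

count_newlines("a\r\nb\nc")  # 2
```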
All TOKEN_* constants are undocumented interned strings:

TOKEN_ADD = intern('add')
TOKEN_ASSIGN = intern('assign')
TOKEN_BLOCK_BEGIN = intern('block_begin')
TOKEN_BLOCK_END = intern('block_end')
TOKEN_COLON = intern('colon')
TOKEN_COMMA = intern('comma')
TOKEN_COMMENT = intern('comment')
TOKEN_COMMENT_BEGIN = intern('comment_begin')
TOKEN_COMMENT_END = intern('comment_end')
TOKEN_DATA = intern('data')
TOKEN_DIV = intern('div')
TOKEN_DOT = intern('dot')
TOKEN_EOF = intern('eof')
TOKEN_EQ = intern('eq')
TOKEN_FLOAT = intern('float')
TOKEN_FLOORDIV = intern('floordiv')
TOKEN_GT = intern('gt')
TOKEN_GTEQ = intern('gteq')
TOKEN_INITIAL = intern('initial')
TOKEN_INTEGER = intern('integer')
TOKEN_LBRACE = intern('lbrace')
TOKEN_LBRACKET = intern('lbracket')
TOKEN_LINECOMMENT = intern('linecomment')
TOKEN_LINECOMMENT_BEGIN = intern('linecomment_begin')
TOKEN_LINECOMMENT_END = intern('linecomment_end')
TOKEN_LINESTATEMENT_BEGIN = intern('linestatement_begin')
TOKEN_LINESTATEMENT_END = intern('linestatement_end')
TOKEN_LPAREN = intern('lparen')
TOKEN_LT = intern('lt')
TOKEN_LTEQ = intern('lteq')
TOKEN_MOD = intern('mod')
TOKEN_MUL = intern('mul')
TOKEN_NAME = intern('name')
TOKEN_NE = intern('ne')
TOKEN_OPERATOR = intern('operator')
TOKEN_PIPE = intern('pipe')
TOKEN_POW = intern('pow')
TOKEN_RAW_BEGIN = intern('raw_begin')
TOKEN_RAW_END = intern('raw_end')
TOKEN_RBRACE = intern('rbrace')
TOKEN_RBRACKET = intern('rbracket')
TOKEN_RPAREN = intern('rparen')
TOKEN_SEMICOLON = intern('semicolon')
TOKEN_STRING = intern('string')
TOKEN_SUB = intern('sub')
TOKEN_TILDE = intern('tilde')
TOKEN_VARIABLE_BEGIN = intern('variable_begin')
TOKEN_VARIABLE_END = intern('variable_end')
TOKEN_WHITESPACE = intern('whitespace')
Module-level variables (values not shown; all undocumented):

float_re, ignore_if_empty, ignored_tokens, integer_re, newline_re, operator_re, operators, reverse_operators, string_re, whitespace_re
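The operator tables above relate to each other in a straightforward way. The following sketch shows one plausible construction, using a partial table whose token type names mirror the TOKEN_* constants in this module; the actual table contents and patterns may differ.

```python
import re

# Partial sketch of the operator table: source text -> token type name.
operators = {
    "+": "add", "-": "sub", "/": "div", "//": "floordiv",
    "*": "mul", "%": "mod", "**": "pow", "~": "tilde",
    "==": "eq", "!=": "ne", ">": "gt", ">=": "gteq",
    "<": "lt", "<=": "lteq", "=": "assign", ".": "dot",
    ":": "colon", "|": "pipe", ",": "comma", ";": "semicolon",
}

# Reverse mapping: token type name -> source text.
reverse_operators = {token_type: op for op, token_type in operators.items()}

# A single alternation regex that tries longer operators first,
# so "**" is matched before "*" and ">=" before ">".
operator_re = re.compile(
    "|".join(re.escape(op) for op in sorted(operators, key=len, reverse=True))
)

operator_re.match("**kwargs").group()  # '**'
```

Sorting by descending length before joining the alternation is the important detail: regex alternation is first-match, so without it `*` would shadow `**`.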

def _describe_token_type(token_type):

Undocumented

Parameters
token_type: str (undocumented)
Returns
str (undocumented)
def compile_rules(environment):
Compiles all the rules from the environment into a list of rules.
Parameters
environment: Environment (undocumented)
Returns
t.List[t.Tuple[str, str]] (undocumented)
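A hedged sketch of the sorting-and-escaping idea behind this kind of rule compilation. The `delimiters` argument and its shape are illustrative; the real function reads the configured delimiters from the Environment.

```python
import re

def compile_rules(delimiters):
    """Illustrative only: turn a mapping of token name -> start delimiter
    into (escaped-regex, token-name) pairs, longest delimiter first, so
    that a longer delimiter is tried before any shorter prefix of it."""
    ordered = sorted(
        delimiters.items(), key=lambda item: len(item[1]), reverse=True
    )
    return [(re.escape(start), name) for name, start in ordered]

compile_rules({"block_begin": "{%", "variable_begin": "{{", "comment_begin": "{#"})
```

Escaping with `re.escape` matters because template delimiters such as `{{` contain regex metacharacters.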
def describe_token(token):
Returns a description of the token.
Parameters
token: Token (undocumented)
Returns
str (undocumented)
def describe_token_expr(expr):
Like describe_token but for token expressions.
Parameters
expr: str (undocumented)
Returns
str (undocumented)
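A token expression is either a bare type ("integer") or a "type:value" pair ("name:for"). The following is a simplified reimplementation sketching how such an expression can be described; it is not guaranteed to match this module's output for every token type.

```python
def describe_token_expr(expr: str) -> str:
    """Simplified sketch: "name:for" describes the name token "for",
    while a bare type such as "integer" describes itself."""
    if ":" in expr:
        token_type, value = expr.split(":", 1)
        if token_type == "name":
            return value
    else:
        token_type = expr
    # The real helper additionally maps types like "block_begin"
    # to their configured delimiter text.
    return token_type

describe_token_expr("name:for")  # 'for'
```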
def get_lexer(environment):
Return a lexer which is probably cached.
Parameters
environment: Environment (undocumented)
Returns
Lexer (undocumented)
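The "probably cached" behavior can be sketched with a weak-value cache keyed by the settings that affect lexing, matching the _lexer_cache type shown below. The stand-in Lexer class and the tuple key here are assumptions for illustration.

```python
import typing as t
import weakref

class Lexer:
    """Stand-in for the real Lexer class (illustrative only)."""
    def __init__(self, key: t.Tuple) -> None:
        self.key = key

# Weak values: a cached lexer disappears once nothing else references it,
# so the cache never keeps lexers alive on its own.
_lexer_cache: t.MutableMapping[t.Tuple, Lexer] = weakref.WeakValueDictionary()

def get_lexer(key: t.Tuple) -> Lexer:
    """Return the cached lexer for these settings, creating it on a miss."""
    lexer = _lexer_cache.get(key)
    if lexer is None:
        lexer = Lexer(key)
        _lexer_cache[key] = lexer
    return lexer

first = get_lexer(("{%", "%}"))
second = get_lexer(("{%", "%}"))
first is second  # True: identical settings share one lexer
```

Keying by settings rather than by environment identity means two environments configured identically share a single compiled lexer, which avoids recompiling the delimiter regexes.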
_lexer_cache: t.MutableMapping[t.Tuple, Lexer] =
Undocumented