class SnowballLexer(ExtendedRegexLexer):

Lexer for Snowball source code.

New in version 2.2.
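A brief usage sketch (assuming Pygments 2.2 or later is installed; the Snowball snippet below is invented for illustration):

```python
# Tokenize a small Snowball fragment with SnowballLexer.
# Requires Pygments >= 2.2, the release in which this lexer was added.
from pygments.lexers import SnowballLexer

code = "define stem as ( delete )"
lexer = SnowballLexer()

# get_tokens() yields (tokentype, value) pairs covering the whole input.
for tokentype, value in lexer.get_tokens(code):
    print(tokentype, repr(value))
```

Concatenating the `value` parts reproduces the input text (plus a trailing newline, since `ensurenl` defaults to True).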
Method __init__ Undocumented
Method _reset_stringescapes Undocumented
Method _string Undocumented
Method _stringescapes Undocumented
Method get_tokens_unprocessed Split text into (tokentype, text) pairs. If context is given, use this lexer context instead.
Class Variable _ws Undocumented
Class Variable aliases Undocumented
Class Variable filenames Undocumented
Class Variable name Undocumented
Class Variable tokens Undocumented
Instance Variable _end Undocumented
Instance Variable _start Undocumented

Inherited from Lexer (via ExtendedRegexLexer, RegexLexer):

Method analyse​_text No summary
Method get_tokens Return an iterable of (tokentype, value) pairs generated from text. If unfiltered is set to True, the filtering mechanism is bypassed even if filters are defined.
Class Variable alias_filenames Undocumented
Class Variable mimetypes Undocumented
Method __repr__ Undocumented
Method add_filter Add a new stream filter to this lexer.
Class Variable priority Undocumented
Instance Variable encoding Undocumented
Instance Variable ensurenl Undocumented
Instance Variable filters Undocumented
Instance Variable options Undocumented
Instance Variable stripall Undocumented
Instance Variable stripnl Undocumented
Instance Variable tabsize Undocumented
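As a rough illustration of the inherited add_filter API (again assuming Pygments is installed): a filter can be attached by instance or by registered name, and it then transforms the token stream produced by get_tokens.

```python
from pygments.lexers import SnowballLexer

lexer = SnowballLexer()

# add_filter accepts a filter instance or a registered filter name with
# options; 'keywordcase' is one of Pygments' built-in filters.
lexer.add_filter('keywordcase', case='upper')

for tokentype, value in lexer.get_tokens("define stem as ( delete )"):
    print(tokentype, repr(value))
```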
def __init__(self, **options):

Undocumented

def _reset_stringescapes(self):

Undocumented

def _string(do_string_first):

Undocumented

def _stringescapes(lexer, match, ctx):

Undocumented

def get_tokens_unprocessed(self, text=None, context=None):
Split text into (tokentype, text) pairs. If context is given, use this lexer context instead.
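get_tokens_unprocessed is the core tokenizing loop of a RegexLexer-style lexer. A simplified, stdlib-only sketch of the kind of (tokentype, text) stream such a loop produces follows; this is NOT Pygments' actual implementation, and the rule table and token names are invented for illustration:

```python
import re

# Invented rule table: each entry pairs a compiled regex with a token name.
RULES = [
    (re.compile(r'\s+'), 'Whitespace'),
    (re.compile(r'[A-Za-z_]\w*'), 'Name'),
    (re.compile(r'\d+'), 'Number'),
    (re.compile(r'.'), 'Error'),  # fallback: consume one character
]

def get_tokens_unprocessed(text):
    """Yield (tokentype, text) pairs by matching rules at the current position."""
    pos = 0
    while pos < len(text):
        for pattern, tokentype in RULES:
            m = pattern.match(text, pos)
            if m:
                yield tokentype, m.group()
                pos = m.end()
                break
```

The real method also threads a lexer context object through the loop (the optional `context` parameter), which the sketch omits.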

_ws: str =

Undocumented

aliases: list[str] =

Undocumented

filenames: list[str] =

Undocumented

name: str =

Undocumented

tokens =

Undocumented

_end: str =

Undocumented

_start: str =

Undocumented