
class NumPyLexer(PythonLexer):

A Python lexer recognizing Numerical Python builtins.

New in version 0.10.
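As a brief usage sketch (assuming Pygments is installed), the lexer plugs into the standard `highlight` pipeline like any other Pygments lexer:

```python
from pygments import highlight
from pygments.formatters import TerminalFormatter
from pygments.lexers import NumPyLexer

code = "import numpy as np\nnp.allclose(np.zeros(3), 0)\n"

# NumPyLexer tokenizes like PythonLexer, but additionally marks
# NumPy builtins such as 'allclose' and 'zeros' as Keyword.Pseudo,
# so they pick up keyword styling in the formatted output.
result = highlight(code, NumPyLexer(), TerminalFormatter())
print(result)
```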
Constant EXTRA_KEYWORDS Undocumented
Method analyse_text No summary
Method get_tokens_unprocessed Split text into (index, tokentype, value) triples.
Class Variable aliases Undocumented
Class Variable filenames Undocumented
Class Variable mimetypes Undocumented
Class Variable name Undocumented

Inherited from PythonLexer:

Method fstring_rules Undocumented
Method innerstring_rules Undocumented
Class Variable flags Undocumented
Class Variable tokens Undocumented
Class Variable uni_name Undocumented

Inherited from Lexer (via PythonLexer, RegexLexer):

Method get_tokens Return an iterable of (tokentype, value) pairs generated from text. If unfiltered is set to True, the filtering mechanism is bypassed even if filters are defined.
Class Variable alias_filenames Undocumented
Method __init__ Undocumented
Method __repr__ Undocumented
Method add_filter Add a new stream filter to this lexer.
Class Variable priority Undocumented
Instance Variable encoding Undocumented
Instance Variable ensurenl Undocumented
Instance Variable filters Undocumented
Instance Variable options Undocumented
Instance Variable stripall Undocumented
Instance Variable stripnl Undocumented
Instance Variable tabsize Undocumented
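A sketch of the inherited filter machinery: `add_filter` accepts a registered filter name plus options ('keywordcase' and its `case` option are standard Pygments filters, not specific to this class), and `get_tokens(..., unfiltered=True)` bypasses any installed filters:

```python
from pygments.lexers import NumPyLexer

lexer = NumPyLexer()
# Filters post-process the token stream produced by get_tokens().
# 'keywordcase' changes the case of every token in the Keyword
# hierarchy, which includes the Keyword.Pseudo tokens NumPyLexer
# emits for NumPy builtins such as 'allclose'.
lexer.add_filter('keywordcase', case='upper')

tokens = list(lexer.get_tokens("allclose(a, b)"))

# Passing unfiltered=True skips the filter chain entirely.
raw = list(lexer.get_tokens("allclose(a, b)", unfiltered=True))
```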
EXTRA_KEYWORDS: set[str] =

Undocumented

Value
set(['abs',
     'absolute',
     'accumulate',
     'add',
     'alen',
     'all',
     'allclose',
...
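The constant drives `get_tokens_unprocessed` below; a small sketch of its effect, using `allclose` from the (truncated) value listing above:

```python
from pygments.lexers import NumPyLexer
from pygments.token import Keyword, Name

lexer = NumPyLexer()

# 'allclose' appears in EXTRA_KEYWORDS, so the plain Name token that
# PythonLexer would emit is retagged as Keyword.Pseudo; an ordinary
# identifier such as 'my_helper' stays a plain Name.
toks = list(lexer.get_tokens("allclose(x)\nmy_helper(x)\n"))
kinds = {value: ttype for ttype, value in toks}
```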
def analyse_text(text):

Has to return a float between 0 and 1 that indicates if a lexer wants to highlight this text. Used by guess_lexer. If this method returns 0, the lexer won't highlight the text in any case; if it returns 1, highlighting with this lexer is guaranteed.

The LexerMeta metaclass automatically wraps this function so that it works like a static method (no self or cls parameter) and the return value is automatically converted to float. If the return value is an object that is boolean False, it's the same as if the return value was 0.0.
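A sketch of the wrapped behavior: thanks to the LexerMeta wrapping, analyse_text is callable without an instance and always yields a float in [0, 1]:

```python
from pygments.lexers import NumPyLexer

# Text that imports numpy scores high; plain Python with no numpy
# import produces a falsy result, which the wrapper turns into 0.0.
score_numpy = NumPyLexer.analyse_text("import numpy as np\nnp.zeros(3)\n")
score_plain = NumPyLexer.analyse_text("print('hello')\n")
```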

def get_tokens_unprocessed(self, text):

Split text into (index, tokentype, value) triples.

stack is the initial stack (default: ['root'])
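A short sketch (note that in practice the method yields index-prefixed triples, the index giving each value's offset in the input text):

```python
from pygments.lexers import NumPyLexer

source = "zeros(3)"
triples = list(NumPyLexer().get_tokens_unprocessed(source))

# Each item is (start_index, tokentype, value); the index points back
# into the original text, so every value can be located in place.
for index, ttype, value in triples:
    assert source[index:index + len(value)] == value
```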

aliases: list[str] =
filenames: list =
mimetypes: list =
name: str =