class VimLexer(RegexLexer):
Lexer for VimL script files.
| Kind | Name | Summary |
|---|---|---|
| Method | `__init__` | Undocumented |
| Method | `get_tokens_unprocessed` | Split text into (tokentype, text) pairs. |
| Method | `is_in` | No summary |
| Class Variable | `_python` | Undocumented |
| Class Variable | `aliases` | Undocumented |
| Class Variable | `filenames` | Undocumented |
| Class Variable | `mimetypes` | Undocumented |
| Class Variable | `name` | Undocumented |
| Class Variable | `tokens` | Undocumented |
| Instance Variable | `_aut` | Undocumented |
| Instance Variable | `_cmd` | Undocumented |
| Instance Variable | `_opt` | Undocumented |
Inherited from Lexer (via RegexLexer):
| Kind | Name | Summary |
|---|---|---|
| Method | `analyse_text` | No summary |
| Method | `get_tokens` | Return an iterable of (tokentype, value) pairs generated from text. If unfiltered is set to True, the filtering mechanism is bypassed even if filters are defined. (See the usage sketch after this table.) |
| Class Variable | `alias_filenames` | Undocumented |
| Method | `__repr__` | Undocumented |
| Method | `add_filter` | Add a new stream filter to this lexer. |
| Class Variable | `priority` | Undocumented |
| Instance Variable | `encoding` | Undocumented |
| Instance Variable | `ensurenl` | Undocumented |
| Instance Variable | `filters` | Undocumented |
| Instance Variable | `options` | Undocumented |
| Instance Variable | `stripall` | Undocumented |
| Instance Variable | `stripnl` | Undocumented |
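The inherited interface listed above is easiest to see in use. Below is a minimal sketch, assuming a standard Pygments installation; the sample Vim snippet and the keywordcase filter options are chosen only for illustration.

```python
# Hedged usage sketch of the inherited Lexer interface
# (get_tokens, add_filter) plus pygments.highlight.
from pygments import highlight
from pygments.formatters import TerminalFormatter
from pygments.lexers import VimLexer

code = "ab teh the\nset number\n"

lexer = VimLexer()
# add_filter() accepts a registered filter name plus its options.
lexer.add_filter('keywordcase', case='upper')

# get_tokens() yields (tokentype, value) pairs with filters applied.
for tokentype, value in lexer.get_tokens(code):
    print(tokentype, repr(value))

# highlight() runs a lexer and a formatter over the source in one call.
print(highlight(code, lexer, TerminalFormatter()))
```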
Method `get_tokens_unprocessed`:
Split text into (tokentype, text) pairs.
`stack` is the initial stack (default: `['root']`).
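A small sketch of calling get_tokens_unprocessed directly, without filtering. One assumption to note: in current Pygments versions the generator actually yields (position, tokentype, value) triples rather than bare pairs, and the loop below unpacks it that way.

```python
# Tokenize a tiny Vim snippet directly, assuming current Pygments
# behaviour of yielding (position, tokentype, value) triples.
from pygments.lexers import VimLexer

lexer = VimLexer()
for pos, tokentype, value in lexer.get_tokens_unprocessed("ab foo bar\n"):
    print(pos, tokentype, repr(value))
```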
Method `is_in`:
It's kind of difficult to decide whether something might be a keyword in VimL because it allows you to abbreviate them. In fact, 'ab[breviate]' is a good example: :ab, :abbre, and :abbreviate are all valid ways to call it. So rather than writing really awful regexps like

`\bab(?:b(?:r(?:e(?:v(?:i(?:a(?:t(?:e)?)?)?)?)?)?)?)?\b`

we match `\b\w+\b` and then call is_in() on those tokens. See scripts/get_vimkw.py for how the lists are extracted.
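To make the idea concrete, here is a simplified sketch of that abbreviation test. It is not the actual is_in() implementation, and it assumes each extracted list entry is a (minimal_form, full_name) pair such as ('ab', 'abbreviate'); the helper name looks_like_keyword is made up for illustration.

```python
# Simplified illustration of the abbreviation check described above,
# not the real is_in(). Assumes (minimal_form, full_name) entries.
def looks_like_keyword(word, keywords):
    for minimal, full in keywords:
        # 'word' counts as a keyword if it begins with the minimal
        # form and is itself a prefix of the full name, so 'ab',
        # 'abbre' and 'abbreviate' all match ('ab', 'abbreviate').
        if word.startswith(minimal) and full.startswith(word):
            return True
    return False

commands = [('ab', 'abbreviate'), ('se', 'set')]
print(looks_like_keyword('abbre', commands))  # True
print(looks_like_keyword('abx', commands))    # False
```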