Module werkzeug.urls

Functions for working with URLs.

Contains implementations of functions from urllib.parse that handle bytes and strings.

Class BaseURL Superclass of URL and BytesURL.
Class BytesURL Represents a parsed URL in bytes.
Class Href No summary
Class URL Represents a parsed URL. This behaves like a regular tuple but also has some extra attributes that give further insight into the URL.
Function iri_to_uri Convert an IRI to a URI. All non-ASCII and unsafe characters are quoted. If the URL has a domain, it is encoded to Punycode.
Function uri_to_iri Convert a URI to an IRI. All valid UTF-8 characters are unquoted, leaving all reserved and invalid characters quoted. If the URL has a domain, it is decoded from Punycode.
Function url_decode Parse a query string and return it as a MultiDict.
Function url_decode_stream Works like url_decode but decodes a stream.
Function url_encode URL encode a dict/MultiDict. If a value is None it will not appear in the result string. By default, only values are encoded into the target charset.
Function url_encode_stream Like url_encode but writes the results to a stream object. If the stream is None a generator over all encoded pairs is returned.
Function url_fix Fix a user-supplied URL that contains unsafe characters, similar to how browsers handle user-entered data.
Function url_join Join a base URL and a possibly relative URL to form an absolute interpretation of the latter.
Function url_parse Parse a URL from a string into a URL tuple.
Function url_quote URL encode a single string with a given encoding.
Function url_quote_plus URL encode a single string with the given encoding and convert whitespace to "+".
Function url_unparse The reverse operation to url_parse. This accepts arbitrary tuples as well as URL tuples and returns a URL as a string.
Function url_unquote URL decode a single string with a given encoding. If the charset is set to None no decoding is performed and raw bytes are returned.
Function url_unquote_plus URL decode a single string with the given charset and decode "+" to whitespace.
Class _URLTuple Undocumented
Function _codec_error_url_quote Used in uri_to_iri after unquoting to re-quote any invalid bytes.
Function _fast_url_quote_plus Undocumented
Function _make_fast_url_quote Precompile the translation table for a URL encoding function.
Function _unquote_to_bytes Undocumented
Function _url_decode_impl Undocumented
Function _url_encode_impl Undocumented
Function _url_unquote_legacy Undocumented
Variable _always_safe Undocumented
Variable _bytetohex Undocumented
Variable _fast_quote_plus Undocumented
Variable _fast_url_quote Undocumented
Variable _hexdigits Undocumented
Variable _hextobyte Undocumented
Variable _scheme_re Undocumented
Variable _to_iri_unsafe Undocumented
Variable _to_uri_safe Undocumented
Variable _unquote_maps Undocumented
def iri_to_uri(iri, charset='utf-8', errors='strict', safe_conversion=False):

Convert an IRI to a URI. All non-ASCII and unsafe characters are quoted. If the URL has a domain, it is encoded to Punycode.

>>> iri_to_uri('http://\u2603.net/p\xe5th?q=\xe8ry%DF')
'http://xn--n3h.net/p%C3%A5th?q=%C3%A8ry%DF'

There is a general problem with IRI conversion with some protocols that are in violation of the URI specification. Consider the following two IRIs:

magnet:?xt=uri:whatever
itms-services://?action=download-manifest

After parsing, we don't know if the scheme requires the //, which is dropped if empty, but conveys different meanings in the final URL if it's present or not. In this case, you can use safe_conversion, which will return the URL unchanged if it only contains ASCII characters and no whitespace. This can result in a URI with unquoted characters if it was not already quoted correctly, but preserves the URL's semantics. Werkzeug uses this for the Location header for redirects.

Changed in version 0.15: All reserved characters remain unquoted. Previously, only some reserved characters were left unquoted.
Changed in version 0.9.6: The safe_conversion parameter was added.
New in version 0.6.
Parameters
iri (t.Union[str, t.Tuple[str, str, str, str, str]]): The IRI to convert.
charset (str): The encoding of the IRI.
errors (str): Error handler to use during bytes.encode.
safe_conversion (bool): Return the URL unchanged if it only contains ASCII characters and no whitespace. See the explanation above.
Returns
str (undocumented)
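The conversion above can be sketched with the standard library alone. This is a simplified, hypothetical re-implementation for illustration only (the name `iri_to_uri_sketch` is made up, and the real function also handles userinfo, ports, and `safe_conversion`):

```python
from urllib.parse import quote, urlsplit, urlunsplit

def iri_to_uri_sketch(iri: str) -> str:
    """Quote non-ASCII path/query characters and Punycode the domain."""
    parts = urlsplit(iri)
    # Encode each non-ASCII domain label to Punycode (IDNA "xn--" form).
    netloc = ".".join(
        label if label.isascii() else "xn--" + label.encode("punycode").decode("ascii")
        for label in parts.netloc.split(".")
    )
    # Percent-encode non-ASCII bytes; keeping "%" safe preserves existing escapes.
    path = quote(parts.path, safe="/%")
    query = quote(parts.query, safe="%=&")
    return urlunsplit((parts.scheme, netloc, path, query, parts.fragment))

print(iri_to_uri_sketch("http://\u2603.net/p\xe5th?q=\xe8ry%DF"))
# http://xn--n3h.net/p%C3%A5th?q=%C3%A8ry%DF
```

On the docstring's own example, this sketch produces the same result as the documented output.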
def uri_to_iri(uri, charset='utf-8', errors='werkzeug.url_quote'):

Convert a URI to an IRI. All valid UTF-8 characters are unquoted, leaving all reserved and invalid characters quoted. If the URL has a domain, it is decoded from Punycode.

>>> uri_to_iri("http://xn--n3h.net/p%C3%A5th?q=%C3%A8ry%DF")
'http://\u2603.net/p\xe5th?q=\xe8ry%DF'
Changed in version 0.15: All reserved and invalid characters remain quoted. Previously, only some reserved characters were preserved, and invalid bytes were replaced instead of left quoted.
New in version 0.6.
Parameters
uri (t.Union[str, t.Tuple[str, str, str, str, str]]): The URI to convert.
charset (str): The encoding to decode unquoted bytes with.
errors (str): Error handler to use during bytes.decode. By default, invalid bytes are left quoted.
Returns
str (undocumented)
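A minimal sketch of the reverse direction, again using only the standard library. The name `uri_to_iri_sketch` is hypothetical; unlike the real function, it uses strict decoding and would raise on invalid UTF-8 escapes rather than leaving them quoted:

```python
from urllib.parse import unquote, urlsplit, urlunsplit

def uri_to_iri_sketch(uri: str) -> str:
    """Unquote valid UTF-8 escapes and decode a Punycode domain."""
    parts = urlsplit(uri)
    # Decode "xn--" labels back from Punycode.
    netloc = ".".join(
        label[4:].encode("ascii").decode("punycode") if label.startswith("xn--") else label
        for label in parts.netloc.split(".")
    )
    # "strict" raises on invalid UTF-8; the real function instead
    # re-quotes such bytes via a custom codec error handler.
    path = unquote(parts.path, errors="strict")
    query = unquote(parts.query, errors="strict")
    return urlunsplit((parts.scheme, netloc, path, query, parts.fragment))

print(uri_to_iri_sketch("http://xn--n3h.net/p%C3%A5th"))  # http://☃.net/påth
```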
def url_decode(s, charset='utf-8', decode_keys=None, include_empty=True, errors='replace', separator='&', cls=None):

Parse a query string and return it as a MultiDict.

Changed in version 2.0: The decode_keys parameter is deprecated and will be removed in Werkzeug 2.1.
Changed in version 0.5: In previous versions ";" and "&" could be used for url decoding. Now only "&" is supported. If you want to use ";", a different separator can be provided.
Changed in version 0.5: The cls parameter was added.
Parameters
s (t.AnyStr): The query string to parse.
charset (str): Decode bytes to string with this charset. If not given, bytes are returned as-is.
decode_keys (None): Undocumented.
include_empty (bool): Include keys with empty values in the dict.
errors (str): Error handling behavior when decoding bytes.
separator (str): Separator character between pairs.
cls (t.Optional[t.Type[ds.MultiDict]]): Container to hold result instead of MultiDict.
Returns
ds.MultiDict[str, str] (undocumented)
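The same parsing can be demonstrated with the standard library's parse_qsl; here a plain dict of lists stands in for Werkzeug's MultiDict:

```python
from urllib.parse import parse_qsl

# parse_qsl yields ordered (key, value) pairs;
# keep_blank_values mirrors the include_empty flag.
pairs = parse_qsl("a=1&a=2&b=", keep_blank_values=True)
print(pairs)  # [('a', '1'), ('a', '2'), ('b', '')]

# A dict of lists stands in for a MultiDict: repeated keys keep all values.
multi = {}
for key, value in pairs:
    multi.setdefault(key, []).append(value)
print(multi)  # {'a': ['1', '2'], 'b': ['']}
```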
def url_decode_stream(stream, charset='utf-8', decode_keys=None, include_empty=True, errors='replace', separator=b'&', cls=None, limit=None, return_iterator=False):

Works like url_decode but decodes a stream. The behavior of stream and limit follows functions like werkzeug.wsgi.make_line_iter. The generator of pairs is fed directly to cls, so you can consume the data while it is being parsed.

Changed in version 2.0: The decode_keys and return_iterator parameters are deprecated and will be removed in Werkzeug 2.1.
New in version 0.8.
Parameters
stream (t.IO[bytes]): A stream with the encoded query string.
charset (str): The charset of the query string. If set to None, no decoding will take place.
decode_keys (None): Undocumented.
include_empty (bool): Set to False if you don't want empty values to appear in the dict.
errors (str): The decoding error behavior.
separator (bytes): The pair separator to be used; defaults to &.
cls (t.Optional[t.Type[ds.MultiDict]]): An optional dict class to use. If this is not specified or None, the default MultiDict is used.
limit (t.Optional[int]): The content length of the URL data. Not necessary if a limited stream is provided.
return_iterator (bool): Undocumented.
Returns
ds.MultiDict[str, str] (undocumented)
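The streaming part can be sketched as a generator that splits on the separator without reading the whole stream into memory. `iter_pairs` is a hypothetical helper, not part of the library:

```python
import io

def iter_pairs(stream, separator=b"&", chunk_size=1024):
    """Yield raw key=value chunks from a byte stream, one pair at a time."""
    buf = b""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        buf += chunk
        # Everything before the last separator is complete; keep the tail.
        *complete, buf = buf.split(separator)
        yield from complete
    if buf:
        yield buf

print(list(iter_pairs(io.BytesIO(b"a=1&b=2&c=3"), chunk_size=4)))
# [b'a=1', b'b=2', b'c=3']
```

Feeding such a generator straight into the container class is what lets the data be consumed while it is parsed.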
def url_encode(obj, charset='utf-8', encode_keys=None, sort=False, key=None, separator='&'):

URL encode a dict/MultiDict. If a value is None it will not appear in the result string. By default, only values are encoded into the target charset.

Changed in version 2.0: The encode_keys parameter is deprecated and will be removed in Werkzeug 2.1.
Changed in version 0.5: Added the sort, key, and separator parameters.
Parameters
obj (t.Union[t.Mapping[str, str], t.Iterable[t.Tuple[str, str]]]): The object to encode into a query string.
charset (str): The charset of the query string.
encode_keys (None): Undocumented.
sort (bool): Set to True if you want parameters to be sorted by key.
key (t.Optional[t.Callable[[t.Tuple[str, str]], t.Any]]): An optional function to be used for sorting. For more details, check out the sorted documentation.
separator (str): The separator to be used for the pairs.
Returns
str (undocumented)
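The None-skipping and sorting behavior can be approximated with the standard library's urlencode. `encode_query` is a made-up name for this sketch:

```python
from urllib.parse import urlencode

def encode_query(obj, sort=False, key=None, separator="&"):
    """Encode a mapping, skipping None values, optionally sorted by key."""
    items = [(k, v) for k, v in obj.items() if v is not None]
    if sort:
        items.sort(key=key)
    # urlencode quotes each pair; join with the chosen separator.
    return separator.join(urlencode([pair]) for pair in items)

print(encode_query({"b": "2", "a": "1", "c": None}, sort=True))  # a=1&b=2
print(encode_query({"a": "x y"}))  # a=x+y
```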
def url_encode_stream(obj, stream=None, charset='utf-8', encode_keys=None, sort=False, key=None, separator='&'):

Like url_encode but writes the results to a stream object. If the stream is None a generator over all encoded pairs is returned.

Changed in version 2.0: The encode_keys parameter is deprecated and will be removed in Werkzeug 2.1.
New in version 0.8.
Parameters
obj (t.Union[t.Mapping[str, str], t.Iterable[t.Tuple[str, str]]]): The object to encode into a query string.
stream (t.Optional[t.IO[str]]): A stream to write the encoded object into, or None if an iterator over the encoded pairs should be returned. In that case the separator argument is ignored.
charset (str): The charset of the query string.
encode_keys (None): Undocumented.
sort (bool): Set to True if you want parameters to be sorted by key.
key (t.Optional[t.Callable[[t.Tuple[str, str]], t.Any]]): An optional function to be used for sorting. For more details, check out the sorted documentation.
separator (str): The separator to be used for the pairs.
def url_fix(s, charset='utf-8'):

Sometimes a URL supplied by a user isn't a real URL because it contains unsafe characters such as ' '. This function can fix some of those problems in a way similar to how browsers handle data entered by the user:

>>> url_fix('http://de.wikipedia.org/wiki/Elf (Begriffskl\xe4rung)')
'http://de.wikipedia.org/wiki/Elf%20(Begriffskl%C3%A4rung)'
Parameters
s (str): The string with the URL to fix.
charset (str): The target charset for the URL if the URL was given as a string.
Returns
str (undocumented)
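A minimal sketch of this repair, assuming the fix amounts to percent-encoding unsafe bytes in the path and query while preserving URL structure. `fix_url_sketch` is hypothetical and narrower than the real function:

```python
from urllib.parse import quote, urlsplit, urlunsplit

def fix_url_sketch(s: str) -> str:
    """Percent-encode unsafe characters while leaving URL structure intact."""
    parts = urlsplit(s.strip())
    # Keep "/" and sub-delimiters; "%" stays safe so prior escapes survive.
    path = quote(parts.path, safe="/%!$&'()*+,;=:@")
    query = quote(parts.query, safe="%&=!$'()*+,;:@/?")
    return urlunsplit((parts.scheme, parts.netloc, path, query, parts.fragment))

print(fix_url_sketch("http://de.wikipedia.org/wiki/Elf (Begriffskl\xe4rung)"))
# http://de.wikipedia.org/wiki/Elf%20(Begriffskl%C3%A4rung)
```

On the docstring's example, the sketch reproduces the documented output.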
def url_join(base, url, allow_fragments=True):
Join a base URL and a possibly relative URL to form an absolute interpretation of the latter.
Parameters
base (t.Union[str, t.Tuple[str, str, str, str, str]]): The base URL for the join operation.
url (t.Union[str, t.Tuple[str, str, str, str, str]]): The URL to join.
allow_fragments (bool): Indicates whether fragments should be allowed.
Returns
str (undocumented)
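The standard library's urljoin shows the same RFC 3986 resolution rules for relative references:

```python
from urllib.parse import urljoin

# Relative paths are resolved against the base's directory.
print(urljoin("http://example.com/a/b", "../c"))  # http://example.com/c
# A leading "/" replaces the whole base path.
print(urljoin("http://example.com/a/b", "/d"))    # http://example.com/d
# An absolute URL replaces the base entirely.
print(urljoin("http://example.com/a/b", "http://other.net/"))  # http://other.net/
```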
def url_parse(url, scheme=None, allow_fragments=True):

Parses a URL from a string into a URL tuple. If the URL is lacking a scheme, it can be provided as the second argument; otherwise that argument is ignored. Optionally, fragments can be stripped from the URL by setting allow_fragments to False.

The inverse of this function is url_unparse.

Parameters
url (str): The URL to parse.
scheme (t.Optional[str]): The default scheme to use if the URL has no scheme.
allow_fragments (bool): If set to False, a fragment will be removed from the URL.
Returns
BaseURL (undocumented)
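The standard library's urlsplit illustrates the same five-part decomposition that the returned URL tuple represents:

```python
from urllib.parse import urlsplit

# urlsplit returns a 5-tuple (scheme, netloc, path, query, fragment),
# analogous to the URL tuple described here.
parts = urlsplit("http://example.com:8080/path?x=1#top")
print(parts.scheme, parts.netloc, parts.path, parts.query, parts.fragment)
# http example.com:8080 /path x=1 top

# Derived attributes give further insight into the netloc.
print(parts.hostname, parts.port)  # example.com 8080
```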
def url_quote(string, charset='utf-8', errors='strict', safe='/:', unsafe=''):

URL encode a single string with a given encoding.

New in version 0.9.2: The unsafe parameter was added.
Parameters
string (t.Union[str, bytes]): The string to quote.
charset (str): The charset to be used.
errors (str): Undocumented.
safe (t.Union[str, bytes]): An optional sequence of safe characters.
unsafe (t.Union[str, bytes]): An optional sequence of unsafe characters.
Returns
str (undocumented)
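The effect of the safe set can be seen with the standard library's quote, which behaves analogously:

```python
from urllib.parse import quote

# Characters in the safe set pass through; others are percent-encoded.
print(quote("/path with spaces", safe="/:"))  # /path%20with%20spaces
# With an empty safe set, even reserved characters are encoded.
print(quote("a&b=c", safe=""))  # a%26b%3Dc
```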
def url_quote_plus(string, charset='utf-8', errors='strict', safe=''):
URL encode a single string with the given encoding and convert whitespace to "+".
Parameters
string (str): The string to quote.
charset (str): The charset to be used.
errors (str): Undocumented.
safe (str): An optional sequence of safe characters.
Returns
str (undocumented)
def url_unparse(components):
The reverse operation to url_parse. This accepts arbitrary tuples as well as URL tuples and returns a URL as a string.
Parameters
components (t.Tuple[str, str, str, str, str]): The parsed URL as a tuple which should be converted into a URL string.
Returns
str (undocumented)
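The standard library's urlunsplit performs the same reassembly from a five-tuple, and the operation round-trips with splitting:

```python
from urllib.parse import urlsplit, urlunsplit

components = ("https", "example.com", "/p", "q=1", "frag")
url = urlunsplit(components)
print(url)  # https://example.com/p?q=1#frag

# Round trip: splitting the result yields the original components.
print(urlsplit(url) == components)  # True
```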
def url_unquote(s, charset='utf-8', errors='replace', unsafe=''):
URL decode a single string with a given encoding. If the charset is set to None no decoding is performed and raw bytes are returned.
Parameters
s (t.Union[str, bytes]): The string to unquote.
charset (str): The charset of the query string. If set to None, no decoding will take place.
errors (str): The error handling for the charset decoding.
unsafe (str): Undocumented.
Returns
str (undocumented)
def url_unquote_plus(s, charset='utf-8', errors='replace'):

URL decode a single string with the given charset and decode "+" to whitespace.

By default, decoding errors are replaced (errors='replace'). If you want a different behavior, you can set errors to 'ignore' or 'strict'.

Parameters
s (t.Union[str, bytes]): The string to unquote.
charset (str): The charset of the query string. If set to None, no decoding will take place.
errors (str): The error handling for the charset decoding.
Returns
str (undocumented)
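The "+"-to-whitespace behavior is the only difference from plain unquoting, as the standard library's equivalents show:

```python
from urllib.parse import unquote, unquote_plus

# "+" becomes a space only in the _plus variant; %20 decodes in both.
print(unquote_plus("a+b%20c"))  # a b c
print(unquote("a+b%20c"))       # a+b c
```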
def _codec_error_url_quote(e):
Used in uri_to_iri after unquoting to re-quote any invalid bytes.
Parameters
e (UnicodeError): Undocumented.
Returns
t.Tuple[str, int] (undocumented)
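Re-quoting invalid bytes is done through Python's codec error-handler mechanism (the uri_to_iri signature above registers its handler under the name "werkzeug.url_quote"). A minimal hypothetical equivalent, registered here under the made-up name "requote_demo":

```python
import codecs

def requote_invalid(exc):
    """Replace undecodable bytes with their percent-encoded form."""
    if not isinstance(exc, UnicodeDecodeError):
        raise exc
    bad = exc.object[exc.start:exc.end]
    # Return the replacement text and the position to resume decoding at.
    return "".join(f"%{byte:02X}" for byte in bad), exc.end

codecs.register_error("requote_demo", requote_invalid)

# 0xE8 and 0xDF are not valid UTF-8 here, so they stay percent-quoted.
print(b"\xe8ry\xdf".decode("utf-8", "requote_demo"))  # %E8ry%DF
```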
def _fast_url_quote_plus(string):

Undocumented

Parameters
string (bytes): Undocumented.
Returns
str (undocumented)
def _make_fast_url_quote(charset='utf-8', errors='strict', safe='/:', unsafe=''):

Precompile the translation table for a URL encoding function.

Unlike url_quote, the generated function only takes the string to quote.

Parameters
charset (str): The charset to encode the result with.
errors (str): How to handle encoding errors.
safe (t.Union[str, bytes]): An optional sequence of safe characters to never encode.
unsafe (t.Union[str, bytes]): An optional sequence of unsafe characters to always encode.
Returns
t.Callable[[bytes], str] (undocumented)
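The precompilation idea can be sketched as follows. `make_quoter` is hypothetical, and the always-safe set used here (the RFC 3986 unreserved characters) is an assumption, not necessarily the library's exact set:

```python
def make_quoter(safe: str = "/:"):
    """Precompute a 256-entry byte -> text table, then quote via lookup."""
    # Assumed always-safe set: RFC 3986 unreserved characters.
    always_safe = (
        "abcdefghijklmnopqrstuvwxyz"
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "0123456789_.-~"
    )
    safe_bytes = (always_safe + safe).encode("ascii")
    table = [chr(b) if b in safe_bytes else f"%{b:02X}" for b in range(256)]

    def quoter(string: bytes) -> str:
        # Pure table lookups: no per-call charset or safe-set handling.
        return "".join(table[b] for b in string)

    return quoter

quote_fast = make_quoter()
print(quote_fast("p\xe5th".encode("utf-8")))  # p%C3%A5th
```

Building the table once per (charset, safe) combination is what makes the generated function faster than the general-purpose url_quote.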
def _unquote_to_bytes(string, unsafe=''):

Undocumented

Parameters
string (t.Union[str, bytes]): Undocumented.
unsafe (t.Union[str, bytes]): Undocumented.
Returns
bytes (undocumented)
def _url_decode_impl(pair_iter, charset, include_empty, errors):

Undocumented

Parameters
pair_iter (t.Iterable[t.AnyStr]): Undocumented.
charset (str): Undocumented.
include_empty (bool): Undocumented.
errors (str): Undocumented.
Returns
t.Iterator[t.Tuple[str, str]] (undocumented)
def _url_encode_impl(obj, charset, sort, key):

Undocumented

Parameters
obj (t.Union[t.Mapping[str, str], t.Iterable[t.Tuple[str, str]]]): Undocumented.
charset (str): Undocumented.
sort (bool): Undocumented.
key (t.Optional[t.Callable[[t.Tuple[str, str]], t.Any]]): Undocumented.
Returns
t.Iterator[str] (undocumented)
def _url_unquote_legacy(value, unsafe=''):

Undocumented

Parameters
value (str): Undocumented.
unsafe (str): Undocumented.
Returns
str (undocumented)
_always_safe (undocumented)
_bytetohex (undocumented)
_fast_quote_plus (undocumented)
_fast_url_quote (undocumented)
_hexdigits: str (undocumented)
_hextobyte (undocumented)
_scheme_re (undocumented)
_to_iri_unsafe (undocumented)
_to_uri_safe: str (undocumented)
_unquote_maps: t.Dict[t.FrozenSet[int], t.Dict[bytes, int]] (undocumented)