package documentation

New in version 1.20.

Large parts of the NumPy API have PEP-484-style type annotations. In addition, a number of type aliases are available to users, most prominently the two below:

`ArrayLike`

: objects that can be converted to arrays

`DTypeLike`

: objects that can be converted to dtypes
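As a sketch, these aliases can annotate a small conversion helper (the function `as_array` is hypothetical, not part of NumPy):

```python
import numpy as np
import numpy.typing as npt


def as_array(a: npt.ArrayLike, dtype: npt.DTypeLike = None) -> np.ndarray:
    # ArrayLike covers anything convertible to an array (lists, scalars,
    # ndarrays); DTypeLike covers anything convertible to a dtype
    # (type objects, strings such as "float64", np.dtype instances).
    return np.asarray(a, dtype=dtype)


print(as_array([1, 2, 3], dtype="float64"))
```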

New in version 1.21.

NumPy is very flexible. Trying to describe the full range of possibilities statically would result in types that are not very helpful. For that reason, the typed NumPy API is often stricter than the runtime NumPy API. This section describes some notable differences.

The `ArrayLike` type tries to avoid creating object arrays. For example,

>>> np.array(x**2 for x in range(10))
array(<generator object <genexpr> at ...>, dtype=object)

is valid NumPy code which will create a 0-dimensional object
array. Type checkers will complain about the above example when using
the NumPy types however. If you really intended to do the above, then
you can either use a `# type: ignore` comment:

>>> np.array(x**2 for x in range(10)) # type: ignore

or explicitly type the array-like object as `~typing.Any`:

>>> from typing import Any
>>> array_like: Any = (x**2 for x in range(10))
>>> np.array(array_like)
array(<generator object <genexpr> at ...>, dtype=object)

It's possible to mutate the dtype of an array at runtime. For example, the following code is valid:

>>> x = np.array([1, 2])
>>> x.dtype = np.bool_

This sort of mutation is not allowed by the types. Users who want to write statically typed code should instead use the `numpy.ndarray.view` method to create a view of the array with a different dtype.
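A minimal sketch of the view-based alternative (the `uint8` target dtype here is chosen purely for illustration):

```python
import numpy as np

x = np.array([1, 2])

# Instead of assigning to x.dtype in place, create a view that
# reinterprets the same buffer under a different dtype; the original
# array is left untouched.
y = x.view(np.uint8)

print(y.dtype)  # uint8: one element per byte of the original buffer
```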

The `DTypeLike` type tries to avoid creation of dtype objects using a dictionary of fields like the one below:

>>> x = np.dtype({"field1": (float, 1), "field2": (int, 3)})

Although this is valid NumPy code, the type checker will complain about it, since its usage is discouraged. Please see :ref:`Data type objects <arrays.dtypes>`.
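A sketch of the encouraged alternative, a list of `(name, format)` tuples, which `DTypeLike` does accept:

```python
import numpy as np

# Structured dtypes are better specified as a list of (name, format)
# tuples; an optional third element gives a field's inner shape.
dt = np.dtype([("field1", np.float64), ("field2", np.int64, (3,))])

print(dt.names)  # ('field1', 'field2')
```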

The precision of `numpy.number` subclasses is treated as a covariant generic parameter (see `~NBitBase`), simplifying the annotating of processes involving precision-based casting.

>>> from typing import TypeVar
>>> import numpy as np
>>> import numpy.typing as npt

>>> T = TypeVar("T", bound=npt.NBitBase)

>>> def func(a: "np.floating[T]", b: "np.floating[T]") -> "np.floating[T]":
...     ...

Consequently, the likes of `~numpy.float16`, `~numpy.float32` and `~numpy.float64` are still sub-types of `~numpy.floating`, but, contrary to runtime, they're not necessarily considered as sub-classes.
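The runtime side of this distinction can be verified directly; a minimal sketch:

```python
import numpy as np

# At runtime, the concrete float types are genuine subclasses
# of np.floating, and instance checks work accordingly.
assert issubclass(np.float16, np.floating)
assert issubclass(np.float64, np.floating)
assert isinstance(np.float32(1.0), np.floating)

print("runtime subclass checks pass")
```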

The `~numpy.timedelta64` class is not considered a subclass of `~numpy.signedinteger`, the former only inheriting from `~numpy.generic` during static type checking.
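The runtime hierarchy differs, as this short sketch shows:

```python
import numpy as np

# At runtime, timedelta64 sits below signedinteger in NumPy's scalar
# type hierarchy, whereas the static stubs only derive it from generic.
assert issubclass(np.timedelta64, np.signedinteger)
assert issubclass(np.timedelta64, np.generic)

print("timedelta64 hierarchy checks pass")
```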

During runtime NumPy aggressively casts any passed 0D arrays into their corresponding `~numpy.generic` instance. Until the introduction of shape typing (see PEP 646) it is unfortunately not possible to make the necessary distinction between 0D and >0D arrays. While thus not strictly correct, all operations that can potentially perform a 0D-array -> scalar cast are currently annotated as exclusively returning an `ndarray`.

If it is known in advance that an operation _will_ perform a 0D-array -> scalar cast, then one can consider manually remedying the situation with either `typing.cast` or a `# type: ignore` comment.
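For instance, a reduction such as `np.sum` returns a scalar for a 1D input; `typing.cast` can express that statically without affecting runtime behavior (a sketch, not the only option):

```python
from typing import cast

import numpy as np

x = np.array([1.0, 2.0])

# The reduction yields a scalar at runtime; cast() tells the type
# checker so, and is a no-op when the code actually executes.
total = cast(np.float64, np.sum(x))

print(total)  # 3.0
```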

The dtype of `numpy.recarray`, and the `numpy.rec` functions in general, can be specified in one of two ways:

- Directly via the `dtype` argument.
- With up to five helper arguments that operate via `numpy.format_parser`: `formats`, `names`, `titles`, `aligned` and `byteorder`.

These two approaches are currently typed as being mutually exclusive,
*i.e.* if `dtype` is specified then one may not specify `formats`.
While this mutual exclusivity is not (strictly) enforced during runtime,
combining both dtype specifiers can lead to unexpected or even downright
buggy behavior.
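Both approaches can be sketched with `np.rec.fromarrays`, passing either `dtype` or the `formats`/`names` helpers, never both:

```python
import numpy as np

a = np.array([1, 2])
b = np.array([1.0, 2.0])

# Approach 1: pass a structured dtype directly.
r1 = np.rec.fromarrays([a, b], dtype=[("x", np.int64), ("y", np.float64)])

# Approach 2: use the format_parser-style helper arguments instead.
r2 = np.rec.fromarrays([a, b], formats=["i8", "f8"], names=["x", "y"])

print(r1.x, r2.y)
```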

| Kind | Name | Description |
|---|---|---|
| Module | `mypy_plugin` | A mypy plugin for managing a number of platform-specific annotations. Its functionality can be split into three distinct parts. |
| Module | `_add_docstring` | A module for creating docstrings for sphinx data domains. |
| Module | `_array_like` | Undocumented |
| Module | `_char_codes` | Undocumented |
| Module | `_dtype_like` | Undocumented |
| Module | `_extended_precision` | A module with platform-specific extended-precision `numpy.number` subclasses. |
| Module | `_generic_alias` | No module docstring; 0/3 variables, 0/2 constants, 3/3 functions, 1/1 class documented |
| Module | `_nbit` | A module with the precisions of platform-specific `~numpy.number` types. |
| Module | `_nested_sequence` | A module containing the `_NestedSequence` protocol. |
| Module | `_scalars` | Undocumented |
| Module | `_shape` | Undocumented |
| Module | `setup` | Undocumented |
| Package | `tests` | No package docstring; 1/4 modules documented |

From `__init__.py`:

| Kind | Name | Description |
|---|---|---|
| Class | `NBitBase` | A type representing `numpy.number` precision during static type checking. |
| Variable | `test` | Undocumented |
| Class | `_128Bit` | Undocumented |
| Class | `_16Bit` | Undocumented |
| Class | `_256Bit` | Undocumented |
| Class | `_32Bit` | Undocumented |
| Class | `_64Bit` | Undocumented |
| Class | `_80Bit` | Undocumented |
| Class | `_8Bit` | Undocumented |
| Class | `_96Bit` | Undocumented |