Variable | array_function_dispatch | Undocumented
Function | _alen_dispathcer | Undocumented
Function | _all_dispatcher | Undocumented
Function | _amax_dispatcher | Undocumented
Function | _amin_dispatcher | Undocumented
Function | _any_dispatcher | Undocumented
Function | _argmax_dispatcher | Undocumented
Function | _argmin_dispatcher | Undocumented
Function | _argpartition_dispatcher | Undocumented
Function | _argsort_dispatcher | Undocumented
Function | _around_dispatcher | Undocumented
Function | _choose_dispatcher | Undocumented
Function | _clip_dispatcher | Undocumented
Function | _compress_dispatcher | Undocumented
Function | _cumprod_dispatcher | Undocumented
Function | _cumsum_dispatcher | Undocumented
Function | _diagonal_dispatcher | Undocumented
Function | _mean_dispatcher | Undocumented
Function | _ndim_dispatcher | Undocumented
Function | _nonzero_dispatcher | Undocumented
Function | _partition_dispatcher | Undocumented
Function | _prod_dispatcher | Undocumented
Function | _ptp_dispatcher | Undocumented
Function | _put_dispatcher | Undocumented
Function | _ravel_dispatcher | Undocumented
Function | _repeat_dispatcher | Undocumented
Function | _reshape_dispatcher | Undocumented
Function | _resize_dispatcher | Undocumented
Function | _searchsorted_dispatcher | Undocumented
Function | _shape_dispatcher | Undocumented
Function | _size_dispatcher | Undocumented
Function | _sort_dispatcher | Undocumented
Function | _squeeze_dispatcher | Undocumented
Function | _std_dispatcher | Undocumented
Function | _sum_dispatcher | Undocumented
Function | _swapaxes_dispatcher | Undocumented
Function | _take_dispatcher | Undocumented
Function | _trace_dispatcher | Undocumented
Function | _transpose_dispatcher | Undocumented
Function | _var_dispatcher | Undocumented
Function | _wrapfunc | Undocumented
Function | _wrapit | Undocumented
Function | _wrapreduction | Undocumented
Function | alen | Return the length of the first dimension of the input array.
Function | all | Test whether all array elements along a given axis evaluate to True.
Function | alltrue | Check if all elements of input array are true.
Function | amax | Return the maximum of an array or maximum along an axis.
Function | amin | Return the minimum of an array or minimum along an axis.
Function | any | Test whether any array element along a given axis evaluates to True.
Function | argmax | Returns the indices of the maximum values along an axis.
Function | argmin | Returns the indices of the minimum values along an axis.
Function | argpartition | Perform an indirect partition along the given axis.
Function | argsort | Returns the indices that would sort an array.
Function | around | Evenly round to the given number of decimals.
Function | choose | Construct an array from an index array and a list of arrays to choose from.
Function | clip | Clip (limit) the values in an array.
Function | compress | Return selected slices of an array along given axis.
Function | cumprod | Return the cumulative product of elements along a given axis.
Function | cumproduct | Return the cumulative product over the given axis.
Function | cumsum | Return the cumulative sum of the elements along a given axis.
Function | diagonal | Return specified diagonals.
Function | mean | Compute the arithmetic mean along the specified axis.
Function | ndim | Return the number of dimensions of an array.
Function | nonzero | Return the indices of the elements that are non-zero.
Function | partition | Return a partitioned copy of an array.
Function | prod | Return the product of array elements over a given axis.
Function | product | Return the product of array elements over a given axis.
Function | ptp | Range of values (maximum - minimum) along an axis.
Function | put | Replaces specified elements of an array with given values.
Function | ravel | Return a contiguous flattened array.
Function | repeat | Repeat elements of an array.
Function | reshape | Gives a new shape to an array without changing its data.
Function | resize | Return a new array with the specified shape.
Function | round_ | Round an array to the given number of decimals.
Function | searchsorted | Find indices where elements should be inserted to maintain order.
Function | shape | Return the shape of an array.
Function | size | Return the number of elements along a given axis.
Function | sometrue | Check whether some values are true.
Function | sort | Return a sorted copy of an array.
Function | squeeze | Remove axes of length one from a.
Function | std | Compute the standard deviation along the specified axis.
Function | sum | Sum of array elements over a given axis.
Function | swapaxes | Interchange two axes of an array.
Function | take | Take elements from an array along an axis.
Function | trace | Return the sum along diagonals of the array.
Function | transpose | Reverse or permute the axes of an array; returns the modified array.
Function | var | Compute the variance along the specified axis.
Return the length of the first dimension of the input array.

numpy.alen is deprecated; use len instead.

See also: shape, size

>>> a = np.zeros((7,4,5))
>>> a.shape[0]
7
>>> np.alen(a)
7
Test whether all array elements along a given axis evaluate to True.

axis : Axis or axes along which a logical AND reduction is performed. The default (axis=None) is to perform a logical AND over all the dimensions of the input array. axis may be negative, in which case it counts from the last to the first axis. If this is a tuple of ints, a reduction is performed on multiple axes, instead of a single axis or all the axes as before.

keepdims : If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then keepdims will not be passed through to the all method of sub-classes of ndarray; however, any non-default value will be. If the sub-class' method does not implement keepdims, any exceptions will be raised.

where : Elements to include in checking for all True values. See ~numpy.ufunc.reduce for details.

Returns : A new boolean or array is returned unless out is specified, in which case a reference to out is returned.

See also: ndarray.all (equivalent method); any (Test whether any element along a given axis evaluates to True).

Notes: Not a Number (NaN), positive infinity and negative infinity evaluate to True because these are not equal to zero.

>>> np.all([[True,False],[True,True]])
False
>>> np.all([[True,False],[True,True]], axis=0)
array([ True, False])
>>> np.all([-1, 4, 5])
True
>>> np.all([1.0, np.nan])
True
>>> np.all([[True, True], [False, True]], where=[[True], [False]])
True
>>> o = np.array(False)
>>> z = np.all([-1, 4, 5], out=o)
>>> id(z), id(o), z
(28293632, 28293632, array(True))  # may vary
Check if all elements of input array are true.
numpy.all : Equivalent function; see numpy.all for details.
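A quick sketch of the equivalence, written with the non-deprecated spelling np.all; the note that alltrue is a deprecated alias reflects recent NumPy releases and should be checked against your installed version:

```python
import numpy as np

# np.alltrue(x) simply forwards to np.all(x); prefer np.all in new code,
# since the alltrue alias has been deprecated (and later removed).
print(np.all([1, 2, 3]))   # True: every element is nonzero
print(np.all([1, 0, 3]))   # False: contains a zero
```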
Return the maximum of an array or maximum along an axis.
Axis or axes along which to operate. By default, flattened input is used.
If this is a tuple of ints, the maximum is selected over multiple axes, instead of a single axis or all the axes as before.
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
If the default value is passed, then keepdims will not be passed through to the amax method of sub-classes of ndarray; however, any non-default value will be. If the sub-class' method does not implement keepdims, any exceptions will be raised.

initial : The minimum value of an output element. Must be present to allow computation on an empty slice. See ~numpy.ufunc.reduce for details.

where : Elements to compare for the maximum. See ~numpy.ufunc.reduce for details.

Returns : Maximum of a. If axis is None, the result is a scalar value. If axis is given, the result is an array of dimension a.ndim - 1.

See also: amin, nanmax, maximum, fmax, argmax, nanmin, minimum, fmin

Notes: NaN values are propagated, that is if at least one item is NaN, the corresponding max value will be NaN as well. To ignore NaN values (MATLAB behavior), please use nanmax.

Don't use amax for element-wise comparison of 2 arrays; when a.shape[0] is 2, maximum(a[0], a[1]) is faster than amax(a, axis=0).

>>> a = np.arange(4).reshape((2,2))
>>> a
array([[0, 1],
       [2, 3]])
>>> np.amax(a)           # Maximum of the flattened array
3
>>> np.amax(a, axis=0)   # Maxima along the first axis
array([2, 3])
>>> np.amax(a, axis=1)   # Maxima along the second axis
array([1, 3])
>>> np.amax(a, where=[False, True], initial=-1, axis=0)
array([-1,  3])
>>> b = np.arange(5, dtype=float)
>>> b[2] = np.NaN
>>> np.amax(b)
nan
>>> np.amax(b, where=~np.isnan(b), initial=-1)
4.0
>>> np.nanmax(b)
4.0

You can use an initial value to compute the maximum of an empty slice, or to initialize it to a different value:

>>> np.amax([[-50], [10]], axis=-1, initial=0)
array([ 0, 10])

Notice that the initial value is used as one of the elements for which the maximum is determined, unlike the default argument of Python's max function, which is only used for empty iterables.

>>> np.amax([5], initial=6)
6
>>> max([5], default=6)
5
Return the minimum of an array or minimum along an axis.
Axis or axes along which to operate. By default, flattened input is used.
If this is a tuple of ints, the minimum is selected over multiple axes, instead of a single axis or all the axes as before.
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
If the default value is passed, then keepdims will not be passed through to the amin method of sub-classes of ndarray; however, any non-default value will be. If the sub-class' method does not implement keepdims, any exceptions will be raised.

initial : The maximum value of an output element. Must be present to allow computation on an empty slice. See ~numpy.ufunc.reduce for details.

where : Elements to compare for the minimum. See ~numpy.ufunc.reduce for details.

Returns : Minimum of a. If axis is None, the result is a scalar value. If axis is given, the result is an array of dimension a.ndim - 1.

See also: amax, nanmin, minimum, fmin, argmin, nanmax, maximum, fmax

Notes: NaN values are propagated, that is if at least one item is NaN, the corresponding min value will be NaN as well. To ignore NaN values (MATLAB behavior), please use nanmin.

Don't use amin for element-wise comparison of 2 arrays; when a.shape[0] is 2, minimum(a[0], a[1]) is faster than amin(a, axis=0).

>>> a = np.arange(4).reshape((2,2))
>>> a
array([[0, 1],
       [2, 3]])
>>> np.amin(a)           # Minimum of the flattened array
0
>>> np.amin(a, axis=0)   # Minima along the first axis
array([0, 1])
>>> np.amin(a, axis=1)   # Minima along the second axis
array([0, 2])
>>> np.amin(a, where=[False, True], initial=10, axis=0)
array([10,  1])

>>> b = np.arange(5, dtype=float)
>>> b[2] = np.NaN
>>> np.amin(b)
nan
>>> np.amin(b, where=~np.isnan(b), initial=10)
0.0
>>> np.nanmin(b)
0.0

>>> np.amin([[-50], [10]], axis=-1, initial=0)
array([-50,   0])

Notice that the initial value is used as one of the elements for which the minimum is determined, unlike the default argument of Python's min function, which is only used for empty iterables.

>>> np.amin([6], initial=5)
5
>>> min([6], default=5)
6
Test whether any array element along a given axis evaluates to True.
Returns a single boolean unless axis is not None.

axis : Axis or axes along which a logical OR reduction is performed. The default (axis=None) is to perform a logical OR over all the dimensions of the input array. axis may be negative, in which case it counts from the last to the first axis. If this is a tuple of ints, a reduction is performed on multiple axes, instead of a single axis or all the axes as before.

out : Alternate output array in which to place the result. It must have the same shape as the expected output, and its type is preserved (e.g., if it is of type float, then it will remain so, returning 1.0 for True and 0.0 for False, regardless of the type of a). See :ref:`ufuncs-output-type` for more details.

keepdims : If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. If the default value is passed, then keepdims will not be passed through to the any method of sub-classes of ndarray; however, any non-default value will be. If the sub-class' method does not implement keepdims, any exceptions will be raised.

where : Elements to include in checking for any True values. See ~numpy.ufunc.reduce for details.

Returns : A new boolean or ndarray is returned unless out is specified, in which case a reference to out is returned.

See also: ndarray.any (equivalent method); all (Test whether all elements along a given axis evaluate to True).

Notes: Not a Number (NaN), positive infinity and negative infinity evaluate to True because these are not equal to zero.

>>> np.any([[True, False], [True, True]])
True
>>> np.any([[True, False], [False, False]], axis=0)
array([ True, False])
>>> np.any([-1, 0, 5])
True
>>> np.any(np.nan)
True
>>> np.any([[True, False], [False, False]], where=[[False], [True]])
False
>>> o = np.array(False)
>>> z = np.any([-1, 4, 5], out=o)
>>> z, o
(array(True), array(True))
>>> # Check now that z is a reference to o
>>> z is o
True
>>> id(z), id(o)  # identity of z and o  # doctest: +SKIP
(191614240, 191614240)
Returns the indices of the maximum values along an axis.
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array.
Returns : index_array, an array of indices into the array. It has the same shape as a.shape with the dimension along axis removed. If keepdims is set to True, then the size of axis will be 1, with the resulting array having the same shape as a.shape.

See also: ndarray.argmax, argmin; amax (The maximum value along a given axis); unravel_index (Convert a flat index into an index tuple); take_along_axis (Apply np.expand_dims(index_array, axis) from argmax to an array as if by calling max).

In case of multiple occurrences of the maximum values, the indices corresponding to the first occurrence are returned.

>>> a = np.arange(6).reshape(2,3) + 10
>>> a
array([[10, 11, 12],
       [13, 14, 15]])
>>> np.argmax(a)
5
>>> np.argmax(a, axis=0)
array([1, 1, 1])
>>> np.argmax(a, axis=1)
array([2, 2])

Indexes of the maximal elements of a N-dimensional array:

>>> ind = np.unravel_index(np.argmax(a, axis=None), a.shape)
>>> ind
(1, 2)
>>> a[ind]
15

>>> b = np.arange(6)
>>> b[1] = 5
>>> b
array([0, 5, 2, 3, 4, 5])
>>> np.argmax(b)  # Only the first occurrence is returned.
1

>>> x = np.array([[4,2,3], [1,0,3]])
>>> index_array = np.argmax(x, axis=-1)
>>> # Same as np.amax(x, axis=-1, keepdims=True)
>>> np.take_along_axis(x, np.expand_dims(index_array, axis=-1), axis=-1)
array([[4],
       [3]])
>>> # Same as np.amax(x, axis=-1)
>>> np.take_along_axis(x, np.expand_dims(index_array, axis=-1), axis=-1).squeeze(axis=-1)
array([4, 3])

Setting keepdims to True:

>>> x = np.arange(24).reshape((2, 3, 4))
>>> res = np.argmax(x, axis=1, keepdims=True)
>>> res.shape
(2, 1, 4)
Returns the indices of the minimum values along an axis.
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the array.
Returns : index_array, an array of indices into the array. It has the same shape as a.shape with the dimension along axis removed. If keepdims is set to True, then the size of axis will be 1, with the resulting array having the same shape as a.shape.

See also: ndarray.argmin, argmax; amin (The minimum value along a given axis); unravel_index (Convert a flat index into an index tuple); take_along_axis (Apply np.expand_dims(index_array, axis) from argmin to an array as if by calling min).

In case of multiple occurrences of the minimum values, the indices corresponding to the first occurrence are returned.

>>> a = np.arange(6).reshape(2,3) + 10
>>> a
array([[10, 11, 12],
       [13, 14, 15]])
>>> np.argmin(a)
0
>>> np.argmin(a, axis=0)
array([0, 0, 0])
>>> np.argmin(a, axis=1)
array([0, 0])

Indices of the minimum elements of a N-dimensional array:

>>> ind = np.unravel_index(np.argmin(a, axis=None), a.shape)
>>> ind
(0, 0)
>>> a[ind]
10

>>> b = np.arange(6) + 10
>>> b[4] = 10
>>> b
array([10, 11, 12, 13, 10, 15])
>>> np.argmin(b)  # Only the first occurrence is returned.
0

>>> x = np.array([[4,2,3], [1,0,3]])
>>> index_array = np.argmin(x, axis=-1)
>>> # Same as np.amin(x, axis=-1, keepdims=True)
>>> np.take_along_axis(x, np.expand_dims(index_array, axis=-1), axis=-1)
array([[2],
       [0]])
>>> # Same as np.amin(x, axis=-1)
>>> np.take_along_axis(x, np.expand_dims(index_array, axis=-1), axis=-1).squeeze(axis=-1)
array([2, 0])

Setting keepdims to True:

>>> x = np.arange(24).reshape((2, 3, 4))
>>> res = np.argmin(x, axis=1, keepdims=True)
>>> res.shape
(2, 1, 4)
Perform an indirect partition along the given axis using the algorithm specified by the kind keyword. It returns an array of indices of the same shape as a that index data along the given axis in partitioned order.

kth : Element index to partition by. The k-th element will be in its final sorted position and all smaller elements will be moved before it and all larger elements behind it. The order of all elements in the partitions is undefined. If provided with a sequence of k-th values, it will partition all of them into their sorted position at once.

order : When a is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties.

Returns : index_array, an array of indices that partition a along the specified axis. If a is one-dimensional, a[index_array] yields a partitioned a. More generally, np.take_along_axis(a, index_array, axis=axis) always yields the partitioned a, irrespective of dimensionality.

See also: partition (Describes partition algorithms used); ndarray.partition (Inplace partition); argsort (Full indirect sort); take_along_axis (Apply index_array from argpartition to an array as if by calling partition).

See partition for notes on the different selection algorithms.

One dimensional array:

>>> x = np.array([3, 4, 2, 1])
>>> x[np.argpartition(x, 3)]
array([2, 1, 3, 4])
>>> x[np.argpartition(x, (1, 3))]
array([1, 2, 3, 4])

>>> x = [3, 4, 2, 1]
>>> np.array(x)[np.argpartition(x, 3)]
array([2, 1, 3, 4])

Multi-dimensional array:

>>> x = np.array([[3, 4, 2], [1, 3, 1]])
>>> index_array = np.argpartition(x, kth=1, axis=-1)
>>> np.take_along_axis(x, index_array, axis=-1)  # same as np.partition(x, kth=1)
array([[2, 3, 4],
       [1, 1, 3]])
Returns the indices that would sort an array.
Perform an indirect sort along the given axis using the algorithm specified by the kind keyword. It returns an array of indices of the same shape as a that index data along the given axis in sorted order.

kind : Sorting algorithm. The default is 'quicksort'. Note that both 'stable' and 'mergesort' use timsort under the covers and, in general, the actual implementation will vary with data type. The 'mergesort' option is retained for backwards compatibility.

order : When a is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties.

Returns : index_array, an array of indices that sort a along the specified axis. If a is one-dimensional, a[index_array] yields a sorted a. More generally, np.take_along_axis(a, index_array, axis=axis) always yields the sorted a, irrespective of dimensionality.

See also: sort (Describes sorting algorithms used); lexsort (Indirect stable sort with multiple keys); ndarray.sort (Inplace sort); argpartition (Indirect partial sort); take_along_axis (Apply index_array from argsort to an array as if by calling sort).

See sort for notes on the different sorting algorithms. As of NumPy 1.4.0, argsort works with real/complex arrays containing nan values. The enhanced sort order is documented in sort.

One dimensional array:

>>> x = np.array([3, 1, 2])
>>> np.argsort(x)
array([1, 2, 0])
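The nan handling mentioned in the notes can be seen directly; with the default sort order, nan values are placed at the end (a small sketch):

```python
import numpy as np

x = np.array([3.0, np.nan, 1.0])
ind = np.argsort(x)  # nan compares as larger than any number here
print(ind)           # the index of the nan element comes last
print(x[ind])        # sorted values, nan at the end
```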
Two-dimensional array:

>>> x = np.array([[0, 3], [2, 2]])
>>> x
array([[0, 3],
       [2, 2]])

>>> ind = np.argsort(x, axis=0)  # sorts along first axis (down)
>>> ind
array([[0, 1],
       [1, 0]])
>>> np.take_along_axis(x, ind, axis=0)  # same as np.sort(x, axis=0)
array([[0, 2],
       [2, 3]])

>>> ind = np.argsort(x, axis=1)  # sorts along last axis (across)
>>> ind
array([[0, 1],
       [0, 1]])
>>> np.take_along_axis(x, ind, axis=1)  # same as np.sort(x, axis=1)
array([[0, 3],
       [2, 2]])

Indices of the sorted elements of a N-dimensional array:

>>> ind = np.unravel_index(np.argsort(x, axis=None), x.shape)
>>> ind
(array([0, 1, 1, 0]), array([0, 0, 1, 1]))
>>> x[ind]  # same as np.sort(x, axis=None)
array([0, 2, 2, 3])

Sorting with keys:

>>> x = np.array([(1, 0), (0, 1)], dtype=[('x', '<i4'), ('y', '<i4')])
>>> x
array([(1, 0), (0, 1)],
      dtype=[('x', '<i4'), ('y', '<i4')])

>>> np.argsort(x, order=('x','y'))
array([1, 0])

>>> np.argsort(x, order=('y','x'))
array([0, 1])
Evenly round to the given number of decimals.
Returns : An array of the same type as a, containing the rounded values. Unless out was specified, a new array is created. A reference to the result is returned.
The real and imaginary parts of complex numbers are rounded separately. The result of rounding a float is a float.
ndarray.round : equivalent method
ceil, fix, floor, rint, trunc
For values exactly halfway between rounded decimal values, NumPy rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0, -0.5 and 0.5 round to 0.0, etc.
np.around uses a fast but sometimes inexact algorithm to round floating-point datatypes. For positive decimals it is equivalent to np.true_divide(np.rint(a * 10**decimals), 10**decimals), which has error due to the inexact representation of decimal fractions in the IEEE floating point standard [1] and errors introduced when scaling by powers of ten. For instance, note the extra "1" in the following:

>>> np.round(56294995342131.5, 3)
56294995342131.51
If your goal is to print such values with a fixed number of decimals, it is preferable to use numpy's float printing routines to limit the number of printed decimals:
>>> np.format_float_positional(56294995342131.5, precision=3) '56294995342131.5'
The float printing routines use an accurate but much more computationally demanding algorithm to compute the number of digits after the decimal point.
Alternatively, Python's builtin round function uses a more accurate but slower algorithm for 64-bit floating point values:

>>> round(56294995342131.5, 3)
56294995342131.5
>>> np.round(16.055, 2), round(16.055, 2)  # equals 16.0549999999999997
(16.06, 16.05)

[1] "Lecture Notes on the Status of IEEE 754", William Kahan, https://people.eecs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF

>>> np.around([0.37, 1.64])
array([0., 2.])
>>> np.around([0.37, 1.64], decimals=1)
array([0.4, 1.6])
>>> np.around([.5, 1.5, 2.5, 3.5, 4.5])  # rounds to nearest even value
array([0., 2., 2., 4., 4.])
>>> np.around([1,2,3,11], decimals=1)  # ndarray of ints is returned
array([ 1,  2,  3, 11])
>>> np.around([1,2,3,11], decimals=-1)
array([ 0,  0,  0, 10])
Construct an array from an index array and a list of arrays to choose from.
First of all, if confused or uncertain, definitely look at the Examples - in its full generality, this function is less simple than it might seem from the following code description (below ndi = numpy.lib.index_tricks):

np.choose(a,c) == np.array([c[a[I]][I] for I in ndi.ndindex(a.shape)]).

But this omits some subtleties. Here is a fully general summary:

Given an "index" array (a) of integers and a sequence of n arrays (choices), a and each choice array are first broadcast, as necessary, to arrays of a common shape; calling these Ba and Bchoices[i], i = 0,...,n-1 we have that, necessarily, Ba.shape == Bchoices[i].shape for each i. Then, a new array with shape Ba.shape is created as follows:

- if mode='raise' (the default), each element of a (and thus Ba) must be in the range [0, n-1]; the value at each position in the new array is then taken from Bchoices[i] at that position, where i is the value of Ba there;
- if mode='wrap', values in a (and thus Ba) may be any (signed) integer; modular arithmetic is used to map integers outside the range [0, n-1] back into that range; and then the new array is constructed as above;
- if mode='clip', values in a (and thus Ba) may be any (signed) integer; negative integers are mapped to 0; values greater than n-1 are mapped to n-1; and then the new array is constructed as above.

choices : Choice arrays. a and all of the choices must be broadcastable to the same shape. If choices is itself an array (not recommended), then its outermost dimension (i.e., the one corresponding to choices.shape[0]) is taken as defining the "sequence".

out : out is always buffered if mode='raise'; use other modes for better performance.

mode : Specifies how indices outside [0, n-1] will be treated:

- 'raise' : an exception is raised
- 'wrap' : value becomes value mod n
- 'clip' : values < 0 are mapped to 0, values > n-1 are mapped to n-1

Raises ValueError (shape mismatch) if a and each choice array are not all broadcastable to the same shape.

See also: ndarray.choose (equivalent method); numpy.take_along_axis (Preferable if choices is an array).

To reduce the chance of misinterpretation, even though the following "abuse" is nominally supported, choices should neither be, nor be thought of as, a single array, i.e., the outermost sequence-like container should be either a list or a tuple.
>>> choices = [[0, 1, 2, 3], [10, 11, 12, 13],
...   [20, 21, 22, 23], [30, 31, 32, 33]]
>>> np.choose([2, 3, 1, 0], choices
... # the first element of the result will be the first element of the
... # third (2+1) "array" in choices, namely, 20; the second element
... # will be the second element of the fourth (3+1) choice array, i.e.,
... # 31, etc.
... )
array([20, 31, 12,  3])
>>> np.choose([2, 4, 1, 0], choices, mode='clip')  # 4 goes to 3 (4-1)
array([20, 31, 12,  3])
>>> # because there are 4 choice arrays
>>> np.choose([2, 4, 1, 0], choices, mode='wrap')  # 4 goes to (4 mod 4)
array([20,  1, 12,  3])
>>> # i.e., 0
A couple examples illustrating how choose broadcasts:
>>> a = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]
>>> choices = [-10, 10]
>>> np.choose(a, choices)
array([[ 10, -10,  10],
       [-10,  10, -10],
       [ 10, -10,  10]])

>>> # With thanks to Anne Archibald
>>> a = np.array([0, 1]).reshape((2,1,1))
>>> c1 = np.array([1, 2, 3]).reshape((1,3,1))
>>> c2 = np.array([-1, -2, -3, -4, -5]).reshape((1,1,5))
>>> np.choose(a, (c1, c2))  # result is 2x3x5, res[0,:,:]=c1, res[1,:,:]=c2
array([[[ 1,  1,  1,  1,  1],
        [ 2,  2,  2,  2,  2],
        [ 3,  3,  3,  3,  3]],
       [[-1, -2, -3, -4, -5],
        [-1, -2, -3, -4, -5],
        [-1, -2, -3, -4, -5]]])
Clip (limit) the values in an array.
Given an interval, values outside the interval are clipped to the interval edges. For example, if an interval of [0, 1] is specified, values smaller than 0 become 0, and values larger than 1 become 1.
Equivalent to but faster than np.minimum(a_max, np.maximum(a, a_min)).
No check is performed to ensure a_min < a_max.
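The stated equivalence can be checked directly (a minimal sketch; the array values and bounds here are arbitrary):

```python
import numpy as np

a = np.array([-3, 0, 2, 7, 11])
a_min, a_max = 0, 5
clipped = np.clip(a, a_min, a_max)
# Same result as composing the two bounds by hand:
manual = np.minimum(a_max, np.maximum(a, a_min))
print(clipped)                          # values limited to [0, 5]
print(np.array_equal(clipped, manual))  # True
```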
a_min, a_max : Minimum and maximum value. Only one of a_min and a_max may be None. Both are broadcast against a.

out : The results will be placed in this array. out must be of the right shape to hold the output. Its type is preserved.

For other keyword-only arguments, see the :ref:`ufunc docs <ufuncs.kwargs>`.

Returns : An array with the elements of a, but where values < a_min are replaced with a_min, and those > a_max with a_max.

When a_min is greater than a_max, clip returns an array in which all values are equal to a_max, as shown in the second example.

>>> a = np.arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> np.clip(a, 1, 8)
array([1, 1, 2, 3, 4, 5, 6, 7, 8, 8])
>>> np.clip(a, 8, 1)
array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1])
>>> np.clip(a, 3, 6, out=a)
array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6])
>>> a
array([3, 3, 3, 3, 4, 5, 6, 6, 6, 6])
>>> a = np.arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> np.clip(a, [3, 4, 1, 1, 1, 4, 4, 4, 4, 4], 8)
array([3, 4, 2, 3, 4, 5, 6, 7, 8, 8])
Return selected slices of an array along given axis.
When working along a given axis, a slice along that axis is returned in output for each index where condition evaluates to True. When working on a 1-D array, compress is equivalent to extract.

condition : If the length of condition is less than the size of a along the given axis, then output is truncated to the length of the condition array.

Returns : A copy of a without the slices along axis for which condition is false.

See also: take, choose, diag, diagonal, select; ndarray.compress (Equivalent method in ndarray); extract (Equivalent method when working on 1-D arrays); :ref:`ufuncs-output-type`

>>> a = np.array([[1, 2], [3, 4], [5, 6]])
>>> a
array([[1, 2],
       [3, 4],
       [5, 6]])
>>> np.compress([0, 1], a, axis=0)
array([[3, 4]])
>>> np.compress([False, True, True], a, axis=0)
array([[3, 4],
       [5, 6]])
>>> np.compress([False, True], a, axis=1)
array([[2],
       [4],
       [6]])
Working on the flattened array does not return slices along an axis but selects elements.
>>> np.compress([False, True], a)
array([2])
Return the cumulative product of elements along a given axis.
dtype : If dtype is not specified, it defaults to the dtype of a, unless a has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used instead.

Returns : A new array holding the result is returned unless out is specified, in which case a reference to out is returned.

Arithmetic is modular when using integer types, and no error is raised on overflow.

>>> a = np.array([1,2,3])
>>> np.cumprod(a)  # intermediate results 1, 1*2 ... total product 1*2*3 = 6
array([1, 2, 6])
>>> a = np.array([[1, 2, 3], [4, 5, 6]])
>>> np.cumprod(a, dtype=float)  # specify type of output
array([  1.,   2.,   6.,  24., 120., 720.])
The cumulative product for each column (i.e., over the rows) of a:

>>> np.cumprod(a, axis=0)
array([[ 1,  2,  3],
       [ 4, 10, 18]])

The cumulative product for each row (i.e. over the columns) of a:

>>> np.cumprod(a, axis=1)
array([[  1,   2,   6],
       [  4,  20, 120]])
Return the cumulative product over the given axis.
cumprod : Equivalent function; see cumprod for details.
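Since cumproduct simply forwards to cumprod (the alias is deprecated in recent NumPy releases and later removed — a version-dependent detail), the equivalent call is:

```python
import numpy as np

# Prefer np.cumprod over the deprecated np.cumproduct alias.
print(np.cumprod([2, 3, 4]))  # running products: 2, 2*3, 2*3*4
```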
Return the cumulative sum of the elements along a given axis.
dtype : If dtype is not specified, it defaults to the dtype of a, unless a has an integer dtype with a precision less than that of the default platform integer. In that case, the default platform integer is used.

Returns : A new array holding the result is returned unless out is specified, in which case a reference to out is returned. The result has the same size as a, and the same shape as a if axis is not None or a is a 1-d array.

See also: sum (Sum array elements); trapz (Integration of array values using the composite trapezoidal rule); diff (Calculate the n-th discrete difference along given axis).

Arithmetic is modular when using integer types, and no error is raised on overflow.

cumsum(a)[-1] may not be equal to sum(a) for floating-point values since sum may use a pairwise summation routine, reducing the roundoff-error. See sum for more information.

>>> a = np.array([[1,2,3], [4,5,6]])
>>> a
array([[1, 2, 3],
       [4, 5, 6]])
>>> np.cumsum(a)
array([ 1,  3,  6, 10, 15, 21])
>>> np.cumsum(a, dtype=float)  # specifies type of output value(s)
array([ 1.,  3.,  6., 10., 15., 21.])

>>> np.cumsum(a, axis=0)  # sum over rows for each of the 3 columns
array([[1, 2, 3],
       [5, 7, 9]])
>>> np.cumsum(a, axis=1)  # sum over columns for each of the 2 rows
array([[ 1,  3,  6],
       [ 4,  9, 15]])

cumsum(b)[-1] may not be equal to sum(b):

>>> b = np.array([1, 2e-9, 3e-9] * 1000000)
>>> b.cumsum()[-1]
1000000.0050045159
>>> b.sum()
1000000.0050000029
Return specified diagonals.
If a is 2-D, returns the diagonal of a with the given offset, i.e., the collection of elements of the form a[i, i+offset]. If a has more than two dimensions, then the axes specified by axis1 and axis2 are used to determine the 2-D sub-array whose diagonal is returned. The shape of the resulting array can be determined by removing axis1 and axis2 and appending an index to the right equal to the size of the resulting diagonals.
In versions of NumPy prior to 1.7, this function always returned a new, independent array containing a copy of the values in the diagonal.
In NumPy 1.7 and 1.8, it continues to return a copy of the diagonal, but depending on this fact is deprecated. Writing to the resulting array continues to work as it used to, but a FutureWarning is issued.
Starting in NumPy 1.9 it returns a read-only view on the original array. Attempting to write to the resulting array will produce an error.
In some future release, it will return a read/write view and writing to the returned array will alter your original array. The returned array will have the same type as the input array.
If you don't write to the array returned by this function, then you can just ignore all of the above.
If you depend on the current behavior, then we suggest copying the returned array explicitly, i.e., use np.diagonal(a).copy() instead of just np.diagonal(a). This will work with both past and future versions of NumPy.
If a is 2-D, then a 1-D array containing the diagonal and of the same type as a is returned, unless a is a matrix, in which case a 1-D array rather than a (2-D) matrix is returned in order to maintain backward compatibility.
If a.ndim > 2, then the dimensions specified by axis1 and axis2 are removed, and a new axis inserted at the end corresponding to the diagonal.

Raises ValueError if the dimension of a is less than 2.

diag : MATLAB work-a-like for 1-D and 2-D arrays.
diagflat : Create diagonal arrays.
trace : Sum along diagonals.
>>> a = np.arange(4).reshape(2,2)
>>> a
array([[0, 1],
       [2, 3]])
>>> a.diagonal()
array([0, 3])
>>> a.diagonal(1)
array([1])
A 3-D example:
>>> a = np.arange(8).reshape(2,2,2); a
array([[[0, 1],
        [2, 3]],
       [[4, 5],
        [6, 7]]])
>>> a.diagonal(0,  # Main diagonals of two arrays created by skipping
...            0,  # across the outer(left)-most axis last and
...            1)  # the "middle" (row) axis first.
array([[0, 6],
       [1, 7]])
The sub-arrays whose main diagonals we just obtained; note that each corresponds to fixing the right-most (column) axis, and that the diagonals are "packed" in rows.
>>> a[:,:,0]  # main diagonal is [0 6]
array([[0, 2],
       [4, 6]])
>>> a[:,:,1]  # main diagonal is [1 7]
array([[1, 3],
       [5, 7]])
The anti-diagonal can be obtained by reversing the order of elements using either numpy.flipud or numpy.fliplr.
>>> a = np.arange(9).reshape(3, 3)
>>> a
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
>>> np.fliplr(a).diagonal()  # Horizontal flip
array([2, 4, 6])
>>> np.flipud(a).diagonal()  # Vertical flip
array([6, 4, 2])
Note that the order in which the diagonal is retrieved varies depending on the flip function.
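The view behavior described in the notes above can be checked directly. A minimal sketch, assuming NumPy 1.9 or later, where np.diagonal returns a read-only view:

```python
import numpy as np

a = np.arange(9).reshape(3, 3)

d = np.diagonal(a)        # read-only view on the original array
try:
    d[0] = 99             # writing to the view raises an error
    writable = True
except ValueError:
    writable = False

# Copy explicitly when a writable diagonal is needed; the original
# array is left untouched.
d_copy = np.diagonal(a).copy()
d_copy[0] = 99
```

This is exactly the np.diagonal(a).copy() idiom recommended above for code that must work across NumPy versions.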
Compute the arithmetic mean along the specified axis.
Returns the average of the array elements. The average is taken over
the flattened array by default, otherwise over the specified axis.
float64 intermediate and return values are used for integer inputs. If a is not an array, a conversion is attempted.

Axis or axes along which the means are computed. The default is to compute the mean of the flattened array.
If this is a tuple of ints, a mean is performed over multiple axes, instead of a single axis or all the axes as before.
float64; for floating point inputs, it is the same as the input dtype.

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
If the default value is passed, then keepdims will not be passed through to the mean method of sub-classes of ndarray; however, any non-default value will be. If the sub-class' method does not implement keepdims, any exceptions will be raised.
Elements to include in the mean. See numpy.ufunc.reduce for details.

If out=None, returns a new array containing the mean values; otherwise a reference to the output array is returned.

average : Weighted average
std, var, nanmean, nanstd, nanvar
The arithmetic mean is the sum of the elements along the axis divided by the number of elements.
Note that for floating-point input, the mean is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for float32 (see example below). Specifying a higher-precision accumulator using the dtype keyword can alleviate this issue. By default, float16 results are computed using float32 intermediates for extra precision.
>>> a = np.array([[1, 2], [3, 4]])
>>> np.mean(a)
2.5
>>> np.mean(a, axis=0)
array([2., 3.])
>>> np.mean(a, axis=1)
array([1.5, 3.5])
In single precision, mean can be inaccurate:
>>> a = np.zeros((2, 512*512), dtype=np.float32)
>>> a[0, :] = 1.0
>>> a[1, :] = 0.1
>>> np.mean(a)
0.54999924
Computing the mean in float64 is more accurate:
>>> np.mean(a, dtype=np.float64)
0.55000000074505806  # may vary
Specifying a where argument:

>>> a = np.array([[5, 9, 13], [14, 10, 12], [11, 15, 19]])
>>> np.mean(a)
12.0
>>> np.mean(a, where=[[True], [False], [False]])
9.0
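The keepdims behavior described above is what makes reductions broadcast back against their input. A small sketch (the array is illustrative), centering each row of a matrix:

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])

# keepdims=True leaves the reduced axis in place with length one,
# so row_means has shape (2, 1) and broadcasts against a's (2, 2).
row_means = np.mean(a, axis=1, keepdims=True)
centered = a - row_means
```

Without keepdims=True, row_means would have shape (2,) and the subtraction would broadcast along the wrong axis.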
Return the number of dimensions of an array.
The number of dimensions in a. Scalars are zero-dimensional.

ndarray.ndim : equivalent method
shape : dimensions of array
ndarray.shape : dimensions of array
>>> np.ndim([[1,2,3],[4,5,6]])
2
>>> np.ndim(np.array([[1,2,3],[4,5,6]]))
2
>>> np.ndim(1)
0
Return the indices of the elements that are non-zero.
Returns a tuple of arrays, one for each dimension of a, containing the indices of the non-zero elements in that dimension. The values in a are always tested and returned in row-major, C-style order.

To group the indices by element, rather than dimension, use argwhere, which returns a row for each non-zero element.
Note

When called on a zero-d array or scalar, nonzero(a) is treated as nonzero(atleast_1d(a)). This behavior is deprecated; call atleast_1d explicitly if it is deliberate.

While the nonzero values can be obtained with a[nonzero(a)], it is recommended to use x[x.astype(bool)] or x[x != 0] instead, which will correctly handle 0-d arrays.
>>> x = np.array([[3, 0, 0], [0, 4, 0], [5, 6, 0]])
>>> x
array([[3, 0, 0],
       [0, 4, 0],
       [5, 6, 0]])
>>> np.nonzero(x)
(array([0, 1, 2, 2]), array([0, 1, 0, 1]))

>>> x[np.nonzero(x)]
array([3, 4, 5, 6])
>>> np.transpose(np.nonzero(x))
array([[0, 0],
       [1, 1],
       [2, 0],
       [2, 1]])
A common use for nonzero is to find the indices of an array where a condition is True. Given an array a, the condition a > 3 is a boolean array and, since False is interpreted as 0, np.nonzero(a > 3) yields the indices of a where the condition is true.
>>> a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> a > 3
array([[False, False, False],
       [ True,  True,  True],
       [ True,  True,  True]])
>>> np.nonzero(a > 3)
(array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
Using this result to index a is equivalent to using the mask directly:
>>> a[np.nonzero(a > 3)]
array([4, 5, 6, 7, 8, 9])
>>> a[a > 3]  # prefer this spelling
array([4, 5, 6, 7, 8, 9])
nonzero can also be called as a method of the array.
>>> (a > 3).nonzero()
(array([1, 1, 1, 2, 2, 2]), array([0, 1, 2, 0, 1, 2]))
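The argwhere grouping mentioned above pairs naturally with this pattern; a minimal sketch:

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# np.nonzero returns one index array per dimension; np.argwhere
# groups the same indices by element, one (row, col) pair per hit.
grouped = np.argwhere(a > 3)
equivalent = np.transpose(np.nonzero(a > 3))
```

Use nonzero's tuple form for indexing back into the array, and argwhere's grouped form for iterating over element coordinates.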
Return a partitioned copy of an array.
Creates a copy of the array with its elements rearranged in such a way that the value of the element in k-th position is in the position it would be in a sorted array. All elements smaller than the k-th element are moved before this element and all equal or greater are moved behind it. The ordering of the elements in the two partitions is undefined.
Element index to partition by. The k-th value of the element will be in its final sorted position and all smaller elements will be moved before it and all equal or greater elements behind it. The order of all elements in the partitions is undefined. If provided with a sequence of k-th it will partition all elements indexed by k-th of them into their sorted position at once.
If a is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string. Not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties.

Returns a partitioned array of the same type and shape as a.

ndarray.partition : Method to sort an array in-place.
argpartition : Indirect partition.
sort : Full sorting
The various selection algorithms are characterized by their average speed, worst case performance, work space size, and whether they are stable. A stable sort keeps items with the same key in the same relative order. The available algorithms have the following properties:
| kind | speed | worst case | work space | stable |
|---|---|---|---|---|
| 'introselect' | 1 | O(n) | 0 | no |
All the partition algorithms make temporary copies of the data when partitioning along any but the last axis. Consequently, partitioning along the last axis is faster and uses less space than partitioning along any other axis.
The sort order for complex numbers is lexicographic. If both the real and imaginary parts are non-nan then the order is determined by the real parts except when they are equal, in which case the order is determined by the imaginary parts.
>>> a = np.array([3, 4, 2, 1])
>>> np.partition(a, 3)
array([2, 1, 3, 4])

>>> np.partition(a, (1, 3))
array([1, 2, 3, 4])
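A common use of partition is selecting the k smallest values without paying for a full sort; a minimal sketch (the array and k are illustrative):

```python
import numpy as np

a = np.array([7, 1, 5, 3, 9, 2])
k = 3

# After partitioning at index k-1, the first k entries are the k
# smallest values, in unspecified order; sort them only if needed.
k_smallest = np.sort(np.partition(a, k - 1)[:k])
```

For large arrays this is O(n) for the selection plus O(k log k) for the small final sort, versus O(n log n) for sorting everything.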
Return the product of array elements over a given axis.
Axis or axes along which a product is performed. The default, axis=None, will calculate the product of all the elements in the input array. If axis is negative it counts from the last to the first axis.
If axis is a tuple of ints, a product is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before.
The dtype of a is used by default unless a has an integer dtype of less precision than the default platform integer. In that case, if a is signed then the platform integer is used, while if a is unsigned then an unsigned integer of the same precision as the platform integer is used.

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
If the default value is passed, then keepdims will not be passed through to the prod method of sub-classes of ndarray; however, any non-default value will be. If the sub-class' method does not implement keepdims, any exceptions will be raised.
The starting value for this product. See numpy.ufunc.reduce for details.

Elements to include in the product. See numpy.ufunc.reduce for details.

An array shaped as a but with the specified axis removed, with the dtype given by the dtype parameter above. Returns a reference to out if specified.

ndarray.prod : equivalent method
:ref:`ufuncs-output-type`
Arithmetic is modular when using integer types, and no error is raised on overflow. That means that, on a 32-bit platform:
>>> x = np.array([536870910, 536870910, 536870910, 536870910])
>>> np.prod(x)
16  # may vary
The product of an empty array is the neutral element 1:
>>> np.prod([])
1.0
By default, calculate the product of all elements:
>>> np.prod([1.,2.])
2.0
Even when the input array is two-dimensional:
>>> np.prod([[1.,2.],[3.,4.]])
24.0
But we can also specify the axis over which to multiply:
>>> np.prod([[1.,2.],[3.,4.]], axis=1)
array([  2.,  12.])
Or select specific elements to include:
>>> np.prod([1., np.nan, 3.], where=[True, False, True])
3.0
If the type of x is unsigned, then the output type is the unsigned platform integer:
>>> x = np.array([1, 2, 3], dtype=np.uint8)
>>> np.prod(x).dtype == np.uint
True
If x is of a signed integer type, then the output type is the default platform integer:
>>> x = np.array([1, 2, 3], dtype=np.int8)
>>> np.prod(x).dtype == int
True
You can also start the product with a value other than one:
>>> np.prod([1, 2], initial=5)
10
Return the product of array elements over a given axis.
prod : equivalent function; see for details.
Range of values (maximum - minimum) along an axis.
The name of the function comes from the acronym for 'peak to peak'.
Warning

ptp preserves the data type of the array. This means the return value for an input of signed integers with n bits (e.g. np.int8, np.int16, etc.) is also a signed integer with n bits. In that case, peak-to-peak values greater than 2**(n-1)-1 will be returned as negative values. An example with a work-around is shown below.
Axis along which to find the peaks. By default, flatten the array. axis may be negative, in which case it counts from the last to the first axis.
If this is a tuple of ints, a reduction is performed on multiple axes, instead of a single axis or all the axes as before.
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
If the default value is passed, then keepdims will not be passed through to the ptp method of sub-classes of ndarray; however, any non-default value will be. If the sub-class' method does not implement keepdims, any exceptions will be raised.
A new array holding the result, unless out was specified, in which case a reference to out is returned.

>>> x = np.array([[4, 9, 2, 10],
...               [6, 9, 7, 12]])
>>> np.ptp(x, axis=1)
array([8, 6])

>>> np.ptp(x, axis=0)
array([2, 0, 5, 2])

>>> np.ptp(x)
10
This example shows that a negative value can be returned when the input is an array of signed integers.
>>> y = np.array([[1, 127],
...               [0, 127],
...               [-1, 127],
...               [-2, 127]], dtype=np.int8)
>>> np.ptp(y, axis=1)
array([ 126,  127, -128, -127], dtype=int8)
A work-around is to use the view() method to view the result as unsigned integers with the same bit width:
>>> np.ptp(y, axis=1).view(np.uint8)
array([126, 127, 128, 129], dtype=uint8)
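An alternative to the view() work-around, assuming a temporary copy is acceptable, is to widen the dtype before the reduction so the peak-to-peak value cannot wrap around in the first place:

```python
import numpy as np

y = np.array([[1, 127],
              [0, 127],
              [-1, 127],
              [-2, 127]], dtype=np.int8)

# Casting to int64 first avoids int8 overflow in the subtraction,
# at the cost of a temporary widened copy of the input.
safe = np.ptp(y.astype(np.int64), axis=1)
```

Unlike the view() trick, this also works when the inputs themselves (not just the range) would overflow the narrow type.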
Replaces specified elements of an array with given values.
The indexing works on the flattened target array. put is roughly equivalent to:

a.flat[ind] = v
Values to place in a at target indices. If v is shorter than ind it will be repeated as necessary.

Specifies how out-of-bounds indices will behave.
'clip' mode means that all indices that are too large are replaced by the index that addresses the last element along that axis. Note that this disables indexing with negative numbers. In 'raise' mode, if an exception occurs the target array may still be modified.
putmask, place put_along_axis : Put elements by matching the array and the index arrays
>>> a = np.arange(5)
>>> np.put(a, [0, 2], [-44, -55])
>>> a
array([-44,   1, -55,   3,   4])

>>> a = np.arange(5)
>>> np.put(a, 22, -5, mode='clip')
>>> a
array([ 0,  1,  2,  3, -5])
Return a contiguous flattened array.
A 1-D array, containing the elements of the input, is returned. A copy is made only if needed.
As of NumPy 1.10, the returned array will have the same type as the input array. (for example, a masked array will be returned for a masked array input)
The elements of a are read in the order specified by order, and packed as a 1-D array.

order : {'C', 'F', 'A', 'K'}, optional
The elements of a are read using this index order. 'C' means to index the elements in row-major, C-style order, with the last axis index changing fastest, back to the first axis index changing slowest. 'F' means to index the elements in column-major, Fortran-style order, with the first index changing fastest, and the last index changing slowest. Note that the 'C' and 'F' options take no account of the memory layout of the underlying array, and only refer to the order of axis indexing. 'A' means to read the elements in Fortran-like index order if a is Fortran contiguous in memory, C-like order otherwise. 'K' means to read the elements in the order they occur in memory, except for reversing the data when strides are negative. By default, 'C' index order is used.
The output is a contiguous 1-D array of the same subtype as a, with shape (a.size,). Note that matrices are special-cased for backward compatibility: if a is a matrix, then y is a 1-D ndarray.

ndarray.flat : 1-D iterator over an array.
ndarray.flatten : 1-D array copy of the elements of an array in row-major order.
ndarray.reshape : Change the shape of an array without changing its data.
In row-major, C-style order, in two dimensions, the row index varies the slowest, and the column index the quickest. This can be generalized to multiple dimensions, where row-major order implies that the index along the first axis varies slowest, and the index along the last quickest. The opposite holds for column-major, Fortran-style index ordering.
When a view is desired in as many cases as possible, arr.reshape(-1) may be preferable. It is equivalent to reshape(-1, order=order).
>>> x = np.array([[1, 2, 3], [4, 5, 6]])
>>> np.ravel(x)
array([1, 2, 3, 4, 5, 6])

>>> x.reshape(-1)
array([1, 2, 3, 4, 5, 6])

>>> np.ravel(x, order='F')
array([1, 4, 2, 5, 3, 6])
When order is 'A', it will preserve the array's 'C' or 'F' ordering:
>>> np.ravel(x.T)
array([1, 4, 2, 5, 3, 6])
>>> np.ravel(x.T, order='A')
array([1, 2, 3, 4, 5, 6])
When order is 'K', it will preserve orderings that are neither 'C' nor 'F', but won't reverse axes:
>>> a = np.arange(3)[::-1]; a
array([2, 1, 0])
>>> a.ravel(order='C')
array([2, 1, 0])
>>> a.ravel(order='K')
array([2, 1, 0])

>>> a = np.arange(12).reshape(2,3,2).swapaxes(1,2); a
array([[[ 0,  2,  4],
        [ 1,  3,  5]],
       [[ 6,  8, 10],
        [ 7,  9, 11]]])
>>> a.ravel(order='C')
array([ 0,  2,  4,  1,  3,  5,  6,  8, 10,  7,  9, 11])
>>> a.ravel(order='K')
array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11])
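Whether a particular call returned a view or a copy can be checked with np.shares_memory; a small sketch:

```python
import numpy as np

x = np.arange(6).reshape(2, 3)

# A C-contiguous input can be raveled without copying...
flat_view = np.ravel(x)
is_view = np.shares_memory(x, flat_view)

# ...while a non-contiguous input (here, a transpose) forces a copy
# under the default 'C' order.
flat_copy = np.ravel(x.T)
is_copy = not np.shares_memory(x, flat_copy)
```

This check is useful in code that mutates the raveled result and needs to know whether the original array will be affected.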
Repeat elements of an array.
repeats is broadcasted to fit the shape of the given axis.

The output array has the same shape as a, except along the given axis.

tile : Tile an array.
unique : Find the unique elements of an array.
>>> np.repeat(3, 4)
array([3, 3, 3, 3])
>>> x = np.array([[1,2],[3,4]])
>>> np.repeat(x, 2)
array([1, 1, 2, 2, 3, 3, 4, 4])
>>> np.repeat(x, 3, axis=1)
array([[1, 1, 1, 2, 2, 2],
       [3, 3, 3, 4, 4, 4]])
>>> np.repeat(x, [1, 2], axis=0)
array([[1, 2],
       [3, 4],
       [3, 4]])
Gives a new shape to an array without changing its data.
Read the elements of a using this index order, and place the elements into the reshaped array using this index order. 'C' means to read / write the elements using C-like index order, with the last axis index changing fastest, back to the first axis index changing slowest. 'F' means to read / write the elements using Fortran-like index order, with the first index changing fastest, and the last index changing slowest. Note that the 'C' and 'F' options take no account of the memory layout of the underlying array, and only refer to the order of indexing. 'A' means to read / write the elements in Fortran-like index order if a is Fortran contiguous in memory, C-like order otherwise.

ndarray.reshape : Equivalent method.
It is not always possible to change the shape of an array without copying the data. If you want an error to be raised when the data is copied, you should assign the new shape to the shape attribute of the array:
>>> a = np.zeros((10, 2))

# A transpose makes the array non-contiguous
>>> b = a.T

# Taking a view makes it possible to modify the shape without modifying
# the initial object.
>>> c = b.view()
>>> c.shape = (20)
Traceback (most recent call last):
   ...
AttributeError: Incompatible shape for in-place modification. Use
`.reshape()` to make a copy with the desired shape.
The order keyword gives the index ordering both for fetching the values from a, and then placing the values into the output array. For example, let's say you have an array:
>>> a = np.arange(6).reshape((3, 2))
>>> a
array([[0, 1],
       [2, 3],
       [4, 5]])
You can think of reshaping as first raveling the array (using the given index order), then inserting the elements from the raveled array into the new array using the same kind of index ordering as was used for the raveling.
>>> np.reshape(a, (2, 3))  # C-like index ordering
array([[0, 1, 2],
       [3, 4, 5]])
>>> np.reshape(np.ravel(a), (2, 3))  # equivalent to C ravel then C reshape
array([[0, 1, 2],
       [3, 4, 5]])
>>> np.reshape(a, (2, 3), order='F')  # Fortran-like index ordering
array([[0, 4, 3],
       [2, 1, 5]])
>>> np.reshape(np.ravel(a, order='F'), (2, 3), order='F')
array([[0, 4, 3],
       [2, 1, 5]])
>>> a = np.array([[1,2,3], [4,5,6]])
>>> np.reshape(a, 6)
array([1, 2, 3, 4, 5, 6])
>>> np.reshape(a, 6, order='F')
array([1, 4, 2, 5, 3, 6])

>>> np.reshape(a, (3,-1))  # the unspecified value is inferred to be 2
array([[1, 2],
       [3, 4],
       [5, 6]])
Return a new array with the specified shape.
If the new array is larger than the original array, then the new array is filled with repeated copies of a. Note that this behavior is different from a.resize(new_shape), which fills with zeros instead of repeated copies of a.
numpy.reshape : Reshape an array without changing the total size. numpy.pad : Enlarge and pad an array. numpy.repeat : Repeat elements of an array. ndarray.resize : resize an array in-place.
When the total size of the array does not change, reshape should be used. In most other cases either indexing (to reduce the size) or padding (to increase the size) may be a more appropriate solution.
Warning: This functionality does not consider axes separately, i.e. it does not apply interpolation/extrapolation. It fills the return array with the required number of elements, iterating over a in C-order, disregarding axes (and cycling back from the start if the new shape is larger). This functionality is therefore not suitable to resize images, or data where each axis represents a separate and distinct entity.
>>> a = np.array([[0, 1], [2, 3]])
>>> np.resize(a, (2, 3))
array([[0, 1, 2],
       [3, 0, 1]])
>>> np.resize(a, (1, 4))
array([[0, 1, 2, 3]])
>>> np.resize(a, (2, 4))
array([[0, 1, 2, 3],
       [0, 1, 2, 3]])
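The difference from a.resize noted above can be seen side by side; a sketch (refcheck=False is passed only so the in-place call succeeds regardless of outstanding references):

```python
import numpy as np

a = np.array([[0, 1], [2, 3]])

# np.resize fills the enlarged array by cycling through a's data...
enlarged = np.resize(a, (2, 4))

# ...whereas the in-place ndarray.resize method pads with zeros.
b = a.copy()
b.resize((2, 4), refcheck=False)
```

Reach for np.resize when the repeated-copy fill is what you want, and ndarray.resize when zero-padding in place is acceptable.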
Round an array to the given number of decimals.
around : equivalent function; see for details.
Find indices where elements should be inserted to maintain order.
Find the indices into a sorted array a such that, if the corresponding elements in v were inserted before the indices, the order of a would be preserved.

Assuming that a is sorted:
| side | returned index i satisfies |
|---|---|
| left | a[i-1] < v <= a[i] |
| right | a[i-1] <= v < a[i] |
If sorter is None, then a must be sorted in ascending order; otherwise sorter must be an array of indices that sort it.

Values to insert into a.

If there is no suitable index, return either 0 or N (where N is the length of a).

Optional array of integer indices that sort array a into ascending order. They are typically the result of argsort.

Returns an array of insertion points with the same shape as v, or an integer if v is a scalar.

sort : Return a sorted copy of an array.
histogram : Produce histogram from 1-D data.
Binary search is used to find the required insertion points.

As of NumPy 1.4.0, searchsorted works with real/complex arrays containing nan values. The enhanced sort order is documented in sort.

This function uses the same algorithm as the built-in Python bisect.bisect_left (side='left') and bisect.bisect_right (side='right') functions, which is also vectorized in the v argument.
>>> np.searchsorted([1,2,3,4,5], 3)
2
>>> np.searchsorted([1,2,3,4,5], 3, side='right')
3
>>> np.searchsorted([1,2,3,4,5], [-10, 10, 2, 3])
array([0, 5, 1, 2])
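When the array is not already sorted, the sorter argument described above avoids materializing a sorted copy; a minimal sketch:

```python
import numpy as np

a = np.array([40, 10, 30, 20])
order = np.argsort(a)     # indices that would sort a

# The search runs against the implied sorted order [10, 20, 30, 40];
# the returned position refers to that sorted order, not to a itself.
pos = np.searchsorted(a, 25, sorter=order)
```

This matches the docstring's note that sorter values are typically the result of argsort.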
Return the shape of an array.
len
ndarray.shape : Equivalent array method.
>>> np.shape(np.eye(3))
(3, 3)
>>> np.shape([[1, 2]])
(1, 2)
>>> np.shape([0])
(1,)
>>> np.shape(0)
()

>>> a = np.array([(1, 2), (3, 4)], dtype=[('x', 'i4'), ('y', 'i4')])
>>> np.shape(a)
(2,)
>>> a.shape
(2,)
Return the number of elements along a given axis.
shape : dimensions of array
ndarray.shape : dimensions of array
ndarray.size : number of elements in array
>>> a = np.array([[1,2,3],[4,5,6]])
>>> np.size(a)
6
>>> np.size(a,1)
3
>>> np.size(a,0)
2
Check whether some values are true.
Refer to any for full documentation.

any : equivalent function; see for details.
Return a sorted copy of an array.
Sorting algorithm. The default is 'quicksort'. Note that both 'stable' and 'mergesort' use timsort or radix sort under the covers and, in general, the actual implementation will vary with data type. The 'mergesort' option is retained for backwards compatibility.
If a is an array with fields defined, this argument specifies which fields to compare first, second, etc. A single field can be specified as a string, and not all fields need be specified, but unspecified fields will still be used, in the order in which they come up in the dtype, to break ties.

Returns a sorted array of the same type and shape as a.

ndarray.sort : Method to sort an array in-place.
argsort : Indirect sort.
lexsort : Indirect stable sort on multiple keys.
searchsorted : Find elements in a sorted array.
partition : Partial sort.
The various sorting algorithms are characterized by their average speed, worst case performance, work space size, and whether they are stable. A stable sort keeps items with the same key in the same relative order. The four algorithms implemented in NumPy have the following properties:
| kind | speed | worst case | work space | stable |
|---|---|---|---|---|
| 'quicksort' | 1 | O(n^2) | 0 | no |
| 'heapsort' | 3 | O(n*log(n)) | 0 | no |
| 'mergesort' | 2 | O(n*log(n)) | ~n/2 | yes |
| 'timsort' | 2 | O(n*log(n)) | ~n/2 | yes |
Note
The datatype determines which of 'mergesort' or 'timsort' is actually used, even if 'mergesort' is specified. User selection at a finer scale is not currently available.
All the sort algorithms make temporary copies of the data when sorting along any but the last axis. Consequently, sorting along the last axis is faster and uses less space than sorting along any other axis.
The sort order for complex numbers is lexicographic. If both the real and imaginary parts are non-nan then the order is determined by the real parts except when they are equal, in which case the order is determined by the imaginary parts.
Previous to numpy 1.4.0 sorting real and complex arrays containing nan values led to undefined behaviour. In numpy versions >= 1.4.0 nan values are sorted to the end. The extended sort order is:
- Real: [R, nan]
- Complex: [R + Rj, R + nanj, nan + Rj, nan + nanj]
where R is a non-nan real value. Complex values with the same nan placements are sorted according to the non-nan part if it exists. Non-nan values are sorted as before.
quicksort has been changed to introsort. When sorting does not make enough progress it switches to heapsort. This implementation makes quicksort O(n*log(n)) in the worst case.
'stable' automatically chooses the best stable sorting algorithm for the data type being sorted. It, along with 'mergesort' is currently mapped to timsort or radix sort depending on the data type. API forward compatibility currently limits the ability to select the implementation and it is hardwired for the different data types.
Timsort is added for better performance on already or nearly sorted data. On random data timsort is almost identical to mergesort. It is now used for stable sort while quicksort is still the default sort if none is chosen. For timsort details, refer to CPython listsort.txt. 'mergesort' and 'stable' are mapped to radix sort for integer data types. Radix sort is an O(n) sort instead of O(n log n).
NaT now sorts to the end of arrays for consistency with NaN.
>>> a = np.array([[1,4],[3,1]])
>>> np.sort(a)                # sort along the last axis
array([[1, 4],
       [1, 3]])
>>> np.sort(a, axis=None)     # sort the flattened array
array([1, 1, 3, 4])
>>> np.sort(a, axis=0)        # sort along the first axis
array([[1, 1],
       [3, 4]])
Use the order keyword to specify a field to use when sorting a structured array:
>>> dtype = [('name', 'S10'), ('height', float), ('age', int)]
>>> values = [('Arthur', 1.8, 41), ('Lancelot', 1.9, 38),
...           ('Galahad', 1.7, 38)]
>>> a = np.array(values, dtype=dtype)       # create a structured array
>>> np.sort(a, order='height')              # doctest: +SKIP
array([('Galahad', 1.7, 38), ('Arthur', 1.8, 41),
       ('Lancelot', 1.8999999999999999, 38)],
      dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')])
Sort by age, then height if ages are equal:
>>> np.sort(a, order=['age', 'height'])     # doctest: +SKIP
array([('Galahad', 1.7, 38), ('Lancelot', 1.8999999999999999, 38),
       ('Arthur', 1.8, 41)],
      dtype=[('name', '|S10'), ('height', '<f8'), ('age', '<i4')])
Remove axes of length one from a.
Selects a subset of the entries of length one in the shape. If an axis is selected with shape entry greater than one, an error is raised.
The input array, but with all or a subset of the dimensions of length one removed. This is always a itself or a view into a. Note that if all axes are squeezed, the result is a 0d array and not a scalar.

Raises ValueError if axis is not None and an axis being squeezed is not of length 1.

expand_dims : The inverse operation, adding entries of length one
reshape : Insert, remove, and combine dimensions, and resize existing ones
>>> x = np.array([[[0], [1], [2]]])
>>> x.shape
(1, 3, 1)
>>> np.squeeze(x).shape
(3,)
>>> np.squeeze(x, axis=0).shape
(3, 1)
>>> np.squeeze(x, axis=1).shape
Traceback (most recent call last):
...
ValueError: cannot select an axis to squeeze out which has size not equal to one
>>> np.squeeze(x, axis=2).shape
(1, 3)
>>> x = np.array([[1234]])
>>> x.shape
(1, 1)
>>> np.squeeze(x)
array(1234)  # 0d array
>>> np.squeeze(x).shape
()
>>> np.squeeze(x)[()]
1234
Compute the standard deviation along the specified axis.
Returns the standard deviation, a measure of the spread of a distribution, of the array elements. The standard deviation is computed for the flattened array by default, otherwise over the specified axis.
Axis or axes along which the standard deviation is computed. The default is to compute the standard deviation of the flattened array.
If this is a tuple of ints, a standard deviation is performed over multiple axes, instead of a single axis or all the axes as before.
By default ddof is zero.

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
If the default value is passed, then keepdims will not be passed through to the std method of sub-classes of ndarray; however, any non-default value will be. If the sub-class' method does not implement keepdims, any exceptions will be raised.
Elements to include in the standard deviation. See numpy.ufunc.reduce for details.

If out is None, return a new array containing the standard deviation; otherwise return a reference to the output array.

var, mean, nanmean, nanstd, nanvar
:ref:`ufuncs-output-type`
The standard deviation is the square root of the average of the squared deviations from the mean, i.e., std = sqrt(mean(x)), where x = abs(a - a.mean())**2.
The average squared deviation is typically calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of the infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables. The standard deviation computed in this function is the square root of the estimated variance, so even with ddof=1, it will not be an unbiased estimate of the standard deviation per se.
Note that, for complex numbers, std takes the absolute value before squaring, so that the result is always real and nonnegative.

For floating-point input, the std is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for float32 (see example below). Specifying a higher-accuracy accumulator using the dtype keyword can alleviate this issue.
>>> a = np.array([[1, 2], [3, 4]])
>>> np.std(a)
1.1180339887498949  # may vary
>>> np.std(a, axis=0)
array([1., 1.])
>>> np.std(a, axis=1)
array([0.5, 0.5])
In single precision, std() can be inaccurate:
>>> a = np.zeros((2, 512*512), dtype=np.float32)
>>> a[0, :] = 1.0
>>> a[1, :] = 0.1
>>> np.std(a)
0.45000005
Computing the standard deviation in float64 is more accurate:
>>> np.std(a, dtype=np.float64)
0.44999999925494177  # may vary
Specifying a where argument:
>>> a = np.array([[14, 8, 11, 10], [7, 9, 10, 11], [10, 15, 5, 10]])
>>> np.std(a)
2.614064523559687  # may vary
>>> np.std(a, where=[[True], [True], [False]])
2.0
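The divisor choice described in the notes can be checked directly; a small sketch contrasting ddof=0 and ddof=1 (the data is illustrative):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])

# Squared deviations from the mean (2.5) sum to 5.0, so the two
# estimators differ only in the divisor: N versus N - 1.
population_std = np.std(a)          # divides by N:     sqrt(5/4)
sample_std = np.std(a, ddof=1)      # divides by N - 1: sqrt(5/3)
```

The ddof=1 (Bessel-corrected) value is always at least as large as the default, and the gap shrinks as N grows.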
Sum of array elements over a given axis.
Axis or axes along which a sum is performed. The default, axis=None, will sum all of the elements of the input array. If axis is negative it counts from the last to the first axis.
If axis is a tuple of ints, a sum is performed on all of the axes specified in the tuple instead of a single axis or all the axes as before.
The dtype of a is used by default unless a has an integer dtype of less precision than the default platform integer. In that case, if a is signed then the platform integer is used, while if a is unsigned then an unsigned integer of the same precision as the platform integer is used.

If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
If the default value is passed, then keepdims will not be passed through to the sum method of sub-classes of ndarray; however, any non-default value will be. If the sub-class' method does not implement keepdims, any exceptions will be raised.
Starting value for the sum. See numpy.ufunc.reduce for details.

Elements to include in the sum. See numpy.ufunc.reduce for details.
An array with the same shape as a, with the specified axis removed. If a is a 0-d array, or if axis is None, a scalar is returned. If an output array is specified, a reference to out is returned.
ndarray.sum : Equivalent method.
add.reduce : Equivalent functionality of add.
cumsum : Cumulative sum of array elements.
trapz : Integration of array values using the composite trapezoidal rule.
mean, average
Arithmetic is modular when using integer types, and no error is raised on overflow.
The sum of an empty array is the neutral element 0:
>>> np.sum([])
0.0
For floating point numbers the numerical precision of sum (and
np.add.reduce) is in general limited by directly adding each number
individually to the result causing rounding errors in every step.
However, often numpy will use a numerically better approach (partial
pairwise summation) leading to improved precision in many use-cases.
This improved precision is always provided when no axis is given.
When axis is given, it will depend on which axis is summed.
Technically, to provide the best speed possible, the improved precision
is only used when the summation is along the fast axis in memory.
Note that the exact precision may vary depending on other parameters.
In contrast to NumPy, Python's math.fsum function uses a slower but
more precise approach to summation.
Especially when summing a large number of lower precision floating point
numbers, such as float32, numerical errors can become significant.
In such cases it can be advisable to use dtype="float64" to use a higher precision for the output.
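As a sketch of that advice (the array size and fill value below are illustrative, not taken from the original examples), summing many float32 values with a float64 accumulator visibly reduces the rounding error:

```python
import numpy as np

# Illustrative only: one million float32 copies of 0.1.
a = np.full(1_000_000, 0.1, dtype=np.float32)

# Exact sum of the stored float32 values, computed in double precision.
exact = 1_000_000 * float(np.float32(0.1))

s32 = np.sum(a)                     # accumulates in float32
s64 = np.sum(a, dtype=np.float64)   # higher-precision accumulator

err32 = abs(float(s32) - exact)
err64 = abs(float(s64) - exact)
# err64 is much smaller than err32
```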
>>> np.sum([0.5, 1.5])
2.0
>>> np.sum([0.5, 0.7, 0.2, 1.5], dtype=np.int32)
1
>>> np.sum([[0, 1], [0, 5]])
6
>>> np.sum([[0, 1], [0, 5]], axis=0)
array([0, 6])
>>> np.sum([[0, 1], [0, 5]], axis=1)
array([1, 5])
>>> np.sum([[0, 1], [np.nan, 5]], where=[False, True], axis=1)
array([1., 5.])
If the accumulator is too small, overflow occurs:
>>> np.ones(128, dtype=np.int8).sum(dtype=np.int8)
-128
You can also start the sum with a value other than zero:
>>> np.sum([10], initial=5)
15
Interchange two axes of an array.
If a is an ndarray, then a view of a is returned; otherwise a new array is created. For earlier NumPy versions a view of a is returned only if the order of the axes is changed, otherwise the input array is returned.
>>> x = np.array([[1, 2, 3]])
>>> np.swapaxes(x, 0, 1)
array([[1],
       [2],
       [3]])
>>> x = np.array([[[0,1],[2,3]],[[4,5],[6,7]]])
>>> x
array([[[0, 1],
        [2, 3]],
       [[4, 5],
        [6, 7]]])
>>> np.swapaxes(x,0,2)
array([[[0, 4],
        [2, 6]],
       [[1, 5],
        [3, 7]]])
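The view behaviour noted above can be checked directly; this is a minimal sketch (array chosen for illustration), verifying that writes through the swapped-axes result are visible in the original:

```python
import numpy as np

x = np.arange(6).reshape(2, 3)
v = np.swapaxes(x, 0, 1)   # view of x, with shape (3, 2)

# Writing through the view changes the original array,
# confirming that no copy was made.
v[0, 0] = 99
```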
Take elements from an array along an axis.
When axis is not None, this function does the same thing as "fancy" indexing (indexing arrays using arrays); however, it can be easier to use if you need elements along a given axis. A call such as np.take(arr, indices, axis=3) is equivalent to arr[:,:,:,indices,...].
Explained without fancy indexing, this is equivalent to the following use of ndindex, which sets each of ii, jj, and kk to a tuple of indices:

    Ni, Nk = a.shape[:axis], a.shape[axis+1:]
    Nj = indices.shape
    for ii in ndindex(Ni):
        for jj in ndindex(Nj):
            for kk in ndindex(Nk):
                out[ii + jj + kk] = a[ii + (indices[jj],) + kk]
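That loop can be executed as-is once concrete inputs are supplied; the sketch below fills in an array, axis, and indices (all chosen here purely for illustration) and checks the result against np.take:

```python
import numpy as np
from numpy import ndindex

a = np.arange(24).reshape(2, 3, 4)
indices = np.array([2, 0])   # illustrative choice of indices
axis = 1

# The triple loop from the documentation, verbatim.
Ni, Nk = a.shape[:axis], a.shape[axis + 1:]
Nj = indices.shape
out = np.empty(Ni + Nj + Nk, dtype=a.dtype)
for ii in ndindex(Ni):
    for jj in ndindex(Nj):
        for kk in ndindex(Nk):
            out[ii + jj + kk] = a[ii + (indices[jj],) + kk]

# out now matches np.take(a, indices, axis=1)
```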
The indices of the values to extract.
Also allow scalars for indices.
out is always buffered if mode='raise'; use other modes for better performance.
Specifies how out-of-bounds indices will behave: 'raise' raises an error (the default), 'wrap' wraps around, and 'clip' clips to the valid range.
'clip' mode means that all indices that are too large are replaced by the index that addresses the last element along that axis. Note that this disables indexing with negative numbers.
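A small sketch of the out-of-bounds modes (array and indices chosen for illustration):

```python
import numpy as np

a = np.array([4, 3, 5, 7])

# 'clip': too-large indices are replaced by the last valid index.
clipped = np.take(a, [0, 2, 9], mode='clip')   # index 9 -> index 3

# 'wrap': indices wrap around modulo the axis length.
wrapped = np.take(a, [0, 2, 9], mode='wrap')   # index 9 -> 9 % 4 == 1
```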
a.
compress : Take elements using a boolean mask
ndarray.take : Equivalent method
take_along_axis : Take elements by matching the array and the index arrays
By eliminating the inner loop in the description above, and using s_ to build simple slice objects, take can be expressed in terms of applying fancy indexing to each 1-d slice:

    Ni, Nk = a.shape[:axis], a.shape[axis+1:]
    for ii in ndindex(Ni):
        for kk in ndindex(Nk):
            out[ii + s_[...,] + kk] = a[ii + s_[:,] + kk][indices]
For this reason, it is equivalent to (but faster than) the following use of apply_along_axis:

    out = np.apply_along_axis(lambda a_1d: a_1d[indices], axis, a)
>>> a = [4, 3, 5, 7, 6, 8]
>>> indices = [0, 1, 4]
>>> np.take(a, indices)
array([4, 3, 6])
In this example if a is an ndarray, "fancy" indexing can be used.
>>> a = np.array(a)
>>> a[indices]
array([4, 3, 6])
If indices is not one dimensional, the output also has these dimensions.
>>> np.take(a, [[0, 1], [2, 3]])
array([[4, 3],
       [5, 7]])
Return the sum along diagonals of the array.
If a is 2-D, the sum along its diagonal with the given offset is returned, i.e., the sum of elements a[i,i+offset] for all i.
If a has more than two dimensions, then the axes specified by axis1 and axis2 are used to determine the 2-D sub-arrays whose traces are returned. The shape of the resulting array is the same as that of a with axis1 and axis2 removed.
a.
If a is of integer type of precision less than the default integer precision, then the default integer precision is used. Otherwise, the precision is the same as that of a.
If a is 2-D, the sum along the diagonal is returned. If a has larger dimensions, then an array of sums along diagonals is returned.
diag, diagonal, diagflat
>>> np.trace(np.eye(3))
3.0
>>> a = np.arange(8).reshape((2,2,2))
>>> np.trace(a)
array([6, 8])
>>> a = np.arange(24).reshape((2,2,2,3))
>>> np.trace(a).shape
(2, 3)
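The examples above use the default axes; as a sketch of explicit axis1/axis2 (the array is the same illustrative one), the result can be cross-checked against np.diagonal followed by a sum:

```python
import numpy as np

a = np.arange(24).reshape(2, 2, 2, 3)

# Trace over axes 0 and 1; the remaining axes (2, 3) give the result shape.
t = np.trace(a, axis1=0, axis2=1)

# Same reduction spelled out: take the diagonal, then sum along it.
d = np.diagonal(a, axis1=0, axis2=1).sum(axis=-1)
```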
Reverse or permute the axes of an array; returns the modified array.
For an array a with two axes, transpose(a) gives the matrix transpose.
Refer to numpy.ndarray.transpose for full documentation.
a with its axes permuted. A view is returned whenever possible.
ndarray.transpose : Equivalent method
moveaxis
argsort
Use transpose(a, argsort(axes)) to invert the transposition of tensors when using the axes keyword argument.
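A sketch of that inversion trick (the shape and permutation are chosen for illustration):

```python
import numpy as np

x = np.arange(120).reshape(2, 3, 4, 5)
axes = (2, 0, 3, 1)

y = np.transpose(x, axes)               # shape (4, 2, 5, 3)
z = np.transpose(y, np.argsort(axes))   # inverse permutation restores x
```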
Transposing a 1-D array returns an unchanged view of the original array.
>>> x = np.arange(4).reshape((2,2))
>>> x
array([[0, 1],
       [2, 3]])
>>> np.transpose(x)
array([[0, 2],
       [1, 3]])
>>> x = np.ones((1, 2, 3))
>>> np.transpose(x, (1, 0, 2)).shape
(2, 1, 3)
>>> x = np.ones((2, 3, 4, 5))
>>> np.transpose(x).shape
(5, 4, 3, 2)
Compute the variance along the specified axis.
Returns the variance of the array elements, a measure of the spread of a distribution. The variance is computed for the flattened array by default, otherwise over the specified axis.
If a is not an array, a conversion is attempted.
Axis or axes along which the variance is computed. The default is to compute the variance of the flattened array.
If this is a tuple of ints, a variance is performed over multiple axes, instead of a single axis or all the axes as before.
For arrays of integer type the default is float64; for arrays of float types it is the same as the array type.
"Delta Degrees of Freedom": the divisor used in calculations is N - ddof, where N represents the number of elements. By default ddof is zero.
If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
If the default value is passed, then keepdims will not be passed through to the var method of sub-classes of ndarray; however, any non-default value will be. If the sub-class' method does not implement keepdims, any exceptions will be raised.
Elements to include in the variance. See ~numpy.ufunc.reduce for details.
std, mean, nanmean, nanstd, nanvar
:ref:`ufuncs-output-type`
The variance is the average of the squared deviations from the mean, i.e., var = mean(x), where x = abs(a - a.mean())**2.
The mean is typically calculated as x.sum() / N, where N = len(x).
If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of a hypothetical infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables.
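A small numeric sketch of the two divisors (data chosen for illustration):

```python
import numpy as np

# Mean is 2.5; squared deviations are 2.25, 0.25, 0.25, 2.25 (sum 5.0).
a = np.array([1.0, 2.0, 3.0, 4.0])

pop_var = np.var(a)              # ddof=0: divisor N = 4, so 5.0 / 4 = 1.25
sample_var = np.var(a, ddof=1)   # ddof=1: divisor N - 1 = 3, so 5.0 / 3
```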
Note that for complex numbers, the absolute value is taken before squaring, so that the result is always real and nonnegative.
For floating-point input, the variance is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for float32 (see example below). Specifying a higher-accuracy accumulator using the dtype keyword can alleviate this issue.
>>> a = np.array([[1, 2], [3, 4]])
>>> np.var(a)
1.25
>>> np.var(a, axis=0)
array([1., 1.])
>>> np.var(a, axis=1)
array([0.25, 0.25])
In single precision, var() can be inaccurate:
>>> a = np.zeros((2, 512*512), dtype=np.float32)
>>> a[0, :] = 1.0
>>> a[1, :] = 0.1
>>> np.var(a)
0.20250003
Computing the variance in float64 is more accurate:
>>> np.var(a, dtype=np.float64)
0.20249999932944759 # may vary
>>> ((1-0.55)**2 + (0.1-0.55)**2)/2
0.2025
Specifying a where argument:
>>> a = np.array([[14, 8, 11, 10], [7, 9, 10, 11], [10, 15, 5, 10]])
>>> np.var(a)
6.833333333333333 # may vary
>>> np.var(a, where=[[True], [True], [False]])
4.0