Variable | einsum_symbols | Undocumented
Variable | einsum_symbols_set | Undocumented
Function | _can_dot | Checks if we can use a BLAS (np.tensordot) call and whether it is beneficial to do so.
Function | _compute_size_by_dict | Computes the product of the elements in indices based on the dictionary idx_dict.
Function | _einsum_dispatcher | Undocumented
Function | _einsum_path_dispatcher | Undocumented
Function | _find_contraction | Finds the contraction for a given set of input and output sets.
Function | _flop_count | Computes the number of FLOPS in the contraction.
Function | _greedy_path | Finds the path by contracting the best pair until the input list is exhausted.
Function | _optimal_path | Computes all possible pair contractions, sieves the results based on memory_limit, and returns the lowest-cost path. This algorithm scales factorially with the number of elements in the list input_sets.
Function | _parse_einsum_input | A reproduction of einsum's C-side parsing in Python.
Function | _parse_possible_contraction | Computes the cost (removed size + flops) and resulting indices for the contraction specified by positions.
Function | _update_other_results | Updates the positions and provisional input_sets of results after performing the contraction in best, removing any entries that involve the contracted tensors.
Function | einsum | einsum(subscripts, *operands, out=None, dtype=None, order='K', casting='safe', optimize=False)
Function | einsum_path | einsum_path(subscripts, *operands, optimize='greedy')
Checks if we can use a BLAS (np.tensordot) call and whether it is beneficial to do so.
If the operation is BLAS level 1 or 2 and the data is not already aligned, we default back to einsum, as the memory movement required to copy is more costly than the operation itself.
# Standard GEMM operation
>>> _can_dot(['ij', 'jk'], 'ik', set('j'))
True
# Can use the standard BLAS, but requires odd data movement
>>> _can_dot(['ijj', 'jk'], 'ik', set('j'))
False
# DDOT where the memory is not aligned
>>> _can_dot(['ijk', 'ikj'], '', set('ijk'))
False
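When the dispatch succeeds, the work is handed to np.tensordot. A quick, self-contained check of the GEMM case above (the array shapes here are illustrative):

>>> import numpy as np
>>> x = np.random.rand(3, 4)
>>> y = np.random.rand(4, 5)
>>> np.allclose(np.einsum('ij,jk->ik', x, y),
...             np.tensordot(x, y, axes=([1], [0])))
True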
Computes the product of the elements in indices based on the dictionary idx_dict.
>>> _compute_size_by_dict('abbc', {'a': 2, 'b': 3, 'c': 5})
90
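A minimal sketch consistent with the doctest above (a plausible reconstruction, not necessarily the exact library source):

def _compute_size_by_dict(indices, idx_dict):
    # Multiply together the size of every index that appears, counting repeats.
    ret = 1
    for i in indices:
        ret *= idx_dict[i]
    return ret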
Finds the contraction for a given set of input and output sets.
# A simple dot product test case
>>> pos = (0, 1)
>>> isets = [set('ab'), set('bc')]
>>> oset = set('ac')
>>> _find_contraction(pos, isets, oset)
({'a', 'c'}, [{'a', 'c'}], {'b'}, {'a', 'b', 'c'})
# A more complex case with additional terms in the contraction
>>> pos = (0, 2)
>>> isets = [set('abd'), set('ac'), set('bdc')]
>>> oset = set('ac')
>>> _find_contraction(pos, isets, oset)
({'a', 'c'}, [{'a', 'c'}, {'a', 'c'}], {'b', 'd'}, {'a', 'b', 'c', 'd'})
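A hedged sketch of the logic, reconstructed to agree with the doctests above (not necessarily the exact library source): the contracted indices are the union over the chosen positions, the surviving indices are those still required by the output or the untouched terms, and the rest are summed away.

def _find_contraction(positions, input_sets, output_set):
    idx_contract = set()            # every index touched by this contraction
    idx_remain = output_set.copy()  # indices that must survive it
    remaining = []
    for ind, value in enumerate(input_sets):
        if ind in positions:
            idx_contract |= value
        else:
            remaining.append(value)
            idx_remain |= value
    new_result = idx_remain & idx_contract   # indices kept by the new tensor
    idx_removed = idx_contract - new_result  # indices summed away
    remaining.append(new_result)
    return new_result, remaining, idx_removed, idx_contract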
Computes the number of FLOPS in the contraction.
>>> _flop_count('abc', False, 1, {'a': 2, 'b': 3, 'c': 5})
30
>>> _flop_count('abc', True, 2, {'a': 2, 'b': 3, 'c': 5})
60
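Both counts follow from a simple rule: the size of the full contracted index space times the number of operations per element. A sketch that reproduces the doctests (reusing _compute_size_by_dict from above; a plausible reconstruction rather than the exact source):

def _flop_count(idx_contraction, inner, num_terms, size_dictionary):
    overall_size = _compute_size_by_dict(idx_contraction, size_dictionary)
    op_factor = max(1, num_terms - 1)  # one multiplication per extra term
    if inner:
        op_factor += 1                 # plus one addition for the inner sum
    return overall_size * op_factor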
Finds the path by contracting the best pair until the input list is exhausted. The best pair is found by minimizing the tuple (-prod(indices_removed), cost). What this amounts to is prioritizing matrix multiplication or inner product operations, then Hadamard-like operations, and finally outer operations. Outer products are limited by memory_limit. This algorithm scales cubically with respect to the number of elements in the list input_sets.
>>> isets = [set('abd'), set('ac'), set('bdc')]
>>> oset = set()
>>> idx_sizes = {'a': 1, 'b': 2, 'c': 3, 'd': 4}
>>> _greedy_path(isets, oset, idx_sizes, 5000)
[(0, 2), (0, 1)]
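The pair-selection criterion can be shown in isolation. A self-contained sketch of minimizing the tuple (-prod(indices_removed), cost); the candidate data below is illustrative, not derived from the doctest above:

idx_sizes = {'a': 1, 'b': 2, 'c': 3, 'd': 4}

def removed_size(indices):
    size = 1
    for i in indices:
        size *= idx_sizes[i]
    return size

# Each candidate: (positions, indices_removed, flop_cost)
candidates = [((0, 1), set('b'), 24), ((0, 2), set('bd'), 48)]
best = min(candidates, key=lambda c: (-removed_size(c[1]), c[2]))
print(best[0])  # (0, 2): removing more index volume wins despite a higher cost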
Computes all possible pair contractions, sieves the results based on memory_limit, and returns the lowest-cost path. This algorithm scales factorially with respect to the number of elements in the list input_sets.
>>> isets = [set('abd'), set('ac'), set('bdc')]
>>> oset = set()
>>> idx_sizes = {'a': 1, 'b': 2, 'c': 3, 'd': 4}
>>> _optimal_path(isets, oset, idx_sizes, 5000)
[(0, 2), (0, 1)]
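To see the factorial scaling, count the distinct orders in which n tensors can be pairwise contracted: one of C(k, 2) pairs is chosen while k tensors remain. A short illustration:

from math import prod

def num_contraction_orders(n):
    # Choose one of k*(k-1)/2 pairs at each step until a single tensor is left.
    return prod(k * (k - 1) // 2 for k in range(n, 1, -1))

print([num_contraction_orders(n) for n in range(2, 7)])
# [1, 3, 18, 180, 2700]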
A reproduction of einsum's C-side parsing in Python.
The operand list is simplified to reduce printing:
>>> np.random.seed(123)
>>> a = np.random.rand(4, 4)
>>> b = np.random.rand(4, 4, 4)
>>> _parse_einsum_input(('...a,...a->...', a, b))
('za,xza', 'xz', [a, b]) # may vary
>>> _parse_einsum_input((a, [Ellipsis, 0], b, [Ellipsis, 0]))
('za,xza', 'xz', [a, b]) # may vary
Compute the cost (removed size + flops) and resultant indices for performing the contraction specified by positions.
Updates the positions and provisional input_sets of results after the contraction in best has been performed, removing any entries that involve the contracted tensors.
Evaluates the Einstein summation convention on the operands.
Using the Einstein summation convention, many common multi-dimensional, linear algebraic array operations can be represented in a simple fashion. In implicit mode, einsum computes these values. In explicit mode, einsum provides further flexibility to compute other array operations that might not be considered classical Einstein summation operations, by disabling or forcing summation over specified subscript labels.
See the notes and examples for clarification.
dtype
If provided, forces the calculation to use the data type specified. Note that you may have to also give a more liberal casting parameter to allow the conversions. Default is None.
casting
Controls what kind of data casting may occur. Setting this to 'unsafe' is not recommended, as it can adversely affect accumulations.
- 'no' means the data types should not be cast at all.
- 'equiv' means only byte-order changes are allowed.
- 'safe' means only casts which can preserve values are allowed.
- 'same_kind' means only safe casts or casts within a kind, like float64 to float32, are allowed.
- 'unsafe' means any data conversions may be done.
Default is 'safe'.
einsum_path, dot, inner, outer, tensordot, linalg.multi_dot
einops : A similar verbose interface is provided by the einops package to cover additional operations: transpose, reshape/flatten, repeat/tile, squeeze/unsqueeze and reductions.
The Einstein summation convention can be used to compute many multi-dimensional, linear algebraic array operations. einsum provides a succinct way of representing these. A non-exhaustive list of these operations, which can be computed by einsum, is shown below along with examples:
- Trace of an array, numpy.trace.
- Return a diagonal, numpy.diag.
- Array axis summations, numpy.sum.
- Transpositions and permutations, numpy.transpose.
- Matrix multiplication and dot product, numpy.matmul, numpy.dot.
- Vector inner and outer products, numpy.inner, numpy.outer.
- Broadcasting, element-wise and scalar multiplication, numpy.multiply.
- Tensor contractions, numpy.tensordot.
- Chained array operations, in efficient calculation order, numpy.einsum_path.
The subscripts string is a comma-separated list of subscript labels, where each label refers to a dimension of the corresponding operand. Whenever a label is repeated it is summed, so np.einsum('i,i', a, b) is equivalent to np.inner(a, b). If a label appears only once, it is not summed, so np.einsum('i', a) produces a view of a with no changes. A further example, np.einsum('ij,jk', a, b), describes traditional matrix multiplication and is equivalent to np.matmul(a, b). Repeated subscript labels in one operand take the diagonal. For example, np.einsum('ii', a) is equivalent to np.trace(a).
In implicit mode, the chosen subscripts are important since the axes of the output are reordered alphabetically. This means that np.einsum('ij', a) doesn't affect a 2D array, while np.einsum('ji', a) takes its transpose. Additionally, np.einsum('ij,jk', a, b) returns a matrix multiplication, while np.einsum('ij,jh', a, b) returns the transpose of the multiplication since subscript 'h' precedes subscript 'i'.
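The alphabetical reordering is easy to see on a non-square array:

>>> a = np.arange(6).reshape(2, 3)
>>> np.einsum('ij', a).shape   # implicit output 'ij': a is returned unchanged
(2, 3)
>>> np.einsum('ji', a).shape   # implicit output is still 'ij': the transpose
(3, 2)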
In explicit mode the output can be directly controlled by specifying output subscript labels. This requires the identifier '->' as well as the list of output subscript labels. This feature increases the flexibility of the function since summing can be disabled or forced when required. The call np.einsum('i->', a) is like np.sum(a, axis=-1), and np.einsum('ii->i', a) is like np.diag(a). The difference is that einsum does not allow broadcasting by default. Additionally, np.einsum('ij,jh->ih', a, b) directly specifies the order of the output subscript labels and therefore returns matrix multiplication, unlike the example above in implicit mode.
To enable and control broadcasting, use an ellipsis. Default NumPy-style broadcasting is done by adding an ellipsis to the left of each term, like np.einsum('...ii->...i', a). To take the trace along the first and last axes, you can do np.einsum('i...i', a), or to do a matrix-matrix product with the left-most indices instead of rightmost, one can do np.einsum('ij...,jk...->ik...', a, b).
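For example, on a three-dimensional array the ellipsis forms above behave as follows:

>>> a = np.arange(27).reshape(3, 3, 3)
>>> np.einsum('...ii->...i', a).shape   # diagonal of each trailing 2-D block
(3, 3)
>>> np.einsum('i...i', a).shape         # trace over the first and last axes
(3,)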
When there is only one operand, no axes are summed, and no output parameter is provided, a view into the operand is returned instead of a new array. Thus, taking the diagonal as np.einsum('ii->i', a) produces a view (changed in version 1.10.0).
einsum also provides an alternative way to provide the subscripts and operands as einsum(op0, sublist0, op1, sublist1, ..., [sublistout]). If the output shape is not provided in this format, einsum will be calculated in implicit mode, otherwise it will be performed explicitly. The examples below have corresponding einsum calls with the two parameter methods.
Views returned from einsum are now writeable whenever the input array is writeable. For example, np.einsum('ijk...->kji...', a) will now have the same effect as np.swapaxes(a, 0, 2), and np.einsum('ii->i', a) will return a writeable view of the diagonal of a 2D array.
Added the optimize argument which will optimize the contraction order of an einsum expression. For a contraction with three or more operands this can greatly increase the computational efficiency at the cost of a larger memory footprint during computation.
Typically a 'greedy' algorithm is applied which empirical tests have shown returns the optimal path in the majority of cases. In some cases 'optimal' will return the superlative path through a more expensive, exhaustive search. For iterative calculations it may be advisable to calculate the optimal path once and reuse that path by supplying it as an argument. An example is given below.
See numpy.einsum_path for more details.
>>> a = np.arange(25).reshape(5,5)
>>> b = np.arange(5)
>>> c = np.arange(6).reshape(2,3)
Trace of a matrix:
>>> np.einsum('ii', a)
60
>>> np.einsum(a, [0,0])
60
>>> np.trace(a)
60
Extract the diagonal (requires explicit form):
>>> np.einsum('ii->i', a)
array([ 0,  6, 12, 18, 24])
>>> np.einsum(a, [0,0], [0])
array([ 0,  6, 12, 18, 24])
>>> np.diag(a)
array([ 0,  6, 12, 18, 24])
Sum over an axis (requires explicit form):
>>> np.einsum('ij->i', a)
array([ 10,  35,  60,  85, 110])
>>> np.einsum(a, [0,1], [0])
array([ 10,  35,  60,  85, 110])
>>> np.sum(a, axis=1)
array([ 10,  35,  60,  85, 110])
For higher dimensional arrays summing a single axis can be done with ellipsis:
>>> np.einsum('...j->...', a)
array([ 10,  35,  60,  85, 110])
>>> np.einsum(a, [Ellipsis,1], [Ellipsis])
array([ 10,  35,  60,  85, 110])
Compute a matrix transpose, or reorder any number of axes:
>>> np.einsum('ji', c)
array([[0, 3],
       [1, 4],
       [2, 5]])
>>> np.einsum('ij->ji', c)
array([[0, 3],
       [1, 4],
       [2, 5]])
>>> np.einsum(c, [1,0])
array([[0, 3],
       [1, 4],
       [2, 5]])
>>> np.transpose(c)
array([[0, 3],
       [1, 4],
       [2, 5]])
Vector inner products:
>>> np.einsum('i,i', b, b)
30
>>> np.einsum(b, [0], b, [0])
30
>>> np.inner(b,b)
30
Matrix vector multiplication:
>>> np.einsum('ij,j', a, b)
array([ 30,  80, 130, 180, 230])
>>> np.einsum(a, [0,1], b, [1])
array([ 30,  80, 130, 180, 230])
>>> np.dot(a, b)
array([ 30,  80, 130, 180, 230])
>>> np.einsum('...j,j', a, b)
array([ 30,  80, 130, 180, 230])
Broadcasting and scalar multiplication:
>>> np.einsum('..., ...', 3, c)
array([[ 0,  3,  6],
       [ 9, 12, 15]])
>>> np.einsum(',ij', 3, c)
array([[ 0,  3,  6],
       [ 9, 12, 15]])
>>> np.einsum(3, [Ellipsis], c, [Ellipsis])
array([[ 0,  3,  6],
       [ 9, 12, 15]])
>>> np.multiply(3, c)
array([[ 0,  3,  6],
       [ 9, 12, 15]])
Vector outer product:
>>> np.einsum('i,j', np.arange(2)+1, b)
array([[0, 1, 2, 3, 4],
       [0, 2, 4, 6, 8]])
>>> np.einsum(np.arange(2)+1, [0], b, [1])
array([[0, 1, 2, 3, 4],
       [0, 2, 4, 6, 8]])
>>> np.outer(np.arange(2)+1, b)
array([[0, 1, 2, 3, 4],
       [0, 2, 4, 6, 8]])
Tensor contraction:
>>> a = np.arange(60.).reshape(3,4,5)
>>> b = np.arange(24.).reshape(4,3,2)
>>> np.einsum('ijk,jil->kl', a, b)
array([[4400., 4730.],
       [4532., 4874.],
       [4664., 5018.],
       [4796., 5162.],
       [4928., 5306.]])
>>> np.einsum(a, [0,1,2], b, [1,0,3], [2,3])
array([[4400., 4730.],
       [4532., 4874.],
       [4664., 5018.],
       [4796., 5162.],
       [4928., 5306.]])
>>> np.tensordot(a,b, axes=([1,0],[0,1]))
array([[4400., 4730.],
       [4532., 4874.],
       [4664., 5018.],
       [4796., 5162.],
       [4928., 5306.]])
Writeable returned arrays (since version 1.10.0):
>>> a = np.zeros((3, 3))
>>> np.einsum('ii->i', a)[:] = 1
>>> a
array([[1., 0., 0.],
       [0., 1., 0.],
       [0., 0., 1.]])
Example of ellipsis use:
>>> a = np.arange(6).reshape((3,2))
>>> b = np.arange(12).reshape((4,3))
>>> np.einsum('ki,jk->ij', a, b)
array([[10, 28, 46, 64],
       [13, 40, 67, 94]])
>>> np.einsum('ki,...k->i...', a, b)
array([[10, 28, 46, 64],
       [13, 40, 67, 94]])
>>> np.einsum('k...,jk', a, b)
array([[10, 28, 46, 64],
       [13, 40, 67, 94]])
Chained array operations. For more complicated contractions, speed ups might be achieved by repeatedly computing a 'greedy' path or pre-computing the 'optimal' path and repeatedly applying it, using an einsum_path insertion (since version 1.12.0). Performance improvements can be particularly significant with larger arrays:
>>> a = np.ones(64).reshape(2,4,8)
Basic einsum: ~1520ms (benchmarked on 3.1GHz Intel i5.)
>>> for iteration in range(500):
...     _ = np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a)
Sub-optimal einsum (due to repeated path calculation time): ~330ms
>>> for iteration in range(500):
...     _ = np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize='optimal')
Greedy einsum (faster optimal path approximation): ~160ms
>>> for iteration in range(500):
...     _ = np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize='greedy')
Optimal einsum (best usage pattern in some use cases): ~110ms
>>> path = np.einsum_path('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize='optimal')[0]
>>> for iteration in range(500):
...     _ = np.einsum('ijk,ilm,njm,nlk,abc->',a,a,a,a,a, optimize=path)
einsum_path(subscripts, *operands, optimize='greedy')
Evaluates the lowest cost contraction order for an einsum expression by considering the creation of intermediate arrays.
Choose the type of path. If a tuple is provided, the second argument is assumed to be the maximum intermediate size created. If only a single argument is provided the largest input or output array size is used as a maximum intermediate size.
Default is 'greedy'.
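For instance, the tuple form caps the largest intermediate considered during the search (the shapes and the 1e6 limit here are illustrative):

>>> x = np.random.rand(4, 4)
>>> y = np.random.rand(4, 4)
>>> path = np.einsum_path('ij,jk->ik', x, y, optimize=('optimal', 1e6))[0]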
The resulting path indicates which terms of the input contraction should be contracted first; the result of this contraction is then appended to the end of the contraction list. This list can then be iterated over until all intermediate contractions are complete.
einsum, linalg.multi_dot
We can begin with a chain dot example. In this case, it is optimal to contract the b and c tensors first as represented by the first element of the path (1, 2). The resulting tensor is added to the end of the contraction and the remaining contraction (0, 1) is then completed.
>>> np.random.seed(123)
>>> a = np.random.rand(2, 2)
>>> b = np.random.rand(2, 5)
>>> c = np.random.rand(5, 2)
>>> path_info = np.einsum_path('ij,jk,kl->il', a, b, c, optimize='greedy')
>>> print(path_info[0])
['einsum_path', (1, 2), (0, 1)]
>>> print(path_info[1])
  Complete contraction:  ij,jk,kl->il # may vary
         Naive scaling:  4
     Optimized scaling:  3
      Naive FLOP count:  1.600e+02
  Optimized FLOP count:  5.600e+01
   Theoretical speedup:  2.857
  Largest intermediate:  4.000e+00 elements
-------------------------------------------------------------------------
scaling                  current                                remaining
-------------------------------------------------------------------------
   3                   kl,jk->jl                                ij,jl->il
   3                   jl,ij->il                                   il->il
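The printed path can also be applied by hand, which makes the iteration described above concrete (a hedged sketch; the intermediate subscripts follow from the contraction):

>>> operands = [a, b, c]
>>> # Step (1, 2): contract 'jk,kl->jl' (pop the higher index first so the
>>> # remaining positions stay valid) and append the result to the list.
>>> kl, jk = operands.pop(2), operands.pop(1)
>>> operands.append(np.einsum('jk,kl->jl', jk, kl))
>>> # Step (0, 1): contract 'ij,jl->il' to complete the contraction.
>>> jl, ij = operands.pop(1), operands.pop(0)
>>> np.allclose(np.einsum('ij,jl->il', ij, jl),
...             np.einsum('ij,jk,kl->il', a, b, c))
True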
A more complex index transformation example.
>>> I = np.random.rand(10, 10, 10, 10)
>>> C = np.random.rand(10, 10)
>>> path_info = np.einsum_path('ea,fb,abcd,gc,hd->efgh', C, C, I, C, C,
...                            optimize='greedy')
>>> print(path_info[0])
['einsum_path', (0, 2), (0, 3), (0, 2), (0, 1)]
>>> print(path_info[1])
  Complete contraction:  ea,fb,abcd,gc,hd->efgh # may vary
         Naive scaling:  8
     Optimized scaling:  5
      Naive FLOP count:  8.000e+08
  Optimized FLOP count:  8.000e+05
   Theoretical speedup:  1000.000
  Largest intermediate:  1.000e+04 elements
--------------------------------------------------------------------------
scaling                  current                                remaining
--------------------------------------------------------------------------
   5               abcd,ea->bcde                      fb,gc,hd,bcde->efgh
   5               bcde,fb->cdef                         gc,hd,cdef->efgh
   5               cdef,gc->defg                            hd,defg->efgh
   5               defg,hd->efgh                               efgh->efgh