numpy.lib.npyio module documentation

Undocumented

Variable array_function_dispatch Undocumented
Class BagObj BagObj(obj)
Class DataSource DataSource(destpath='.')
Class NpzFile NpzFile(fid)
Function _floatconv Undocumented
Function _genfromtxt_dispatcher Undocumented
Function _getconv Find the correct dtype converter. Adapted from matplotlib.
Function _loadtxt_dispatcher Undocumented
Function _loadtxt_flatten_dtype_internal Unpack a structured data-type, and produce a packer function.
Function _loadtxt_pack_items Pack items into nested lists based on re-packing info.
Function _save_dispatcher Undocumented
Function _savetxt_dispatcher Undocumented
Function _savez Undocumented
Function _savez_compressed_dispatcher Undocumented
Function _savez_dispatcher Undocumented
Function fromregex Construct an array from a text file, using regular expression parsing.
Function genfromtxt Load data from a text file, with missing values handled as specified.
Function load Load arrays or pickled objects from .npy, .npz or pickled files.
Function loadtxt Load data from a text file.
Function recfromcsv Load ASCII data stored in a comma-separated file.
Function recfromtxt Load ASCII data from a file and return it in a record array.
Function save Save an array to a binary file in NumPy .npy format.
Function savetxt Save an array to a text file.
Function savez Save several arrays into a single file in uncompressed .npz format.
Function savez_compressed Save several arrays into a single file in compressed .npz format.
Function zipfile_factory Create a ZipFile.
Constant _CONVERTERS Undocumented
Variable _genfromtxt_with_like Undocumented
Variable _loadtxt_chunksize Undocumented
Variable _loadtxt_with_like Undocumented
array_function_dispatch =

Undocumented

def _floatconv(x):

Undocumented
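The name suggests a float-string converter. As a hedged illustration only (not NumPy's exact code), such a converter typically accepts plain decimal strings plus the hexadecimal literals produced by float.hex(), which loadtxt's notes mention as valid float input:

```python
def floatconv_sketch(x):
    # Illustrative only: try the ordinary float parse first, then fall
    # back to hexadecimal float literals such as those from float.hex().
    try:
        return float(x)
    except ValueError:
        if '0x' in x or '0X' in x:
            return float.fromhex(x)
        raise

value = floatconv_sketch('0x1.8p1')  # 1.5 * 2**1 -> 3.0
```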

def _genfromtxt_dispatcher(fname, dtype=None, comments=None, delimiter=None, skip_header=None, skip_footer=None, converters=None, missing_values=None, filling_values=None, usecols=None, names=None, excludelist=None, deletechars=None, replace_space=None, autostrip=None, case_sensitive=None, defaultfmt=None, unpack=None, usemask=None, loose=None, invalid_raise=None, max_rows=None, encoding=None, *, like=None):

Undocumented

def _getconv(dtype):

Find the correct dtype converter. Adapted from matplotlib.

Even when a lambda is returned, it is defined at the top level, to allow testing for equality and to enable optimization for single-type data.
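A hedged sketch of the lookup idea (the names below are illustrative, not NumPy's internals): walk an ordered (type, converter) list and return the first converter whose base type the requested dtype subclasses.

```python
import numpy as np

# Illustrative ordered dispatch table, most specific types first.
_CONVERTERS_SKETCH = [
    (np.bool_, lambda x: bool(int(x))),
    (np.integer, lambda x: int(float(x))),
    (np.floating, float),
    (complex, lambda x: complex(x.replace('+-', '-'))),
]

def getconv_sketch(dtype):
    typ = np.dtype(dtype).type
    for base, conv in _CONVERTERS_SKETCH:
        if issubclass(typ, base):
            return conv
    return str  # fall back to leaving the field as a string

converted = getconv_sketch(np.int32)('3.0')  # -> 3
```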

def _loadtxt_dispatcher(fname, dtype=None, comments=None, delimiter=None, converters=None, skiprows=None, usecols=None, unpack=None, ndmin=None, encoding=None, max_rows=None, *, like=None):

Undocumented

def _loadtxt_flatten_dtype_internal(dt):
Unpack a structured data-type, and produce a packer function.
def _loadtxt_pack_items(packing, items):
Pack items into nested lists based on re-packing info.
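The re-packing idea can be illustrated with a self-contained sketch (the packing format below is invented for illustration and is not NumPy's internal representation): a flat list of parsed values is regrouped into nested tuples matching a nested structured dtype.

```python
def pack_items_sketch(packing, items):
    # 'packing' is a list whose entries are either 1 (consume one scalar)
    # or a sub-list (recurse into a nested field).
    it = iter(items)

    def pack(spec):
        out = []
        for entry in spec:
            if entry == 1:
                out.append(next(it))
            else:
                out.append(pack(entry))
        return tuple(out)

    return pack(packing)

# A row parsed for a dtype like [('a', 'i4'), ('b', [('x', 'i4'), ('y', 'i4')])]
# flattens to three scalars that re-pack as (1, (2, 3)):
row = pack_items_sketch([1, [1, 1]], [1, 2, 3])
```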
def _save_dispatcher(file, arr, allow_pickle=None, fix_imports=None):

Undocumented

def _savetxt_dispatcher(fname, X, fmt=None, delimiter=None, newline=None, header=None, footer=None, comments=None, encoding=None):

Undocumented

def _savez(file, args, kwds, compress, allow_pickle=True, pickle_kwargs=None):

Undocumented

def _savez_compressed_dispatcher(file, *args, **kwds):

Undocumented

def _savez_dispatcher(file, *args, **kwds):

Undocumented

@set_module('numpy')
def fromregex(file, regexp, dtype, encoding=None):

Construct an array from a text file, using regular expression parsing.

The returned array is always a structured array, and is constructed from all matches of the regular expression in the file. Groups in the regular expression are converted to fields of the structured array.

Parameters

file : path or file

Filename or file object to read.

Changed in version 1.22.0: Now accepts os.PathLike implementations.
regexp : str or regexp
Regular expression used to parse the file. Groups in the regular expression correspond to fields in the dtype.
dtype : dtype or list of dtypes
Dtype for the structured array; must be a structured datatype.
encoding : str, optional

Encoding used to decode the input file. Does not apply to input streams.

New in version 1.14.0.

Returns

output : ndarray
The output array, containing the part of the content of file that was matched by regexp. output is always a structured array.

Raises

TypeError
When dtype is not a valid dtype for a structured array.

See Also

fromstring, loadtxt

Notes

Dtypes for structured arrays can be specified in several forms, but all forms specify at least the data type and field name. For details see basics.rec.

Examples

>>> from io import StringIO
>>> text = StringIO("1312 foo\n1534  bar\n444   qux")
>>> regexp = r"(\d+)\s+(...)"  # match [digits, whitespace, anything]
>>> output = np.fromregex(text, regexp,
...                       [('num', np.int64), ('key', 'S3')])
>>> output
array([(1312, b'foo'), (1534, b'bar'), ( 444, b'qux')],
      dtype=[('num', '<i8'), ('key', 'S3')])
>>> output['num']
array([1312, 1534,  444])
@set_array_function_like_doc
@set_module('numpy')
def genfromtxt(fname, dtype=float, comments='#', delimiter=None, skip_header=0, skip_footer=0, converters=None, missing_values=None, filling_values=None, usecols=None, names=None, excludelist=None, deletechars=''.join(sorted(NameValidator.defaultdeletechars)), replace_space='_', autostrip=False, case_sensitive=True, defaultfmt='f%i', unpack=None, usemask=False, loose=True, invalid_raise=True, max_rows=None, encoding='bytes', *, like=None):

Load data from a text file, with missing values handled as specified.

Each line past the first skip_header lines is split at the delimiter character, and characters following the comments character are discarded.

Parameters

fname : file, str, pathlib.Path, list of str, generator
File, filename, list, or generator to read. If the filename extension is .gz or .bz2, the file is first decompressed. Note that generators must return bytes or strings. The strings in a list or produced by a generator are treated as lines.
dtype : dtype, optional
Data type of the resulting array. If None, the dtypes will be determined by the contents of each column, individually.
comments : str, optional
The character used to indicate the start of a comment. All the characters occurring on a line after a comment are discarded.
delimiter : str, int, or sequence, optional
The string used to separate values. By default, any run of consecutive whitespace acts as the delimiter. An integer or sequence of integers can also be provided as the width(s) of each field.
skiprows : int, optional
skiprows was removed in numpy 1.10. Please use skip_header instead.
skip_header : int, optional
The number of lines to skip at the beginning of the file.
skip_footer : int, optional
The number of lines to skip at the end of the file.
converters : variable, optional
The set of functions that convert the data of a column to a value. The converters can also be used to provide a default value for missing data: converters = {3: lambda s: float(s or 0)}.
missing : variable, optional
missing was removed in numpy 1.10. Please use missing_values instead.
missing_values : variable, optional
The set of strings corresponding to missing data.
filling_values : variable, optional
The set of values to be used as default when the data are missing.
usecols : sequence, optional
Which columns to read, with 0 being the first. For example, usecols = (1, 4, 5) will extract the 2nd, 5th and 6th columns.
names : {None, True, str, sequence}, optional
If names is True, the field names are read from the first line after the first skip_header lines. This line can optionally be preceded by a comment delimiter. If names is a sequence or a single string of comma-separated names, the names will be used to define the field names in a structured dtype. If names is None, the names of the dtype fields will be used, if any.
excludelist : sequence, optional
A list of names to exclude. This list is appended to the default list ['return','file','print']. Excluded names are appended with an underscore: for example, file would become file_.
deletechars : str, optional
A string combining invalid characters that must be deleted from the names.
defaultfmt : str, optional
A format used to define default field names, such as "f%i" or "f_%02i".
autostrip : bool, optional
Whether to automatically strip white spaces from the variables.
replace_space : char, optional
Character(s) used in replacement of white spaces in the variable names. By default, use a '_'.
case_sensitive : {True, False, 'upper', 'lower'}, optional
If True, field names are case sensitive. If False or 'upper', field names are converted to upper case. If 'lower', field names are converted to lower case.
unpack : bool, optional
If True, the returned array is transposed, so that arguments may be unpacked using x, y, z = genfromtxt(...). When used with a structured data-type, arrays are returned for each field. Default is False.
usemask : bool, optional
If True, return a masked array. If False, return a regular array.
loose : bool, optional
If True, do not raise errors for invalid values.
invalid_raise : bool, optional
If True, an exception is raised if an inconsistency is detected in the number of columns. If False, a warning is emitted and the offending lines are skipped.
max_rows : int, optional

The maximum number of rows to read. Must not be used with skip_footer at the same time. If given, the value must be at least 1. Default is to read the entire file.

New in version 1.10.0.
encoding : str, optional

Encoding used to decode the input file. Does not apply when fname is a file object. The special value 'bytes' enables backward compatibility workarounds that ensure you receive byte arrays when possible and that pass latin1-encoded strings to converters. Override this value to receive unicode arrays and pass strings as input to converters. If set to None the system default is used. The default value is 'bytes'.

New in version 1.14.0.

${ARRAY_FUNCTION_LIKE}

New in version 1.20.0.

Returns

out : ndarray
Data read from the text file. If usemask is True, this is a masked array.

See Also

numpy.loadtxt : equivalent function when no data is missing.

Notes

  • When spaces are used as delimiters, or when no delimiter has been given as input, there should not be any missing data between two fields.
  • When the variables are named (either by a flexible dtype or with names), there must not be any header in the file (else a ValueError exception is raised).
  • Individual values are not stripped of spaces by default. When using a custom converter, make sure the function does remove spaces.

References

[1] NumPy User Guide, section I/O with NumPy.

Examples

>>> from io import StringIO
>>> import numpy as np

Comma delimited file with mixed dtype

>>> s = StringIO(u"1,1.3,abcde")
>>> data = np.genfromtxt(s, dtype=[('myint','i8'),('myfloat','f8'),
... ('mystring','S5')], delimiter=",")
>>> data
array((1, 1.3, b'abcde'),
      dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', 'S5')])

Using dtype = None

>>> _ = s.seek(0) # needed for StringIO example only
>>> data = np.genfromtxt(s, dtype=None,
... names = ['myint','myfloat','mystring'], delimiter=",")
>>> data
array((1, 1.3, b'abcde'),
      dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', 'S5')])

Specifying dtype and names

>>> _ = s.seek(0)
>>> data = np.genfromtxt(s, dtype="i8,f8,S5",
... names=['myint','myfloat','mystring'], delimiter=",")
>>> data
array((1, 1.3, b'abcde'),
      dtype=[('myint', '<i8'), ('myfloat', '<f8'), ('mystring', 'S5')])

An example with fixed-width columns

>>> s = StringIO(u"11.3abcde")
>>> data = np.genfromtxt(s, dtype=None, names=['intvar','fltvar','strvar'],
...     delimiter=[1,3,5])
>>> data
array((1, 1.3, b'abcde'),
      dtype=[('intvar', '<i8'), ('fltvar', '<f8'), ('strvar', 'S5')])

An example to show comments

>>> f = StringIO('''
... text,# of chars
... hello world,11
... numpy,5''')
>>> np.genfromtxt(f, dtype='S12,S12', delimiter=',')
array([(b'text', b''), (b'hello world', b'11'), (b'numpy', b'5')],
      dtype=[('f0', 'S12'), ('f1', 'S12')])
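An example of the missing-value handling that distinguishes this function from loadtxt: empty fields are recognized as missing and replaced by filling_values.

```python
from io import StringIO
import numpy as np

# Two rows, each with one empty (missing) field; a scalar filling_values
# applies to every column.
s = StringIO("1,,3\n4,5,")
data = np.genfromtxt(s, delimiter=',', filling_values=-999)
# data -> array([[   1., -999.,    3.],
#                [   4.,    5., -999.]])
```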
@set_module('numpy')
def load(file, mmap_mode=None, allow_pickle=False, fix_imports=True, encoding='ASCII'):

Load arrays or pickled objects from .npy, .npz or pickled files.

Warning

Loading files that contain object arrays uses the pickle module, which is not secure against erroneous or maliciously constructed data. Consider passing allow_pickle=False to load data that is known not to contain object arrays for the safer handling of untrusted sources.

Parameters

file : file-like object, string, or pathlib.Path
The file to read. File-like objects must support the seek() and read() methods. Pickled files require that the file-like object support the readline() method as well.
mmap_mode : {None, 'r+', 'r', 'w+', 'c'}, optional
If not None, then memory-map the file, using the given mode (see numpy.memmap for a detailed description of the modes). A memory-mapped array is kept on disk. However, it can be accessed and sliced like any ndarray. Memory mapping is especially useful for accessing small fragments of large files without reading the entire file into memory.
allow_pickle : bool, optional

Allow loading pickled object arrays stored in npy files. Reasons for disallowing pickles include security, as loading pickled data can execute arbitrary code. If pickles are disallowed, loading object arrays will fail. Default: False

Changed in version 1.16.3: Made default False in response to CVE-2019-6446.
fix_imports : bool, optional
Only useful when loading Python 2 generated pickled files on Python 3, which includes npy/npz files containing object arrays. If fix_imports is True, pickle will try to map the old Python 2 names to the new names used in Python 3.
encoding : str, optional
What encoding to use when reading Python 2 strings. Only useful when loading Python 2 generated pickled files in Python 3, which includes npy/npz files containing object arrays. Values other than 'latin1', 'ASCII', and 'bytes' are not allowed, as they can corrupt numerical data. Default: 'ASCII'

Returns

result : array, tuple, dict, etc.
Data stored in the file. For .npz files, the returned instance of NpzFile class must be closed to avoid leaking file descriptors.

Raises

OSError
If the input file does not exist or cannot be read.
UnpicklingError
If allow_pickle=True, but the file cannot be loaded as a pickle.
ValueError
The file contains an object array, but allow_pickle=False given.

See Also

save, savez, savez_compressed, loadtxt
memmap : Create a memory-map to an array stored in a file on disk.
lib.format.open_memmap : Create or load a memory-mapped .npy file.

Notes

  • If the file contains pickle data, then whatever object is stored in the pickle is returned.

  • If the file is a .npy file, then a single array is returned.

  • If the file is a .npz file, then a dictionary-like object is returned, containing {filename: array} key-value pairs, one for each file in the archive.

  • If the file is a .npz file, the returned value supports the context manager protocol in a similar fashion to the open function:

    with load('foo.npz') as data:
        a = data['a']
    

    The underlying file descriptor is closed when exiting the 'with' block.

Examples

Store data to disk, and load it again:

>>> np.save('/tmp/123', np.array([[1, 2, 3], [4, 5, 6]]))
>>> np.load('/tmp/123.npy')
array([[1, 2, 3],
       [4, 5, 6]])

Store compressed data to disk, and load it again:

>>> a=np.array([[1, 2, 3], [4, 5, 6]])
>>> b=np.array([1, 2])
>>> np.savez('/tmp/123.npz', a=a, b=b)
>>> data = np.load('/tmp/123.npz')
>>> data['a']
array([[1, 2, 3],
       [4, 5, 6]])
>>> data['b']
array([1, 2])
>>> data.close()

Mem-map the stored array, and then access the second row directly from disk:

>>> X = np.load('/tmp/123.npy', mmap_mode='r')
>>> X[1, :]
memmap([4, 5, 6])
@set_array_function_like_doc
@set_module('numpy')
def loadtxt(fname, dtype=float, comments='#', delimiter=None, converters=None, skiprows=0, usecols=None, unpack=False, ndmin=0, encoding='bytes', max_rows=None, *, like=None):

Load data from a text file.

Each row in the text file must have the same number of values.

Parameters

fname : file, str, pathlib.Path, list of str, generator
File, filename, list, or generator to read. If the filename extension is .gz or .bz2, the file is first decompressed. Note that generators must return bytes or strings. The strings in a list or produced by a generator are treated as lines.
dtype : data-type, optional
Data-type of the resulting array; default: float. If this is a structured data-type, the resulting array will be 1-dimensional, and each row will be interpreted as an element of the array. In this case, the number of columns used must match the number of fields in the data-type.
comments : str or sequence of str, optional
The characters or list of characters used to indicate the start of a comment. None implies no comments. For backwards compatibility, byte strings will be decoded as 'latin1'. The default is '#'.
delimiter : str, optional
The string used to separate values. For backwards compatibility, byte strings will be decoded as 'latin1'. The default is whitespace.
converters : dict, optional
A dictionary mapping column number to a function that will parse the column string into the desired value. E.g., if column 0 is a date string: converters = {0: datestr2num}. Converters can also be used to provide a default value for missing data (but see also genfromtxt): converters = {3: lambda s: float(s.strip() or 0)}. Default: None.
skiprows : int, optional
Skip the first skiprows lines, including comments; default: 0.
usecols : int or sequence, optional

Which columns to read, with 0 being the first. For example, usecols = (1,4,5) will extract the 2nd, 5th and 6th columns. The default, None, results in all columns being read.

Changed in version 1.11.0: When a single column has to be read it is possible to use an integer instead of a tuple. E.g. usecols = 3 reads the fourth column the same way as usecols = (3,) would.
unpack : bool, optional
If True, the returned array is transposed, so that arguments may be unpacked using x, y, z = loadtxt(...). When used with a structured data-type, arrays are returned for each field. Default is False.
ndmin : int, optional

The returned array will have at least ndmin dimensions. Otherwise mono-dimensional axes will be squeezed. Legal values: 0 (default), 1 or 2.

New in version 1.6.0.
encoding : str, optional

Encoding used to decode the input file. Does not apply to input streams. The special value 'bytes' enables backward compatibility workarounds that ensure you receive byte arrays as results if possible and that pass latin1-encoded strings to converters. Override this value to receive unicode arrays and pass strings as input to converters. If set to None the system default is used. The default value is 'bytes'.

New in version 1.14.0.
max_rows : int, optional

Read max_rows lines of content after skiprows lines. The default is to read all the lines.

New in version 1.16.0.

${ARRAY_FUNCTION_LIKE}

New in version 1.20.0.

Returns

out : ndarray
Data read from the text file.

See Also

load, fromstring, fromregex
genfromtxt : Load data with missing values handled as specified.
scipy.io.loadmat : reads MATLAB data files

Notes

This function aims to be a fast reader for simply formatted files. The genfromtxt function provides more sophisticated handling of, e.g., lines with missing values.

New in version 1.10.0.

The strings produced by the Python float.hex method can be used as input for floats.

Examples

>>> from io import StringIO   # StringIO behaves like a file object
>>> c = StringIO("0 1\n2 3")
>>> np.loadtxt(c)
array([[0., 1.],
       [2., 3.]])
>>> d = StringIO("M 21 72\nF 35 58")
>>> np.loadtxt(d, dtype={'names': ('gender', 'age', 'weight'),
...                      'formats': ('S1', 'i4', 'f4')})
array([(b'M', 21, 72.), (b'F', 35, 58.)],
      dtype=[('gender', 'S1'), ('age', '<i4'), ('weight', '<f4')])
>>> c = StringIO("1,0,2\n3,0,4")
>>> x, y = np.loadtxt(c, delimiter=',', usecols=(0, 2), unpack=True)
>>> x
array([1., 3.])
>>> y
array([2., 4.])

This example shows how converters can be used to convert a field with a trailing minus sign into a negative number.

>>> s = StringIO('10.01 31.25-\n19.22 64.31\n17.57- 63.94')
>>> def conv(fld):
...     return -float(fld[:-1]) if fld.endswith(b'-') else float(fld)
...
>>> np.loadtxt(s, converters={0: conv, 1: conv})
array([[ 10.01, -31.25],
       [ 19.22,  64.31],
       [-17.57,  63.94]])
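An example combining skiprows and max_rows: skiprows counts raw lines (including comment lines), and max_rows then limits how many data rows are parsed after that point.

```python
from io import StringIO
import numpy as np

c = StringIO("# header\n0 1\n2 3\n4 5\n6 7")
# Skip the header line, then read only the next two data rows.
a = np.loadtxt(c, skiprows=1, max_rows=2)
# a -> array([[0., 1.],
#             [2., 3.]])
```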
def recfromcsv(fname, **kwargs):

Load ASCII data stored in a comma-separated file.

The returned array is a record array (if usemask=False, see recarray) or a masked record array (if usemask=True, see ma.mrecords.MaskedRecords).

Parameters

fname, kwargs : For a description of input parameters, see genfromtxt.

See Also

numpy.genfromtxt : generic function to load ASCII data.

Notes

By default, dtype is None, which means that the data-type of the output array will be determined from the data.
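Since recfromcsv forwards to genfromtxt with CSV-friendly defaults, an equivalent call can be sketched directly with genfromtxt (the defaults assumed here are dtype=None, names read from the first row, and lowercased field names; recfromcsv would additionally view the result as a recarray):

```python
from io import StringIO
import numpy as np

s = StringIO("Temp,Pressure\n20.5,1.2\n21.0,1.3")
# Field names come from the header row and are lowercased,
# matching recfromcsv's default case_sensitive='lower'.
data = np.genfromtxt(s, delimiter=',', dtype=None, names=True,
                     case_sensitive='lower')
# data['temp'] -> array([20.5, 21. ])
```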

def recfromtxt(fname, **kwargs):

Load ASCII data from a file and return it in a record array.

If usemask=False a standard recarray is returned, if usemask=True a MaskedRecords array is returned.

Parameters

fname, kwargs : For a description of input parameters, see genfromtxt.

See Also

numpy.genfromtxt : generic function

Notes

By default, dtype is None, which means that the data-type of the output array will be determined from the data.
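A hedged sketch of what the default (usemask=False) result looks like: a genfromtxt array viewed as a recarray, so fields are reachable as attributes (the .view(np.recarray) step stands in for recfromtxt's internal conversion).

```python
from io import StringIO
import numpy as np

s = StringIO("x y\n1 2\n3 4")
# names=True reads the field names from the header row; dtype=None lets
# each column's type be inferred from the data.
data = np.genfromtxt(s, dtype=None, names=True).view(np.recarray)
# data.x -> array([1, 3])
```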

@array_function_dispatch(_save_dispatcher)
def save(file, arr, allow_pickle=True, fix_imports=True):

Save an array to a binary file in NumPy .npy format.

Parameters

file : file, str, or pathlib.Path
File or filename to which the data is saved. If file is a file-object, then the filename is unchanged. If file is a string or Path, a .npy extension will be appended to the filename if it does not already have one.
arr : array_like
Array data to be saved.
allow_pickle : bool, optional
Allow saving object arrays using Python pickles. Reasons for disallowing pickles include security (loading pickled data can execute arbitrary code) and portability (pickled objects may not be loadable on different Python installations, for example if the stored objects require libraries that are not available, and not all pickled data is compatible between Python 2 and Python 3). Default: True
fix_imports : bool, optional
Only useful in forcing objects in object arrays on Python 3 to be pickled in a Python 2 compatible way. If fix_imports is True, pickle will try to map the new Python 3 names to the old module names used in Python 2, so that the pickle data stream is readable with Python 2.

See Also

savez : Save several arrays into a .npz archive
savetxt, load

Notes

For a description of the .npy format, see numpy.lib.format.

Any data saved to the file is appended to the end of the file.

Examples

>>> from tempfile import TemporaryFile
>>> outfile = TemporaryFile()
>>> x = np.arange(10)
>>> np.save(outfile, x)
>>> _ = outfile.seek(0) # Only needed here to simulate closing & reopening file
>>> np.load(outfile)
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> with open('test.npy', 'wb') as f:
...     np.save(f, np.array([1, 2]))
...     np.save(f, np.array([1, 3]))
>>> with open('test.npy', 'rb') as f:
...     a = np.load(f)
...     b = np.load(f)
>>> print(a, b)
# [1 2] [1 3]
@array_function_dispatch(_savetxt_dispatcher)
def savetxt(fname, X, fmt='%.18e', delimiter=' ', newline='\n', header='', footer='', comments='# ', encoding=None):

Save an array to a text file.

Parameters

fname : filename or file handle
If the filename ends in .gz, the file is automatically saved in compressed gzip format. loadtxt understands gzipped files transparently.
X : 1D or 2D array_like
Data to be saved to a text file.
fmt : str or sequence of strs, optional

A single format (%10.5f), a sequence of formats, or a multi-format string, e.g. 'Iteration %d -- %10.5f', in which case delimiter is ignored. For complex X, the legal options for fmt are:

  • a single specifier, fmt='%.4e', resulting in numbers formatted like ' (%s+%sj)' % (fmt, fmt)
  • a full string specifying every real and imaginary part, e.g. ' %.4e %+.4ej %.4e %+.4ej %.4e %+.4ej' for 3 columns
  • a list of specifiers, one per column - in this case, the real and imaginary part must have separate specifiers, e.g. ['%.3e + %.3ej', '(%.15e%+.15ej)'] for 2 columns
delimiter : str, optional
String or character separating columns.
newline : str, optional

String or character separating lines.

New in version 1.5.0.
header : str, optional

String that will be written at the beginning of the file.

New in version 1.7.0.
footer : str, optional

String that will be written at the end of the file.

New in version 1.7.0.
comments : str, optional

String that will be prepended to the header and footer strings, to mark them as comments. Default: '# ', as expected by e.g. numpy.loadtxt.

New in version 1.7.0.
encoding : {None, str}, optional

Encoding used to encode the output file. Does not apply to output streams. If the encoding is something other than 'bytes' or 'latin1' you will not be able to load the file in NumPy versions < 1.14. Default is 'latin1'.

New in version 1.14.0.

See Also

save : Save an array to a binary file in NumPy .npy format
savez : Save several arrays into an uncompressed .npz archive
savez_compressed : Save several arrays into a compressed .npz archive

Notes

Further explanation of the fmt parameter (%[flag]width[.precision]specifier):

flags:

- : left justify

+ : Forces to precede result with + or -.

0 : Left pad the number with zeros instead of space (see width).

width:
Minimum number of characters to be printed. The value is not truncated if it has more characters.
precision:
  • For integer specifiers (eg. d,i,o,x), the minimum number of digits.
  • For e, E and f specifiers, the number of digits to print after the decimal point.
  • For g and G, the maximum number of significant digits.
  • For s, the maximum number of characters.
specifiers:

c : character

d or i : signed decimal integer

e or E : scientific notation with e or E.

f : decimal floating point

g,G : use the shorter of e,E or f

o : signed octal

s : string of characters

u : unsigned decimal integer

x,X : unsigned hexadecimal integer

This explanation of fmt is not complete, for an exhaustive specification see [1].
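The single-specifier form for complex data described above can be demonstrated directly; each value is written as (re+imj) with the same format applied to both parts:

```python
from io import StringIO
import numpy as np

z = np.array([[1 + 2j, 3 + 4j]])
buf = StringIO()
# One specifier, applied to both the real and imaginary parts.
np.savetxt(buf, z, fmt='%.1e')
line = buf.getvalue()
```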

References

[1] Format Specification Mini-Language, Python Documentation.

Examples

>>> x = y = z = np.arange(0.0,5.0,1.0)
>>> np.savetxt('test.out', x, delimiter=',')   # X is an array
>>> np.savetxt('test.out', (x,y,z))   # x,y,z equal sized 1D arrays
>>> np.savetxt('test.out', x, fmt='%1.4e')   # use exponential notation
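An example of the header parameter: the header text is prepended with the comments string ('# ' by default), so that loadtxt will skip it when reading the file back.

```python
from io import StringIO
import numpy as np

buf = StringIO()
np.savetxt(buf, np.array([[1.5, 2.5], [3.5, 4.5]]),
           fmt='%.2f', delimiter=',', header='col1,col2')
text = buf.getvalue()
# text:
# # col1,col2
# 1.50,2.50
# 3.50,4.50
```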
@array_function_dispatch(_savez_dispatcher)
def savez(file, *args, **kwds):

Save several arrays into a single file in uncompressed .npz format.

Provide arrays as keyword arguments to store them under the corresponding name in the output file: savez(fn, x=x, y=y).

If arrays are specified as positional arguments, i.e., savez(fn, x, y), their names will be arr_0, arr_1, etc.

Parameters

file : str or file
Either the filename (string) or an open file (file-like object) where the data will be saved. If file is a string or a Path, the .npz extension will be appended to the filename if it is not already there.
args : Arguments, optional
Arrays to save to the file. Please use keyword arguments (see kwds below) to assign names to arrays. Arrays specified as args will be named "arr_0", "arr_1", and so on.
kwds : Keyword arguments, optional
Arrays to save to the file. Each array will be saved to the output file with its corresponding keyword name.

Returns

None

See Also

save : Save a single array to a binary file in NumPy format.
savetxt : Save an array to a file as plain text.
savez_compressed : Save several arrays into a compressed .npz archive

Notes

The .npz file format is a zipped archive of files named after the variables they contain. The archive is not compressed and each file in the archive contains one variable in .npy format. For a description of the .npy format, see numpy.lib.format.

When opening the saved .npz file with load a NpzFile object is returned. This is a dictionary-like object which can be queried for its list of arrays (with the .files attribute), and for the arrays themselves.

Keys passed in kwds are used as filenames inside the ZIP archive. Therefore, keys should be valid filenames; e.g., avoid keys that begin with '/' or contain '.'.

When naming variables with keyword arguments, it is not possible to name a variable file, as this would cause the file argument to be defined twice in the call to savez.

Examples

>>> from tempfile import TemporaryFile
>>> outfile = TemporaryFile()
>>> x = np.arange(10)
>>> y = np.sin(x)

Using savez with *args, the arrays are saved with default names.

>>> np.savez(outfile, x, y)
>>> _ = outfile.seek(0) # Only needed here to simulate closing & reopening file
>>> npzfile = np.load(outfile)
>>> npzfile.files
['arr_0', 'arr_1']
>>> npzfile['arr_0']
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

Using savez with **kwds, the arrays are saved with the keyword names.

>>> outfile = TemporaryFile()
>>> np.savez(outfile, x=x, y=y)
>>> _ = outfile.seek(0)
>>> npzfile = np.load(outfile)
>>> sorted(npzfile.files)
['x', 'y']
>>> npzfile['x']
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
@array_function_dispatch(_savez_compressed_dispatcher)
def savez_compressed(file, *args, **kwds):

Save several arrays into a single file in compressed .npz format.

Provide arrays as keyword arguments to store them under the corresponding name in the output file: savez(fn, x=x, y=y).

If arrays are specified as positional arguments, i.e., savez(fn, x, y), their names will be arr_0, arr_1, etc.

Parameters

file : str or file
Either the filename (string) or an open file (file-like object) where the data will be saved. If file is a string or a Path, the .npz extension will be appended to the filename if it is not already there.
args : Arguments, optional
Arrays to save to the file. Please use keyword arguments (see kwds below) to assign names to arrays. Arrays specified as args will be named "arr_0", "arr_1", and so on.
kwds : Keyword arguments, optional
Arrays to save to the file. Each array will be saved to the output file with its corresponding keyword name.

Returns

None

See Also

numpy.save : Save a single array to a binary file in NumPy format.
numpy.savetxt : Save an array to a file as plain text.
numpy.savez : Save several arrays into an uncompressed .npz file format
numpy.load : Load the files created by savez_compressed.

Notes

The .npz file format is a zipped archive of files named after the variables they contain. The archive is compressed with zipfile.ZIP_DEFLATED and each file in the archive contains one variable in .npy format. For a description of the .npy format, see numpy.lib.format.

When opening the saved .npz file with load a NpzFile object is returned. This is a dictionary-like object which can be queried for its list of arrays (with the .files attribute), and for the arrays themselves.

Examples

>>> test_array = np.random.rand(3, 2)
>>> test_vector = np.random.rand(4)
>>> np.savez_compressed('/tmp/123', a=test_array, b=test_vector)
>>> loaded = np.load('/tmp/123.npz')
>>> print(np.array_equal(test_array, loaded['a']))
True
>>> print(np.array_equal(test_vector, loaded['b']))
True
def zipfile_factory(file, *args, **kwargs):

Create a ZipFile.

Allows for Zip64, and the file argument can accept file, str, or pathlib.Path objects. args and kwargs are passed to the zipfile.ZipFile constructor.
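A hedged sketch of such a factory (not NumPy's exact code): coerce path-like objects to strings, force allowZip64 so archives larger than 2 GiB can be written, and forward everything else to zipfile.ZipFile.

```python
import io
import os
import zipfile

def zipfile_factory_sketch(file, *args, **kwargs):
    # File-like objects pass through; str/pathlib.Path are normalized.
    if not hasattr(file, 'read'):
        file = os.fspath(file)
    kwargs['allowZip64'] = True  # permit >2 GiB archives
    return zipfile.ZipFile(file, *args, **kwargs)

# Usage: write one member into an in-memory archive.
buf = io.BytesIO()
with zipfile_factory_sketch(buf, mode='w') as zf:
    zf.writestr('arr_0.npy', b'placeholder bytes')
```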

_CONVERTERS =

Undocumented

Value
[(np.bool_, (lambda x: bool(int(x)))),
 (np.uint64, np.uint64),
 (np.int64, np.int64),
 (np.integer, (lambda x: int(float(x)))),
 (np.longdouble, np.longdouble),
 (np.floating, _floatconv),
 (complex, (lambda x: complex(x.replace('+-', '-')))),
...
_genfromtxt_with_like =

Undocumented

_loadtxt_chunksize: int =

Undocumented

_loadtxt_with_like =

Undocumented