class NoseTester:

Nose test runner.

This class is made available as numpy.testing.Tester, and a test function is typically added to a package's __init__.py like so:

from numpy.testing import Tester
test = Tester().test

Calling this test function finds and runs all tests associated with the package and all its sub-packages.
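
Once a package exposes test this way, callers run its suite like any other attribute of the package. A minimal sketch (mypkg is a hypothetical package wired up as above):

import mypkg

# Runs the 'fast' test suite for mypkg and all of its sub-packages and
# returns a nose result object; wasSuccessful() reports the overall outcome.
result = mypkg.test()
assert result.wasSuccessful()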

Attributes

package_path : str
Full path to the package to test.
package_name : str
Name of the package to test.

Parameters

package : module, str or None, optional
The package to test. If a string, this should be the full path to the package. If None (default), package is set to the module from which NoseTester is initialized.
raise_warnings : None, str or sequence of warnings, optional

This specifies which warnings to configure as 'raise' instead of being shown once during the test execution. Valid strings are:

  • "develop" : equals (Warning,)
  • "release" : equals (), don't raise on any warnings.

Default is "release".

depth : int, optional
If package is None, then this can be used to initialize from the module of the caller of (the caller of (...)) the code that initializes NoseTester. Default of 0 means the module of the immediate caller; higher values are useful for utility routines that want to initialize NoseTester objects on behalf of other code.
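
A hedged sketch of constructing a tester with these parameters; numpy.lib stands in for any importable package, and the warning classes are only examples:

from numpy.testing import Tester
import numpy.lib

# Test an explicitly given package and turn every warning into an error
# for the duration of the run (the "develop" setting).
test = Tester(package=numpy.lib, raise_warnings='develop').test

# raise_warnings also accepts an explicit sequence of warning classes.
test = Tester(package=numpy.lib,
              raise_warnings=(DeprecationWarning, RuntimeWarning)).test
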
Method __init__ Initialize the tester; see the class-level Parameters section.
Method _get_custom_doctester Return instantiated plugin for doctests
Method _show_system_info Undocumented
Method _test_argv Generate argv for nosetest command
Method bench Run benchmarks for module using nose.
Method prepare_test_args Prepare the arguments used to run tests for the module with nose.
Method test Run tests for module using nose.
Instance Variable check_fpu_mode Undocumented
Instance Variable package_name Name of the package to test.
Instance Variable package_path Full path to the package to test.
Instance Variable raise_warnings Warnings configured to raise during test runs.
def __init__(self, package=None, raise_warnings='release', depth=0, check_fpu_mode=False):

Initialize the tester. The package, raise_warnings and depth parameters are described in the class-level Parameters section above; check_fpu_mode is undocumented.

def _get_custom_doctester(self):

Return instantiated plugin for doctests

Allows subclassing of this class to override doctester

A return value of None means use the nose builtin doctest plugin
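
A hedged sketch of the subclassing hook described above; PlainDoctestTester is a hypothetical subclass, and returning None simply selects nose's builtin doctest plugin:

from numpy.testing import Tester

class PlainDoctestTester(Tester):
    # Hypothetical override: fall back to nose's builtin doctest plugin
    # rather than supplying a custom doctester instance.
    def _get_custom_doctester(self):
        return None

test = PlainDoctestTester().test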

def _show_system_info(self):

Undocumented

def _test_argv(self, label, verbose, extra_argv):

Generate argv for nosetest command

Parameters

label : {'fast', 'full', '', attribute identifier}, optional
see test docstring
verbose : int, optional
Verbosity value for test outputs, in the range 1-10. Default is 1.
extra_argv : list, optional
List with any extra arguments to pass to nosetests.

Returns

argv : list
command line arguments that will be passed to nose
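
A hedged illustration of calling this helper directly; the exact flags and their ordering in the returned list are implementation details and may vary between NumPy versions:

from numpy.testing import Tester

t = Tester()
# Build the nose argument list for the 'fast' label at verbosity 2, with any
# caller-supplied extra arguments appended (--with-id is a stock nose flag).
argv = t._test_argv(label='fast', verbose=2, extra_argv=['--with-id'])
# Expect the package path plus options such as an attribute filter
# ("-A 'not slow'" for the 'fast' label) and a verbosity setting.
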
def bench(self, label='fast', verbose=1, extra_argv=None):

Run benchmarks for module using nose.

Parameters

label : {'fast', 'full', '', attribute identifier}, optional

Identifies the benchmarks to run. This can be a string to pass to the nosetests executable with the '-A' option, or one of several special values. Special values are:

  • 'fast' - the default - which corresponds to the nosetests -A option of 'not slow'.
  • 'full' - fast (as above) and slow benchmarks, equivalent to passing no '-A' option to nosetests; this is the same as ''.
  • None or '' - run all benchmarks.
  • attribute_identifier - string passed directly to nosetests as '-A'.
verbose : int, optional
Verbosity value for benchmark outputs, in the range 1-10. Default is 1.
extra_argv : list, optional
List with any extra arguments to pass to nosetests.

Returns

success : bool
Returns True if running the benchmarks works, False if an error occurred.

Notes

Benchmarks are like tests, but have names starting with "bench" instead of "test", and can be found under the "benchmarks" sub-directory of the module.

Each NumPy module exposes bench in its namespace to run all benchmarks for it.

Examples

>>> success = np.lib.bench() #doctest: +SKIP
Running benchmarks for numpy.lib
...
using 562341 items:
unique:
0.11
unique1d:
0.11
ratio: 1.0
nUnique: 56230 == 56230
...
OK
>>> success #doctest: +SKIP
True
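
The label argument also accepts an attribute expression, passed straight to nosetests via '-A'; a hedged sketch (the expression below is only illustrative):

>>> np.lib.bench('full', verbose=2) #doctest: +SKIP
>>> np.lib.bench('not slow and not network', verbose=2) #doctest: +SKIP
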
def prepare_test_args(self, label='fast', verbose=1, extra_argv=None, doctests=False, coverage=False, timer=False):

Prepare the arguments used to run tests for the module with nose.

This method does the heavy lifting for the test method. It takes the same arguments; for details, see test.

See Also

test
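
A hedged sketch of calling it directly; the return value (assumed here to be the nose argv plus the configured plugin instances) is not documented and may differ between versions:

from numpy.testing import Tester

t = Tester()
# Same arguments as ``test``; assumed to yield the nose command-line
# arguments and the plugin objects that ``test`` then hands to nose.
argv, plugins = t.prepare_test_args(label='fast', verbose=1, doctests=True)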

def test(self, label='fast', verbose=1, extra_argv=None, doctests=False, coverage=False, raise_warnings=None, timer=False):

Run tests for module using nose.

Parameters

label : {'fast', 'full', '', attribute identifier}, optional

Identifies the tests to run. This can be a string to pass to the nosetests executable with the '-A' option, or one of several special values. Special values are:

  • 'fast' - the default - which corresponds to the nosetests -A option of 'not slow'.
  • 'full' - fast (as above) and slow tests, equivalent to passing no '-A' option to nosetests; this is the same as ''.
  • None or '' - run all tests.
  • attribute_identifier - string passed directly to nosetests as '-A'.
verbose : int, optional
Verbosity value for test outputs, in the range 1-10. Default is 1.
extra_argv : list, optional
List with any extra arguments to pass to nosetests.
doctests : bool, optional
If True, run doctests in module. Default is False.
coverage : bool, optional
If True, report coverage of NumPy code. Default is False. (This requires the coverage module).
raise_warnings : None, str or sequence of warnings, optional

This specifies which warnings to configure as 'raise' instead of being shown once during the test execution. Valid strings are:

  • "develop" : equals (Warning,)
  • "release" : equals (), do not raise on any warnings.
timer : bool or int, optional
Timing of individual tests with nose-timer (which needs to be installed). If True, time tests and report on all of them. If an integer (say N), report timing results for N slowest tests.

Returns

result : object
Returns the result of running the tests as a nose.result.TextTestResult object.

Notes

Each NumPy module exposes test in its namespace to run all tests for it. For example, to run all tests for numpy.lib:

>>> np.lib.test() #doctest: +SKIP

Examples

>>> result = np.lib.test() #doctest: +SKIP
Running unit tests for numpy.lib
...
Ran 976 tests in 3.933s

OK

>>> result.errors #doctest: +SKIP
[]
>>> result.knownfail #doctest: +SKIP
[]
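
The optional arguments documented above combine freely; a hedged sketch (the attribute expression is illustrative, and timer=10 reports the ten slowest tests, which requires nose-timer):

>>> np.lib.test(doctests=True) #doctest: +SKIP
>>> np.lib.test('not slow', verbose=2, timer=10) #doctest: +SKIP
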
check_fpu_mode

Undocumented

package_name

Name of the package to test.

package_path

Full path to the package to test.

raise_warnings

The warnings configured as 'raise' for test runs; see the raise_warnings parameter above.