bench([label, verbose, extra_argv])

    Run benchmarks for module using nose.

    Parameters
    ----------
    label : {'fast', 'full', '', attribute identifier}, optional
        Identifies the benchmarks to run. This can be a string to pass to
        the nosetests executable with the '-A' option, or one of several
        special values. Special values are:

        * 'fast' - the default - which corresponds to the nosetests -A
          option of 'not slow'.
        * 'full' - fast (as above) and slow benchmarks as in the
          'no -A' option to nosetests - this is the same as ''.
        * None or '' - run all tests.
        * attribute_identifier - string passed directly to nosetests
          as '-A'.
    verbose : int, optional
        Verbosity value for benchmark outputs, in the range 1-10.
        Default is 1.
    extra_argv : list, optional
        List with any extra arguments to pass to nosetests.

    Returns
    -------
    success : bool
        Returns True if running the benchmarks works, False if an error
        occurred.

    Notes
    -----
    Benchmarks are like tests, but have names starting with "bench" instead
    of "test", and can be found under the "benchmarks" sub-directory of the
    module.

    Each NumPy module exposes bench in its namespace to run all benchmarks
    for it.

get_info()

setup_test()

    Set numpy print options to "legacy" for new versions of numpy.

    If imported into a file, pytest will run this before any doctests.

    References
    ----------
    https://github.com/numpy/numpy/commit/710e0327687b9f7653e5ac02d222ba62c657a718
    https://github.com/numpy/numpy/commit/734b907fc2f7af6e40ec989ca49ee6d87e21c495
    https://github.com/nipy/nibabel/pull/556

test([label, verbose, extra_argv, doctests, coverage, raise_warnings, timer])

    Run tests for module using nose.

    Parameters
    ----------
    label : {'fast', 'full', '', attribute identifier}, optional
        Identifies the tests to run. This can be a string to pass to
        the nosetests executable with the '-A' option, or one of several
        special values. Special values are:

        * 'fast' - the default - which corresponds to the nosetests -A
          option of 'not slow'.
        * 'full' - fast (as above) and slow tests as in the
          'no -A' option to nosetests - this is the same as ''.
        * None or '' - run all tests.
        * attribute_identifier - string passed directly to nosetests
          as '-A'.
    verbose : int, optional
        Verbosity value for test outputs, in the range 1-10. Default is 1.
    extra_argv : list, optional
        List with any extra arguments to pass to nosetests.
    doctests : bool, optional
        If True, run doctests in module. Default is False.
    coverage : bool, optional
        If True, report coverage of NumPy code. Default is False.
        (This requires the coverage module).
    raise_warnings : str or sequence of warnings, optional
        This specifies which warnings to configure as 'raise' instead
        of being shown once during the test execution. Valid strings are:

        * "develop" : equals (Warning,)
        * "release" : equals (), do not raise on any warnings.
    timer : bool or int, optional
        Timing of individual tests with nose-timer (which needs to be
        installed). If True, time tests and report on all of them.
        If an integer (say N), report timing results for N slowest tests.

    Returns
    -------
    result : object
        Returns the result of running the tests as a
        nose.result.TextTestResult object.

    Notes
    -----
    Each NumPy module exposes test in its namespace to run all tests for it.
    For example, to run all tests for numpy.lib:

    >>> np.lib.test()
dipy

Diffusion Imaging in Python

For more information, please visit http://dipy.org

Subpackages
-----------
align         -- Registration, streamline alignment, volume resampling
boots         -- Bootstrapping algorithms
core          -- Spheres, gradient tables
core.geometry -- Spherical geometry, coordinate and vector manipulation
core.meshes   -- Point distributions on the sphere
data          -- Small testing datasets
denoise       -- Denoising algorithms
direction     -- Manage peaks and tracking
io            -- Loading/saving of dpy datasets
reconst       -- Signal reconstruction modules (tensor, spherical
                 harmonics, diffusion spectrum, etc.)
segment       -- Tractography segmentation
sims          -- MRI phantom signal simulation
tracking      -- Tractography, metrics for streamlines
viz           -- Visualization and GUIs

Utilities
---------
test        -- Run unittests
__version__ -- Dipy version
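As an illustration of the kind of utility collected in core.geometry (spherical coordinate and vector manipulation), here is a self-contained sketch of a spherical-to-Cartesian conversion. The function name and angle convention below (theta as inclination from +z, phi as azimuth) are chosen for illustration and are not a substitute for dipy's actual API:

```python
import math

def sphere2cart(r, theta, phi):
    """Convert spherical coordinates to Cartesian (illustrative sketch).

    theta is the inclination measured from the +z axis,
    phi is the azimuth in the x-y plane from the +x axis.
    """
    sin_theta = math.sin(theta)
    x = r * sin_theta * math.cos(phi)
    y = r * sin_theta * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z

# A unit vector along +z corresponds to theta = 0.
up = sphere2cart(1.0, 0.0, 0.0)
```

Utilities like this underpin the sphere and gradient-table machinery in the core subpackage.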
Functions
---------
bench([label, verbose, extra_argv])
    Run benchmarks for module using nose.
get_info()
setup_test()
    Set numpy print options to "legacy" for new versions of numpy.
test([label, verbose, extra_argv, doctests, ...])
    Run tests for module using nose.
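setup_test's job, pinning numpy's array formatting to the "legacy" style so doctest output stays stable across numpy versions, can be sketched with numpy's public print-options API. The sketch below uses np.printoptions / np.set_printoptions, which are numpy's documented interface; the surrounding structure is our illustration, not dipy's actual implementation:

```python
import numpy as np

# What a setup_test-style hook does: pin array formatting to the
# pre-1.14 "legacy" style so doctest output does not change when
# numpy changes its default repr.
with np.printoptions(legacy="1.13"):
    legacy_mode = np.get_printoptions()["legacy"]
    sample = str(np.array([0.1, 0.2]))

# Outside the context manager the caller's settings are restored.
```

Registering such a hook before doctests run (e.g. from a pytest fixture) is what makes the `>>>` examples in this page reproducible.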
bench
-----

Examples
--------
>>> success = np.lib.bench()
Running benchmarks for numpy.lib
...
using 562341 items:
unique:
0.11
unique1d:
0.11
ratio: 1.0
nUnique: 56230 == 56230
...
OK
>>> success
True
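The label values accepted by bench and test map onto nose's attribute-selection flag in a simple way: 'fast' selects 'not slow', while 'full', '' and None apply no filter. A sketch of that mapping (the helper name is hypothetical; numpy's real logic lives in numpy.testing's nose tester):

```python
def label_to_nose_args(label):
    """Translate a test/bench label into nosetests '-A' arguments (sketch)."""
    if label in (None, "", "full"):
        return []                     # run everything: no '-A' filter
    if label == "fast":
        return ["-A", "not slow"]     # default: skip slow tests/benchmarks
    if isinstance(label, str):
        return ["-A", label]          # raw attribute expression
    raise TypeError("label must be None or a string")

args = label_to_nose_args("fast")
```

Any other string is handed to nose verbatim, which is what the attribute_identifier case in the parameter description refers to.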
test
----

Examples
--------
>>> result = np.lib.test()
Running unit tests for numpy.lib
...
Ran 976 tests in 3.933s
>>> result.errors
[]
>>> result.knownfail
[]
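The result object shown above behaves like a standard unittest result: errors and failures are lists of (test, traceback) pairs, empty on a clean run. The same inspection pattern can be reproduced with only the standard library (knownfail is nose-specific and is not modeled here):

```python
import io
import unittest

class _Demo(unittest.TestCase):
    def test_passes(self):
        self.assertEqual(2 + 2, 4)

# Run a tiny suite, capturing the text report instead of printing it.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(_Demo)
runner = unittest.TextTestRunner(stream=io.StringIO(), verbosity=2)
result = runner.run(suite)

# Same inspection pattern as `result.errors` in the Examples above.
clean_run = result.wasSuccessful() and not result.errors
```

Checking these lists programmatically is how callers decide whether a test run succeeded without parsing the printed report.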