core

Core objects

test([label, verbose, extra_argv, doctests, ...])

Run tests for module using nose.

Module: core.benchmarks

Module: core.benchmarks.bench_sphere

Benchmarks for sphere

Run all benchmarks with:

import dipy.core as dipycore
dipycore.bench()

With pytest, run this benchmark with:

pytest -svv -c bench.ini /path/to/bench_sphere.py

Timer()

Methods

bench_disperse_charges_alt()

func_minimize_adhoc(init_hemisphere, ...)

func_minimize_scipy(init_pointset, ...)

Module: core.geometry

Utility functions for algebra etc

cart2sphere(x, y, z)

Return angles for Cartesian 3D coordinates x, y, and z

cart_distance(pts1, pts2)

Cartesian distance between pts1 and pts2

circumradius(a, b, c)

a, b and c are 3-dimensional vectors which are the vertices of a triangle.

compose_matrix([scale, shear, angles, ...])

Return 4x4 transformation matrix from sequence of transformations.

compose_transformations(*mats)

Compose multiple 4x4 affine transformations in one 4x4 matrix

decompose_matrix(matrix)

Return sequence of transformations from transformation matrix.

dist_to_corner(affine)

Calculate the maximal distance from the center to a corner of a voxel, given an affine

euler_matrix(ai, aj, ak[, axes])

Return homogeneous rotation matrix from Euler angles and axis sequence.

is_hemispherical(vecs)

Test whether all points on a unit sphere lie in the same hemisphere.

lambert_equal_area_projection_cart(x, y, z)

Lambert Equal Area Projection from cartesian vector to plane

lambert_equal_area_projection_polar(theta, phi)

Lambert Equal Area Projection from polar sphere to plane

nearest_pos_semi_def(B)

Least squares positive semi-definite tensor estimation

normalized_vector(vec[, axis])

Return vector divided by its Euclidean (L2) norm

perpendicular_directions(v[, num, half])

Computes n evenly spaced perpendicular directions relative to a given vector v

rodrigues_axis_rotation(r, theta)

Rodrigues formula

sph2latlon(theta, phi)

Convert spherical coordinates to latitude and longitude.

sphere2cart(r, theta, phi)

Spherical to Cartesian coordinates

sphere_distance(pts1, pts2[, radius, ...])

Distance across sphere surface between pts1 and pts2

vec2vec_rotmat(u, v)

rotation matrix from 2 unit vectors

vector_cosine(vecs1, vecs2)

Cosine of angle between two (sets of) vectors

vector_norm(vec[, axis, keepdims])

Return vector Euclidean (L2) norm

Module: core.gradients

GradientTable(gradients[, big_delta, ...])

Diffusion gradient information

HemiSphere([x, y, z, theta, phi, xyz, ...])

Points on the unit sphere.

auto_attr(func)

Decorator to create OneTimeProperty attributes.

btens_to_params(btens[, ztol])

Compute trace, anisotropy and asymmetry parameters from b-tensors

check_multi_b(gtab, n_bvals[, non_zero, bmag])

Check if you have enough different b-values in your gradient table

deprecate_with_version(message[, since, ...])

Return a decorator function for deprecation warning / error.

disperse_charges(hemi, iters[, const])

Models electrostatic repulsion on the unit sphere

generate_bvecs(N[, iters])

Generates N bvectors.

get_bval_indices(bvals, bval[, tol])

Get indices where the b-value is bval

gradient_table(bvals[, bvecs, big_delta, ...])

A general function for creating diffusion MR gradients.

gradient_table_from_bvals_bvecs(bvals, bvecs)

Creates a GradientTable from a bvals array and a bvecs array

gradient_table_from_gradient_strength_bvecs(...)

A general function for creating diffusion MR gradients.

gradient_table_from_qvals_bvecs(qvals, ...)

A general function for creating diffusion MR gradients.

inv(a[, overwrite_a, check_finite])

Compute the inverse of a matrix.

params_to_btens(bval, bdelta, b_eta)

Compute b-tensor from trace, anisotropy and asymmetry parameters

polar(a[, side])

Compute the polar decomposition.

reorient_bvecs(gtab, affines[, atol])

Reorient the directions in a GradientTable.

round_bvals(bvals[, bmag])

"This function rounds the b-values

unique_bvals(bvals[, bmag, rbvals])

This function gives the unique rounded b-values of the data

unique_bvals_magnitude(bvals[, bmag, rbvals])

This function gives the unique rounded b-values of the data

unique_bvals_tolerance(bvals[, tol])

Gives the unique b-values of the data, within a tolerance gap

vec2vec_rotmat(u, v)

rotation matrix from 2 unit vectors

vector_norm(vec[, axis, keepdims])

Return vector Euclidean (L2) norm

warn(/, message[, category, stacklevel, source])

Issue a warning, or maybe ignore it or raise an exception.

Module: core.graph

A simple graph class

Graph()

A simple graph class

Module: core.histeq

histeq(arr[, num_bins])

Performs an histogram equalization on arr.

Module: core.interpolation

Interpolator(data, voxel_size)

Class to be subclassed by different interpolator types

NearestNeighborInterpolator(data, voxel_size)

Interpolates data using nearest neighbor interpolation

OutsideImage

TriLinearInterpolator(data, voxel_size)

Interpolates data using trilinear interpolation

interp_rbf

Interpolate data on the sphere, using radial basis functions.

interpolate_scalar_2d

Bilinear interpolation of a 2D scalar image

interpolate_scalar_3d

Trilinear interpolation of a 3D scalar image

interpolate_scalar_nn_2d

Nearest neighbor interpolation of a 2D scalar image

interpolate_scalar_nn_3d

Nearest neighbor interpolation of a 3D scalar image

interpolate_vector_2d

Bilinear interpolation of a 2D vector field

interpolate_vector_3d

Trilinear interpolation of a 3D vector field

map_coordinates_trilinear_iso

Trilinear interpolation (isotropic voxel size)

nearestneighbor_interpolate

trilinear_interp

Interpolates vector from 4D data at 3D point given by index

trilinear_interpolate4d

Tri-linear interpolation along the last dimension of a 4d array

Module: core.ndindex

as_strided(x[, shape, strides, subok, writeable])

Create a view into the array with the given shape and strides.

ndindex(shape)

An N-dimensional iterator object to index arrays.
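
For illustration (a minimal sketch, not part of the module's own docs), ndindex simply walks every index tuple of a given shape:

>>> from dipy.core.ndindex import ndindex
>>> for index in ndindex((2, 2)):
...     print(index)
(0, 0)
(0, 1)
(1, 0)
(1, 1)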

Module: core.onetime

Descriptor support for NIPY.

Copyright (c) 2006-2011, NIPY Developers. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright

    notice, this list of conditions and the following disclaimer.

  • Redistributions in binary form must reproduce the above

    copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

  • Neither the name of the NIPY Developers nor the names of any

    contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Utilities to support special Python descriptors [1,2], in particular the use of a useful pattern for properties we call ‘one time properties’. These are object attributes which are declared as properties, but become regular attributes once they’ve been read the first time. They can thus be evaluated later in the object’s life cycle, but once evaluated they become normal, static attributes with no function call overhead on access or any other constraints.

A special ResetMixin class is provided to add a .reset() method to users who may want to have their objects capable of resetting these computed properties to their ‘untriggered’ state.

References

[1] How-To Guide for Descriptors, Raymond Hettinger. http://users.rcn.com/python/download/Descriptor.htm

[2] Python data model, http://docs.python.org/reference/datamodel.html
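
As a brief sketch of the pattern described above (illustrative only, not taken from the module's own examples), a class can combine OneTimeProperty with ResetMixin so that an expensive attribute is computed once, cached, and can later be reset:

>>> from dipy.core.onetime import OneTimeProperty, ResetMixin
>>> class Data(ResetMixin):
...     @OneTimeProperty
...     def expensive(self):
...         print("computing...")
...         return 42
...
>>> d = Data()
>>> d.expensive       # computed on first access
computing...
42
>>> d.expensive       # now an ordinary attribute, no recomputation
42
>>> d.reset()         # back to the 'untriggered' state
>>> d.expensive
computing...
42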

OneTimeProperty(func)

A descriptor to make special properties that become normal attributes.

ResetMixin()

A Mixin class to add a .reset() method to users of OneTimeProperty.

auto_attr(func)

Decorator to create OneTimeProperty attributes.

Module: core.optimize

A unified interface for performing and debugging optimization problems.

NonNegativeLeastSquares(*args, **kwargs)

A sklearn-like interface to scipy.optimize.nnls

Optimizer(fun, x0[, args, method, jac, ...])

Attributes

SKLearnLinearSolver(*args, **kwargs)

Provide a sklearn-like uniform interface to algorithms that solve problems of the form: \(y = Ax\) for \(x\)

minimize(fun, x0[, args, method, jac, hess, ...])

Minimization of scalar function of one or more variables.

sparse_nnls(y, X[, momentum, step_size, ...])

Solve y=Xh for h, using gradient descent, with X a sparse matrix.

spdot(A, B)

The same as np.dot(A, B), except it works even if A or B or both are sparse matrices.

Module: core.profile

Class for profiling cython code

Profiler([call])

Profile python/cython files or functions

optional_package(name[, trip_msg])

Return package-like thing and module setup for package name

Module: core.rng

Random number generation utilities.

LEcuyer([s1, s2])

Return a LEcuyer random number generator.

WichmannHill1982([ix, iy, iz])

Algorithm AS 183 Appl.

WichmannHill2006([ix, iy, iz, it])

Wichmann Hill (2006) random number generator.

architecture([executable, bits, linkage])

Queries the given executable (defaults to the Python interpreter binary) for various architecture information.

floor(x, /)

Return the floor of x as an Integral.

Module: core.sphere

HemiSphere([x, y, z, theta, phi, xyz, ...])

Points on the unit sphere.

Sphere([x, y, z, theta, phi, xyz, faces, edges])

Points on the unit sphere.

auto_attr(func)

Decorator to create OneTimeProperty attributes.

cart2sphere(x, y, z)

Return angles for Cartesian 3D coordinates x, y, and z

disperse_charges(hemi, iters[, const])

Models electrostatic repulsion on the unit sphere

disperse_charges_alt(init_pointset, iters[, tol])

Reimplementation of disperse_charges making use of scipy.optimize.fmin_slsqp.

euler_characteristic_check(sphere[, chi])

Checks the euler characteristic of a sphere

faces_from_sphere_vertices(vertices)

Triangulate a set of vertices on the sphere.

remove_similar_vertices

Remove vertices that are less than theta degrees from any other

sphere2cart(r, theta, phi)

Spherical to Cartesian coordinates

unique_edges(faces[, return_mapping])

Extract all unique edges from given triangular faces.

unique_sets(sets[, return_inverse])

Remove duplicate sets.

vector_norm(vec[, axis, keepdims])

Return vector Euclidean (L2) norm

Module: core.sphere_stats

Statistics on spheres

permutations(iterable[, r])

Return successive r-length permutations of elements in the iterable.

angular_similarity(S, T)

Computes the cosine distance of the best match between points of two sets of vectors S and T

compare_orientation_sets(S, T)

Computes the mean cosine distance of the best match between points of two sets of vectors S and T (angular similarity)

eigenstats(points[, alpha])

Principal direction and confidence ellipse

random_uniform_on_sphere([n, coords])

Random unit vectors from a uniform distribution on the sphere.

Module: core.subdivide_octahedron

Create a unit sphere by subdividing all triangles of an octahedron recursively.

The unit sphere has a radius of 1, which also means that all points in this sphere (assumed to have centre at [0, 0, 0]) have an absolute value (modulus) of 1. Another feature of the unit sphere is that the unit normals of this sphere are exactly the same as the vertices.

This recursive method will avoid the common problem of the polar singularity, produced by 2d (lon-lat) parameterization methods.

HemiSphere([x, y, z, theta, phi, xyz, ...])

Points on the unit sphere.

create_unit_hemisphere([recursion_level])

Creates a unit sphere by subdividing a unit octahedron, returns half the sphere.

create_unit_sphere([recursion_level])

Creates a unit sphere by subdividing a unit octahedron.
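
A small usage sketch (not from the original docs): the vertices of the returned sphere are unit vectors, as described above:

>>> import numpy as np
>>> from dipy.core.subdivide_octahedron import create_unit_sphere
>>> sphere = create_unit_sphere(recursion_level=3)
>>> sphere.vertices.shape[1]
3
>>> np.allclose(np.linalg.norm(sphere.vertices, axis=1), 1)
True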

Module: core.wavelet

afb3D(x, af1[, af2, af3])

3D Analysis Filter Bank

afb3D_A(x, af, d)

3D Analysis Filter Bank

cshift3D(x, m, d)

3D Circular Shift

dwt3D(x, J, af)

3-D Discrete Wavelet Transform

idwt3D(w, J, sf)

Inverse 3-D Discrete Wavelet Transform

permutationinverse(perm)

Function generating inverse of the permutation

sfb3D(lo, hi, sf1[, sf2, sf3])

3D Synthesis Filter Bank

sfb3D_A(lo, hi, sf, d)

3D Synthesis Filter Bank

test

dipy.core.test(label='fast', verbose=1, extra_argv=None, doctests=False, coverage=False, raise_warnings=None, timer=False)

Run tests for module using nose.

Parameters
label{‘fast’, ‘full’, ‘’, attribute identifier}, optional

Identifies the tests to run. This can be a string to pass to the nosetests executable with the ‘-A’ option, or one of several special values. Special values are:

  • ‘fast’ - the default - which corresponds to the nosetests -A option of ‘not slow’.

  • ‘full’ - fast (as above) and slow tests as in the ‘no -A’ option to nosetests - this is the same as ‘’.

  • None or ‘’ - run all tests.

  • attribute_identifier - string passed directly to nosetests as ‘-A’.

verboseint, optional

Verbosity value for test outputs, in the range 1-10. Default is 1.

extra_argvlist, optional

List with any extra arguments to pass to nosetests.

doctestsbool, optional

If True, run doctests in module. Default is False.

coveragebool, optional

If True, report coverage of NumPy code. Default is False. (This requires the coverage module).

raise_warningsNone, str or sequence of warnings, optional

This specifies which warnings to configure as ‘raise’ instead of being shown once during the test execution. Valid strings are:

  • “develop” : equals (Warning,)

  • “release” : equals (), do not raise on any warnings.

timerbool or int, optional

Timing of individual tests with nose-timer (which needs to be installed). If True, time tests and report on all of them. If an integer (say N), report timing results for N slowest tests.

Returns
resultobject

Returns the result of running the tests as a nose.result.TextTestResult object.

Notes

Each NumPy module exposes test in its namespace to run all tests for it. For example, to run all tests for numpy.lib:

>>> np.lib.test() 

Examples

>>> result = np.lib.test() 
Running unit tests for numpy.lib
...
Ran 976 tests in 3.933s

OK

>>> result.errors 
[]
>>> result.knownfail 
[]

Timer

class dipy.core.benchmarks.bench_sphere.Timer

Bases: object

Methods

duration_in_seconds

__init__(*args, **kwargs)
duration_in_seconds()

bench_disperse_charges_alt

dipy.core.benchmarks.bench_sphere.bench_disperse_charges_alt()

func_minimize_adhoc

dipy.core.benchmarks.bench_sphere.func_minimize_adhoc(init_hemisphere, num_iterations)

func_minimize_scipy

dipy.core.benchmarks.bench_sphere.func_minimize_scipy(init_pointset, num_iterations)

cart2sphere

dipy.core.geometry.cart2sphere(x, y, z)

Return angles for Cartesian 3D coordinates x, y, and z

See doc for sphere2cart for angle conventions and derivation of the formulae.

\(0\le\theta\mathrm{(theta)}\le\pi\) and \(-\pi\le\phi\mathrm{(phi)}\le\pi\)

Parameters
xarray_like

x coordinate in Cartesian space

yarray_like

y coordinate in Cartesian space

zarray_like

z coordinate

Returns
rarray

radius

thetaarray

inclination (polar) angle

phiarray

azimuth angle
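
As a quick illustrative check (not part of the original docstring), cart2sphere and sphere2cart round-trip a point on the z-axis:

>>> import numpy as np
>>> from dipy.core.geometry import cart2sphere, sphere2cart
>>> r, theta, phi = cart2sphere(0, 0, 1)
>>> np.allclose([r, theta, phi], [1, 0, 0])
True
>>> np.allclose(sphere2cart(r, theta, phi), [0, 0, 1])
True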

cart_distance

dipy.core.geometry.cart_distance(pts1, pts2)

Cartesian distance between pts1 and pts2

If either of pts1 or pts2 is 2D, then we take the first dimension to index points, and the second to index coordinates. More generally, we take the last dimension to be the coordinate dimension.

Parameters
pts1(N,R) or (R,) array_like

where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D)

pts2(N,R) or (R,) array_like

where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D). It should be possible to broadcast pts1 against pts2

Returns
d(N,) or (0,) array

Cartesian distances between corresponding points in pts1 and pts2

See also

sphere_distance

distance between points on sphere surface

Examples

>>> cart_distance([0,0,0], [0,0,3])
3.0

circumradius

dipy.core.geometry.circumradius(a, b, c)

a, b and c are 3-dimensional vectors which are the vertices of a triangle. The function returns the circumradius of the triangle, i.e. the radius of the smallest circle that can contain the triangle. In the degenerate case when the 3 points are collinear it returns half the distance between the furthest apart points.

Parameters
a, b, c(3,) array_like

the three vertices of the triangle

Returns
circumradiusfloat

the desired circumradius

compose_matrix

dipy.core.geometry.compose_matrix(scale=None, shear=None, angles=None, translate=None, perspective=None)

Return 4x4 transformation matrix from sequence of transformations.

Code modified from the work of Christoph Gohlke; see http://www.lfd.uci.edu/~gohlke/code/transformations.py.html

This is the inverse of the decompose_matrix function.

Parameters
scale(3,) array_like

Scaling factors.

sheararray_like

Shear factors for x-y, x-z, y-z axes.

anglesarray_like

Euler angles about static x, y, z axes.

translatearray_like

Translation vector along x, y, z axes.

perspectivearray_like

Perspective partition of matrix.

Returns
matrix4x4 array

Examples

>>> import math
>>> import numpy as np
>>> import dipy.core.geometry as gm
>>> scale = np.random.random(3) - 0.5
>>> shear = np.random.random(3) - 0.5
>>> angles = (np.random.random(3) - 0.5) * (2*math.pi)
>>> trans = np.random.random(3) - 0.5
>>> persp = np.random.random(4) - 0.5
>>> M0 = gm.compose_matrix(scale, shear, angles, trans, persp)

compose_transformations

dipy.core.geometry.compose_transformations(*mats)

Compose multiple 4x4 affine transformations in one 4x4 matrix

Parameters
mat1array, (4, 4)
mat2array, (4, 4)
matNarray, (4, 4)
Returns
matN x … x mat2 x mat1 : array, (4, 4)
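
For illustration (not from the original docstring), composing two pure translations; since translations commute, the result simply accumulates the offsets:

>>> import numpy as np
>>> from dipy.core.geometry import compose_transformations
>>> A = np.eye(4)
>>> A[:3, 3] = [1, 0, 0]   # translate 1 along x
>>> B = np.eye(4)
>>> B[:3, 3] = [0, 2, 0]   # translate 2 along y
>>> T = compose_transformations(A, B)
>>> np.allclose(T[:3, 3], [1, 2, 0])
True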

decompose_matrix

dipy.core.geometry.decompose_matrix(matrix)

Return sequence of transformations from transformation matrix.

Code modified from the excellent work of Christoph Gohlke; see http://www.lfd.uci.edu/~gohlke/code/transformations.py.html

Parameters
matrixarray_like

Non-degenerate homogeneous transformation matrix

Returns
scale(3,) ndarray

Three scaling factors.

shear(3,) ndarray

Shear factors for x-y, x-z, y-z axes.

angles(3,) ndarray

Euler angles about static x, y, z axes.

translate(3,) ndarray

Translation vector along x, y, z axes.

perspectivendarray

Perspective partition of matrix.

Raises
ValueError

If matrix is of wrong type or degenerate.

Examples

>>> import numpy as np
>>> from dipy.core.geometry import decompose_matrix
>>> T0 = np.diag([2, 1, 1, 1])
>>> scale, shear, angles, trans, persp = decompose_matrix(T0)

dist_to_corner

dipy.core.geometry.dist_to_corner(affine)

Calculate the maximal distance from the center to a corner of a voxel, given an affine

Parameters
affine4 by 4 array.

The spatial transformation from the measurement to the scanner space.

Returns
dist: float

The maximal distance to the corner of a voxel, given voxel size encoded in the affine.
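
A hedged sanity check (not from the original docs): for 2 mm isotropic voxels the center-to-corner distance is half the voxel diagonal, i.e. sqrt(3):

>>> import numpy as np
>>> from dipy.core.geometry import dist_to_corner
>>> affine = np.diag([2., 2., 2., 1.])   # 2 mm isotropic voxels
>>> np.allclose(dist_to_corner(affine), np.sqrt(3))
True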

euler_matrix

dipy.core.geometry.euler_matrix(ai, aj, ak, axes='sxyz')

Return homogeneous rotation matrix from Euler angles and axis sequence.

Code modified from the work of Christoph Gohlke; see http://www.lfd.uci.edu/~gohlke/code/transformations.py.html

Parameters
ai, aj, akEuler’s roll, pitch and yaw angles
axesOne of 24 axis sequences as string or encoded tuple
Returns
matrixndarray (4, 4)

Examples

>>> import numpy
>>> R = euler_matrix(1, 2, 3, 'syxz')
>>> numpy.allclose(numpy.sum(R[0]), -1.34786452)
True
>>> R = euler_matrix(1, 2, 3, (0, 1, 0, 1))
>>> numpy.allclose(numpy.sum(R[0]), -0.383436184)
True
>>> ai, aj, ak = (4.0*math.pi) * (numpy.random.random(3) - 0.5)
>>> for axes in _AXES2TUPLE.keys():
...    _ = euler_matrix(ai, aj, ak, axes)
>>> for axes in _TUPLE2AXES.keys():
...    _ = euler_matrix(ai, aj, ak, axes)

is_hemispherical

dipy.core.geometry.is_hemispherical(vecs)

Test whether all points on a unit sphere lie in the same hemisphere.

Parameters
vecsnumpy.ndarray

2D numpy array with shape (N, 3) where N is the number of points. All points must lie on the unit sphere.

Returns
is_hemibool

If True, one can find a hemisphere that contains all the points. If False, then the points do not lie in any hemisphere

polenumpy.ndarray

If is_hemi == True, then pole is the “central” pole of the input vectors. Otherwise, pole is the zero vector.

References

https://rstudio-pubs-static.s3.amazonaws.com/27121_a22e51b47c544980bad594d5e0bb2d04.html

lambert_equal_area_projection_cart

dipy.core.geometry.lambert_equal_area_projection_cart(x, y, z)

Lambert Equal Area Projection from cartesian vector to plane

Return positions in \((y_1,y_2)\) plane corresponding to the directions of the vectors with cartesian coordinates xyz under the Lambert Equal Area Projection mapping (see Mardia and Jupp (2000), Directional Statistics, p. 161).

The Lambert EAP maps the upper hemisphere to the planar disc of radius 1 and the lower hemisphere to the planar annulus between radii 1 and 2, and vice versa.

See doc for sphere2cart for angle conventions

Parameters
xarray_like

x coordinate in Cartesian space

yarray_like

y coordinate in Cartesian space

zarray_like

z coordinate

Returns
y(N,2) array

planar coordinates of points following mapping by Lambert’s EAP.

lambert_equal_area_projection_polar

dipy.core.geometry.lambert_equal_area_projection_polar(theta, phi)

Lambert Equal Area Projection from polar sphere to plane

Return positions in (y1,y2) plane corresponding to the points with polar coordinates (theta, phi) on the unit sphere, under the Lambert Equal Area Projection mapping (see Mardia and Jupp (2000), Directional Statistics, p. 161).

See doc for sphere2cart for angle conventions

  • \(0 \le \theta \le \pi\) and \(0 \le \phi \le 2 \pi\)

  • \(|(y_1,y_2)| \le 2\)

The Lambert EAP maps the upper hemisphere to the planar disc of radius 1 and the lower hemisphere to the planar annulus between radii 1 and 2, and vice versa.

Parameters
thetaarray_like

theta spherical coordinates

phiarray_like

phi spherical coordinates

Returns
y(N,2) array

planar coordinates of points following mapping by Lambert’s EAP.

nearest_pos_semi_def

dipy.core.geometry.nearest_pos_semi_def(B)

Least squares positive semi-definite tensor estimation

Parameters
B(3,3) array_like

B matrix - symmetric. We do not check the symmetry.

Returns
npds(3,3) array

Estimated nearest positive semi-definite array to matrix B.

References

1

Niethammer M, San Jose Estepar R, Bouix S, Shenton M, Westin CF. On diffusion tensor estimation. Conf Proc IEEE Eng Med Biol Soc. 2006;1:2622-5. PubMed PMID: 17946125; PubMed Central PMCID: PMC2791793.

Examples

>>> B = np.diag([1, 1, -1])
>>> nearest_pos_semi_def(B)
array([[ 0.75,  0.  ,  0.  ],
       [ 0.  ,  0.75,  0.  ],
       [ 0.  ,  0.  ,  0.  ]])

normalized_vector

dipy.core.geometry.normalized_vector(vec, axis=-1)

Return vector divided by its Euclidean (L2) norm

See unit vector and Euclidean norm

Parameters
vecarray_like shape (3,)
Returns
nvecarray shape (3,)

vector divided by L2 norm

Examples

>>> vec = [1, 2, 3]
>>> l2n = np.sqrt(np.dot(vec, vec))
>>> nvec = normalized_vector(vec)
>>> np.allclose(np.array(vec) / l2n, nvec)
True
>>> vec = np.array([[1, 2, 3]])
>>> vec.shape == (1, 3)
True
>>> normalized_vector(vec).shape == (1, 3)
True

perpendicular_directions

dipy.core.geometry.perpendicular_directions(v, num=30, half=False)

Computes n evenly spaced perpendicular directions relative to a given vector v

Parameters
varray (3,)

Array containing the three cartesian coordinates of vector v

numint, optional

Number of perpendicular directions to generate

halfbool, optional

If half is True, perpendicular directions are sampled on half of the unit circumference perpendicular to v, otherwise perpendicular directions are sampled on the full circumference. Default of half is False.

Returns
psamplesarray (n, 3)

array of vectors perpendicular to v

Notes

Perpendicular directions are estimated using the following two step procedure:

1) the perpendicular directions are first sampled in a unit circumference parallel to the plane normal to the x-axis.

2) Samples are then rotated and aligned to the plane normal to vector v. The rotation matrix for this rotation is constructed as a reference frame basis whose axes are the following:

  • The first axis is vector v

  • The second axis is defined as the normalized vector given by the cross product between vector v and the unit vector aligned to the x-axis

  • The third axis is defined as the cross product between the previously computed vector and vector v.

Following this two steps, coordinates of the final perpendicular directions are given as:

\[\left [ -\sin(a_{i}) \sqrt{{v_{y}}^{2}+{v_{z}}^{2}} \; , \; \frac{v_{x}v_{y}\sin(a_{i})-v_{z}\cos(a_{i})} {\sqrt{{v_{y}}^{2}+{v_{z}}^{2}}} \; , \; \frac{v_{x}v_{z}\sin(a_{i})-v_{y}\cos(a_{i})} {\sqrt{{v_{y}}^{2}+{v_{z}}^{2}}} \right ]\]

This procedure has a singularity when vector v is aligned to the x-axis. To solve this singularity, perpendicular directions in procedure’s step 1 are defined in the plane normal to y-axis and the second axis of the rotated frame of reference is computed as the normalized vector given by the cross product between vector v and the unit vector aligned to the y-axis. Following this, the coordinates of the perpendicular directions are given as:

\[\left [ -\frac{\left (v_{x}v_{y}\sin(a_{i})+v_{z}\cos(a_{i}) \right )} {\sqrt{{v_{x}}^{2}+{v_{z}}^{2}}} \; , \; \sin(a_{i}) \sqrt{{v_{x}}^{2}+{v_{z}}^{2}} \; , \; \frac{v_{y}v_{z}\sin(a_{i})+v_{x}\cos(a_{i})} {\sqrt{{v_{x}}^{2}+{v_{z}}^{2}}} \right ]\]

For more details on this calculation, see http://gsoc2015dipydki.blogspot.it/2015/07/rnh-post-8-computing-perpendicular.html.
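
A minimal usage sketch (not from the original docstring): every returned direction is orthogonal to v, including the singular case where v is aligned with the x-axis:

>>> import numpy as np
>>> from dipy.core.geometry import perpendicular_directions
>>> v = np.array([1., 0., 0.])
>>> pd = perpendicular_directions(v, num=5)
>>> pd.shape
(5, 3)
>>> np.allclose(np.dot(pd, v), 0)
True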

rodrigues_axis_rotation

dipy.core.geometry.rodrigues_axis_rotation(r, theta)

Rodrigues formula

Rotation matrix for rotation around axis r for angle theta.

The rotation matrix is given by the Rodrigues formula:

R = Id + sin(theta)*Sn + (1-cos(theta))*Sn^2

with:

       0  -nz  ny
Sn =   nz   0 -nx
      -ny  nx   0

where n = r / ||r||

In case the angle ||r|| is very small, the above formula may lead to numerical instabilities. We instead use a Taylor expansion around theta=0:

R = I + sin(theta)/theta * Sr + (1-cos(theta))/theta^2 * Sr^2

leading to:

R = I + (1-theta^2/6)*Sr + (1/2-theta^2/24)*Sr^2

Parameters
rarray_like shape (3,), axis
thetafloat, angle in degrees
Returns
Rarray, shape (3,3), rotation matrix

Examples

>>> import numpy as np
>>> from dipy.core.geometry import rodrigues_axis_rotation
>>> v=np.array([0,0,1])
>>> u=np.array([1,0,0])
>>> R=rodrigues_axis_rotation(v,40)
>>> ur=np.dot(R,u)
>>> np.round(np.rad2deg(np.arccos(np.dot(ur,u))))
40.0

sph2latlon

dipy.core.geometry.sph2latlon(theta, phi)

Convert spherical coordinates to latitude and longitude.

Returns
lat, lonndarray

Latitude and longitude.

sphere2cart

dipy.core.geometry.sphere2cart(r, theta, phi)

Spherical to Cartesian coordinates

This is the standard physics convention where theta is the inclination (polar) angle, and phi is the azimuth angle.

Imagine a sphere with center (0,0,0). Orient it with the z axis running south-north, the y axis running west-east and the x axis from posterior to anterior. theta (the inclination angle) is the angle to rotate from the z-axis (the zenith) around the y-axis, towards the x axis. Thus the rotation is counter-clockwise from the point of view of positive y. phi (azimuth) gives the angle of rotation around the z-axis towards the y axis. The rotation is counter-clockwise from the point of view of positive z.

Equivalently, given a point P on the sphere, with coordinates x, y, z, theta is the angle between P and the z-axis, and phi is the angle between the projection of P onto the XY plane, and the X axis.

Geographical nomenclature designates theta as ‘co-latitude’, and phi as ‘longitude’

Parameters
rarray_like

radius

thetaarray_like

inclination or polar angle

phiarray_like

azimuth angle

Returns
xarray

x coordinate(s) in Cartesian space

yarray

y coordinate(s) in Cartesian space

zarray

z coordinate

Notes

See the Wikipedia page on the spherical coordinate system for an excellent discussion of the many different conventions possible. Here we use the physics convention, as described on that page.

Derivations of the formulae are simple. Consider a vector x, y, z of length r (norm of x, y, z). The inclination angle (theta) can be found from: cos(theta) == z / r -> z == r * cos(theta). This gives the hypotenuse of the projection onto the XY plane, which we will call Q. Q == r*sin(theta). Now x / Q == cos(phi) -> x == r * sin(theta) * cos(phi) and so on.

We have deliberately named this function sphere2cart rather than sph2cart to distinguish it from the Matlab function of that name, because the Matlab function uses an unusual convention for the angles that we did not want to replicate. The Matlab function is trivial to implement with the formulae given in the Matlab help.

sphere_distance

dipy.core.geometry.sphere_distance(pts1, pts2, radius=None, check_radius=True)

Distance across sphere surface between pts1 and pts2

Parameters
pts1(N,R) or (R,) array_like

where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D)

pts2(N,R) or (R,) array_like

where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D). It should be possible to broadcast pts1 against pts2

radiusNone or float, optional

Radius of sphere. Default is to work out radius from mean of the length of each point vector

check_radiusbool, optional

If True, check if the points are on the sphere surface - i.e. check if the vector lengths in pts1 and pts2 are close to radius. Default is True.

Returns
d(N,) or (0,) array

Distances between corresponding points in pts1 and pts2 across the spherical surface, i.e. the great circle distance

See also

cart_distance

cartesian distance between points

vector_cosine

cosine of angle between vectors

Examples

>>> print('%.4f' % sphere_distance([0,1],[1,0]))
1.5708
>>> print('%.4f' % sphere_distance([0,3],[3,0]))
4.7124

vec2vec_rotmat

dipy.core.geometry.vec2vec_rotmat(u, v)

rotation matrix from 2 unit vectors

u, v being unit 3d vectors, return a 3x3 rotation matrix R that aligns u to v.

In general there are many rotations that will map u to v. If S is any rotation using v as an axis then S.R will also map u to v since (S.R)u = S(Ru) = Sv = v. The rotation R returned by vec2vec_rotmat leaves fixed the perpendicular to the plane spanned by u and v.

The transpose of R will align v to u.

Parameters
uarray, shape(3,)
varray, shape(3,)
Returns
Rarray, shape(3,3)

Examples

>>> import numpy as np
>>> from dipy.core.geometry import vec2vec_rotmat
>>> u=np.array([1,0,0])
>>> v=np.array([0,1,0])
>>> R=vec2vec_rotmat(u,v)
>>> np.dot(R,u)
array([ 0.,  1.,  0.])
>>> np.dot(R.T,v)
array([ 1.,  0.,  0.])

vector_cosine

dipy.core.geometry.vector_cosine(vecs1, vecs2)

Cosine of angle between two (sets of) vectors

The cosine of the angle between two vectors v1 and v2 is given by the inner product of v1 and v2 divided by the product of the vector lengths:

v_cos = np.inner(v1, v2) / (np.sqrt(np.sum(v1**2)) *
                            np.sqrt(np.sum(v2**2)))
Parameters
vecs1(N, R) or (R,) array_like

N vectors (as rows) or single vector. Vectors have R elements.

vecs2(N, R) or (R,) array_like

N vectors (as rows) or single vector. Vectors have R elements. It should be possible to broadcast vecs1 against vecs2

Returns
vcos(N,) or (0,) array

Vector cosines. To get the angles you will need np.arccos

Notes

The vector cosine will be the same as the correlation only if all the input vectors have zero mean.
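
Illustrative checks (not part of the original docstring): orthogonal vectors have cosine 0 and parallel vectors cosine 1:

>>> import numpy as np
>>> from dipy.core.geometry import vector_cosine
>>> np.allclose(vector_cosine([1, 0, 0], [0, 1, 0]), 0)
True
>>> np.allclose(vector_cosine([1, 0, 0], [2, 0, 0]), 1)
True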

vector_norm

dipy.core.geometry.vector_norm(vec, axis=-1, keepdims=False)

Return vector Euclidean (L2) norm

See unit vector and Euclidean norm

Parameters
vecarray_like

Vectors to norm.

axisint

Axis over which to norm. By default norm over last axis. If axis is None, vec is flattened then normed.

keepdimsbool

If True, the output will have the same number of dimensions as vec, with shape 1 on axis.

Returns
normarray

Euclidean norms of vectors.

Examples

>>> import numpy as np
>>> vec = [[8, 15, 0], [0, 36, 77]]
>>> vector_norm(vec)
array([ 17.,  85.])
>>> vector_norm(vec, keepdims=True)
array([[ 17.],
       [ 85.]])
>>> vector_norm(vec, axis=0)
array([  8.,  39.,  77.])

GradientTable

class dipy.core.gradients.GradientTable(gradients, big_delta=None, small_delta=None, b0_threshold=50, btens=None)

Bases: object

Diffusion gradient information

Parameters
gradientsarray_like (N, 3)

Diffusion gradients. The direction of each of these vectors corresponds to the b-vector, and the length corresponds to the b-value.

b0_thresholdfloat

Gradients with b-value less than or equal to b0_threshold are considered as b0s i.e. without diffusion weighting.

See also

gradient_table

Notes

The GradientTable object is immutable. Do NOT assign attributes. If you have your gradient table in a bval & bvec format, we recommend using the factory function gradient_table

Attributes
gradients(N,3) ndarray

diffusion gradients

bvals(N,) ndarray

The b-value, or magnitude, of each gradient direction.

qvals: (N,) ndarray

The q-value for each gradient direction. Needs big and small delta.

bvecs(N,3) ndarray

The direction, represented as a unit vector, of each gradient.

b0s_mask(N,) ndarray

Boolean array indicating which gradients have no diffusion weighting, i.e. b-value is close to 0.

b0_thresholdfloat

Gradients with b-value less than or equal to b0_threshold are considered to not have diffusion weighting.

btens(N,3,3) ndarray

The b-tensor of each gradient direction.

Methods

b0s_mask

bvals

bvecs

gradient_strength

qvals

tau

__init__(gradients, big_delta=None, small_delta=None, b0_threshold=50, btens=None)

Constructor for GradientTable class

b0s_mask()
bvals()
bvecs()
gradient_strength()
property info
qvals()
tau()
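
As an illustrative sketch (not from the original class docs), a table is usually built through the gradient_table factory function, as recommended in the Notes; the b0s_mask attribute then flags the non-diffusion-weighted volumes:

>>> import numpy as np
>>> from dipy.core.gradients import gradient_table, generate_bvecs
>>> bvals = np.concatenate(([0], 1000 * np.ones(6)))
>>> bvecs = np.concatenate(([[0, 0, 0]], generate_bvecs(6)))
>>> gtab = gradient_table(bvals, bvecs)
>>> int(gtab.b0s_mask.sum())   # exactly one b0 volume
1
>>> gtab.bvecs.shape
(7, 3)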

HemiSphere

class dipy.core.gradients.HemiSphere(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None, tol=1e-05)

Bases: dipy.core.sphere.Sphere

Points on the unit sphere.

A HemiSphere is similar to a Sphere but it takes antipodal symmetry into account. Antipodal symmetry means that point v on a HemiSphere is the same as the point -v. Duplicate points are discarded when constructing a HemiSphere (including antipodal duplicates). edges and faces are remapped to the remaining points as closely as possible.

The HemiSphere can be constructed using one of three conventions:

HemiSphere(x, y, z)
HemiSphere(xyz=xyz)
HemiSphere(theta=theta, phi=phi)
Parameters
x, y, z1-D array_like

Vertices as x-y-z coordinates.

theta, phi1-D array_like

Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.

xyz(N, 3) ndarray

Vertices as x-y-z coordinates.

faces(N, 3) ndarray

Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.

edges(N, 2) ndarray

Edges between vertices. If unspecified, the edges are derived from the faces.

tolfloat

Angle in degrees. Vertices that are less than tol degrees apart are treated as duplicates.

See also

Sphere
Attributes
x
y
z

Methods

find_closest(xyz)

Find the index of the vertex in the Sphere closest to the input vector, taking into account antipodal symmetry

from_sphere(sphere[, tol])

Create instance from a Sphere

mirror()

Create a full Sphere from a HemiSphere

subdivide([n])

Create a more subdivided HemiSphere

edges

faces

vertices

__init__(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None, tol=1e-05)

Create a HemiSphere from points

faces()
find_closest(xyz)

Find the index of the vertex in the Sphere closest to the input vector, taking into account antipodal symmetry

Parameters
xyzarray-like, 3 elements

A unit vector

Returns
idxint

The index into the Sphere.vertices array that gives the closest vertex (in angle).

classmethod from_sphere(sphere, tol=1e-05)

Create instance from a Sphere

mirror()

Create a full Sphere from a HemiSphere

subdivide(n=1)

Create a more subdivided HemiSphere

See Sphere.subdivide for full documentation.
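
A small sketch (not from the original docs) of the antipodal-duplicate handling described above: -x and +x collapse to a single vertex:

>>> import numpy as np
>>> from dipy.core.gradients import HemiSphere
>>> xyz = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [-1., 0., 0.]])
>>> hemi = HemiSphere(xyz=xyz)
>>> hemi.vertices.shape
(3, 3)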

auto_attr

dipy.core.gradients.auto_attr(func)

Decorator to create OneTimeProperty attributes.

Parameters
funcmethod

The method that will be called the first time to compute a value. Afterwards, the method’s name will be a standard attribute holding the value of this computation.

Examples

>>> class MagicProp(object):
...     @auto_attr
...     def a(self):
...         return 99
...
>>> x = MagicProp()
>>> 'a' in x.__dict__
False
>>> x.a
99
>>> 'a' in x.__dict__
True

btens_to_params

dipy.core.gradients.btens_to_params(btens, ztol=1e-10)

Compute trace, anisotropy and asymmetry parameters from b-tensors

Parameters
btens(3, 3) OR (N, 3, 3) numpy.ndarray

input b-tensor, or b-tensors, where N = number of b-tensors

ztolfloat

Any parameters smaller than this value are considered to be 0

Returns
bval: numpy.ndarray

b-value(s) (trace(s))

bdelta: numpy.ndarray

normalized tensor anisotropy value(s)

b_eta: numpy.ndarray

tensor asymmetry value(s)

Notes

This function can be used to get b-tensor parameters directly from the GradientTable btens attribute.

Examples

>>> lte = np.array([[1, 0, 0], [0, 0, 0], [0, 0, 0]])
>>> bval, bdelta, b_eta = btens_to_params(lte)
>>> print("bval={}; bdelta={}; b_eta={}".format(bdelta, bval, b_eta))
bval=[ 1.]; bdelta=[ 1.]; b_eta=[ 0.]

check_multi_b

dipy.core.gradients.check_multi_b(gtab, n_bvals, non_zero=True, bmag=None)

Check if you have enough different b-values in your gradient table

Parameters
gtabGradientTable class instance.
n_bvalsint

The number of different b-values you are checking for.

non_zerobool

Whether to check only non-zero bvalues. In this case, we will require at least n_bvals non-zero b-values (where non-zero is defined depending on the gtab object’s b0_threshold attribute)

bmagint

The order of magnitude of the b-values used. The function will normalize the b-values relative \(10^{bmag}\). Default: derive this value from the maximal b-value provided: \(bmag=log_{10}(max(bvals)) - 1\).

Returns
boolWhether there are at least n_bvals different b-values in the gradient table used.
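
An illustrative sketch (not from the original docstring) using a two-shell gradient table:

>>> import numpy as np
>>> from dipy.core.gradients import gradient_table, generate_bvecs, check_multi_b
>>> bvals = np.concatenate(([0], 1000 * np.ones(3), 2000 * np.ones(3)))
>>> bvecs = np.concatenate(([[0, 0, 0]], generate_bvecs(6)))
>>> gtab = gradient_table(bvals, bvecs)
>>> check_multi_b(gtab, 2, non_zero=True)
True
>>> check_multi_b(gtab, 3, non_zero=True)
False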

deprecate_with_version

dipy.core.gradients.deprecate_with_version(message, since='', until='', version_comparator=<function cmp_pkg_version>, warn_class=<class 'DeprecationWarning'>, error_class=<class 'dipy.utils.deprecator.ExpiredDeprecationError'>)

Return a decorator function for deprecation warning / error.

The decorated function / method will:

  • Raise the given warn_class warning when the function / method gets called, up to (and including) version until (if specified);

  • Raise the given error_class error when the function / method gets called, when the package version is greater than version until (if specified).

Parameters
messagestr

Message explaining deprecation, giving possible alternatives.

sincestr, optional

Released version at which object was first deprecated.

untilstr, optional

Last released version at which this function will still raise a deprecation warning. Versions higher than this will raise an error.

version_comparatorcallable

Callable accepting a string as argument, and returning 1 if the string represents a higher version than encoded in the version_comparator, 0 if the version is equal, and -1 if the version is lower. For example, the version_comparator may compare the input version string to the current package version string.

warn_classclass, optional

Class of warning to generate for deprecation.

error_classclass, optional

Class of error to generate when version_comparator returns 1 for a given argument of until.

Returns
deprecatorfunc

Function returning a decorator.

disperse_charges

dipy.core.gradients.disperse_charges(hemi, iters, const=0.2)

Models electrostatic repulsion on the unit sphere

Places charges on a sphere and simulates the repulsive forces felt by each one. Allows the charges to move for some number of iterations and returns their final location as well as the total potential of the system at each step.

Parameters
hemiHemiSphere

Points on a unit sphere.

itersint

Number of iterations to run.

constfloat

Using a smaller const could provide a more accurate result, but will need more iterations to converge.

Returns
hemiHemiSphere

Distributed points on a unit sphere.

potentialndarray

The electrostatic potential at each iteration. This can be useful to check if the repulsion converged to a minimum.

Notes

This function is meant to be used with diffusion imaging so antipodal symmetry is assumed. Therefore, each charge must not only be unique, but if there is a charge at +x, there cannot be a charge at -x. These are treated as the same location and because the distance between the two charges will be zero, the result will be unstable.
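
A brief usage sketch (not from the original docstring; it assumes the returned potential does not increase over the iterations, as described above):

>>> import numpy as np
>>> from dipy.core.gradients import HemiSphere, disperse_charges
>>> rng = np.random.RandomState(0)
>>> hemi = HemiSphere(theta=np.pi * rng.rand(20), phi=2 * np.pi * rng.rand(20))
>>> new_hemi, potential = disperse_charges(hemi, iters=100)
>>> np.allclose(np.linalg.norm(new_hemi.vertices, axis=1), 1)
True
>>> bool(potential[-1] <= potential[0])   # repulsion reduced the potential
True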

generate_bvecs

dipy.core.gradients.generate_bvecs(N, iters=5000)

Generates N bvectors.

Uses dipy.core.sphere.disperse_charges to model electrostatic repulsion on a unit sphere.

Parameters
Nint

The number of bvectors to generate. This should be equal to the number of bvals used.

itersint

Number of iterations to run.

Returns
bvecs(N,3) ndarray

The generated directions, represented as a unit vector, of each gradient.

get_bval_indices

dipy.core.gradients.get_bval_indices(bvals, bval, tol=20)

Get indices where the b-value is bval

Parameters
bvals: ndarray

Array containing the b-values

bval: float or int

b-value to extract indices

tol: int

The tolerated gap between the b-values to extract and the actual b-values.

Returns
Array of indices where the b-value is bval
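
For illustration (not in the original docstring):

>>> import numpy as np
>>> from dipy.core.gradients import get_bval_indices
>>> bvals = np.array([0, 995, 1000, 2000, 1005])
>>> get_bval_indices(bvals, 1000, tol=20).tolist()
[1, 2, 4]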

gradient_table

dipy.core.gradients.gradient_table(bvals, bvecs=None, big_delta=None, small_delta=None, b0_threshold=50, atol=0.01, btens=None)

A general function for creating diffusion MR gradients.

It reads, loads and prepares scanner parameters like the b-values and b-vectors so that they can be useful during the reconstruction process.

Parameters
bvalscan be any of the four options
  1. an array of shape (N,) or (1, N) or (N, 1) with the b-values.

  2. a path for the file which contains an array like the above (1).

  3. an array of shape (N, 4) or (4, N). Then this parameter is considered to be a b-table which contains both bvals and bvecs. In this case the next parameter is skipped.

  4. a path for the file which contains an array like the one at (3).

bvecscan be any of two options
  1. an array of shape (N, 3) or (3, N) with the b-vectors.

  2. a path for the file which contains an array like the previous.

big_deltafloat

acquisition pulse separation time in seconds (default None)

small_deltafloat

acquisition pulse duration time in seconds (default None)

b0_thresholdfloat

All b-values with values less than or equal to b0_threshold are considered as b0s, i.e. without diffusion weighting.

atolfloat

All b-vectors need to be unit vectors up to a tolerance.

btenscan be any of three options
  1. a string specifying the shape of the encoding tensor for all volumes in data. Options: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.

  2. an array of strings of shape (N,), (N, 1), or (1, N) specifying encoding tensor shape for each volume separately. N corresponds to the number of volumes in data. Options for elements in array: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.

  3. an array of shape (N,3,3) specifying the b-tensor of each volume exactly. N corresponds to the number of volumes in data. No rotation or scaling is performed.

Returns
gradientsGradientTable

A GradientTable with all the gradient information.

Notes

  1. Often b0s (b-values which correspond to images without diffusion weighting) have 0 values; however, in some cases the scanner cannot provide b0s of an exact 0 value and it gives slightly higher values, e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.

  2. We assume that the minimum number of b-values is 7.

  3. B-vectors should be unit vectors.

Examples

>>> from dipy.core.gradients import gradient_table
>>> bvals = 1500 * np.ones(7)
>>> bvals[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
...                   [1, 0, 0],
...                   [0, 1, 0],
...                   [0, 0, 1],
...                   [sq2, sq2, 0],
...                   [sq2, 0, sq2],
...                   [0, sq2, sq2]])
>>> gt = gradient_table(bvals, bvecs)
>>> gt.bvecs.shape == bvecs.shape
True
>>> gt = gradient_table(bvals, bvecs.T)
>>> gt.bvecs.shape == bvecs.T.shape
False

gradient_table_from_bvals_bvecs

dipy.core.gradients.gradient_table_from_bvals_bvecs(bvals, bvecs, b0_threshold=50, atol=0.01, btens=None, **kwargs)

Creates a GradientTable from a bvals array and a bvecs array

Parameters
bvalsarray_like (N,)

The b-value, or magnitude, of each gradient direction.

bvecsarray_like (N, 3)

The direction, represented as a unit vector, of each gradient.

b0_thresholdfloat

Gradients with b-value less than or equal to b0_threshold are considered to not have diffusion weighting.

atolfloat

Each vector in bvecs must be a unit vector, up to a tolerance of atol.

btenscan be any of three options
  1. a string specifying the shape of the encoding tensor for all volumes in data. Options: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.

  2. an array of strings of shape (N,), (N, 1), or (1, N) specifying encoding tensor shape for each volume separately. N corresponds to the number of volumes in data. Options for elements in array: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.

  3. an array of shape (N,3,3) specifying the b-tensor of each volume exactly. N corresponds to the number of volumes in data. No rotation or scaling is performed.

Returns
gradientsGradientTable

A GradientTable with all the gradient information.

Other Parameters
**kwargsdict

Other keyword inputs are passed to GradientTable.

gradient_table_from_gradient_strength_bvecs

dipy.core.gradients.gradient_table_from_gradient_strength_bvecs(gradient_strength, bvecs, big_delta, small_delta, b0_threshold=50, atol=0.01)

A general function for creating diffusion MR gradients.

It reads, loads and prepares scanner parameters like the b-values and b-vectors so that they can be useful during the reconstruction process.

Parameters
gradient_strengthan array of shape (N,),

gradient strength given in T/mm

bvecscan be any of two options
  1. an array of shape (N, 3) or (3, N) with the b-vectors.

  2. a path for the file which contains an array like the previous.

big_deltafloat or array of shape (N,)

acquisition pulse separation time in seconds

small_deltafloat

acquisition pulse duration time in seconds

b0_thresholdfloat

All b-values with values less than or equal to b0_threshold are considered as b0s, i.e. without diffusion weighting.

atolfloat

All b-vectors need to be unit vectors up to a tolerance.

Returns
gradientsGradientTable

A GradientTable with all the gradient information.

Notes

  1. Often b0s (b-values which correspond to images without diffusion weighting) have 0 values; however, in some cases the scanner cannot provide b0s of an exact 0 value and it gives slightly higher values, e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.

  2. We assume that the minimum number of b-values is 7.

  3. B-vectors should be unit vectors.

Examples

>>> from dipy.core.gradients import (
...    gradient_table_from_gradient_strength_bvecs)
>>> gradient_strength = .03e-3 * np.ones(7)  # clinical strength at 30 mT/m
>>> big_delta = .03  # pulse separation of 30ms
>>> small_delta = 0.01  # pulse duration of 10ms
>>> gradient_strength[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
...                   [1, 0, 0],
...                   [0, 1, 0],
...                   [0, 0, 1],
...                   [sq2, sq2, 0],
...                   [sq2, 0, sq2],
...                   [0, sq2, sq2]])
>>> gt = gradient_table_from_gradient_strength_bvecs(
...     gradient_strength, bvecs, big_delta, small_delta)

gradient_table_from_qvals_bvecs

dipy.core.gradients.gradient_table_from_qvals_bvecs(qvals, bvecs, big_delta, small_delta, b0_threshold=50, atol=0.01)

A general function for creating diffusion MR gradients.

It reads, loads and prepares scanner parameters like the b-values and b-vectors so that they can be useful during the reconstruction process.

Parameters
qvalsan array of shape (N,),

q-value given in 1/mm

bvecscan be any of two options
  1. an array of shape (N, 3) or (3, N) with the b-vectors.

  2. a path for the file which contains an array like the previous.

big_deltafloat or array of shape (N,)

acquisition pulse separation time in seconds

small_deltafloat

acquisition pulse duration time in seconds

b0_thresholdfloat

All b-values with values less than or equal to b0_threshold are considered as b0s, i.e. without diffusion weighting.

atolfloat

All b-vectors need to be unit vectors up to a tolerance.

Returns
gradientsGradientTable

A GradientTable with all the gradient information.

Notes

  1. Often b0s (b-values which correspond to images without diffusion weighting) have 0 values; however, in some cases the scanner cannot provide b0s of an exact 0 value and it gives slightly higher values, e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.

  2. We assume that the minimum number of b-values is 7.

  3. B-vectors should be unit vectors.

Examples

>>> from dipy.core.gradients import gradient_table_from_qvals_bvecs
>>> qvals = 30. * np.ones(7)
>>> big_delta = .03  # pulse separation of 30ms
>>> small_delta = 0.01  # pulse duration of 10ms
>>> qvals[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
...                   [1, 0, 0],
...                   [0, 1, 0],
...                   [0, 0, 1],
...                   [sq2, sq2, 0],
...                   [sq2, 0, sq2],
...                   [0, sq2, sq2]])
>>> gt = gradient_table_from_qvals_bvecs(qvals, bvecs,
...                                      big_delta, small_delta)

inv

dipy.core.gradients.inv(a, overwrite_a=False, check_finite=True)

Compute the inverse of a matrix.

Parameters
aarray_like

Square matrix to be inverted.

overwrite_abool, optional

Discard data in a (may improve performance). Default is False.

check_finitebool, optional

Whether to check that the input matrix contains only finite numbers. Disabling may give a performance gain, but may result in problems (crashes, non-termination) if the inputs do contain infinities or NaNs.

Returns
ainvndarray

Inverse of the matrix a.

Raises
LinAlgError

If a is singular.

ValueError

If a is not square, or not 2D.

Examples

>>> from scipy import linalg
>>> a = np.array([[1., 2.], [3., 4.]])
>>> linalg.inv(a)
array([[-2. ,  1. ],
       [ 1.5, -0.5]])
>>> np.dot(a, linalg.inv(a))
array([[ 1.,  0.],
       [ 0.,  1.]])

params_to_btens

dipy.core.gradients.params_to_btens(bval, bdelta, b_eta)

Compute b-tensor from trace, anisotropy and asymmetry parameters

Parameters
bval: int or float

b-value (>= 0)

bdelta: int or float

normalized tensor anisotropy (>= -0.5 and <= 1)

b_eta: int or float

tensor asymmetry (>= 0 and <= 1)

Returns
(3, 3) numpy.ndarray

output b-tensor

Notes

Implements eq. 7.11, p. 231 in [1].

References

1

Topgaard, NMR methods for studying microscopic diffusion anisotropy, in: R. Valiullin (Ed.), Diffusion NMR of Confined Systems: Fluid Transport in Porous Solids and Heterogeneous Materials, Royal Society of Chemistry, Cambridge, UK, 2016.
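
As an illustrative check (not from the original docstring): with bdelta = 0 the encoding is isotropic (spherical), so the b-tensor is bval/3 times the identity:

>>> import numpy as np
>>> from dipy.core.gradients import params_to_btens
>>> btens = params_to_btens(1, 0, 0)
>>> np.allclose(btens, np.eye(3) / 3)
True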

polar

dipy.core.gradients.polar(a, side='right')

Compute the polar decomposition.

Returns the factors of the polar decomposition [1] u and p such that a = up (if side is “right”) or a = pu (if side is “left”), where p is positive semidefinite. Depending on the shape of a, either the rows or columns of u are orthonormal. When a is a square array, u is a square unitary array. When a is not square, the “canonical polar decomposition” [2] is computed.

Parameters
a(m, n) array_like

The array to be factored.

side{‘left’, ‘right’}, optional

Determines whether a right or left polar decomposition is computed. If side is “right”, then a = up. If side is “left”, then a = pu. The default is “right”.

Returns
u(m, n) ndarray

If a is square, then u is unitary. If m > n, then the columns of u are orthonormal, and if m < n, then the rows of u are orthonormal.

pndarray

p is Hermitian positive semidefinite. If a is nonsingular, p is positive definite. The shape of p is (n, n) or (m, m), depending on whether side is “right” or “left”, respectively.

References

1

R. A. Horn and C. R. Johnson, “Matrix Analysis”, Cambridge University Press, 1985.

2

N. J. Higham, “Functions of Matrices: Theory and Computation”, SIAM, 2008.

Examples

>>> from scipy.linalg import polar
>>> a = np.array([[1, -1], [2, 4]])
>>> u, p = polar(a)
>>> u
array([[ 0.85749293, -0.51449576],
       [ 0.51449576,  0.85749293]])
>>> p
array([[ 1.88648444,  1.2004901 ],
       [ 1.2004901 ,  3.94446746]])

A non-square example, with m < n:

>>> b = np.array([[0.5, 1, 2], [1.5, 3, 4]])
>>> u, p = polar(b)
>>> u
array([[-0.21196618, -0.42393237,  0.88054056],
       [ 0.39378971,  0.78757942,  0.4739708 ]])
>>> p
array([[ 0.48470147,  0.96940295,  1.15122648],
       [ 0.96940295,  1.9388059 ,  2.30245295],
       [ 1.15122648,  2.30245295,  3.65696431]])
>>> u.dot(p)   # Verify the decomposition.
array([[ 0.5,  1. ,  2. ],
       [ 1.5,  3. ,  4. ]])
>>> u.dot(u.T)   # The rows of u are orthonormal.
array([[  1.00000000e+00,  -2.07353665e-17],
       [ -2.07353665e-17,   1.00000000e+00]])

Another non-square example, with m > n:

>>> c = b.T
>>> u, p = polar(c)
>>> u
array([[-0.21196618,  0.39378971],
       [-0.42393237,  0.78757942],
       [ 0.88054056,  0.4739708 ]])
>>> p
array([[ 1.23116567,  1.93241587],
       [ 1.93241587,  4.84930602]])
>>> u.dot(p)   # Verify the decomposition.
array([[ 0.5,  1.5],
       [ 1. ,  3. ],
       [ 2. ,  4. ]])
>>> u.T.dot(u)  # The columns of u are orthonormal.
array([[  1.00000000e+00,  -1.26363763e-16],
       [ -1.26363763e-16,   1.00000000e+00]])

reorient_bvecs

dipy.core.gradients.reorient_bvecs(gtab, affines, atol=0.01)

Reorient the directions in a GradientTable.

When correcting for motion, rotation of the diffusion-weighted volumes might cause systematic bias in rotationally invariant measures, such as FA and MD, and also cause characteristic biases in tractography, unless the gradient directions are appropriately reoriented to compensate for this effect [Leemans2009].

Parameters
gtabGradientTable

The nominal gradient table with which the data were acquired.

affineslist or ndarray of shape (n, 4, 4) or (n, 3, 3)

Each entry in this list or array contains either an affine transformation (4, 4) or a rotation matrix (3, 3). In both cases, the transformations encode the rotation that was applied to the image corresponding to one of the non-zero gradient directions (ordered according to their order in gtab.bvecs[~gtab.b0s_mask]).

atol: see gradient_table()
Returns
gtaba GradientTable class instance with the reoriented directions

References

Leemans2009

The B-Matrix Must Be Rotated When Correcting for Subject Motion in DTI Data. Leemans, A. and Jones, D.K. (2009). MRM, 61: 1336-1349
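
A minimal usage sketch (values are illustrative), assuming one (3, 3) rotation matrix per non-b0 volume; with identity rotations the reoriented table should match the input:

import numpy as np
from dipy.core.gradients import gradient_table, reorient_bvecs

bvals = np.array([0, 1000, 1000, 1000])
bvecs = np.array([[0., 0., 0.],
                  [1., 0., 0.],
                  [0., 1., 0.],
                  [0., 0., 1.]])
gtab = gradient_table(bvals, bvecs)
# One rotation per non-b0 volume, ordered as in gtab.bvecs[~gtab.b0s_mask]
rotations = np.array([np.eye(3)] * 3)
new_gtab = reorient_bvecs(gtab, rotations)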

round_bvals

dipy.core.gradients.round_bvals(bvals, bmag=None)

This function rounds the b-values

Parameters
bvalsndarray

Array containing the b-values

bmagint

The order of magnitude to round the b-values to. If not given, b-values will be rounded relative to the order of magnitude \(bmag = bmagmax - 1\), where bmagmax is the order of magnitude of the largest b-value.

Returns
rbvalsndarray

Array containing the rounded b-values
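
An illustrative sketch with made-up b-values; with the default bmag derived from the largest b-value, these should round to multiples of 100:

import numpy as np
from dipy.core.gradients import round_bvals

bvals = np.array([0, 995, 1005, 2005])
rbvals = round_bvals(bvals)  # expected to round to 0, 1000, 1000, 2000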

unique_bvals

dipy.core.gradients.unique_bvals(bvals, bmag=None, rbvals=False)

This function gives the unique rounded b-values of the data

dipy.core.gradients.unique_bvals is deprecated. Please use dipy.core.gradients.unique_bvals_magnitude instead.

  • deprecated from version: 1.2

  • Raises <class ‘dipy.utils.deprecator.ExpiredDeprecationError’> as of version: 1.4

Parameters
bvalsndarray

Array containing the b-values

bmagint

The order of magnitude by which the b-values have to differ to be considered unique b-values. B-values are also rounded up to this order of magnitude. Default: derive this value from the maximal b-value provided: \(bmag = \log_{10}(\max(bvals)) - 1\).

rbvalsbool, optional

If True function also returns all individual rounded b-values. Default: False

Returns
ubvalsndarray

Array containing the rounded unique b-values

unique_bvals_magnitude

dipy.core.gradients.unique_bvals_magnitude(bvals, bmag=None, rbvals=False)

This function gives the unique rounded b-values of the data

Parameters
bvalsndarray

Array containing the b-values

bmagint

The order of magnitude by which the b-values have to differ to be considered unique b-values. B-values are also rounded up to this order of magnitude. Default: derive this value from the maximal b-value provided: \(bmag = \log_{10}(\max(bvals)) - 1\).

rbvalsbool, optional

If True function also returns all individual rounded b-values. Default: False

Returns
ubvalsndarray

Array containing the rounded unique b-values

unique_bvals_tolerance

dipy.core.gradients.unique_bvals_tolerance(bvals, tol=20)

Gives the unique b-values of the data, within a tolerance gap

The b-values must be regrouped in clusters easily separated by a distance greater than the tolerance gap. If all the b-values of a cluster fit within the tolerance gap, the highest b-value is kept.

Parameters
bvalsndarray

Array containing the b-values

tolint

The tolerated gap between the b-values to extract and the actual b-values.

Returns
ubvalsndarray

Array containing the unique b-values using the median value for each cluster
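
An illustrative sketch with made-up shells; with tol=20, the scanner jitter around each nominal b-value should collapse to one representative value per shell:

import numpy as np
from dipy.core.gradients import unique_bvals_tolerance

bvals = np.array([0, 995, 1000, 1005, 1990, 2005])
shells = unique_bvals_tolerance(bvals, tol=20)  # roughly one value per cluster (b ~ 0, ~1000, ~2000)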

vec2vec_rotmat

dipy.core.gradients.vec2vec_rotmat(u, v)

rotation matrix from 2 unit vectors

u, v being unit 3D vectors, return a 3x3 rotation matrix R that aligns u to v.

In general there are many rotations that will map u to v. If S is any rotation using v as an axis, then S.R will also map u to v, since (S.R)u = S(Ru) = Sv = v. The rotation R returned by vec2vec_rotmat leaves fixed the perpendicular to the plane spanned by u and v.

The transpose of R will align v to u.

Parameters
uarray, shape(3,)
varray, shape(3,)
Returns
Rarray, shape(3,3)

Examples

>>> import numpy as np
>>> from dipy.core.geometry import vec2vec_rotmat
>>> u=np.array([1,0,0])
>>> v=np.array([0,1,0])
>>> R=vec2vec_rotmat(u,v)
>>> np.dot(R,u)
array([ 0.,  1.,  0.])
>>> np.dot(R.T,v)
array([ 1.,  0.,  0.])

vector_norm

dipy.core.gradients.vector_norm(vec, axis=-1, keepdims=False)

Return vector Euclidean (L2) norm

See unit vector and Euclidean norm

Parameters
vecarray_like

Vectors to norm.

axisint

Axis over which to norm. By default norm over last axis. If axis is None, vec is flattened then normed.

keepdimsbool

If True, the output will have the same number of dimensions as vec, with shape 1 on axis.

Returns
normarray

Euclidean norms of vectors.

Examples

>>> import numpy as np
>>> vec = [[8, 15, 0], [0, 36, 77]]
>>> vector_norm(vec)
array([ 17.,  85.])
>>> vector_norm(vec, keepdims=True)
array([[ 17.],
       [ 85.]])
>>> vector_norm(vec, axis=0)
array([  8.,  39.,  77.])

warn

dipy.core.gradients.warn(message, category=None, stacklevel=1, source=None)

Issue a warning, or maybe ignore it or raise an exception.

Graph

class dipy.core.graph.Graph

Bases: object

A simple graph class

Methods

add_edge

add_node

all_paths

children

del_node

del_node_and_edges

down

down_short

parents

shortest_path

up

up_short

__init__()

A graph class with nodes and edges :-)

This class allows us to:

  1. find the shortest path

  2. find all paths

  3. add/delete nodes and edges

  4. get parent & children nodes

Examples

>>> from dipy.core.graph import Graph
>>> g=Graph()
>>> g.add_node('a',5)
>>> g.add_node('b',6)
>>> g.add_node('c',10)
>>> g.add_node('d',11)
>>> g.add_edge('a','b')
>>> g.add_edge('b','c')
>>> g.add_edge('c','d')
>>> g.add_edge('b','d')
>>> g.up_short('d')
['d', 'b', 'a']
add_edge(n, m, ws=True, wp=True)
add_node(n, attr=None)
all_paths(graph, start, end=None, path=[])
children(n)
del_node(n)
del_node_and_edges(n)
down(n)
down_short(n)
parents(n)
shortest_path(graph, start, end=None, path=[])
up(n)
up_short(n)

histeq

dipy.core.histeq.histeq(arr, num_bins=256)

Performs a histogram equalization on arr. This was taken from: http://www.janeriksolem.net/2009/06/histogram-equalization-with-python-and.html

Parameters
arrndarray

Image on which to perform histogram equalization.

num_binsint

Number of bins used to construct the histogram.

Returns
resultndarray

Histogram equalized image.
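
A minimal sketch on synthetic data, assuming only that the input is a grayscale array:

import numpy as np
from dipy.core.histeq import histeq

rng = np.random.default_rng(0)
img = rng.gamma(shape=2.0, scale=30.0, size=(64, 64))  # skewed intensities
img_eq = histeq(img, num_bins=256)                     # same shape, flatter histogram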

Interpolator

class dipy.core.interpolation.Interpolator(data, voxel_size)

Bases: object

Class to be subclassed by different interpolator types

__init__(data, voxel_size)

NearestNeighborInterpolator

class dipy.core.interpolation.NearestNeighborInterpolator(data, voxel_size)

Bases: dipy.core.interpolation.Interpolator

Interpolates data using nearest neighbor interpolation

__init__(data, voxel_size)

OutsideImage

class dipy.core.interpolation.OutsideImage

Bases: Exception

Attributes
args

Methods

with_traceback

Exception.with_traceback(tb) -- set self.__traceback__ to tb and return self.

__init__(*args, **kwargs)

TriLinearInterpolator

class dipy.core.interpolation.TriLinearInterpolator(data, voxel_size)

Bases: dipy.core.interpolation.Interpolator

Interpolates data using trilinear interpolation

Interpolate a 4D diffusion volume using 3 indices, i.e., data[x, y, z].

__init__(data, voxel_size)
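
A sketch of the indexing-style access described above; the coordinates are in the same real-world units as voxel_size, and points outside the volume raise OutsideImage:

import numpy as np
from dipy.core.interpolation import TriLinearInterpolator

data = np.random.rand(10, 10, 10, 5)                  # (I, J, K, T) volume
interp = TriLinearInterpolator(data, voxel_size=(2., 2., 2.))
vec = interp[3.7, 4.2, 5.9]                           # interpolated length-T vector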

interp_rbf

dipy.core.interpolation.interp_rbf()

Interpolate data on the sphere, using radial basis functions.

Parameters
data(N,) ndarray

Function values on the unit sphere.

sphere_originSphere

Positions of data values.

sphere_targetSphere

M target positions for which to interpolate.

function{‘multiquadric’, ‘inverse’, ‘gaussian’}

Radial basis function.

epsilonfloat

Radial basis function spread parameter. Defaults to approximate average distance between nodes (a good start).
smoothfloat

Values greater than zero increase the smoothness of the approximation, with 0 as pure interpolation. Default: 0.1

normstr

A string indicating the function that returns the “distance” between two points: ‘angle’ (the angle between two vectors) or ‘euclidean_norm’ (the Euclidean distance).

Returns
v(M,) ndarray

Interpolated values.

See also

scipy.interpolate.Rbf
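
A minimal sketch, resampling a scalar function defined on one sphere onto a denser one (the octahedron and icosahedron shipped with dipy.core.sphere are used here only as convenient vertex sets):

import numpy as np
from dipy.core.sphere import unit_octahedron, unit_icosahedron
from dipy.core.interpolation import interp_rbf

data = np.abs(unit_octahedron.vertices[:, 2])     # one value per origin vertex
values = interp_rbf(data, unit_octahedron, unit_icosahedron)
# values has one entry per vertex of the target sphere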

interpolate_scalar_2d

dipy.core.interpolation.interpolate_scalar_2d(image, locations)

Bilinear interpolation of a 2D scalar image

Interpolates the 2D image at the given locations. This function is a wrapper for _interpolate_scalar_2d for testing purposes; it is equivalent to scipy.ndimage.map_coordinates with bilinear interpolation.

Parameters
fieldarray, shape (S, R)

the 2D image to be interpolated

locationsarray, shape (n, 2)

(locations[i,0], locations[i,1]), 0<=i<n must contain the row and column coordinates to interpolate the image at

Returns
outarray, shape (n,)

out[i], 0<=i<n will be the interpolated scalar at coordinates locations[i,:], or 0 if locations[i,:] is outside the image

insidearray, (n,)

if locations[i,:] is inside the image then inside[i]=1, else inside[i]=0

interpolate_scalar_3d

dipy.core.interpolation.interpolate_scalar_3d(image, locations)

Trilinear interpolation of a 3D scalar image

Interpolates the 3D image at the given locations. This function is a wrapper for _interpolate_scalar_3d for testing purposes; it is equivalent to scipy.ndimage.map_coordinates with trilinear interpolation.

Parameters
fieldarray, shape (S, R, C)

the 3D image to be interpolated

locationsarray, shape (n, 3)

(locations[i,0], locations[i,1], locations[i,2]), 0<=i<n must contain the coordinates to interpolate the image at

Returns
outarray, shape (n,)

out[i], 0<=i<n will be the interpolated scalar at coordinates locations[i,:], or 0 if locations[i,:] is outside the image

insidearray, (n,)

if locations[i,:] is inside the image then inside[i]=1, else inside[i]=0
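
A minimal sketch; the third location falls outside the image, so its value and inside flag are expected to be 0:

import numpy as np
from dipy.core.interpolation import interpolate_scalar_3d

image = np.arange(27, dtype=np.float64).reshape(3, 3, 3)
locations = np.array([[1.0, 1.0, 1.0],
                      [0.5, 0.5, 0.5],
                      [5.0, 5.0, 5.0]])
values, inside = interpolate_scalar_3d(image, locations)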

interpolate_scalar_nn_2d

dipy.core.interpolation.interpolate_scalar_nn_2d(image, locations)

Nearest neighbor interpolation of a 2D scalar image

Interpolates the 2D image at the given locations. This function is a wrapper for _interpolate_scalar_nn_2d for testing purposes; it is equivalent to scipy.ndimage.map_coordinates with nearest neighbor interpolation.

Parameters
imagearray, shape (S, R)

the 2D image to be interpolated

locationsarray, shape (n, 2)

(locations[i,0], locations[i,1]), 0<=i<n must contain the row and column coordinates to interpolate the image at

Returns
outarray, shape (n,)

out[i], 0<=i<n will be the interpolated scalar at coordinates locations[i,:], or 0 if locations[i,:] is outside the image

insidearray, (n,)

if locations[i,:] is inside the image then inside[i]=1, else inside[i]=0

interpolate_scalar_nn_3d

dipy.core.interpolation.interpolate_scalar_nn_3d(image, locations)

Nearest neighbor interpolation of a 3D scalar image

Interpolates the 3D image at the given locations. This function is a wrapper for _interpolate_scalar_nn_3d for testing purposes; it is equivalent to scipy.ndimage.map_coordinates with nearest neighbor interpolation.

Parameters
imagearray, shape (S, R, C)

the 3D image to be interpolated

locationsarray, shape (n, 3)

(locations[i,0], locations[i,1], locations[i,2]), 0<=i<n must contain the coordinates to interpolate the image at

Returns
outarray, shape (n,)

out[i], 0<=i<n will be the interpolated scalar at coordinates locations[i,:], or 0 if locations[i,:] is outside the image

insidearray, (n,)

if locations[i,:] is inside the image then inside[i]=1, else inside[i]=0

interpolate_vector_2d

dipy.core.interpolation.interpolate_vector_2d(field, locations)

Bilinear interpolation of a 2D vector field

Interpolates the 2D vector field at the given locations. This function is a wrapper for _interpolate_vector_2d for testing purposes; it is equivalent to using scipy.ndimage.map_coordinates with bilinear interpolation at each vector component.

Parameters
fieldarray, shape (S, R, 2)

the 2D vector field to be interpolated

locationsarray, shape (n, 2)

(locations[i,0], locations[i,1]), 0<=i<n must contain the row and column coordinates to interpolate the vector field at

Returns
outarray, shape (n, 2)

out[i,:], 0<=i<n will be the interpolated vector at coordinates locations[i,:], or (0,0) if locations[i,:] is outside the field

insidearray, (n,)

if (locations[i,0], locations[i,1]) is inside the vector field then inside[i]=1, else inside[i]=0

interpolate_vector_3d

dipy.core.interpolation.interpolate_vector_3d(field, locations)

Trilinear interpolation of a 3D vector field

Interpolates the 3D vector field at the given locations. This function is a wrapper for _interpolate_vector_3d for testing purposes; it is equivalent to using scipy.ndimage.map_coordinates with trilinear interpolation at each vector component.

Parameters
fieldarray, shape (S, R, C, 3)

the 3D vector field to be interpolated

locationsarray, shape (n, 3)

(locations[i,0], locations[i,1], locations[i,2]), 0<=i<n must contain the coordinates to interpolate the vector field at

Returns
outarray, shape (n, 3)

out[i,:], 0<=i<n will be the interpolated vector at coordinates locations[i,:], or (0,0,0) if locations[i,:] is outside the field

insidearray, (n,)

if locations[i,:] is inside the vector field then inside[i]=1, else inside[i]=0

map_coordinates_trilinear_iso

dipy.core.interpolation.map_coordinates_trilinear_iso()

Trilinear interpolation (isotropic voxel size)

Has similar behavior to map_coordinates from scipy.ndimage

Parameters
dataarray, float64 shape (X, Y, Z)
pointsarray, float64 shape(N, 3)
data_stridesarray npy_intp shape (3,)

Strides sequence for data array

len_pointscnp.npy_intp

Number of points to interpolate

resultarray, float64 shape(N)

The output array. This array should be initialized before you call this function. On exit it will contain the interpolated values from data at points given by points.

Returns
None

Notes

The output array result is filled in-place.

nearestneighbor_interpolate

dipy.core.interpolation.nearestneighbor_interpolate()

trilinear_interp

dipy.core.interpolation.trilinear_interp(data, index, voxel_size)

Interpolates vector from 4D data at 3D point given by index

Interpolates a vector of length T from a 4D volume of shape (I, J, K, T), given point (x, y, z) where (x, y, z) are the coordinates of the point in real units (not yet adjusted for voxel size).

trilinear_interpolate4d

dipy.core.interpolation.trilinear_interpolate4d()

Tri-linear interpolation along the last dimension of a 4d array

Parameters
point1d array (3,)

3 doubles representing a 3d point in space. If point has integer values [i, j, k], the result will be the same as data[i, j, k].

data4d array

Data to be interpolated.

out1d array, optional

The output array for the result of the interpolation.

Returns
out1d array

The result of interpolation.

as_strided

dipy.core.ndindex.as_strided(x, shape=None, strides=None, subok=False, writeable=True)

Create a view into the array with the given shape and strides.

Warning

This function has to be used with extreme care, see notes.

Parameters
xndarray

Array from which to create the new view.

shapesequence of int, optional

The shape of the new array. Defaults to x.shape.

stridessequence of int, optional

The strides of the new array. Defaults to x.strides.

subokbool, optional

New in version 1.10.

If True, subclasses are preserved.

writeablebool, optional

New in version 1.12.

If set to False, the returned array will always be readonly. Otherwise it will be writable if the original array was. It is advisable to set this to False if possible (see Notes).

Returns
viewndarray

See also

broadcast_to

broadcast an array to a given shape.

reshape

reshape an array.

lib.stride_tricks.sliding_window_view

user-friendly and safe function for the creation of sliding window views.

Notes

as_strided creates a view into the array given the exact strides and shape. This means it manipulates the internal data structure of ndarray and, if done incorrectly, the array elements can point to invalid memory and can corrupt results or crash your program. It is advisable to always use the original x.strides when calculating new strides to avoid reliance on a contiguous memory layout.

Furthermore, arrays created with this function often contain self overlapping memory, so that two elements are identical. Vectorized write operations on such arrays will typically be unpredictable. They may even give different results for small, large, or transposed arrays. Since writing to these arrays has to be tested and done with great care, you may want to use writeable=False to avoid accidental write operations.

For these reasons it is advisable to avoid as_strided when possible.

ndindex

dipy.core.ndindex.ndindex(shape)

An N-dimensional iterator object to index arrays.

Given the shape of an array, an ndindex instance iterates over the N-dimensional index of the array. At each iteration a tuple of indices is returned; the last dimension is iterated over first.

Parameters
shapetuple of ints

The dimensions of the array.

Examples

>>> from dipy.core.ndindex import ndindex
>>> shape = (3, 2, 1)
>>> for index in ndindex(shape):
...     print(index)
(0, 0, 0)
(0, 1, 0)
(1, 0, 0)
(1, 1, 0)
(2, 0, 0)
(2, 1, 0)

OneTimeProperty

class dipy.core.onetime.OneTimeProperty(func)

Bases: object

A descriptor to make special properties that become normal attributes.

This is meant to be used mostly by the auto_attr decorator in this module.

__init__(func)

Create a OneTimeProperty instance.

Parameters
funcmethod
The method that will be called the first time to compute a value.
Afterwards, the method’s name will be a standard attribute holding
the value of this computation.

ResetMixin

class dipy.core.onetime.ResetMixin

Bases: object

A Mixin class to add a .reset() method to users of OneTimeProperty.

By default, auto attributes, once computed, become static. If they happen to depend on other parts of an object and those parts change, their values may now be invalid.

This class offers a .reset() method that users can call explicitly when they know the state of their objects may have changed and they want to ensure that all their special attributes are invalidated. Once reset() is called, all their auto attributes are reset to their OneTimeProperty descriptors, and their accessor functions will be triggered again.

Warning

If a class has a set of attributes that are OneTimeProperty, but that can be initialized from any one of them, do NOT use this mixin! For instance, UniformTimeSeries can be initialized with only sampling_rate and t0, sampling_interval and time are auto-computed. But if you were to reset() a UniformTimeSeries, it would lose all 4, and there would be then no way to break the circular dependency chains.

If this becomes a problem in practice (for our analyzer objects it isn’t, as they don’t have the above pattern), we can extend reset() to check for a _no_reset set of names in the instance which are meant to be kept protected. But for now this is NOT done, so caveat emptor.

Examples

>>> class A(ResetMixin):
...     def __init__(self,x=1.0):
...         self.x = x
...
...     @auto_attr
...     def y(self):
...         print('*** y computation executed ***')
...         return self.x / 2.0
...
>>> a = A(10)

About to access y twice; the second time, no computation is done:

>>> a.y
*** y computation executed ***
5.0
>>> a.y
5.0

Changing x:

>>> a.x = 20

a.y doesn’t change to 10, since it is a static attribute:

>>> a.y
5.0

We now reset a, and this will then force all auto attributes to recompute the next time we access them:

>>> a.reset()

About to access y twice again after reset():

>>> a.y
*** y computation executed ***
10.0
>>> a.y
10.0

Methods

reset()

Reset all OneTimeProperty attributes that may have fired already.

__init__(*args, **kwargs)
reset()

Reset all OneTimeProperty attributes that may have fired already.

auto_attr

dipy.core.onetime.auto_attr(func)

Decorator to create OneTimeProperty attributes.

Parameters
funcmethod

The method that will be called the first time to compute a value. Afterwards, the method’s name will be a standard attribute holding the value of this computation.

Examples

>>> class MagicProp(object):
...     @auto_attr
...     def a(self):
...         return 99
...
>>> x = MagicProp()
>>> 'a' in x.__dict__
False
>>> x.a
99
>>> 'a' in x.__dict__
True

NonNegativeLeastSquares

class dipy.core.optimize.NonNegativeLeastSquares(*args, **kwargs)

Bases: dipy.core.optimize.SKLearnLinearSolver

A sklearn-like interface to scipy.optimize.nnls

Methods

fit(X, y)

Fit the NonNegativeLeastSquares linear model to data

predict(X)

Predict using the result of the model

__init__(*args, **kwargs)
fit(X, y)

Fit the NonNegativeLeastSquares linear model to data

Parameters

Optimizer

class dipy.core.optimize.Optimizer(fun, x0, args=(), method='L-BFGS-B', jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None, evolution=False)

Bases: object

Attributes
evolution
fopt
message
nfev
nit
xopt

Methods

print_summary

__init__(fun, x0, args=(), method='L-BFGS-B', jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None, evolution=False)

A class for handling minimization of scalar function of one or more variables.

Parameters
funcallable

Objective function.

x0ndarray

Initial guess.

argstuple, optional

Extra arguments passed to the objective function and its derivatives (Jacobian, Hessian).

methodstr, optional

Type of solver. Should be one of

  • ‘Nelder-Mead’

  • ‘Powell’

  • ‘CG’

  • ‘BFGS’

  • ‘Newton-CG’

  • ‘Anneal’

  • ‘L-BFGS-B’

  • ‘TNC’

  • ‘COBYLA’

  • ‘SLSQP’

  • ‘dogleg’

  • ‘trust-ncg’

jacbool or callable, optional

Jacobian of objective function. Only for CG, BFGS, Newton-CG, dogleg, trust-ncg. If jac is a Boolean and is True, fun is assumed to return the value of Jacobian along with the objective function. If False, the Jacobian will be estimated numerically. jac can also be a callable returning the Jacobian of the objective. In this case, it must accept the same arguments as fun.

hess, hesspcallable, optional

Hessian of objective function or Hessian of objective function times an arbitrary vector p. Only for Newton-CG, dogleg, trust-ncg. Only one of hessp or hess needs to be given. If hess is provided, then hessp will be ignored. If neither hess nor hessp is provided, then the hessian product will be approximated using finite differences on jac. hessp must compute the Hessian times an arbitrary vector.

boundssequence, optional

Bounds for variables (only for L-BFGS-B, TNC and SLSQP). (min, max) pairs for each element in x, defining the bounds on that parameter. Use None for one of min or max when there is no bound in that direction.

constraintsdict or sequence of dict, optional

Constraints definition (only for COBYLA and SLSQP). Each constraint is defined in a dictionary with fields:

typestr

Constraint type: ‘eq’ for equality, ‘ineq’ for inequality.

funcallable

The function defining the constraint.

jaccallable, optional

The Jacobian of fun (only for SLSQP).

argssequence, optional

Extra arguments to be passed to the function and Jacobian.

Equality constraint means that the constraint function result is to be zero whereas inequality means that it is to be non-negative. Note that COBYLA only supports inequality constraints.

tolfloat, optional

Tolerance for termination. For detailed control, use solver-specific options.

callbackcallable, optional

Called after each iteration, as callback(xk), where xk is the current parameter vector. Only available using Scipy >= 0.12.

optionsdict, optional

A dictionary of solver options. All methods accept the following generic options:

maxiterint

Maximum number of iterations to perform.

dispbool

Set to True to print convergence messages.

For method-specific options, see show_options(‘minimize’, method).

evolutionbool, optional

Save the history of x for each iteration. Only available using Scipy >= 0.12.

See also

scipy.optimize.minimize
property evolution
property fopt
property message
property nfev
property nit
print_summary()
property xopt
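
A minimal sketch, minimizing a simple quadratic with the default L-BFGS-B method:

import numpy as np
from dipy.core.optimize import Optimizer

def f(x):
    return np.sum((x - np.array([1.0, 2.0])) ** 2)

opt = Optimizer(f, x0=np.zeros(2))
opt.print_summary()
# opt.xopt is expected to be close to [1, 2] and opt.fopt close to 0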

SKLearnLinearSolver

class dipy.core.optimize.SKLearnLinearSolver(*args, **kwargs)

Bases: object

Provide a sklearn-like uniform interface to algorithms that solve problems of the form: \(y = Ax\) for \(x\)

Sub-classes of SKLearnLinearSolver should provide a ‘fit’ method that has the following signature: SKLearnLinearSolver.fit(X, y), which would set an attribute SKLearnLinearSolver.coef_, with the shape (X.shape[1],), such that an estimate of y can be calculated as: y_hat = np.dot(X, SKLearnLinearSolver.coef_.T)

Methods

fit(X, y)

Implement for all derived classes

predict(X)

Predict using the result of the model

__init__(*args, **kwargs)
abstract fit(X, y)

Implement for all derived classes

predict(X)

Predict using the result of the model

Parameters
Xarray-like (n_samples, n_features)

Samples.

Returns
Carray, shape = (n_samples,)

Predicted values.
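
A sketch of a hypothetical concrete subclass following the contract above (ordinary least squares via numpy; the class name is only illustrative):

import numpy as np
from dipy.core.optimize import SKLearnLinearSolver

class LstsqSolver(SKLearnLinearSolver):
    """Hypothetical example: ordinary least squares via np.linalg.lstsq."""

    def fit(self, X, y):
        self.coef_, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
        return self

# predict is inherited from SKLearnLinearSolver: y_hat = np.dot(X, coef_)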

minimize

dipy.core.optimize.minimize(fun, x0, args=(), method=None, jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None)

Minimization of scalar function of one or more variables.

Parameters
funcallable

The objective function to be minimized.

fun(x, *args) -> float

where x is an 1-D array with shape (n,) and args is a tuple of the fixed parameters needed to completely specify the function.

x0ndarray, shape (n,)

Initial guess. Array of real elements of size (n,), where ‘n’ is the number of independent variables.

argstuple, optional

Extra arguments passed to the objective function and its derivatives (fun, jac and hess functions).

methodstr or callable, optional

Type of solver. Should be one of

  • ‘Nelder-Mead’ (see here)

  • ‘Powell’ (see here)

  • ‘CG’ (see here)

  • ‘BFGS’ (see here)

  • ‘Newton-CG’ (see here)

  • ‘L-BFGS-B’ (see here)

  • ‘TNC’ (see here)

  • ‘COBYLA’ (see here)

  • ‘SLSQP’ (see here)

  • ‘trust-constr’(see here)

  • ‘dogleg’ (see here)

  • ‘trust-ncg’ (see here)

  • ‘trust-exact’ (see here)

  • ‘trust-krylov’ (see here)

  • custom - a callable object (added in version 0.14.0), see below for description.

If not given, chosen to be one of BFGS, L-BFGS-B, SLSQP, depending if the problem has constraints or bounds.

jac{callable, ‘2-point’, ‘3-point’, ‘cs’, bool}, optional

Method for computing the gradient vector. Only for CG, BFGS, Newton-CG, L-BFGS-B, TNC, SLSQP, dogleg, trust-ncg, trust-krylov, trust-exact and trust-constr. If it is a callable, it should be a function that returns the gradient vector:

jac(x, *args) -> array_like, shape (n,)

where x is an array with shape (n,) and args is a tuple with the fixed parameters. If jac is a Boolean and is True, fun is assumed to return a tuple (f, g) containing the objective function and the gradient. Methods ‘Newton-CG’, ‘trust-ncg’, ‘dogleg’, ‘trust-exact’, and ‘trust-krylov’ require that either a callable be supplied, or that fun return the objective and gradient. If None or False, the gradient will be estimated using 2-point finite difference estimation with an absolute step size. Alternatively, the keywords {‘2-point’, ‘3-point’, ‘cs’} can be used to select a finite difference scheme for numerical estimation of the gradient with a relative step size. These finite difference schemes obey any specified bounds.

hess{callable, ‘2-point’, ‘3-point’, ‘cs’, HessianUpdateStrategy}, optional

Method for computing the Hessian matrix. Only for Newton-CG, dogleg, trust-ncg, trust-krylov, trust-exact and trust-constr. If it is callable, it should return the Hessian matrix:

hess(x, *args) -> {LinearOperator, spmatrix, array}, (n, n)

where x is a (n,) ndarray and args is a tuple with the fixed parameters. LinearOperator and sparse matrix returns are only allowed for ‘trust-constr’ method. Alternatively, the keywords {‘2-point’, ‘3-point’, ‘cs’} select a finite difference scheme for numerical estimation. Or, objects implementing the HessianUpdateStrategy interface can be used to approximate the Hessian. Available quasi-Newton methods implementing this interface are:

  • BFGS;

  • SR1.

Whenever the gradient is estimated via finite-differences, the Hessian cannot be estimated with options {‘2-point’, ‘3-point’, ‘cs’} and needs to be estimated using one of the quasi-Newton strategies. ‘trust-exact’ cannot use a finite-difference scheme, and must be used with a callable returning an (n, n) array.

hesspcallable, optional

Hessian of objective function times an arbitrary vector p. Only for Newton-CG, trust-ncg, trust-krylov, trust-constr. Only one of hessp or hess needs to be given. If hess is provided, then hessp will be ignored. hessp must compute the Hessian times an arbitrary vector:

hessp(x, p, *args) ->  ndarray shape (n,)

where x is a (n,) ndarray, p is an arbitrary vector with dimension (n,) and args is a tuple with the fixed parameters.

boundssequence or Bounds, optional

Bounds on variables for Nelder-Mead, L-BFGS-B, TNC, SLSQP, Powell, and trust-constr methods. There are two ways to specify the bounds:

  1. Instance of Bounds class.

  2. Sequence of (min, max) pairs for each element in x. None is used to specify no bound.

constraints{Constraint, dict} or List of {Constraint, dict}, optional

Constraints definition (only for COBYLA, SLSQP and trust-constr).

Constraints for ‘trust-constr’ are defined as a single object or a list of objects specifying constraints to the optimization problem. Available constraints are:

  • LinearConstraint

  • NonlinearConstraint

Constraints for COBYLA, SLSQP are defined as a list of dictionaries. Each dictionary with fields:

typestr

Constraint type: ‘eq’ for equality, ‘ineq’ for inequality.

funcallable

The function defining the constraint.

jaccallable, optional

The Jacobian of fun (only for SLSQP).

argssequence, optional

Extra arguments to be passed to the function and Jacobian.

Equality constraint means that the constraint function result is to be zero whereas inequality means that it is to be non-negative. Note that COBYLA only supports inequality constraints.

tolfloat, optional

Tolerance for termination. When tol is specified, the selected minimization algorithm sets some relevant solver-specific tolerance(s) equal to tol. For detailed control, use solver-specific options.

optionsdict, optional

A dictionary of solver options. All methods accept the following generic options:

maxiterint

Maximum number of iterations to perform. Depending on the method each iteration may use several function evaluations.

dispbool

Set to True to print convergence messages.

For method-specific options, see show_options().

callbackcallable, optional

Called after each iteration. For ‘trust-constr’ it is a callable with the signature:

callback(xk, OptimizeResult state) -> bool

where xk is the current parameter vector. and state is an OptimizeResult object, with the same fields as the ones from the return. If callback returns True the algorithm execution is terminated. For all the other methods, the signature is:

callback(xk)

where xk is the current parameter vector.

Returns
resOptimizeResult

The optimization result represented as a OptimizeResult object. Important attributes are: x the solution array, success a Boolean flag indicating if the optimizer exited successfully and message which describes the cause of the termination. See OptimizeResult for a description of other attributes.

See also

minimize_scalar

Interface to minimization algorithms for scalar univariate functions

show_options

Additional options accepted by the solvers

Notes

This section describes the available solvers that can be selected by the ‘method’ parameter. The default method is BFGS.

Unconstrained minimization

Method CG uses a nonlinear conjugate gradient algorithm by Polak and Ribiere, a variant of the Fletcher-Reeves method described in [5] pp.120-122. Only the first derivatives are used.

Method BFGS uses the quasi-Newton method of Broyden, Fletcher, Goldfarb, and Shanno (BFGS) [5] pp. 136. It uses the first derivatives only. BFGS has proven good performance even for non-smooth optimizations. This method also returns an approximation of the Hessian inverse, stored as hess_inv in the OptimizeResult object.

Method Newton-CG uses a Newton-CG algorithm [5] pp. 168 (also known as the truncated Newton method). It uses a CG method to compute the search direction. See also TNC method for a box-constrained minimization with a similar algorithm. Suitable for large-scale problems.

Method dogleg uses the dog-leg trust-region algorithm [5] for unconstrained minimization. This algorithm requires the gradient and Hessian; furthermore the Hessian is required to be positive definite.

Method trust-ncg uses the Newton conjugate gradient trust-region algorithm [5] for unconstrained minimization. This algorithm requires the gradient and either the Hessian or a function that computes the product of the Hessian with a given vector. Suitable for large-scale problems.

Method trust-krylov uses the Newton GLTR trust-region algorithm [14], [15] for unconstrained minimization. This algorithm requires the gradient and either the Hessian or a function that computes the product of the Hessian with a given vector. Suitable for large-scale problems. On indefinite problems it requires usually less iterations than the trust-ncg method and is recommended for medium and large-scale problems.

Method trust-exact is a trust-region method for unconstrained minimization in which quadratic subproblems are solved almost exactly [13]. This algorithm requires the gradient and the Hessian (which is not required to be positive definite). It is, in many situations, the Newton method that converges in fewer iterations and is the most recommended for small and medium-size problems.

Bound-Constrained minimization

Method Nelder-Mead uses the Simplex algorithm [1], [2]. This algorithm is robust in many applications. However, if numerical computation of derivative can be trusted, other algorithms using the first and/or second derivatives information might be preferred for their better performance in general.

Method L-BFGS-B uses the L-BFGS-B algorithm [6], [7] for bound constrained minimization.

Method Powell is a modification of Powell’s method [3], [4] which is a conjugate direction method. It performs sequential one-dimensional minimizations along each vector of the directions set (direc field in options and info), which is updated at each iteration of the main minimization loop. The function need not be differentiable, and no derivatives are taken. If bounds are not provided, then an unbounded line search will be used. If bounds are provided and the initial guess is within the bounds, then every function evaluation throughout the minimization procedure will be within the bounds. If bounds are provided, the initial guess is outside the bounds, and direc is full rank (default has full rank), then some function evaluations during the first iteration may be outside the bounds, but every function evaluation after the first iteration will be within the bounds. If direc is not full rank, then some parameters may not be optimized and the solution is not guaranteed to be within the bounds.

Method TNC uses a truncated Newton algorithm [5], [8] to minimize a function with variables subject to bounds. This algorithm uses gradient information; it is also called Newton Conjugate-Gradient. It differs from the Newton-CG method described above as it wraps a C implementation and allows each variable to be given upper and lower bounds.

Constrained Minimization

Method COBYLA uses the Constrained Optimization BY Linear Approximation (COBYLA) method [9], [10], [11]. The algorithm is based on linear approximations to the objective function and each constraint. The method wraps a FORTRAN implementation of the algorithm. The constraints functions ‘fun’ may return either a single number or an array or list of numbers.

Method SLSQP uses Sequential Least SQuares Programming to minimize a function of several variables with any combination of bounds, equality and inequality constraints. The method wraps the SLSQP Optimization subroutine originally implemented by Dieter Kraft [12]. Note that the wrapper handles infinite values in bounds by converting them into large floating values.

Method trust-constr is a trust-region algorithm for constrained optimization. It switches between two implementations depending on the problem definition. It is the most versatile constrained minimization algorithm implemented in SciPy and the most appropriate for large-scale problems. For equality constrained problems it is an implementation of Byrd-Omojokun Trust-Region SQP method described in [17] and in [5], p. 549. When inequality constraints are imposed as well, it switches to the trust-region interior point method described in [16]. This interior point algorithm, in turn, solves inequality constraints by introducing slack variables and solving a sequence of equality-constrained barrier problems for progressively smaller values of the barrier parameter. The previously described equality constrained SQP method is used to solve the subproblems with increasing levels of accuracy as the iterate gets closer to a solution.

Finite-Difference Options

For Method trust-constr the gradient and the Hessian may be approximated using three finite-difference schemes: {‘2-point’, ‘3-point’, ‘cs’}. The scheme ‘cs’ is, potentially, the most accurate, but it requires the function to correctly handle complex inputs and to be differentiable in the complex plane. The scheme ‘3-point’ is more accurate than ‘2-point’ but requires twice as many operations.

Custom minimizers

It may be useful to pass a custom minimization method, for example when using a frontend to this method such as scipy.optimize.basinhopping or a different library. You can simply pass a callable as the method parameter.

The callable is called as method(fun, x0, args, **kwargs, **options) where kwargs corresponds to any other parameters passed to minimize (such as callback, hess, etc.), except the options dict, which has its contents also passed as method parameters pair by pair. Also, if jac has been passed as a bool type, jac and fun are mangled so that fun returns just the function values and jac is converted to a function returning the Jacobian. The method shall return an OptimizeResult object.

The provided method callable must be able to accept (and possibly ignore) arbitrary parameters; the set of parameters accepted by minimize may expand in future versions and then these parameters will be passed to the method. You can find an example in the scipy.optimize tutorial.

New in version 0.11.0.

References

1

Nelder, J A, and R Mead. 1965. A Simplex Method for Function Minimization. The Computer Journal 7: 308-13.

2

Wright M H. 1996. Direct search methods: Once scorned, now respectable, in Numerical Analysis 1995: Proceedings of the 1995 Dundee Biennial Conference in Numerical Analysis (Eds. D F Griffiths and G A Watson). Addison Wesley Longman, Harlow, UK. 191-208.

3

Powell, M J D. 1964. An efficient method for finding the minimum of a function of several variables without calculating derivatives. The Computer Journal 7: 155-162.

4

Press W, S A Teukolsky, W T Vetterling and B P Flannery. Numerical Recipes (any edition), Cambridge University Press.

5(1,2,3,4,5,6,7,8)

Nocedal, J, and S J Wright. 2006. Numerical Optimization. Springer New York.

6

Byrd, R H and P Lu and J. Nocedal. 1995. A Limited Memory Algorithm for Bound Constrained Optimization. SIAM Journal on Scientific and Statistical Computing 16 (5): 1190-1208.

7

Zhu, C and R H Byrd and J Nocedal. 1997. L-BFGS-B: Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization. ACM Transactions on Mathematical Software 23 (4): 550-560.

8

Nash, S G. Newton-Type Minimization Via the Lanczos Method. 1984. SIAM Journal of Numerical Analysis 21: 770-778.

9

Powell, M J D. A direct search optimization method that models the objective and constraint functions by linear interpolation. 1994. Advances in Optimization and Numerical Analysis, eds. S. Gomez and J-P Hennart, Kluwer Academic (Dordrecht), 51-67.

10

Powell M J D. Direct search algorithms for optimization calculations. 1998. Acta Numerica 7: 287-336.

11

Powell M J D. A view of algorithms for optimization without derivatives. 2007.Cambridge University Technical Report DAMTP 2007/NA03

12

Kraft, D. A software package for sequential quadratic programming. 1988. Tech. Rep. DFVLR-FB 88-28, DLR German Aerospace Center – Institute for Flight Mechanics, Koln, Germany.

13

Conn, A. R., Gould, N. I., and Toint, P. L. Trust region methods. 2000. Siam. pp. 169-200.

14

F. Lenders, C. Kirches, A. Potschka: “trlib: A vector-free implementation of the GLTR method for iterative solution of the trust region problem”, :arxiv:`1611.04718`

15

N. Gould, S. Lucidi, M. Roma, P. Toint: “Solving the Trust-Region Subproblem using the Lanczos Method”, SIAM J. Optim., 9(2), 504–525, (1999).

16

Byrd, Richard H., Mary E. Hribar, and Jorge Nocedal. 1999. An interior point algorithm for large-scale nonlinear programming. SIAM Journal on Optimization 9.4: 877-900.

17

Lalee, Marucha, Jorge Nocedal, and Todd Plantega. 1998. On the implementation of an algorithm for large-scale equality constrained optimization. SIAM Journal on Optimization 8.3: 682-706.

Examples

Let us consider the problem of minimizing the Rosenbrock function. This function (and its respective derivatives) is implemented in rosen (resp. rosen_der, rosen_hess) in the scipy.optimize.

>>> from scipy.optimize import minimize, rosen, rosen_der

A simple application of the Nelder-Mead method is:

>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> res = minimize(rosen, x0, method='Nelder-Mead', tol=1e-6)
>>> res.x
array([ 1.,  1.,  1.,  1.,  1.])

Now using the BFGS algorithm, using the first derivative and a few options:

>>> res = minimize(rosen, x0, method='BFGS', jac=rosen_der,
...                options={'gtol': 1e-6, 'disp': True})
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 26
         Function evaluations: 31
         Gradient evaluations: 31
>>> res.x
array([ 1.,  1.,  1.,  1.,  1.])
>>> print(res.message)
Optimization terminated successfully.
>>> res.hess_inv
array([[ 0.00749589,  0.01255155,  0.02396251,  0.04750988,  0.09495377],  # may vary
       [ 0.01255155,  0.02510441,  0.04794055,  0.09502834,  0.18996269],
       [ 0.02396251,  0.04794055,  0.09631614,  0.19092151,  0.38165151],
       [ 0.04750988,  0.09502834,  0.19092151,  0.38341252,  0.7664427 ],
       [ 0.09495377,  0.18996269,  0.38165151,  0.7664427,   1.53713523]])

Next, consider a minimization problem with several constraints (namely Example 16.4 from [5]). The objective function is:

>>> fun = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2

There are three constraints defined as:

>>> cons = ({'type': 'ineq', 'fun': lambda x:  x[0] - 2 * x[1] + 2},
...         {'type': 'ineq', 'fun': lambda x: -x[0] - 2 * x[1] + 6},
...         {'type': 'ineq', 'fun': lambda x: -x[0] + 2 * x[1] + 2})

And variables must be positive, hence the following bounds:

>>> bnds = ((0, None), (0, None))

The optimization problem is solved using the SLSQP method as:

>>> res = minimize(fun, (2, 0), method='SLSQP', bounds=bnds,
...                constraints=cons)

It should converge to the theoretical solution (1.4, 1.7).

sparse_nnls

dipy.core.optimize.sparse_nnls(y, X, momentum=1, step_size=0.01, non_neg=True, check_error_iter=10, max_error_checks=10, converge_on_sse=0.99)

Solve y=Xh for h, using gradient descent, with X a sparse matrix.

Parameters
y1-d array of shape (N)

The data. Needs to be dense.

Xndarray. May be either sparse or dense. Shape (N, M)

The regressors

momentumfloat, optional (default: 1).

The persistence of the gradient.

step_sizefloat, optional (default: 0.01).

The increment of parameter update in each iteration

non_negBoolean, optional (default: True)

Whether to enforce non-negativity of the solution.

check_error_iterint (default:10)

How many rounds to run between error evaluation for convergence-checking.

max_error_checksint (default: 10)

Don’t check errors more than this number of times if no improvement in r-squared is seen.

converge_on_ssefloat (default: 0.99)

a percentage improvement in SSE that is required each time to say that things are still going well.

Returns
h_bestThe best estimate of the parameters.
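
A minimal sketch with a sparse design matrix and a non-negative ground truth (the data are random and only illustrative):

import numpy as np
import scipy.sparse as sps
from dipy.core.optimize import sparse_nnls

rng = np.random.default_rng(0)
X = sps.csr_matrix(rng.random((100, 5)))
h_true = np.array([0.5, 0.0, 1.2, 0.0, 2.0])
y = X.dot(h_true)                      # dense (100,) data vector
h_best = sparse_nnls(y, X)             # gradient-descent estimate of h_true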

spdot

dipy.core.optimize.spdot(A, B)

The same as np.dot(A, B), except it works even if A or B or both are sparse matrices.

Parameters
A, Barrays of shape (m, n), (n, k)
Returns
The matrix product AB. If both A and B are sparse, the result will be a sparse matrix. Otherwise, a dense result is returned. See discussion here: http://mail.scipy.org/pipermail/scipy-user/2010-November/027700.html
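
A minimal sketch mixing a sparse and a dense operand; per the description above, the result is dense because only one operand is sparse:

import numpy as np
import scipy.sparse as sps
from dipy.core.optimize import spdot

A = sps.eye(3).tocsr()
B = np.arange(9.0).reshape(3, 3)
C = spdot(A, B)                        # dense result, numerically equal to B here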

Profiler

class dipy.core.profile.Profiler(call=None, *args)

Bases: object

Profile python/cython files or functions

If you are profiling cython code, you need to add # cython: profile=True at the top of your .pyx file

and for the functions that you do not want to profile, you can use this decorator in your cython files:

@cython.profile(False)

Parameters
callfile or function call
argsfunction arguments

References

http://docs.cython.org/src/tutorial/profiling_tutorial.html http://docs.python.org/library/profile.html http://packages.python.org/line_profiler/

Examples

from dipy.core.profile import Profiler
import numpy as np
p = Profiler(np.sum, np.random.rand(1000000, 3))
fname = 'test.py'
p = Profiler(fname)
p.print_stats(10)
p.print_stats('det')

Attributes
statsfunction, stats.print_stats(10) will print the 10 slowest functions

Methods

print_stats([N])

Print stats for profiling

__init__(call=None, *args)
print_stats(N=10)

Print stats for profiling

You can use it in all the different ways developed in pstats. For example, print_stats(10) will give you the 10 slowest calls, or print_stats('function_name') will give you the stats for all the calls with name 'function_name'.

Parameters
Nstats.print_stats argument

optional_package

dipy.core.profile.optional_package(name, trip_msg=None)

Return package-like thing and module setup for package name

Parameters
namestr

package name

trip_msgNone or str

message to give when someone tries to use the return package, but we could not import it, and have returned a TripWire object instead. Default message if None.

Returns
pkg_likemodule or TripWire instance

If we can import the package, return it. Otherwise return an object raising an error when accessed

have_pkgbool

True if import for package was successful, False otherwise

module_setupfunction

callable usually set as setup_module in calling namespace, to allow skipping tests.

Examples

Typical use would be something like this at the top of a module using an optional package:

>>> from dipy.utils.optpkg import optional_package
>>> pkg, have_pkg, setup_module = optional_package('not_a_package')

Of course in this case the package doesn’t exist, and so, in the module:

>>> have_pkg
False

and

>>> pkg.some_function() 
Traceback (most recent call last):
    ...
TripWireError: We need package not_a_package for these functions, but
``import not_a_package`` raised an ImportError

If the module does exist - we get the module

>>> pkg, _, _ = optional_package('os')
>>> hasattr(pkg, 'path')
True

Or a submodule if that’s what we asked for

>>> subpkg, _, _ = optional_package('os.path')
>>> hasattr(subpkg, 'dirname')
True

LEcuyer

dipy.core.rng.LEcuyer(s1=100001, s2=200002)

Return a LEcuyer random number generator.

Generate uniformly distributed random numbers using the 32-bit generator from figure 3 of:

L’Ecuyer, P. Efficient and portable combined random number generators, C.A.C.M., vol. 31, 742-749 & 774-?, June 1988.

The cycle length is claimed to be 2.30584E+18

Parameters
s1: int

First seed value. Should not be null. (default 100001)

s2: int

Second seed value. Should not be null. (default 200002)

Returns
r_numberfloat

pseudo-random number uniformly distributed in [0, 1]

Examples

>>> from dipy.core import rng
>>> N = 1000
>>> a = [rng.LEcuyer() for i in range(N)]

WichmannHill1982

dipy.core.rng.WichmannHill1982(ix=100001, iy=200002, iz=300003)

Algorithm AS 183 Appl. Statist. (1982) vol.31, no.2.

Returns a pseudo-random number rectangularly distributed between 0 and 1. The cycle length is 6.95E+12 (See page 123 of Applied Statistics (1984) vol.33), not as claimed in the original article.

ix, iy and iz should be set to integer values between 1 and 30000 before the first entry.

Integer arithmetic up to 5212632 is required.

Parameters
ix: int

First seed value. Should not be null. (default 100001)

iy: int

Second seed value. Should not be null. (default 200002)

iz: int

Third seed value. Should not be null. (default 300003)

Returns
r_numberfloat

pseudo-random number uniformly distributed in [0, 1]

Examples

>>> from dipy.core import rng
>>> N = 1000
>>> a = [rng.WichmannHill1982() for i in range(N)]

WichmannHill2006

dipy.core.rng.WichmannHill2006(ix=100001, iy=200002, iz=300003, it=400004)

Wichmann Hill (2006) random number generator.

B.A. Wichmann, I.D. Hill, Generating good pseudo-random numbers, Computational Statistics & Data Analysis, Volume 51, Issue 3, 1 December 2006, Pages 1614-1622, ISSN 0167-9473, DOI: 10.1016/j.csda.2006.05.019. (http://www.sciencedirect.com/science/article/B6V8V-4K7F86W-2/2/a3a33291b8264e4c882a8f21b6e43351) See the paper for advice on generating many sequences for use together, and on alternative algorithms and codes.

Parameters
ix: int

First seed value. Should not be null. (default 100001)

iy: int

Second seed value. Should not be null. (default 200002)

iz: int

Third seed value. Should not be null. (default 300003)

it: int

Fourth seed value. Should not be null. (default 400004)

Returns
r_numberfloat

pseudo-random number uniformly distributed in [0, 1]

Examples

>>> from dipy.core import rng
>>> N = 1000
>>> a = [rng.WichmannHill2006() for i in range(N)]

architecture

dipy.core.rng.architecture(executable=sys.executable, bits='', linkage='')

Queries the given executable (defaults to the Python interpreter binary) for various architecture information.

Returns a tuple (bits, linkage) which contains information about the bit architecture and the linkage format used for the executable. Both values are returned as strings.

Values that cannot be determined are returned as given by the parameter presets. If bits is given as ‘’, the sizeof(pointer) (or sizeof(long) on Python version < 1.5.2) is used as indicator for the supported pointer size.

The function relies on the system’s “file” command to do the actual work. This is available on most if not all Unix platforms. On some non-Unix platforms where the “file” command does not exist and the executable is set to the Python interpreter binary defaults from _default_architecture are used.

floor

dipy.core.rng.floor(x, /)

Return the floor of x as an Integral.

This is the largest integer <= x.

HemiSphere

class dipy.core.sphere.HemiSphere(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None, tol=1e-05)

Bases: dipy.core.sphere.Sphere

Points on the unit sphere.

A HemiSphere is similar to a Sphere but it takes antipodal symmetry into account. Antipodal symmetry means that point v on a HemiSphere is the same as the point -v. Duplicate points are discarded when constructing a HemiSphere (including antipodal duplicates). edges and faces are remapped to the remaining points as closely as possible.

The HemiSphere can be constructed using one of three conventions:

HemiSphere(x, y, z)
HemiSphere(xyz=xyz)
HemiSphere(theta=theta, phi=phi)
Parameters
x, y, z1-D array_like

Vertices as x-y-z coordinates.

theta, phi1-D array_like

Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.

xyz(N, 3) ndarray

Vertices as x-y-z coordinates.

faces(N, 3) ndarray

Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.

edges(N, 2) ndarray

Edges between vertices. If unspecified, the edges are derived from the faces.

tolfloat

Angle in degrees. Vertices that are less than tol degrees apart are treated as duplicates.

See also

Sphere
Attributes
x
y
z

Methods

find_closest(xyz)

Find the index of the vertex in the Sphere closest to the input vector, taking into account antipodal symmetry

from_sphere(sphere[, tol])

Create instance from a Sphere

mirror()

Create a full Sphere from a HemiSphere

subdivide([n])

Create a more subdivided HemiSphere

edges

faces

vertices

__init__(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None, tol=1e-05)

Create a HemiSphere from points

faces()
find_closest(xyz)

Find the index of the vertex in the Sphere closest to the input vector, taking into account antipodal symmetry

Parameters
xyzarray-like, 3 elements

A unit vector

Returns
idxint

The index into the Sphere.vertices array that gives the closest vertex (in angle).

classmethod from_sphere(sphere, tol=1e-05)

Create instance from a Sphere

mirror()

Create a full Sphere from a HemiSphere

subdivide(n=1)

Create a more subdivided HemiSphere

See Sphere.subdivide for full documentation.
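
A minimal sketch using the icosahedron shipped with dipy.core.sphere:

import numpy as np
from dipy.core.sphere import HemiSphere, unit_icosahedron

hemi = HemiSphere.from_sphere(unit_icosahedron)   # antipodal duplicates removed
idx = hemi.find_closest(np.array([0., 0., 1.]))   # nearest vertex to +z
full = hemi.mirror()                              # back to a full Sphere
denser = hemi.subdivide(1)                        # finer hemisphere sampling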

Sphere

class dipy.core.sphere.Sphere(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None)

Bases: object

Points on the unit sphere.

The sphere can be constructed using one of three conventions:

Sphere(x, y, z)
Sphere(xyz=xyz)
Sphere(theta=theta, phi=phi)
Parameters
x, y, z1-D array_like

Vertices as x-y-z coordinates.

theta, phi1-D array_like

Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.

xyz(N, 3) ndarray

Vertices as x-y-z coordinates.

faces(N, 3) ndarray

Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.

edges(N, 2) ndarray

Edges between vertices. If unspecified, the edges are derived from the faces.

Attributes
x
y
z

Methods

find_closest(xyz)

Find the index of the vertex in the Sphere closest to the input vector

subdivide([n])

Subdivides each face of the sphere into four new faces.

edges

faces

vertices

__init__(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None)
edges()
faces()
find_closest(xyz)

Find the index of the vertex in the Sphere closest to the input vector

Parameters
xyzarray-like, 3 elements

A unit vector

Returns
idxint

The index into the Sphere.vertices array that gives the closest vertex (in angle).

subdivide(n=1)

Subdivides each face of the sphere into four new faces.

New vertices are created at a, b, and c. Then each face [x, y, z] is divided into faces [x, a, c], [y, a, b], [z, b, c], and [a, b, c].

      y
      /\
     /  \
   a/____\b
   /\    /\
  /  \  /  \
 /____\/____\
x      c     z
Parameters
nint, optional

The number of subdivisions to perform.

Returns
new_sphereSphere

The subdivided sphere.

vertices()
property x
property y
property z

auto_attr

dipy.core.sphere.auto_attr(func)

Decorator to create OneTimeProperty attributes.

Parameters
funcmethod

The method that will be called the first time to compute a value. Afterwards, the method’s name will be a standard attribute holding the value of this computation.

Examples

>>> class MagicProp(object):
...     @auto_attr
...     def a(self):
...         return 99
...
>>> x = MagicProp()
>>> 'a' in x.__dict__
False
>>> x.a
99
>>> 'a' in x.__dict__
True

cart2sphere

dipy.core.sphere.cart2sphere(x, y, z)

Return angles for Cartesian 3D coordinates x, y, and z

See doc for sphere2cart for angle conventions and derivation of the formulae.

\(0\le\theta\mathrm{(theta)}\le\pi\) and \(-\pi\le\phi\mathrm{(phi)}\le\pi\)

Parameters
xarray_like

x coordinate in Cartesian space

yarray_like

y coordinate in Cartesian space

zarray_like

z coordinate

Returns
rarray

radius

thetaarray

inclination (polar) angle

phiarray

azimuth angle
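
A minimal round-trip sketch using the conventions above (the +z axis maps to theta = 0):

from dipy.core.sphere import cart2sphere, sphere2cart

r, theta, phi = cart2sphere(0., 0., 1.)   # r = 1, theta = 0
x, y, z = sphere2cart(r, theta, phi)      # back to (0, 0, 1)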

disperse_charges

dipy.core.sphere.disperse_charges(hemi, iters, const=0.2)

Models electrostatic repulsion on the unit sphere

Places charges on a sphere and simulates the repulsive forces felt by each one. Allows the charges to move for some number of iterations and returns their final location as well as the total potential of the system at each step.

Parameters
hemiHemiSphere

Points on a unit sphere.

itersint

Number of iterations to run.

constfloat

Using a smaller const could provide a more accurate result, but will need more iterations to converge.

Returns
hemiHemiSphere

Distributed points on a unit sphere.

potentialndarray

The electrostatic potential at each iteration. This can be useful to check if the repulsion converged to a minimum.

Notes

This function is meant to be used with diffusion imaging, so antipodal symmetry is assumed. Therefore, each charge must not only be unique, but if there is a charge at +x, there cannot be a charge at -x. These are treated as the same location, and because the distance between the two charges will be zero, the result will be unstable.
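
A minimal sketch, seeding random points on a hemisphere and letting the repulsion spread them out; the returned potential should be (roughly) non-increasing as the charges settle:

import numpy as np
from dipy.core.sphere import HemiSphere, disperse_charges

rng = np.random.default_rng(0)
theta = np.pi * rng.random(20)
phi = 2 * np.pi * rng.random(20)
hemi = HemiSphere(theta=theta, phi=phi)
hemi_updated, potential = disperse_charges(hemi, iters=1000)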

disperse_charges_alt

dipy.core.sphere.disperse_charges_alt(init_pointset, iters, tol=0.001)

Reimplementation of disperse_charges making use of scipy.optimize.fmin_slsqp.

Parameters
init_pointset : (N, 3) ndarray

Points on a unit sphere.

iters : int

Number of iterations to run.

tol : float

Tolerance for the optimization.

Returns
array-like (N, 3)

Distributed points on a unit sphere.
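
A sketch of the analogous call with an explicit point set (arbitrary seed and iteration count), assuming disperse_charges_alt is importable from dipy.core.sphere as documented above; the result is an (N, 3) array of dispersed unit vectors.

>>> import numpy as np
>>> from dipy.core.sphere import disperse_charges_alt
>>> rng = np.random.default_rng(0)
>>> init = rng.standard_normal((10, 3))
>>> init /= np.linalg.norm(init, axis=1, keepdims=True)
>>> dispersed = disperse_charges_alt(init, iters=20)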

euler_characteristic_check

dipy.core.sphere.euler_characteristic_check(sphere, chi=2)

Checks the euler characteristic of a sphere

If \(f\) = number of faces, \(e\) = number of edges and \(v\) = number of vertices, the Euler formula says \(f-e+v = 2\) for a mesh on a sphere. More generally, the function checks whether \(f - e + v = \chi\), where \(\chi\) is the Euler characteristic of the mesh.

  • Open chain (track) has \(\chi=1\)

  • Closed chain (loop) has \(\chi=0\)

  • Disk has \(\chi=1\)

  • Sphere has \(\chi=2\)

  • HemiSphere has \(\chi=1\)

Parameters
sphere : Sphere

A Sphere instance with vertices, edges and faces attributes.

chi : int, optional

The Euler characteristic of the mesh to be checked

Returns
check : bool

True if the mesh has Euler characteristic \(\chi\)

Examples

>>> from dipy.core.sphere import euler_characteristic_check, unit_octahedron, unit_icosahedron, HemiSphere
>>> euler_characteristic_check(unit_octahedron)
True
>>> hemisphere = HemiSphere.from_sphere(unit_icosahedron)
>>> euler_characteristic_check(hemisphere, chi=1)
True

faces_from_sphere_vertices

dipy.core.sphere.faces_from_sphere_vertices(vertices)

Triangulate a set of vertices on the sphere.

Parameters
vertices : (M, 3) ndarray

XYZ coordinates of vertices on the sphere.

Returns
faces : (N, 3) ndarray

Indices into vertices; forms triangular faces.
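
For example (a sketch, assuming the unit_octahedron instance from dipy.core.sphere), re-triangulating the six octahedron vertices recovers its eight faces:

>>> from dipy.core.sphere import faces_from_sphere_vertices, unit_octahedron
>>> faces = faces_from_sphere_vertices(unit_octahedron.vertices)
>>> faces.shape
(8, 3)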

remove_similar_vertices

dipy.core.sphere.remove_similar_vertices(vertices, theta, return_mapping=False, return_index=False)

Remove vertices that are less than theta degrees from any other

Returns vertices that are at least theta degrees from any other vertex. Vertex v and -v are considered the same so if v and -v are both in vertices only one is kept. Also if v and w are both in vertices, w must be separated by theta degrees from both v and -v to be unique.

Parameters
vertices : (N, 3) ndarray

N unit vectors.

theta : float

The minimum separation between vertices in degrees.

return_mapping : {False, True}, optional

If True, return mapping as well as vertices and maybe indices (see below).

return_index : {False, True}, optional

If True, return indices as well as vertices and maybe mapping (see below).

Returns
unique_vertices : (M, 3) ndarray

Vertices sufficiently separated from one another.

mapping : (N,) ndarray

For each element vertices[i] (\(i \in 0..N-1\)), the index \(j\) to a vertex in unique_vertices that is less than theta degrees from vertices[i]. Only returned if return_mapping is True.

indices : (M,) ndarray

indices gives the reverse of mapping. For each element unique_vertices[j] (\(j \in 0..M-1\)), the index \(i\) to a vertex in vertices that is less than theta degrees from unique_vertices[j]. If there is more than one element of vertices that is less than theta degrees from unique_vertices[j], return the first (lowest index) matching value. Only returned if return_index is True.
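
A small sketch of the antipodal behaviour described above: +x and -x collapse to a single vertex, while the y vertex (90 degrees away) is kept.

>>> import numpy as np
>>> from dipy.core.sphere import remove_similar_vertices
>>> verts = np.array([[1., 0., 0.],
...                   [-1., 0., 0.],
...                   [0., 1., 0.]])
>>> unique = remove_similar_vertices(verts, 1.)  # minimum separation of 1 degree
>>> unique.shape
(2, 3)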

sphere2cart

dipy.core.sphere.sphere2cart(r, theta, phi)

Spherical to Cartesian coordinates

This is the standard physics convention where theta is the inclination (polar) angle, and phi is the azimuth angle.

Imagine a sphere with center (0,0,0). Orient it with the z axis running south-north, the y axis running west-east and the x axis from posterior to anterior. theta (the inclination angle) is the angle to rotate from the z-axis (the zenith) around the y-axis, towards the x axis. Thus the rotation is counter-clockwise from the point of view of positive y. phi (azimuth) gives the angle of rotation around the z-axis towards the y axis. The rotation is counter-clockwise from the point of view of positive z.

Equivalently, given a point P on the sphere, with coordinates x, y, z, theta is the angle between P and the z-axis, and phi is the angle between the projection of P onto the XY plane, and the X axis.

Geographical nomenclature designates theta as ‘co-latitude’, and phi as ‘longitude’

Parameters
r : array_like

radius

theta : array_like

inclination or polar angle

phi : array_like

azimuth angle

Returns
x : array

x coordinate(s) in Cartesian space

y : array

y coordinate(s) in Cartesian space

z : array

z coordinate

Notes

See the Wikipedia page on spherical coordinate systems for an excellent discussion of the many different conventions possible. Here we use the physics convention, as used on that page.

Derivations of the formulae are simple. Consider a vector x, y, z of length r (norm of x, y, z). The inclination angle (theta) can be found from: cos(theta) == z / r -> z == r * cos(theta). This gives the hypotenuse of the projection onto the XY plane, which we will call Q. Q == r*sin(theta). Now x / Q == cos(phi) -> x == r * sin(theta) * cos(phi) and so on.
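
In symbols, the derivation above amounts to:

\(z = r\cos(\theta)\), \(x = r\sin(\theta)\cos(\phi)\), \(y = r\sin(\theta)\sin(\phi)\)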

We have deliberately named this function sphere2cart rather than sph2cart to distinguish it from the Matlab function of that name, because the Matlab function uses an unusual convention for the angles that we did not want to replicate. The Matlab function is trivial to implement with the formulae given in the Matlab help.

unique_edges

dipy.core.sphere.unique_edges(faces, return_mapping=False)

Extract all unique edges from given triangular faces.

Parameters
faces : (N, 3) ndarray

Vertex indices forming triangular faces.

return_mapping : bool

If true, a mapping to the edges of each face is returned.

Returns
edges : (N, 2) ndarray

Unique edges.

mapping : (N, 3)

For each face, [x, y, z], a mapping to its edges [a, b, c].

      y
      /\
     /  \
   a/    \b
   /      \
  /________\
 x     c    z
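
As a sketch (assuming the unit_octahedron instance from dipy.core.sphere), the eight octahedron faces share twelve unique edges:

>>> from dipy.core.sphere import unique_edges, unit_octahedron
>>> edges = unique_edges(unit_octahedron.faces)
>>> edges.shape
(12, 2)
>>> edges, mapping = unique_edges(unit_octahedron.faces, return_mapping=True)
>>> mapping.shape
(8, 3)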

unique_sets

dipy.core.sphere.unique_sets(sets, return_inverse=False)

Remove duplicate sets.

Parameters
sets : array (N, k)

N sets of size k.

return_inverse : bool

If True, also returns the indices of unique_sets that can be used to reconstruct sets (the original ordering of each set may not be preserved).

Returns
unique_sets : array

Unique sets.

inverse : array (N,)

The indices to reconstruct sets from unique_sets.

vector_norm

dipy.core.sphere.vector_norm(vec, axis=-1, keepdims=False)

Return vector Euclidean (L2) norm

See unit vector and Euclidean norm

Parameters
vec : array_like

Vectors to norm.

axis : int

Axis over which to norm. By default norm over last axis. If axis is None, vec is flattened then normed.

keepdims : bool

If True, the output will have the same number of dimensions as vec, with shape 1 on axis.

Returns
norm : array

Euclidean norms of vectors.

Examples

>>> import numpy as np
>>> from dipy.core.sphere import vector_norm
>>> vec = [[8, 15, 0], [0, 36, 77]]
>>> vector_norm(vec)
array([ 17.,  85.])
>>> vector_norm(vec, keepdims=True)
array([[ 17.],
       [ 85.]])
>>> vector_norm(vec, axis=0)
array([  8.,  39.,  77.])

permutations

class dipy.core.sphere_stats.permutations(iterable, r=None)

Bases: object

Return successive r-length permutations of elements in the iterable.

permutations(range(3), 2) --> (0,1), (0,2), (1,0), (1,2), (2,0), (2,1)

__init__(*args, **kwargs)

angular_similarity

dipy.core.sphere_stats.angular_similarity(S, T)

Computes the cosine distance of the best match between points of two sets of vectors S and T

Parameters
S : array, shape (m, d)
T : array, shape (n, d)
Returns
max_cosine_distance : float

Examples

>>> import numpy as np
>>> from dipy.core.sphere_stats import angular_similarity
>>> S=np.array([[1,0,0],[0,1,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,0,1]])
>>> angular_similarity(S,T)
2.0
>>> T=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> S=np.array([[1,0,0],[0,0,1]])
>>> angular_similarity(S,T)
2.0
>>> S=np.array([[-1,0,0],[0,1,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,0,-1]])
>>> angular_similarity(S,T)
2.0
>>> T=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> S=np.array([[1,0,0],[0,1,0],[0,0,1]])
>>> angular_similarity(S,T)
3.0
>>> S=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,np.sqrt(2)/2.,np.sqrt(2)/2.],[0,0,1]])
>>> angular_similarity(S,T)
2.7071067811865475
>>> S=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> T=np.array([[1,0,0]])
>>> angular_similarity(S,T)
1.0
>>> S=np.array([[0,1,0],[1,0,0]])
>>> T=np.array([[0,0,1]])
>>> angular_similarity(S,T)
0.0
>>> S=np.array([[0,1,0],[1,0,0]])
>>> T=np.array([[0,np.sqrt(2)/2.,np.sqrt(2)/2.]])

Now we use print to reduce the precision of the printed output (so the doctests don’t detect unimportant differences)

>>> print('%.12f' % angular_similarity(S,T))
0.707106781187
>>> S=np.array([[0,1,0]])
>>> T=np.array([[0,np.sqrt(2)/2.,np.sqrt(2)/2.]])
>>> print('%.12f' % angular_similarity(S,T))
0.707106781187
>>> S=np.array([[0,1,0],[0,0,1]])
>>> T=np.array([[0,np.sqrt(2)/2.,np.sqrt(2)/2.]])
>>> print('%.12f' % angular_similarity(S,T))
0.707106781187

compare_orientation_sets

dipy.core.sphere_stats.compare_orientation_sets(S, T)

Computes the mean cosine distance of the best match between points of two sets of vectors S and T (angular similarity)

Parameters
S : array, shape (m, d)

First set of vectors.

T : array, shape (n, d)

Second set of vectors.

Returns
max_mean_cosine : float

Maximum mean cosine distance.

Examples

>>> import numpy as np
>>> from dipy.core.sphere_stats import compare_orientation_sets
>>> S=np.array([[1,0,0],[0,1,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,0,1]])
>>> compare_orientation_sets(S,T)
1.0
>>> T=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> S=np.array([[1,0,0],[0,0,1]])
>>> compare_orientation_sets(S,T)
1.0
>>> from dipy.core.sphere_stats import compare_orientation_sets
>>> S=np.array([[-1,0,0],[0,1,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,0,-1]])
>>> compare_orientation_sets(S,T)
1.0

eigenstats

dipy.core.sphere_stats.eigenstats(points, alpha=0.05)

Principal direction and confidence ellipse

Implements equations in section 6.3.1(ii) of Fisher, Lewis and Embleton, supplemented by equations in section 3.2.5.

Parameters
points : array_like (N, 3)

array of points on the sphere of radius 1 in \(\mathbb{R}^3\)

alpha : real or None

1 minus the coverage for the confidence ellipsoid, e.g. 0.05 for 95% coverage.

Returns
centre : vector (3,)

centre of ellipsoid

b1 : vector (2,)

lengths of semi-axes of ellipsoid

random_uniform_on_sphere

dipy.core.sphere_stats.random_uniform_on_sphere(n=1, coords='xyz')

Random unit vectors from a uniform distribution on the sphere.

Parameters
n : int

Number of random vectors

coords : {'xyz', 'radians', 'degrees'}

'xyz' for Cartesian form, 'radians' for spherical form in radians, 'degrees' for spherical form in degrees.

Returns
X : array, shape (n, 3) if coords='xyz', or shape (n, 2) otherwise

Uniformly distributed vectors on the unit sphere.

Notes

The uniform distribution on the sphere, parameterized by spherical coordinates \((\theta, \phi)\), should verify \(\phi\sim U[0,2\pi]\), while \(z=\cos(\theta)\sim U[-1,1]\).

References

[1] http://mathworld.wolfram.com/SpherePointPicking.html

Examples

>>> from dipy.core.sphere_stats import random_uniform_on_sphere
>>> X = random_uniform_on_sphere(4, 'radians')
>>> X.shape == (4, 2)
True
>>> X = random_uniform_on_sphere(4, 'xyz')
>>> X.shape == (4, 3)
True

HemiSphere

class dipy.core.subdivide_octahedron.HemiSphere(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None, tol=1e-05)

Bases: dipy.core.sphere.Sphere

Points on the unit sphere.

A HemiSphere is similar to a Sphere but it takes antipodal symmetry into account. Antipodal symmetry means that point v on a HemiSphere is the same as the point -v. Duplicate points are discarded when constructing a HemiSphere (including antipodal duplicates). edges and faces are remapped to the remaining points as closely as possible.

The HemiSphere can be constructed using one of three conventions:

HemiSphere(x, y, z)
HemiSphere(xyz=xyz)
HemiSphere(theta=theta, phi=phi)
Parameters
x, y, z : 1-D array_like

Vertices as x-y-z coordinates.

theta, phi : 1-D array_like

Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles, respectively.

xyz : (N, 3) ndarray

Vertices as x-y-z coordinates.

faces : (N, 3) ndarray

Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.

edges : (N, 2) ndarray

Edges between vertices. If unspecified, the edges are derived from the faces.

tol : float

Angle in degrees. Vertices that are less than tol degrees apart are treated as duplicates.

See also

Sphere
Attributes
x
y
z

Methods

find_closest(xyz)

Find the index of the vertex in the Sphere closest to the input vector, taking into account antipodal symmetry

from_sphere(sphere[, tol])

Create instance from a Sphere

mirror()

Create a full Sphere from a HemiSphere

subdivide([n])

Create a more subdivided HemiSphere

edges

faces

vertices

__init__(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None, tol=1e-05)

Create a HemiSphere from points

faces()
find_closest(xyz)

Find the index of the vertex in the Sphere closest to the input vector, taking into account antipodal symmetry

Parameters
xyz : array-like, 3 elements

A unit vector

Returns
idx : int

The index into the Sphere.vertices array that gives the closest vertex (in angle).

classmethod from_sphere(sphere, tol=1e-05)

Create instance from a Sphere

mirror()

Create a full Sphere from a HemiSphere

subdivide(n=1)

Create a more subdivided HemiSphere

See Sphere.subdivide for full documentation.
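
A brief sketch (importing the class from dipy.core.sphere, where it is defined): building a HemiSphere from the twelve-vertex unit_icosahedron keeps six vertices, and mirror() restores a full sphere.

>>> from dipy.core.sphere import HemiSphere, unit_icosahedron
>>> hemi = HemiSphere.from_sphere(unit_icosahedron)
>>> hemi.vertices.shape
(6, 3)
>>> full = hemi.mirror()
>>> full.vertices.shape
(12, 3)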

create_unit_hemisphere

dipy.core.subdivide_octahedron.create_unit_hemisphere(recursion_level=2)

Creates a unit sphere by subdividing a unit octahedron, returns half the sphere.

Parameters
recursion_level : int

Level of subdivision, recursion_level=1 will return an octahedron, anything bigger will return a more subdivided sphere. The sphere will have \((4^{\mathrm{recursion\_level}}+2)/2\) vertices.

Returns
HemiSphere

Half of a unit sphere.

See also

create_unit_sphere, Sphere, HemiSphere

create_unit_sphere

dipy.core.subdivide_octahedron.create_unit_sphere(recursion_level=2)

Creates a unit sphere by subdividing a unit octahedron.

Starts with a unit octahedron and subdivides the faces, projecting the resulting points onto the surface of a unit sphere.

Parameters
recursion_level : int

Level of subdivision, recursion_level=1 will return an octahedron, anything bigger will return a more subdivided sphere. The sphere will have \(4^{\mathrm{recursion\_level}}+2\) vertices.

Returns
Sphere

The unit sphere.

See also

create_unit_hemisphere, Sphere
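
Putting the two constructors together (a sketch; the vertex counts follow the \(4^{\mathrm{recursion\_level}}+2\) formula quoted above):

>>> from dipy.core.subdivide_octahedron import create_unit_sphere, create_unit_hemisphere
>>> sphere = create_unit_sphere(recursion_level=2)
>>> sphere.vertices.shape
(18, 3)
>>> hemi = create_unit_hemisphere(recursion_level=2)
>>> hemi.vertices.shape
(9, 3)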

afb3D

dipy.core.wavelet.afb3D(x, af1, af2=None, af3=None)

3D Analysis Filter Bank

Parameters
x : 3D ndarray

N1 by N2 by N3 array, where (1) N1, N2, N3 are all even, (2) N1 >= 2*len(af1), (3) N2 >= 2*len(af2), and (4) N3 >= 2*len(af3).

afi : 2D ndarray

Analysis filters for dimension i: afi[:, 1] is the lowpass filter and afi[:, 2] is the highpass filter.

Returns
lo : 1D array

lowpass subband

hi : 1D array

highpass subbands, h[d], d = 1..7

afb3D_A

dipy.core.wavelet.afb3D_A(x, af, d)
3D Analysis Filter Bank

(along one dimension only)

Parameters
x : 3D ndarray

N1 x N2 x N3 matrix, where min(N1, N2, N3) > 2*length(filter) and the Ni are even.

af : 2D ndarray

Analysis filter for the columns: af[:, 1] is the lowpass filter and af[:, 2] is the highpass filter.

d : int

dimension of filtering (d = 1, 2 or 3)

Returns
lo : 1D array

lowpass subbands

hi : 1D array

highpass subbands

cshift3D

dipy.core.wavelet.cshift3D(x, m, d)

3D Circular Shift

Parameters
x : 3D ndarray

N1 by N2 by N3 array

m : int

amount of shift

d : int

dimension of shift (d = 1, 2 or 3)

Returns
y : 3D ndarray

array x shifted by m samples down along dimension d

dwt3D

dipy.core.wavelet.dwt3D(x, J, af)

3-D Discrete Wavelet Transform

Parameters
x : 3D ndarray

N1 x N2 x N3 matrix, where (1) the Ni are all even and (2) min(Ni) >= 2^(J-1)*length(af).

J : int

number of stages

af : 2D ndarray

analysis filters

Returns
w : cell array

wavelet coefficients

idwt3D

dipy.core.wavelet.idwt3D(w, J, sf)

Inverse 3-D Discrete Wavelet Transform

Parameters
w : cell array

wavelet coefficients

J : int

number of stages

sf : 2D ndarray

synthesis filters

Returns
y : 3D ndarray

output array

permutationinverse

dipy.core.wavelet.permutationinverse(perm)

Function generating inverse of the permutation

Parameters
perm : 1D array
Returns
inverse : 1D array

permutation inverse of the input
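
A minimal sketch of the inverse property (inverse[perm[i]] == i), assuming the function accepts a plain sequence:

>>> from dipy.core.wavelet import permutationinverse
>>> perm = [2, 0, 1]
>>> inverse = permutationinverse(perm)
>>> [int(inverse[p]) for p in perm]
[0, 1, 2]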

sfb3D

dipy.core.wavelet.sfb3D(lo, hi, sf1, sf2=None, sf3=None)

3D Synthesis Filter Bank

Parameters
lo : 1D array

lowpass subbands

hi : 1D array

highpass subbands

sfi : 2D ndarray

synthesis filters for dimension i

Returns
y : 3D ndarray

output array

sfb3D_A

dipy.core.wavelet.sfb3D_A(lo, hi, sf, d)
3D Synthesis Filter Bank

(along single dimension only)

Parameters
lo : 1D array

lowpass subbands

hi : 1D array

highpass subbands

sf : 2D ndarray

synthesis filters

d : int

dimension of filtering

Returns
y : 3D ndarray

the N1xN2xN3 matrix