tracking
tracking._utils
tracking.benchmarks
tracking.benchmarks.bench_streamline
tracking.eudx
tracking.learning
tracking.life
tracking.local
tracking.local.localtracking
tracking.metrics
tracking.streamline
tracking.utils
Streamlines
Streamlines
EuDX
FiberFit
FiberModel
LifeSignalMaker
ReconstFit
ReconstModel
range
ActTissueClassifier
BinaryTissueClassifier
CmcTissueClassifier
ConstrainedTissueClassifier
DirectionGetter
LocalTracking
ParticleFilteringTracking
ThresholdTissueClassifier
TissueClassifier
Bunch
ConstrainedTissueClassifier
LocalTracking
ParticleFilteringTracking
xrange
LooseVersion
Streamlines
defaultdict
map
xrange
tracking
Tracking objects
Streamlines - alias of nibabel.streamlines.array_sequence.ArraySequence
bench - Run benchmarks for module using nose.
test - Run tests for module using nose.
tracking._utils
warn - Issue a warning, or maybe ignore it or raise an exception.
tracking.benchmarks.bench_streamline
Benchmarks for functions related to streamline
Run all benchmarks with:
import dipy.tracking as dipytracking
dipytracking.bench()
With pytest, run this benchmark with:
pytest -svv -c bench.ini /path/to/bench_streamline.py
Streamlines - alias of nibabel.streamlines.array_sequence.ArraySequence
assert_array_almost_equal (x, y[, decimal, …]) - Raises an AssertionError if two objects are not equal up to desired precision.
assert_array_equal (x, y[, err_msg, verbose]) - Raises an AssertionError if two array_like objects are not equal.
bench_compress_streamlines ()
bench_length ()
bench_set_number_of_points ()
compress_streamlines - Compress streamlines by linearization as in [Presseau15].
compress_streamlines_python (streamline[, …]) - Python version of the FiberCompression found on https://github.com/scilus/FiberCompression.
generate_streamlines (nb_streamlines, …)
get_fnames ([name]) - Provides filenames of some test datasets or other useful parametrisations.
length - Euclidean length of streamlines.
length_python (xyz[, along])
measure (code_str[, times, label]) - Return elapsed time for executing code in the namespace of the caller.
set_number_of_points - Change the number of points of streamlines.
set_number_of_points_python (xyz[, n_pols])
setup ()
tracking.eudx
EuDX (a, ind, seeds, odf_vertices[, a_low, …]) - Euler Delta Crossings
eudx_both_directions
get_sphere ([name]) - Provide triangulated spheres.
tracking.learning
Learning algorithms for tractography
detect_corresponding_tracks (indices, …) - Detect corresponding tracks from list tracks1 to list tracks2, where tracks1 and tracks2 are lists of tracks.
detect_corresponding_tracks_plus (indices, …) - Detect corresponding tracks from 1 to 2, where tracks1 and tracks2 are sequences of tracks.
tracking.life
This is an implementation of the Linear Fascicle Evaluation (LiFE) algorithm described in:
Pestilli, F., Yeatman, J, Rokem, A. Kay, K. and Wandell B.A. (2014). Validation and statistical inference in living connectomes. Nature Methods 11: 1058-1063. doi:10.1038/nmeth.3098
FiberFit (fiber_model, life_matrix, …) - A fit of the LiFE model to diffusion data.
FiberModel (gtab) - A class for representing and solving predictive models based on tractography solutions.
LifeSignalMaker (gtab[, evals, sphere]) - A class for generating signals from streamlines in an efficient and speedy manner.
ReconstFit (model, data) - Abstract class which holds the fit result of ReconstModel.
ReconstModel (gtab) - Abstract class for signal reconstruction models.
range (stop) - range(start, stop[, step]) -> range object
grad_tensor (grad, evals) - Calculate the 3 by 3 tensor for a given spatial gradient, given a canonical tensor shape (also as a 3 by 3), pointing at [1,0,0].
gradient (f) - Return the gradient of an N-dimensional array.
streamline_gradients (streamline) - Calculate the gradients of the streamline along the spatial dimension.
streamline_signal (streamline, gtab[, evals]) - The signal from a single streamline estimate along each of its nodes.
streamline_tensors (streamline[, evals]) - The tensors generated by this fiber.
transform_streamlines (streamlines, mat[, …]) - Apply affine transformation to streamlines.
unique_rows (in_array[, dtype]) - This (quickly) finds the unique rows in an array.
voxel2streamline (streamline[, transformed, …]) - Maps voxels to streamlines and streamlines to voxels, for setting up the LiFE equations matrix.
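As a hedged illustration of how the LiFE classes above fit together (not an excerpt from the library docs), the sketch below follows the usual evaluate-then-filter pattern; gtab, data and candidate_streamlines are assumed inputs (a gradient table, a 4D DWI volume and candidate tractography).
import numpy as np
import dipy.tracking.life as life

# Assumed inputs: gtab (gradient table), data (4D DWI array), candidate_streamlines (list of arrays).
fiber_model = life.FiberModel(gtab)                      # model tied to the acquisition scheme
fiber_fit = fiber_model.fit(data, candidate_streamlines,
                            affine=np.eye(4))            # one weight (beta) per candidate streamline
prediction = fiber_fit.predict()                         # predicted diffusion signal in the tracked voxels
kept = [s for s, b in zip(candidate_streamlines, fiber_fit.beta) if b > 0]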
tracking.local
ActTissueClassifier - Anatomically-Constrained Tractography (ACT) stopping criteria from [1].
BinaryTissueClassifier - Stopping criterion defined by a binary mask: tracking stops where the mask is zero.
CmcTissueClassifier - Continuous map criterion (CMC) stopping criteria from [1].
ConstrainedTissueClassifier - Abstract class that takes as input included and excluded tissue maps.
DirectionGetter
LocalTracking (direction_getter, …[, …])
ParticleFilteringTracking (direction_getter, …)
ThresholdTissueClassifier - Stopping criterion that ends tracking where a scalar metric map (e.g. FA) falls below a threshold.
TissueClassifier
tracking.local.localtracking
Bunch (**kwds)
ConstrainedTissueClassifier - Abstract class that takes as input included and excluded tissue maps.
LocalTracking (direction_getter, …[, …])
ParticleFilteringTracking (direction_getter, …)
local_tracker - Tracks one direction from a seed.
pft_tracker - Tracks one direction from a seed using the particle filtering algorithm.
tracking.metrics
Metrics for tracks, where tracks are arrays of points
xrange - alias of builtins.range
arbitrarypoint (xyz, distance) - Select an arbitrary point along distance on the track (curve).
bytes (xyz) - Size of track in bytes.
center_of_mass (xyz) - Center of mass of streamline.
downsample (xyz[, n_pols]) - Downsample for a specific number of points along the curve/track.
endpoint (xyz)
frenet_serret (xyz) - Frenet-Serret Space Curve Invariants.
generate_combinations (items, n) - Combine sets of size n from items.
inside_sphere (xyz, center, radius) - If any point of the track is inside a sphere of a specified center and radius return True, otherwise False.
inside_sphere_points (xyz, center, radius) - If a track intersects with a sphere of a specified center and radius return the points that are inside the sphere, otherwise False.
intersect_sphere (xyz, center, radius) - If any segment of the track is intersecting with a sphere of specific center and radius return True, otherwise False.
length (xyz[, along]) - Euclidean length of track line.
longest_track_bundle (bundle[, sort]) - Return longest track or length-sorted track indices in bundle.
magn (xyz[, n]) - Magnitude of vector.
mean_curvature (xyz) - Calculates the mean curvature of a curve.
mean_orientation (xyz) - Calculates the mean orientation of a curve.
midpoint (xyz) - Midpoint of track.
midpoint2point (xyz, p) - Calculate distance from midpoint of a curve to arbitrary point p.
principal_components (xyz) - We use PCA to calculate the 3 principal directions for a track.
splev (x, tck[, der, ext]) - Evaluate a B-spline or its derivatives.
spline (xyz[, s, k, nest]) - Generate B-splines as documented in http://www.scipy.org/Cookbook/Interpolation
splprep (x[, w, u, ub, ue, k, task, s, t, …]) - Find the B-spline representation of an N-dimensional curve.
startpoint (xyz) - First point of the track.
winding (xyz) - Total turning angle projected.
tracking.streamline
LooseVersion ([vstring]) - Version numbering for anarchists and software realists.
Streamlines - alias of nibabel.streamlines.array_sequence.ArraySequence
apply_affine (aff, pts) - Apply affine matrix aff to points pts.
bundles_distances_mdf - Calculate distances between list of tracks A and list of tracks B.
cdist (XA, XB[, metric]) - Compute distance between each pair of the two collections of inputs.
center_streamlines (streamlines) - Move streamlines to the origin.
cluster_confidence (streamlines[, max_mdf, …]) - Computes the cluster confidence index (cci), which is an estimation of the support a set of streamlines gives to a particular pathway.
compress_streamlines - Compress streamlines by linearization as in [Presseau15].
deepcopy (x[, memo, _nil]) - Deep copy operation on arbitrary Python objects.
deform_streamlines (streamlines, …) - Apply deformation field to streamlines.
dist_to_corner (affine) - Calculate the maximal distance from the center to a corner of a voxel, given an affine.
length - Euclidean length of streamlines.
nbytes (streamlines)
orient_by_rois (streamlines, roi1, roi2[, …]) - Orient a set of streamlines according to a pair of ROIs.
orient_by_streamline (streamlines, standard) - Orient a bundle of streamlines to a standard streamline.
relist_streamlines (points, offsets) - Given a representation of a set of streamlines as a large array and an offsets array, return the streamlines as a list of shorter arrays.
select_by_rois (streamlines, rois, include[, …]) - Select streamlines based on logical relations with several regions of interest (ROIs).
select_random_set_of_streamlines (…[, rng]) - Select a random set of streamlines.
set_number_of_points - Change the number of points of streamlines.
streamline_near_roi (streamline, roi_coords, tol) - Is a streamline near an ROI?
transform_streamlines (streamlines, mat[, …]) - Apply affine transformation to streamlines.
unlist_streamlines (streamlines) - Return the streamlines not as a list but as an array and an offset.
values_from_volume (data, streamlines[, affine]) - Extract values of a scalar/vector along each streamline from a volume.
warn - Issue a warning, or maybe ignore it or raise an exception.
tracking.utils
Various tools related to creating and working with streamlines
This module provides tools for targeting streamlines using ROIs, for making connectivity matrices from whole brain fiber tracking and some other tools that allow streamlines to interact with image data.
Dipy uses affine matrices to represent the relationship between streamline points, which are defined as points in a continuous 3d space, and image voxels, which are typically arranged in a discrete 3d grid. Dipy uses a convention similar to nifti files to interpret these affine matrices. This convention is that the point at the center of voxel [i, j, k] is represented by the point [x, y, z] where [x, y, z, 1] = affine * [i, j, k, 1]. Also, when the phrase "voxel coordinates" is used, it is understood to be the same as affine = eye(4).
As an example, let's take a 2d image where the affine is:
[[1., 0., 0.],
[0., 2., 0.],
[0., 0., 1.]]
The pixels of an image with this affine would look something like:
A------------
| | | |
| C | | |
| | | |
----B--------
| | | |
| | | |
| | | |
-------------
| | | |
| | | |
| | | |
------------D
And the letters A-D represent the following points in “real world coordinates”:
A = [-.5, -1.]
B = [ .5, 1.]
C = [ 0., 0.]
D = [ 2.5, 5.]
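The mapping can be verified numerically. The short sketch below reproduces the four labelled points with apply_affine (documented further down this page); only the example 2d affine from this section is used, and nibabel's implementation is assumed to be available.
import numpy as np
from nibabel.affines import apply_affine

affine = np.array([[1., 0., 0.],
                   [0., 2., 0.],
                   [0., 0., 1.]])
# Voxel-grid coordinates (i, j) of the labelled points in the picture above:
points_ij = {'A': [-0.5, -0.5],   # outer corner of voxel [0, 0]
             'C': [0.0, 0.0],     # center of voxel [0, 0]
             'B': [0.5, 0.5],     # corner shared by voxels [0, 0] and [1, 1]
             'D': [2.5, 2.5]}     # far corner of the 3x3 image
for name, ij in points_ij.items():
    print(name, apply_affine(affine, np.array(ij)))
# expected: A -> [-0.5, -1.], C -> [0., 0.], B -> [0.5, 1.], D -> [2.5, 5.]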
defaultdict - defaultdict(default_factory[, …]) -> dict with default factory
map - map(func, *iterables) -> map object
xrange - alias of builtins.range
affine_for_trackvis (voxel_size[, …]) - Returns an affine which maps points for voxel indices to trackvis space.
affine_from_fsl_mat_file (mat_affine, …) - Converts an affine matrix from flirt (FSL) and a given voxel size for input and output images, and returns an adjusted affine matrix for trackvis.
apply_affine (aff, pts) - Apply affine matrix aff to points pts.
asarray (a[, dtype, order]) - Convert the input to an array.
cdist (XA, XB[, metric]) - Compute distance between each pair of the two collections of inputs.
connectivity_matrix (streamlines, label_volume) - Counts the streamlines that start and end at each label pair.
density_map (streamlines, vol_dims[, …]) - Counts the number of unique streamlines that pass through each voxel.
dist_to_corner (affine) - Calculate the maximal distance from the center to a corner of a voxel, given an affine.
dot (a, b[, out]) - Dot product of two arrays.
empty (shape[, dtype, order]) - Return a new array of given shape and type, without initializing entries.
eye (N[, M, k, dtype, order]) - Return a 2-D array with ones on the diagonal and zeros elsewhere.
flexi_tvis_affine (sl_vox_order, grid_affine, …) - Computes the mapping from voxel indices to streamline points.
get_flexi_tvis_affine (tvis_hdr, nii_aff) - Computes the mapping from voxel indices to streamline points.
length (streamlines[, affine]) - Calculate the lengths of many streamlines in a bundle.
minimum_at (a, indices[, b]) - Performs unbuffered in-place operation on operand 'a' for elements specified by 'indices'.
move_streamlines (streamlines, output_space) - Applies a linear transformation, given by affine, to streamlines.
ndbincount (x[, weights, shape]) - Like bincount, but for nd-indices.
near_roi (streamlines, region_of_interest[, …]) - Provide filtering criteria for a set of streamlines based on whether they fall within a tolerance distance from an ROI.
orientation_from_string (string_ornt) - Returns an array representation of an ornt string.
ornt_mapping (ornt1, ornt2) - Calculates the mapping needed to get from ornt1 to ornt2.
path_length (streamlines, aoi, affine[, …]) - Computes the shortest path, along any streamline, between aoi and each voxel.
random_seeds_from_mask (mask[, seeds_count, …]) - Creates randomly placed seeds for fiber tracking from a binary mask.
ravel_multi_index (multi_index, dims[, mode, …]) - Converts a tuple of index arrays into an array of flat indices, applying boundary modes to the multi-index.
reduce_labels (label_volume) - Reduces an array of labels to the integers from 0 to n with smallest possible n.
reduce_rois (rois, include) - Reduce multiple ROIs to one inclusion and one exclusion ROI.
reorder_voxels_affine (input_ornt, …) - Calculates a linear transformation equivalent to changing voxel order.
seeds_from_mask (mask[, density, voxel_size, …]) - Creates seeds for fiber tracking from a binary mask.
streamline_near_roi (streamline, roi_coords, tol) - Is a streamline near an ROI?
subsegment (streamlines, max_segment_length) - Splits the segments of the streamlines into small segments.
target (streamlines, target_mask, affine[, …]) - Filters streamlines based on whether or not they pass through an ROI.
target_line_based (streamlines, target_mask) - Filters streamlines based on whether or not they pass through an ROI, using a line-based algorithm.
unique_rows (in_array[, dtype]) - This (quickly) finds the unique rows in an array.
warn - Issue a warning, or maybe ignore it or raise an exception.
wraps (wrapped[, assigned, updated]) - Decorator factory to apply update_wrapper() to a wrapper function.
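As a hedged orientation for how these tools are typically combined (not an excerpt from the API itself), the sketch below strings two of them together; streamlines, labels (an integer label volume) and shape (the volume dimensions) are assumed inputs, and the streamlines are assumed to already be in voxel coordinates (affine = eye(4)).
import numpy as np
from dipy.tracking import utils

# Assumed inputs: streamlines (in voxel coordinates), labels (integer label volume), shape (vol dims).
dm = utils.density_map(streamlines, vol_dims=shape, affine=np.eye(4))   # streamline count per voxel
M, grouping = utils.connectivity_matrix(streamlines, labels, affine=np.eye(4),
                                        return_mapping=True,
                                        mapping_as_streamlines=True)    # counts per label pair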
dipy.tracking.
bench
(label='fast', verbose=1, extra_argv=None)Run benchmarks for module using nose.
Notes
Benchmarks are like tests, but have names starting with “bench” instead of “test”, and can be found under the “benchmarks” sub-directory of the module.
Each NumPy module exposes bench in its namespace to run all benchmarks for it.
Examples
>>> success = np.lib.bench()
Running benchmarks for numpy.lib
...
using 562341 items:
unique:
0.11
unique1d:
0.11
ratio: 1.0
nUnique: 56230 == 56230
...
OK
>>> success
True
dipy.tracking.
test
(label='fast', verbose=1, extra_argv=None, doctests=False, coverage=False, raise_warnings=None, timer=False)Run tests for module using nose.
Notes
Each NumPy module exposes test in its namespace to run all tests for it. For example, to run all tests for numpy.lib:
>>> np.lib.test()
Examples
>>> result = np.lib.test()
Running unit tests for numpy.lib
...
Ran 976 tests in 3.933s
OK
>>> result.errors
[]
>>> result.knownfail
[]
dipy.tracking.benchmarks.bench_streamline.
assert_array_almost_equal
(x, y, decimal=6, err_msg='', verbose=True)Raises an AssertionError if two objects are not equal up to desired precision.
Note
It is recommended to use one of assert_allclose, assert_array_almost_equal_nulp or assert_array_max_ulp instead of this function for more consistent floating point comparisons.
The test verifies identical shapes and that the elements of actual and desired satisfy abs(desired-actual) < 1.5 * 10**(-decimal).
That is a looser test than originally documented, but agrees with what the actual implementation did up to rounding vagaries. An exception is raised at shape mismatch or conflicting values. In contrast to the standard usage in numpy, NaNs are compared like numbers, no assertion is raised if both objects have NaNs in the same positions.
See also
assert_allclose, assert_array_almost_equal_nulp, assert_array_max_ulp, assert_equal
Examples
The first assert does not raise an exception:
>>> np.testing.assert_array_almost_equal([1.0,2.333,np.nan],
...                                      [1.0,2.333,np.nan])
>>> np.testing.assert_array_almost_equal([1.0,2.33333,np.nan],
... [1.0,2.33339,np.nan], decimal=5)
...
<type 'exceptions.AssertionError'>:
AssertionError:
Arrays are not almost equal
(mismatch 50.0%)
x: array([ 1. , 2.33333, NaN])
y: array([ 1. , 2.33339, NaN])
>>> np.testing.assert_array_almost_equal([1.0,2.33333,np.nan],
... [1.0,2.33333, 5], decimal=5)
<type 'exceptions.ValueError'>:
ValueError:
Arrays are not almost equal
x: array([ 1. , 2.33333, NaN])
y: array([ 1. , 2.33333, 5. ])
dipy.tracking.benchmarks.bench_streamline.
assert_array_equal
(x, y, err_msg='', verbose=True)Raises an AssertionError if two array_like objects are not equal.
Given two array_like objects, check that the shape is equal and all elements of these objects are equal. An exception is raised at shape mismatch or conflicting values. In contrast to the standard usage in numpy, NaNs are compared like numbers, no assertion is raised if both objects have NaNs in the same positions.
The usual caution for verifying equality with floating point numbers is advised.
See also
assert_allclose, assert_array_almost_equal_nulp, assert_array_max_ulp, assert_equal
Examples
The first assert does not raise an exception:
>>> np.testing.assert_array_equal([1.0,2.33333,np.nan],
... [np.exp(0),2.33333, np.nan])
Assert fails with numerical imprecision with floats:
>>> np.testing.assert_array_equal([1.0,np.pi,np.nan],
... [1, np.sqrt(np.pi)**2, np.nan])
...
<type 'exceptions.ValueError'>:
AssertionError:
Arrays are not equal
(mismatch 50.0%)
x: array([ 1. , 3.14159265, NaN])
y: array([ 1. , 3.14159265, NaN])
Use assert_allclose or one of the nulp (number of floating point values) functions for these cases instead:
>>> np.testing.assert_allclose([1.0,np.pi,np.nan],
... [1, np.sqrt(np.pi)**2, np.nan],
... rtol=1e-10, atol=0)
dipy.tracking.benchmarks.bench_streamline.
compress_streamlines
()Compress streamlines by linearization as in [Presseau15].
The compression consists in merging consecutive segments that are nearly collinear. The merging is achieved by removing the point the two segments have in common.
The linearization process [Presseau15] ensures that every point being removed is within a certain margin (in mm) of the resulting streamline. Recommendations for setting this margin can be found in [Presseau15] (in which it is called the tolerance error).
The compression also ensures that two consecutive points won't be too far from each other (precisely, less than or equal to max_segment_length mm). This is a tradeoff to speed up the linearization process [Rheault15]. A low value will result in a faster linearization but low compression, whereas a high value will result in a slower linearization but high compression.
Notes
Be aware that compressed streamlines have variable step sizes. One needs to be careful when computing streamlines-based metrics [Houde15].
References
[Presseau15] Presseau C. et al., A new compression format for fiber tracking datasets, NeuroImage, no 109, 73-83, 2015.
[Rheault15] Rheault F. et al., Real Time Interaction with Millions of Streamlines, ISMRM, 2015.
[Houde15] Houde J.-C. et al., How to Avoid Biased Streamlines-Based Metrics for Streamlines with Variable Step Sizes, ISMRM, 2015.
Examples
>>> from dipy.tracking.streamline import compress_streamlines
>>> import numpy as np
>>> # One streamline: a wiggling line
>>> rng = np.random.RandomState(42)
>>> streamline = np.linspace(0, 10, 100*3).reshape((100, 3))
>>> streamline += 0.2 * rng.rand(100, 3)
>>> c_streamline = compress_streamlines(streamline, tol_error=0.2)
>>> len(streamline)
100
>>> len(c_streamline)
10
>>> # Multiple streamlines
>>> streamlines = [streamline, streamline[::2]]
>>> c_streamlines = compress_streamlines(streamlines, tol_error=0.2)
>>> [len(s) for s in streamlines]
[100, 50]
>>> [len(s) for s in c_streamlines]
[10, 7]
dipy.tracking.benchmarks.bench_streamline.
compress_streamlines_python
(streamline, tol_error=0.01, max_segment_length=10)Python version of the FiberCompression found on https://github.com/scilus/FiberCompression.
dipy.tracking.benchmarks.bench_streamline.
get_fnames
(name='small_64D')provides filenames of some test datasets or other useful parametrisations
Examples
>>> import numpy as np
>>> from dipy.data import get_fnames
>>> fimg,fbvals,fbvecs=get_fnames('small_101D')
>>> bvals=np.loadtxt(fbvals)
>>> bvecs=np.loadtxt(fbvecs).T
>>> import nibabel as nib
>>> img=nib.load(fimg)
>>> data=img.get_data()
>>> data.shape == (6, 10, 10, 102)
True
>>> bvals.shape == (102,)
True
>>> bvecs.shape == (102, 3)
True
dipy.tracking.benchmarks.bench_streamline.
length
()Euclidean length of streamlines
Length is in mm only if streamlines are expressed in world coordinates.
Examples
>>> from dipy.tracking.streamline import length
>>> import numpy as np
>>> streamline = np.array([[1, 1, 1], [2, 3, 4], [0, 0, 0]])
>>> expected_length = np.sqrt([1+2**2+3**2, 2**2+3**2+4**2]).sum()
>>> length(streamline) == expected_length
True
>>> streamlines = [streamline, np.vstack([streamline, streamline[::-1]])]
>>> expected_lengths = [expected_length, 2*expected_length]
>>> lengths = [length(streamlines[0]), length(streamlines[1])]
>>> np.allclose(lengths, expected_lengths)
True
>>> length([])
0.0
>>> length(np.array([[1, 2, 3]]))
0.0
dipy.tracking.benchmarks.bench_streamline.
measure
(code_str, times=1, label=None)Return elapsed time for executing code in the namespace of the caller.
The supplied code string is compiled with the Python builtin compile. The precision of the timing is 10 milliseconds. If the code will execute fast on this timescale, it can be executed many times to get reasonable timing accuracy.
Examples
>>> times = 10
>>> etime = np.testing.measure('for i in range(1000): np.sqrt(i**2)',
...                            times=times)
>>> print("Time for a single execution : ", etime / times, "s")
Time for a single execution :  0.005 s
dipy.tracking.benchmarks.bench_streamline.
set_number_of_points
()Change the number of points of streamlines in order to obtain nb_points-1 segments of equal length. Points of streamlines will be modified along the curve.
Examples
>>> from dipy.tracking.streamline import set_number_of_points
>>> import numpy as np
One streamline, a semi-circle:
>>> theta = np.pi*np.linspace(0, 1, 100)
>>> x = np.cos(theta)
>>> y = np.sin(theta)
>>> z = 0 * x
>>> streamline = np.vstack((x, y, z)).T
>>> modified_streamline = set_number_of_points(streamline, 3)
>>> len(modified_streamline)
3
Multiple streamlines:
>>> streamlines = [streamline, streamline[::2]]
>>> new_streamlines = set_number_of_points(streamlines, 10)
>>> [len(s) for s in streamlines]
[100, 50]
>>> [len(s) for s in new_streamlines]
[10, 10]
EuDX
dipy.tracking.eudx.
EuDX
(a, ind, seeds, odf_vertices, a_low=0.0239, step_sz=0.5, ang_thr=60.0, length_thr=0.0, total_weight=0.5, max_points=1000, affine=None)Bases: object
Euler Delta Crossings
Generates tracks with termination criteria defined by a delta function [1]. It has similarities with the FACT algorithm [2] and Basser's method, but uses trilinear interpolation.
It can be used with any reconstruction method, such as DTI, DSI, QBI or GQI, that can calculate an orientation distribution function and find the local peaks of that function. For example, a single-tensor model gives only one peak, a dual-tensor model gives two peaks, and the quantitative anisotropy method used in GQI can give 3, 4, 5 or even more peaks.
The parameters of the delta function are thresholds on the propagation direction magnitude and the propagation angle.
A specific number of seeds is defined randomly, and tracks are then generated from a seed if the delta function returns true.
Trilinear interpolation is used to define the weights of the propagation.
Notes
The coordinate system of the tractography is that of native space of image coordinates, not native space world coordinates; therefore the voxel size is always considered to be (1,1,1). The origin is at the center of the first voxel of the volume, and all i,j,k coordinates start from the center of the voxel they represent.
References
[1] Garyfallidis, Towards an accurate brain tractography, PhD thesis, University of Cambridge, 2012.
[2] Mori et al., Three-dimensional tracking of axonal projections in the brain by magnetic resonance imaging. Ann. Neurol. 1999.
__init__
(a, ind, seeds, odf_vertices, a_low=0.0239, step_sz=0.5, ang_thr=60.0, length_thr=0.0, total_weight=0.5, max_points=1000, affine=None)Euler integration with multiple stopping criteria and supporting multiple fibres in crossings [1].
Notes
This works as an iterator class because otherwise it could fill your entire memory if you generate many tracks, which is very common since you can easily generate millions of tracks if you have many seeds.
References
[1] E. Garyfallidis (2012), "Towards an accurate brain tractography", PhD thesis, University of Cambridge, UK.
Examples
>>> import nibabel as nib
>>> from dipy.reconst.dti import TensorModel, quantize_evecs
>>> from dipy.data import get_fnames, get_sphere
>>> from dipy.core.gradients import gradient_table
>>> fimg,fbvals,fbvecs = get_fnames('small_101D')
>>> img = nib.load(fimg)
>>> affine = img.affine
>>> data = img.get_data()
>>> gtab = gradient_table(fbvals, fbvecs)
>>> model = TensorModel(gtab)
>>> ten = model.fit(data)
>>> sphere = get_sphere('symmetric724')
>>> ind = quantize_evecs(ten.evecs, sphere.vertices)
>>> eu = EuDX(a=ten.fa, ind=ind, seeds=100, odf_vertices=sphere.vertices, a_low=.2)
>>> tracks = [e for e in eu]
dipy.tracking.eudx.
eudx_both_directions
()
dipy.tracking.eudx.
get_sphere
(name='symmetric362')provide triangulated spheres
Examples
>>> import numpy as np
>>> from dipy.data import get_sphere
>>> sphere = get_sphere('symmetric362')
>>> verts, faces = sphere.vertices, sphere.faces
>>> verts.shape == (362, 3)
True
>>> faces.shape == (720, 3)
True
>>> verts, faces = get_sphere('not a sphere name')
Traceback (most recent call last):
...
DataError: No sphere called "not a sphere name"
dipy.tracking.learning.
detect_corresponding_tracks
(indices, tracks1, tracks2)Detect corresponding tracks from list tracks1 to list tracks2 where tracks1 & tracks2 are lists of tracks
Notes
To find the corresponding tracks we use mam_distances with ‘avg’ option. Then we calculate the argmin of all the calculated distances and return it for every index. (See 3rd column of arr in the example given below.)
Examples
>>> import numpy as np
>>> import dipy.tracking.learning as tl
>>> A = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]])
>>> B = np.array([[1, 0, 0], [2, 0, 0], [3, 0, 0]])
>>> C = np.array([[0, 0, -1], [0, 0, -2], [0, 0, -3]])
>>> bundle1 = [A, B, C]
>>> bundle2 = [B, A]
>>> indices = [0, 1]
>>> arr = tl.detect_corresponding_tracks(indices, bundle1, bundle2)
dipy.tracking.learning.
detect_corresponding_tracks_plus
(indices, tracks1, indices2, tracks2)Detect corresponding tracks from 1 to 2 where tracks1 & tracks2 are sequences of tracks
See also
distances.mam_distances
Notes
To find the corresponding tracks we use mam_distances with ‘avg’ option. Then we calculate the argmin of all the calculated distances and return it for every index. (See 3rd column of arr in the example given below.)
Examples
>>> import numpy as np
>>> import dipy.tracking.learning as tl
>>> A = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]])
>>> B = np.array([[1, 0, 0], [2, 0, 0], [3, 0, 0]])
>>> C = np.array([[0, 0, -1], [0, 0, -2], [0, 0, -3]])
>>> bundle1 = [A, B, C]
>>> bundle2 = [B, A]
>>> indices = [0, 1]
>>> indices2 = indices
>>> arr = tl.detect_corresponding_tracks_plus(indices, bundle1, indices2, bundle2)
FiberFit
dipy.tracking.life.
FiberFit
(fiber_model, life_matrix, vox_coords, to_fit, beta, weighted_signal, b0_signal, relative_signal, mean_sig, vox_data, streamline, affine, evals)Bases: dipy.reconst.base.ReconstFit
A fit of the LiFE model to diffusion data
Methods
predict ([gtab, S0]) - Predict the signal
__init__
(fiber_model, life_matrix, vox_coords, to_fit, beta, weighted_signal, b0_signal, relative_signal, mean_sig, vox_data, streamline, affine, evals)
predict
(gtab=None, S0=None)Predict the signal
FiberModel
dipy.tracking.life.
FiberModel
(gtab)Bases: dipy.reconst.base.ReconstModel
A class for representing and solving predictive models based on tractography solutions.
Notes
This is an implementation of the LiFE model described in [1]_
Methods
fit (data, streamline[, affine, evals, sphere]) - Fit the LiFE FiberModel for data and a set of streamlines associated with this data
setup (streamline, affine[, evals, sphere]) - Set up the necessary components for the LiFE model: the matrix of fiber-contributions to the DWI signal, and the coordinates of voxels for which the equations will be solved
fit
(data, streamline, affine=None, evals=[0.001, 0, 0], sphere=None)Fit the LiFE FiberModel for data and a set of streamlines associated with this data
setup
(streamline, affine, evals=[0.001, 0, 0], sphere=None)Set up the necessary components for the LiFE model: the matrix of fiber-contributions to the DWI signal, and the coordinates of voxels for which the equations will be solved
LifeSignalMaker
dipy.tracking.life.
LifeSignalMaker
(gtab, evals=[0.001, 0, 0], sphere=None)Bases: object
A class for generating signals from streamlines in an efficient and speedy manner.
Methods
streamline_signal (streamline) - Approximate the signal for a given streamline
calc_signal
__init__
(gtab, evals=[0.001, 0, 0], sphere=None)Initialize a signal maker
ReconstFit
dipy.tracking.life.
ReconstFit
(model, data)Bases: object
Abstract class which holds the fit result of ReconstModel
For example, it could hold FA, GFA, etc.
ReconstModel
dipy.tracking.life.
ReconstModel
(gtab)Bases: object
Abstract class for signal reconstruction models
Methods
fit
range
dipy.tracking.life.
range
(stop) → range objectBases: object
range(start, stop[, step]) -> range object
Return an object that produces a sequence of integers from start (inclusive) to stop (exclusive) by step. range(i, j) produces i, i+1, i+2, …, j-1. start defaults to 0, and stop is omitted! range(4) produces 0, 1, 2, 3. These are exactly the valid indices for a list of 4 elements. When step is given, it specifies the increment (or decrement).
Methods
count (value)
index (value, [start, [stop]]) - Raise ValueError if the value is not present.
dipy.tracking.life.
grad_tensor
(grad, evals)Calculate the 3 by 3 tensor for a given spatial gradient, given a canonical tensor shape (also as a 3 by 3), pointing at [1,0,0]
dipy.tracking.life.
gradient
(f)Return the gradient of an N-dimensional array.
The gradient is computed using central differences in the interior and first differences at the boundaries. The returned gradient hence has the same shape as the input array.
Examples
>>> x = np.array([1, 2, 4, 7, 11, 16], dtype=np.float)
>>> gradient(x)
array([ 1. , 1.5, 2.5, 3.5, 4.5, 5. ])
>>> gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=np.float))
[array([[ 2., 2., -1.],
[ 2., 2., -1.]]), array([[ 1. , 2.5, 4. ],
[ 1. , 1. , 1. ]])]
dipy.tracking.life.
streamline_gradients
(streamline)Calculate the gradients of the streamline along the spatial dimension
dipy.tracking.life.
streamline_signal
(streamline, gtab, evals=[0.001, 0, 0])The signal from a single streamline estimate along each of its nodes.
dipy.tracking.life.
streamline_tensors
(streamline, evals=[0.001, 0, 0])The tensors generated by this fiber.
dipy.tracking.life.
transform_streamlines
(streamlines, mat, in_place=False)Apply affine transformation to streamlines
dipy.tracking.life.
unique_rows
(in_array, dtype='f4')This (quickly) finds the unique rows in an array
dipy.tracking.life.
voxel2streamline
(streamline, transformed=False, affine=None, unique_idx=None)Maps voxels to streamlines and streamlines to voxels, for setting up the LiFE equations matrix
ActTissueClassifier
dipy.tracking.local.
ActTissueClassifier
Bases: dipy.tracking.local.tissue_classifier.ConstrainedTissueClassifier
Anatomically-Constrained Tractography (ACT) stopping criteria from [1]. This implements the use of partial volume fraction (PVE) maps to determine when the tracking stops. The method proposed in [1] that cuts streamlines going through subcortical gray matter regions is not implemented here. The backtracking technique for streamlines reaching INVALIDPOINT is not implemented either.
References
[1] Smith, R. E., Tournier, J.-D., Calamante, F., & Connelly, A. "Anatomically-constrained tractography: Improved diffusion MRI streamlines tractography through effective use of anatomical information." NeuroImage, 63(3), 1924-1938, 2012.
Methods
from_pve - ConstrainedTissueClassifier from partial volume fraction (PVE) maps.
check_point
get_exclude
get_include
CmcTissueClassifier
dipy.tracking.local.
CmcTissueClassifier
Bases: dipy.tracking.local.tissue_classifier.ConstrainedTissueClassifier
Continuous map criterion (CMC) stopping criteria from [1]. This implements the use of partial volume fraction (PVE) maps to determine when the tracking stops.
References
[1] Girard, G., Whittingstall, K., Deriche, R., & Descoteaux, M. "Towards quantitative connectivity analysis: reducing tractography biases." NeuroImage, 98, 266-278, 2014.
Methods
from_pve - ConstrainedTissueClassifier from partial volume fraction (PVE) maps.
check_point
get_exclude
get_include
ConstrainedTissueClassifier
dipy.tracking.local.
ConstrainedTissueClassifier
Bases: dipy.tracking.local.tissue_classifier.TissueClassifier
Abstract class that takes as input included and excluded tissue maps. The 'include_map' defines when the streamline reached a 'valid' stopping region (e.g. gray matter partial volume estimation (PVE) map) and the 'exclude_map' defines when the streamline reached an 'invalid' stopping region (e.g. cerebrospinal fluid PVE map). The background of the anatomical image should be added to the 'include_map' to keep streamlines exiting the brain (e.g. through the brain stem).
Methods
from_pve - ConstrainedTissueClassifier from partial volume fraction (PVE) maps.
check_point
get_exclude
get_include
from_pve
()ConstrainedTissueClassifier from partial volume fraction (PVE) maps.
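For orientation, here is a hedged sketch of how from_pve is usually called on the concrete ACT subclass; pve_wm, pve_gm and pve_csf are assumed to be white-matter, gray-matter and CSF partial volume arrays you have already loaded, and the call pattern reflects common usage rather than a quotation of this API.
from dipy.tracking.local import ActTissueClassifier

# Assumed inputs: pve_wm, pve_gm, pve_csf are 3D partial volume fraction arrays.
act_classifier = ActTissueClassifier.from_pve(pve_wm, pve_gm, pve_csf)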
DirectionGetter
dipy.tracking.local.
DirectionGetter
Bases: object
Methods
get_direction
initial_direction
LocalTracking
dipy.tracking.local.
LocalTracking
(direction_getter, tissue_classifier, seeds, affine, step_size, max_cross=None, maxlen=500, fixedstep=True, return_all=True, random_seed=None)Bases: object
__init__
(direction_getter, tissue_classifier, seeds, affine, step_size, max_cross=None, maxlen=500, fixedstep=True, return_all=True, random_seed=None)Creates streamlines by using local fiber-tracking.
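A hedged usage sketch (not taken from this page) of how LocalTracking is typically driven; peaks (a direction getter, e.g. the output of peaks_from_model), fa (a scalar stopping map) and seed_mask are assumed inputs.
import numpy as np
from dipy.tracking import utils
from dipy.tracking.local import LocalTracking, ThresholdTissueClassifier
from dipy.tracking.streamline import Streamlines

# Assumed inputs: peaks (direction getter), fa (FA map), seed_mask (binary volume).
classifier = ThresholdTissueClassifier(fa, 0.2)              # stop where FA drops below 0.2
seeds = utils.seeds_from_mask(seed_mask, density=1, affine=np.eye(4))
streamline_generator = LocalTracking(peaks, classifier, seeds,
                                     affine=np.eye(4), step_size=0.5)
streamlines = Streamlines(streamline_generator)              # materialize the lazy generator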
ParticleFilteringTracking
dipy.tracking.local.
ParticleFilteringTracking
(direction_getter, tissue_classifier, seeds, affine, step_size, max_cross=None, maxlen=500, pft_back_tracking_dist=2, pft_front_tracking_dist=1, pft_max_trial=20, particle_count=15, return_all=True, random_seed=None)Bases: dipy.tracking.local.localtracking.LocalTracking
__init__
(direction_getter, tissue_classifier, seeds, affine, step_size, max_cross=None, maxlen=500, pft_back_tracking_dist=2, pft_front_tracking_dist=1, pft_max_trial=20, particle_count=15, return_all=True, random_seed=None)A streamline generator using the particle filtering tractography method [1].
References
[1] Girard, G., Whittingstall, K., Deriche, R., & Descoteaux, M. Towards quantitative connectivity analysis: reducing tractography biases. NeuroImage, 98, 266-278, 2014.
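Again as a hedged sketch rather than a quotation of this API, particle filtering tractography is usually paired with a CMC stopping criterion built from PVE maps; peaks, seeds and the PVE arrays are assumed inputs, and the keyword values are only illustrative.
import numpy as np
from dipy.tracking.local import CmcTissueClassifier, ParticleFilteringTracking
from dipy.tracking.streamline import Streamlines

# Assumed inputs: peaks (direction getter), seeds, and pve_wm/pve_gm/pve_csf arrays.
cmc_classifier = CmcTissueClassifier.from_pve(pve_wm, pve_gm, pve_csf,
                                              step_size=0.2,
                                              average_voxel_size=2.0)
pft_generator = ParticleFilteringTracking(peaks, cmc_classifier, seeds,
                                          affine=np.eye(4), step_size=0.2,
                                          maxlen=1000,
                                          pft_back_tracking_dist=2,
                                          pft_front_tracking_dist=1,
                                          particle_count=15,
                                          return_all=False)
pft_streamlines = Streamlines(pft_generator)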
ThresholdTissueClassifier
TissueClassifier
dipy.tracking.local.
TissueClassifier
Bases: object
Methods
check_point
ConstrainedTissueClassifier
dipy.tracking.local.localtracking.
ConstrainedTissueClassifier
Bases: dipy.tracking.local.tissue_classifier.TissueClassifier
Abstract class that takes as input included and excluded tissue maps. The 'include_map' defines when the streamline reached a 'valid' stopping region (e.g. gray matter partial volume estimation (PVE) map) and the 'exclude_map' defines when the streamline reached an 'invalid' stopping region (e.g. cerebrospinal fluid PVE map). The background of the anatomical image should be added to the 'include_map' to keep streamlines exiting the brain (e.g. through the brain stem).
Methods
from_pve - ConstrainedTissueClassifier from partial volume fraction (PVE) maps.
check_point
get_exclude
get_include
from_pve
()ConstrainedTissueClassifier from partial volume fraction (PVE) maps.
LocalTracking
dipy.tracking.local.localtracking.
LocalTracking
(direction_getter, tissue_classifier, seeds, affine, step_size, max_cross=None, maxlen=500, fixedstep=True, return_all=True, random_seed=None)Bases: object
__init__
(direction_getter, tissue_classifier, seeds, affine, step_size, max_cross=None, maxlen=500, fixedstep=True, return_all=True, random_seed=None)Creates streamlines by using local fiber-tracking.
ParticleFilteringTracking
dipy.tracking.local.localtracking.
ParticleFilteringTracking
(direction_getter, tissue_classifier, seeds, affine, step_size, max_cross=None, maxlen=500, pft_back_tracking_dist=2, pft_front_tracking_dist=1, pft_max_trial=20, particle_count=15, return_all=True, random_seed=None)Bases: dipy.tracking.local.localtracking.LocalTracking
__init__
(direction_getter, tissue_classifier, seeds, affine, step_size, max_cross=None, maxlen=500, pft_back_tracking_dist=2, pft_front_tracking_dist=1, pft_max_trial=20, particle_count=15, return_all=True, random_seed=None)A streamline generator using the particle filtering tractography method [1].
References
[1] Girard, G., Whittingstall, K., Deriche, R., & Descoteaux, M. Towards quantitative connectivity analysis: reducing tractography biases. NeuroImage, 98, 266-278, 2014.
dipy.tracking.local.localtracking.
local_tracker
()Tracks one direction from a seed.
This function is the main workhorse of the LocalTracking class defined in dipy.tracking.local.localtracking.
dipy.tracking.local.localtracking.
pft_tracker
()Tracks one direction from a seed using the particle filtering algorithm.
This function is the main workhorse of the ParticleFilteringTracking class defined in dipy.tracking.local.localtracking.
dipy.tracking.metrics.
arbitrarypoint
(xyz, distance)Select an arbitrary point along distance on the track (curve)
Examples
>>> import numpy as np
>>> from dipy.tracking.metrics import arbitrarypoint, length
>>> theta=np.pi*np.linspace(0,1,100)
>>> x=np.cos(theta)
>>> y=np.sin(theta)
>>> z=0*x
>>> xyz=np.vstack((x,y,z)).T
>>> ap=arbitrarypoint(xyz,length(xyz)/3)
dipy.tracking.metrics.
center_of_mass
(xyz)Center of mass of streamline
Examples
>>> import numpy as np
>>> from dipy.tracking.metrics import center_of_mass
>>> center_of_mass([])
Traceback (most recent call last):
...
ValueError: xyz array cannot be empty
>>> center_of_mass([[1,1,1]])
array([ 1., 1., 1.])
>>> xyz = np.array([[0,0,0],[1,1,1],[2,2,2]])
>>> center_of_mass(xyz)
array([ 1., 1., 1.])
dipy.tracking.metrics.
downsample
(xyz, n_pols=3)downsample for a specific number of points along the curve/track
Uses the length of the curve. It works in a similar fashion to midpoint and arbitrarypoint but it also reduces the number of segments of a track.
Examples
>>> import numpy as np
>>> from dipy.tracking.metrics import downsample
>>> # a semi-circle
>>> theta=np.pi*np.linspace(0,1,100)
>>> x=np.cos(theta)
>>> y=np.sin(theta)
>>> z=0*x
>>> xyz=np.vstack((x,y,z)).T
>>> xyz2=downsample(xyz,3)
>>> # a cosine
>>> x=np.pi*np.linspace(0,1,100)
>>> y=np.cos(theta)
>>> z=0*y
>>> xyz=np.vstack((x,y,z)).T
>>> _= downsample(xyz,3)
>>> len(xyz2)
3
>>> xyz3=downsample(xyz,10)
>>> len(xyz3)
10
dipy.tracking.metrics.
endpoint
(xyz)
Examples
>>> from dipy.tracking.metrics import endpoint
>>> import numpy as np
>>> theta=np.pi*np.linspace(0,1,100)
>>> x=np.cos(theta)
>>> y=np.sin(theta)
>>> z=0*x
>>> xyz=np.vstack((x,y,z)).T
>>> ep=endpoint(xyz)
>>> ep.any()==xyz[-1].any()
True
dipy.tracking.metrics.
frenet_serret
(xyz)Frenet-Serret Space Curve Invariants
Calculates the 3 vector and 2 scalar invariants of a space curve defined by vectors r = (x,y,z). If z is omitted (i.e. the array xyz has shape (N,2)), then the curve is only 2D (planar), but the equations are still valid.
Similar to http://www.mathworks.com/matlabcentral/fileexchange/11169
In the following equations the prime (\('\)) indicates differentiation with respect to the parameter \(s\) of a parametrised curve \(\mathbf{r}(s)\).
Examples
Create a helix and calculate its tangent, normal, binormal, curvature and torsion
>>> from dipy.tracking import metrics as tm
>>> import numpy as np
>>> theta = 2*np.pi*np.linspace(0,2,100)
>>> x=np.cos(theta)
>>> y=np.sin(theta)
>>> z=theta/(2*np.pi)
>>> xyz=np.vstack((x,y,z)).T
>>> T,N,B,k,t=tm.frenet_serret(xyz)
dipy.tracking.metrics.
generate_combinations
(items, n)Combine sets of size n from items
Examples
>>> from dipy.tracking.metrics import generate_combinations
>>> ic=generate_combinations(range(3),2)
>>> for i in ic: print(i)
[0, 1]
[0, 2]
[1, 2]
dipy.tracking.metrics.
inside_sphere
(xyz, center, radius)If any point of the track is inside a sphere of a specified center and radius return True, otherwise False. Mathematically this can be simply described by \(|x-c|\le r\) where \(x\) is a point, \(c\) the center of the sphere and \(r\) the radius of the sphere.
Examples
>>> import numpy as np
>>> from dipy.tracking.metrics import inside_sphere
>>> line=np.array(([0,0,0],[1,1,1],[2,2,2]))
>>> sph_cent=np.array([1,1,1])
>>> sph_radius = 1
>>> inside_sphere(line,sph_cent,sph_radius)
True
dipy.tracking.metrics.
inside_sphere_points
(xyz, center, radius)If a track intersects with a sphere of a specified center and radius, return the points that are inside the sphere, otherwise False. Mathematically this can be simply described by \(|x-c| \le r\) where \(x\) is a point, \(c\) the center of the sphere and \(r\) the radius of the sphere.
Examples
>>> import numpy as np
>>> from dipy.tracking.metrics import inside_sphere_points
>>> line=np.array(([0,0,0],[1,1,1],[2,2,2]))
>>> sph_cent=np.array([1,1,1])
>>> sph_radius = 1
>>> inside_sphere_points(line,sph_cent,sph_radius)
array([[1, 1, 1]])
dipy.tracking.metrics.
intersect_sphere
(xyz, center, radius)If any segment of the track is intersecting with a sphere of specific center and radius return True otherwise False
Notes
The ray-to-sphere intersection method used here is similar to http://local.wasp.uwa.edu.au/~pbourke/geometry/sphereline/ and http://local.wasp.uwa.edu.au/~pbourke/geometry/sphereline/source.cpp; we just applied it to every segment, neglecting the intersections where the intersecting points are not inside the segment.
dipy.tracking.metrics.
length
(xyz, along=False)Euclidean length of track line
This will give length in mm if tracks are expressed in world coordinates.
Examples
>>> import numpy as np
>>> from dipy.tracking.metrics import length
>>> xyz = np.array([[1,1,1],[2,3,4],[0,0,0]])
>>> expected_lens = np.sqrt([1+2**2+3**2, 2**2+3**2+4**2])
>>> length(xyz) == expected_lens.sum()
True
>>> len_along = length(xyz, along=True)
>>> np.allclose(len_along, expected_lens.cumsum())
True
>>> length([])
0
>>> length([[1, 2, 3]])
0
>>> length([], along=True)
array([0])
dipy.tracking.metrics.
longest_track_bundle
(bundle, sort=False)Return longest track or length sorted track indices in bundle
If sort == True, return the indices of the sorted tracks in the bundle, otherwise return the longest track.
Examples
>>> from dipy.tracking.metrics import longest_track_bundle
>>> import numpy as np
>>> bundle = [np.array([[0,0,0],[2,2,2]]),np.array([[0,0,0],[4,4,4]])]
>>> longest_track_bundle(bundle)
array([[0, 0, 0],
[4, 4, 4]])
>>> longest_track_bundle(bundle, True)
array([0, 1]...)
dipy.tracking.metrics.
mean_curvature
(xyz)Calculates the mean curvature of a curve
Examples
Create a straight line and a semi-circle and print their mean curvatures
>>> from dipy.tracking import metrics as tm
>>> import numpy as np
>>> x=np.linspace(0,1,100)
>>> y=0*x
>>> z=0*x
>>> xyz=np.vstack((x,y,z)).T
>>> m=tm.mean_curvature(xyz) #mean curvature straight line
>>> theta=np.pi*np.linspace(0,1,100)
>>> x=np.cos(theta)
>>> y=np.sin(theta)
>>> z=0*x
>>> xyz=np.vstack((x,y,z)).T
>>> _= tm.mean_curvature(xyz) #mean curvature for semi-circle
dipy.tracking.metrics.
midpoint
(xyz)Midpoint of track
Examples
>>> import numpy as np
>>> from dipy.tracking.metrics import midpoint
>>> midpoint([])
Traceback (most recent call last):
...
ValueError: xyz array cannot be empty
>>> midpoint([[1, 2, 3]])
array([1, 2, 3])
>>> xyz = np.array([[1,1,1],[2,3,4]])
>>> midpoint(xyz)
array([ 1.5, 2. , 2.5])
>>> xyz = np.array([[0,0,0],[1,1,1],[2,2,2]])
>>> midpoint(xyz)
array([ 1., 1., 1.])
>>> xyz = np.array([[0,0,0],[1,0,0],[3,0,0]])
>>> midpoint(xyz)
array([ 1.5, 0. , 0. ])
>>> xyz = np.array([[0,9,7],[1,9,7],[3,9,7]])
>>> midpoint(xyz)
array([ 1.5, 9. , 7. ])
dipy.tracking.metrics.
midpoint2point
(xyz, p)Calculate distance from midpoint of a curve to arbitrary point p
Examples
>>> import numpy as np
>>> from dipy.tracking.metrics import midpoint2point, midpoint
>>> theta=np.pi*np.linspace(0,1,100)
>>> x=np.cos(theta)
>>> y=np.sin(theta)
>>> z=0*x
>>> xyz=np.vstack((x,y,z)).T
>>> dist=midpoint2point(xyz,np.array([0,0,0]))
dipy.tracking.metrics.
principal_components
(xyz)We use PCA to calculate the 3 principal directions for a track
Examples
>>> import numpy as np
>>> from dipy.tracking.metrics import principal_components
>>> theta=np.pi*np.linspace(0,1,100)
>>> x=np.cos(theta)
>>> y=np.sin(theta)
>>> z=0*x
>>> xyz=np.vstack((x,y,z)).T
>>> va, ve = principal_components(xyz)
>>> np.allclose(va, [0.51010101, 0.09883545, 0])
True
dipy.tracking.metrics.
splev
(x, tck, der=0, ext=0)Evaluate a B-spline or its derivatives.
Given the knots and coefficients of a B-spline representation, evaluate the value of the smoothing polynomial and its derivatives. This is a wrapper around the FORTRAN routines splev and splder of FITPACK.
See also
splprep, splrep, sproot, spalde, splint, bisplrep, bisplev, BSpline
Notes
Manipulating the tck-tuples directly is not recommended. In new code, prefer using BSpline objects.
References
[1] C. de Boor, "On calculating with b-splines", J. Approximation Theory, 6, p.50-62, 1972.
[2] M. G. Cox, "The numerical evaluation of b-splines", J. Inst. Maths Applics, 10, p.134-149, 1972.
[3] P. Dierckx, "Curve and surface fitting with splines", Monographs on Numerical Analysis, Oxford University Press, 1993.
dipy.tracking.metrics.
spline
(xyz, s=3, k=2, nest=-1)Generate B-splines as documented in http://www.scipy.org/Cookbook/Interpolation
The scipy.interpolate package wraps the netlib FITPACK routines (Dierckx) for calculating smoothing splines for various kinds of data and geometries. Although the data is evenly spaced in this example, it need not be so to use this routine.
See also
scipy.interpolate.splprep, scipy.interpolate.splev
Examples
>>> import numpy as np
>>> from dipy.tracking.metrics import spline
>>> t=np.linspace(0,1.75*2*np.pi,100)# make ascending spiral in 3-space
>>> x = np.sin(t)
>>> y = np.cos(t)
>>> z = t
>>> x+= np.random.normal(scale=0.1, size=x.shape) # add noise
>>> y+= np.random.normal(scale=0.1, size=y.shape)
>>> z+= np.random.normal(scale=0.1, size=z.shape)
>>> xyz=np.vstack((x,y,z)).T
>>> xyzn=spline(xyz,3,2,-1)
>>> len(xyzn) > len(xyz)
True
dipy.tracking.metrics.
splprep
(x, w=None, u=None, ub=None, ue=None, k=3, task=0, s=None, t=None, full_output=0, nest=None, per=0, quiet=1)Find the B-spline representation of an N-dimensional curve.
Given a list of N rank-1 arrays, x, which represent a curve in N-dimensional space parametrized by u, find a smooth approximating spline curve g(u). Uses the FORTRAN routine parcur from FITPACK.
See also
splrep, splev, sproot, spalde, splint, bisplrep, bisplev, UnivariateSpline, BivariateSpline, BSpline, make_interp_spline
Notes
See splev for evaluation of the spline and its derivatives. The number of dimensions N must be smaller than 11.
The number of coefficients in the c array is k+1 less than the number of knots, len(t). This is in contrast with splrep, which zero-pads the array of coefficients to have the same length as the array of knots. These additional coefficients are ignored by evaluation routines, splev and BSpline.
References
[1] P. Dierckx, "Algorithms for smoothing data with periodic and parametric splines, Computer Graphics and Image Processing", 20 (1982) 171-184.
[2] P. Dierckx, "Algorithms for smoothing data with periodic and parametric splines", report tw55, Dept. Computer Science, K.U.Leuven, 1981.
[3] P. Dierckx, "Curve and surface fitting with splines", Monographs on Numerical Analysis, Oxford University Press, 1993.
Examples
Generate a discretization of a limacon curve in the polar coordinates:
>>> phi = np.linspace(0, 2.*np.pi, 40)
>>> r = 0.5 + np.cos(phi) # polar coords
>>> x, y = r * np.cos(phi), r * np.sin(phi) # convert to cartesian
And interpolate:
>>> from scipy.interpolate import splprep, splev
>>> tck, u = splprep([x, y], s=0)
>>> new_points = splev(u, tck)
Notice that (i) we force interpolation by using s=0, and (ii) the parameterization, u, is generated automatically.
Now plot the result:
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots()
>>> ax.plot(x, y, 'ro')
>>> ax.plot(new_points[0], new_points[1], 'r-')
>>> plt.show()
dipy.tracking.metrics.
startpoint
(xyz)First point of the track
Examples
>>> from dipy.tracking.metrics import startpoint
>>> import numpy as np
>>> theta=np.pi*np.linspace(0,1,100)
>>> x=np.cos(theta)
>>> y=np.sin(theta)
>>> z=0*x
>>> xyz=np.vstack((x,y,z)).T
>>> sp=startpoint(xyz)
>>> sp.any()==xyz[0].any()
True
dipy.tracking.metrics.
winding
(xyz)Total turning angle projected.
Project space curve to best fitting plane. Calculate the cumulative signed angle between each line segment and the previous one.
LooseVersion
dipy.tracking.streamline.
LooseVersion
(vstring=None)Bases: distutils.version.Version
Version numbering for anarchists and software realists. Implements the standard interface for version number classes as described above. A version number consists of a series of numbers, separated by either periods or strings of letters. When comparing version numbers, the numeric components will be compared numerically, and the alphabetic components lexically. The following are all valid version numbers, in no particular order:
1.5.1 1.5.2b2 161 3.10a 8.02 3.4j 1996.07.12 3.2.pl0 3.1.1.6 2g6 11g 0.960923 2.2beta29 1.13++ 5.5.kw 2.0b1pl0
In fact, there is no such thing as an invalid version number under this scheme; the rules for comparison are simple and predictable, but may not always give the results you want (for some definition of “want”).
Methods
parse
dipy.tracking.streamline.
apply_affine
(aff, pts)Apply affine matrix aff to points pts
Returns result of application of aff to the right of pts. The coordinate dimension of pts should be the last.
For the 3D case, aff will be shape (4,4) and pts will have final axis length 3 - maybe it will just be N by 3. The return value is the transformed points, in this case:
res = np.dot(aff[:3,:3], pts.T) + aff[:3,3:4]
transformed_pts = res.T
This routine is more general than 3D, in that aff can have any shape (N,N), and pts can have any shape, as long as the last dimension is for the coordinates, and is therefore length N-1.
Examples
>>> aff = np.array([[0,2,0,10],[3,0,0,11],[0,0,4,12],[0,0,0,1]])
>>> pts = np.array([[1,2,3],[2,3,4],[4,5,6],[6,7,8]])
>>> apply_affine(aff, pts)
array([[14, 14, 24],
[16, 17, 28],
[20, 23, 36],
[24, 29, 44]]...)
Just to show that in the simple 3D case, it is equivalent to:
>>> (np.dot(aff[:3,:3], pts.T) + aff[:3,3:4]).T
array([[14, 14, 24],
[16, 17, 28],
[20, 23, 36],
[24, 29, 44]]...)
But pts can be a more complicated shape:
>>> pts = pts.reshape((2,2,3))
>>> apply_affine(aff, pts)
array([[[14, 14, 24],
[16, 17, 28]],
[[20, 23, 36],
[24, 29, 44]]]...)
dipy.tracking.streamline.
bundles_distances_mdf
()Calculate distances between list of tracks A and list of tracks B
All tracks need to have the same number of points
See also
dipy.metrics.downsample
dipy.tracking.streamline.
cdist
(XA, XB, metric='euclidean', *args, **kwargs)Compute distance between each pair of the two collections of inputs.
See Notes for common calling conventions.
Notes
The following are common calling conventions:
Y = cdist(XA, XB, 'euclidean')
Computes the distance between \(m\) points using Euclidean distance (2-norm) as the distance metric between the points. The points are arranged as \(m\) \(n\)-dimensional row vectors in the matrix X.
Y = cdist(XA, XB, 'minkowski', p=2.)
Computes the distances using the Minkowski distance \(||u-v||_p\) (\(p\)-norm) where \(p \geq 1\).
Y = cdist(XA, XB, 'cityblock')
Computes the city block or Manhattan distance between the points.
Y = cdist(XA, XB, 'seuclidean', V=None)
Computes the standardized Euclidean distance. The standardized Euclidean distance between two n-vectors u and v is \[\sqrt{\sum_i (u_i-v_i)^2 / V_i}.\] V is the variance vector; V[i] is the variance computed over all the i'th components of the points. If not passed, it is automatically computed.
Y = cdist(XA, XB, 'sqeuclidean')
Computes the squared Euclidean distance \(||u-v||_2^2\) between the vectors.
Y = cdist(XA, XB, 'cosine')
Computes the cosine distance between vectors u and v, \[1 - \frac{u \cdot v}{\|u\|_2 \|v\|_2}\] where \(\|\cdot\|_2\) is the 2-norm of its argument, and \(u \cdot v\) is the dot product of \(u\) and \(v\).
Y = cdist(XA, XB, 'correlation')
Computes the correlation distance between vectors u and v. This is \[1 - \frac{(u - \bar{u}) \cdot (v - \bar{v})}{\|u - \bar{u}\|_2 \|v - \bar{v}\|_2}\] where \(\bar{v}\) is the mean of the elements of vector v, and \(x \cdot y\) is the dot product of \(x\) and \(y\).
Y = cdist(XA, XB, 'hamming')
Computes the normalized Hamming distance, or the proportion of those vector elements between two n-vectors u and v which disagree. To save memory, the matrix X can be of type boolean.
Y = cdist(XA, XB, 'jaccard')
Computes the Jaccard distance between the points. Given two vectors, u and v, the Jaccard distance is the proportion of those elements u[i] and v[i] that disagree where at least one of them is non-zero.
Y = cdist(XA, XB, 'chebyshev')
Computes the Chebyshev distance between the points. The Chebyshev distance between two n-vectors u and v is the maximum norm-1 distance between their respective elements. More precisely, the distance is given by
\[d(u,v) = \max_i {|u_i-v_i|}.\]
Y = cdist(XA, XB, 'canberra')
Computes the Canberra distance between the points. The Canberra distance between two points u and v is
\[d(u,v) = \sum_i \frac{|u_i-v_i|} {|u_i|+|v_i|}.\]
Y = cdist(XA, XB, 'braycurtis')
Computes the Bray-Curtis distance between the points. The Bray-Curtis distance between two points u and v is
\[d(u,v) = \frac{\sum_i (|u_i-v_i|)} {\sum_i (|u_i+v_i|)}\]
Y = cdist(XA, XB, 'mahalanobis', VI=None)
Computes the Mahalanobis distance between the points. The Mahalanobis distance between two points u and v is \(\sqrt{(u-v)(1/V)(u-v)^T}\) where \((1/V)\) (the VI variable) is the inverse covariance. If VI is not None, VI will be used as the inverse covariance matrix.
Y = cdist(XA, XB, 'yule')
Computes the Yule distance between the boolean vectors. (see yule function documentation)
Y = cdist(XA, XB, 'matching')
Synonym for ‘hamming’.
Y = cdist(XA, XB, 'dice')
Computes the Dice distance between the boolean vectors. (see dice function documentation)
Y = cdist(XA, XB, 'kulsinski')
Computes the Kulsinski distance between the boolean vectors. (see kulsinski function documentation)
Y = cdist(XA, XB, 'rogerstanimoto')
Computes the Rogers-Tanimoto distance between the boolean vectors. (see rogerstanimoto function documentation)
Y = cdist(XA, XB, 'russellrao')
Computes the Russell-Rao distance between the boolean vectors. (see russellrao function documentation)
Y = cdist(XA, XB, 'sokalmichener')
Computes the Sokal-Michener distance between the boolean vectors. (see sokalmichener function documentation)
Y = cdist(XA, XB, 'sokalsneath')
Computes the Sokal-Sneath distance between the vectors. (see sokalsneath function documentation)
Y = cdist(XA, XB, 'wminkowski', p=2., w=w)
Computes the weighted Minkowski distance between the vectors. (see wminkowski function documentation)
Y = cdist(XA, XB, f)
Computes the distance between all pairs of vectors in X using the user supplied 2-arity function f. For example, Euclidean distance between the vectors could be computed as follows:
dm = cdist(XA, XB, lambda u, v: np.sqrt(((u-v)**2).sum()))
Note that you should avoid passing a reference to one of the distance functions defined in this library. For example:
dm = cdist(XA, XB, sokalsneath)
would calculate the pair-wise distances between the vectors in X using the Python function sokalsneath. This would result in sokalsneath being called \({n \choose 2}\) times, which is inefficient. Instead, the optimized C version is more efficient, and we call it using the following syntax:
dm = cdist(XA, XB, 'sokalsneath')
Examples
Find the Euclidean distances between four 2-D coordinates:
>>> from scipy.spatial import distance
>>> coords = [(35.0456, -85.2672),
... (35.1174, -89.9711),
... (35.9728, -83.9422),
... (36.1667, -86.7833)]
>>> distance.cdist(coords, coords, 'euclidean')
array([[ 0. , 4.7044, 1.6172, 1.8856],
[ 4.7044, 0. , 6.0893, 3.3561],
[ 1.6172, 6.0893, 0. , 2.8477],
[ 1.8856, 3.3561, 2.8477, 0. ]])
Find the Manhattan distance from a 3-D point to the corners of the unit cube:
>>> a = np.array([[0, 0, 0],
... [0, 0, 1],
... [0, 1, 0],
... [0, 1, 1],
... [1, 0, 0],
... [1, 0, 1],
... [1, 1, 0],
... [1, 1, 1]])
>>> b = np.array([[ 0.1, 0.2, 0.4]])
>>> distance.cdist(a, b, 'cityblock')
array([[ 0.7],
[ 0.9],
[ 1.3],
[ 1.5],
[ 1.5],
[ 1.7],
[ 2.1],
[ 2.3]])
dipy.tracking.streamline.cluster_confidence(streamlines, max_mdf=5, subsample=12, power=1, override=False)
Computes the cluster confidence index (cci), which is an estimation of the support a set of streamlines gives to a particular pathway.
Ex: A single streamline with no others in the dataset following a similar pathway has a low cci. A streamline in a bundle of 100 streamlines that follow similar pathways has a high cci.
See: Jordan et al. 2017 (Based on streamline MDF distance from Garyfallidis et al. 2012)
Parameters
Returns
References
[Jordan17] Jordan K. Et al., Cluster Confidence Index: A Streamline-Wise Pathway Reproducibility Metric for Diffusion-Weighted MRI Tractography, Journal of Neuroimaging, vol 28, no 1, 2017.
[Garyfallidis12] Garyfallidis E. et al., QuickBundles a method for tractography simplification, Frontiers in Neuroscience, vol 6, no 175, 2012.
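The following minimal sketch is an added illustration (not from the original documentation): it builds a toy bundle of long, nearly parallel streamlines (long enough that the override flag should not be needed) and computes one confidence value per streamline.
import numpy as np
from dipy.tracking.streamline import cluster_confidence
# 20 nearly parallel streamlines, each about 30 mm long and offset in y.
bundle = [np.vstack((np.linspace(0., 30., 50),
                     np.full(50, 0.1 * i),
                     np.zeros(50))).T for i in range(20)]
cci = cluster_confidence(bundle, max_mdf=5, subsample=12)
print(cci.shape)  # expected (20,): one cci value per streamline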
dipy.tracking.streamline.compress_streamlines()
Compress streamlines by linearization as in [Presseau15].
The compression consists in merging consecutive segments that are nearly collinear. The merging is achieved by removing the point the two segments have in common.
The linearization process [Presseau15] ensures that every point being removed is within a certain margin (in mm) of the resulting streamline. Recommendations for setting this margin can be found in [Presseau15] (in which it is called the tolerance error).
The compression also ensures that two consecutive points won’t be too far from each other (precisely, less than or equal to max_segment_length mm). This is a tradeoff to speed up the linearization process [Rheault15]. A low value will result in a faster linearization but low compression, whereas a high value will result in a slower linearization but high compression.
Parameters
Returns
Notes
Be aware that compressed streamlines have variable step sizes. One needs to be careful when computing streamlines-based metrics [Houde15].
References
[Presseau15] | (1, 2, 3, 4, 5, 6) Presseau C. et al., A new compression format for fiber tracking datasets, NeuroImage, no 109, 73-83, 2015. |
[Rheault15] | (1, 2) Rheault F. et al., Real Time Interaction with Millions of Streamlines, ISMRM, 2015. |
[Houde15] | (1, 2) Houde J.-C. et al. How to Avoid Biased Streamlines-Based Metrics for Streamlines with Variable Step Sizes, ISMRM, 2015. |
Examples
>>> from dipy.tracking.streamline import compress_streamlines
>>> import numpy as np
>>> # One streamline: a wiggling line
>>> rng = np.random.RandomState(42)
>>> streamline = np.linspace(0, 10, 100*3).reshape((100, 3))
>>> streamline += 0.2 * rng.rand(100, 3)
>>> c_streamline = compress_streamlines(streamline, tol_error=0.2)
>>> len(streamline)
100
>>> len(c_streamline)
10
>>> # Multiple streamlines
>>> streamlines = [streamline, streamline[::2]]
>>> c_streamlines = compress_streamlines(streamlines, tol_error=0.2)
>>> [len(s) for s in streamlines]
[100, 50]
>>> [len(s) for s in c_streamlines]
[10, 7]
dipy.tracking.streamline.deform_streamlines(streamlines, deform_field, stream_to_current_grid, current_grid_to_world, stream_to_ref_grid, ref_grid_to_world)
Apply deformation field to streamlines.
Parameters
Returns
dipy.tracking.streamline.dist_to_corner(affine)
Calculate the maximal distance from the center to a corner of a voxel, given an affine.
Parameters
Returns
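Added sketch (not from the original docstring): for an isotropic 2 mm affine, the distance from the voxel center to a corner is expected to be half the voxel diagonal, i.e. roughly sqrt(3) mm.
import numpy as np
from dipy.tracking.streamline import dist_to_corner
affine = np.diag([2., 2., 2., 1.])  # isotropic 2 mm voxels
print(dist_to_corner(affine))  # expected to be close to sqrt(3) ~ 1.732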
dipy.tracking.streamline.length()
Euclidean length of streamlines.
Length is in mm only if streamlines are expressed in world coordinates.
Parameters
Returns
Examples
>>> from dipy.tracking.streamline import length
>>> import numpy as np
>>> streamline = np.array([[1, 1, 1], [2, 3, 4], [0, 0, 0]])
>>> expected_length = np.sqrt([1+2**2+3**2, 2**2+3**2+4**2]).sum()
>>> length(streamline) == expected_length
True
>>> streamlines = [streamline, np.vstack([streamline, streamline[::-1]])]
>>> expected_lengths = [expected_length, 2*expected_length]
>>> lengths = [length(streamlines[0]), length(streamlines[1])]
>>> np.allclose(lengths, expected_lengths)
True
>>> length([])
0.0
>>> length(np.array([[1, 2, 3]]))
0.0
dipy.tracking.streamline.orient_by_rois(streamlines, roi1, roi2, in_place=False, as_generator=False, affine=None)
Orient a set of streamlines according to a pair of ROIs.
Parameters
Returns
Examples
>>> streamlines = [np.array([[0, 0., 0],
... [1, 0., 0.],
... [2, 0., 0.]]),
... np.array([[2, 0., 0.],
... [1, 0., 0],
... [0, 0, 0.]])]
>>> roi1 = np.zeros((4, 4, 4), dtype=bool)
>>> roi2 = np.zeros_like(roi1)
>>> roi1[0, 0, 0] = True
>>> roi2[1, 0, 0] = True
>>> orient_by_rois(streamlines, roi1, roi2)
[array([[ 0., 0., 0.],
[ 1., 0., 0.],
[ 2., 0., 0.]]), array([[ 0., 0., 0.],
[ 1., 0., 0.],
[ 2., 0., 0.]])]
dipy.tracking.streamline.orient_by_streamline(streamlines, standard, n_points=12, in_place=False, as_generator=False, affine=None)
Orient a bundle of streamlines to a standard streamline.
Parameters
Returns
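Added sketch (not from the original docstring), assuming the function flips any streamline that runs opposite to the standard one:
import numpy as np
from dipy.tracking.streamline import orient_by_streamline
standard = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]])
# The second streamline is the first one reversed.
streamlines = [standard.copy(), standard[::-1].copy()]
oriented = orient_by_streamline(streamlines, standard, n_points=3)
for sl in oriented:
    print(sl[0])  # both are expected to start near [0, 0, 0]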
dipy.tracking.streamline.select_by_rois(streamlines, rois, include, mode=None, affine=None, tol=None)
Select streamlines based on logical relations with several regions of interest (ROIs). For example, select streamlines that pass near ROI1, but only if they do not pass near ROI2.
Parameters
Returns
Notes
The only operation currently possible is “(A or B or …) and not (X or Y or …)”, where A, B are inclusion regions and X, Y are exclusion regions.
Examples
>>> streamlines = [np.array([[0, 0., 0.9],
... [1.9, 0., 0.]]),
... np.array([[0., 0., 0],
... [0, 1., 1.],
... [0, 2., 2.]]),
... np.array([[2, 2, 2],
... [3, 3, 3]])]
>>> mask1 = np.zeros((4, 4, 4), dtype=bool)
>>> mask2 = np.zeros_like(mask1)
>>> mask1[0, 0, 0] = True
>>> mask2[1, 0, 0] = True
>>> selection = select_by_rois(streamlines, [mask1, mask2],
... [True, True],
... tol=1)
>>> list(selection) # The result is a generator
[array([[ 0. , 0. , 0.9],
[ 1.9, 0. , 0. ]]), array([[ 0., 0., 0.],
[ 0., 1., 1.],
[ 0., 2., 2.]])]
>>> selection = select_by_rois(streamlines, [mask1, mask2],
... [True, False],
... tol=0.87)
>>> list(selection)
[array([[ 0., 0., 0.],
[ 0., 1., 1.],
[ 0., 2., 2.]])]
>>> selection = select_by_rois(streamlines, [mask1, mask2],
... [True, True],
... mode="both_end",
... tol=1.0)
>>> list(selection)
[array([[ 0. , 0. , 0.9],
[ 1.9, 0. , 0. ]])]
>>> mask2[0, 2, 2] = True
>>> selection = select_by_rois(streamlines, [mask1, mask2],
... [True, True],
... mode="both_end",
... tol=1.0)
>>> list(selection)
[array([[ 0. , 0. , 0.9],
[ 1.9, 0. , 0. ]]), array([[ 0., 0., 0.],
[ 0., 1., 1.],
[ 0., 2., 2.]])]
dipy.tracking.streamline.select_random_set_of_streamlines(streamlines, select, rng=None)
Select a random set of streamlines.
Parameters
Returns
Notes
The same streamline will not be selected twice.
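Added sketch (not from the original docstring); a seeded numpy RandomState is assumed to be an acceptable rng so the selection is reproducible:
import numpy as np
from dipy.tracking.streamline import select_random_set_of_streamlines
streamlines = [np.array([[0., 0., 0.], [float(i), 1., 0.]]) for i in range(4)]
subset = select_random_set_of_streamlines(streamlines, select=2,
                                           rng=np.random.RandomState(42))
print(len(subset))  # expected 2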
dipy.tracking.streamline.set_number_of_points()
Change the number of points of streamlines in order to obtain nb_points-1 segments of equal length. Points of streamlines will be modified along the curve.
Parameters
Returns
Examples
>>> from dipy.tracking.streamline import set_number_of_points
>>> import numpy as np
One streamline, a semi-circle:
>>> theta = np.pi*np.linspace(0, 1, 100)
>>> x = np.cos(theta)
>>> y = np.sin(theta)
>>> z = 0 * x
>>> streamline = np.vstack((x, y, z)).T
>>> modified_streamline = set_number_of_points(streamline, 3)
>>> len(modified_streamline)
3
Multiple streamlines:
>>> streamlines = [streamline, streamline[::2]]
>>> new_streamlines = set_number_of_points(streamlines, 10)
>>> [len(s) for s in streamlines]
[100, 50]
>>> [len(s) for s in new_streamlines]
[10, 10]
dipy.tracking.streamline.streamline_near_roi(streamline, roi_coords, tol, mode='any')
Is a streamline near an ROI.
Implements the inner loops of the near_roi() function.
Parameters
Returns
dipy.tracking.streamline.transform_streamlines(streamlines, mat, in_place=False)
Apply affine transformation to streamlines.
Parameters
Returns
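Added sketch (not from the original docstring): a pure translation applied to a toy streamline.
import numpy as np
from dipy.tracking.streamline import transform_streamlines
streamlines = [np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]])]
shift = np.eye(4)
shift[0, 3] = 10.0  # translate +10 mm along x
moved = transform_streamlines(streamlines, shift)
print(moved[0][0])  # expected [10., 0., 0.]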
dipy.tracking.streamline.values_from_volume(data, streamlines, affine=None)
Extract values of a scalar/vector along each streamline from a volume.
Parameters
Notes
Values are extracted from the image based on the 3D coordinates of the nodes that comprise the points in the streamline, without any interpolation into segments between the nodes. Using this function with streamlines that have been resampled into a very small number of nodes will result in very few values.
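Added sketch (not from the original docstring): sampling a small scalar volume along a streamline whose points sit exactly on voxel grid positions, so the returned values should simply be the corresponding voxel values.
import numpy as np
from dipy.tracking.streamline import values_from_volume
data = np.arange(27, dtype=float).reshape(3, 3, 3)
streamlines = [np.array([[0., 0., 0.], [1., 1., 1.], [2., 2., 2.]])]
vals = values_from_volume(data, streamlines)
print(vals[0])  # expected: the values of data at the three streamline points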
defaultdict
dipy.tracking.utils.defaultdict
Bases: dict
defaultdict(default_factory[, …]) –> dict with default factory
The default factory is called without arguments to produce a new value when a key is not present, in __getitem__ only. A defaultdict compares equal to a dict with the same items. All remaining arguments are treated the same as if they were passed to the dict constructor, including keyword arguments.
Attributes
Methods
clear () |
|
copy () |
|
fromkeys ($type, iterable[, value]) |
Returns a new dict with keys from iterable and values equal to value. |
get (k[,d]) |
|
items () |
|
keys () |
|
pop (k[,d]) |
If key is not found, d is returned if given, otherwise KeyError is raised |
popitem () |
Remove and return some (key, value) pair as a 2-tuple; raise KeyError if D is empty. |
setdefault (k[,d]) |
|
update ([E, ]**F) |
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k] |
values () |
map
dipy.tracking.utils.affine_for_trackvis(voxel_size, voxel_order=None, dim=None, ref_img_voxel_order=None)
Returns an affine which maps points for voxel indices to trackvis space.
Parameters
Returns
dipy.tracking.utils.affine_from_fsl_mat_file(mat_affine, input_voxsz, output_voxsz)
Converts an affine matrix from flirt (FSL) and a given voxel size for input and output images, and returns an adjusted affine matrix for trackvis.
Parameters
Returns
dipy.tracking.utils.apply_affine(aff, pts)
Apply affine matrix aff to points pts.
Returns result of application of aff to the right of pts. The coordinate dimension of pts should be the last.
For the 3D case, aff will be shape (4,4) and pts will have final axis length 3 - maybe it will just be N by 3. The return value is the transformed points, in this case:
res = np.dot(aff[:3,:3], pts.T) + aff[:3,3:4]
transformed_pts = res.T
This routine is more general than 3D, in that aff can have any shape (N,N), and pts can have any shape, as long as the last dimension is for the coordinates, and is therefore length N-1.
Parameters
Returns
Examples
>>> aff = np.array([[0,2,0,10],[3,0,0,11],[0,0,4,12],[0,0,0,1]])
>>> pts = np.array([[1,2,3],[2,3,4],[4,5,6],[6,7,8]])
>>> apply_affine(aff, pts)
array([[14, 14, 24],
[16, 17, 28],
[20, 23, 36],
[24, 29, 44]]...)
Just to show that in the simple 3D case, it is equivalent to:
>>> (np.dot(aff[:3,:3], pts.T) + aff[:3,3:4]).T
array([[14, 14, 24],
[16, 17, 28],
[20, 23, 36],
[24, 29, 44]]...)
But pts can be a more complicated shape:
>>> pts = pts.reshape((2,2,3))
>>> apply_affine(aff, pts)
array([[[14, 14, 24],
[16, 17, 28]],
[[20, 23, 36],
[24, 29, 44]]]...)
dipy.tracking.utils.asarray(a, dtype=None, order=None)
Convert the input to an array.
Parameters
Returns
See also
asanyarray
ascontiguousarray
asfarray
asfortranarray
asarray_chkfinite
fromiter
fromfunction
Examples
Convert a list into an array:
>>> a = [1, 2]
>>> np.asarray(a)
array([1, 2])
Existing arrays are not copied:
>>> a = np.array([1, 2])
>>> np.asarray(a) is a
True
If dtype is set, array is copied only if dtype does not match:
>>> a = np.array([1, 2], dtype=np.float32)
>>> np.asarray(a, dtype=np.float32) is a
True
>>> np.asarray(a, dtype=np.float64) is a
False
Contrary to asanyarray, ndarray subclasses are not passed through:
>>> issubclass(np.recarray, np.ndarray)
True
>>> a = np.array([(1.0, 2), (3.0, 4)], dtype='f4,i4').view(np.recarray)
>>> np.asarray(a) is a
False
>>> np.asanyarray(a) is a
True
dipy.tracking.utils.cdist(XA, XB, metric='euclidean', *args, **kwargs)
Compute distance between each pair of the two collections of inputs.
See Notes for common calling conventions.
Parameters
Returns
Raises
Notes
The following are common calling conventions:
Y = cdist(XA, XB, 'euclidean')
Computes the distance between \(m\) points using Euclidean distance (2-norm) as the distance metric between the points. The points are arranged as \(m\) \(n\)-dimensional row vectors in the matrix X.
Y = cdist(XA, XB, 'minkowski', p=2.)
Computes the distances using the Minkowski distance \(||u-v||_p\) (\(p\)-norm) where \(p \geq 1\).
Y = cdist(XA, XB, 'cityblock')
Computes the city block or Manhattan distance between the points.
Y = cdist(XA, XB, 'seuclidean', V=None)
Computes the standardized Euclidean distance. The standardized Euclidean distance between two n-vectors u and v is
\[\sqrt{\sum_i (u_i-v_i)^2 / V[i]}\]
V is the variance vector; V[i] is the variance computed over all the i’th components of the points. If not passed, it is automatically computed.
Y = cdist(XA, XB, 'sqeuclidean')
Computes the squared Euclidean distance \(||u-v||_2^2\) between the vectors.
Y = cdist(XA, XB, 'cosine')
Computes the cosine distance between vectors u and v,
\[1 - \frac{u \cdot v}{||u||_2 ||v||_2}\]
where \(||*||_2\) is the 2-norm of its argument *, and \(u \cdot v\) is the dot product of \(u\) and \(v\).
Y = cdist(XA, XB, 'correlation')
Computes the correlation distance between vectors u and v. This is
\[1 - \frac{(u - \bar{u}) \cdot (v - \bar{v})}{||u - \bar{u}||_2 ||v - \bar{v}||_2}\]
where \(\bar{v}\) is the mean of the elements of vector v, and \(x \cdot y\) is the dot product of \(x\) and \(y\).
Y = cdist(XA, XB, 'hamming')
Computes the normalized Hamming distance, or the proportion of those vector elements between two n-vectors u and v which disagree. To save memory, the matrix X can be of type boolean.
Y = cdist(XA, XB, 'jaccard')
Computes the Jaccard distance between the points. Given two vectors, u and v, the Jaccard distance is the proportion of those elements u[i] and v[i] that disagree where at least one of them is non-zero.
Y = cdist(XA, XB, 'chebyshev')
Computes the Chebyshev distance between the points. The Chebyshev distance between two n-vectors u and v is the maximum norm-1 distance between their respective elements. More precisely, the distance is given by
\[d(u,v) = \max_i {|u_i-v_i|}.\]
Y = cdist(XA, XB, 'canberra')
Computes the Canberra distance between the points. The Canberra distance between two points u and v is
\[d(u,v) = \sum_i \frac{|u_i-v_i|} {|u_i|+|v_i|}.\]
Y = cdist(XA, XB, 'braycurtis')
Computes the Bray-Curtis distance between the points. The Bray-Curtis distance between two points u and v is
\[d(u,v) = \frac{\sum_i (|u_i-v_i|)} {\sum_i (|u_i+v_i|)}\]
Y = cdist(XA, XB, 'mahalanobis', VI=None)
Computes the Mahalanobis distance between the points. The Mahalanobis distance between two points u and v is \(\sqrt{(u-v)(1/V)(u-v)^T}\) where \((1/V)\) (the VI variable) is the inverse covariance. If VI is not None, VI will be used as the inverse covariance matrix.
Y = cdist(XA, XB, 'yule')
Computes the Yule distance between the boolean vectors. (see yule function documentation)
Y = cdist(XA, XB, 'matching')
Synonym for ‘hamming’.
Y = cdist(XA, XB, 'dice')
Computes the Dice distance between the boolean vectors. (see dice function documentation)
Y = cdist(XA, XB, 'kulsinski')
Computes the Kulsinski distance between the boolean vectors. (see kulsinski function documentation)
Y = cdist(XA, XB, 'rogerstanimoto')
Computes the Rogers-Tanimoto distance between the boolean vectors. (see rogerstanimoto function documentation)
Y = cdist(XA, XB, 'russellrao')
Computes the Russell-Rao distance between the boolean vectors. (see russellrao function documentation)
Y = cdist(XA, XB, 'sokalmichener')
Computes the Sokal-Michener distance between the boolean vectors. (see sokalmichener function documentation)
Y = cdist(XA, XB, 'sokalsneath')
Computes the Sokal-Sneath distance between the vectors. (see sokalsneath function documentation)
Y = cdist(XA, XB, 'wminkowski', p=2., w=w)
Computes the weighted Minkowski distance between the vectors. (see wminkowski function documentation)
Y = cdist(XA, XB, f)
Computes the distance between all pairs of vectors in X using the user supplied 2-arity function f. For example, Euclidean distance between the vectors could be computed as follows:
dm = cdist(XA, XB, lambda u, v: np.sqrt(((u-v)**2).sum()))
Note that you should avoid passing a reference to one of the distance functions defined in this library. For example:
dm = cdist(XA, XB, sokalsneath)
would calculate the pair-wise distances between the vectors in X using the Python function sokalsneath. This would result in sokalsneath being called \({n \choose 2}\) times, which is inefficient. Instead, the optimized C version is more efficient, and we call it using the following syntax:
dm = cdist(XA, XB, 'sokalsneath')
Examples
Find the Euclidean distances between four 2-D coordinates:
>>> from scipy.spatial import distance
>>> coords = [(35.0456, -85.2672),
... (35.1174, -89.9711),
... (35.9728, -83.9422),
... (36.1667, -86.7833)]
>>> distance.cdist(coords, coords, 'euclidean')
array([[ 0. , 4.7044, 1.6172, 1.8856],
[ 4.7044, 0. , 6.0893, 3.3561],
[ 1.6172, 6.0893, 0. , 2.8477],
[ 1.8856, 3.3561, 2.8477, 0. ]])
Find the Manhattan distance from a 3-D point to the corners of the unit cube:
>>> a = np.array([[0, 0, 0],
... [0, 0, 1],
... [0, 1, 0],
... [0, 1, 1],
... [1, 0, 0],
... [1, 0, 1],
... [1, 1, 0],
... [1, 1, 1]])
>>> b = np.array([[ 0.1, 0.2, 0.4]])
>>> distance.cdist(a, b, 'cityblock')
array([[ 0.7],
[ 0.9],
[ 1.3],
[ 1.5],
[ 1.5],
[ 1.7],
[ 2.1],
[ 2.3]])
dipy.tracking.utils.connectivity_matrix(streamlines, label_volume, voxel_size=None, affine=None, symmetric=True, return_mapping=False, mapping_as_streamlines=False)
Counts the streamlines that start and end at each label pair.
Parameters
Returns
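Added sketch (not from the original docstring), assuming streamline coordinates in voxel space (identity affine) and a tiny integer label volume with two regions:
import numpy as np
from dipy.tracking.utils import connectivity_matrix
labels = np.zeros((3, 3, 3), dtype=int)
labels[0, 0, 0] = 1
labels[2, 2, 2] = 2
streamlines = [np.array([[0., 0., 0.], [1., 1., 1.], [2., 2., 2.]])]
M = connectivity_matrix(streamlines, labels, affine=np.eye(4))
print(M[1, 2])  # expected 1: one streamline connects label 1 to label 2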
dipy.tracking.utils.density_map(streamlines, vol_dims, voxel_size=None, affine=None)
Counts the number of unique streamlines that pass through each voxel.
Parameters
Returns
Raises
Notes
A streamline can pass through a voxel even if one of the points of the streamline does not lie in the voxel. For example a step from [0,0,0] to [0,0,2] passes through [0,0,1]. Consider subsegmenting the streamlines when the edges of the voxels are smaller than the steps of the streamlines.
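Added sketch (not from the original docstring), again assuming voxel coordinates with an identity affine; one streamline visiting three voxels along x should count once in each of them.
import numpy as np
from dipy.tracking.utils import density_map
streamlines = [np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]])]
dm = density_map(streamlines, (3, 3, 3), affine=np.eye(4))
print(dm[:, 0, 0])  # expected [1 1 1]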
dipy.tracking.utils.dist_to_corner(affine)
Calculate the maximal distance from the center to a corner of a voxel, given an affine.
Parameters
Returns
dipy.tracking.utils.dot(a, b, out=None)
Dot product of two arrays. Specifically:
If both a and b are 1-D arrays, it is inner product of vectors (without complex conjugation).
If both a and b are 2-D arrays, it is matrix multiplication, but using matmul() or a @ b is preferred.
If either a or b is 0-D (scalar), it is equivalent to multiply() and using numpy.multiply(a, b) or a * b is preferred.
If a is an N-D array and b is a 1-D array, it is a sum product over the last axis of a and b.
If a is an N-D array and b is an M-D array (where M>=2), it is a sum product over the last axis of a and the second-to-last axis of b:
dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])
Parameters
Returns
Raises
See also
vdot
tensordot
einsum
matmul
Examples
>>> np.dot(3, 4)
12
Neither argument is complex-conjugated:
>>> np.dot([2j, 3j], [2j, 3j])
(-13+0j)
For 2-D arrays it is the matrix product:
>>> a = [[1, 0], [0, 1]]
>>> b = [[4, 1], [2, 2]]
>>> np.dot(a, b)
array([[4, 1],
[2, 2]])
>>> a = np.arange(3*4*5*6).reshape((3,4,5,6))
>>> b = np.arange(3*4*5*6)[::-1].reshape((5,4,6,3))
>>> np.dot(a, b)[2,3,2,1,2,2]
499128
>>> sum(a[2,3,2,:] * b[1,2,:,2])
499128
dipy.tracking.utils.empty(shape, dtype=float, order='C')
Return a new array of given shape and type, without initializing entries.
Parameters
Returns
See also
empty_like
ones
zeros
full
Notes
empty, unlike zeros, does not set the array values to zero, and may therefore be marginally faster. On the other hand, it requires the user to manually set all the values in the array, and should be used with caution.
Examples
>>> np.empty([2, 2])
array([[ -9.74499359e+001, 6.69583040e-309],
[ 2.13182611e-314, 3.06959433e-309]]) #random
>>> np.empty([2, 2], dtype=int)
array([[-1073741821, -1067949133],
[ 496041986, 19249760]]) #random
dipy.tracking.utils.eye(N, M=None, k=0, dtype=<class 'float'>, order='C')
Return a 2-D array with ones on the diagonal and zeros elsewhere.
Parameters
Returns
See also
identity
diag
Examples
>>> np.eye(2, dtype=int)
array([[1, 0],
[0, 1]])
>>> np.eye(3, k=1)
array([[ 0., 1., 0.],
[ 0., 0., 1.],
[ 0., 0., 0.]])
dipy.tracking.utils.flexi_tvis_affine(sl_vox_order, grid_affine, dim, voxel_size)
Parameters
Returns
dipy.tracking.utils.get_flexi_tvis_affine(tvis_hdr, nii_aff)
Parameters
Returns
dipy.tracking.utils.length(streamlines, affine=None)
Calculate the lengths of many streamlines in a bundle.
Parameters
Returns
dipy.tracking.utils.minimum_at(a, indices, b=None)
Performs unbuffered in place operation on operand ‘a’ for elements specified by ‘indices’. For addition ufunc, this method is equivalent to a[indices] += b, except that results are accumulated for elements that are indexed more than once. For example, a[[0,0]] += 1 will only increment the first element once because of buffering, whereas add.at(a, [0,0], 1) will increment the first element twice.
New in version 1.8.0.
Parameters
Examples
Set items 0 and 1 to their negative values:
>>> a = np.array([1, 2, 3, 4])
>>> np.negative.at(a, [0, 1])
>>> print(a)
array([-1, -2, 3, 4])
Increment items 0 and 1, and increment item 2 twice:
>>> a = np.array([1, 2, 3, 4])
>>> np.add.at(a, [0, 1, 2, 2], 1)
>>> print(a)
array([2, 3, 5, 4])
Add items 0 and 1 in first array to second array, and store results in first array:
>>> a = np.array([1, 2, 3, 4])
>>> b = np.array([1, 2])
>>> np.add.at(a, [0, 1], b)
>>> print(a)
array([2, 4, 3, 4])
dipy.tracking.utils.move_streamlines(streamlines, output_space, input_space=None)
Applies a linear transformation, given by affine, to streamlines.
Parameters
Returns
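Added sketch (not from the original docstring): move_streamlines is assumed to return a generator, so it is wrapped in list(); the output space here is a pure +5 mm translation along z.
import numpy as np
from dipy.tracking.utils import move_streamlines
streamlines = [np.array([[0., 0., 0.], [1., 0., 0.]])]
output_space = np.eye(4)
output_space[2, 3] = 5.0
moved = list(move_streamlines(streamlines, output_space))
print(moved[0][:, 2])  # expected [5., 5.]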
dipy.tracking.utils.near_roi(streamlines, region_of_interest, affine=None, tol=None, mode='any')
Provide filtering criteria for a set of streamlines based on whether they fall within a tolerance distance from an ROI.
Parameters
Returns
dipy.tracking.utils.path_length(streamlines, aoi, affine, fill_value=-1)
Computes the shortest path, along any streamline, between aoi and each voxel.
Parameters
Returns
dipy.tracking.utils.random_seeds_from_mask(mask, seeds_count=1, seed_count_per_voxel=True, affine=None, random_seed=None)
Creates randomly placed seeds for fiber tracking from a binary mask.
Seed points are placed randomly within the voxels of mask which are True. If seed_count_per_voxel is True, this function is similar to seeds_from_mask(), with the difference that instead of evenly distributing the seeds, it randomly places the seeds within the voxels specified by the mask.
Parameters
Raises
See also
Examples
>>> mask = np.zeros((3,3,3), 'bool')
>>> mask[0,0,0] = 1
>>> random_seeds_from_mask(mask, seeds_count=1, seed_count_per_voxel=True,
... random_seed=1)
array([[-0.0640051 , -0.47407377, 0.04966248]])
>>> random_seeds_from_mask(mask, seeds_count=6, seed_count_per_voxel=True,
... random_seed=1)
array([[-0.0640051 , -0.47407377, 0.04966248],
[ 0.0507979 , 0.20814782, -0.20909526],
[ 0.46702984, 0.04723225, 0.47268436],
[-0.27800683, 0.37073231, -0.29328084],
[ 0.39286015, -0.16802019, 0.32122912],
[-0.42369171, 0.27991879, -0.06159077]])
>>> mask[0,1,2] = 1
>>> random_seeds_from_mask(mask, seeds_count=2, seed_count_per_voxel=True,
... random_seed=1)
array([[-0.0640051 , -0.47407377, 0.04966248],
[-0.27800683, 1.37073231, 1.70671916],
[ 0.0507979 , 0.20814782, -0.20909526],
[-0.48962585, 1.00187459, 1.99577329]])
dipy.tracking.utils.ravel_multi_index(multi_index, dims, mode='raise', order='C')
Converts a tuple of index arrays into an array of flat indices, applying boundary modes to the multi-index.
Parameters
Returns
See also
unravel_index
Notes
New in version 1.6.0.
Examples
>>> arr = np.array([[3,6,6],[4,5,1]])
>>> np.ravel_multi_index(arr, (7,6))
array([22, 41, 37])
>>> np.ravel_multi_index(arr, (7,6), order='F')
array([31, 41, 13])
>>> np.ravel_multi_index(arr, (4,6), mode='clip')
array([22, 23, 19])
>>> np.ravel_multi_index(arr, (4,4), mode=('clip','wrap'))
array([12, 13, 13])
>>> np.ravel_multi_index((3,1,4,1), (6,7,8,9))
1621
dipy.tracking.utils.reduce_labels(label_volume)
Reduces an array of labels to the integers from 0 to n with smallest possible n.
Examples
>>> labels = np.array([[1, 3, 9],
... [1, 3, 8],
... [1, 3, 7]])
>>> new_labels, lookup = reduce_labels(labels)
>>> lookup
array([1, 3, 7, 8, 9])
>>> new_labels
array([[0, 1, 4],
[0, 1, 3],
[0, 1, 2]]...)
>>> (lookup[new_labels] == labels).all()
True
dipy.tracking.utils.reduce_rois(rois, include)
Reduce multiple ROIs to one inclusion and one exclusion ROI.
Parameters
Returns
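Added sketch (not from the original docstring), assuming the function returns one OR-combined inclusion ROI and one OR-combined exclusion ROI:
import numpy as np
from dipy.tracking.utils import reduce_rois
roi_a = np.zeros((3, 3, 3), dtype=bool)
roi_b = np.zeros((3, 3, 3), dtype=bool)
roi_a[0, 0, 0] = True   # marked for inclusion below
roi_b[2, 2, 2] = True   # marked for exclusion below
include_roi, exclude_roi = reduce_rois([roi_a, roi_b], include=[True, False])
print(include_roi[0, 0, 0], exclude_roi[2, 2, 2])  # expected True True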
dipy.tracking.utils.reorder_voxels_affine(input_ornt, output_ornt, shape, voxel_size)
Calculates a linear transformation equivalent to changing voxel order.
Calculates a linear transformation A such that [a, b, c, 1] = A[x, y, z, 1], where [x, y, z] is a point in the coordinate system defined by input_ornt and [a, b, c] is the same point in the coordinate system defined by output_ornt.
Parameters
Returns
See also
nibabel.orientation, dipy.io.bvectxt.orientation_to_string, dipy.io.bvectxt.orientation_from_string
dipy.tracking.utils.seeds_from_mask(mask, density=[1, 1, 1], voxel_size=None, affine=None)
Creates seeds for fiber tracking from a binary mask.
Seed points are placed evenly distributed in all voxels of mask which are True.
Parameters
Raises
See also
Examples
>>> mask = np.zeros((3,3,3), 'bool')
>>> mask[0,0,0] = 1
>>> seeds_from_mask(mask, [1,1,1], [1,1,1])
array([[ 0.5, 0.5, 0.5]])
>>> seeds_from_mask(mask, [1,2,3], [1,1,1])
array([[ 0.5 , 0.25 , 0.16666667],
[ 0.5 , 0.75 , 0.16666667],
[ 0.5 , 0.25 , 0.5 ],
[ 0.5 , 0.75 , 0.5 ],
[ 0.5 , 0.25 , 0.83333333],
[ 0.5 , 0.75 , 0.83333333]])
>>> mask[0,1,2] = 1
>>> seeds_from_mask(mask, [1,1,2], [1.1,1.1,2.5])
array([[ 0.55 , 0.55 , 0.625],
[ 0.55 , 0.55 , 1.875],
[ 0.55 , 1.65 , 5.625],
[ 0.55 , 1.65 , 6.875]])
dipy.tracking.utils.streamline_near_roi(streamline, roi_coords, tol, mode='any')
Is a streamline near an ROI.
Implements the inner loops of the near_roi() function.
Parameters
Returns
dipy.tracking.utils.subsegment(streamlines, max_segment_length)
Splits the segments of the streamlines into small segments.
Replaces each segment of each of the streamlines with the smallest possible number of equally sized smaller segments such that no segment is longer than max_segment_length. Among other things, this can be useful for getting streamline counts on a grid that is smaller than the length of the streamline segments.
Parameters
Returns
Notes
Segments of 0 length are removed. If unchanged
Examples
>>> streamlines = [np.array([[0,0,0],[2,0,0],[5,0,0]])]
>>> list(subsegment(streamlines, 3.))
[array([[ 0., 0., 0.],
[ 2., 0., 0.],
[ 5., 0., 0.]])]
>>> list(subsegment(streamlines, 1))
[array([[ 0., 0., 0.],
[ 1., 0., 0.],
[ 2., 0., 0.],
[ 3., 0., 0.],
[ 4., 0., 0.],
[ 5., 0., 0.]])]
>>> list(subsegment(streamlines, 1.6))
[array([[ 0. , 0. , 0. ],
[ 1. , 0. , 0. ],
[ 2. , 0. , 0. ],
[ 3.5, 0. , 0. ],
[ 5. , 0. , 0. ]])]
dipy.tracking.utils.target(streamlines, target_mask, affine, include=True)
Filters streamlines based on whether or not they pass through an ROI.
Parameters
Returns
Raises
See also
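Added sketch (not from the original docstring): target is assumed to yield streamlines lazily, so the result is wrapped in list(); only the streamline passing through the masked voxel is kept.
import numpy as np
from dipy.tracking.utils import target
streamlines = [np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]]),
               np.array([[0., 2., 2.], [1., 2., 2.]])]
mask = np.zeros((3, 3, 3), dtype=bool)
mask[1, 0, 0] = True  # only the first streamline crosses this voxel
kept = list(target(streamlines, mask, np.eye(4)))
print(len(kept))  # expected 1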
dipy.tracking.utils.target_line_based(streamlines, target_mask, affine=None, include=True)
Filters streamlines based on whether or not they pass through an ROI, using a line-based algorithm. Mostly used as a replacement for target() with compressed streamlines.
This function never returns single-point streamlines, whatever the value of include.
Parameters
Returns
References
dipy.tracking.utils.unique_rows(in_array, dtype='f4')
This (quickly) finds the unique rows in an array.
Parameters
Returns
dipy.tracking.utils.wraps(wrapped, assigned=('__module__', '__name__', '__qualname__', '__doc__', '__annotations__'), updated=('__dict__',))
Decorator factory to apply update_wrapper() to a wrapper function.
Returns a decorator that invokes update_wrapper() with the decorated function as the wrapper argument and the arguments to wraps() as the remaining arguments. Default arguments are as for update_wrapper(). This is a convenience function to simplify applying partial() to update_wrapper().