workflows

Module: workflows.align

AffineMap(affine[, domain_grid_shape, …])

Methods

AffineRegistration([metric, level_iters, …])

Methods

AffineTransform3D

Methods

ApplyTransformFlow([output_strategy, …])

Methods

CCMetric(dim[, sigma_diff, radius])

Methods

DiffeomorphicMap(dim, disp_shape[, …])

Methods

EMMetric(dim[, smooth, inner_iter, …])

Methods

ImageRegistrationFlow([output_strategy, …]) The registration workflow is organized as a collection of different functions.
MutualInformationMetric([nbins, …])

Methods

ResliceFlow([output_strategy, mix_names, …])

Methods

RigidTransform3D

Methods

SSDMetric(dim[, smooth, inner_iter, step_type])

Methods

SlrWithQbxFlow([output_strategy, mix_names, …])

Methods

SymmetricDiffeomorphicRegistration(metric[, …])

Methods

SynRegistrationFlow([output_strategy, …])

Methods

TranslationTransform3D

Methods

Workflow([output_strategy, mix_names, …])

Methods

check_dimensions(static, moving) Check the dimensions of the input images.
load_nifti(fname[, return_img, …])
load_trk(filename[, lazy_load]) Loads tractogram files (*.trk)
reslice(data, affine, zooms, new_zooms[, …]) Reslice data with new voxel resolution defined by new_zooms
save_nifti(fname, data, affine[, hdr])
save_qa_metric(fname, xopt, fopt) Save Quality Assurance metrics.
slr_with_qbx(static, moving[, x0, …]) Utility function for registering large tractograms.
transform_centers_of_mass(static, …) Transformation to align the center of mass of the input images
transform_streamlines(streamlines, mat[, …]) Apply affine transformation to streamlines
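
As a quick illustration of the streamline helpers listed above, here is a minimal sketch of transform_streamlines (the points and the shift matrix are made up; the import path is the underlying module these names are re-exported from):

    import numpy as np
    from dipy.tracking.streamline import transform_streamlines

    # Two toy streamlines, each an (N, 3) array of points in world coordinates.
    streamlines = [np.array([[0., 0., 0.], [1., 1., 1.]]),
                   np.array([[2., 2., 2.], [3., 3., 3.]])]

    # Homogeneous affine shifting every point by 10 mm along each axis.
    shift = np.eye(4)
    shift[:3, 3] = 10.0

    moved = transform_streamlines(streamlines, shift)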

Module: workflows.base

IntrospectiveArgumentParser([prog, usage, …])
NumpyDocString(docstring[, config])
get_args_default(func)

Module: workflows.combined_workflow

CombinedWorkflow([output_strategy, …])

Methods

Workflow([output_strategy, mix_names, …])

Methods

iteritems(d, **kw) Return an iterator over the (key, value) pairs of a dictionary.

Module: workflows.denoise

NLMeansFlow([output_strategy, mix_names, …])

Methods

Workflow([output_strategy, mix_names, …])

Methods

estimate_sigma(arr[, …]) Standard deviation estimation from local patches
load_nifti(fname[, return_img, …])
nlmeans(arr, sigma[, mask, patch_radius, …]) Non-local means for denoising 3D and 4D images
save_nifti(fname, data, affine[, hdr])
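
A minimal denoising sketch tying these functions together (synthetic data; a real pipeline would read the array with load_nifti and write the result with save_nifti):

    import numpy as np
    from dipy.denoise.noise_estimate import estimate_sigma
    from dipy.denoise.nlmeans import nlmeans

    # Synthetic noisy 3D volume standing in for a diffusion-weighted image.
    arr = 100 + 5 * np.random.standard_normal((24, 24, 24))

    sigma = estimate_sigma(arr)  # noise standard deviation estimate
    denoised = nlmeans(arr, sigma=float(np.mean(sigma)),
                       patch_radius=1, block_radius=2)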

Module: workflows.docstring_parser

This was taken directly from the file docscrape.py of the numpydoc package.

Copyright (C) 2008 Stefan van der Walt <stefan@mentat.za.net>, Pauli Virtanen <pav@iki.fi>

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE AUTHOR "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

NumpyDocString(docstring[, config])
Reader(data) A line-based string reader.
dedent_lines(lines) Deindent a list of lines maximally
warn Issue a warning, or maybe ignore it or raise an exception.

Module: workflows.flow_runner

IntrospectiveArgumentParser([prog, usage, …])
get_level(lvl) Transforms the logging level passed on the command line into a proper logging level name.
iteritems(d, **kw) Return an iterator over the (key, value) pairs of a dictionary.
run_flow(flow) Wraps the process of building an argparser that reflects the workflow that we want to run along with some generic parameters like logging, force and output strategies.
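
To make this machinery concrete, here is a minimal custom workflow in the style of dipy's workflow tutorial; run_flow introspects the run method's signature and docstring to build the command-line parser:

    from dipy.workflows.workflow import Workflow
    from dipy.workflows.flow_runner import run_flow

    class AppendTextFlow(Workflow):

        def run(self, input_files, text_to_append='dipy',
                out_dir='', out_file='append.txt'):
            """ Append text to the content of the input files.

            Parameters
            ----------
            input_files : string
                Path to the input files.
            text_to_append : string, optional
                Text that will be appended (default 'dipy').
            out_dir : string, optional
                Output directory (default input file directory).
            out_file : string, optional
                Name of the output file (default 'append.txt').
            """
            io_it = self.get_io_iterator()
            for in_fname, out_fname in io_it:
                with open(in_fname) as fin, open(out_fname, 'w') as fout:
                    fout.write(fin.read() + text_to_append)

    if __name__ == '__main__':
        run_flow(AppendTextFlow())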

Module: workflows.io

IoInfoFlow([output_strategy, mix_names, …])

Methods

Workflow([output_strategy, mix_names, …])

Methods

load_nifti(fname[, return_img, …])

Module: workflows.mask

MaskFlow([output_strategy, mix_names, …])

Methods

Workflow([output_strategy, mix_names, …])

Methods

load_nifti(fname[, return_img, …])
save_nifti(fname, data, affine[, hdr])
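
The masking pattern behind MaskFlow reduces to a few lines (a sketch; 'dwi.nii.gz' is a hypothetical input file and the mean-intensity threshold is arbitrary):

    import numpy as np
    from dipy.io.image import load_nifti, save_nifti

    data, affine = load_nifti('dwi.nii.gz')       # hypothetical file name
    mask = (data > data.mean()).astype(np.uint8)  # crude intensity threshold
    save_nifti('mask.nii.gz', mask, affine)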

Module: workflows.multi_io

IOIterator([output_strategy, mix_names]) Create output filenames that work nicely with multiple input files from multiple directories (processing multiple subjects with one command)
basename_without_extension(fname)
common_start(sa, sb) Return the longest common substring from the beginning of sa and sb.
concatenate_inputs(multi_inputs) Concatenate list of inputs
connect_output_paths(inputs, out_dir, out_files) Generates a list of output files paths based on input files and output strategies.
get_args_default(func)
glob(pathname, *[, recursive]) Return a list of paths matching a pathname pattern.
io_iterator(inputs, out_dir, fnames[, …]) Creates an IOIterator from the parameters.
io_iterator_(frame, fnc[, output_strategy, …]) Creates an IOIterator using introspection.
slash_to_under(dir_str)

Module: workflows.reconst

ConstrainedSphericalDeconvModel(gtab, response)

Methods

CsaOdfModel(gtab, sh_order[, smooth, …]) Implementation of the Constant Solid Angle reconstruction method.
DiffusionKurtosisModel(gtab[, fit_method]) Class for the Diffusion Kurtosis Model
ReconstCSAFlow([output_strategy, mix_names, …])

Methods

ReconstCSDFlow([output_strategy, mix_names, …])

Methods

ReconstDkiFlow([output_strategy, mix_names, …])

Methods

ReconstDtiFlow([output_strategy, mix_names, …])

Methods

ReconstIvimFlow([output_strategy, …])

Methods

ReconstMAPMRIFlow([output_strategy, …])

Methods

TensorModel(gtab[, fit_method, return_S0_hat]) Diffusion Tensor
Workflow([output_strategy, mix_names, …])

Methods

IvimModel(gtab[, fit_method]) Selector function to switch between the 2-stage Levenberg-Marquardt based NLLS fitting method (also containing the linear fit): LM and the Variable Projections based fitting method: VarPro.
auto_response(gtab, data[, roi_center, …]) Automatic estimation of response function using FA.
axial_diffusivity(evals[, axis]) Axial Diffusivity (AD) of a diffusion tensor.
color_fa(fa, evecs) Color fractional anisotropy of diffusion tensor
fractional_anisotropy(evals[, axis]) Fractional anisotropy (FA) of a diffusion tensor.
geodesic_anisotropy(evals[, axis]) Geodesic anisotropy (GA) of a diffusion tensor.
get_mode(q_form) Mode (MO) of a diffusion tensor [1].
get_sphere([name]) Provide triangulated spheres
gradient_table(bvals[, bvecs, big_delta, …]) A general function for creating diffusion MR gradients.
literal_eval(node_or_string) Safely evaluate an expression node or a string containing a Python expression.
load_nifti(fname[, return_img, …])
lower_triangular(tensor[, b0]) Returns the six lower triangular values of the tensor and a dummy variable if b0 is not None
mean_diffusivity(evals[, axis]) Mean Diffusivity (MD) of a diffusion tensor.
peaks_from_model(model, data, sphere, …[, …]) Fit the model to the data and compute peaks and metrics
peaks_to_niftis(pam, fname_shm, fname_dirs, …) Save SH, directions, indices and values of peaks to Nifti.
radial_diffusivity(evals[, axis]) Radial Diffusivity (RD) of a diffusion tensor.
read_bvals_bvecs(fbvals, fbvecs) Read b-values and b-vectors from disk
save_nifti(fname, data, affine[, hdr])
save_peaks(fname, pam[, affine, verbose]) Save all important attributes of object PeaksAndMetrics in a PAM5 file (HDF5).
split_dki_param(dki_params) Extract the diffusion tensor eigenvalues, the diffusion tensor eigenvector matrix, and the 15 independent elements of the kurtosis tensor from the model parameters estimated from the DKI model
warn Issue a warning, or maybe ignore it or raise an exception.
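
To see how the reconstruction helpers above compose, here is a minimal DTI sketch on synthetic data (the gradient table and diffusivity are made up; real bvals/bvecs would come from read_bvals_bvecs and real data from load_nifti):

    import numpy as np
    from dipy.core.gradients import gradient_table
    from dipy.reconst.dti import TensorModel, fractional_anisotropy

    # Hypothetical acquisition: one b0 plus six unit directions at b=1000.
    bvals = np.array([0., 1000., 1000., 1000., 1000., 1000., 1000.])
    bvecs = np.array([[0., 0., 0.],
                      [1., 0., 0.], [0., 1., 0.], [0., 0., 1.],
                      [1., 1., 0.], [0., 1., 1.], [1., 0., 1.]])
    bvecs[1:] /= np.linalg.norm(bvecs[1:], axis=1)[:, None]
    gtab = gradient_table(bvals, bvecs)

    # Isotropic signal decay (D = 0.7e-3 mm^2/s), tiled over a small volume.
    signal = 100. * np.exp(-bvals * 0.0007)
    data = np.tile(signal, (4, 4, 4, 1))

    fit = TensorModel(gtab).fit(data)
    fa = fractional_anisotropy(fit.evals)  # ~0 everywhere: isotropic tensor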

Module: workflows.segment

LabelsBundlesFlow([output_strategy, …])

Methods

MedianOtsuFlow([output_strategy, mix_names, …])

Methods

RecoBundles(streamlines[, greater_than, …])

Methods

RecoBundlesFlow([output_strategy, …])

Methods

Workflow([output_strategy, mix_names, …])

Methods

load_nifti(fname[, return_img, …])
load_trk(filename[, lazy_load]) Loads tractogram files (*.trk)
median_otsu(input_volume[, median_radius, …]) Simple brain extraction tool method for images from DWI data.
save_nifti(fname, data, affine[, hdr])
time() Return the current time in seconds since the Epoch.
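
The brain-extraction step behind MedianOtsuFlow reduces to a single call (a sketch on random data; real input would come from load_nifti):

    import numpy as np
    from dipy.segment.mask import median_otsu

    vol = np.random.rand(32, 32, 32) * 100        # stand-in for a b0 volume
    masked_vol, mask = median_otsu(vol, median_radius=2, numpass=1)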

Module: workflows.stats

BundleAnalysisPopulationFlow([…])

Methods

LinearMixedModelsFlow([output_strategy, …])

Methods

SNRinCCFlow([output_strategy, mix_names, …])

Methods

TensorModel(gtab[, fit_method, return_S0_hat]) Diffusion Tensor
Workflow([output_strategy, mix_names, …])

Methods

binary_dilation(input[, structure, …]) Multi-dimensional binary dilation with the given structuring element.
bounding_box(vol) Compute the bounding box of nonzero intensity voxels in the volume.
bundle_analysis(model_bundle_folder, …[, …]) Applies statistical analysis on bundles and saves the results in a directory specified by out_dir.
gradient_table(bvals[, bvecs, big_delta, …]) A general function for creating diffusion MR gradients.
load_nifti(fname[, return_img, …])
median_otsu(input_volume[, median_radius, …]) Simple brain extraction tool method for images from DWI data.
optional_package(name[, trip_msg]) Return package-like thing and module setup for package name
read_bvals_bvecs(fbvals, fbvecs) Read b-values and b-vectors from disk
save_nifti(fname, data, affine[, hdr])
segment_from_cfa(tensor_fit, roi, threshold) Segment the cfa inside roi using the values from threshold as bounds.
simple_plot(file_name, title, x, y, xlabel, …) Saves a simple plot with the given x and y values

Module: workflows.tracking

ClosestPeakDirectionGetter A direction getter that returns the closest ODF peak to the previous tracking direction.
CmcTissueClassifier Continuous map criterion (CMC) stopping criterion from [1].
DeterministicMaximumDirectionGetter Return direction of a sphere with the highest probability mass function (pmf).
LocalFiberTrackingPAMFlow([output_strategy, …])

Methods

LocalTracking(direction_getter, …[, …])
PFTrackingPAMFlow([output_strategy, …])

Methods

ParticleFilteringTracking(direction_getter, …)
ProbabilisticDirectionGetter Randomly samples direction of a sphere based on probability mass function (pmf).
ThresholdTissueClassifier Stops tracking when the value of a scalar metric map (e.g., FA) falls below a given threshold.
Tractogram([streamlines, …]) Container for streamlines and their data information.
Workflow([output_strategy, mix_names, …])

Methods

load_nifti(fname[, return_img, …])
load_peaks(fname[, verbose]) Load a PeaksAndMetrics HDF5 file (PAM5)
save(tractogram, filename, **kwargs) Saves a tractogram to a file.
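
A toy local-tracking sketch showing how these pieces connect (the pmf here is random noise standing in for real fiber ODFs, which would normally come from load_peaks or a reconstruction model):

    import numpy as np
    from dipy.data import default_sphere
    from dipy.direction import DeterministicMaximumDirectionGetter
    from dipy.tracking import utils
    from dipy.tracking.local import LocalTracking, ThresholdTissueClassifier

    shape = (10, 10, 10)
    # Random probability mass functions over the sphere (toy stand-in).
    pmf = np.random.rand(*(shape + (len(default_sphere.vertices),)))

    dg = DeterministicMaximumDirectionGetter.from_pmf(
        pmf, max_angle=30., sphere=default_sphere)
    classifier = ThresholdTissueClassifier(np.ones(shape), 0.1)
    seeds = utils.seeds_from_mask(np.ones(shape, dtype=bool),
                                  density=1, affine=np.eye(4))

    streamlines = list(LocalTracking(dg, classifier, seeds,
                                     np.eye(4), step_size=0.5))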

Module: workflows.viz

HorizonFlow([output_strategy, mix_names, …])

Methods

Workflow([output_strategy, mix_names, …])

Methods

horizon(tractograms, images, cluster, …[, …]) Highly interactive visualization - invert the Horizon!
load_nifti(fname[, return_img, …])
load_tractogram(filename[, lazy_load]) Loads tractogram files (e.g., *.trk, *.tck)
pjoin(a, *p) Join two or more pathname components, inserting ‘/’ as needed.

Module: workflows.workflow

Workflow([output_strategy, mix_names, …])

Methods

io_iterator_(frame, fnc[, output_strategy, …]) Creates an IOIterator using introspection.

AffineMap

class dipy.workflows.align.AffineMap(affine, domain_grid_shape=None, domain_grid2world=None, codomain_grid_shape=None, codomain_grid2world=None)

Bases: object

Methods

get_affine() Returns the value of the transformation, not a reference!
set_affine(affine) Sets the affine transform (operating in physical space)
transform(image[, interp, image_grid2world, …]) Transforms the input image from co-domain to domain space
transform_inverse(image[, interp, …]) Transforms the input image from domain to co-domain space
__init__(affine, domain_grid_shape=None, domain_grid2world=None, codomain_grid_shape=None, codomain_grid2world=None)

AffineMap

Implements an affine transformation whose domain is given by domain_grid and domain_grid2world, and whose co-domain is given by codomain_grid and codomain_grid2world.

The actual transform is represented by the affine matrix, which operates in world coordinates. Therefore, to transform a moving image towards a static image, we first map each voxel (i,j,k) of the static image to world coordinates (x,y,z) by applying domain_grid2world. Then we apply the affine transform to (x,y,z) obtaining (x’, y’, z’) in the moving image’s world coordinates. Finally, (x’, y’, z’) is mapped to voxel coordinates (i’, j’, k’) in the moving image by multiplying (x’, y’, z’) by the inverse of codomain_grid2world. The codomain_grid_shape is used analogously to transform the static image towards the moving image when calling transform_inverse.

If the domain/co-domain information is not provided (None) then the sampling information needs to be specified each time transform or transform_inverse is called to transform images. Note that such sampling information is not necessary to transform points defined in physical space, such as streamlines.

Parameters:
affine : array, shape (dim + 1, dim + 1)

the matrix defining the affine transform, where dim is the dimension of the space this map operates in (2 for 2D images, 3 for 3D images). If None, then self represents the identity transformation.

domain_grid_shape : sequence, shape (dim,), optional

the shape of the default domain sampling grid. When transform is called to transform an image, the resulting image will have this shape, unless a different sampling information is provided. If None, then the sampling grid shape must be specified each time the transform method is called.

domain_grid2world : array, shape (dim + 1, dim + 1), optional

the grid-to-world transform associated with the domain grid. If None (the default), then the grid-to-world transform is assumed to be the identity.

codomain_grid_shape : sequence, shape (dim,), optional

the shape of the default co-domain sampling grid. When transform_inverse is called to transform an image, the resulting image will have this shape, unless a different sampling information is provided. If None (the default), then the sampling grid shape must be specified each time the transform_inverse method is called.

codomain_grid2world : array, shape (dim + 1, dim + 1), optional

the grid-to-world transform associated with the co-domain grid. If None (the default), then the grid-to-world transform is assumed to be the identity.
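
For instance, a minimal sketch with identity grid-to-world matrices and a made-up translation (the import path is the underlying module this class is re-exported from):

    import numpy as np
    from dipy.align.imaffine import AffineMap

    moving = np.random.rand(32, 32, 32)

    affine = np.eye(4)
    affine[:3, 3] = [1., 2., 3.]                  # arbitrary world-space shift

    affine_map = AffineMap(affine,
                           domain_grid_shape=moving.shape,
                           domain_grid2world=np.eye(4),
                           codomain_grid_shape=moving.shape,
                           codomain_grid2world=np.eye(4))
    warped = affine_map.transform(moving)         # sampled on the domain grid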

get_affine()

Returns the value of the transformation, not a reference!

Returns:
affine : ndarray

Copy of the transform, not a reference.

set_affine(affine)

Sets the affine transform (operating in physical space)

Also sets self.affine_inv - the inverse of affine, or None if there is no inverse.

Parameters:
affine : array, shape (dim + 1, dim + 1)

the matrix representing the affine transform operating in physical space. The domain and co-domain information remains unchanged. If None, then self represents the identity transformation.

transform(image, interp='linear', image_grid2world=None, sampling_grid_shape=None, sampling_grid2world=None, resample_only=False)

Transforms the input image from co-domain to domain space

By default, the transformed image is sampled at a grid defined by self.domain_shape and self.domain_grid2world. If such information was not provided then sampling_grid_shape is mandatory.

Parameters:
image : 2D or 3D array

the image to be transformed

interp : string, either ‘linear’ or ‘nearest’

the type of interpolation to be used, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor

image_grid2world : array, shape (dim + 1, dim + 1), optional

the grid-to-world transform associated with image. If None (the default), then the grid-to-world transform is assumed to be the identity.

sampling_grid_shape : sequence, shape (dim,), optional

the shape of the grid where the transformed image must be sampled. If None (the default), then self.codomain_shape is used instead (which must have been set at initialization, otherwise an exception will be raised).

sampling_grid2world : array, shape (dim + 1, dim + 1), optional

the grid-to-world transform associated with the sampling grid (specified by sampling_grid_shape, or by default self.codomain_shape). If None (the default), then the grid-to-world transform is assumed to be the identity.

resample_only : Boolean, optional

If False (the default) the affine transform is applied normally. If True, then the affine transform is not applied, and the input image is just re-sampled on the domain grid of this transform.

Returns:
transformed : array, shape sampling_grid_shape or self.codomain_shape

the transformed image, sampled at the requested grid

transform_inverse(image, interp='linear', image_grid2world=None, sampling_grid_shape=None, sampling_grid2world=None, resample_only=False)

Transforms the input image from domain to co-domain space

By default, the transformed image is sampled at a grid defined by self.codomain_shape and self.codomain_grid2world. If such information was not provided then sampling_grid_shape is mandatory.

Parameters:
image : 2D or 3D array

the image to be transformed

interp : string, either ‘linear’ or ‘nearest’

the type of interpolation to be used, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor

image_grid2world : array, shape (dim + 1, dim + 1), optional

the grid-to-world transform associated with image. If None (the default), then the grid-to-world transform is assumed to be the identity.

sampling_grid_shape : sequence, shape (dim,), optional

the shape of the grid where the transformed image must be sampled. If None (the default), then self.codomain_shape is used instead (which must have been set at initialization, otherwise an exception will be raised).

sampling_grid2world : array, shape (dim + 1, dim + 1), optional

the grid-to-world transform associated with the sampling grid (specified by sampling_grid_shape, or by default self.codomain_shape). If None (the default), then the grid-to-world transform is assumed to be the identity.

resample_only : Boolean, optional

If False (the default) the affine transform is applied normally. If True, then the affine transform is not applied, and the input image is just re-sampled on the domain grid of this transform.

Returns:
transformed : array, shape sampling_grid_shape or self.codomain_shape

the transformed image, sampled at the requested grid

AffineRegistration

class dipy.workflows.align.AffineRegistration(metric=None, level_iters=None, sigmas=None, factors=None, method='L-BFGS-B', ss_sigma_factor=None, options=None, verbosity=1)

Bases: object

Methods

optimize(static, moving, transform, params0) Starts the optimization process
__init__(metric=None, level_iters=None, sigmas=None, factors=None, method='L-BFGS-B', ss_sigma_factor=None, options=None, verbosity=1)

Initializes an instance of the AffineRegistration class

Parameters:
metric : None or object, optional

an instance of a metric. The default is None, implying the Mutual Information metric with default settings.

level_iters : sequence, optional

the number of iterations at each scale of the scale space. level_iters[0] corresponds to the coarsest scale and level_iters[n-1] to the finest, where n is the length of the sequence. By default, a 3-level scale space with the iteration sequence [10000, 1000, 100] will be used.

sigmas : sequence of floats, optional

custom smoothing parameter to build the scale space (one parameter for each scale). By default, the sequence of sigmas will be [3, 1, 0].

factors : sequence of floats, optional

custom scale factors to build the scale space (one factor for each scale). By default, the sequence of factors will be [4, 2, 1].

method : string, optional

optimization method to be used. If Scipy version < 0.12, then only L-BFGS-B is available. Otherwise, method can be any gradient-based method available in dipy.core.optimize: CG, BFGS, Newton-CG, dogleg or trust-ncg. The default is ‘L-BFGS-B’.

ss_sigma_factor : float, optional

If None, this parameter is not used and an isotropic scale space with the given factors and sigmas will be built. If not None, an anisotropic scale space will be used by automatically selecting the smoothing sigmas along each axis according to the voxel dimensions of the given image. The ss_sigma_factor is used to scale the automatically computed sigmas. For example, in the isotropic case, the sigma of the kernel will be factor * (2^i), where i = 1, 2, ..., n_scales - 1 is the scale (the finest resolution image, i = 0, is never smoothed). The default is None.

options : dict, optional

extra optimization options. The default is None, implying no extra options are passed to the optimizer.

verbosity : int (one of {0, 1, 2, 3}), optional

Set the verbosity level of the algorithm:
0 : do not print anything.
1 : print information about the current status of the algorithm.
2 : print high-level information about the components involved in the registration, which can be used to detect a failing component.
3 : print as much information as possible to isolate the cause of a bug.
Default: 1.

docstring_addendum = 'verbosity: int (one of {0, 1, 2, 3}), optional\n Set the verbosity level of the algorithm:\n 0 : do not print anything\n 1 : print information about the current status of the algorithm\n 2 : print high level information of the components involved in\n the registration that can be used to detect a failing\n component.\n 3 : print as much information as possible to isolate the cause\n of a bug.\n Default: 1\n '
optimize(static, moving, transform, params0, static_grid2world=None, moving_grid2world=None, starting_affine=None, ret_metric=False)

Starts the optimization process

Parameters:
static : 2D or 3D array

the image to be used as reference during optimization.

moving : 2D or 3D array

the image to be used as “moving” during optimization. It is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘starting_affine’ matrix

transform : instance of Transform

the transformation with respect to whose parameters the gradient must be computed

params0 : array, shape (n,)

parameters from which to start the optimization. If None, the optimization will start at the identity transform. n is the number of parameters of the specified transformation.

static_grid2world : array, shape (dim+1, dim+1), optional

the voxel-to-space transformation associated with the static image. The default is None, implying the transform is the identity.

moving_grid2world : array, shape (dim+1, dim+1), optional

the voxel-to-space transformation associated with the moving image. The default is None, implying the transform is the identity.

starting_affine : string, matrix, or None, optional

If string, one of:
‘mass’: align centers of gravity
‘voxel-origin’: align physical coordinates of voxel (0,0,0)
‘centers’: align physical coordinates of central voxels
If matrix: array, shape (dim+1, dim+1).
If None: start from the identity.

The default is None.

ret_metric : boolean, optional

if True, also returns the parameters computed for measuring the similarity between the images, i.e., the optimal parameters and the distance between the images (default False).

Returns:
affine_map : instance of AffineMap

the resulting affine transformation

xopt : array, shape (n,)

the optimal parameters (translation, rotation, shear, etc.); only returned when ret_metric is True

fopt : float

the value of the similarity metric at the optimal parameters; only returned when ret_metric is True
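
A rough sketch of how these pieces fit together, on toy synthetic images (in practice static and moving come from load_nifti and the grid2world matrices from the image headers):

    import numpy as np
    from dipy.align.imaffine import AffineRegistration, MutualInformationMetric
    from dipy.align.transforms import TranslationTransform3D

    # The moving image is the static image shifted by one voxel.
    static = np.zeros((32, 32, 32))
    static[8:24, 8:24, 8:24] = 1.0
    moving = np.roll(static, 1, axis=0)

    metric = MutualInformationMetric(nbins=32, sampling_proportion=None)
    affreg = AffineRegistration(metric=metric, level_iters=[100, 10],
                                sigmas=[1.0, 0.0], factors=[2, 1])

    affine_map = affreg.optimize(static, moving, TranslationTransform3D(),
                                 params0=None)
    warped = affine_map.transform(moving)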

AffineTransform3D

class dipy.workflows.align.AffineTransform3D

Bases: dipy.align.transforms.Transform

Methods

get_identity_parameters Parameter values corresponding to the identity transform
jacobian Jacobian function of this transform
param_to_matrix Matrix representation of this transform with the given parameters
get_dim  
get_number_of_parameters  
__init__()

Affine transform in 3D

ApplyTransformFlow

class dipy.workflows.align.ApplyTransformFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide command-line parameters.
get_sub_runs() Return no sub-runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(static_image_files, moving_image_files, …)
Parameters:
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

run(static_image_files, moving_image_files, transform_map_file, transform_type='affine', out_dir='', out_file='transformed.nii.gz')
Parameters:
static_image_files : string

Path of the static image file.

moving_image_files : string

Path of the moving image(s). It can be a single image or a folder containing multiple images.

transform_map_file : string

For the affine case, it should be a text (*.txt) file containing the affine matrix. For the diffeomorphic case, it should be a NIfTI file containing the mapping displacement field in each voxel, with shape (x, y, z, 3, 2)

transform_type : string, optional

Select the transformation type to apply, either ‘affine’ or ‘diffeomorphic’ (default ‘affine’).

out_dir : string, optional

Directory to save the transformed files (default ‘’).

out_file : string, optional

Name of the transformed file (default ‘transformed.nii.gz’). It is recommended to use the flag --mix_names to prevent the output files from being overwritten.
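
In code, the flow is invoked like any other workflow (file names here are hypothetical; the same call is exposed on the command line through the flow runner):

    from dipy.workflows.align import ApplyTransformFlow

    flow = ApplyTransformFlow()
    flow.run('static.nii.gz', 'moving.nii.gz', 'affine.txt',
             transform_type='affine', out_dir='out')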

CCMetric

class dipy.workflows.align.CCMetric(dim, sigma_diff=2.0, radius=4)

Bases: dipy.align.metrics.SimilarityMetric

Methods

compute_backward() Computes one step bringing the static image towards the moving.
compute_forward() Computes one step bringing the moving image towards the static.
free_iteration() Frees the resources allocated during initialization
get_energy() Numerical value assigned by this metric to the current image pair
initialize_iteration() Prepares the metric to compute one displacement field iteration.
set_levels_above(levels) Informs the metric how many pyramid levels are above the current one
set_levels_below(levels) Informs the metric how many pyramid levels are below the current one
set_moving_image(moving_image, …) Sets the moving image being compared against the static one.
set_static_image(static_image, …) Sets the static image being compared against the moving one.
use_moving_image_dynamics(…) This is called by the optimizer just after setting the moving image
use_static_image_dynamics(…) This is called by the optimizer just after setting the static image.
__init__(dim, sigma_diff=2.0, radius=4)

Normalized Cross-Correlation Similarity metric.

Parameters:
dim : int (either 2 or 3)

the dimension of the image domain

sigma_diff : float

the standard deviation of the Gaussian smoothing kernel to be applied to the update field at each iteration

radius : int

the radius of the squared (cubic) neighborhood at each voxel to be considered to compute the cross correlation

compute_backward()

Computes one step bringing the static image towards the moving.

Computes the update displacement field to be used for registration of the static image towards the moving image

compute_forward()

Computes one step bringing the moving image towards the static.

Computes the update displacement field to be used for registration of the moving image towards the static image

free_iteration()

Frees the resources allocated during initialization

get_energy()

Numerical value assigned by this metric to the current image pair

Returns the Cross Correlation (data term) energy computed at the largest iteration

initialize_iteration()

Prepares the metric to compute one displacement field iteration.

Pre-computes the cross-correlation factors for efficient computation of the gradient of the Cross Correlation w.r.t. the displacement field. It also pre-computes the image gradients in the physical space by re-orienting the gradients in the voxel space using the corresponding affine transformations.
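
In practice, CCMetric is plugged into SymmetricDiffeomorphicRegistration; a minimal sketch on toy binary volumes (real inputs would come from load_nifti, together with their grid2world matrices):

    import numpy as np
    from dipy.align.imwarp import SymmetricDiffeomorphicRegistration
    from dipy.align.metrics import CCMetric

    static = np.zeros((32, 32, 32))
    static[10:22, 10:22, 10:22] = 1.0
    moving = np.roll(static, 2, axis=1)           # shifted copy to register

    metric = CCMetric(3, sigma_diff=2.0, radius=4)
    sdr = SymmetricDiffeomorphicRegistration(metric, level_iters=[10, 5])
    mapping = sdr.optimize(static, moving)        # returns a DiffeomorphicMap
    warped = mapping.transform(moving)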

DiffeomorphicMap

class dipy.workflows.align.DiffeomorphicMap(dim, disp_shape, disp_grid2world=None, domain_shape=None, domain_grid2world=None, codomain_shape=None, codomain_grid2world=None, prealign=None)

Bases: object

Methods

allocate() Creates a zero displacement field
compute_inversion_error() Inversion error of the displacement fields
expand_fields(expand_factors, new_shape) Expands the displacement fields from current shape to new_shape
get_backward_field() Deformation field to transform an image in the backward direction
get_forward_field() Deformation field to transform an image in the forward direction
get_simplified_transform() Constructs a simplified version of this Diffeomorphic Map
interpret_matrix(obj) Try to interpret obj as a matrix
inverse() Inverse of this DiffeomorphicMap instance
shallow_copy() Shallow copy of this DiffeomorphicMap instance
transform(image[, interpolation, …]) Warps an image in the forward direction
transform_inverse(image[, interpolation, …]) Warps an image in the backward direction
warp_endomorphism(phi) Composition of this DiffeomorphicMap with a given endomorphism
__init__(dim, disp_shape, disp_grid2world=None, domain_shape=None, domain_grid2world=None, codomain_shape=None, codomain_grid2world=None, prealign=None)

DiffeomorphicMap

Implements a diffeomorphic transformation on the physical space. The deformation fields encoding the direct and inverse transformations share the same domain discretization (both the discretization grid shape and voxel-to-space matrix). The input coordinates (physical coordinates) are first aligned using prealign, and then displaced using the corresponding vector field interpolated at the aligned coordinates.

Parameters:
dim : int, 2 or 3

the transformation’s dimension

disp_shape : array, shape (dim,)

the number of slices (if 3D), rows and columns of the deformation field’s discretization

disp_grid2world : array, shape (dim+1, dim+1)

the voxel-to-space transform between the deformation field’s grid and space

domain_shape : array, shape (dim,)

the number of slices (if 3D), rows and columns of the default discretization of this map’s domain

domain_grid2world : array, shape (dim+1, dim+1)

the default voxel-to-space transformation between this map’s discretization and physical space

codomain_shape : array, shape (dim,)

the number of slices (if 3D), rows and columns of the images that are ‘normally’ warped using this transformation in the forward direction (this will provide default transformation parameters to warp images under this transformation). By default, we assume that the inverse transformation is ‘normally’ used to warp images with the same discretization and voxel-to-space transformation as the deformation field grid.

codomain_grid2world : array, shape (dim+1, dim+1)

the voxel-to-space transformation of images that are ‘normally’ warped using this transformation (in the forward direction).

prealign : array, shape (dim+1, dim+1)

the linear transformation to be applied to align input images to the reference space before warping under the deformation field.

allocate()

Creates a zero displacement field

Creates a zero displacement field (the identity transformation).

compute_inversion_error()

Inversion error of the displacement fields

Estimates the inversion error of the displacement fields by computing statistics of the residual vectors obtained after composing the forward and backward displacement fields.

Returns:
residual : array, shape (R, C) or (S, R, C)

the displacement field resulting from composing the forward and backward displacement fields of this transformation (the residual should be zero for a perfect diffeomorphism)

stats : array, shape (3,)

statistics from the norms of the vectors of the residual displacement field: maximum, mean and standard deviation

Notes

Since the forward and backward displacement fields have the same discretization, the final composition is given by

comp[i] = forward[i + Dinv * backward[i]]

where Dinv is the space-to-grid transformation of the displacement fields

expand_fields(expand_factors, new_shape)

Expands the displacement fields from current shape to new_shape

Up-samples the discretization of the displacement fields to be of new_shape shape.

Parameters:
expand_factors : array, shape (dim,)

the factors scaling current spacings (voxel sizes) to spacings in the expanded discretization.

new_shape : array, shape (dim,)

the shape of the arrays holding the up-sampled discretization

get_backward_field()

Deformation field to transform an image in the backward direction

Returns the deformation field that must be used to warp an image under this transformation in the backward direction (note the ‘is_inverse’ flag).

get_forward_field()

Deformation field to transform an image in the forward direction

Returns the deformation field that must be used to warp an image under this transformation in the forward direction (note the ‘is_inverse’ flag).

get_simplified_transform()

Constructs a simplified version of this Diffeomorphic Map

The simplified version incorporates the pre-align transform, as well as the domain and codomain affine transforms, into the displacement field. The resulting transformation may be regarded as operating on the image spaces given by the domain and codomain discretization. As a result, self.prealign, self.disp_grid2world, self.domain_grid2world and self.codomain_grid2world will be None (denoting Identity) in the resulting diffeomorphic map.

interpret_matrix(obj)

Try to interpret obj as a matrix

Some operations are performed faster if we know in advance if a matrix is the identity (so we can skip the actual matrix-vector multiplication). This function returns None if the given object is None or the ‘identity’ string. It returns the same object if it is a numpy array. It raises an exception otherwise.

Parameters:
obj : object

any object

Returns:
obj : object

None if obj is None or the ‘identity’ string; the same object if obj is a numpy array.

inverse()

Inverse of this DiffeomorphicMap instance

Returns a diffeomorphic map object representing the inverse of this transformation. The internal arrays are not copied but just referenced.

Returns:
inv : DiffeomorphicMap object

the inverse of this diffeomorphic map.

shallow_copy()

Shallow copy of this DiffeomorphicMap instance

Creates a shallow copy of this diffeomorphic map (the arrays are not copied but just referenced)

Returns:
new_map : DiffeomorphicMap object

the shallow copy of this diffeomorphic map

transform(image, interpolation='linear', image_world2grid=None, out_shape=None, out_grid2world=None)

Warps an image in the forward direction

Transforms the input image under this transformation in the forward direction. It uses the “is_inverse” flag to switch between “forward” and “backward” (if is_inverse is False, then transform(…) warps the image forwards, else it warps the image backwards).

Parameters:
image : array, shape (s, r, c) if dim = 3 or (r, c) if dim = 2

the image to be warped under this transformation in the forward direction

interpolation : string, either ‘linear’ or ‘nearest’

the type of interpolation to be used for warping, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor

image_world2grid : array, shape (dim+1, dim+1)

the transformation bringing world (space) coordinates to voxel coordinates of the image given as input

out_shape : array, shape (dim,)

the number of slices, rows and columns of the desired warped image

out_grid2world : the transformation bringing voxel coordinates of the

warped image to physical space

Returns:
warped : array, shape = out_shape or self.codomain_shape if None

the warped image under this transformation in the forward direction

Notes

See _warp_forward and _warp_backward documentation for further information.

transform_inverse(image, interpolation='linear', image_world2grid=None, out_shape=None, out_grid2world=None)

Warps an image in the backward direction

Transforms the input image under this transformation in the backward direction. It uses the “is_inverse” flag to switch between “forward” and “backward” (if is_inverse is False, then transform_inverse(…) warps the image backwards, else it warps the image forwards)

Parameters:
image : array, shape (s, r, c) if dim = 3 or (r, c) if dim = 2

the image to be warped under this transformation in the forward direction

interpolation : string, either ‘linear’ or ‘nearest’

the type of interpolation to be used for warping, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor

image_world2grid : array, shape (dim+1, dim+1)

the transformation bringing world (space) coordinates to voxel coordinates of the image given as input

out_shape : array, shape (dim,)

the number of slices, rows and columns of the desired warped image

out_grid2world : the transformation bringing voxel coordinates of the

warped image to physical space

Returns:
warped : array, shape = out_shape or self.codomain_shape if None

warped image under this transformation in the backward direction

Notes

See _warp_forward and _warp_backward documentation for further information.

warp_endomorphism(phi)

Composition of this DiffeomorphicMap with a given endomorphism

Creates a new DiffeomorphicMap C with the same properties as self and composes its displacement fields with phi’s corresponding fields. The resulting diffeomorphism is of the form C(x) = phi(self(x)) with inverse C^{-1}(y) = self^{-1}(phi^{-1}(y)). We assume that phi is an endomorphism with the same discretization and domain affine as self to ensure that the composition inherits self’s properties (we also assume that the pre-aligning matrix of phi is None or identity).

Parameters:
phi : DiffeomorphicMap object

the endomorphism to be warped by this diffeomorphic map

Returns:
composition : the composition of this diffeomorphic map with the

endomorphism given as input

Notes

The problem with our current representation of a DiffeomorphicMap is that the set of Diffeomorphism that can be represented this way (a pre-aligning matrix followed by a non-linear endomorphism given as a displacement field) is not closed under the composition operation.

Supporting a general DiffeomorphicMap class, closed under composition, may be extremely costly computationally, and the kind of transformations we actually need for Avants’ mid-point algorithm (SyN) are much simpler.
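
Continuing the SymmetricDiffeomorphicRegistration sketch from the CCMetric section, a fitted DiffeomorphicMap is typically used like this (mapping, static and moving are assumed to come from that example):

    # 'mapping' is the DiffeomorphicMap returned by sdr.optimize(static, moving).
    warped_moving = mapping.transform(moving)            # forward warp
    warped_static = mapping.transform_inverse(static)    # backward warp

    # Sanity check: compose forward and backward fields; stats holds
    # (maximum, mean, standard deviation) of the residual vector norms.
    residual, stats = mapping.compute_inversion_error()
    print('max residual norm: %f' % stats[0])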

EMMetric

class dipy.workflows.align.EMMetric(dim, smooth=1.0, inner_iter=5, q_levels=256, double_gradient=True, step_type='gauss_newton')

Bases: dipy.align.metrics.SimilarityMetric

Methods

compute_backward() Computes one step bringing the static image towards the moving.
compute_demons_step([forward_step]) Demons step for EM metric
compute_forward() Computes one step bringing the moving image towards the static.
compute_gauss_newton_step([forward_step]) Computes the Gauss-Newton energy minimization step
free_iteration() Frees the resources allocated during initialization
get_energy() The numerical value assigned by this metric to the current image pair
initialize_iteration() Prepares the metric to compute one displacement field iteration.
set_levels_above(levels) Informs the metric how many pyramid levels are above the current one
set_levels_below(levels) Informs the metric how many pyramid levels are below the current one
set_moving_image(moving_image, …) Sets the moving image being compared against the static one.
set_static_image(static_image, …) Sets the static image being compared against the moving one.
use_moving_image_dynamics(…) This is called by the optimizer just after setting the moving image.
use_static_image_dynamics(…) This is called by the optimizer just after setting the static image.
__init__(dim, smooth=1.0, inner_iter=5, q_levels=256, double_gradient=True, step_type='gauss_newton')

Expectation-Maximization Metric

Similarity metric based on the Expectation-Maximization algorithm to handle multi-modal images. The transfer function is modeled as a set of hidden random variables that are estimated at each iteration of the algorithm.

Parameters:
dim : int (either 2 or 3)

the dimension of the image domain

smooth : float

smoothness parameter, the larger the value the smoother the deformation field

inner_iter : int

number of iterations to be performed at each level of the multi-resolution Gauss-Seidel optimization algorithm (this is not the number of steps per Gaussian Pyramid level, that parameter must be set for the optimizer, not the metric)

q_levels : int

number of quantization levels (equal to the number of hidden variables in the EM algorithm)

double_gradient : boolean

if True, the gradient of the expected static image under the moving modality will be added to the gradient of the moving image, similarly, the gradient of the expected moving image under the static modality will be added to the gradient of the static image.

step_type : string (‘gauss_newton’, ‘demons’)

the optimization schedule to be used in the multi-resolution Gauss-Seidel optimization algorithm (the Gauss-Seidel iterations are not used if the Demons step is selected)

compute_backward()

Computes one step bringing the static image towards the moving.

Computes the update displacement field to be used for registration of the static image towards the moving image

compute_demons_step(forward_step=True)

Demons step for EM metric

Parameters:
forward_step : boolean

if True, computes the Demons step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)

Returns:
displacement : array, shape (R, C, 2) or (S, R, C, 3)

the Demons step

compute_forward()

Computes one step bringing the moving image towards the static.

Computes the forward update field to register the moving image towards the static image in a gradient-based optimization algorithm

compute_gauss_newton_step(forward_step=True)

Computes the Gauss-Newton energy minimization step

Computes the Newton step to minimize this energy, i.e., minimizes the linearized energy function with respect to the regularized displacement field (this step does not require post-smoothing, as opposed to the demons step, which does not include regularization). To accelerate convergence we use the multi-grid Gauss-Seidel algorithm proposed by Bruhn and Weickert [Bruhn05].

Parameters:
forward_step : boolean

if True, computes the Newton step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)

Returns:
displacement : array, shape (R, C, 2) or (S, R, C, 3)

the Newton step

References

[Bruhn05] Andres Bruhn and Joachim Weickert, “Towards ultimate motion estimation: combining highest accuracy with real-time performance”, 10th IEEE International Conference on Computer Vision (ICCV), 2005.

free_iteration()

Frees the resources allocated during initialization

get_energy()

The numerical value assigned by this metric to the current image pair

Returns the EM (data term) energy computed at the largest iteration

initialize_iteration()

Prepares the metric to compute one displacement field iteration.

Pre-computes the transfer functions (hidden random variables) and variances of the estimators. Also pre-computes the gradient of both input images. Note that once the images are transformed to the opposite modality, the gradient of the transformed images can be used with the gradient of the corresponding modality in the same fashion as diff-demons does for mono-modality images. If the flag self.use_double_gradient is True these gradients are averaged.

use_moving_image_dynamics(original_moving_image, transformation)

This is called by the optimizer just after setting the moving image.

EMMetric takes advantage of the image dynamics by computing the current moving image mask from the original_moving_image mask (warped by nearest neighbor interpolation)

Parameters:
original_moving_image : array, shape (R, C) or (S, R, C)

the original moving image from which the current moving image was generated, the current moving image is the one that was provided via ‘set_moving_image(…)’, which may not be the same as the original moving image but a warped version of it.

transformation : DiffeomorphicMap object

the transformation that was applied to the original_moving_image to generate the current moving image

use_static_image_dynamics(original_static_image, transformation)

This is called by the optimizer just after setting the static image.

EMMetric takes advantage of the image dynamics by computing the current static image mask from the original_static_image mask (warped by nearest neighbor interpolation)

Parameters:
original_static_image : array, shape (R, C) or (S, R, C)

the original static image from which the current static image was generated, the current static image is the one that was provided via ‘set_static_image(…)’, which may not be the same as the original static image but a warped version of it (even the static image changes during Symmetric Normalization, not only the moving one).

transformation : DiffeomorphicMap object

the transformation that was applied to the original_static_image to generate the current static image

ImageRegistrationFlow

class dipy.workflows.align.ImageRegistrationFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

The registration workflow is organized as a collection of different functions. The user may choose to run only one type of registration (such as center of mass or rigid body registration only).

Alternatively, a registration can be done in a progressive manner. For example, using affine registration with progressive set to ‘True’ will involve center of mass, translation, rigid body and full affine registration, whereas when progressive is False the registration will include only center of mass and affine registration. The progressive registration will be slower but will improve the quality.

This can be controlled by using the progressive flag (True by default).

Methods

affine(static, static_grid2world, moving, …) Function for full affine registration.
center_of_mass(static, static_grid2world, …) Function for the center of mass based image registration.
get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide command-line parameters.
get_sub_runs() Return no sub-runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
perform_transformation(static, …) Function to apply the transformation.
rigid(static, static_grid2world, moving, …) Function for rigid body based image registration.
run(static_img_files, moving_img_files[, …])
Parameters:
translate(static, static_grid2world, moving, …) Function for translation based registration.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

affine(static, static_grid2world, moving, moving_grid2world, affreg, params0, progressive)

Function for full affine registration.

Parameters:
static : 2D or 3D array

the image to be used as reference during optimization.

static_grid2world : array, shape (dim+1, dim+1), optional

the voxel-to-space transformation associated with the static image. The default is None, implying the transform is the identity.

moving : 2D or 3D array

the image to be used as “moving” during optimization. It is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘starting_affine’ matrix

moving_grid2world : array, shape (dim+1, dim+1), optional

the voxel-to-space transformation associated with the moving image. The default is None, implying the transform is the identity.

affreg : An object of the image registration class.
params0 : array, shape (n,)

parameters from which to start the optimization. If None, the optimization will start at the identity transform. n is the number of parameters of the specified transformation.

progressive : boolean

Flag to enable or disable the progressive registration (default True).

center_of_mass(static, static_grid2world, moving, moving_grid2world)

Function for the center of mass based image registration.

Parameters:
static : 2D or 3D array

the image to be used as reference during optimization.

static_grid2world : array, shape (dim+1, dim+1), optional

the voxel-to-space transformation associated with the static image. The default is None, implying the transform is the identity.

moving : 2D or 3D array

the image to be used as “moving” during optimization. It is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘starting_affine’ matrix

moving_grid2world : array, shape (dim+1, dim+1), optional

the voxel-to-space transformation associated with the moving image. The default is None, implying the transform is the identity.

perform_transformation(static, static_grid2world, moving, moving_grid2world, affreg, params0, transform, affine)

Function to apply the transformation.

Parameters:
static : 2D or 3D array

the image to be used as reference during optimization.

static_grid2world : array, shape (dim+1, dim+1), optional

the voxel-to-space transformation associated with the static image. The default is None, implying the transform is the identity.

moving : 2D or 3D array

the image to be used as “moving” during optimization. It is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘starting_affine’ matrix

moving_grid2world : array, shape (dim+1, dim+1), optional

the voxel-to-space transformation associated with the moving image. The default is None, implying the transform is the identity.

affreg : An object of the image registration class.
params0 : array, shape (n,)

parameters from which to start the optimization. If None, the optimization will start at the identity transform. n is the number of parameters of the specified transformation.

transform : An instance of transform type.
affine : Affine matrix to be used as starting affine
rigid(static, static_grid2world, moving, moving_grid2world, affreg, params0, progressive)

Function for rigid body based image registration.

Parameters:
static : 2D or 3D array

the image to be used as reference during optimization.

static_grid2world : array, shape (dim+1, dim+1), optional

the voxel-to-space transformation associated with the static image. The default is None, implying the transform is the identity.

moving : 2D or 3D array

the image to be used as “moving” during optimization. It is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘starting_affine’ matrix

moving_grid2world : array, shape (dim+1, dim+1), optional

the voxel-to-space transformation associated with the moving image. The default is None, implying the transform is the identity.

affreg : An object of the image registration class.
params0 : array, shape (n,)

parameters from which to start the optimization. If None, the optimization will start at the identity transform. n is the number of parameters of the specified transformation.

progressive : boolean

Flag to enable or disable the progressive registration (default True).

run(static_img_files, moving_img_files, transform='affine', nbins=32, sampling_prop=None, metric='mi', level_iters=[10000, 1000, 100], sigmas=[3.0, 1.0, 0.0], factors=[4, 2, 1], progressive=True, save_metric=False, out_dir='', out_moved='moved.nii.gz', out_affine='affine.txt', out_quality='quality_metric.txt')
Parameters:
static_img_files : string

Path to the static image file.

moving_img_files : string

Path to the moving image file.

transform : string, optional

com: center of mass, trans: translation, rigid: rigid body, affine: full affine including translation, rotation, shearing and scaling (default ‘affine’).

nbins : int, optional

Number of bins to discretize the joint and marginal PDF (default 32).

sampling_prop : int, optional

Number in [0, 100] giving the percentage of voxels used for calculating the PDF. ‘None’ implies all voxels (default ‘None’).

metric : string, optional

Similarity metric for gathering mutual information (default ‘mi’, the Mutual Information metric).

level_iters : variable int, optional

The number of iterations at each scale of the scale space. level_iters[0] corresponds to the coarsest scale and level_iters[n-1] to the finest, where n is the length of the sequence. By default, a 3-level scale space with the iteration sequence [10000, 1000, 100] will be used.

sigmas : variable floats, optional

Custom smoothing parameter to build the scale space (one parameter for each scale). By default, the sequence of sigmas will be [3, 1, 0].

factors : variable floats, optional

Custom scale factors to build the scale space (one factor for each scale). By default, the sequence of factors will be [4, 2, 1].

progressive : boolean, optional

Enable/Disable the progressive registration (default ‘True’).

save_metric : boolean, optional

If True, the quality assessment metric is saved in ‘quality_metric.txt’ (default ‘False’).

out_dir : string, optional

Directory to save the transformed image and the affine matrix (default ‘’).

out_moved : string, optional

Name for the saved transformed image (default ‘moved.nii.gz’).

out_affine : string, optional

Name for the saved affine matrix (default ‘affine.txt’).

out_quality : string, optional

Name of the file containing the saved quality metric (default ‘quality_metric.txt’).
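
A typical invocation of the workflow (file names are hypothetical; the same parameters are exposed as command-line flags through the flow runner):

    from dipy.workflows.align import ImageRegistrationFlow

    flow = ImageRegistrationFlow()
    flow.run('static.nii.gz', 'moving.nii.gz',
             transform='rigid', progressive=True,
             out_dir='out', out_moved='moved.nii.gz',
             out_affine='rigid_affine.txt')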

translate(static, static_grid2world, moving, moving_grid2world, affreg, params0)

Function for translation based registration.

Parameters:
static : 2D or 3D array

the image to be used as reference during optimization.

static_grid2world : array, shape (dim+1, dim+1), optional

the voxel-to-space transformation associated with the static image. The default is None, implying the transform is the identity.

moving : 2D or 3D array

the image to be used as “moving” during optimization. It is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘starting_affine’ matrix

moving_grid2world : array, shape (dim+1, dim+1), optional

the voxel-to-space transformation associated with the moving image. The default is None, implying the transform is the identity.

affreg : An object of the image registration class.
params0 : array, shape (n,)

parameters from which to start the optimization. If None, the optimization will start at the identity transform. n is the number of parameters of the specified transformation.

MutualInformationMetric

class dipy.workflows.align.MutualInformationMetric(nbins=32, sampling_proportion=None)

Bases: object

Methods

distance(params) Numeric value of the negative Mutual Information
distance_and_gradient(params) Numeric value of the metric and its gradient at given parameters
gradient(params) Numeric value of the metric’s gradient at the given parameters
setup(transform, static, moving[, …]) Prepares the metric to compute intensity densities and gradients
__init__(nbins=32, sampling_proportion=None)

Initializes an instance of the Mutual Information metric

This class implements the methods required by Optimizer to drive the registration process.

Parameters:
nbins : int, optional

the number of bins to be used for computing the intensity histograms. The default is 32.

sampling_proportion : None or float in interval (0, 1], optional

There are two types of sampling: dense and sparse. Dense sampling uses all voxels for estimating the (joint and marginal) intensity histograms, while sparse sampling uses a subset of them. If sampling_proportion is None, then dense sampling is used. If sampling_proportion is a floating point value in (0,1] then sparse sampling is used, where sampling_proportion specifies the proportion of voxels to be used. The default is None.

Notes

Since we use linear interpolation, images are not, in general, differentiable at exact voxel coordinates, but they are differentiable between voxel coordinates. When using sparse sampling, selected voxels are slightly moved by adding a small random displacement within one voxel to prevent sampling points from being located exactly at voxel coordinates. When using dense sampling, this random displacement is not applied.

distance(params)

Numeric value of the negative Mutual Information

We need to change the sign so we can use standard minimization algorithms.

Parameters:
params : array, shape (n,)

the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called), n is the number of parameters of the transform

Returns:
neg_mi : float

the negative mutual information of the input images after transforming the moving image by the currently set transform with params parameters

distance_and_gradient(params)

Numeric value of the metric and its gradient at given parameters

Parameters:
params : array, shape (n,)

the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called), n is the number of parameters of the transform

Returns:
neg_mi : float

the negative mutual information of the input images after transforming the moving image by the currently set transform with params parameters

neg_mi_grad : array, shape (n,)

the gradient of the negative Mutual Information

gradient(params)

Numeric value of the metric’s gradient at the given parameters

Parameters:
params : array, shape (n,)

the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called), n is the number of parameters of the transform

Returns:
grad : array, shape (n,)

the gradient of the negative Mutual Information

setup(transform, static, moving, static_grid2world=None, moving_grid2world=None, starting_affine=None)

Prepares the metric to compute intensity densities and gradients

The histograms will be set up to compute probability densities of intensities within the minimum and maximum values of static and moving

Parameters:
transform : instance of Transform

the transformation with respect to whose parameters the gradient must be computed

static : array, shape (S, R, C) or (R, C)

static image

moving : array, shape (S’, R’, C’) or (R’, C’)

moving image. The dimensions of the static (S, R, C) and moving (S’, R’, C’) images do not need to be the same.

static_grid2world : array (dim+1, dim+1), optional

the grid-to-space transform of the static image. The default is None, implying the transform is the identity.

moving_grid2world : array (dim+1, dim+1)

the grid-to-space transform of the moving image. The default is None, implying the spacing along all axes is 1.

starting_affine : array, shape (dim+1, dim+1), optional

the pre-aligning matrix (an affine transform) that roughly aligns the moving image towards the static image. If None, no pre-alignment is performed. If a pre-alignment matrix is available, it is recommended to provide this matrix as starting_affine instead of manually transforming the moving image to reduce interpolation artifacts. The default is None, implying no pre-alignment is performed.
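Putting setup and distance together, a small self-contained sketch on synthetic volumes (identity grid-to-space transforms are assumed):

>>> import numpy as np
>>> from dipy.workflows.align import (MutualInformationMetric,
...                                   TranslationTransform3D)
>>> static = np.random.rand(16, 16, 16)
>>> moving = np.roll(static, 2, axis=0)  # shifted copy as "moving"
>>> metric = MutualInformationMetric(nbins=32, sampling_proportion=None)
>>> metric.setup(TranslationTransform3D(), static, moving)
>>> params = np.zeros(3)  # identity parameters for a 3D translation
>>> neg_mi = metric.distance(params)
>>> neg_mi, neg_mi_grad = metric.distance_and_gradient(params)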

ResliceFlow

class dipy.workflows.align.ResliceFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide command-line parameters.
get_sub_runs() Return no sub runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(input_files, new_vox_size[, order, …]) Reslice data with new voxel resolution defined by new_vox_size
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()

Return a short name for the workflow, used to subdivide command-line parameters.

The short name is used by CombinedWorkflows and the argument parser to subdivide the command-line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with the b0_threshold parameter twice. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, new_vox_size, order=1, mode='constant', cval=0, num_processes=1, out_dir='', out_resliced='resliced.nii.gz')

Reslice data with new voxel resolution defined by new_vox_size

Parameters:
input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

new_vox_size : variable float

new voxel size

order : int, optional

Order of interpolation, from 0 to 5, for resampling/reslicing: 0 is nearest-neighbour interpolation, 1 is trilinear, and so on. If you do not want any smoothing, 0 is the option you need (default 1).

mode : string, optional

Points outside the boundaries of the input are filled according to the given mode ‘constant’, ‘nearest’, ‘reflect’ or ‘wrap’ (default ‘constant’)

cval : float, optional

Value used for points outside the boundaries of the input if mode=’constant’ (default 0)

num_processes : int, optional

Split the calculation to a pool of children processes. This only applies to 4D data arrays. If a positive integer then it defines the size of the multiprocessing pool that will be used. If 0, then the size of the pool will equal the number of cores available. (default 1)

out_dir : string, optional

Output directory (default input file directory)

out_resliced : string, optional

Name of the resliced dataset to be saved (default ‘resliced.nii.gz’)
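For instance, a hypothetical call reslicing all matching volumes to 2 mm isotropic voxels:

>>> from dipy.workflows.align import ResliceFlow
>>> ResliceFlow().run('dwi*.nii.gz', [2., 2., 2.], order=1,
...                   out_dir='resliced')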

RigidTransform3D

class dipy.workflows.align.RigidTransform3D

Bases: dipy.align.transforms.Transform

Methods

get_identity_parameters Parameter values corresponding to the identity transform
jacobian Jacobian function of this transform
param_to_matrix Matrix representation of this transform with the given parameters
get_dim  
get_number_of_parameters  
__init__()

Rigid transform in 3D (rotation + translation). The parameter vector theta of length 6 is interpreted as follows:

theta[0] : rotation about the x axis
theta[1] : rotation about the y axis
theta[2] : rotation about the z axis
theta[3] : translation along the x axis
theta[4] : translation along the y axis
theta[5] : translation along the z axis
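A short sketch of how the six parameters map to a homogeneous matrix:

>>> import numpy as np
>>> from dipy.workflows.align import RigidTransform3D
>>> rigid = RigidTransform3D()
>>> rigid.get_number_of_parameters()
6
>>> theta = np.array([0., 0., np.pi / 8, 2., 0., 0.])  # z-rotation + x-shift
>>> mat = rigid.param_to_matrix(theta)  # 4x4 homogeneous matrix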

SSDMetric

class dipy.workflows.align.SSDMetric(dim, smooth=4, inner_iter=10, step_type='demons')

Bases: dipy.align.metrics.SimilarityMetric

Methods

compute_backward() Computes one step bringing the static image towards the moving.
compute_demons_step([forward_step]) Demons step for SSD metric
compute_forward() Computes one step bringing the moving image towards the static.
compute_gauss_newton_step([forward_step]) Computes the Gauss-Newton energy minimization step
free_iteration() Nothing to free for the SSD metric
get_energy() The numerical value assigned by this metric to the current image pair
initialize_iteration() Prepares the metric to compute one displacement field iteration.
set_levels_above(levels) Informs the metric how many pyramid levels are above the current one
set_levels_below(levels) Informs the metric how many pyramid levels are below the current one
set_moving_image(moving_image, …) Sets the moving image being compared against the static one.
set_static_image(static_image, …) Sets the static image being compared against the moving one.
use_moving_image_dynamics(…) This is called by the optimizer just after setting the moving image
use_static_image_dynamics(…) This is called by the optimizer just after setting the static image.
__init__(dim, smooth=4, inner_iter=10, step_type='demons')

Sum of Squared Differences (SSD) Metric

Similarity metric for (mono-modal) nonlinear image registration defined by the sum of squared differences (SSD)

Parameters:
dim : int (either 2 or 3)

the dimension of the image domain

smooth : float

smoothness parameter, the larger the value the smoother the deformation field

inner_iter : int

number of iterations to be performed at each level of the multi-resolution Gauss-Seidel optimization algorithm (this is not the number of steps per Gaussian Pyramid level; that parameter must be set for the optimizer, not the metric)

step_type : string

the displacement field step to be computed when ‘compute_forward’ and ‘compute_backward’ are called. Either ‘demons’ or ‘gauss_newton’

compute_backward()

Computes one step bringing the static image towards the moving.

Computes the update displacement field to be used for registration of the static image towards the moving image

compute_demons_step(forward_step=True)

Demons step for SSD metric

Computes the demons step proposed by Vercauteren et al.[Vercauteren09] for the SSD metric.

Parameters:
forward_step : boolean

if True, computes the Demons step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)

Returns:
displacement : array, shape (R, C, 2) or (S, R, C, 3)

the Demons step

References

[Vercauteren09] Tom Vercauteren, Xavier Pennec, Aymeric Perchant, Nicholas Ayache, "Diffeomorphic Demons: Efficient Non-parametric Image Registration", NeuroImage, 2009.
compute_forward()

Computes one step bringing the moving image towards the static.

Computes the update displacement field to be used for registration of the moving image towards the static image

compute_gauss_newton_step(forward_step=True)

Computes the Gauss-Newton energy minimization step

Minimizes the linearized energy function (Newton step) defined by the sum of squared differences of corresponding pixels of the input images with respect to the displacement field.

Parameters:
forward_step : boolean

if True, computes the Newton step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)

Returns:
displacement : array, shape = static_image.shape + (3,)

if forward_step==True, the forward SSD Gauss-Newton step, else, the backward step

free_iteration()

Nothing to free for the SSD metric

get_energy()

The numerical value assigned by this metric to the current image pair

Returns the Sum of Squared Differences (data term) energy computed at the largest iteration

initialize_iteration()

Prepares the metric to compute one displacement field iteration.

Pre-computes the gradient of the input images to be used in the computation of the forward and backward steps.

SlrWithQbxFlow

class dipy.workflows.align.SlrWithQbxFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide command-line parameters.
get_sub_runs() Return no sub runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(static_files, moving_files[, x0, …]) Streamline-based linear registration.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()

Return a short name for the workflow, used to subdivide command-line parameters.

The short name is used by CombinedWorkflows and the argument parser to subdivide the command-line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with the b0_threshold parameter twice. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(static_files, moving_files, x0='affine', rm_small_clusters=50, qbx_thr=[40, 30, 20, 15], num_threads=None, greater_than=50, less_than=250, nb_pts=20, progressive=True, out_dir='', out_moved='moved.trk', out_affine='affine.txt', out_stat_centroids='static_centroids.trk', out_moving_centroids='moving_centroids.trk', out_moved_centroids='moved_centroids.trk')

Streamline-based linear registration.

For efficiency we apply the registration on cluster centroids and remove small clusters.

Parameters:
static_files : string
moving_files : string
x0 : string, optional

rigid, similarity or affine transformation model (default affine)

rm_small_clusters : int, optional

Remove clusters that have fewer than rm_small_clusters streamlines (default 50)

qbx_thr : variable int, optional

Thresholds for QuickBundlesX (default [40, 30, 20, 15])

num_threads : int, optional

Number of threads. If None (default) then all available threads will be used. Only metrics using OpenMP will use this variable.

greater_than : int, optional

Keep streamlines that have length greater than this value (default 50)

less_than : int, optional

Keep streamlines that have length less than this value (default 250)

nb_pts : int, optional

Number of points for discretizing each streamline (default 20)

progressive : boolean, optional

Enable progressive registration (default True)

out_dir : string, optional

Output directory (default input file directory)

out_moved : string, optional

Filename of moved tractogram (default ‘moved.trk’)

out_affine : string, optional

Filename of affine for SLR transformation (default ‘affine.txt’)

out_stat_centroids : string, optional

Filename of static centroids (default ‘static_centroids.trk’)

out_moving_centroids : string, optional

Filename of moving centroids (default ‘moving_centroids.trk’)

out_moved_centroids : string, optional

Filename of moved centroids (default ‘moved_centroids.trk’)

Notes

The order of operations is the following. First short or long streamlines are removed. Second the tractogram or a random selection of the tractogram is clustered with QuickBundlesX. Then SLR [Garyfallidis15] is applied.

References

[Garyfallidis15] Garyfallidis et al., "Robust and efficient linear registration of white-matter fascicles in the space of streamlines", NeuroImage, 117, 124-140, 2015.

[Garyfallidis14] Garyfallidis et al., "Direct native-space fiber bundle alignment for group comparisons", ISMRM, 2014.

[Garyfallidis17] Garyfallidis et al., "Recognition of white matter bundles using local and global streamline-based registration and clustering", NeuroImage, 2017.
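A hypothetical invocation from Python (the tractogram file names are placeholders):

>>> from dipy.workflows.align import SlrWithQbxFlow
>>> SlrWithQbxFlow().run('atlas_bundle.trk', 'subject_bundle.trk',
...                      x0='affine', qbx_thr=[40, 30, 20, 15],
...                      out_dir='slr_out')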

SymmetricDiffeomorphicRegistration

class dipy.workflows.align.SymmetricDiffeomorphicRegistration(metric, level_iters=None, step_length=0.25, ss_sigma_factor=0.2, opt_tol=1e-05, inv_iter=20, inv_tol=0.001, callback=None)

Bases: dipy.align.imwarp.DiffeomorphicRegistration

Methods

get_map() Returns the resulting DiffeomorphicMap, which registers the moving image towards the static image.
optimize(static, moving[, …]) Starts the optimization
set_level_iters(level_iters) Sets the number of iterations at each pyramid level
update(current_displacement, …) Composition of the current displacement field with the given field
__init__(metric, level_iters=None, step_length=0.25, ss_sigma_factor=0.2, opt_tol=1e-05, inv_iter=20, inv_tol=0.001, callback=None)

Symmetric Diffeomorphic Registration (SyN) Algorithm

Performs the multi-resolution optimization algorithm for non-linear registration using a given similarity metric.

Parameters:
metric : SimilarityMetric object

the metric to be optimized

level_iters : list of int

the number of iterations at each level of the Gaussian Pyramid (the length of the list defines the number of pyramid levels to be used)

opt_tol : float

the optimization will stop when the estimated derivative of the energy profile w.r.t. time falls below this threshold

inv_iter : int

the number of iterations to be performed by the displacement field inversion algorithm

step_length : float

the length of the maximum displacement vector of the update displacement field at each iteration

ss_sigma_factor : float

parameter of the scale-space smoothing kernel. For example, the std. dev. of the kernel will be factor*(2^i) in the isotropic case, where i = 0, 1, …, n_scales - 1 indexes the scale

inv_tol : float

the displacement field inversion algorithm will stop iterating when the inversion error falls below this threshold

callback : function(SymmetricDiffeomorphicRegistration)

a function receiving a SymmetricDiffeomorphicRegistration object to be called after each iteration (this optimizer will call this function passing self as parameter)

get_map()

Returns the resulting diffeomorphic map, i.e. the DiffeomorphicMap registering the moving image towards the static image.

optimize(static, moving, static_grid2world=None, moving_grid2world=None, prealign=None)

Starts the optimization

Parameters:
static : array, shape (S, R, C) or (R, C)

the image to be used as reference during optimization. The displacement fields will have the same discretization as the static image.

moving : array, shape (S, R, C) or (R, C)

the image to be used as “moving” during optimization. Since the deformation fields’ discretization is the same as the static image, it is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘prealign’ matrix

static_grid2world : array, shape (dim+1, dim+1)

the voxel-to-space transformation associated to the static image

moving_grid2world : array, shape (dim+1, dim+1)

the voxel-to-space transformation associated to the moving image

prealign : array, shape (dim+1, dim+1)

the affine transformation (operating on the physical space) pre-aligning the moving image towards the static

Returns:
static_to_ref : DiffeomorphicMap object

the diffeomorphic map that brings the moving image towards the static one in the forward direction (i.e. by calling static_to_ref.transform) and the static image towards the moving one in the backward direction (i.e. by calling static_to_ref.transform_inverse).

update(current_displacement, new_displacement, disp_world2grid, time_scaling)

Composition of the current displacement field with the given field

Interpolates new displacement at the locations defined by current_displacement. Equivalently, computes the composition C of the given displacement fields as C(x) = B(A(x)), where A is current_displacement and B is new_displacement. This function is intended to be used with deformation fields of the same sampling (e.g. to be called by a registration algorithm).

Parameters:
current_displacement : array, shape (R’, C’, 2) or (S’, R’, C’, 3)

the displacement field defining where to interpolate new_displacement

new_displacement : array, shape (R, C, 2) or (S, R, C, 3)

the displacement field to be warped by current_displacement

disp_world2grid : array, shape (dim+1, dim+1)

the space-to-grid transform associated with the displacements’ grid (we assume that both displacements are discretized over the same grid)

time_scaling : float

scaling factor applied to new_displacement. The effect may be interpreted as moving current_displacement along a factor (time_scaling) of new_displacement.

Returns:
updated : array, shape (the same as new_displacement)

the warped displacement field

mean_norm : the mean norm of all vectors in current_displacement
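A minimal end-to-end sketch combining the SyN optimizer with the mono-modal SSD metric described above, on synthetic 2D images:

>>> import numpy as np
>>> from dipy.workflows.align import (SSDMetric,
...                                   SymmetricDiffeomorphicRegistration)
>>> x, y = np.mgrid[0:64, 0:64]
>>> static = (((x - 32) ** 2 + (y - 32) ** 2) < 100).astype(float)
>>> moving = np.roll(static, 4, axis=0)  # shifted disk
>>> metric = SSDMetric(dim=2, smooth=4, inner_iter=10, step_type='demons')
>>> sdr = SymmetricDiffeomorphicRegistration(metric, level_iters=[50, 25, 10])
>>> mapping = sdr.optimize(static, moving)
>>> warped = mapping.transform(moving)              # moving -> static space
>>> warped_back = mapping.transform_inverse(static) # static -> moving space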

SynRegistrationFlow

class dipy.workflows.align.SynRegistrationFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide command-line parameters.
get_sub_runs() Return no sub runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(static_image_files, moving_image_files[, …])
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

run(static_image_files, moving_image_files, prealign_file='', inv_static=False, level_iters=[10, 10, 5], metric='cc', mopt_sigma_diff=2.0, mopt_radius=4, mopt_smooth=0.0, mopt_inner_iter=0.0, mopt_q_levels=256, mopt_double_gradient=True, mopt_step_type='', step_length=0.25, ss_sigma_factor=0.2, opt_tol=1e-05, inv_iter=20, inv_tol=0.001, out_dir='', out_warped='warped_moved.nii.gz', out_inv_static='inc_static.nii.gz', out_field='displacement_field.nii.gz')
Parameters:
static_image_files : string

Path to the static image file.

moving_image_files : string

Path to the moving image file.

prealign_file : string, optional

The text file containing pre-alignment information via an affine matrix.

inv_static : boolean, optional

Apply the inverse mapping to the static image (default 'False').

level_iters : variable int, optional

The number of iterations at each level of the Gaussian pyramid. By default, a 3-level scale space with the iterations sequence [10, 10, 5] will be used. The 0-th level corresponds to the finest resolution.

metric : string, optional

The metric to be used (default 'cc', the Cross Correlation metric). Metrics available: cc (Cross Correlation), ssd (Sum Squared Difference), em (Expectation-Maximization).

mopt_sigma_diff : float, optional

Metric option applied on Cross Correlation (CC). The standard deviation of the Gaussian smoothing kernel to be applied to the update field at each iteration (default 2.0).

mopt_radius : int, optional

Metric option applied on Cross Correlation (CC). The radius of the squared (cubic) neighborhood at each voxel to be considered to compute the cross correlation (default 4).

mopt_smooth : float, optional

Metric option applied on Sum Squared Difference (SSD) and Expectation Maximization (EM). Smoothness parameter; the larger the value, the smoother the deformation field (default 1.0 for EM, 4.0 for SSD).

mopt_inner_iter : int, optional

Metric option applied on Sum Squared Difference (SSD) and Expectation Maximization (EM). This is the number of iterations to be performed at each level of the multi-resolution Gauss-Seidel optimization algorithm (this is not the number of steps per Gaussian Pyramid level; that parameter must be set for the optimizer, not the metric). Default 5 for EM, 10 for SSD.

mopt_q_levels : int, optional

Metric option applied on Expectation Maximization (EM). Number of quantization levels (default 256 for EM).

mopt_double_gradient : bool, optional

Metric option applied on Expectation Maximization (EM). If True, the gradient of the expected static image under the moving modality will be added to the gradient of the moving image; similarly, the gradient of the expected moving image under the static modality will be added to the gradient of the static image.

mopt_step_type : string, optional

Metric option applied on Sum Squared Difference (SSD) and Expectation Maximization (EM). The optimization schedule to be used in the multi-resolution Gauss-Seidel optimization algorithm (not used if the Demons step is selected). Possible values: ('gauss_newton', 'demons'). Default: 'gauss_newton' for EM, 'demons' for SSD.

step_length : float, optional

The length of the maximum displacement vector of the update displacement field at each iteration.

ss_sigma_factor : float, optional

Parameter of the scale-space smoothing kernel. For example, the std. dev. of the kernel will be factor*(2^i) in the isotropic case, where i = 0, 1, …, n_scales - 1 indexes the scale.

opt_tol : float, optional

The optimization will stop when the estimated derivative of the energy profile w.r.t. time falls below this threshold.

inv_iter : int, optional

The number of iterations to be performed by the displacement field inversion algorithm.

inv_tol : float, optional

The displacement field inversion algorithm will stop iterating when the inversion error falls below this threshold.

out_dir : string, optional

Directory to save the transformed files (default '').

out_warped : string, optional

Name of the warped file (default 'warped_moved.nii.gz').

out_inv_static : string, optional

Name of the file to save the static image after applying the inverse mapping (default 'inv_static.nii.gz').

out_field : string, optional

Name of the file to save the diffeomorphic map (default 'displacement_field.nii.gz').
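A hypothetical Python invocation, assuming an affine pre-alignment has already been computed (e.g. by ImageRegistrationFlow above):

>>> from dipy.workflows.align import SynRegistrationFlow
>>> SynRegistrationFlow().run('b0_static.nii.gz', 'b0_moving.nii.gz',
...                           prealign_file='affine.txt', metric='cc',
...                           level_iters=[10, 10, 5], out_dir='syn_out')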

TranslationTransform3D

class dipy.workflows.align.TranslationTransform3D

Bases: dipy.align.transforms.Transform

Methods

get_identity_parameters Parameter values corresponding to the identity transform
jacobian Jacobian function of this transform
param_to_matrix Matrix representation of this transform with the given parameters
get_dim  
get_number_of_parameters  
__init__()

Translation transform in 3D

Workflow

class dipy.workflows.align.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: object

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide command-line parameters.
get_sub_runs() Return no sub runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(*args, **kwargs) Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

get_io_iterator()

Create an iterator for IO.

Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other contextuals) and the run method’s docstring.

classmethod get_short_name()

Return a short name for the workflow, used to subdivide command-line parameters.

The short name is used by CombinedWorkflows and the argument parser to subdivide the command-line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with the b0_threshold parameter twice. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

get_sub_runs()

Return no sub runs, since this is a simple workflow.

manage_output_overwrite()

Check if a file will be overwritten upon processing the inputs.

If it is bound to happen, an action is taken depending on self._force_overwrite (or --force via the command line). A log message is output independently of the outcome to tell the user something happened.

run(*args, **kwargs)

Execute the workflow.

Since this is an abstract class, an exception is raised if this code is reached (i.e. run is not implemented in a child class, or is called directly on this class).

check_dimensions

dipy.workflows.align.check_dimensions(static, moving)

Check the dimensions of the input images.

Parameters:
static : 2D or 3D array

the image to be used as reference during optimization.

moving: 2D or 3D array

the image to be used as “moving” during optimization. It is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘starting_affine’ matrix

load_nifti

dipy.workflows.align.load_nifti(fname, return_img=False, return_voxsize=False, return_coords=False)

load_trk

dipy.workflows.align.load_trk(filename, lazy_load=False)

Loads tractogram files (*.trk)

Parameters:
filename : str

input trk filename

lazy_load : {False, True}, optional

If True, load streamlines in a lazy manner i.e. they will not be kept in memory and only be loaded when needed. Otherwise, load all streamlines in memory.

Returns:
streamlines : list of 2D arrays

Each 2D array represents a sequence of 3D points (points, 3).

hdr : dict

header from a trk file

reslice

dipy.workflows.align.reslice(data, affine, zooms, new_zooms, order=1, mode='constant', cval=0, num_processes=1)

Reslice data with new voxel resolution defined by new_zooms

Parameters:
data : array, shape (I,J,K) or (I,J,K,N)

3d volume or 4d volume with datasets

affine : array, shape (4,4)

mapping from voxel coordinates to world coordinates

zooms : tuple, shape (3,)

voxel size for (i,j,k) dimensions

new_zooms : tuple, shape (3,)

new voxel size for (i,j,k) after resampling

order : int, from 0 to 5

Order of interpolation for resampling/reslicing: 0 is nearest-neighbour interpolation, 1 is trilinear, and so on. If you do not want any smoothing, 0 is the option you need.

mode : string (‘constant’, ‘nearest’, ‘reflect’ or ‘wrap’)

Points outside the boundaries of the input are filled according to the given mode.

cval : float

Value used for points outside the boundaries of the input if mode=’constant’.

num_processes : int

Split the calculation to a pool of children processes. This only applies to 4D data arrays. If a positive integer then it defines the size of the multiprocessing pool that will be used. If 0, then the size of the pool will equal the number of cores available.

Returns:
data2 : array, shape (I,J,K) or (I,J,K,N)

datasets resampled into isotropic voxel size

affine2 : array, shape (4,4)

new affine for the resampled image

Examples

>>> import nibabel as nib
>>> from dipy.align.reslice import reslice
>>> from dipy.data import get_fnames
>>> fimg = get_fnames('aniso_vox')
>>> img = nib.load(fimg)
>>> data = img.get_data()
>>> data.shape == (58, 58, 24)
True
>>> affine = img.affine
>>> zooms = img.header.get_zooms()[:3]
>>> zooms
(4.0, 4.0, 5.0)
>>> new_zooms = (3.,3.,3.)
>>> new_zooms
(3.0, 3.0, 3.0)
>>> data2, affine2 = reslice(data, affine, zooms, new_zooms)
>>> data2.shape == (77, 77, 40)
True

save_nifti

dipy.workflows.align.save_nifti(fname, data, affine, hdr=None)

save_qa_metric

dipy.workflows.align.save_qa_metric(fname, xopt, fopt)

Save Quality Assurance metrics.

Parameters:
fname : string

File name to save the metric values.

xopt : numpy array

The metric containing the optimal parameters for image registration.

fopt : int

The distance between the registered images.

slr_with_qbx

dipy.workflows.align.slr_with_qbx(static, moving, x0='affine', rm_small_clusters=50, maxiter=100, select_random=None, verbose=False, greater_than=50, less_than=250, qbx_thr=[40, 30, 20, 15], nb_pts=20, progressive=True, rng=None, num_threads=None)

Utility function for registering large tractograms.

For efficiency we apply the registration on cluster centroids and remove small clusters.

Parameters:
static : Streamlines
moving : Streamlines
x0 : str

rigid, similarity or affine transformation model (default affine)

rm_small_clusters : int

Remove clusters that have fewer than rm_small_clusters streamlines (default 50)

select_random : int

If not None, select a random number of streamlines for clustering. Default None.

verbose : bool

If True, information about the optimization is shown.

greater_than : int, optional

Keep streamlines that have length greater than this value (default 50)

less_than : int, optional

Keep streamlines that have length less than this value (default 250)

qbx_thr : variable int

Thresholds for QuickBundlesX (default [40, 30, 20, 15])

nb_pts : int, optional

Number of points for discretizing each streamline (default 20)

progressive : boolean, optional

Enable progressive registration (default True)

rng : RandomState

If None, a RandomState is created inside the function.

num_threads : int

Number of threads. If None (default) then all available threads will be used. Only metrics using OpenMP will use this variable.

Notes

The order of operations is the following. First short or long streamlines are removed. Second the tractogram or a random selection of the tractogram is clustered with QuickBundles. Then SLR [Garyfallidis15] is applied.

References

[Garyfallidis15] Garyfallidis et al., "Robust and efficient linear registration of white-matter fascicles in the space of streamlines", NeuroImage, 117, 124-140, 2015.

[Garyfallidis14] Garyfallidis et al., "Direct native-space fiber bundle alignment for group comparisons", ISMRM, 2014.

[Garyfallidis17] Garyfallidis et al., "Recognition of white matter bundles using local and global streamline-based registration and clustering", NeuroImage, 2017.
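A sketch of the call pattern; static and moving are assumed to be already-loaded Streamlines objects (e.g. via load_trk above), and the unpacking assumes the return order (moved streamlines, affine, and the two sets of centroids):

>>> from dipy.workflows.align import slr_with_qbx
>>> moved, transform, centroids1, centroids2 = slr_with_qbx(
...     static, moving, x0='affine',
...     rm_small_clusters=50, qbx_thr=[40, 30, 20, 15])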

transform_centers_of_mass

dipy.workflows.align.transform_centers_of_mass(static, static_grid2world, moving, moving_grid2world)

Transformation to align the center of mass of the input images

Parameters:
static : array, shape (S, R, C)

static image

static_grid2world : array, shape (dim+1, dim+1)

the voxel-to-space transformation of the static image

moving : array, shape (S, R, C)

moving image

moving_grid2world : array, shape (dim+1, dim+1)

the voxel-to-space transformation of the moving image

Returns:
affine_map : instance of AffineMap

the affine transformation (translation only, in this case) aligning the center of mass of the moving image towards the one of the static image
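A small synthetic sketch: two cubes whose centers of mass are offset, aligned by translation only:

>>> import numpy as np
>>> from dipy.workflows.align import transform_centers_of_mass
>>> static = np.zeros((32, 32, 32))
>>> static[8:16, 8:16, 8:16] = 1.0
>>> moving = np.zeros((32, 32, 32))
>>> moving[16:24, 16:24, 16:24] = 1.0
>>> affine_map = transform_centers_of_mass(static, np.eye(4),
...                                        moving, np.eye(4))
>>> aligned = affine_map.transform(moving)  # resampled onto the static grid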

transform_streamlines

dipy.workflows.align.transform_streamlines(streamlines, mat, in_place=False)

Apply affine transformation to streamlines

Parameters:
streamlines : Streamlines

Streamlines object

mat : array, (4, 4)

transformation matrix

in_place : bool

If True, the data is changed in place. Be careful: this modifies the input streamlines.

Returns:
new_streamlines : Streamlines

Sequence of transformed 2D ndarrays, each with shape[-1] == 3
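For example, translating two toy streamlines by 10 units along x:

>>> import numpy as np
>>> from dipy.workflows.align import transform_streamlines
>>> streamlines = [np.array([[0., 0., 0.], [1., 1., 1.]]),
...                np.array([[0., 0., 0.], [2., 0., 0.]])]
>>> mat = np.eye(4)
>>> mat[:3, 3] = [10., 0., 0.]  # translation component
>>> moved = transform_streamlines(streamlines, mat)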

IntrospectiveArgumentParser

class dipy.workflows.base.IntrospectiveArgumentParser(prog=None, usage=None, description=None, epilog=None, parents=[], formatter_class=<class 'argparse.RawTextHelpFormatter'>, prefix_chars='-', fromfile_prefix_chars=None, argument_default=None, conflict_handler='resolve', add_help=True)

Bases: argparse.ArgumentParser

Attributes:
optional_parameters
output_parameters
positional_parameters

Methods

add_argument(dest, …[, name=value])
add_sub_flow_args(sub_flows) Take an array of workflow objects and use introspection to extract the parameters, types and docstrings of their run method.
add_subparsers(**kwargs)
add_workflow(workflow) Take a workflow object and use introspection to extract the parameters, types and docstrings of its run method.
error(message) Prints a usage message incorporating the message to stderr and exits.
exit([status, message])
format_usage()
get_flow_args([args, namespace]) Returns the parsed arguments as a dictionary that will be used as a workflow’s run method arguments.
parse_args([args, namespace])
print_usage([file])
register(registry_name, value, object)
set_defaults(**kwargs)
add_argument_group  
add_description  
add_epilogue  
add_mutually_exclusive_group  
convert_arg_line_to_args  
format_help  
get_default  
parse_known_args  
print_help  
show_argument  
update_argument  
__init__(prog=None, usage=None, description=None, epilog=None, parents=[], formatter_class=<class 'argparse.RawTextHelpFormatter'>, prefix_chars='-', fromfile_prefix_chars=None, argument_default=None, conflict_handler='resolve', add_help=True)

Augmenting the argument parser to allow automatic creation of arguments from workflows

Parameters:
prog : None

The name of the program (default: sys.argv[0])

usage : None

A usage message (default: auto-generated from arguments)

description : str

A description of what the program does

epilog : str

Text following the argument descriptions

parents : list

Parsers whose arguments should be copied into this one

formatter_class : obj

HelpFormatter class for printing help messages

prefix_chars : str

Characters that prefix optional arguments

fromfile_prefix_chars : None

Characters that prefix files containing additional arguments

argument_default : None

The default value for all arguments

conflict_handler : str

String indicating how to handle conflicts

add_help : bool

Add a -h/--help option

add_description()
add_epilogue()
add_sub_flow_args(sub_flows)

Take an array of workflow objects and use introspection to extract the parameters, types and docstrings of their run method. Only the optional input parameters are extracted for these as they are treated as sub workflows.

Parameters:
sub_flows : array of dipy.workflows.workflow.Workflow

Workflows to inspect.

Returns:
sub_flow_optionals : dictionary of all sub workflow optional parameters
add_workflow(workflow)

Take a workflow object and use introspection to extract the parameters, types and docstrings of its run method. Then add these parameters to the current argparser's own params to parse. If the workflow is of type combined_workflow, the optional input parameters of its sub workflows will also be added.

Parameters:
workflow : dipy.workflows.workflow.Workflow

Workflow from which to infer parameters.

Returns:
sub_flow_optionals : dictionary of all sub workflow optional parameters
get_flow_args(args=None, namespace=None)

Returns the parsed arguments as a dictionary that will be used as a workflow’s run method arguments.

optional_parameters
output_parameters
positional_parameters
show_argument(dest)
update_argument(*args, **kargs)
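A sketch of the introspection round-trip, assuming the parser exposes optional run() arguments as flags named after the parameters (e.g. --sigma for NLMeansFlow; that flag name is an assumption of this sketch):

>>> from dipy.workflows.base import IntrospectiveArgumentParser
>>> from dipy.workflows.denoise import NLMeansFlow
>>> parser = IntrospectiveArgumentParser()
>>> flow = NLMeansFlow()
>>> _ = parser.add_workflow(flow)  # arguments built from flow.run's docstring
>>> args = parser.get_flow_args(['dwi.nii.gz', '--sigma', '12'])
>>> flow.run(**args)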

NumpyDocString

class dipy.workflows.base.NumpyDocString(docstring, config={})

Bases: object

__init__(docstring, config={})

Initialize self. See help(type(self)) for accurate signature.

get_args_default

dipy.workflows.base.get_args_default(func)

CombinedWorkflow

class dipy.workflows.combined_workflow.CombinedWorkflow(output_strategy='append', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_optionals(flow, **kwargs) Returns the sub flow’s optional arguments merged with those passed as params in kwargs.
get_short_name() Return a short name for the workflow, used to subdivide command-line parameters.
get_sub_runs() Returns a list of tuples (sub flow name, sub flow run method, sub flow short name) to be used in the sub flow parameters extraction.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(*args, **kwargs) Execute the workflow.
run_sub_flow(flow, *args, **kwargs) Runs the sub flow with the optional parameters passed via the command line.
set_sub_flows_optionals(opts) Sets the self._optionals variable with all sub flow arguments that were passed in the commandline.
__init__(output_strategy='append', mix_names=False, force=False, skip=False)

Workflow that combines multiple workflows. The workflows combined together are referred to as sub flows in this class.

get_optionals(flow, **kwargs)

Returns the sub flow’s optional arguments merged with those passed as params in kwargs.

get_sub_runs()

Returns a list of tuples (sub flow name, sub flow run method, sub flow short name) to be used in the sub flow parameters extraction.

run_sub_flow(flow, *args, **kwargs)

Runs the sub flow with the optional parameters passed via the command line. This is a convenience method to make sub flow running more intuitive on the concrete CombinedWorkflow side.

set_sub_flows_optionals(opts)

Sets the self._optionals variable with all sub flow arguments that were passed in the commandline.
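A sketch of a concrete combined workflow. It assumes sub flows are declared via a _get_sub_flows method and that each workflow records its outputs in a last_generated_outputs dictionary; both are assumptions not documented in this listing:

>>> from dipy.workflows.combined_workflow import CombinedWorkflow
>>> from dipy.workflows.denoise import NLMeansFlow
>>> from dipy.workflows.mask import MaskFlow
>>> class DenoiseAndMaskFlow(CombinedWorkflow):  # hypothetical sub-class
...     def _get_sub_flows(self):
...         return [NLMeansFlow, MaskFlow]  # exposed as nlmeans.*, mask.*
...     def run(self, input_files, lb=50.0, out_dir=''):
...         nlmeans_flow = NLMeansFlow()
...         self.run_sub_flow(nlmeans_flow, input_files, out_dir=out_dir)
...         denoised = nlmeans_flow.last_generated_outputs['out_denoised']
...         self.run_sub_flow(MaskFlow(), denoised, lb, out_dir=out_dir)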

Workflow

class dipy.workflows.combined_workflow.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: object

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide command-line parameters.
get_sub_runs() Return no sub runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(*args, **kwargs) Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

get_io_iterator()

Create an iterator for IO.

Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other contextuals) and the run method’s docstring.

classmethod get_short_name()

Return a short name for the workflow, used to subdivide command-line parameters.

The short name is used by CombinedWorkflows and the argument parser to subdivide the command-line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with the b0_threshold parameter twice. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

get_sub_runs()

Return no sub runs, since this is a simple workflow.

manage_output_overwrite()

Check if a file will be overwritten upon processing the inputs.

If it is bound to happen, an action is taken depending on self._force_overwrite (or --force via the command line). A log message is output independently of the outcome to tell the user something happened.

run(*args, **kwargs)

Execute the workflow.

Since this is an abstract class, an exception is raised if this code is reached (i.e. run is not implemented in a child class, or is called directly on this class).

iteritems

dipy.workflows.combined_workflow.iteritems(d, **kw)

Return an iterator over the (key, value) pairs of a dictionary.

NLMeansFlow

class dipy.workflows.denoise.NLMeansFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide command-line parameters.
get_sub_runs() Return no sub runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(input_files[, sigma, out_dir, out_denoised]) Workflow wrapping the nlmeans denoising method.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()

Return a short name for the workflow, used to subdivide command-line parameters.

The short name is used by CombinedWorkflows and the argument parser to subdivide the command-line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with the b0_threshold parameter twice. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, sigma=0, out_dir='', out_denoised='dwi_nlmeans.nii.gz')

Workflow wrapping the nlmeans denoising method.

It applies nlmeans denoising on each file found by 'globbing' input_files and saves the results in a directory specified by out_dir.

Parameters:
input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

sigma : float, optional

Sigma parameter to pass to the nlmeans algorithm (default 0, which triggers automatic estimation).

out_dir : string, optional

Output directory (default input file directory)

out_denoised : string, optional

Name of the resulting denoised volume (default: dwi_nlmeans.nii.gz)
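A hypothetical call denoising every volume matched by a wildcard:

>>> from dipy.workflows.denoise import NLMeansFlow
>>> NLMeansFlow().run('dwi*.nii.gz', sigma=0, out_dir='denoised')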

Workflow

class dipy.workflows.denoise.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: object

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide command-line parameters.
get_sub_runs() Return no sub runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(*args, **kwargs) Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

get_io_iterator()

Create an iterator for IO.

Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other contextuals) and the run method’s docstring.

classmethod get_short_name()

Return a short name for the workflow, used to subdivide command-line parameters.

The short name is used by CombinedWorkflows and the argument parser to subdivide the command-line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with the b0_threshold parameter twice. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

get_sub_runs()

Return no sub runs, since this is a simple workflow.

manage_output_overwrite()

Check if a file will be overwritten upon processing the inputs.

If it is bound to happen, an action is taken depending on self._force_overwrite (or --force via the command line). A log message is output independently of the outcome to tell the user something happened.

run(*args, **kwargs)

Execute the workflow.

Since this is an abstract class, an exception is raised if this code is reached (i.e. run is not implemented in a child class, or is called directly on this class).

estimate_sigma

dipy.workflows.denoise.estimate_sigma(arr, disable_background_masking=False, N=0)

Standard deviation estimation from local patches

Parameters:
arr : 3D or 4D ndarray

The array to be estimated

disable_background_masking : bool, default False

If True, uses all voxels for the estimation; otherwise, only non-zero voxels are used. Useful if the background is masked by the scanner.

N : int, default 0

Number of coils of the receiver array. Use N = 1 in case of a SENSE reconstruction (Philips scanners) or the number of coils for a GRAPPA reconstruction (Siemens and GE). Use 0 to disable the correction factor, as for example if the noise is Gaussian distributed. See [1] for more information.

Returns:
sigma : ndarray

standard deviation of the noise, one estimation per volume.
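A toy sketch; a real dwi array would come from load_nifti:

>>> import numpy as np
>>> from dipy.workflows.denoise import estimate_sigma
>>> dwi = np.random.rand(32, 32, 32, 6)  # toy 4D dataset, 6 volumes
>>> sigma = estimate_sigma(dwi, N=4)     # e.g. a 4-coil GRAPPA acquisition
>>> len(sigma)  # one estimate per volume
6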

load_nifti

dipy.workflows.denoise.load_nifti(fname, return_img=False, return_voxsize=False, return_coords=False)

nlmeans

dipy.workflows.denoise.nlmeans(arr, sigma, mask=None, patch_radius=1, block_radius=5, rician=True, num_threads=None)

Non-local means for denoising 3D and 4D images

Parameters:
arr : 3D or 4D ndarray

The array to be denoised

mask : 3D ndarray
sigma : float or 3D array

standard deviation of the noise estimated from the data

patch_radius : int

patch size is 2 x patch_radius + 1. Default is 1.

block_radius : int

block size is 2 x block_radius + 1. Default is 5.

rician : boolean

If True the noise is estimated as Rician, otherwise Gaussian noise is assumed.

num_threads : int

Number of threads. If None (default) then all available threads will be used (all CPU cores).

Returns:
denoised_arr : ndarray

the denoised arr which has the same shape as arr.

References

[Descoteaux08] Descoteaux, Maxime; Wiest-Daesslé, Nicolas; Prima, Sylvain; Barillot, Christian; Deriche, Rachid. "Impact of Rician Adapted Non-Local Means Filtering on HARDI", MICCAI 2008.
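A toy sketch on synthetic data; in practice arr and sigma would come from load_nifti and estimate_sigma above:

>>> import numpy as np
>>> from dipy.workflows.denoise import nlmeans
>>> arr = np.random.rand(32, 32, 32)  # toy volume
>>> denoised = nlmeans(arr, sigma=0.05, patch_radius=1, block_radius=5,
...                    rician=False)
>>> denoised.shape == arr.shape
True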

save_nifti

dipy.workflows.denoise.save_nifti(fname, data, affine, hdr=None)

NumpyDocString

class dipy.workflows.docstring_parser.NumpyDocString(docstring, config={})

Bases: object

__init__(docstring, config={})

Initialize self. See help(type(self)) for accurate signature.

Reader

class dipy.workflows.docstring_parser.Reader(data)

Bases: object

A line-based string reader.

Methods

eof  
is_empty  
peek  
read  
read_to_condition  
read_to_next_empty_line  
read_to_next_unindented_line  
reset  
seek_next_non_empty_line  
__init__(data)
Parameters:
data : str

String with lines separated by '\n'.
eof()
is_empty()
peek(n=0)
read()
read_to_condition(condition_func)
read_to_next_empty_line()
read_to_next_unindented_line()
reset()
seek_next_non_empty_line()

dedent_lines

dipy.workflows.docstring_parser.dedent_lines(lines)

Deindent a list of lines maximally

warn

dipy.workflows.docstring_parser.warn()

Issue a warning, or maybe ignore it or raise an exception.

IntrospectiveArgumentParser

class dipy.workflows.flow_runner.IntrospectiveArgumentParser(prog=None, usage=None, description=None, epilog=None, parents=[], formatter_class=<class 'argparse.RawTextHelpFormatter'>, prefix_chars='-', fromfile_prefix_chars=None, argument_default=None, conflict_handler='resolve', add_help=True)

Bases: argparse.ArgumentParser

Attributes:
optional_parameters
output_parameters
positional_parameters

Methods

add_argument(dest, …[, name=value])
add_sub_flow_args(sub_flows) Take an array of workflow objects and use introspection to extract the parameters, types and docstrings of their run method.
add_subparsers(**kwargs)
add_workflow(workflow) Take a workflow object and use introspection to extract the parameters, types and docstrings of its run method.
error(message) Prints a usage message incorporating the message to stderr and exits.
exit([status, message])
format_usage()
get_flow_args([args, namespace]) Returns the parsed arguments as a dictionary that will be used as a workflow’s run method arguments.
parse_args([args, namespace])
print_usage([file])
register(registry_name, value, object)
set_defaults(**kwargs)
add_argument_group  
add_description  
add_epilogue  
add_mutually_exclusive_group  
convert_arg_line_to_args  
format_help  
get_default  
parse_known_args  
print_help  
show_argument  
update_argument  
__init__(prog=None, usage=None, description=None, epilog=None, parents=[], formatter_class=<class 'argparse.RawTextHelpFormatter'>, prefix_chars='-', fromfile_prefix_chars=None, argument_default=None, conflict_handler='resolve', add_help=True)

Augmenting the argument parser to allow automatic creation of arguments from workflows

Parameters:
prog : None

The name of the program (default: sys.argv[0])

usage : None

A usage message (default: auto-generated from arguments)

description : str

A description of what the program does

epilog : str

Text following the argument descriptions

parents : list

Parsers whose arguments should be copied into this one

formatter_class : obj

HelpFormatter class for printing help messages

prefix_chars : str

Characters that prefix optional arguments

fromfile_prefix_chars : None

Characters that prefix files containing additional arguments

argument_default : None

The default value for all arguments

conflict_handler : str

String indicating how to handle conflicts

add_help : bool

Add a -h/--help option

add_description()
add_epilogue()
add_sub_flow_args(sub_flows)

Take an array of workflow objects and use introspection to extract the parameters, types and docstrings of their run method. Only the optional input parameters are extracted for these as they are treated as sub workflows.

Parameters:
sub_flows : array of dipy.workflows.workflow.Workflow

Workflows to inspect.

Returns:
sub_flow_optionals : dictionary of all sub workflow optional parameters
add_workflow(workflow)

Take a workflow object and use introspection to extract the parameters, types and docstrings of its run method. Then add these parameters to the current argparser's own params to parse. If the workflow is of type combined_workflow, the optional input parameters of its sub workflows will also be added.

Parameters:
workflow : dipy.workflows.workflow.Workflow

Workflow from which to infer parameters.

Returns:
sub_flow_optionals : dictionary of all sub workflow optional parameters
get_flow_args(args=None, namespace=None)

Returns the parsed arguments as a dictionary that will be used as a workflow’s run method arguments.

optional_parameters
output_parameters
positional_parameters
show_argument(dest)
update_argument(*args, **kargs)

get_level

dipy.workflows.flow_runner.get_level(lvl)

Transforms the logging level passed on the command line into a proper logging level name.

iteritems

dipy.workflows.flow_runner.iteritems(d, **kw)

Return an iterator over the (key, value) pairs of a dictionary.

run_flow

dipy.workflows.flow_runner.run_flow(flow)

Wraps the process of building an argparser that reflects the workflow that we want to run along with some generic parameters like logging, force and output strategies. The resulting parameters are then fed to the workflow’s run method.
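This is the pattern behind dipy's command-line scripts; a minimal entry-point sketch (run_flow reads sys.argv, so this is meant to live in a script rather than an interactive session):

>>> from dipy.workflows.flow_runner import run_flow
>>> from dipy.workflows.denoise import NLMeansFlow
>>> run_flow(NLMeansFlow())  # parses sys.argv and calls NLMeansFlow.run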

IoInfoFlow

class dipy.workflows.io.IoInfoFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide command-line parameters.
get_sub_runs() Return no sub runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(input_files[, b0_threshold, bvecs_tol, …]) Provides useful information about different files used in medical imaging.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()

Return a short name for the workflow, used to subdivide command-line parameters.

The short name is used by CombinedWorkflows and the argument parser to subdivide the command-line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with the b0_threshold parameter twice. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, b0_threshold=50, bvecs_tol=0.01, bshell_thr=100)

Provides useful information about different files used in medical imaging. Any number of input files can be provided. The program identifies the type of file by its extension.

Parameters:
input_files : variable string

Any number of Nifti1, bvals or bvecs files.

b0_threshold : float, optional

(default 50)

bvecs_tol : float, optional

Threshold used to check that norm(bvec) = 1 +/- bvecs_tol, i.e. that the b-vectors are unit vectors (default 0.01)

bshell_thr : float, optional

Threshold for distinguishing b-values in different shells (default 100)
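A hypothetical call inspecting a mixed set of files (passing the variable-string argument as a Python list is an assumption of this sketch):

>>> from dipy.workflows.io import IoInfoFlow
>>> IoInfoFlow().run(['dwi.nii.gz', 'dwi.bval', 'dwi.bvec'])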

Workflow

class dipy.workflows.io.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: object

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide command-line parameters.
get_sub_runs() Return no sub runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(*args, **kwargs) Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

get_io_iterator()

Create an iterator for IO.

Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other context) and the run method’s docstring.

classmethod get_short_name()

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argument parser to namespace the command-line parameters, avoiding clashes between sub-workflow parameters that share the same name.

For example, in a combined workflow with DTI reconstruction and CSD reconstruction, both might end up with a b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available instead.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

get_sub_runs()

Return no sub runs, since this is a simple workflow.

manage_output_overwrite()

Check if a file will be overwritten upon processing the inputs.

If it is bound to happen, an action is taken depending on self._force_overwrite (or --force via the command line). A log message is output independently of the outcome to tell the user something happened.

run(*args, **kwargs)

Execute the workflow.

Since this is an abstract class, an exception is raised if this code is reached (i.e., run is not implemented in the child class, or it is called directly on this class).

load_nifti

dipy.workflows.io.load_nifti(fname, return_img=False, return_voxsize=False, return_coords=False)

MaskFlow

class dipy.workflows.mask.MaskFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide.
get_sub_runs() Return no sub runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(input_files, lb[, ub, out_dir, out_mask]) Workflow for creating a binary mask
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argument parser to namespace the command-line parameters, avoiding clashes between sub-workflow parameters that share the same name.

For example, in a combined workflow with DTI reconstruction and CSD reconstruction, both might end up with a b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available instead.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, lb, ub=inf, out_dir='', out_mask='mask.nii.gz')

Workflow for creating a binary mask

Parameters:
input_files : string

Path to image to be masked.

lb : float

Lower bound value.

ub : float, optional

Upper bound value (default Inf)

out_dir : string, optional

Output directory (default input file directory)

out_mask : string, optional

Name of the masked file (default ‘mask.nii.gz’)
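A minimal usage sketch, with hypothetical file names and thresholds:

from dipy.workflows.mask import MaskFlow

# Voxels with values in [50, inf) become 1, everything else 0.
# The result is written to out/mask.nii.gz (the default out_mask name).
MaskFlow().run('b0.nii.gz', lb=50, out_dir='out')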

Workflow

class dipy.workflows.mask.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: object

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide.
get_sub_runs() Return no sub runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(*args, **kwargs) Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

get_io_iterator()

Create an iterator for IO.

Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other context) and the run method’s docstring.

classmethod get_short_name()

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argument parser to namespace the command-line parameters, avoiding clashes between sub-workflow parameters that share the same name.

For example, in a combined workflow with DTI reconstruction and CSD reconstruction, both might end up with a b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available instead.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

get_sub_runs()

Return no sub runs, since this is a simple workflow.

manage_output_overwrite()

Check if a file will be overwritten upon processing the inputs.

If it is bound to happen, an action is taken depending on self._force_overwrite (or --force via the command line). A log message is output independently of the outcome to tell the user something happened.

run(*args, **kwargs)

Execute the workflow.

Since this is an abstract class, an exception is raised if this code is reached (i.e., run is not implemented in the child class, or it is called directly on this class).

load_nifti

dipy.workflows.mask.load_nifti(fname, return_img=False, return_voxsize=False, return_coords=False)

save_nifti

dipy.workflows.mask.save_nifti(fname, data, affine, hdr=None)

IOIterator

class dipy.workflows.multi_io.IOIterator(output_strategy='absolute', mix_names=False)

Bases: object

Create output filenames that work nicely with multiple input files from multiple directories (processing multiple subjects with one command)

Use information from input files, out_dir and out_fnames to generate correct outputs which can come from long lists of multiple or single inputs.

Methods

create_directories  
create_outputs  
file_existence_check  
set_inputs  
set_out_dir  
set_out_fnames  
set_output_keys  
__init__(output_strategy='absolute', mix_names=False)

Initialize self. See help(type(self)) for accurate signature.

create_directories()
create_outputs()
file_existence_check(args)
set_inputs(*args)
set_out_dir(out_dir)
set_out_fnames(*args)
set_output_keys(*args)

basename_without_extension

dipy.workflows.multi_io.basename_without_extension(fname)

common_start

dipy.workflows.multi_io.common_start(sa, sb)

Return the longest common substring from the beginning of sa and sb.

concatenate_inputs

dipy.workflows.multi_io.concatenate_inputs(multi_inputs)

Concatenate list of inputs

connect_output_paths

dipy.workflows.multi_io.connect_output_paths(inputs, out_dir, out_files, output_strategy='absolute', mix_names=True)

Generates a list of output file paths based on input files and output strategies.

Parameters:
inputs : array

List of input paths.

out_dir : string

The output directory.

out_files : array

List of output files.

output_strategy : string
Which strategy to use to generate the output paths.

‘append’: add out_dir to the path of the input.
‘prepend’: add the input path directory tree to out_dir.
‘absolute’: put directly in out_dir.

mix_names : bool

Whether or not to prepend a string composed of a mix of the input names to the final output name.

Returns:
A list of output file paths.
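A sketch contrasting the three strategies on a hypothetical input; the exact structure of the returned list is an implementation detail, so treat the printout as illustrative:

from dipy.workflows.multi_io import connect_output_paths

inputs = ['/data/subj1/dwi.nii.gz']
for strategy in ('append', 'prepend', 'absolute'):
    out = connect_output_paths(inputs, '/results', ['fa.nii.gz'],
                               output_strategy=strategy, mix_names=False)
    # 'append':   out_dir is appended to the input file's directory
    # 'prepend':  the input's directory tree is recreated under out_dir
    # 'absolute': outputs are placed directly in out_dir
    print(strategy, out)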

get_args_default

dipy.workflows.multi_io.get_args_default(func)

glob

dipy.workflows.multi_io.glob(pathname, *, recursive=False)

Return a list of paths matching a pathname pattern.

The pattern may contain simple shell-style wildcards a la fnmatch. However, unlike fnmatch, filenames starting with a dot are special cases that are not matched by ‘*’ and ‘?’ patterns.

If recursive is true, the pattern ‘**’ will match any files and zero or more directories and subdirectories.

io_iterator

dipy.workflows.multi_io.io_iterator(inputs, out_dir, fnames, output_strategy='absolute', mix_names=False, out_keys=None)

Creates an IOIterator from the parameters.

Parameters:
inputs : array

List of input files.

out_dir : string

Output directory.

fnames : array

File names of all outputs to be created.

output_strategy : string

Controls the behavior of the IOIterator for output paths.

mix_names : bool

Whether or not to append a mix of input names at the beginning.

Returns:
Properly instantiated IOIterator object.
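A sketch of driving the iterator directly, under the assumption (used by the workflows themselves) that each iteration yields the matched input and output paths; the glob pattern is hypothetical:

from dipy.workflows.multi_io import io_iterator

# One glob-style input, two output names per input.
io_it = io_iterator(['/data/subj*/dwi.nii.gz'], 'out',
                    ['fa.nii.gz', 'md.nii.gz'])

for dwi_path, fa_path, md_path in io_it:
    # Load dwi_path, compute the metrics, save them to fa_path and md_path.
    print(dwi_path, '->', fa_path, md_path)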

io_iterator_

dipy.workflows.multi_io.io_iterator_(frame, fnc, output_strategy='absolute', mix_names=False)

Creates an IOIterator using introspection.

Parameters:
frame : frameobject

Contains the info about the values of the current local variables.

fnc : function

The function to inspect

output_strategy : string

Controls the behavior of the IOIterator for output paths.

mix_names : bool

Whether or not to append a mix of input names at the beginning.

Returns:
Properly instantiated IOIterator object.

slash_to_under

dipy.workflows.multi_io.slash_to_under(dir_str)

ConstrainedSphericalDeconvModel

class dipy.workflows.reconst.ConstrainedSphericalDeconvModel(gtab, response, reg_sphere=None, sh_order=8, lambda_=1, tau=0.1)

Bases: dipy.reconst.shm.SphHarmModel

Methods

cache_clear() Clear the cache.
cache_get(tag, key[, default]) Retrieve a value from the cache.
cache_set(tag, key, value) Store a value in the cache.
fit(data[, mask]) Fit method for every voxel in data
predict(sh_coeff[, gtab, S0]) Compute a signal prediction given spherical harmonic coefficients for the provided GradientTable class instance.
sampling_matrix(sphere) The matrix needed to sample ODFs from coefficients of the model.
__init__(gtab, response, reg_sphere=None, sh_order=8, lambda_=1, tau=0.1)

Constrained Spherical Deconvolution (CSD) [1].

Spherical deconvolution computes a fiber orientation distribution (FOD), also called fiber ODF (fODF) [2], as opposed to a diffusion ODF as in the QballModel or the CsaOdfModel. This results in a sharper angular profile with better angular resolution, which is the best object to use for subsequent deterministic and probabilistic tractography [3].

A sharp fODF is obtained because a single-fiber response function is injected as a priori knowledge. The response function is often data-driven and is thus provided as input to the ConstrainedSphericalDeconvModel. It will be used as the deconvolution kernel, as described in [1].

Parameters:
gtab : GradientTable
response : tuple or AxSymShResponse object

A tuple with two elements. The first is the eigenvalues as a (3,) ndarray and the second is the signal value for the response function without diffusion weighting. This makes it possible to generate a single-fiber synthetic signal. The response function will be used as the deconvolution kernel ([1]).

reg_sphere : Sphere (optional)

sphere used to build the regularization B matrix. Default: ‘symmetric362’.

sh_order : int (optional)

maximal spherical harmonics order. Default: 8

lambda_ : float (optional)

weight given to the constrained-positivity regularization part of the deconvolution equation (see [1]). Default: 1

tau : float (optional)

threshold controlling the amplitude below which the corresponding fODF is assumed to be zero. Ideally, tau should be set to zero. However, to improve the stability of the algorithm, the threshold is set to tau*100% of the mean fODF amplitude (here, 10% by default) (see [1]). Default: 0.1

References

[1] Tournier, J.D., et al. NeuroImage 2007. Robust determination of the fibre orientation distribution in diffusion MRI: Non-negativity constrained super-resolved spherical deconvolution
[2] Descoteaux, M., et al. IEEE TMI 2009. Deterministic and Probabilistic Tractography Based on Complex Fibre Orientation Distributions
[3] Côté, M-A., et al. Medical Image Analysis 2013. Tractometer: Towards validation of tractography pipelines
[4] Tournier, J.D., et al. Imaging Systems and Technology 2012. MRtrix: Diffusion Tractography in Crossing Fiber Regions
fit(data, mask=None)

Fit method for every voxel in data

predict(sh_coeff, gtab=None, S0=1.0)

Compute a signal prediction given spherical harmonic coefficients for the provided GradientTable class instance.

Parameters:
sh_coeff : ndarray

The spherical harmonic representation of the FOD from which to make the signal prediction.

gtab : GradientTable

The gradients for which the signal will be predicted. Use the model’s gradient table by default.

S0 : ndarray or float

The non diffusion-weighted signal value.

Returns:
pred_sig : ndarray

The predicted signal.
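A minimal fitting sketch with hypothetical file names; the single-fiber response is estimated with auto_response (documented later in this module):

from dipy.core.gradients import gradient_table
from dipy.io import read_bvals_bvecs
from dipy.io.image import load_nifti
from dipy.reconst.csdeconv import (ConstrainedSphericalDeconvModel,
                                   auto_response)

data, affine = load_nifti('dwi.nii.gz')
bvals, bvecs = read_bvals_bvecs('dwi.bval', 'dwi.bvec')
gtab = gradient_table(bvals, bvecs)

# Data-driven single-fiber response from a central ROI with FA > 0.7.
response, ratio = auto_response(gtab, data, roi_radius=10, fa_thr=0.7)

csd_model = ConstrainedSphericalDeconvModel(gtab, response, sh_order=8)
csd_fit = csd_model.fit(data)
fodf_sh = csd_fit.shm_coeff  # spherical harmonic coefficients of the fODF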

CsaOdfModel

class dipy.workflows.reconst.CsaOdfModel(gtab, sh_order, smooth=0.006, min_signal=1.0, assume_normed=False)

Bases: dipy.reconst.shm.QballBaseModel

Implementation of Constant Solid Angle reconstruction method.

References

[1] Aganj, I., et al. 2009. ODF Reconstruction in Q-Ball Imaging With Solid Angle Consideration.

Methods

cache_clear() Clear the cache.
cache_get(tag, key[, default]) Retrieve a value from the cache.
cache_set(tag, key, value) Store a value in the cache.
fit(data[, mask]) Fits the model to diffusion data and returns the model fit
sampling_matrix(sphere) The matrix needed to sample ODFs from coefficients of the model.
__init__(gtab, sh_order, smooth=0.006, min_signal=1.0, assume_normed=False)

Creates a model that can be used to fit or sample diffusion data

See also

normalize_data

max = 0.999
min = 0.001

DiffusionKurtosisModel

class dipy.workflows.reconst.DiffusionKurtosisModel(gtab, fit_method='WLS', *args, **kwargs)

Bases: dipy.reconst.base.ReconstModel

Class for the Diffusion Kurtosis Model

Methods

fit(data[, mask]) Fit method of the DKI model class
predict(dki_params[, S0]) Predict a signal for this DKI model class instance given parameters.
__init__(gtab, fit_method='WLS', *args, **kwargs)

Diffusion Kurtosis Tensor Model [1]

Parameters:
gtab : GradientTable class instance
fit_method : str or callable

str can be one of the following:

‘OLS’ or ‘ULLS’ for ordinary least squares: dki.ols_fit_dki

‘WLS’ or ‘UWLLS’ for weighted least squares: dki.wls_fit_dki

callable has to have the signature:

fit_method(design_matrix, data, *args, **kwargs)

args, kwargs : arguments and keyword arguments passed to the fit_method. See dki.ols_fit_dki, dki.wls_fit_dki for details

References

[1] Tabesh, A., Jensen, J.H., Ardekani, B.A., Helpern, J.A., 2011. Estimation of tensors and tensor-derived measures in diffusional kurtosis imaging. Magn Reson Med. 65(3), 823-836

fit(data, mask=None)

Fit method of the DKI model class

Parameters:
data : array

The measured signal from one voxel.

mask : array

A boolean array of shape data.shape[:-1], used to mark the coordinates in the data that should be analyzed.

predict(dki_params, S0=1.0)

Predict a signal for this DKI model class instance given parameters.

Parameters:
dki_params : ndarray (x, y, z, 27) or (n, 27)

All parameters estimated from the diffusion kurtosis model. Parameters are ordered as follows:

  1. Three eigenvalues of the diffusion tensor
  2. Three lines of the eigenvector matrix, each containing the first, second and third coordinates of the eigenvector
  3. Fifteen elements of the kurtosis tensor
S0 : float or ndarray (optional)

The non diffusion-weighted signal in every voxel, or across all voxels. Default: 1
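A minimal fitting sketch, reusing the hypothetical gtab and data arrays from the CSD example above:

from dipy.reconst.dki import DiffusionKurtosisModel

dki_model = DiffusionKurtosisModel(gtab, fit_method='WLS')
dki_fit = dki_model.fit(data)

fa = dki_fit.fa        # tensor-derived fractional anisotropy
mk = dki_fit.mk(0, 3)  # mean kurtosis, clipped to the range [0, 3]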

ReconstCSAFlow

class dipy.workflows.reconst.ReconstCSAFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide.
get_sub_runs() Return no sub runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(input_files, bvalues_files, …[, …]) Constant Solid Angle.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argument parser to namespace the command-line parameters, avoiding clashes between sub-workflow parameters that share the same name.

For example, in a combined workflow with DTI reconstruction and CSD reconstruction, both might end up with a b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available instead.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, bvalues_files, bvectors_files, mask_files, sh_order=6, odf_to_sh_order=8, b0_threshold=50.0, bvecs_tol=0.01, extract_pam_values=False, out_dir='', out_pam='peaks.pam5', out_shm='shm.nii.gz', out_peaks_dir='peaks_dirs.nii.gz', out_peaks_values='peaks_values.nii.gz', out_peaks_indices='peaks_indices.nii.gz', out_gfa='gfa.nii.gz')

Constant Solid Angle.

Parameters:
input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

bvalues_files : string

Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.

bvectors_files : string

Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.

mask_files : string

Path to the input masks. This path may contain wildcards to use multiple masks at once. (default: No mask used)

sh_order : int, optional

Spherical harmonics order (default 6) used in the CSA fit.

odf_to_sh_order : int, optional

Spherical harmonics order used for peaks_from_model to compress the ODF to spherical harmonics coefficients (default 8)

b0_threshold : float, optional

Threshold used to find b=0 directions

bvecs_tol : float, optional

Threshold used so that norm(bvec)=1 (default 0.01)

extract_pam_values : bool, optional

Whether or not to save pam volumes as single nifti files.

out_dir : string, optional

Output directory (default input file directory)

out_pam : string, optional

Name of the peaks volume to be saved (default ‘peaks.pam5’)

out_shm : string, optional

Name of the spherical harmonics volume to be saved (default ‘shm.nii.gz’)

out_peaks_dir : string, optional

Name of the peaks directions volume to be saved (default ‘peaks_dirs.nii.gz’)

out_peaks_values : string, optional

Name of the peaks values volume to be saved (default ‘peaks_values.nii.gz’)

out_peaks_indices : string, optional

Name of the peaks indices volume to be saved (default ‘peaks_indices.nii.gz’)

out_gfa : string, optional

Name of the generalized FA volume to be saved (default ‘gfa.nii.gz’)

References

[1] Aganj, I., et al. 2009. ODF Reconstruction in Q-Ball Imaging with Solid Angle Consideration.

ReconstCSDFlow

class dipy.workflows.reconst.ReconstCSDFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide.
get_sub_runs() Return no sub runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(input_files, bvalues_files, …[, …]) Constrained spherical deconvolution
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argument parser to namespace the command-line parameters, avoiding clashes between sub-workflow parameters that share the same name.

For example, in a combined workflow with DTI reconstruction and CSD reconstruction, both might end up with a b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available instead.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, bvalues_files, bvectors_files, mask_files, b0_threshold=50.0, bvecs_tol=0.01, roi_center=None, roi_radius=10, fa_thr=0.7, frf=None, extract_pam_values=False, sh_order=8, odf_to_sh_order=8, out_dir='', out_pam='peaks.pam5', out_shm='shm.nii.gz', out_peaks_dir='peaks_dirs.nii.gz', out_peaks_values='peaks_values.nii.gz', out_peaks_indices='peaks_indices.nii.gz', out_gfa='gfa.nii.gz')

Constrained spherical deconvolution

Parameters:
input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

bvalues_files : string

Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.

bvectors_files : string

Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.

mask_files : string

Path to the input masks. This path may contain wildcards to use multiple masks at once. (default: No mask used)

b0_threshold : float, optional

Threshold used to find b=0 directions

bvecs_tol : float, optional

Threshold used to check that b-vectors are unit vectors (default 0.01)

roi_center : variable int, optional

Center of ROI in data. If center is None, it is assumed that it is the center of the volume with shape data.shape[:3] (default None)

roi_radius : int, optional

radius of cubic ROI in voxels (default 10)

fa_thr : float, optional

FA threshold for calculating the response function (default 0.7)

frf : variable float, optional

Fiber response function can, for example, be input as 15 4 4 (from the command line) or [15, 4, 4] (from a Python script); the values will be converted to float and multiplied by 10**-4. If None, the fiber response function will be computed automatically (default: None).

extract_pam_values : bool, optional

Whether or not to save pam volumes as single nifti files.

sh_order : int, optional

Spherical harmonics order (default 8) used in the CSD fit.

odf_to_sh_order : int, optional

Spherical harmonics order used for peaks_from_model to compress the ODF to spherical harmonics coefficients (default 8)

out_dir : string, optional

Output directory (default input file directory)

out_pam : string, optional

Name of the peaks volume to be saved (default ‘peaks.pam5’)

out_shm : string, optional

Name of the spherical harmonics volume to be saved (default ‘shm.nii.gz’)

out_peaks_dir : string, optional

Name of the peaks directions volume to be saved (default ‘peaks_dirs.nii.gz’)

out_peaks_values : string, optional

Name of the peaks values volume to be saved (default ‘peaks_values.nii.gz’)

out_peaks_indices : string, optional

Name of the peaks indices volume to be saved (default ‘peaks_indices.nii.gz’)

out_gfa : string, optional

Name of the generalized FA volume to be saved (default ‘gfa.nii.gz’)

References

[1] Tournier, J.D., et al. NeuroImage 2007. Robust determination of the fibre orientation distribution in diffusion MRI: Non-negativity constrained super-resolved spherical deconvolution.
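A sketch of running this flow from Python with hypothetical file names; the same parameters are exposed on the command line through the workflow machinery:

from dipy.workflows.reconst import ReconstCSDFlow

ReconstCSDFlow().run('dwi.nii.gz', 'dwi.bval', 'dwi.bvec', 'mask.nii.gz',
                     extract_pam_values=True, out_dir='csd_out')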

ReconstDkiFlow

class dipy.workflows.reconst.ReconstDkiFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide.
get_sub_runs() Return no sub runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(input_files, bvalues_files, …[, …]) Workflow for Diffusion Kurtosis reconstruction and for computing DKI metrics.
get_dki_model  
get_fitted_tensor  
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

get_dki_model(gtab)
get_fitted_tensor(data, mask, bval, bvec, b0_threshold=50)
classmethod get_short_name()

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argument parser to namespace the command-line parameters, avoiding clashes between sub-workflow parameters that share the same name.

For example, in a combined workflow with DTI reconstruction and CSD reconstruction, both might end up with a b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available instead.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, bvalues_files, bvectors_files, mask_files, b0_threshold=50.0, save_metrics=[], out_dir='', out_dt_tensor='dti_tensors.nii.gz', out_fa='fa.nii.gz', out_ga='ga.nii.gz', out_rgb='rgb.nii.gz', out_md='md.nii.gz', out_ad='ad.nii.gz', out_rd='rd.nii.gz', out_mode='mode.nii.gz', out_evec='evecs.nii.gz', out_eval='evals.nii.gz', out_dk_tensor='dki_tensors.nii.gz', out_mk='mk.nii.gz', out_ak='ak.nii.gz', out_rk='rk.nii.gz')

Workflow for Diffusion Kurtosis reconstruction and for computing DKI metrics. Performs a DKI reconstruction on the files by ‘globbing’ input_files and saves the DKI metrics in a directory specified by out_dir.

Parameters:
input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

bvalues_files : string

Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.

bvectors_files : string

Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.

mask_files : string

Path to the input masks. This path may contain wildcards to use multiple masks at once. (default: No mask used)

b0_threshold : float, optional

Threshold used to find b=0 directions (default 50.0)

save_metrics : variable string, optional

List of metrics to save. Possible values: fa, ga, rgb, md, ad, rd, mode, tensor, evec, eval (default [] (all))

out_dir : string, optional

Output directory (default input file directory)

out_dt_tensor : string, optional

Name of the tensors volume to be saved (default: ‘dti_tensors.nii.gz’)

out_dk_tensor : string, optional

Name of the tensors volume to be saved (default ‘dki_tensors.nii.gz’)

out_fa : string, optional

Name of the fractional anisotropy volume to be saved (default ‘fa.nii.gz’)

out_ga : string, optional

Name of the geodesic anisotropy volume to be saved (default ‘ga.nii.gz’)

out_rgb : string, optional

Name of the color fa volume to be saved (default ‘rgb.nii.gz’)

out_md : string, optional

Name of the mean diffusivity volume to be saved (default ‘md.nii.gz’)

out_ad : string, optional

Name of the axial diffusivity volume to be saved (default ‘ad.nii.gz’)

out_rd : string, optional

Name of the radial diffusivity volume to be saved (default ‘rd.nii.gz’)

out_mode : string, optional

Name of the mode volume to be saved (default ‘mode.nii.gz’)

out_evec : string, optional

Name of the eigenvectors volume to be saved (default ‘evecs.nii.gz’)

out_eval : string, optional

Name of the eigenvalues to be saved (default ‘evals.nii.gz’)

out_mk : string, optional

Name of the mean kurtosis to be saved (default: ‘mk.nii.gz’)

out_ak : string, optional

Name of the axial kurtosis to be saved (default: ‘ak.nii.gz’)

out_rk : string, optional

Name of the radial kurtosis to be saved (default: ‘rk.nii.gz’)

References

[1] Tabesh, A., Jensen, J.H., Ardekani, B.A., Helpern, J.A., 2011. Estimation of tensors and tensor-derived measures in diffusional kurtosis imaging. Magn Reson Med. 65(3), 823-836
[2] Jensen, Jens H., Joseph A. Helpern, Anita Ramani, Hanzhang Lu, and Kyle Kaczynski. 2005. Diffusional Kurtosis Imaging: The Quantification of Non-Gaussian Water Diffusion by Means of Magnetic Resonance Imaging. MRM 53 (6):1432-40.

ReconstDtiFlow

class dipy.workflows.reconst.ReconstDtiFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide.
get_sub_runs() Return no sub runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(input_files, bvalues_files, …[, …]) Workflow for tensor reconstruction and for computing DTI metrics.
get_fitted_tensor  
get_tensor_model  
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

get_fitted_tensor(data, mask, bval, bvec, b0_threshold=50, bvecs_tol=0.01)
classmethod get_short_name()

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argument parser to namespace the command-line parameters, avoiding clashes between sub-workflow parameters that share the same name.

For example, in a combined workflow with DTI reconstruction and CSD reconstruction, both might end up with a b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available instead.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

get_tensor_model(gtab)
run(input_files, bvalues_files, bvectors_files, mask_files, b0_threshold=50, bvecs_tol=0.01, save_metrics=[], out_dir='', out_tensor='tensors.nii.gz', out_fa='fa.nii.gz', out_ga='ga.nii.gz', out_rgb='rgb.nii.gz', out_md='md.nii.gz', out_ad='ad.nii.gz', out_rd='rd.nii.gz', out_mode='mode.nii.gz', out_evec='evecs.nii.gz', out_eval='evals.nii.gz')

Workflow for tensor reconstruction and for computing DTI metrics using Weighted Least-Squares. Performs a tensor reconstruction on the files by ‘globbing’ input_files and saves the DTI metrics in a directory specified by out_dir.

Parameters:
input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

bvalues_files : string

Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.

bvectors_files : string

Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.

mask_files : string

Path to the input masks. This path may contain wildcards to use multiple masks at once. (default: No mask used)

b0_threshold : float, optional

Threshold used to find b=0 directions (default 50)

bvecs_tol : float, optional

Threshold used to check that norm(bvec) = 1 +/- bvecs_tol, i.e. that b-vectors are unit vectors (default 0.01)

save_metrics : variable string, optional

List of metrics to save. Possible values: fa, ga, rgb, md, ad, rd, mode, tensor, evec, eval (default [] (all))

out_dir : string, optional

Output directory (default input file directory)

out_tensor : string, optional

Name of the tensors volume to be saved (default ‘tensors.nii.gz’)

out_fa : string, optional

Name of the fractional anisotropy volume to be saved (default ‘fa.nii.gz’)

out_ga : string, optional

Name of the geodesic anisotropy volume to be saved (default ‘ga.nii.gz’)

out_rgb : string, optional

Name of the color fa volume to be saved (default ‘rgb.nii.gz’)

out_md : string, optional

Name of the mean diffusivity volume to be saved (default ‘md.nii.gz’)

out_ad : string, optional

Name of the axial diffusivity volume to be saved (default ‘ad.nii.gz’)

out_rd : string, optional

Name of the radial diffusivity volume to be saved (default ‘rd.nii.gz’)

out_mode : string, optional

Name of the mode volume to be saved (default ‘mode.nii.gz’)

out_evec : string, optional

Name of the eigenvectors volume to be saved (default ‘evecs.nii.gz’)

out_eval : string, optional

Name of the eigenvalues to be saved (default ‘evals.nii.gz’)

References

[1] Basser, P.J., Mattiello, J., LeBihan, D., 1994. Estimation of the effective self-diffusion tensor from the NMR spin echo. J Magn Reson B 103, 247-254.
[2] Basser, P., Pierpaoli, C., 1996. Microstructural and physiological features of tissues elucidated by quantitative diffusion-tensor MRI. Journal of Magnetic Resonance 111, 209-219.
[3] Lin-Ching C., Jones D.K., Pierpaoli, C. 2005. RESTORE: Robust estimation of tensors by outlier rejection. MRM 53: 1088-1095
[4] Chung, S.W., Lu, Y., Henry, R.G., 2006. Comparison of bootstrap approaches for estimation of uncertainties of DTI parameters. NeuroImage 33, 531-541.
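A sketch with hypothetical file names, saving only a subset of the metrics:

from dipy.workflows.reconst import ReconstDtiFlow

# An empty save_metrics list (the default) would save all metrics.
ReconstDtiFlow().run('dwi.nii.gz', 'dwi.bval', 'dwi.bvec', 'mask.nii.gz',
                     save_metrics=['fa', 'md'], out_dir='dti_out')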

ReconstIvimFlow

class dipy.workflows.reconst.ReconstIvimFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide.
get_sub_runs() Return no sub runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(input_files, bvalues_files, …[, …]) Workflow for Intra-voxel Incoherent Motion reconstruction and for computing IVIM metrics.
get_fitted_ivim  
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

get_fitted_ivim(data, mask, bval, bvec, b0_threshold=50)
classmethod get_short_name()

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argument parser to namespace the command-line parameters, avoiding clashes between sub-workflow parameters that share the same name.

For example, in a combined workflow with DTI reconstruction and CSD reconstruction, both might end up with a b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available instead.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, bvalues_files, bvectors_files, mask_files, split_b_D=400, split_b_S0=200, b0_threshold=0, save_metrics=[], out_dir='', out_S0_predicted='S0_predicted.nii.gz', out_perfusion_fraction='perfusion_fraction.nii.gz', out_D_star='D_star.nii.gz', out_D='D.nii.gz')

Workflow for Intra-voxel Incoherent Motion reconstruction and for computing IVIM metrics. Performs an IVIM reconstruction on the files by ‘globbing’ input_files and saves the IVIM metrics in a directory specified by out_dir.

Parameters:
input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

bvalues_files : string

Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.

bvectors_files : string

Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.

mask_files : string

Path to the input masks. This path may contain wildcards to use multiple masks at once. (default: No mask used)

split_b_D : int, optional

Value to split the bvals to estimate D for the two-stage process of fitting (default 400)

split_b_S0 : int, optional

Value to split the bvals to estimate S0 for the two-stage process of fitting. (default 200)

b0_threshold : int, optional

Threshold value for the b0 bval. (default 0)

save_metrics : variable string, optional

List of metrics to save. Possible values: S0_predicted, perfusion_fraction, D_star, D (default [] (all))

out_dir : string, optional

Output directory (default input file directory)

out_S0_predicted : string, optional

Name of the estimated S0 signal to be saved (default: ‘S0_predicted.nii.gz’)

out_perfusion_fraction : string, optional

Name of the estimated volume fractions to be saved (default ‘perfusion_fraction.nii.gz’)

out_D_star : string, optional

Name of the estimated pseudo-diffusion parameter to be saved (default ‘D_star.nii.gz’)

out_D : string, optional

Name of the estimated diffusion parameter to be saved (default ‘D.nii.gz’)

References

[Stejskal65] Stejskal, E. O.; Tanner, J. E. (1 January 1965). “Spin Diffusion Measurements: Spin Echoes in the Presence of a Time-Dependent Field Gradient”. The Journal of Chemical Physics 42 (1): 288. Bibcode: 1965JChPh..42..288S. doi:10.1063/1.1695690.
[LeBihan84] Le Bihan, Denis, et al. “Separation of diffusion and perfusion in intravoxel incoherent motion MR imaging.” Radiology 168.2 (1988): 497-505.
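A sketch with hypothetical file names; split_b_D and split_b_S0 control the two-stage fit as described above:

from dipy.workflows.reconst import ReconstIvimFlow

ReconstIvimFlow().run('ivim.nii.gz', 'ivim.bval', 'ivim.bvec', 'mask.nii.gz',
                      split_b_D=400, split_b_S0=200, out_dir='ivim_out')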

ReconstMAPMRIFlow

class dipy.workflows.reconst.ReconstMAPMRIFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide.
get_sub_runs() Return no sub runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(data_files, bvals_files, bvecs_files, …) Workflow for fitting the MAPMRI model (with optional Laplacian regularization).
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argument parser to namespace the command-line parameters, avoiding clashes between sub-workflow parameters that share the same name.

For example, in a combined workflow with DTI reconstruction and CSD reconstruction, both might end up with a b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available instead.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(data_files, bvals_files, bvecs_files, small_delta, big_delta, b0_threshold=50.0, laplacian=True, positivity=True, bval_threshold=2000, save_metrics=[], laplacian_weighting=0.05, radial_order=6, out_dir='', out_rtop='rtop.nii.gz', out_lapnorm='lapnorm.nii.gz', out_msd='msd.nii.gz', out_qiv='qiv.nii.gz', out_rtap='rtap.nii.gz', out_rtpp='rtpp.nii.gz', out_ng='ng.nii.gz', out_perng='perng.nii.gz', out_parng='parng.nii.gz')

Workflow for fitting the MAPMRI model (with optional Laplacian regularization). Generates rtop, lapnorm, msd, qiv, rtap, rtpp, non-Gaussianity (ng), parallel ng and perpendicular ng maps from the input files provided by data_files, and saves them in NIfTI format to the output directory specified by out_dir.

In order for the MAPMRI workflow to work as intended, either laplacian or positivity (or both) must be set to True.

Parameters:
data_files : string

Path to the input volume.

bvals_files : string

Path to the bval files.

bvecs_files : string

Path to the bvec files.

small_delta : float

Small delta value used in generation of gradient table of provided bval and bvec.

big_delta : float

Big delta value used in generation of gradient table of provided bval and bvec.

b0_threshold : float, optional

Threshold used to find b=0 directions (default 50.0)

laplacian : bool, optional

Regularize using the Laplacian of the MAP-MRI basis (default True)

positivity : bool, optional

Constrain the propagator to be positive. (default True)

bval_threshold : float, optional

Sets the b-value threshold to be used in the scale factor estimation. In order for the estimated non-Gaussianity to have meaning, this value should be set to a lower value (b < 2000 s/mm^2) such that the scale factors are estimated on signal points that reasonably represent the spins at Gaussian diffusion. (default: 2000)

save_metrics : variable string, optional

List of metrics to save. Possible values: rtop, laplacian_signal, msd, qiv, rtap, rtpp, ng, perng, parng (default: [] (all))

laplacian_weighting : float, optional

Weighting value used in fitting the MAPMRI model in the laplacian and both model types. (default: 0.05)

radial_order : unsigned int, optional

Even value used to set the order of the basis (default: 6)

out_dir : string, optional

Output directory (default: input file directory)

out_rtop : string, optional

Name of the rtop to be saved

out_lapnorm : string, optional

Name of the norm of laplacian signal to be saved

out_msd : string, optional

Name of the msd to be saved

out_qiv : string, optional

Name of the qiv to be saved

out_rtap : string, optional

Name of the rtap to be saved

out_rtpp : string, optional

Name of the rtpp to be saved

out_ng : string, optional

Name of the Non-Gaussianity to be saved

out_perng : string, optional

Name of the Non-Gaussianity perpendicular to be saved

out_parng : string, optional

Name of the Non-Gaussianity parallel to be saved
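A sketch with hypothetical file names; the small_delta and big_delta values below are placeholders and must be taken from the actual acquisition protocol:

from dipy.workflows.reconst import ReconstMAPMRIFlow

ReconstMAPMRIFlow().run('dwi.nii.gz', 'dwi.bval', 'dwi.bvec',
                        small_delta=0.0129, big_delta=0.0218,
                        laplacian=True, positivity=True,
                        radial_order=6, out_dir='mapmri_out')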

TensorModel

class dipy.workflows.reconst.TensorModel(gtab, fit_method='WLS', return_S0_hat=False, *args, **kwargs)

Bases: dipy.reconst.base.ReconstModel

Diffusion Tensor

Methods

fit(data[, mask]) Fit method of the DTI model class
predict(dti_params[, S0]) Predict a signal for this TensorModel class instance given parameters.
__init__(gtab, fit_method='WLS', return_S0_hat=False, *args, **kwargs)

A Diffusion Tensor Model [1], [2].

Parameters:
gtab : GradientTable class instance
fit_method : str or callable

str can be one of the following:

‘WLS’ for weighted least squares: dti.wls_fit_tensor()

‘LS’ or ‘OLS’ for ordinary least squares: dti.ols_fit_tensor()

‘NLLS’ for non-linear least-squares: dti.nlls_fit_tensor()

‘RT’ or ‘restore’ or ‘RESTORE’ for RESTORE robust tensor fitting [3]: dti.restore_fit_tensor()

callable has to have the signature:

fit_method(design_matrix, data, *args, **kwargs)

return_S0_hat : bool

Boolean to return (True) or not (False) the S0 values for the fit.

args, kwargs : arguments and keyword arguments passed to the fit_method. See dti.wls_fit_tensor, dti.ols_fit_tensor for details

min_signal : float

The minimum signal value. Needs to be a strictly positive number. Default: minimal signal in the data provided to fit.

Notes

In order to increase speed of processing, tensor fitting is done simultaneously over many voxels. Many fit_methods use the ‘step’ parameter to set the number of voxels that will be fit at once in each iteration. This is the chunk size as a number of voxels. A larger step value should speed things up, but it will also take up more memory. It is advisable to keep an eye on memory consumption as this value is increased.

Example : In iter_fit_tensor() we have a default step value of 1e4

References

[1] Basser, P.J., Mattiello, J., LeBihan, D., 1994. Estimation of the effective self-diffusion tensor from the NMR spin echo. J Magn Reson B 103, 247-254.
[2] Basser, P., Pierpaoli, C., 1996. Microstructural and physiological features of tissues elucidated by quantitative diffusion-tensor MRI. Journal of Magnetic Resonance 111, 209-219.
[3] Lin-Ching C., Jones D.K., Pierpaoli, C. 2005. RESTORE: Robust estimation of tensors by outlier rejection. MRM 53: 1088-1095
fit(data, mask=None)

Fit method of the DTI model class

Parameters:
data : array

The measured signal from one voxel.

mask : array

A boolean array used to mark the coordinates in the data that should be analyzed that has the shape data.shape[:-1]

predict(dti_params, S0=1.0)

Predict a signal for this TensorModel class instance given parameters.

Parameters:
dti_params : ndarray

The last dimension should have 12 tensor parameters: 3 eigenvalues, followed by the 3 eigenvectors

S0 : float or ndarray

The non diffusion-weighted signal in every voxel, or across all voxels. Default: 1
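A minimal fitting sketch, reusing the hypothetical gtab and data arrays from the examples above:

from dipy.reconst.dti import TensorModel

tensor_model = TensorModel(gtab, fit_method='WLS')
tensor_fit = tensor_model.fit(data)

fa = tensor_fit.fa        # fractional anisotropy map
evals = tensor_fit.evals  # eigenvalues, sorted in descending order

# Predict the signal back from the fitted parameters (S0 defaults to 1).
predicted = tensor_model.predict(tensor_fit.model_params)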

Workflow

class dipy.workflows.reconst.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: object

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide.
get_sub_runs() Return no sub runs, since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(*args, **kwargs) Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

get_io_iterator()

Create an iterator for IO.

Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other context) and the run method’s docstring.

classmethod get_short_name()

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argument parser to namespace the command-line parameters, avoiding clashes between sub-workflow parameters that share the same name.

For example, in a combined workflow with DTI reconstruction and CSD reconstruction, both might end up with a b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available instead.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

get_sub_runs()

Return no sub runs, since this is a simple workflow.

manage_output_overwrite()

Check if a file will be overwritten upon processing the inputs.

If it is bound to happen, an action is taken depending on self._force_overwrite (or --force via the command line). A log message is output independently of the outcome to tell the user something happened.

run(*args, **kwargs)

Execute the workflow.

Since this is an abstract class, an exception is raised if this code is reached (i.e., run is not implemented in the child class, or it is called directly on this class).

IvimModel

dipy.workflows.reconst.IvimModel(gtab, fit_method='LM', **kwargs)

Selector function to switch between the two-stage Levenberg-Marquardt based NLLS fitting method (which also contains the linear fit), ‘LM’, and the Variable Projection based fitting method, ‘VarPro’.

Parameters:
fit_method : string, optional

The value of fit_method can either be ‘LM’ or ‘VarPro’ (default: ‘LM’).

auto_response

dipy.workflows.reconst.auto_response(gtab, data, roi_center=None, roi_radius=10, fa_thr=0.7, fa_callable=<function fa_superior>, return_number_of_voxels=False)

Automatic estimation of response function using FA.

Parameters:
gtab : GradientTable
data : ndarray

diffusion data

roi_center : tuple, (3,)

Center of ROI in data. If center is None, it is assumed that it is the center of the volume with shape data.shape[:3].

roi_radius : int

radius of cubic ROI

fa_thr : float

FA threshold

fa_callable : callable

A callable that defines an operation that compares FA with the fa_thr. The callable should have two positional arguments (e.g., fa_callable(FA, fa_thr)) and it should return a bool array.

return_number_of_voxels : bool

If True, returns the number of voxels used for estimating the response function.

Returns:
response : tuple, (2,)

(evals, S0)

ratio : float

The ratio between smallest versus largest eigenvalue of the response.

number of voxels : int (optional)

The number of voxels used for estimating the response function.

Notes

In CSD there is an important pre-processing step: the estimation of the fiber response function. In order to do this we look for voxels with very anisotropic configurations. For example, we can use an ROI (20x20x20) at the center of the volume and store the signal values for the voxels with FA values higher than 0.7. Of course, if we haven’t precalculated FA we need to fit a Tensor model to the datasets, which is what we do in this function.

For the response we also need to find the average S0 in the ROI. This is possible using gtab.b0s_mask(), with which we can find all the S0 volumes (which correspond to b-values equal to 0) in the dataset.

The response always consists of a prolate tensor created by averaging the highest and second highest eigenvalues in the ROI with FA higher than threshold. We also include the average S0s.

We also return the ratio which is used for the SDT models. If requested, the number of voxels used for estimating the response function is also returned, which can be used to judge the fidelity of the response function. As a rule of thumb, at least 300 voxels should be used to estimate a good response function (see [1]).

References

[1] Tournier, J.D., et al. NeuroImage 2004. Direct estimation of the fiber orientation density function from diffusion-weighted MRI data using spherical deconvolution
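A sketch applying the 300-voxel rule of thumb from the Notes above, with gtab and data as in the earlier examples:

from dipy.reconst.csdeconv import auto_response

response, ratio, nvoxels = auto_response(gtab, data, roi_radius=10,
                                         fa_thr=0.7,
                                         return_number_of_voxels=True)

# Rule of thumb: at least 300 voxels for a trustworthy response function.
if nvoxels < 300:
    print('Response estimated from only %d voxels; consider lowering '
          'fa_thr or enlarging the ROI.' % nvoxels)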

axial_diffusivity

dipy.workflows.reconst.axial_diffusivity(evals, axis=-1)

Axial Diffusivity (AD) of a diffusion tensor. Also called parallel diffusivity.

Parameters:
evals : array-like

Eigenvalues of a diffusion tensor, must be sorted in descending order along axis.

axis : int

Axis of evals which contains 3 eigenvalues.

Returns:
ad : array

Calculated AD.

Notes

AD is calculated with the following equation:

\[AD = \lambda_1\]

color_fa

dipy.workflows.reconst.color_fa(fa, evecs)

Color fractional anisotropy of diffusion tensor

Parameters:
fa : array-like

Array of the fractional anisotropy (can be 1D, 2D or 3D)

evecs : array-like

eigen vectors from the tensor model

Returns:
rgb : Array with 3 channels for each color as the last dimension.

Colormap of the FA with red for the x value, green for the y value and blue for the z value.

Notes

It is computed from the clipped FA between 0 and 1 using the following formula:

\[rgb = abs(max(\vec{e})) \times fa\]

fractional_anisotropy

dipy.workflows.reconst.fractional_anisotropy(evals, axis=-1)

Fractional anisotropy (FA) of a diffusion tensor.

Parameters:
evals : array-like

Eigenvalues of a diffusion tensor.

axis : int

Axis of evals which contains 3 eigenvalues.

Returns:
fa : array

Calculated FA. Range is 0 <= FA <= 1.

Notes

FA is calculated using the following equation:

\[FA = \sqrt{\frac{1}{2}\frac{(\lambda_1-\lambda_2)^2+(\lambda_1- \lambda_3)^2+(\lambda_2-\lambda_3)^2}{\lambda_1^2+ \lambda_2^2+\lambda_3^2}}\]
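A small worked example on a single prolate tensor (the eigenvalues are illustrative); the same functions broadcast over whole eigenvalue volumes:

import numpy as np
from dipy.reconst.dti import (axial_diffusivity, fractional_anisotropy,
                              mean_diffusivity, radial_diffusivity)

# Eigenvalues of a strongly anisotropic tensor in mm^2/s, descending order.
evals = np.array([1.7e-3, 0.3e-3, 0.3e-3])

fa = fractional_anisotropy(evals)  # high, close to 1
md = mean_diffusivity(evals)       # (l1 + l2 + l3) / 3
ad = axial_diffusivity(evals)      # l1
rd = radial_diffusivity(evals)     # (l2 + l3) / 2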

geodesic_anisotropy

dipy.workflows.reconst.geodesic_anisotropy(evals, axis=-1)

Geodesic anisotropy (GA) of a diffusion tensor.

Parameters:
evals : array-like

Eigenvalues of a diffusion tensor.

axis : int

Axis of evals which contains 3 eigenvalues.

Returns:
ga : array

Calculated GA. In the range 0 to +infinity

Notes

GA is calculated using the following equation given in [1]:

\[GA = \sqrt{\sum_{i=1}^3 \log^2{\left ( \lambda_i/<\mathbf{D}> \right )}}, \quad \textrm{where} \quad <\mathbf{D}> = (\lambda_1\lambda_2\lambda_3)^{1/3}\]

Note that the notation, \(<D>\), is often used as the mean diffusivity (MD) of the diffusion tensor and can lead to confusion in the literature (see [1] versus [2] versus [3] for example). Reference [2] defines geodesic anisotropy (GA) with \(<D>\) as the MD in the denominator of the sum. This is wrong. The original paper [1] defines GA with \(<D> = det(D)^{1/3}\), as the isotropic part of the distance. This might be an explanation for the confusion. The isotropic part of the diffusion tensor in Euclidean space is the MD, whereas the isotropic part of the tensor in log-Euclidean space is \(det(D)^{1/3}\). The Appendix of [1] and the log-Euclidean derivations from [3] are clear on this. Hence, all that to say that \(<D> = det(D)^{1/3}\) here for the GA definition, and not MD.

References

[1] P. G. Batchelor, M. Moakher, D. Atkinson, F. Calamante, A. Connelly, “A rigorous framework for diffusion tensor calculus”, Magnetic Resonance in Medicine, vol. 53, pp. 221-225, 2005.
[2] M. M. Correia, V. F. Newcombe, G. B. Williams. “Contrast-to-noise ratios for indices of anisotropy obtained from diffusion MRI: a study with standard clinical b-values at 3T”. NeuroImage, vol. 57, pp. 1103-1115, 2011.
[3] A. D. Lee, et al., P. M. Thompson. “Comparison of fractional and geodesic anisotropy in diffusion tensor images of 90 monozygotic and dizygotic twins”. 5th IEEE International Symposium on Biomedical Imaging (ISBI), pp. 943-946, May 2008.
[4] V. Arsigny, P. Fillard, X. Pennec, N. Ayache. “Log-Euclidean metrics for fast and simple calculus on diffusion tensors.” Magnetic Resonance in Medicine, vol 56, pp. 411-421, 2006.

get_mode

dipy.workflows.reconst.get_mode(q_form)

Mode (MO) of a diffusion tensor [1].

Parameters:
q_form : ndarray

The quadratic form of a tensor, or an array with quadratic forms of tensors. Should be of shape (x, y, z, 3, 3) or (n, 3, 3) or (3, 3).

Returns:
mode : array

Calculated tensor mode in each spatial coordinate.

Notes

Mode ranges between -1 (planar anisotropy) and +1 (linear anisotropy) with 0 representing orthotropy. Mode is calculated with the following equation (equation 9 in [1]):

\[Mode = 3*\sqrt{6}*det(\widetilde{A}/norm(\widetilde{A}))\]

Where \(\widetilde{A}\) is the deviatoric part of the tensor quadratic form.

References

[1] Daniel B. Ennis and G. Kindlmann, “Orthogonal Tensor Invariants and the Analysis of Diffusion Tensor Magnetic Resonance Images”, Magnetic Resonance in Medicine, vol. 55, no. 1, pp. 136-146, 2006.

get_sphere

dipy.workflows.reconst.get_sphere(name='symmetric362')

Provide triangulated spheres.

Parameters:
name : str

which sphere to return - one of: ‘symmetric362’, ‘symmetric642’, ‘symmetric724’, ‘repulsion724’, ‘repulsion100’, ‘repulsion200’

Returns:
sphere : a dipy.core.sphere.Sphere class instance

Examples

>>> import numpy as np
>>> from dipy.data import get_sphere
>>> sphere = get_sphere('symmetric362')
>>> verts, faces = sphere.vertices, sphere.faces
>>> verts.shape == (362, 3)
True
>>> faces.shape == (720, 3)
True
>>> verts, faces = get_sphere('not a sphere name') 
Traceback (most recent call last):
    ...
DataError: No sphere called "not a sphere name"

gradient_table

dipy.workflows.reconst.gradient_table(bvals, bvecs=None, big_delta=None, small_delta=None, b0_threshold=50, atol=0.01)

A general function for creating diffusion MR gradients.

It reads, loads and prepares scanner parameters like the b-values and b-vectors so that they can be useful during the reconstruction process.

Parameters:
bvals : can be any of the four options
  1. an array of shape (N,) or (1, N) or (N, 1) with the b-values.
  2. a path for the file which contains an array like the above (1).
  3. an array of shape (N, 4) or (4, N). Then this parameter is considered to be a b-table which contains both bvals and bvecs. In this case the next parameter is skipped.
  4. a path for the file which contains an array like the one at (3).
bvecs : can be any of two options
  1. an array of shape (N, 3) or (3, N) with the b-vectors.
  2. a path for the file which contains an array like the previous.
big_delta : float

acquisition pulse separation time in seconds (default None)

small_delta : float

acquisition pulse duration time in seconds (default None)

b0_threshold : float

All b-values with values less than or equal to b0_threshold are considered as b0s, i.e. without diffusion weighting.

atol : float

All b-vectors need to be unit vectors up to a tolerance.

Returns:
gradients : GradientTable

A GradientTable with all the gradient information.

Notes

  1. Often b0s (b-values which correspond to images without diffusion weighting) have 0 values however in some cases the scanner cannot provide b0s of an exact 0 value and it gives a bit higher values e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.
  2. We assume that the minimum number of b-values is 7.
  3. B-vectors should be unit vectors.

Examples

>>> import numpy as np
>>> from dipy.core.gradients import gradient_table
>>> bvals = 1500 * np.ones(7)
>>> bvals[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
...                   [1, 0, 0],
...                   [0, 1, 0],
...                   [0, 0, 1],
...                   [sq2, sq2, 0],
...                   [sq2, 0, sq2],
...                   [0, sq2, sq2]])
>>> gt = gradient_table(bvals, bvecs)
>>> gt.bvecs.shape == bvecs.shape
True
>>> gt = gradient_table(bvals, bvecs.T)
>>> gt.bvecs.shape == bvecs.T.shape
False

literal_eval

dipy.workflows.reconst.literal_eval(node_or_string)

Safely evaluate an expression node or a string containing a Python expression. The string or node provided may only consist of the following Python literal structures: strings, bytes, numbers, tuples, lists, dicts, sets, booleans, and None.

load_nifti

dipy.workflows.reconst.load_nifti(fname, return_img=False, return_voxsize=False, return_coords=False)

lower_triangular

dipy.workflows.reconst.lower_triangular(tensor, b0=None)

Returns the six lower triangular values of the tensor and a dummy variable if b0 is not None

Parameters:
tensor : array_like (…, 3, 3)

a collection of 3, 3 diffusion tensors

b0 : float

If b0 is not None, log(b0) is returned as the dummy variable.

Returns:
D : ndarray

If b0 is None, then the shape will be (…, 6), otherwise (…, 7).

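As a quick check (a sketch, not from the original docs), the identity tensor gives the six elements in the order Dxx, Dxy, Dyy, Dxz, Dyz, Dzz:

>>> import numpy as np
>>> from dipy.workflows.reconst import lower_triangular
>>> D = np.eye(3)
>>> np.allclose(lower_triangular(D), [1, 0, 1, 0, 0, 1])
True
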
mean_diffusivity

dipy.workflows.reconst.mean_diffusivity(evals, axis=-1)

Mean Diffusivity (MD) of a diffusion tensor.

Parameters:
evals : array-like

Eigenvalues of a diffusion tensor.

axis : int

Axis of evals which contains 3 eigenvalues.

Returns:
md : array

Calculated MD.

Notes

MD is calculated with the following equation:

\[MD = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3}\]

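As a quick check of the formula (a sketch, not from the original docs):

>>> import numpy as np
>>> from dipy.workflows.reconst import mean_diffusivity
>>> evals = np.array([0.0015, 0.0003, 0.0003])
>>> np.allclose(mean_diffusivity(evals), evals.mean())
True
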
peaks_from_model

dipy.workflows.reconst.peaks_from_model(model, data, sphere, relative_peak_threshold, min_separation_angle, mask=None, return_odf=False, return_sh=True, gfa_thr=0, normalize_peaks=False, sh_order=8, sh_basis_type=None, npeaks=5, B=None, invB=None, parallel=False, nbr_processes=None)

Fit the model to the data and compute peaks and metrics.

Parameters:
model : a model instance

model will be used to fit the data.

sphere : Sphere

The Sphere providing discrete directions for evaluation.

relative_peak_threshold : float

Only return peaks greater than relative_peak_threshold * m where m is the largest peak.

min_separation_angle : float in [0, 90]

The minimum distance between directions. If two peaks are too close, only the larger of the two is returned.

mask : array, optional

If mask is provided, voxels that are False in mask are skipped and no peaks are returned.

return_odf : bool

If True, the odfs are returned.

return_sh : bool

If True, the odf as spherical harmonics coefficients is returned

gfa_thr : float

Voxels with gfa less than gfa_thr are skipped, no peaks are returned.

normalize_peaks : bool

If true, all peak values are calculated relative to max(odf).

sh_order : int, optional

Maximum SH order in the SH fit. For sh_order, there will be (sh_order + 1) * (sh_order + 2) / 2 SH coefficients (default 8).

sh_basis_type : {None, ‘tournier07’, ‘descoteaux07’}

None for the default DIPY basis, tournier07 for the Tournier 2007 [2] basis, and descoteaux07 for the Descoteaux 2007 [1] basis (None defaults to descoteaux07).

sh_smooth : float, optional

Lambda-regularization in the SH fit (default 0.0).

npeaks : int

Maximum number of peaks found (default 5 peaks).

B : ndarray, optional

Matrix that transforms spherical harmonics to spherical function sf = np.dot(sh, B).

invB : ndarray, optional

Inverse of B.

parallel: bool

If True, use multiprocessing to compute peaks and metric (default False). Temporary files are saved in the default temporary directory of the system. It can be changed using import tempfile and tempfile.tempdir = '/path/to/tempdir'.

nbr_processes: int

If parallel is True, the number of subprocesses to use (default multiprocessing.cpu_count()).

Returns:
pam : PeaksAndMetrics

An object with gfa, peak_directions, peak_values, peak_indices, odf, shm_coeffs as attributes

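A minimal sketch of a typical call (assumptions: data is a 4D DWI array, gtab a GradientTable and mask a 3D boolean array loaded elsewhere; CsaOdfModel and default_sphere are one possible model/sphere choice, not mandated by this function):

>>> from dipy.workflows.reconst import peaks_from_model
>>> from dipy.reconst.shm import CsaOdfModel
>>> from dipy.data import default_sphere
>>> csa_model = CsaOdfModel(gtab, sh_order=6)
>>> pam = peaks_from_model(csa_model, data, default_sphere,
...                        relative_peak_threshold=0.5,
...                        min_separation_angle=25, mask=mask)
>>> pam.gfa.shape == mask.shape
True
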
References

[1] Descoteaux, M., Angelino, E., Fitzgibbons, S. and Deriche, R. Regularized, Fast, and Robust Analytical Q-ball Imaging. Magn. Reson. Med. 2007;58:497-510.
[2] Tournier J.D., Calamante F. and Connelly A. Robust determination of the fibre orientation distribution in diffusion MRI: Non-negativity constrained super-resolved spherical deconvolution. NeuroImage. 2007;35(4):1459-1472.

peaks_to_niftis

dipy.workflows.reconst.peaks_to_niftis(pam, fname_shm, fname_dirs, fname_values, fname_indices, fname_gfa, reshape_dirs=False)

Save SH, directions, indices and values of peaks to Nifti.

radial_diffusivity

dipy.workflows.reconst.radial_diffusivity(evals, axis=-1)

Radial Diffusivity (RD) of a diffusion tensor. Also called perpendicular diffusivity.

Parameters:
evals : array-like

Eigenvalues of a diffusion tensor, must be sorted in descending order along axis.

axis : int

Axis of evals which contains 3 eigenvalues.

Returns:
rd : array

Calculated RD.

Notes

RD is calculated with the following equation:

\[RD = \frac{\lambda_2 + \lambda_3}{2}\]

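As a quick check of the formula (a sketch, not from the original docs), with eigenvalues sorted in descending order:

>>> import numpy as np
>>> from dipy.workflows.reconst import radial_diffusivity
>>> evals = np.array([0.0015, 0.0003, 0.0003])
>>> np.allclose(radial_diffusivity(evals), evals[1:].mean())
True
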
read_bvals_bvecs

dipy.workflows.reconst.read_bvals_bvecs(fbvals, fbvecs)

Read b-values and b-vectors from disk

Parameters:
fbvals : str

Full path to file with b-values. None to not read bvals.

fbvecs : str

Full path of file with b-vectors. None to not read bvecs.

Returns:
bvals : array, (N,) or None
bvecs : array, (N, 3) or None

Notes

Files can be either ‘.bvals’/’.bvecs’ or ‘.txt’ or ‘.npy’ (containing arrays stored with the appropriate values).

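A minimal usage sketch (the file names below are placeholders, not files shipped with the package):

>>> from dipy.workflows.reconst import read_bvals_bvecs
>>> bvals, bvecs = read_bvals_bvecs('dwi.bval', 'dwi.bvec')
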
save_nifti

dipy.workflows.reconst.save_nifti(fname, data, affine, hdr=None)

save_peaks

dipy.workflows.reconst.save_peaks(fname, pam, affine=None, verbose=False)

Save all important attributes of object PeaksAndMetrics in a PAM5 file (HDF5).

Parameters:
fname : string

Filename of PAM5 file

pam : PeaksAndMetrics

Object holding peak_dirs, shm_coeffs and other attributes

affine : array

The 4x4 matrix transforming the data from native to world coordinates. PeaksAndMetrics should have that attribute, but if not it can be provided here. Default None.

verbose : bool

Print summary information about the saved file.

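A minimal sketch (assumptions: pam is a PeaksAndMetrics instance such as the one returned by peaks_from_model, and the file name is a placeholder; load_peaks from dipy.io.peaks is the companion reader):

>>> from dipy.workflows.reconst import save_peaks
>>> from dipy.io.peaks import load_peaks
>>> _ = save_peaks('peaks.pam5', pam)
>>> pam2 = load_peaks('peaks.pam5')
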
split_dki_param

dipy.workflows.reconst.split_dki_param(dki_params)

Extract the diffusion tensor eigenvalues, the diffusion tensor eigenvector matrix, and the 15 independent elements of the kurtosis tensor from the model parameters estimated from the DKI model

Parameters:
dki_params : ndarray (x, y, z, 27) or (n, 27)

All parameters estimated from the diffusion kurtosis model. Parameters are ordered as follows:

  1. Three diffusion tensor’s eigenvalues
  2. Three lines of the eigenvector matrix each containing the first, second and third coordinates of the eigenvector
  3. Fifteen elements of the kurtosis tensor
Returns:
eigvals : array (x, y, z, 3) or (n, 3)

Eigenvalues from eigen decomposition of the tensor.

eigvecs : array (x, y, z, 3, 3) or (n, 3, 3)

Associated eigenvectors from eigen decomposition of the tensor. Eigenvectors are columnar (e.g. eigvecs[:,j] is associated with eigvals[j])

kt : array (x, y, z, 15) or (n, 15)

Fifteen elements of the kurtosis tensor

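A shape-only sketch (not from the original docs) using an all-zero parameter array:

>>> import numpy as np
>>> from dipy.workflows.reconst import split_dki_param
>>> dki_params = np.zeros((5, 27))
>>> evals, evecs, kt = split_dki_param(dki_params)
>>> evals.shape, evecs.shape, kt.shape
((5, 3), (5, 3, 3), (5, 15))
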
warn

dipy.workflows.reconst.warn()

Issue a warning, or maybe ignore it or raise an exception.

LabelsBundlesFlow

class dipy.workflows.segment.LabelsBundlesFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow used to subdivide.
get_sub_runs() Return No sub runs since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(streamline_files, labels_files[, …]) Extract bundles using existing indices (labels)
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the commandline.

run(streamline_files, labels_files, out_dir='', out_bundle='recognized_orig.trk')

Extract bundles using existing indices (labels)

Parameters:
streamline_files : string

The path of streamline files where you want to recognize bundles

labels_files : string

The path of the label files (indices of the bundles to extract)

out_dir : string, optional

Output directory (default input file directory)

out_bundle : string, optional

Recognized bundle in the space of the model bundle (default ‘recognized_orig.trk’)

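A minimal sketch of driving the workflow from Python rather than the command line (the file names are placeholders for an existing tractogram and its label indices):

>>> from dipy.workflows.segment import LabelsBundlesFlow
>>> flow = LabelsBundlesFlow()
>>> flow.run('tractogram.trk', 'labels.npy', out_dir='out')
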
References

[Garyfallidis17] Garyfallidis et al. Recognition of white matter bundles using local and global streamline-based registration and clustering, Neuroimage, 2017.

MedianOtsuFlow

class dipy.workflows.segment.MedianOtsuFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow used to subdivide.
get_sub_runs() Return No sub runs since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(input_files[, save_masked, …]) Workflow wrapping the median_otsu segmentation method.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the commandline.

run(input_files, save_masked=False, median_radius=2, numpass=5, autocrop=False, vol_idx=None, dilate=None, out_dir='', out_mask='brain_mask.nii.gz', out_masked='dwi_masked.nii.gz')

Workflow wrapping the median_otsu segmentation method.

Applies median_otsu segmentation on each file found by ‘globing’ input_files and saves the results in a directory specified by out_dir.

Parameters:
input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

save_masked : bool, optional

Save mask

median_radius : int, optional

Radius (in voxels) of the applied median filter (default 2)

numpass : int, optional

Number of passes of the median filter (default 5)

autocrop : bool, optional

If True, the masked input_volumes will also be cropped using the bounding box defined by the masked data. For example, if diffusion images are of 1x1x1 (mm^3) or higher resolution auto-cropping could reduce their size in memory and speed up some of the analysis. (default False)

vol_idx : variable int, optional

1D array representing indices of axis=3 of a 4D input_volume. ‘None’ (the default) corresponds to (0,) (assumes first volume in 4D array). From the command line use 3 4 5 6; from a script use [3, 4, 5, 6].

dilate : int, optional

number of iterations for binary dilation (default ‘None’)

out_dir : string, optional

Output directory (default input file directory)

out_mask : string, optional

Name of the mask volume to be saved (default ‘brain_mask.nii.gz’)

out_masked : string, optional

Name of the masked volume to be saved (default ‘dwi_masked.nii.gz’)

RecoBundles

class dipy.workflows.segment.RecoBundles(streamlines, greater_than=50, less_than=1000000, cluster_map=None, clust_thr=15, nb_pts=20, rng=None, verbose=True)

Bases: object

Methods

evaluate_results(model_bundle, …) Compare the similarity between two given bundles: the model bundle and the extracted bundle.
recognize(model_bundle, model_clust_thr[, …]) Recognize the model_bundle in self.streamlines
refine(model_bundle, pruned_streamlines, …) Refine and recognize the model_bundle in self.streamlines. This method expects already pruned streamlines as input.
__init__(streamlines, greater_than=50, less_than=1000000, cluster_map=None, clust_thr=15, nb_pts=20, rng=None, verbose=True)

Recognition of bundles

Extract bundles from a participant’s tractogram using model bundles segmented from a different subject or an atlas of bundles. See [Garyfallidis17] for details.

Parameters:
streamlines : Streamlines

The tractogram in which you want to recognize bundles.

greater_than : int, optional

Keep streamlines that have length greater than this value (default 50)

less_than : int, optional

Keep streamlines that have length less than this value (default 1000000)

cluster_map : QB map

Provide existing clustering to start RB faster (default None).

clust_thr : float

Distance threshold in mm for clustering streamlines

rng : RandomState

If None, a RandomState is created in the initialization function.

nb_pts : int

Number of points per streamline (default 20)

Notes

Before creating this class, make sure that the streamlines and the model bundles are roughly in the same space. Also note that the default thresholds assume RAS 1mm^3 space; you may want to adjust them if your streamlines are not in world coordinates.

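A minimal sketch (assumptions: streamlines and model_bundle are Streamlines objects already loaded and roughly aligned, as the note above requires):

>>> from dipy.workflows.segment import RecoBundles
>>> rb = RecoBundles(streamlines, verbose=False)
>>> recognized, labels = rb.recognize(model_bundle, model_clust_thr=5.0)
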
References

[Garyfallidis17] Garyfallidis et al. Recognition of white matter bundles using local and global streamline-based registration and clustering, Neuroimage, 2017.
evaluate_results(model_bundle, pruned_streamlines, slr_select)

Compare the similarity between two given bundles: the model bundle and the extracted bundle.

Parameters:
model_bundle : Streamlines
pruned_streamlines : Streamlines
slr_select : tuple

Select the number of streamlines from the model and from the neighborhood of the model to perform the local SLR.

Returns:
ba_value : float

bundle analytics value between model bundle and pruned bundle

bmd_value : float

bundle minimum distance value between model bundle and pruned bundle

recognize(model_bundle, model_clust_thr, reduction_thr=10, reduction_distance='mdf', slr=True, slr_num_threads=None, slr_metric=None, slr_x0=None, slr_bounds=None, slr_select=(400, 600), slr_method='L-BFGS-B', pruning_thr=5, pruning_distance='mdf')

Recognize the model_bundle in self.streamlines

Parameters:
model_bundle : Streamlines
model_clust_thr : float
reduction_thr : float
reduction_distance : string

mdf or mam (default mdf)

slr : bool

Use Streamline-based Linear Registration (SLR) locally (default True)

slr_metric : BundleMinDistanceMetric
slr_x0 : array

(default None)

slr_bounds : array

(default None)

slr_select : tuple

Select the number of streamlines from the model and from the neighborhood of the model to perform the local SLR.

slr_method : string

Optimization method (default ‘L-BFGS-B’)

pruning_thr : float
pruning_distance : string

MDF (‘mdf’) and MAM (‘mam’)

Returns:
recognized_transf : Streamlines

Recognized bundle in the space of the model tractogram

recognized_labels : array

Indices of recognized bundle in the original tractogram

References

[Garyfallidis17] Garyfallidis et al. Recognition of white matter bundles using local and global streamline-based registration and clustering, Neuroimage, 2017.
refine(model_bundle, pruned_streamlines, model_clust_thr, reduction_thr=14, reduction_distance='mdf', slr=True, slr_metric=None, slr_x0=None, slr_bounds=None, slr_select=(400, 600), slr_method='L-BFGS-B', pruning_thr=6, pruning_distance='mdf')

Refine and recognize the model_bundle in self.streamlines. This method expects already pruned streamlines as input. It refines the first output of recognize() by applying a second local SLR (optional) and a second pruning. This method is useful when dealing with noisy data or when extracting small tracts from tractograms.

Parameters:
model_bundle : Streamlines
pruned_streamlines : Streamlines
model_clust_thr : float
reduction_thr : float
reduction_distance : string

mdf or mam (default mdf)

slr : bool

Use Streamline-based Linear Registration (SLR) locally (default True)

slr_metric : BundleMinDistanceMetric
slr_x0 : array

(default None)

slr_bounds : array

(default None)

slr_select : tuple

Select the number of streamlines from the model and from the neighborhood of the model to perform the local SLR.

slr_method : string

Optimization method (default ‘L-BFGS-B’)

pruning_thr : float
pruning_distance : string

MDF (‘mdf’) and MAM (‘mam’)

Returns:
recognized_transf : Streamlines

Recognized bundle in the space of the model tractogram

recognized_labels : array

Indices of recognized bundle in the original tractogram

References

[Garyfallidis17] Garyfallidis et al. Recognition of white matter bundles using local and global streamline-based registration and clustering, Neuroimage, 2017.

RecoBundlesFlow

class dipy.workflows.segment.RecoBundlesFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow used to subdivide.
get_sub_runs() Return No sub runs since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(streamline_files, model_bundle_files[, …]) Recognize bundles
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the commandline.

run(streamline_files, model_bundle_files, greater_than=50, less_than=1000000, no_slr=False, clust_thr=15.0, reduction_thr=15.0, reduction_distance='mdf', model_clust_thr=2.5, pruning_thr=8.0, pruning_distance='mdf', slr_metric='symmetric', slr_transform='similarity', slr_matrix='small', refine=False, r_reduction_thr=12.0, r_pruning_thr=6.0, no_r_slr=False, out_dir='', out_recognized_transf='recognized.trk', out_recognized_labels='labels.npy')

Recognize bundles

Parameters:
streamline_files : string

The path of streamline files where you want to recognize bundles

model_bundle_files : string

The path of model bundle files

greater_than : int, optional

Keep streamlines that have length greater than this value in mm (default 50).

less_than : int, optional

Keep streamlines that have length less than this value in mm (default 1000000).

no_slr : bool, optional

Don’t enable local Streamline-based Linear Registration (default False).

clust_thr : float, optional

MDF distance threshold for all streamlines (default 15)

reduction_thr : float, optional

Reduce the search space by this amount in mm (default 15)

reduction_distance : string, optional

Reduction distance type can be mdf or mam (default mdf)

model_clust_thr : float, optional

MDF distance threshold for the model bundles (default 2.5)

pruning_thr : float, optional

Pruning after matching (default 8).

pruning_distance : string, optional

Pruning distance type can be mdf or mam (default mdf)

slr_metric : string, optional

Options are None, symmetric, asymmetric or diagonal (default symmetric).

slr_transform : string, optional

Transformation allowed. translation, rigid, similarity or scaling (Default ‘similarity’).

slr_matrix : string, optional

Options are ‘nano’, ‘tiny’, ‘small’, ‘medium’, ‘large’, ‘huge’ (default ‘small’)

refine : bool, optional

Enable refinement of the recognized bundle (default False)

r_reduction_thr : float, optional

During refinement, reduce the search space by this amount in mm (default 12)

r_pruning_thr : float, optional

Pruning after matching during refinement (default 6).

no_r_slr : bool, optional

Don’t enable local Streamline-based Linear Registration during refinement (default False).

out_dir : string, optional

Output directory (default input file directory)

out_recognized_transf : string, optional

Recognized bundle in the space of the model bundle (default ‘recognized.trk’)

out_recognized_labels : string, optional

Indices of recognized bundle in the original tractogram (default ‘labels.npy’)

References

[Garyfallidis17] Garyfallidis et al. Recognition of white matter bundles using local and global streamline-based registration and clustering, Neuroimage, 2017.

Workflow

class dipy.workflows.segment.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: object

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow used to subdivide.
get_sub_runs() Return No sub runs since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(*args, **kwargs) Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

get_io_iterator()

Create an iterator for IO.

Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other contextuals) and the run method’s docstring.

classmethod get_short_name()

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the commandline.

get_sub_runs()

Return No sub runs since this is a simple workflow.

manage_output_overwrite()

Check if a file will be overwritten upon processing the inputs.

If it is bound to happen, an action is taken depending on self._force_overwrite (or --force via command line). A log message is output independently of the outcome to tell the user something happened.

run(*args, **kwargs)

Execute the workflow.

Since this is an abstract class, raise an exception if this code is reached (not implemented in the child class or literally called on this class).

load_nifti

dipy.workflows.segment.load_nifti(fname, return_img=False, return_voxsize=False, return_coords=False)

load_trk

dipy.workflows.segment.load_trk(filename, lazy_load=False)

Load tractogram files (*.trk)

Parameters:
filename : str

input trk filename

lazy_load : {False, True}, optional

If True, load streamlines in a lazy manner i.e. they will not be kept in memory and only be loaded when needed. Otherwise, load all streamlines in memory.

Returns:
streamlines : list of 2D arrays

Each 2D array represents a sequence of 3D points (points, 3).

hdr : dict

header from a trk file

median_otsu

dipy.workflows.segment.median_otsu(input_volume, median_radius=4, numpass=4, autocrop=False, vol_idx=None, dilate=None)

Simple brain extraction tool method for images from DWI data.

It uses a median filter smoothing of the input_volume’s vol_idx volumes and an automatic histogram Otsu thresholding technique, hence the name median_otsu.

This function is inspired by MRtrix’s bet, which has default values median_radius=3 and numpass=2. However, from tests on multiple 1.5T and 3T datasets from GE, Philips and Siemens, the most robust choice is median_radius=4, numpass=4.

Parameters:
input_volume : ndarray

ndarray of the brain volume

median_radius : int

Radius (in voxels) of the applied median filter (default: 4).

numpass: int

Number of passes of the median filter (default: 4).

autocrop: bool, optional

if True, the masked input_volume will also be cropped using the bounding box defined by the masked data. Should be on if DWI is upsampled to 1x1x1 resolution. (default: False).

vol_idx : None or array, optional

1D array representing indices of axis=3 of a 4D input_volume. None (the default) corresponds to (0,) (assumes first volume in 4D array).

dilate : None or int, optional

number of iterations for binary dilation

Returns:
maskedvolume : ndarray

Masked input_volume

mask : 3D ndarray

The binary brain mask

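A synthetic-volume sketch (not from the original docs; real use would pass a DWI volume):

>>> import numpy as np
>>> from dipy.workflows.segment import median_otsu
>>> vol = np.random.random((30, 30, 30))
>>> masked, mask = median_otsu(vol, median_radius=2, numpass=1)
>>> mask.shape == vol.shape
True
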
Notes

Copyright (C) 2011, the scikit-image team All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
  3. Neither the name of skimage nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS’’ AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

save_nifti

dipy.workflows.segment.save_nifti(fname, data, affine, hdr=None)

time

dipy.workflows.segment.time() → floating point number

Return the current time in seconds since the Epoch. Fractions of a second may be present if the system clock provides them.

BundleAnalysisPopulationFlow

class dipy.workflows.stats.BundleAnalysisPopulationFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow used to subdivide.
get_sub_runs() Return No sub runs since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(model_bundle_folder, subject_folder[, …]) Workflow of bundle analytics.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the commandline.

run(model_bundle_folder, subject_folder, no_disks=100, out_dir='')

Workflow of bundle analytics.

Applies statistical analysis on bundles of subjects and saves the results in a directory specified by out_dir.

Parameters:
model_bundle_folder : string

Path to the input model bundle files. This path may contain wildcards to process multiple inputs at once.

subject_folder : string

Path to the input subject folder. This path may contain wildcards to process multiple inputs at once.

no_disks : integer, optional

Number of disks used for dividing bundle into disks. (Default 100)

out_dir : string, optional

Output directory (default input file directory)

References

[Chandio19] Chandio, B.Q., S. Koudoro, D. Reagan, J. Harezlak, E. Garyfallidis, Bundle Analytics: a computational and statistical analyses framework for tractometric studies, Proceedings of: International Society of Magnetic Resonance in Medicine (ISMRM), Montreal, Canada, 2019.

LinearMixedModelsFlow

class dipy.workflows.stats.LinearMixedModelsFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow used to subdivide.
get_sub_runs() Return No sub runs since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(h5_files[, no_disks, out_dir]) Workflow of linear Mixed Models.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the commandline.

run(h5_files, no_disks=100, out_dir='')

Workflow of linear Mixed Models.

Applies linear Mixed Models on bundles of subjects and saves the results in a directory specified by out_dir.

Parameters:
h5_files : string

Path to the input metric files. This path may contain wildcards to process multiple inputs at once.

no_disks : integer, optional

Number of disks used for dividing bundle into disks. (Default 100)

out_dir : string, optional

Output directory (default input file directory)

SNRinCCFlow

class dipy.workflows.stats.SNRinCCFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow used to subdivide.
get_sub_runs() Return No sub runs since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(data_files, bvals_files, bvecs_files, …) Compute the signal-to-noise ratio in the corpus callosum.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the commandline.

run(data_files, bvals_files, bvecs_files, mask_files, bbox_threshold=[0.6, 1, 0, 0.1, 0, 0.1], out_dir='', out_file='product.json', out_mask_cc='cc.nii.gz', out_mask_noise='mask_noise.nii.gz')

Compute the signal-to-noise ratio in the corpus callosum.

Parameters:
data_files : string

Path to the dwi.nii.gz file. This path may contain wildcards to process multiple inputs at once.

bvals_files : string

Path of bvals.

bvecs_files : string

Path of bvecs.

mask_files : string

Path of brain mask

bbox_threshold : variable float, optional

Threshold for the bounding box, with values separated by commas, e.g. [0.6,1,0,0.1,0,0.1]. (default (0.6, 1, 0, 0.1, 0, 0.1))

out_dir : string, optional

Where the resulting file will be saved. (default ‘’)

out_file : string, optional

Name of the result file to be saved. (default ‘product.json’)

out_mask_cc : string, optional

Name of the CC mask volume to be saved (default ‘cc.nii.gz’)

out_mask_noise : string, optional

Name of the mask noise volume to be saved (default ‘mask_noise.nii.gz’)

TensorModel

class dipy.workflows.stats.TensorModel(gtab, fit_method='WLS', return_S0_hat=False, *args, **kwargs)

Bases: dipy.reconst.base.ReconstModel

Diffusion Tensor

Methods

fit(data[, mask]) Fit method of the DTI model class
predict(dti_params[, S0]) Predict a signal for this TensorModel class instance given parameters.
__init__(gtab, fit_method='WLS', return_S0_hat=False, *args, **kwargs)

A Diffusion Tensor Model [1], [2].

Parameters:
gtab : GradientTable class instance
fit_method : str or callable

str can be one of the following:

  • ‘WLS’ for weighted least squares: dti.wls_fit_tensor()
  • ‘LS’ or ‘OLS’ for ordinary least squares: dti.ols_fit_tensor()
  • ‘NLLS’ for non-linear least-squares: dti.nlls_fit_tensor()
  • ‘RT’ or ‘restore’ or ‘RESTORE’ for RESTORE robust tensor fitting [3]: dti.restore_fit_tensor()

A callable has to have the signature:

fit_method(design_matrix, data, *args, **kwargs)

return_S0_hat : bool

Boolean to return (True) or not (False) the S0 values for the fit.

args, kwargs : arguments and keyword arguments passed to the fit_method

See dti.wls_fit_tensor and dti.ols_fit_tensor for details.

min_signal : float

The minimum signal value. Needs to be a strictly positive number. Default: minimal signal in the data provided to fit.

Notes

In order to increase speed of processing, tensor fitting is done simultaneously over many voxels. Many fit_methods use the ‘step’ parameter to set the number of voxels that will be fit at once in each iteration. This is the chunk size as a number of voxels. A larger step value should speed things up, but it will also take up more memory. It is advisable to keep an eye on memory consumption as this value is increased.

Example: in iter_fit_tensor() we have a default step value of 1e4.

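A minimal fitting sketch (assumptions: gtab, data and mask are a GradientTable, a 4D DWI array and a boolean mask loaded elsewhere):

>>> from dipy.workflows.stats import TensorModel
>>> tenmodel = TensorModel(gtab, fit_method='WLS')
>>> tenfit = tenmodel.fit(data, mask=mask)
>>> fa = tenfit.fa  # fractional anisotropy map of the fit
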
References

[1] Basser, P.J., Mattiello, J., LeBihan, D., 1994. Estimation of the effective self-diffusion tensor from the NMR spin echo. J Magn Reson B 103, 247-254.
[2] Basser, P., Pierpaoli, C., 1996. Microstructural and physiological features of tissues elucidated by quantitative diffusion-tensor MRI. Journal of Magnetic Resonance 111, 209-219.
[3] Lin-Ching C., Jones D.K., Pierpaoli, C. 2005. RESTORE: Robust estimation of tensors by outlier rejection. MRM 53: 1088-1095.
fit(data, mask=None)

Fit method of the DTI model class

Parameters:
data : array

The measured signal from one voxel.

mask : array

A boolean array with shape data.shape[:-1], used to mark the coordinates in the data that should be analyzed.

predict(dti_params, S0=1.0)

Predict a signal for this TensorModel class instance given parameters.

Parameters:
dti_params : ndarray

The last dimension should have 12 tensor parameters: 3 eigenvalues, followed by the 3 eigenvectors

S0 : float or ndarray

The non diffusion-weighted signal in every voxel, or across all voxels. Default: 1

Workflow

class dipy.workflows.stats.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: object

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow used to subdivide.
get_sub_runs() Return No sub runs since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(*args, **kwargs) Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

get_io_iterator()

Create an iterator for IO.

Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other contextuals) and the run method’s docstring.

classmethod get_short_name()

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the commandline.

get_sub_runs()

Return No sub runs since this is a simple workflow.

manage_output_overwrite()

Check if a file will be overwritten upon processing the inputs.

If it is bound to happen, an action is taken depending on self._force_overwrite (or --force via command line). A log message is output independently of the outcome to tell the user something happened.

run(*args, **kwargs)

Execute the workflow.

Since this is an abstract class, raise an exception if this code is reached (not implemented in the child class or literally called on this class).

binary_dilation

dipy.workflows.stats.binary_dilation(input, structure=None, iterations=1, mask=None, output=None, border_value=0, origin=0, brute_force=False)

Multi-dimensional binary dilation with the given structuring element.

Parameters:
input : array_like

Binary array_like to be dilated. Non-zero (True) elements form the subset to be dilated.

structure : array_like, optional

Structuring element used for the dilation. Non-zero elements are considered True. If no structuring element is provided an element is generated with a square connectivity equal to one.

iterations : {int, float}, optional

The dilation is repeated iterations times (one, by default). If iterations is less than 1, the dilation is repeated until the result does not change anymore.

mask : array_like, optional

If a mask is given, only those elements with a True value at the corresponding mask element are modified at each iteration.

output : ndarray, optional

Array of the same shape as input, into which the output is placed. By default, a new array is created.

border_value : int (cast to 0 or 1), optional

Value at the border in the output array.

origin : int or tuple of ints, optional

Placement of the filter, by default 0.

brute_force : boolean, optional

Memory condition: if False, only the pixels whose value was changed in the last iteration are tracked as candidates to be updated (dilated) in the current iteration; if True all pixels are considered as candidates for dilation, regardless of what happened in the previous iteration. False by default.

Returns:
binary_dilation : ndarray of bools

Dilation of the input by the structuring element.

See also

grey_dilation, binary_erosion, binary_closing, binary_opening, generate_binary_structure

Notes

Dilation [1] is a mathematical morphology operation [2] that uses a structuring element for expanding the shapes in an image. The binary dilation of an image by a structuring element is the locus of the points covered by the structuring element, when its center lies within the non-zero points of the image.

References

[1] http://en.wikipedia.org/wiki/Dilation_%28morphology%29
[2] http://en.wikipedia.org/wiki/Mathematical_morphology

Examples

>>> import numpy as np
>>> from scipy import ndimage
>>> a = np.zeros((5, 5))
>>> a[2, 2] = 1
>>> a
array([[ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.]])
>>> ndimage.binary_dilation(a)
array([[False, False, False, False, False],
       [False, False,  True, False, False],
       [False,  True,  True,  True, False],
       [False, False,  True, False, False],
       [False, False, False, False, False]], dtype=bool)
>>> ndimage.binary_dilation(a).astype(a.dtype)
array([[ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  1.,  1.,  1.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.]])
>>> # 3x3 structuring element with connectivity 1, used by default
>>> struct1 = ndimage.generate_binary_structure(2, 1)
>>> struct1
array([[False,  True, False],
       [ True,  True,  True],
       [False,  True, False]], dtype=bool)
>>> # 3x3 structuring element with connectivity 2
>>> struct2 = ndimage.generate_binary_structure(2, 2)
>>> struct2
array([[ True,  True,  True],
       [ True,  True,  True],
       [ True,  True,  True]], dtype=bool)
>>> ndimage.binary_dilation(a, structure=struct1).astype(a.dtype)
array([[ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  1.,  1.,  1.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.]])
>>> ndimage.binary_dilation(a, structure=struct2).astype(a.dtype)
array([[ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  1.,  1.,  1.,  0.],
       [ 0.,  1.,  1.,  1.,  0.],
       [ 0.,  1.,  1.,  1.,  0.],
       [ 0.,  0.,  0.,  0.,  0.]])
>>> ndimage.binary_dilation(a, structure=struct1,\
... iterations=2).astype(a.dtype)
array([[ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  1.,  1.,  1.,  0.],
       [ 1.,  1.,  1.,  1.,  1.],
       [ 0.,  1.,  1.,  1.,  0.],
       [ 0.,  0.,  1.,  0.,  0.]])

bounding_box

dipy.workflows.stats.bounding_box(vol)

Compute the bounding box of nonzero intensity voxels in the volume.

Parameters:
vol : ndarray

Volume to compute bounding box on.

Returns:
npmins : list

List containing the minimum index of each dimension

npmaxs : list

List containing the maximum index of each dimension

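A small sketch (not from the original docs):

>>> import numpy as np
>>> from dipy.workflows.stats import bounding_box
>>> vol = np.zeros((10, 10, 10))
>>> vol[3:7, 4:6, 2:8] = 1
>>> mins, maxs = bounding_box(vol)  # per-axis bounds of the nonzero region
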
bundle_analysis

dipy.workflows.stats.bundle_analysis(model_bundle_folder, bundle_folder, orig_bundle_folder, metric_folder, group, subject, no_disks=100, out_dir='')

Applies statistical analysis on bundles and saves the results in a directory specified by out_dir.

Parameters:
model_bundle_folder : string

Path to the input model bundle files. This path may contain wildcards to process multiple inputs at once.

bundle_folder : string

Path to the input bundle files in common space. This path may contain wildcards to process multiple inputs at once.

orig_bundle_folder : string

Path to the input bundle files in native space. This path may contain wildcards to process multiple inputs at once.

metric_folder : string

Path to the input dti metric or/and peak files. It will be used as metric for statistical analysis of bundles.

group : string

The group the subject belongs to, e.g. control or patient

subject : string

Subject id, e.g. 10001

no_disks : integer, optional

Number of disks used for dividing bundle into disks. (Default 100)

out_dir : string, optional

Output directory (default input file directory)

References

[Chandio19] Chandio, B.Q., S. Koudoro, D. Reagan, J. Harezlak, E. Garyfallidis, Bundle Analytics: a computational and statistical analyses framework for tractometric studies, Proceedings of: International Society of Magnetic Resonance in Medicine (ISMRM), Montreal, Canada, 2019.

gradient_table

dipy.workflows.stats.gradient_table(bvals, bvecs=None, big_delta=None, small_delta=None, b0_threshold=50, atol=0.01)

A general function for creating diffusion MR gradients.

It reads, loads and prepares scanner parameters like the b-values and b-vectors so that they can be useful during the reconstruction process.

Parameters:
bvals : can be any of the four options
  1. an array of shape (N,) or (1, N) or (N, 1) with the b-values.
  2. a path for the file which contains an array like the above (1).
  3. an array of shape (N, 4) or (4, N). Then this parameter is considered to be a b-table which contains both bvals and bvecs. In this case the next parameter is skipped.
  4. a path for the file which contains an array like the one at (3).
bvecs : can be any of two options
  1. an array of shape (N, 3) or (3, N) with the b-vectors.
  2. a path for the file which contains an array like the previous.
big_delta : float

acquisition pulse separation time in seconds (default None)

small_delta : float

acquisition pulse duration time in seconds (default None)

b0_threshold : float

All b-values with values less than or equal to b0_threshold are considered as b0s i.e. without diffusion weighting.

atol : float

All b-vectors need to be unit vectors up to a tolerance.

Returns:
gradients : GradientTable

A GradientTable with all the gradient information.

Notes

  1. Often b0s (b-values which correspond to images without diffusion weighting) have 0 values; however, in some cases the scanner cannot provide b0s of an exact 0 value and gives slightly higher values, e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.
  2. We assume that the minimum number of b-values is 7.
  3. B-vectors should be unit vectors.

Examples

>>> import numpy as np
>>> from dipy.core.gradients import gradient_table
>>> bvals = 1500 * np.ones(7)
>>> bvals[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
...                   [1, 0, 0],
...                   [0, 1, 0],
...                   [0, 0, 1],
...                   [sq2, sq2, 0],
...                   [sq2, 0, sq2],
...                   [0, sq2, sq2]])
>>> gt = gradient_table(bvals, bvecs)
>>> gt.bvecs.shape == bvecs.shape
True
>>> gt = gradient_table(bvals, bvecs.T)
>>> gt.bvecs.shape == bvecs.T.shape
False

load_nifti

dipy.workflows.stats.load_nifti(fname, return_img=False, return_voxsize=False, return_coords=False)

median_otsu

dipy.workflows.stats.median_otsu(input_volume, median_radius=4, numpass=4, autocrop=False, vol_idx=None, dilate=None)

Simple brain extraction tool method for images from DWI data.

It uses a median filter smoothing of the input_volume’s vol_idx volumes and an automatic histogram Otsu thresholding technique, hence the name median_otsu.

This function is inspired by MRtrix’s bet, which has default values median_radius=3 and numpass=2. However, from tests on multiple 1.5T and 3T datasets from GE, Philips and Siemens, the most robust choice is median_radius=4, numpass=4.

Parameters:
input_volume : ndarray

ndarray of the brain volume

median_radius : int

Radius (in voxels) of the applied median filter (default: 4).

numpass: int

Number of passes of the median filter (default: 4).

autocrop: bool, optional

if True, the masked input_volume will also be cropped using the bounding box defined by the masked data. Should be on if DWI is upsampled to 1x1x1 resolution. (default: False).

vol_idx : None or array, optional

1D array representing indices of axis=3 of a 4D input_volume. None (the default) corresponds to (0,) (assumes first volume in 4D array).

dilate : None or int, optional

number of iterations for binary dilation

Returns:
maskedvolume : ndarray

Masked input_volume

mask : 3D ndarray

The binary brain mask

Notes

Copyright (C) 2011, the scikit-image team All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
  3. Neither the name of skimage nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS’’ AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

optional_package

dipy.workflows.stats.optional_package(name, trip_msg=None)

Return package-like thing and module setup for package name

Parameters:
name : str

package name

trip_msg : None or str

Message to give when someone tries to use the returned package, but we could not import it and have returned a TripWire object instead. Default message if None.

Returns:
pkg_like : module or TripWire instance

If we can import the package, return it. Otherwise return an object raising an error when accessed

have_pkg : bool

True if import for package was successful, False otherwise

module_setup : function

callable usually set as setup_module in calling namespace, to allow skipping tests.

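A quick sketch using a standard-library module that is always importable:

>>> from dipy.workflows.stats import optional_package
>>> mod, have_mod, setup_module = optional_package('os')
>>> have_mod
True
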
read_bvals_bvecs

dipy.workflows.stats.read_bvals_bvecs(fbvals, fbvecs)

Read b-values and b-vectors from disk

Parameters:
fbvals : str

Full path to file with b-values. None to not read bvals.

fbvecs : str

Full path of file with b-vectors. None to not read bvecs.

Returns:
bvals : array, (N,) or None
bvecs : array, (N, 3) or None

Notes

Files can be either ‘.bvals’/’.bvecs’ or ‘.txt’ or ‘.npy’ (containing arrays stored with the appropriate values).

save_nifti

dipy.workflows.stats.save_nifti(fname, data, affine, hdr=None)

segment_from_cfa

dipy.workflows.stats.segment_from_cfa(tensor_fit, roi, threshold, return_cfa=False)

Segment the cfa inside roi using the values from threshold as bounds.

Parameters:
tensor_fit : TensorFit object

TensorFit object

roi : ndarray

A binary mask, which contains the bounding box for the segmentation.

threshold : array-like

An iterable that defines the min and max values to use for the thresholding. The values are specified as (R_min, R_max, G_min, G_max, B_min, B_max)

return_cfa : bool, optional

If True, the cfa is also returned.

Returns:
mask : ndarray

Binary mask of the segmentation.

cfa : ndarray, optional

Array with shape = (…, 3), where … is the shape of tensor_fit. The color fractional anisotropy, ordered as an ndarray with the last dimension of size 3 for the R, G and B channels.

simple_plot

dipy.workflows.stats.simple_plot(file_name, title, x, y, xlabel, ylabel)

Saves a simple plot with the given x and y values

Parameters:
file_name : string

file name for saving the plot

title : string

title of the plot

x : integer list

x-axis values to be plotted

y : integer list

y-axis values to be plotted

xlabel : string

label for x-axis

ylabel : string

label for y-axis

ClosestPeakDirectionGetter

class dipy.workflows.tracking.ClosestPeakDirectionGetter

Bases: dipy.direction.closest_peak_direction_getter.PmfGenDirectionGetter

A direction getter that returns the closest odf peak to previous tracking direction.

Methods

from_pmf Constructor for making a DirectionGetter from an array of Pmfs
from_shcoeff Probabilistic direction getter from a distribution of directions on the sphere
initial_direction Returns best directions at seed location to start tracking.
get_direction  
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

CmcTissueClassifier

class dipy.workflows.tracking.CmcTissueClassifier

Bases: dipy.tracking.local.tissue_classifier.ConstrainedTissueClassifier

Continuous map criterion (CMC) stopping criteria from [1]. This implements the use of partial volume fraction (PVE) maps to determine when the tracking stops.

cdef:
    double interp_out_double[1]
    double[:] interp_out_view = interp_out_view
    double[:, :, :] include_map, exclude_map
    double step_size
    double average_voxel_size
    double correction_factor

References

[1] Girard, G., Whittingstall, K., Deriche, R., & Descoteaux, M. “Towards quantitative connectivity analysis: reducing tractography biases.” NeuroImage, 98, 266-278, 2014.

Methods

from_pve ConstrainedTissueClassifier from partial volume fraction (PVE) maps.
check_point  
get_exclude  
get_include  
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

DeterministicMaximumDirectionGetter

class dipy.workflows.tracking.DeterministicMaximumDirectionGetter

Bases: dipy.direction.probabilistic_direction_getter.ProbabilisticDirectionGetter

Return direction of a sphere with the highest probability mass function (pmf).

Methods

from_pmf Constructor for making a DirectionGetter from an array of Pmfs
from_shcoeff Probabilistic direction getter from a distribution of directions on the sphere
initial_direction Returns best directions at seed location to start tracking.
get_direction  
__init__($self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

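A minimal construction sketch (assumption: shm_coeff is an array of spherical harmonic coefficients, e.g. the shm_coeff attribute of a PeaksAndMetrics object; default_sphere is one possible sphere choice):

>>> from dipy.workflows.tracking import DeterministicMaximumDirectionGetter
>>> from dipy.data import default_sphere
>>> dg = DeterministicMaximumDirectionGetter.from_shcoeff(
...     shm_coeff, max_angle=30.0, sphere=default_sphere)
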
LocalFiberTrackingPAMFlow

class dipy.workflows.tracking.LocalFiberTrackingPAMFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow used to subdivide.
get_sub_runs() Return No sub runs since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(pam_files, stopping_files, seeding_files) Workflow for Local Fiber Tracking.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the commandline parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the commandline.

run(pam_files, stopping_files, seeding_files, stopping_thr=0.2, seed_density=1, tracking_method='deterministic', pmf_threshold=0.1, max_angle=30.0, out_dir='', out_tractogram='tractogram.trk')

Workflow for Local Fiber Tracking.

This workflow uses a saved peaks and metrics (PAM) file as input.

Parameters:
pam_files : string

Path to the peaks and metrics files. This path may contain wildcards to use multiple masks at once.

stopping_files : string

Path of FA or other images used for stopping criteria for tracking.

seeding_files : string

A binary image showing where we need to seed for tracking.

stopping_thr : float, optional

Threshold applied to the stopping volume’s data to identify where tracking has to stop (default 0.2).

seed_density : int, optional

Number of seeds per dimension inside the voxel (default 1). For example, a seed_density of 2 means 8 regularly distributed points in the voxel, and a seed_density of 1 means one point at the center of the voxel.

tracking_method : string, optional

Select direction getter strategy:
  • “eudx” (uses the peaks saved in the pam_files)
  • “deterministic” or “det” for deterministic tracking (uses the sh saved in the pam_files, default)
  • “probabilistic” or “prob” for probabilistic tracking (uses the sh saved in the pam_files)
  • “closestpeaks” or “cp” for ClosestPeaks tracking (uses the sh saved in the pam_files)
pmf_threshold : float, optional

Threshold for ODF functions. (default 0.1)

max_angle : float, optional

Maximum angle between tract segments. This angle can be more generous (larger) than values typically used with probabilistic direction getters. The angle range is (0, 90).

out_dir : string, optional

Output directory (default input file directory)

out_tractogram : string, optional

Name of the tractogram file to be saved (default ‘tractogram.trk’)

References

Garyfallidis, University of Cambridge, PhD thesis, 2012.
Amirbekian, University of California San Francisco, PhD thesis, 2017.

LocalTracking

class dipy.workflows.tracking.LocalTracking(direction_getter, tissue_classifier, seeds, affine, step_size, max_cross=None, maxlen=500, fixedstep=True, return_all=True, random_seed=None)

Bases: object

__init__(direction_getter, tissue_classifier, seeds, affine, step_size, max_cross=None, maxlen=500, fixedstep=True, return_all=True, random_seed=None)

Creates streamlines by using local fiber-tracking.

Parameters:
direction_getter : instance of DirectionGetter

Used to get directions for fiber tracking.

tissue_classifier : instance of TissueClassifier

Identifies endpoints and invalid points to inform tracking.

seeds : array (N, 3)

Points to seed the tracking. Seed points should be given in point space of the track (see affine).

affine : array (4, 4)

Coordinate space for the streamline point with respect to voxel indices of input data. This affine can contain scaling, rotational, and translational components but should not contain any shearing. An identity matrix can be used to generate streamlines in “voxel coordinates” as long as isotropic voxels were used to acquire the data.

step_size : float

Step size used for tracking.

max_cross : int or None

The maximum number of directions to track from each seed in crossing voxels. By default, all initial directions are tracked.

maxlen : int

Maximum number of steps to track from a seed. Used to prevent infinite loops.

fixedstep : bool

If True, a fixed step size is used; otherwise a variable step size is used.

return_all : bool

If True, return all generated streamlines; otherwise return only streamlines that reach the endpoints or exit the image.

random_seed : int

The seed for the random number generator (passed to numpy.random.seed and random.seed).
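
Examples

A minimal sketch of assembling LocalTracking by hand; fa, sh and affine are placeholder arrays assumed to be loaded elsewhere (an FA volume, spherical-harmonic coefficients and the image affine):

from dipy.data import default_sphere
from dipy.direction import ProbabilisticDirectionGetter
from dipy.tracking.local import LocalTracking, ThresholdTissueClassifier
from dipy.tracking.streamline import Streamlines
from dipy.tracking.utils import seeds_from_mask

# Directions are sampled from the SH coefficients (placeholder array).
dg = ProbabilisticDirectionGetter.from_shcoeff(sh, max_angle=30.,
                                               sphere=default_sphere)
# Stop tracking wherever FA drops below 0.2.
classifier = ThresholdTissueClassifier(fa, 0.2)
# One seed at the center of every voxel with FA > 0.3.
seeds = seeds_from_mask(fa > 0.3, density=1, affine=affine)

streamline_generator = LocalTracking(dg, classifier, seeds, affine,
                                     step_size=0.5)
streamlines = Streamlines(streamline_generator)  # consume the generator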

PFTrackingPAMFlow

class dipy.workflows.tracking.PFTrackingPAMFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide command-line parameters.
get_sub_runs() Return no sub-runs since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(pam_files, wm_files, gm_files, …[, …]) Workflow for Particle Filtering Tracking.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()

Return a short name for the workflow, used to subdivide command-line parameters.

The short name is used by CombinedWorkflows and the argument parser to subdivide the command-line parameters, avoiding clashes between sub-workflow parameters that share the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, dti.b0_threshold and csd.b0_threshold become available instead.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(pam_files, wm_files, gm_files, csf_files, seeding_files, step_size=0.2, back_tracking_dist=2, front_tracking_dist=1, max_trial=20, particle_count=15, seed_density=1, pmf_threshold=0.1, max_angle=30.0, out_dir='', out_tractogram='tractogram.trk')

Workflow for Particle Filtering Tracking.

This workflow uses a saved peaks and metrics (PAM) file as input.

Parameters:
pam_files : string

Path to the peaks and metrics files. This path may contain wildcards to use multiple masks at once.

wm_files : string

Path of the white matter image used in the stopping criteria for tracking.

gm_files : string

Path of the grey matter image used in the stopping criteria for tracking.

csf_files : string

Path of the cerebrospinal fluid image used in the stopping criteria for tracking.

seeding_files : string

A binary image showing where to seed for tracking.

step_size : float, optional

Step size used for tracking.

back_tracking_dist : float, optional

Distance in mm to back-track before starting the particle filtering tractography. The total particle filtering tractography distance is back_tracking_dist + front_tracking_dist. By default this is set to 2 mm.

front_tracking_dist : float, optional

Distance in mm to run the particle filtering tractography after the back-tracking distance. The total particle filtering tractography distance is back_tracking_dist + front_tracking_dist. By default this is set to 1 mm.

max_trial : int, optional

Maximum number of trials for the particle filtering tractography (prevents infinite loops; default 20).

particle_count : int, optional

Number of particles to use in the particle filter. (default 15)

seed_density : int, optional

Number of seeds per dimension inside a voxel (default 1). For example, a seed_density of 2 means 8 regularly distributed points in the voxel, and a seed_density of 1 means one point at the center of the voxel.

pmf_threshold : float, optional

Threshold for ODF functions (default 0.1).

max_angle : float, optional

Maximum angle between tract segments. This angle can be more generous (larger) than values typically used with probabilistic direction getters. The angle range is (0, 90).

out_dir : string, optional

Output directory (default input file directory)

out_tractogram : string, optional

Name of the tractogram file to be saved (default ‘tractogram.trk’)

References

Girard, G., Whittingstall, K., Deriche, R., & Descoteaux, M. Towards quantitative connectivity analysis: reducing tractography biases. NeuroImage, 98, 266-278, 2014.
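
Examples

A minimal usage sketch (hypothetical file names), with white matter, grey matter and CSF maps driving the stopping criteria:

from dipy.workflows.tracking import PFTrackingPAMFlow

flow = PFTrackingPAMFlow()
flow.run('peaks.pam5', 'wm.nii.gz', 'gm.nii.gz', 'csf.nii.gz',
         'seed_mask.nii.gz',          # placeholder paths throughout
         step_size=0.2, particle_count=15,
         out_tractogram='pft_tractogram.trk')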

ParticleFilteringTracking

class dipy.workflows.tracking.ParticleFilteringTracking(direction_getter, tissue_classifier, seeds, affine, step_size, max_cross=None, maxlen=500, pft_back_tracking_dist=2, pft_front_tracking_dist=1, pft_max_trial=20, particle_count=15, return_all=True, random_seed=None)

Bases: dipy.tracking.local.localtracking.LocalTracking

__init__(direction_getter, tissue_classifier, seeds, affine, step_size, max_cross=None, maxlen=500, pft_back_tracking_dist=2, pft_front_tracking_dist=1, pft_max_trial=20, particle_count=15, return_all=True, random_seed=None)

A streamline generator using the particle filtering tractography method [1].

Parameters:
direction_getter : instance of ProbabilisticDirectionGetter

Used to get directions for fiber tracking.

tissue_classifier : instance of ConstrainedTissueClassifier

Identifies endpoints and invalid points to inform tracking.

seeds : array (N, 3)

Points to seed the tracking. Seed points should be given in point space of the track (see affine).

affine : array (4, 4)

Coordinate space for the streamline point with respect to voxel indices of input data. This affine can contain scaling, rotational, and translational components but should not contain any shearing. An identity matrix can be used to generate streamlines in “voxel coordinates” as long as isotropic voxels were used to acquire the data.

step_size : float

Step size used for tracking.

max_cross : int or None

The maximum number of directions to track from each seed in crossing voxels. By default, all initial directions are tracked.

maxlen : int

Maximum number of steps to track from a seed. Used to prevent infinite loops.

pft_back_tracking_dist : float

Distance in mm to back-track before starting the particle filtering tractography. The total particle filtering tractography distance is back_tracking_dist + front_tracking_dist. By default this is set to 2 mm.

pft_front_tracking_dist : float

Distance in mm to run the particle filtering tractography after the back-tracking distance. The total particle filtering tractography distance is back_tracking_dist + front_tracking_dist. By default this is set to 1 mm.

pft_max_trial : int

Maximum number of trials for the particle filtering tractography (prevents infinite loops).

particle_count : int

Number of particles to use in the particle filter.

return_all : bool

If True, return all generated streamlines; otherwise return only streamlines that reach the endpoints or exit the image.

random_seed : int

The seed for the random number generator (passed to numpy.random.seed and random.seed).

References

[1] Girard, G., Whittingstall, K., Deriche, R., & Descoteaux, M. Towards quantitative connectivity analysis: reducing tractography biases. NeuroImage, 98, 266-278, 2014.
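
Examples

A minimal sketch of building the generator directly; wm, gm, csf, sh and affine are placeholder arrays assumed to be loaded elsewhere (partial volume maps, SH coefficients and the image affine). The CmcTissueClassifier used here is one suitable ConstrainedTissueClassifier:

from dipy.data import default_sphere
from dipy.direction import ProbabilisticDirectionGetter
from dipy.tracking.local import (CmcTissueClassifier,
                                 ParticleFilteringTracking)
from dipy.tracking.utils import seeds_from_mask

dg = ProbabilisticDirectionGetter.from_shcoeff(sh, max_angle=30.,
                                               sphere=default_sphere)
# Continuous map criterion (CMC) classifier from the three tissue maps.
classifier = CmcTissueClassifier.from_pve(wm, gm, csf, step_size=0.2,
                                          average_voxel_size=2.)
seeds = seeds_from_mask(wm > 0.5, density=1, affine=affine)

pft_generator = ParticleFilteringTracking(dg, classifier, seeds, affine,
                                          step_size=0.2,
                                          pft_back_tracking_dist=2,
                                          pft_front_tracking_dist=1,
                                          particle_count=15)
streamlines = list(pft_generator)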

ProbabilisticDirectionGetter

class dipy.workflows.tracking.ProbabilisticDirectionGetter

Bases: dipy.direction.closest_peak_direction_getter.PmfGenDirectionGetter

Randomly samples directions from a sphere based on a probability mass function (pmf).

The main constructors for this class are currently from_pmf and from_shcoeff. The pmf gives the probability that each direction on the sphere should be chosen as the next direction. To get the true pmf from the “raw pmf”, directions more than max_angle degrees from the incoming direction are set to 0 and the result is normalized.

Methods

from_pmf Constructor for making a DirectionGetter from an array of Pmfs
from_shcoeff Probabilistic direction getter from a distribution of directions on the sphere
initial_direction Returns best directions at seed location to start tracking.
get_direction  
__init__()

Direction getter from a pmf generator.

Parameters:
pmf_gen : PmfGen

Used to get probability mass function for selecting tracking directions.

max_angle : float, [0, 90]

The maximum allowed angle between incoming direction and new direction.

sphere : Sphere

The set of directions to be used for tracking.

pmf_threshold : float in [0., 1.]

Used to remove directions from the probability mass function when selecting the tracking direction.

relative_peak_threshold : float in [0., 1.]

Used for extracting initial tracking directions. Passed to peak_directions.

min_separation_angle : float in [0, 90]

Used for extracting initial tracking directions. Passed to peak_directions.
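
Examples

A minimal sketch of the two constructors; pmf and sh are placeholder arrays (a non-negative pmf per voxel over the sphere vertices, or spherical-harmonic coefficients):

from dipy.data import default_sphere
from dipy.direction import ProbabilisticDirectionGetter

# From an explicit pmf defined on the sphere's vertices.
dg_pmf = ProbabilisticDirectionGetter.from_pmf(pmf, max_angle=30.,
                                               sphere=default_sphere)
# From spherical-harmonic coefficients.
dg_sh = ProbabilisticDirectionGetter.from_shcoeff(sh, max_angle=30.,
                                                  sphere=default_sphere)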

ThresholdTissueClassifier

class dipy.workflows.tracking.ThresholdTissueClassifier

Bases: dipy.tracking.local.tissue_classifier.TissueClassifier

# Declarations from tissue_classifier.pxd below
cdef:
    double threshold, interp_out_double[1]
    double[:] interp_out_view = interp_out_view
    double[:, :, :] metric_map

Methods

check_point  
__init__(self, /, *args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.
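
Examples

A minimal, self-contained sketch: tracking stops wherever the metric map (a random placeholder standing in for an FA volume here) falls below the threshold:

import numpy as np
from dipy.tracking.local import ThresholdTissueClassifier

fa = np.random.random((10, 10, 10))        # placeholder FA map
classifier = ThresholdTissueClassifier(fa, 0.2)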

Tractogram

class dipy.workflows.tracking.Tractogram(streamlines=None, data_per_streamline=None, data_per_point=None, affine_to_rasmm=None)

Bases: object

Container for streamlines and their data information.

Streamlines of a tractogram can be in any coordinate system of your choice as long as you provide the correct affine_to_rasmm matrix at construction time. When applied to streamline coordinates, that transformation matrix should bring the streamlines back to world space (RAS+ and mm space) [1].

Moreover, when streamlines are mapped back to voxel space [2], a streamline point located at an integer coordinate (i, j, k) is considered to be at the center of the corresponding voxel. This is in contrast with other conventions where it might refer to a corner.

References

[1] http://nipy.org/nibabel/coordinate_systems.html#naming-reference-spaces
[2] http://nipy.org/nibabel/coordinate_systems.html#voxel-coordinates-are-in-voxel-space

Attributes:
streamlines : ArraySequence object

Sequence of \(T\) streamlines. Each streamline is an ndarray of shape (\(N_t\), 3) where \(N_t\) is the number of points of streamline \(t\).

data_per_streamline : PerArrayDict object

Dictionary where the items are (str, 2D array). Each key represents a piece of information \(i\) to be kept alongside every streamline, and its associated value is a 2D array of shape (\(T\), \(P_i\)) where \(T\) is the number of streamlines and \(P_i\) is the number of values to store for that particular piece of information \(i\).

data_per_point : PerArraySequenceDict object

Dictionary where the items are (str, ArraySequence). Each key represents a piece of information \(i\) to be kept alongside every point of every streamline, and its associated value is an iterable of ndarrays of shape (\(N_t\), \(M_i\)) where \(N_t\) is the number of points for a particular streamline \(t\) and \(M_i\) is the number of values to store for that particular piece of information \(i\).

Methods

apply_affine(affine[, lazy]) Applies an affine transformation on the points of each streamline.
copy() Returns a copy of this Tractogram object.
extend(other) Appends the data of another Tractogram.
to_world([lazy]) Brings the streamlines to world space (i.e. RAS+ and mm).
__init__(streamlines=None, data_per_streamline=None, data_per_point=None, affine_to_rasmm=None)
Parameters:
streamlines : iterable of ndarrays or ArraySequence, optional

Sequence of \(T\) streamlines. Each streamline is an ndarray of shape (\(N_t\), 3) where \(N_t\) is the number of points of streamline \(t\).

data_per_streamline : dict of iterable of ndarrays, optional

Dictionary where the items are (str, iterable). Each key represents an information \(i\) to be kept alongside every streamline, and its associated value is an iterable of ndarrays of shape (\(P_i\),) where \(P_i\) is the number of scalar values to store for that particular information \(i\).

data_per_point : dict of iterable of ndarrays, optional

Dictionary where the items are (str, iterable). Each key represents an information \(i\) to be kept alongside every point of every streamline, and its associated value is an iterable of ndarrays of shape (\(N_t\), \(M_i\)) where \(N_t\) is the number of points for a particular streamline \(t\) and \(M_i\) is the number of scalar values to store for that particular information \(i\).

affine_to_rasmm : ndarray of shape (4, 4) or None, optional

Transformation matrix that brings the streamlines contained in this tractogram to RAS+ and mm space where coordinate (0,0,0) refers to the center of the voxel. By default, the streamlines are in an unknown space, i.e. affine_to_rasmm is None.

affine_to_rasmm

Affine bringing the streamlines in this tractogram to RAS+ and mm space.

apply_affine(affine, lazy=False)

Applies an affine transformation on the points of each streamline.

Unless lazy is True, the transformation is performed in-place.

Parameters:
affine : ndarray of shape (4, 4)

Transformation that will be applied to every streamline.

lazy : {False, True}, optional

If True, streamlines are not transformed in-place and a LazyTractogram object is returned. Otherwise, streamlines are modified in-place.

Returns:
tractogram : Tractogram or LazyTractogram object

Tractogram where the streamlines have been transformed according to the given affine transformation. If the lazy option is true, it returns a LazyTractogram object, otherwise it returns a reference to this Tractogram object with updated streamlines.

copy()

Returns a copy of this Tractogram object.

data_per_point
data_per_streamline
extend(other)

Appends the data of another Tractogram.

Data that will be appended includes the streamlines and the content of both dictionaries data_per_streamline and data_per_point.

Parameters:
other : Tractogram object

Its data will be appended to the data of this tractogram.

Returns:
None

Notes

The entries in both dictionaries self.data_per_streamline and self.data_per_point must match respectively those contained in the other tractogram.

streamlines
to_world(lazy=False)

Brings the streamlines to world space (i.e. RAS+ and mm).

Unless lazy is True, the transformation is performed in-place.

Parameters:
lazy : {False, True}, optional

If True, streamlines are not transformed in-place and a LazyTractogram object is returned. Otherwise, streamlines are modified in-place.

Returns:
tractogram : Tractogram or LazyTractogram object

Tractogram where the streamlines have been sent to world space. If the lazy option is true, it returns a LazyTractogram object, otherwise it returns a reference to this Tractogram object with updated streamlines.
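
Examples

A minimal, self-contained sketch of building and transforming a Tractogram (this class appears to be nibabel’s nibabel.streamlines.Tractogram re-exported here):

import numpy as np
from nibabel.streamlines import Tractogram

streamlines = [np.array([[0., 0., 0.], [1., 1., 1.], [2., 2., 2.]]),
               np.array([[0., 1., 0.], [0., 2., 0.]])]
t = Tractogram(streamlines,
               # one row of P_i values per streamline
               data_per_streamline={'length': [[3.], [2.]]},
               # one (N_t, M_i) array per streamline
               data_per_point={'fa': [np.ones((3, 1)), np.ones((2, 1))]},
               affine_to_rasmm=np.eye(4))

t.apply_affine(np.diag([2., 2., 2., 1.]))  # scale points in-place
t = t.to_world()                           # bring back to RAS+ and mm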

Workflow

class dipy.workflows.tracking.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: object

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide command-line parameters.
get_sub_runs() Return no sub-runs since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(*args, **kwargs) Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

get_io_iterator()

Create an iterator for IO.

Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other contextual information) and the run method’s docstring.

classmethod get_short_name()

Return a short name for the workflow, used to subdivide command-line parameters.

The short name is used by CombinedWorkflows and the argument parser to subdivide the command-line parameters, avoiding clashes between sub-workflow parameters that share the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, dti.b0_threshold and csd.b0_threshold become available instead.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

get_sub_runs()

Return no sub-runs since this is a simple workflow.

manage_output_overwrite()

Check if a file will be overwritten upon processing the inputs.

If it is bound to happen, an action is taken depending on self._force_overwrite (or --force via the command line). A log message is output regardless of the outcome to tell the user something happened.

run(*args, **kwargs)

Execute the workflow.

Since this is an abstract class, an exception is raised if this code is reached (i.e. run is not implemented in the child class, or is called directly on this class).

load_nifti

dipy.workflows.tracking.load_nifti(fname, return_img=False, return_voxsize=False, return_coords=False)

load_peaks

dipy.workflows.tracking.load_peaks(fname, verbose=False)

Load a PeaksAndMetrics HDF5 file (PAM5)

Parameters:
fname : string

Filename of PAM5 file.

verbose : bool

Print summary information about the loaded file.

Returns:
pam : PeaksAndMetrics object
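
Examples

A minimal usage sketch (hypothetical file name); the returned object exposes, among others, the peak_dirs and peak_values arrays:

from dipy.workflows.tracking import load_peaks

pam = load_peaks('peaks.pam5', verbose=True)  # placeholder path
print(pam.peak_dirs.shape, pam.peak_values.shape)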

save

dipy.workflows.tracking.save(tractogram, filename, **kwargs)

Saves a tractogram to a file.

Parameters:
tractogram : Tractogram object or TractogramFile object

If Tractogram object, the file format will be guessed from filename and a TractogramFile object will be created using provided keyword arguments. If TractogramFile object, the file format is known and will be used to save its content to filename.

filename : str

Name of the file where the tractogram will be saved.

**kwargs : keyword arguments

Keyword arguments passed to TractogramFile constructor. Should not be specified if tractogram is already an instance of TractogramFile.
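
Examples

A minimal, self-contained sketch; the .trk format is guessed from the file name (this save appears to be nibabel.streamlines.save re-exported here):

import numpy as np
from nibabel.streamlines import Tractogram, save

t = Tractogram([np.array([[0., 0., 0.], [1., 0., 0.]])],
               affine_to_rasmm=np.eye(4))
save(t, 'example.trk')  # placeholder output name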

HorizonFlow

class dipy.workflows.viz.HorizonFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: dipy.workflows.workflow.Workflow

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide command-line parameters.
get_sub_runs() Return no sub-runs since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(input_files[, cluster, cluster_thr, …]) Highly interactive visualization - invert the Horizon!
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()

Return a short name for the workflow, used to subdivide command-line parameters.

The short name is used by CombinedWorkflows and the argument parser to subdivide the command-line parameters, avoiding clashes between sub-workflow parameters that share the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, dti.b0_threshold and csd.b0_threshold become available instead.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, cluster=False, cluster_thr=15.0, random_colors=False, length_lt=1000, length_gt=0, clusters_lt=100000000, clusters_gt=0, native_coords=False, stealth=False, out_dir='', out_stealth_png='tmp.png')

Highly interactive visualization - invert the Horizon!

Interact with any number of .trk, .tck or .dpy tractograms and .nii or .nii.gz anatomy files. Streamlines can be clustered on loading.

Parameters:
input_files : variable string
cluster : bool
cluster_thr : float
random_colors : bool
length_lt : float
length_gt : float
clusters_lt : int
clusters_gt : int
native_coords : bool
stealth : bool
out_dir : string
out_stealth_png : string

References

[Horizon_ISMRM19] Garyfallidis E., M-A. Cote, B.Q. Chandio, S. Fadnavis, J. Guaje, R. Aggarwal, E. St-Onge, K.S. Juneja, S. Koudoro, D. Reagan, DIPY Horizon: fast, modular, unified and adaptive visualization, Proceedings of: International Society of Magnetic Resonance in Medicine (ISMRM), Montreal, Canada, 2019.
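
Examples

A minimal usage sketch (hypothetical file names); with stealth=True the flow is intended to save a screenshot instead of opening the interactive window:

from dipy.workflows.viz import HorizonFlow

flow = HorizonFlow()
flow.run(['tractogram.trk', 't1.nii.gz'],  # placeholder inputs
         cluster=True, cluster_thr=15.,
         stealth=True, out_stealth_png='horizon.png')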

Workflow

class dipy.workflows.viz.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: object

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide command-line parameters.
get_sub_runs() Return no sub-runs since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(*args, **kwargs) Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

get_io_iterator()

Create an iterator for IO.

Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other contextual information) and the run method’s docstring.

classmethod get_short_name()

Return a short name for the workflow, used to subdivide command-line parameters.

The short name is used by CombinedWorkflows and the argument parser to subdivide the command-line parameters, avoiding clashes between sub-workflow parameters that share the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, dti.b0_threshold and csd.b0_threshold become available instead.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

get_sub_runs()

Return no sub-runs since this is a simple workflow.

manage_output_overwrite()

Check if a file will be overwritten upon processing the inputs.

If it is bound to happen, an action is taken depending on self._force_overwrite (or --force via the command line). A log message is output regardless of the outcome to tell the user something happened.

run(*args, **kwargs)

Execute the workflow.

Since this is an abstract class, an exception is raised if this code is reached (i.e. run is not implemented in the child class, or is called directly on this class).

horizon

dipy.workflows.viz.horizon(tractograms, images, cluster, cluster_thr, random_colors, length_lt, length_gt, clusters_lt, clusters_gt, world_coords=True, interactive=True, out_png='tmp.png')

Highly interactive visualization - invert the Horizon!

Parameters:
tractograms : sequence

Sequence of Streamlines objects

images : sequence of tuples

Each tuple contains data and affine

cluster : bool

Enable QuickBundlesX clustering

cluster_thr : float

Distance threshold used for clustering

random_colors : bool
length_lt : float
length_gt : float
clusters_lt : int
clusters_gt : int
world_coords : bool
interactive : bool
out_png : string

References

[Horizon_ISMRM19] Garyfallidis E., M-A. Cote, B.Q. Chandio, S. Fadnavis, J. Guaje, R. Aggarwal, E. St-Onge, K.S. Juneja, S. Koudoro, D. Reagan, DIPY Horizon: fast, modular, unified and adaptive visualization, Proceedings of: International Society of Magnetic Resonance in Medicine (ISMRM), Montreal, Canada, 2019.

load_nifti

dipy.workflows.viz.load_nifti(fname, return_img=False, return_voxsize=False, return_coords=False)

load_tractogram

dipy.workflows.viz.load_tractogram(filename, lazy_load=False)

Load a tractogram file (*.trk).

Parameters:
filename : str

Input .trk filename.

lazy_load : {False, True}, optional

If True, load streamlines in a lazy manner i.e. they will not be kept in memory and only be loaded when needed. Otherwise, load all streamlines in memory.

Returns:
streamlines : list of 2D arrays

Each 2D array represents a sequence of 3D points, with shape (points, 3).

hdr : dict

Header of the .trk file.

pjoin

dipy.workflows.viz.pjoin(a, *p)

Join two or more pathname components, inserting ‘/’ as needed. If any component is an absolute path, all previous path components will be discarded. An empty last part will result in a path that ends with a separator.

Workflow

class dipy.workflows.workflow.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)

Bases: object

Methods

get_io_iterator() Create an iterator for IO.
get_short_name() Return a short name for the workflow, used to subdivide command-line parameters.
get_sub_runs() Return no sub-runs since this is a simple workflow.
manage_output_overwrite() Check if a file will be overwritten upon processing the inputs.
run(*args, **kwargs) Execute the workflow.
__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

get_io_iterator()

Create an iterator for IO.

Use a couple of inspection tricks to build an IOIterator using the previous frame (values of local variables and other contextual information) and the run method’s docstring.

classmethod get_short_name()

Return a short name for the workflow, used to subdivide command-line parameters.

The short name is used by CombinedWorkflows and the argument parser to subdivide the command-line parameters, avoiding clashes between sub-workflow parameters that share the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, dti.b0_threshold and csd.b0_threshold become available instead.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

get_sub_runs()

Return no sub-runs since this is a simple workflow.

manage_output_overwrite()

Check if a file will be overwritten upon processing the inputs.

If it is bound to happen, an action is taken depending on self._force_overwrite (or --force via the command line). A log message is output regardless of the outcome to tell the user something happened.

run(*args, **kwargs)

Execute the workflow.

Since this is an abstract class, an exception is raised if this code is reached (i.e. run is not implemented in the child class, or is called directly on this class).
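
Examples

Since every new workflow should extend this class, here is a minimal sketch of a hypothetical subclass (all names below are illustrative, not part of DIPY). Note that the run() docstring matters: the IOIterator and the argument parser are built from it by introspection:

from dipy.io.image import load_nifti, save_nifti
from dipy.workflows.workflow import Workflow


class InvertFlow(Workflow):
    @classmethod
    def get_short_name(cls):
        return 'invert'

    def run(self, input_files, out_dir='', out_inverted='inverted.nii.gz'):
        """Invert the intensities of the input volumes.

        Parameters
        ----------
        input_files : string
            Path to the input volumes. This path may contain wildcards.
        out_dir : string, optional
            Output directory (default input file directory).
        out_inverted : string, optional
            Name of the inverted volume to be saved.
        """
        # get_io_iterator pairs each matched input with its output path.
        for in_path, out_path in self.get_io_iterator():
            data, affine = load_nifti(in_path)
            save_nifti(out_path, data.max() - data, affine)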

io_iterator_

dipy.workflows.workflow.io_iterator_(frame, fnc, output_strategy='absolute', mix_names=False)

Creates an IOIterator using introspection.

Parameters:
frame : frameobject

Contains info about the current local variable values.

fnc : function

The function to inspect.

output_strategy : string

Controls the behavior of the IOIterator for output paths.

mix_names : bool

Whether or not to append a mix of input names at the beginning.

Returns:
Properly instantiated IOIterator object.