align
align.imaffine
align.imwarp
align.metrics
align.reslice
align.scalespace
align.streamlinear
Bunch
floating
AffineInvalidValuesError
AffineInversionError
AffineMap
AffineRegistration
IsotropicScaleSpace
MutualInformationMetric
Optimizer
ParzenJointHistogram
ScaleSpace
Bunch
DiffeomorphicMap
DiffeomorphicRegistration
ScaleSpace
SymmetricDiffeomorphicRegistration
floating
CCMetric
EMMetric
SSDMetric
SimilarityMetric
floating
IsotropicScaleSpace
ScaleSpace
floating
BundleMinDistanceAsymmetricMetric
BundleMinDistanceMatrixMetric
BundleMinDistanceMetric
BundleSumDistanceMatrixMetric
Optimizer
StreamlineDistanceMetric
StreamlineLinearRegistration
StreamlineRegistrationMap
Streamlines
align
Bunch(**kwds)
floating
    alias of numpy.float32
align.imaffine
Affine image registration module consisting of the following classes:
AffineInvalidValuesError
AffineInversionError
AffineMap(affine[, domain_grid_shape, …])
AffineRegistration([metric, level_iters, …])
IsotropicScaleSpace(image, factors, sigmas)
MutualInformationMetric([nbins, …])
Optimizer(fun, x0[, args, method, jac, …])
ParzenJointHistogram
ScaleSpace(image, num_levels[, …])
align_centers_of_mass(static, …)
align_geometric_centers(static, …)
align_origins(static, static_grid2world, …)
compute_parzen_mi
    Computes the mutual information and its gradient (if requested)
get_direction_and_spacings(affine, dim)
    Extracts the rotational and spacing components from a matrix
sample_domain_regular
    Take floor(total_voxels/k) samples from a (2D or 3D) grid
transform_centers_of_mass(static, …)
    Transformation to align the center of mass of the input images
transform_geometric_centers(static, …)
    Transformation to align the geometric center of the input images
transform_origins(static, static_grid2world, …)
    Transformation to align the origins of the input images
warn
    Issue a warning, or maybe ignore it or raise an exception.
align.imwarp
Classes and functions for Symmetric Diffeomorphic Registration
Bunch(**kwds)
DiffeomorphicMap(dim, disp_shape[, …])
DiffeomorphicRegistration([metric])
ScaleSpace(image, num_levels[, …])
SymmetricDiffeomorphicRegistration(metric[, …])
floating
    alias of numpy.float32
get_direction_and_spacings(affine, dim)
    Extracts the rotational and spacing components from a matrix
mult_aff(A, B)
    Returns the matrix product A.dot(B) considering None as the identity
with_metaclass(meta, *bases)
    Create a base class with a metaclass.
align.metrics
Metrics for Symmetric Diffeomorphic Registration
CCMetric(dim[, sigma_diff, radius])
EMMetric(dim[, smooth, inner_iter, …])
SSDMetric(dim[, smooth, inner_iter, step_type])
SimilarityMetric(dim)
floating
    alias of numpy.float32
gradient(f, *varargs, **kwargs)
    Return the gradient of an N-dimensional array.
v_cycle_2d(n, k, delta_field, …[, depth])
    Multi-resolution Gauss-Seidel solver using V-type cycles
v_cycle_3d(n, k, delta_field, …[, depth])
    Multi-resolution Gauss-Seidel solver using V-type cycles
with_metaclass(meta, *bases)
    Create a base class with a metaclass.
align.reslice
Pool
    Returns a process pool object
affine_transform(input, matrix[, offset, …])
    Apply an affine transformation.
cpu_count
    Returns the number of CPUs in the system
reslice(data, affine, zooms, new_zooms[, …])
    Reslice data with new voxel resolution defined by new_zooms
align.scalespace
IsotropicScaleSpace(image, factors, sigmas)
ScaleSpace(image, num_levels[, …])
floating
    alias of numpy.float32
align.streamlinear
BundleMinDistanceAsymmetricMetric([num_threads])
    Asymmetric Bundle-based Minimum distance
BundleMinDistanceMatrixMetric([num_threads])
    Bundle-based Minimum Distance aka BMD
BundleMinDistanceMetric([num_threads])
    Bundle-based Minimum Distance aka BMD
BundleSumDistanceMatrixMetric([num_threads])
    Bundle-based Sum Distance aka BMD
Optimizer(fun, x0[, args, method, jac, …])
StreamlineDistanceMetric([num_threads])
StreamlineLinearRegistration([metric, x0, …])
StreamlineRegistrationMap(matopt, xopt, …)
Streamlines
    alias of nibabel.streamlines.array_sequence.ArraySequence
bundle_min_distance(t, static, moving)
    MDF-based pairwise distance optimization function (MIN)
bundle_min_distance_asymmetric_fast(t, …)
    MDF-based pairwise distance optimization function (MIN)
bundle_min_distance_fast(t, static, moving, …)
    MDF-based pairwise distance optimization function (MIN)
bundle_sum_distance(t, static, moving[, …])
    MDF distance optimization function (SUM)
center_streamlines(streamlines)
    Move streamlines to the origin
compose_matrix([scale, shear, angles, …])
    Return 4x4 transformation matrix from sequence of transformations.
compose_matrix44(t[, dtype])
    Compose a 4x4 transformation matrix
compose_transformations(*mats)
    Compose multiple 4x4 affine transformations in one 4x4 matrix
decompose_matrix(matrix)
    Return sequence of transformations from transformation matrix.
decompose_matrix44(mat[, size])
    Given a 4x4 homogeneous matrix return the parameter vector
distance_matrix_mdf
    Minimum direct flipped distance matrix between two streamline sets
length
    Euclidean length of streamlines
progressive_slr(static, moving, metric, x0, …)
    Progressive SLR
qbx_and_merge(streamlines, thresholds[, …])
    Run QuickBundlesX and then run again on the centroids of the last layer
remove_clusters_by_size(clusters[, min_size])
select_random_set_of_streamlines(…[, rng])
    Select a random set of streamlines
set_number_of_points
    Change the number of points of streamlines
slr_with_qbx(static, moving[, x0, …])
    Utility function for registering large tractograms.
time()
    Return the current time in seconds since the Epoch.
transform_streamlines(streamlines, mat[, …])
    Apply affine transformation to streamlines
unlist_streamlines(streamlines)
    Return the streamlines not as a list but as an array and an offset
whole_brain_slr(static, moving[, x0, …])
    Utility function for registering large tractograms.
with_metaclass(meta, *bases)
    Create a base class with a metaclass.
AffineInvalidValuesError
dipy.align.imaffine.AffineInvalidValuesError
Bases: Exception
Methods
with_traceback
    Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
AffineInversionError
dipy.align.imaffine.AffineInversionError
Bases: Exception
Methods
with_traceback
    Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.
AffineMap
dipy.align.imaffine.AffineMap(affine, domain_grid_shape=None, domain_grid2world=None, codomain_grid_shape=None, codomain_grid2world=None)
Bases: object
Methods
get_affine()
    Returns the value of the transformation, not a reference!
set_affine(affine)
    Sets the affine transform (operating in physical space)
transform(image[, interp, image_grid2world, …])
    Transforms the input image from co-domain to domain space
transform_inverse(image[, interp, …])
    Transforms the input image from domain to co-domain space
__init__(affine, domain_grid_shape=None, domain_grid2world=None, codomain_grid_shape=None, codomain_grid2world=None)
AffineMap
Implements an affine transformation whose domain is given by domain_grid and domain_grid2world, and whose co-domain is given by codomain_grid and codomain_grid2world.
The actual transform is represented by the affine matrix, which operates in world coordinates. Therefore, to transform a moving image towards a static image, we first map each voxel (i,j,k) of the static image to world coordinates (x,y,z) by applying domain_grid2world. Then we apply the affine transform to (x,y,z), obtaining (x’, y’, z’) in the moving image’s world coordinates. Finally, (x’, y’, z’) is mapped to voxel coordinates (i’, j’, k’) in the moving image by multiplying (x’, y’, z’) by the inverse of codomain_grid2world. The codomain_grid_shape is used analogously to transform the static image towards the moving image when calling transform_inverse.
If the domain/co-domain information is not provided (None), then the sampling information needs to be specified each time transform or transform_inverse is called to transform images. Note that such sampling information is not necessary to transform points defined in physical space, such as streamlines.
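As an illustration, a minimal sketch of an AffineMap on a toy grid; the identity grid-to-world matrices and the 3-unit world-space translation are assumptions made only for the example:
>>> import numpy as np
>>> from dipy.align.imaffine import AffineMap
>>> static = np.zeros((32, 32, 32)); static[8:24, 8:24, 8:24] = 1.0
>>> affine = np.eye(4); affine[:3, 3] = [3.0, 0.0, 0.0]   # static world -> moving world
>>> affine_map = AffineMap(affine,
...                        domain_grid_shape=static.shape, domain_grid2world=np.eye(4),
...                        codomain_grid_shape=static.shape, codomain_grid2world=np.eye(4))
>>> moving = affine_map.transform_inverse(static)   # resample static onto the co-domain (moving) grid
>>> back = affine_map.transform(moving)             # and back onto the domain (static) grid
>>> back.shape == static.shape
True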
get_affine()
Returns the value of the transformation, not a reference!
set_affine(affine)
Sets the affine transform (operating in physical space)
Also sets self.affine_inv - the inverse of affine, or None if there is no inverse.
transform(image, interp='linear', image_grid2world=None, sampling_grid_shape=None, sampling_grid2world=None, resample_only=False)
Transforms the input image from co-domain to domain space
By default, the transformed image is sampled at a grid defined by self.domain_shape and self.domain_grid2world. If such information was not provided then sampling_grid_shape is mandatory.
transform_inverse(image, interp='linear', image_grid2world=None, sampling_grid_shape=None, sampling_grid2world=None, resample_only=False)
Transforms the input image from domain to co-domain space
By default, the transformed image is sampled at a grid defined by self.codomain_shape and self.codomain_grid2world. If such information was not provided then sampling_grid_shape is mandatory.
AffineRegistration
dipy.align.imaffine.AffineRegistration(metric=None, level_iters=None, sigmas=None, factors=None, method='L-BFGS-B', ss_sigma_factor=None, options=None, verbosity=1)
Bases: object
Methods
optimize(static, moving, transform, params0)
    Starts the optimization process
__init__(metric=None, level_iters=None, sigmas=None, factors=None, method='L-BFGS-B', ss_sigma_factor=None, options=None, verbosity=1)
Initializes an instance of the AffineRegistration class
docstring_addendum = 'verbosity: int (one of {0, 1, 2, 3}), optional\n Set the verbosity level of the algorithm:\n 0 : do not print anything\n 1 : print information about the current status of the algorithm\n 2 : print high level information of the components involved in\n the registration that can be used to detect a failing\n component.\n 3 : print as much information as possible to isolate the cause\n of a bug.\n Default: 1\n '
optimize(static, moving, transform, params0, static_grid2world=None, moving_grid2world=None, starting_affine=None, ret_metric=False)
Starts the optimization process
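For reference, a sketch of the usual multi-stage workflow. The NIfTI file names are hypothetical, and the center-of-mass initialization followed by translation and rigid stages is one common choice rather than a requirement:
>>> import nibabel as nib
>>> from dipy.align.imaffine import (AffineRegistration, MutualInformationMetric,
...                                  transform_centers_of_mass)
>>> from dipy.align.transforms import TranslationTransform3D, RigidTransform3D
>>> static_img, moving_img = nib.load('static.nii.gz'), nib.load('moving.nii.gz')  # hypothetical files
>>> static, moving = static_img.get_data(), moving_img.get_data()
>>> static_grid2world, moving_grid2world = static_img.affine, moving_img.affine
>>> metric = MutualInformationMetric(nbins=32, sampling_proportion=None)
>>> affreg = AffineRegistration(metric=metric, level_iters=[10000, 1000, 100],
...                             sigmas=[3.0, 1.0, 0.0], factors=[4, 2, 1])
>>> c_of_mass = transform_centers_of_mass(static, static_grid2world,
...                                       moving, moving_grid2world)
>>> translation = affreg.optimize(static, moving, TranslationTransform3D(), None,
...                               static_grid2world, moving_grid2world,
...                               starting_affine=c_of_mass.affine)
>>> rigid = affreg.optimize(static, moving, RigidTransform3D(), None,
...                         static_grid2world, moving_grid2world,
...                         starting_affine=translation.affine)
>>> warped = rigid.transform(moving)   # moving resampled on the static grid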
IsotropicScaleSpace
dipy.align.imaffine.IsotropicScaleSpace(image, factors, sigmas, image_grid2world=None, input_spacing=None, mask0=False)
Bases: dipy.align.scalespace.ScaleSpace
Methods
get_affine(level)
    Voxel-to-space transformation at a given level
get_affine_inv(level)
    Space-to-voxel transformation at a given level
get_domain_shape(level)
    Shape the sub-sampled image must have at a particular level
get_expand_factors(from_level, to_level)
    Ratio of voxel size from pyramid level from_level to to_level
get_image(level)
    Smoothed image at a given level
get_scaling(level)
    Adjustment factor for input-spacing to reflect voxel sizes at level
get_sigmas(level)
    Smoothing parameters used at a given level
get_spacing(level)
    Spacings the sub-sampled image must have at a particular level
print_level(level)
    Prints properties of a pyramid level
__init__(image, factors, sigmas, image_grid2world=None, input_spacing=None, mask0=False)
IsotropicScaleSpace
Computes the Scale Space representation of an image using isotropic smoothing kernels for all scales. The scale space is simply a list of images produced by smoothing the input image with a Gaussian kernel with different smoothing parameters.
This specialization of ScaleSpace allows the user to provide custom scale and smoothing factors for all scales.
MutualInformationMetric
dipy.align.imaffine.MutualInformationMetric(nbins=32, sampling_proportion=None)
Bases: object
Methods
distance(params)
    Numeric value of the negative Mutual Information
distance_and_gradient(params)
    Numeric value of the metric and its gradient at given parameters
gradient(params)
    Numeric value of the metric’s gradient at the given parameters
setup(transform, static, moving[, …])
    Prepares the metric to compute intensity densities and gradients
__init__(nbins=32, sampling_proportion=None)
Initializes an instance of the Mutual Information metric
This class implements the methods required by Optimizer to drive the registration process.
Notes
Since we use linear interpolation, images are not, in general, differentiable at exact voxel coordinates, but they are differentiable between voxel coordinates. When using sparse sampling, selected voxels are slightly moved by adding a small random displacement within one voxel to prevent sampling points from being located exactly at voxel coordinates. When using dense sampling, this random displacement is not applied.
distance(params)
Numeric value of the negative Mutual Information
We need to change the sign so we can use standard minimization algorithms.
distance_and_gradient(params)
Numeric value of the metric and its gradient at given parameters
gradient(params)
Numeric value of the metric’s gradient at the given parameters
setup(transform, static, moving, static_grid2world=None, moving_grid2world=None, starting_affine=None)
Prepares the metric to compute intensity densities and gradients
The histograms will be setup to compute probability densities of intensities within the minimum and maximum values of static and moving
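A small sketch of using the metric directly, outside of AffineRegistration; the random test volumes and the choice of a 3D translation transform are assumptions made only for the example:
>>> import numpy as np
>>> from dipy.align.imaffine import MutualInformationMetric
>>> from dipy.align.transforms import TranslationTransform3D
>>> rng = np.random.RandomState(0)
>>> static = rng.rand(32, 32, 32)
>>> moving = np.roll(static, 2, axis=0)    # shifted copy with the same intensities
>>> metric = MutualInformationMetric(nbins=32, sampling_proportion=None)  # dense sampling
>>> transform = TranslationTransform3D()
>>> metric.setup(transform, static, moving)
>>> neg_mi = metric.distance(np.zeros(3))  # negative MI at the identity translation
>>> grad = metric.gradient(np.zeros(3))    # gradient w.r.t. the three translation parameters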
Optimizer
dipy.align.imaffine.Optimizer(fun, x0, args=(), method='L-BFGS-B', jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None, evolution=False)
Bases: object
Methods
print_summary
__init__(fun, x0, args=(), method='L-BFGS-B', jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None, evolution=False)
A class for handling minimization of scalar function of one or more variables.
See also
scipy.optimize.minimize
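A minimal sketch of driving the optimizer with a scalar test function; the xopt and fopt attribute names are assumed to match dipy.core.optimize.Optimizer, from which this class appears to be re-exported:
>>> import numpy as np
>>> from dipy.align.imaffine import Optimizer
>>> def rosenbrock(x):
...     return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
>>> opt = Optimizer(rosenbrock, x0=np.array([-1.0, 2.0]), method='L-BFGS-B')
>>> solution, value = opt.xopt, opt.fopt   # optimum and objective value from scipy.optimize.minimize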
ParzenJointHistogram
dipy.align.imaffine.ParzenJointHistogram
Bases: object
Methods
bin_index
    Bin index associated with the given normalized intensity
bin_normalize_moving
    Maps intensity x to the range covered by the moving histogram
bin_normalize_static
    Maps intensity x to the range covered by the static histogram
setup
    Compute histogram settings to store the PDF of input images
update_gradient_dense
    Computes the Gradient of the joint PDF w.r.t. transform parameters
update_gradient_sparse
    Computes the Gradient of the joint PDF w.r.t. transform parameters
update_pdfs_dense
    Computes the Probability Density Functions of two images
update_pdfs_sparse
    Computes the Probability Density Functions from a set of samples
__init__()
Computes joint histogram and derivatives with Parzen windows
Base class to compute joint and marginal probability density functions and their derivatives with respect to a transform’s parameters. The smooth histograms are computed by using Parzen windows [Parzen62] with a cubic spline kernel, as proposed by Mattes et al. [Mattes03]. This implementation is not tied to any optimization (registration) method, the idea is that information-theoretic matching functionals (such as Mutual Information) can inherit from this class to perform the low-level computations of the joint intensity distributions and its gradient w.r.t. the transform parameters. The derived class can then compute the similarity/dissimilarity measure and gradient, and finally communicate the results to the appropriate optimizer.
Notes
We need this class in cython to allow _joint_pdf_gradient_dense_2d and _joint_pdf_gradient_dense_3d to use a nogil Jacobian function (obtained from an instance of the Transform class), which allows us to evaluate Jacobians at all the sampling points (maybe the full grid) inside a nogil loop.
The reason we need a class is to encapsulate all the parameters related to the joint and marginal distributions.
References
bin_index
Bin index associated with the given normalized intensity
The return value is an integer in [padding, nbins - 1 - padding]
bin_normalize_moving
Maps intensity x to the range covered by the moving histogram
If the input intensity is in [self.mmin, self.mmax] then the normalized intensity will be in [self.padding, self.nbins - self.padding]
bin_normalize_static
Maps intensity x to the range covered by the static histogram
If the input intensity is in [self.smin, self.smax] then the normalized intensity will be in [self.padding, self.nbins - self.padding]
setup
Compute histogram settings to store the PDF of input images
update_gradient_dense
Computes the Gradient of the joint PDF w.r.t. transform parameters
Computes the vector of partial derivatives of the joint histogram w.r.t. each transformation parameter.
The gradient is stored in self.joint_grad.
update_gradient_sparse
Computes the Gradient of the joint PDF w.r.t. transform parameters
Computes the vector of partial derivatives of the joint histogram w.r.t. each transformation parameter.
The list of intensities sval and mval are assumed to be sampled from the static and moving images, respectively, at the same physical points. Of course, the images may not be perfectly aligned at the moment the sampling was performed. The resulting gradient corresponds to the paired intensities according to the alignment at the moment the images were sampled.
The gradient is stored in self.joint_grad.
update_pdfs_dense
Computes the Probability Density Functions of two images
The joint PDF is stored in self.joint. The marginal distributions corresponding to the static and moving images are computed and stored in self.smarginal and self.mmarginal, respectively.
update_pdfs_sparse
Computes the Probability Density Functions from a set of samples
The lists of intensities sval and mval are assumed to be sampled from the static and moving images, respectively, at the same physical points. Of course, the images may not be perfectly aligned at the moment the sampling was performed. The resulting distributions correspond to the paired intensities according to the alignment at the moment the images were sampled.
The joint PDF is stored in self.joint. The marginal distributions corresponding to the static and moving images are computed and stored in self.smarginal and self.mmarginal, respectively.
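A rough usage sketch; the single nbins constructor argument and the setup/update_pdfs_dense signatures shown here are assumptions about the Cython implementation rather than guarantees of this page:
>>> import numpy as np
>>> from dipy.align.imaffine import ParzenJointHistogram
>>> rng = np.random.RandomState(0)
>>> static = rng.rand(32, 32, 32)
>>> moving = 0.5 * static + 0.1 * rng.rand(32, 32, 32)
>>> pjh = ParzenJointHistogram(32)           # assumed: number of intensity bins
>>> pjh.setup(static, moving)                # derive bin widths/offsets from the intensity ranges
>>> pjh.update_pdfs_dense(static, moving)    # fill the joint and marginal PDFs
>>> joint = np.asarray(pjh.joint)            # (nbins, nbins) joint PDF, sums to ~1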
ScaleSpace
dipy.align.imaffine.ScaleSpace(image, num_levels, image_grid2world=None, input_spacing=None, sigma_factor=0.2, mask0=False)
Bases: object
Methods
get_affine(level)
    Voxel-to-space transformation at a given level
get_affine_inv(level)
    Space-to-voxel transformation at a given level
get_domain_shape(level)
    Shape the sub-sampled image must have at a particular level
get_expand_factors(from_level, to_level)
    Ratio of voxel size from pyramid level from_level to to_level
get_image(level)
    Smoothed image at a given level
get_scaling(level)
    Adjustment factor for input-spacing to reflect voxel sizes at level
get_sigmas(level)
    Smoothing parameters used at a given level
get_spacing(level)
    Spacings the sub-sampled image must have at a particular level
print_level(level)
    Prints properties of a pyramid level
__init__(image, num_levels, image_grid2world=None, input_spacing=None, sigma_factor=0.2, mask0=False)
ScaleSpace
Computes the Scale Space representation of an image. The scale space is simply a list of images produced by smoothing the input image with a Gaussian kernel with increasing smoothing parameter. If the image’s voxels are isotropic, the smoothing will be the same along all directions: at level L = 0, 1, …, the sigma is given by \(s * ( 2^L - 1 )\). If the voxel dimensions are not isotropic, then the smoothing is weaker along low resolution directions.
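A short sketch of building a pyramid and querying one of its levels; the random test volume and the identity grid-to-world matrix are assumptions made only for the example:
>>> import numpy as np
>>> from dipy.align.scalespace import ScaleSpace
>>> image = np.random.rand(48, 48, 48)
>>> ss = ScaleSpace(image, num_levels=3, image_grid2world=np.eye(4), input_spacing=np.ones(3))
>>> smoothed = ss.get_image(2)        # most smoothed image of this 3-level pyramid
>>> shape2 = ss.get_domain_shape(2)   # shape the level-2 sub-sampled image should have
>>> sigmas2 = ss.get_sigmas(2)        # per-axis smoothing used at level 2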
get_affine(level)
Voxel-to-space transformation at a given level
Returns the voxel-to-space transformation associated with the sub-sampled image at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).
get_affine_inv(level)
Space-to-voxel transformation at a given level
Returns the space-to-voxel transformation associated with the sub-sampled image at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).
get_domain_shape(level)
Shape the sub-sampled image must have at a particular level
Returns the shape the sub-sampled image must have at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).
get_expand_factors(from_level, to_level)
Ratio of voxel size from pyramid level from_level to to_level
Given two scale space resolutions a = from_level, b = to_level, returns the ratio of voxel size at level b to voxel size at level a (the factor that must be used to multiply voxels at level a to ‘expand’ them to level b).
get_image(level)
Smoothed image at a given level
Returns the smoothed image at the requested level in the Scale Space.
get_scaling(level)
Adjustment factor for input-spacing to reflect voxel sizes at level
Returns the scaling factor that needs to be applied to the input spacing (the voxel sizes of the image at level 0 of the scale space) to transform them to voxel sizes at the requested level.
get_sigmas(level)
Smoothing parameters used at a given level
Returns the smoothing parameters (a scalar for each axis) used at the requested level of the scale space
get_spacing(level)
Spacings the sub-sampled image must have at a particular level
Returns the spacings (voxel sizes) the sub-sampled image must have at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).
dipy.align.imaffine.compute_parzen_mi()
Computes the mutual information and its gradient (if requested)
dipy.align.imaffine.get_direction_and_spacings(affine, dim)
Extracts the rotational and spacing components from a matrix
Extracts the rotational and spacing (voxel dimensions) components from a matrix. An image gradient represents the local variation of the image’s gray values per voxel. Since we are iterating on the physical space, we need to compute the gradients as variation per millimeter, so we need to divide each gradient’s component by the voxel size along the corresponding axis, that’s what the spacings are used for. Since the image’s gradients are oriented along the grid axes, we also need to re-orient the gradients to be given in physical space coordinates.
dipy.align.imaffine.sample_domain_regular()
Take floor(total_voxels/k) samples from a (2D or 3D) grid
The sampling is made by taking all pixels whose index (in lexicographical order) is a multiple of k. Each selected point is slightly perturbed by adding a realization of a normally distributed random variable and then mapped to physical space by the given grid-to-space transform.
The lexicographical order of the pixels in a grid of shape (a, b, c) is defined by assigning to each voxel position (i, j, k) the integer index
F((i, j, k)) = i * (b * c) + j * (c) + k
and sorting increasingly by this index.
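This index is simply the C-order (row-major) raveling of the grid, as the following plain NumPy sketch verifies; the sampling step of 7 is an arbitrary choice for the example:
>>> import numpy as np
>>> shape = (4, 5, 6)                  # grid of shape (a, b, c)
>>> i, j, k = 2, 3, 1
>>> flat = i * (shape[1] * shape[2]) + j * shape[2] + k
>>> int(np.ravel_multi_index((i, j, k), shape)) == flat
True
>>> step = 7                           # keep one voxel out of every 7, in lexicographical order
>>> idx = np.arange(0, np.prod(shape), step)
>>> samples = np.array(np.unravel_index(idx, shape)).T   # (i, j, k) triplets of the kept voxels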
dipy.align.imaffine.transform_centers_of_mass(static, static_grid2world, moving, moving_grid2world)
Transformation to align the center of mass of the input images
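A small sketch with two synthetic blobs; the identity grid-to-world matrices are an assumption made only for the example. The returned object behaves like an AffineMap:
>>> import numpy as np
>>> from dipy.align.imaffine import transform_centers_of_mass
>>> static = np.zeros((32, 32, 32)); static[10:20, 10:20, 10:20] = 1.0
>>> moving = np.zeros((32, 32, 32)); moving[14:24, 10:20, 10:20] = 1.0   # same blob, shifted 4 voxels
>>> c_of_mass = transform_centers_of_mass(static, np.eye(4), moving, np.eye(4))
>>> translation = c_of_mass.affine[:3, 3]   # approximately [4, 0, 0]
>>> aligned = c_of_mass.transform(moving)   # moving resampled onto the static grid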
dipy.align.imaffine.transform_geometric_centers(static, static_grid2world, moving, moving_grid2world)
Transformation to align the geometric center of the input images
The “geometric center” of a volume is taken to be the physical coordinates of its central voxel.
dipy.align.imaffine.transform_origins(static, static_grid2world, moving, moving_grid2world)
Transformation to align the origins of the input images
The “origin” of a volume is taken to be the physical coordinates of voxel (0, 0, 0).
DiffeomorphicMap
dipy.align.imwarp.
DiffeomorphicMap
(dim, disp_shape, disp_grid2world=None, domain_shape=None, domain_grid2world=None, codomain_shape=None, codomain_grid2world=None, prealign=None)Bases: object
Methods
allocate () |
Creates a zero displacement field |
compute_inversion_error () |
Inversion error of the displacement fields |
expand_fields (expand_factors, new_shape) |
Expands the displacement fields from current shape to new_shape |
get_backward_field () |
Deformation field to transform an image in the backward direction |
get_forward_field () |
Deformation field to transform an image in the forward direction |
get_simplified_transform () |
Constructs a simplified version of this Diffeomorphic Map |
interpret_matrix (obj) |
Try to interpret obj as a matrix |
inverse () |
Inverse of this DiffeomorphicMap instance |
shallow_copy () |
Shallow copy of this DiffeomorphicMap instance |
transform (image[, interpolation, …]) |
Warps an image in the forward direction |
transform_inverse (image[, interpolation, …]) |
Warps an image in the backward direction |
warp_endomorphism (phi) |
Composition of this DiffeomorphicMap with a given endomorphism |
__init__
(dim, disp_shape, disp_grid2world=None, domain_shape=None, domain_grid2world=None, codomain_shape=None, codomain_grid2world=None, prealign=None)DiffeomorphicMap
Implements a diffeomorphic transformation on the physical space. The deformation fields encoding the direct and inverse transformations share the same domain discretization (both the discretization grid shape and voxel-to-space matrix). The input coordinates (physical coordinates) are first aligned using prealign, and then displaced using the corresponding vector field interpolated at the aligned coordinates.
allocate
()Creates a zero displacement field
Creates a zero displacement field (the identity transformation).
compute_inversion_error
()Inversion error of the displacement fields
Estimates the inversion error of the displacement fields by computing statistics of the residual vectors obtained after composing the forward and backward displacement fields.
Notes
Since the forward and backward displacement fields have the same discretization, the final composition is given by
comp[i] = forward[ i + Dinv * backward[i]]
where Dinv is the space-to-grid transformation of the displacement fields
expand_fields
(expand_factors, new_shape)Expands the displacement fields from current shape to new_shape
Up-samples the discretization of the displacement fields to be of new_shape shape.
get_backward_field
()Deformation field to transform an image in the backward direction
Returns the deformation field that must be used to warp an image under this transformation in the backward direction (note the ‘is_inverse’ flag).
get_forward_field
()Deformation field to transform an image in the forward direction
Returns the deformation field that must be used to warp an image under this transformation in the forward direction (note the ‘is_inverse’ flag).
get_simplified_transform()
Constructs a simplified version of this Diffeomorphic Map
The simplified version incorporates the pre-align transform, as well as the domain and codomain affine transforms into the displacement field. The resulting transformation may be regarded as operating on the image spaces given by the domain and codomain discretization. As a result, self.prealign, self.disp_grid2world, self.domain_grid2world and self.codomain affine will be None (denoting Identity) in the resulting diffeomorphic map.
interpret_matrix
(obj)Try to interpret obj as a matrix
Some operations are performed faster if we know in advance if a matrix is the identity (so we can skip the actual matrix-vector multiplication). This function returns None if the given object is None or the ‘identity’ string. It returns the same object if it is a numpy array. It raises an exception otherwise.
inverse
()Inverse of this DiffeomorphicMap instance
Returns a diffeomorphic map object representing the inverse of this transformation. The internal arrays are not copied but just referenced.
shallow_copy
()Shallow copy of this DiffeomorphicMap instance
Creates a shallow copy of this diffeomorphic map (the arrays are not copied but just referenced)
transform
(image, interpolation='linear', image_world2grid=None, out_shape=None, out_grid2world=None)Warps an image in the forward direction
Transforms the input image under this transformation in the forward direction. It uses the “is_inverse” flag to switch between “forward” and “backward” (if is_inverse is False, then transform(…) warps the image forwards, else it warps the image backwards).
Notes
See _warp_forward and _warp_backward documentation for further information.
transform_inverse
(image, interpolation='linear', image_world2grid=None, out_shape=None, out_grid2world=None)Warps an image in the backward direction
Transforms the input image under this transformation in the backward direction. It uses the “is_inverse” flag to switch between “forward” and “backward” (if is_inverse is False, then transform_inverse(…) warps the image backwards, else it warps the image forwards)
Notes
See _warp_forward and _warp_backward documentation for further information.
warp_endomorphism
(phi)Composition of this DiffeomorphicMap with a given endomorphism
Creates a new DiffeomorphicMap C with the same properties as self and composes its displacement fields with phi’s corresponding fields. The resulting diffeomorphism is of the form C(x) = phi(self(x)) with inverse C^{-1}(y) = self^{-1}(phi^{-1}(y)). We assume that phi is an endomorphism with the same discretization and domain affine as self to ensure that the composition inherits self’s properties (we also assume that the pre-aligning matrix of phi is None or identity).
Notes
The problem with our current representation of a DiffeomorphicMap is that the set of Diffeomorphism that can be represented this way (a pre-aligning matrix followed by a non-linear endomorphism given as a displacement field) is not closed under the composition operation.
Supporting a general DiffeomorphicMap class, closed under composition, may be extremely costly computationally, and the kind of transformations we actually need for Avants’ mid-point algorithm (SyN) are much simpler.
DiffeomorphicRegistration
dipy.align.imwarp.
DiffeomorphicRegistration
(metric=None)Bases: abc.NewBase
Methods
get_map () |
Returns the resulting diffeomorphic map after optimization |
optimize () |
Starts the metric optimization |
set_level_iters (level_iters) |
Sets the number of iterations at each pyramid level |
__init__
(metric=None)Diffeomorphic Registration
This abstract class defines the interface to be implemented by any optimization algorithm for diffeomorphic registration.
optimize
()Starts the metric optimization
This is the main function each specialized class derived from this must implement. Upon completion, the deformation field must be available from the forward transformation model.
set_level_iters
(level_iters)Sets the number of iterations at each pyramid level
Establishes the maximum number of iterations to be performed at each level of the Gaussian pyramid, similar to ANTS.
ScaleSpace
dipy.align.imwarp.
ScaleSpace
(image, num_levels, image_grid2world=None, input_spacing=None, sigma_factor=0.2, mask0=False)Bases: object
Methods
get_affine (level) |
Voxel-to-space transformation at a given level |
get_affine_inv (level) |
Space-to-voxel transformation at a given level |
get_domain_shape (level) |
Shape the sub-sampled image must have at a particular level |
get_expand_factors (from_level, to_level) |
Ratio of voxel size from pyramid level from_level to to_level |
get_image (level) |
Smoothed image at a given level |
get_scaling (level) |
Adjustment factor for input-spacing to reflect voxel sizes at level |
get_sigmas (level) |
Smoothing parameters used at a given level |
get_spacing (level) |
Spacings the sub-sampled image must have at a particular level |
print_level (level) |
Prints properties of a pyramid level |
__init__
(image, num_levels, image_grid2world=None, input_spacing=None, sigma_factor=0.2, mask0=False)ScaleSpace
Computes the Scale Space representation of an image. The scale space is simply a list of images produced by smoothing the input image with a Gaussian kernel with increasing smoothing parameter. If the image’s voxels are isotropic, the smoothing will be the same along all directions: at level L = 0, 1, …, the sigma is given by \(s * ( 2^L - 1 )\). If the voxel dimensions are not isotropic, then the smoothing is weaker along low resolution directions.
Parameters: |
|
---|
get_affine
(level)Voxel-to-space transformation at a given level
Returns the voxel-to-space transformation associated with the sub-sampled image at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).
Parameters: |
|
---|---|
Returns: |
|
get_affine_inv
(level)Space-to-voxel transformation at a given level
Returns the space-to-voxel transformation associated with the sub-sampled image at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).
Parameters: |
|
---|---|
Returns: |
|
get_domain_shape
(level)Shape the sub-sampled image must have at a particular level
Returns the shape the sub-sampled image must have at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).
Parameters: |
|
---|---|
Returns: |
|
get_expand_factors
(from_level, to_level)Ratio of voxel size from pyramid level from_level to to_level
Given two scale space resolutions a = from_level, b = to_level, returns the ratio of voxels size at level b to voxel size at level a (the factor that must be used to multiply voxels at level a to ‘expand’ them to level b).
Parameters: |
|
---|---|
Returns: |
|
get_image
(level)Smoothed image at a given level
Returns the smoothed image at the requested level in the Scale Space.
Parameters: |
|
---|---|
Returns: |
|
get_scaling
(level)Adjustment factor for input-spacing to reflect voxel sizes at level
Returns the scaling factor that needs to be applied to the input spacing (the voxel sizes of the image at level 0 of the scale space) to transform them to voxel sizes at the requested level.
Parameters: |
|
---|---|
Returns: |
|
get_sigmas
(level)Smoothing parameters used at a given level
Returns the smoothing parameters (a scalar for each axis) used at the requested level of the scale space
Parameters: |
|
---|---|
Returns: |
|
get_spacing
(level)Spacings the sub-sampled image must have at a particular level
Returns the spacings (voxel sizes) the sub-sampled image must have at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).
Parameters: |
|
---|---|
Returns: |
|
SymmetricDiffeomorphicRegistration
dipy.align.imwarp.SymmetricDiffeomorphicRegistration(metric, level_iters=None, step_length=0.25, ss_sigma_factor=0.2, opt_tol=1e-05, inv_iter=20, inv_tol=0.001, callback=None)
Bases: dipy.align.imwarp.DiffeomorphicRegistration
Methods
get_map () |
Returns the resulting diffeomorphic map registering the moving image towards the static image. |
optimize (static, moving[, …]) |
Starts the optimization |
set_level_iters (level_iters) |
Sets the number of iterations at each pyramid level |
update (current_displacement, …) |
Composition of the current displacement field with the given field |
__init__(metric, level_iters=None, step_length=0.25, ss_sigma_factor=0.2, opt_tol=1e-05, inv_iter=20, inv_tol=0.001, callback=None)
Symmetric Diffeomorphic Registration (SyN) Algorithm
Performs the multi-resolution optimization algorithm for non-linear registration using a given similarity metric.
get_map()
Returns the resulting diffeomorphic map: the DiffeomorphicMap registering the moving image towards the static image.
optimize(static, moving, static_grid2world=None, moving_grid2world=None, prealign=None)
Starts the optimization
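For orientation, a minimal sketch on a synthetic mono-modal pair. The binary boxes, the CCMetric choice, and the very small level_iters are assumptions made only to keep the example fast; real data typically needs far more iterations:
>>> import numpy as np
>>> from dipy.align.imwarp import SymmetricDiffeomorphicRegistration
>>> from dipy.align.metrics import CCMetric
>>> static = np.zeros((48, 48, 48)); static[16:32, 16:32, 16:32] = 1.0
>>> moving = np.zeros((48, 48, 48)); moving[18:34, 16:32, 16:32] = 1.0
>>> metric = CCMetric(3)   # 3D normalized cross-correlation
>>> sdr = SymmetricDiffeomorphicRegistration(metric, level_iters=[10, 10, 5])
>>> mapping = sdr.optimize(static, moving)   # grid-to-world matrices default to identity
>>> warped_moving = mapping.transform(moving)           # moving warped towards static
>>> warped_static = mapping.transform_inverse(static)   # static warped towards moving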
update(current_displacement, new_displacement, disp_world2grid, time_scaling)
Composition of the current displacement field with the given field
Interpolates new displacement at the locations defined by current_displacement. Equivalently, computes the composition C of the given displacement fields as C(x) = B(A(x)), where A is current_displacement and B is new_displacement. This function is intended to be used with deformation fields of the same sampling (e.g. to be called by a registration algorithm).
dipy.align.imwarp.
get_direction_and_spacings
(affine, dim)Extracts the rotational and spacing components from a matrix
Extracts the rotational and spacing (voxel dimensions) components from a matrix. An image gradient represents the local variation of the image’s gray values per voxel. Since we are iterating on the physical space, we need to compute the gradients as variation per millimeter, so we need to divide each gradient’s component by the voxel size along the corresponding axis, that’s what the spacings are used for. Since the image’s gradients are oriented along the grid axes, we also need to re-orient the gradients to be given in physical space coordinates.
Parameters: |
|
---|---|
Returns: |
|
dipy.align.imwarp.mult_aff(A, B)
Returns the matrix product A.dot(B) considering None as the identity
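A brief sketch of the None-as-identity convention; the behavior shown when both arguments are None is inferred from that convention rather than stated on this page:
>>> import numpy as np
>>> from dipy.align.imwarp import mult_aff
>>> A = np.diag([2.0, 2.0, 2.0, 1.0])
>>> B = np.eye(4); B[:3, 3] = [1.0, 0.0, 0.0]
>>> C = mult_aff(A, B)                        # ordinary matrix product A.dot(B)
>>> np.array_equal(mult_aff(A, None), A)      # None acts as the identity
True
>>> mult_aff(None, None) is None
True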
CCMetric
dipy.align.metrics.
CCMetric
(dim, sigma_diff=2.0, radius=4)Bases: dipy.align.metrics.SimilarityMetric
Methods
compute_backward () |
Computes one step bringing the static image towards the moving. |
compute_forward () |
Computes one step bringing the moving image towards the static. |
free_iteration () |
Frees the resources allocated during initialization |
get_energy () |
Numerical value assigned by this metric to the current image pair |
initialize_iteration () |
Prepares the metric to compute one displacement field iteration. |
set_levels_above (levels) |
Informs the metric how many pyramid levels are above the current one |
set_levels_below (levels) |
Informs the metric how many pyramid levels are below the current one |
set_moving_image (moving_image, …) |
Sets the moving image being compared against the static one. |
set_static_image (static_image, …) |
Sets the static image being compared against the moving one. |
use_moving_image_dynamics (…) |
This is called by the optimizer just after setting the moving image |
use_static_image_dynamics (…) |
This is called by the optimizer just after setting the static image. |
__init__
(dim, sigma_diff=2.0, radius=4)Normalized Cross-Correlation Similarity metric.
Parameters: |
|
---|
compute_backward
()Computes one step bringing the static image towards the moving.
Computes the update displacement field to be used for registration of the static image towards the moving image
compute_forward
()Computes one step bringing the moving image towards the static.
Computes the update displacement field to be used for registration of the moving image towards the static image
get_energy
()Numerical value assigned by this metric to the current image pair
Returns the Cross Correlation (data term) energy computed at the largest iteration
initialize_iteration
()Prepares the metric to compute one displacement field iteration.
Pre-computes the cross-correlation factors for efficient computation of the gradient of the Cross Correlation w.r.t. the displacement field. It also pre-computes the image gradients in the physical space by re-orienting the gradients in the voxel space using the corresponding affine transformations.
EMMetric
dipy.align.metrics.
EMMetric
(dim, smooth=1.0, inner_iter=5, q_levels=256, double_gradient=True, step_type='gauss_newton')Bases: dipy.align.metrics.SimilarityMetric
Methods
compute_backward () |
Computes one step bringing the static image towards the moving. |
compute_demons_step ([forward_step]) |
Demons step for EM metric |
compute_forward () |
Computes one step bringing the reference image towards the static. |
compute_gauss_newton_step ([forward_step]) |
Computes the Gauss-Newton energy minimization step |
free_iteration () |
Frees the resources allocated during initialization |
get_energy () |
The numerical value assigned by this metric to the current image pair |
initialize_iteration () |
Prepares the metric to compute one displacement field iteration. |
set_levels_above (levels) |
Informs the metric how many pyramid levels are above the current one |
set_levels_below (levels) |
Informs the metric how many pyramid levels are below the current one |
set_moving_image (moving_image, …) |
Sets the moving image being compared against the static one. |
set_static_image (static_image, …) |
Sets the static image being compared against the moving one. |
use_moving_image_dynamics (…) |
This is called by the optimizer just after setting the moving image. |
use_static_image_dynamics (…) |
This is called by the optimizer just after setting the static image. |
__init__
(dim, smooth=1.0, inner_iter=5, q_levels=256, double_gradient=True, step_type='gauss_newton')Expectation-Maximization Metric
Similarity metric based on the Expectation-Maximization algorithm to handle multi-modal images. The transfer function is modeled as a set of hidden random variables that are estimated at each iteration of the algorithm.
Parameters: |
|
---|
compute_backward
()Computes one step bringing the static image towards the moving.
Computes the update displacement field to be used for registration of the static image towards the moving image
compute_demons_step
(forward_step=True)Demons step for EM metric
Parameters: |
|
---|---|
Returns: |
|
compute_forward
()Computes one step bringing the reference image towards the static.
Computes the forward update field to register the moving image towards the static image in a gradient-based optimization algorithm
compute_gauss_newton_step
(forward_step=True)Computes the Gauss-Newton energy minimization step
Computes the Newton step to minimize this energy, i.e., minimizes the linearized energy function with respect to the regularized displacement field (this step does not require post-smoothing, as opposed to the demons step, which does not include regularization). To accelerate convergence we use the multi-grid Gauss-Seidel algorithm proposed by Bruhn and Weickert [Bruhn05].
Parameters: |
|
---|---|
Returns: |
|
References
get_energy
()The numerical value assigned by this metric to the current image pair
Returns the EM (data term) energy computed at the largest iteration
initialize_iteration
()Prepares the metric to compute one displacement field iteration.
Pre-computes the transfer functions (hidden random variables) and variances of the estimators. Also pre-computes the gradient of both input images. Note that once the images are transformed to the opposite modality, the gradient of the transformed images can be used with the gradient of the corresponding modality in the same fashion as diff-demons does for mono-modality images. If the flag self.use_double_gradient is True these gradients are averaged.
use_moving_image_dynamics
(original_moving_image, transformation)This is called by the optimizer just after setting the moving image.
EMMetric takes advantage of the image dynamics by computing the current moving image mask from the original_moving_image mask (warped by nearest neighbor interpolation)
Parameters: |
|
---|
use_static_image_dynamics
(original_static_image, transformation)This is called by the optimizer just after setting the static image.
EMMetric takes advantage of the image dynamics by computing the current static image mask from the original_static_image mask (warped by nearest neighbor interpolation)
Parameters: |
|
---|
SSDMetric
dipy.align.metrics.
SSDMetric
(dim, smooth=4, inner_iter=10, step_type='demons')Bases: dipy.align.metrics.SimilarityMetric
Methods
compute_backward () |
Computes one step bringing the static image towards the moving. |
compute_demons_step ([forward_step]) |
Demons step for SSD metric |
compute_forward () |
Computes one step bringing the reference image towards the static. |
compute_gauss_newton_step ([forward_step]) |
Computes the Gauss-Newton energy minimization step |
free_iteration () |
Nothing to free for the SSD metric |
get_energy () |
The numerical value assigned by this metric to the current image pair |
initialize_iteration () |
Prepares the metric to compute one displacement field iteration. |
set_levels_above (levels) |
Informs the metric how many pyramid levels are above the current one |
set_levels_below (levels) |
Informs the metric how many pyramid levels are below the current one |
set_moving_image (moving_image, …) |
Sets the moving image being compared against the static one. |
set_static_image (static_image, …) |
Sets the static image being compared against the moving one. |
use_moving_image_dynamics (…) |
This is called by the optimizer just after setting the moving image |
use_static_image_dynamics (…) |
This is called by the optimizer just after setting the static image. |
__init__
(dim, smooth=4, inner_iter=10, step_type='demons')Sum of Squared Differences (SSD) Metric
Similarity metric for (mono-modal) nonlinear image registration defined by the sum of squared differences (SSD)
Parameters: |
|
---|
compute_backward
()Computes one step bringing the static image towards the moving.
Computes the update displacement field to be used for registration of the static image towards the moving image
compute_demons_step
(forward_step=True)Demons step for SSD metric
Computes the demons step proposed by Vercauteren et al. [Vercauteren09] for the SSD metric.
Parameters: |
|
---|---|
Returns: |
|
References
compute_forward
()Computes one step bringing the reference image towards the static.
Computes the update displacement field to be used for registration of the moving image towards the static image
compute_gauss_newton_step
(forward_step=True)Computes the Gauss-Newton energy minimization step
Minimizes the linearized energy function (Newton step) defined by the sum of squared differences of corresponding pixels of the input images with respect to the displacement field.
Parameters: |
|
---|---|
Returns: |
|
SimilarityMetric
dipy.align.metrics.
SimilarityMetric
(dim)Bases: abc.NewBase
Methods
compute_backward () |
Computes one step bringing the static image towards the moving. |
compute_forward () |
Computes one step bringing the reference image towards the static. |
free_iteration () |
Releases the resources no longer needed by the metric |
get_energy () |
Numerical value assigned by this metric to the current image pair |
initialize_iteration () |
Prepares the metric to compute one displacement field iteration. |
set_levels_above (levels) |
Informs the metric how many pyramid levels are above the current one |
set_levels_below (levels) |
Informs the metric how many pyramid levels are below the current one |
set_moving_image (moving_image, …) |
Sets the moving image being compared against the static one. |
set_static_image (static_image, …) |
Sets the static image being compared against the moving one. |
use_moving_image_dynamics (…) |
This is called by the optimizer just after setting the moving image |
use_static_image_dynamics (…) |
This is called by the optimizer just after setting the static image. |
__init__
(dim)Similarity Metric abstract class
A similarity metric is in charge of keeping track of the numerical value of the similarity (or distance) between the two given images. It also computes the update field for the forward and inverse displacement fields to be used in a gradient-based optimization algorithm. Note that this metric does not depend on any transformation (affine or non-linear) so it assumes the static and moving images are already warped
Parameters: |
|
---|
compute_backward
()Computes one step bringing the static image towards the moving.
Computes the backward update field to register the static image towards the moving image in a gradient-based optimization algorithm
compute_forward
()Computes one step bringing the reference image towards the static.
Computes the forward update field to register the moving image towards the static image in a gradient-based optimization algorithm
free_iteration
()Releases the resources no longer needed by the metric
This method is called by the RegistrationOptimizer after the required iterations have been computed (forward and / or backward) so that the SimilarityMetric can safely delete any data it computed as part of the initialization
get_energy
()Numerical value assigned by this metric to the current image pair
Must return the numeric value of the similarity between the given static and moving images
initialize_iteration
()Prepares the metric to compute one displacement field iteration.
This method will be called before any compute_forward or compute_backward call; this allows the Metric to pre-compute any useful information for speeding up the update computations. This initialization was needed in ANTS because the updates are called once per voxel. In Python this is impractical, though.
set_levels_above
(levels)Informs the metric how many pyramid levels are above the current one
Informs this metric the number of pyramid levels above the current one. The metric may change its behavior (e.g. number of inner iterations) accordingly
Parameters: |
|
---|
set_levels_below
(levels)Informs the metric how many pyramid levels are below the current one
Informs this metric the number of pyramid levels below the current one. The metric may change its behavior (e.g. number of inner iterations) accordingly
Parameters: |
|
---|
set_moving_image
(moving_image, moving_affine, moving_spacing, moving_direction)Sets the moving image being compared against the static one.
Sets the moving image. The default behavior (of this abstract class) is simply to assign the reference to an attribute, but generalizations of the metric may need to perform other operations
Parameters: |
|
---|
set_static_image
(static_image, static_affine, static_spacing, static_direction)Sets the static image being compared against the moving one.
Sets the static image. The default behavior (of this abstract class) is simply to assign the reference to an attribute, but generalizations of the metric may need to perform other operations
Parameters: |
|
---|
use_moving_image_dynamics
(original_moving_image, transformation)This is called by the optimizer just after setting the moving image
This method allows the metric to compute any useful information from knowing how the current moving image was generated (as the transformation of an original moving image). This method is called by the optimizer just after it sets the moving image. Transformation will be an instance of DiffeomorphicMap or None if the original_moving_image equals self.moving_image.
Parameters: |
|
---|
use_static_image_dynamics
(original_static_image, transformation)This is called by the optimizer just after setting the static image.
This method allows the metric to compute any useful information from knowing how the current static image was generated (as the transformation of an original static image). This method is called by the optimizer just after it sets the static image. Transformation will be an instance of DiffeomorphicMap or None if the original_static_image equals self.static_image.
Parameters: |
|
---|
dipy.align.metrics.
gradient
(f, *varargs, **kwargs)Return the gradient of an N-dimensional array.
The gradient is computed using second order accurate central differences in the interior points and either first or second order accurate one-sides (forward or backwards) differences at the boundaries. The returned gradient hence has the same shape as the input array.
Parameters: |
|
---|---|
Returns: |
|
Notes
Assuming that \(f\in C^{3}\) (i.e., \(f\) has at least 3 continuous derivatives) and let \(h_{*}\) be a non-homogeneous stepsize, we minimize the “consistency error” \(\eta_{i}\) between the true gradient and its estimate from a linear combination of the neighboring grid-points:
\[\eta_{i} = f_{i}^{(1)} - \left[ \alpha f(x_{i}) + \beta f(x_{i} + h_{d}) + \gamma f(x_{i} - h_{s}) \right]\]
By substituting \(f(x_{i} + h_{d})\) and \(f(x_{i} - h_{s})\) with their Taylor series expansion, this translates into solving the following linear system:
\[\begin{cases} \alpha + \beta + \gamma = 0 \\ \beta h_{d} - \gamma h_{s} = 1 \\ \beta h_{d}^{2} + \gamma h_{s}^{2} = 0 \end{cases}\]
The resulting approximation of \(f_{i}^{(1)}\) is the following:
\[\hat f_{i}^{(1)} = \frac{h_{s}^{2} f(x_{i} + h_{d}) + (h_{d}^{2} - h_{s}^{2}) f(x_{i}) - h_{d}^{2} f(x_{i} - h_{s})}{h_{s} h_{d} (h_{d} + h_{s})} + \mathcal{O}\left(\frac{h_{d} h_{s}^{2} + h_{s} h_{d}^{2}}{h_{d} + h_{s}}\right)\]
It is worth noting that if \(h_{s}=h_{d}\) (i.e., data are evenly spaced) we find the standard second order approximation:
\[\hat f_{i}^{(1)} = \frac{f(x_{i+1}) - f(x_{i-1})}{2h} + \mathcal{O}(h^{2})\]
With a similar procedure the forward/backward approximations used for boundaries can be derived.
References
[1] Quarteroni A., Sacco R., Saleri F. (2007) Numerical Mathematics (Texts in Applied Mathematics). New York: Springer.
[2] Durran D. R. (1999) Numerical Methods for Wave Equations in Geophysical Fluid Dynamics. New York: Springer.
[3] Fornberg B. (1988) Generation of Finite Difference Formulas on Arbitrarily Spaced Grids, Mathematics of Computation 51, no. 184: 699-706.
Examples
>>> f = np.array([1, 2, 4, 7, 11, 16], dtype=float)
>>> np.gradient(f)
array([ 1. , 1.5, 2.5, 3.5, 4.5, 5. ])
>>> np.gradient(f, 2)
array([ 0.5 , 0.75, 1.25, 1.75, 2.25, 2.5 ])
Spacing can be also specified with an array that represents the coordinates of the values F along the dimensions. For instance a uniform spacing:
>>> x = np.arange(f.size)
>>> np.gradient(f, x)
array([ 1. , 1.5, 2.5, 3.5, 4.5, 5. ])
Or a non uniform one:
>>> x = np.array([0., 1., 1.5, 3.5, 4., 6.], dtype=float)
>>> np.gradient(f, x)
array([ 1. , 3. , 3.5, 6.7, 6.9, 2.5])
For two dimensional arrays, the return will be two arrays ordered by axis. In this example the first array stands for the gradient in rows and the second one in columns direction:
>>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=float))
[array([[ 2., 2., -1.],
[ 2., 2., -1.]]), array([[ 1. , 2.5, 4. ],
[ 1. , 1. , 1. ]])]
In this example the spacing is also specified: uniform for axis=0 and non uniform for axis=1
>>> dx = 2.
>>> y = [1., 1.5, 3.5]
>>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=float), dx, y)
[array([[ 1. , 1. , -0.5],
[ 1. , 1. , -0.5]]), array([[ 2. , 2. , 2. ],
[ 2. , 1.7, 0.5]])]
It is possible to specify how boundaries are treated using edge_order
>>> x = np.array([0, 1, 2, 3, 4])
>>> f = x**2
>>> np.gradient(f, edge_order=1)
array([ 1., 2., 4., 6., 7.])
>>> np.gradient(f, edge_order=2)
array([-0., 2., 4., 6., 8.])
The axis keyword can be used to specify a subset of axes of which the gradient is calculated
>>> np.gradient(np.array([[1, 2, 6], [3, 4, 5]], dtype=float), axis=0)
array([[ 2., 2., -1.],
[ 2., 2., -1.]])
dipy.align.metrics.
v_cycle_2d
(n, k, delta_field, sigma_sq_field, gradient_field, target, lambda_param, displacement, depth=0)Multi-resolution Gauss-Seidel solver using V-type cycles
Multi-resolution Gauss-Seidel solver: solves the Gauss-Newton linear system by first filtering (GS-iterate) the current level, then solving for the residual at a coarser resolution, and finally refining the solution at the current resolution. This scheme corresponds to the V-cycle proposed by Bruhn and Weickert [Bruhn05].
Parameters: |
|
---|---|
Returns: |
|
References
[Bruhn05] | Andres Bruhn and Joachim Weickert, “Towards ultimate motion estimation: combining highest accuracy with real-time performance”, 10th IEEE International Conference on Computer Vision (ICCV), 2005. |
dipy.align.metrics.
v_cycle_3d
(n, k, delta_field, sigma_sq_field, gradient_field, target, lambda_param, displacement, depth=0)Multi-resolution Gauss-Seidel solver using V-type cycles
Multi-resolution Gauss-Seidel solver: solves the linear system by first filtering (GS-iterate) the current level, then solving for the residual at a coarser resolution, and finally refining the solution at the current resolution. This scheme corresponds to the V-cycle proposed by Bruhn and Weickert [Bruhn05].
Parameters: |
|
---|---|
Returns: |
|
References
[Bruhn05] | Andres Bruhn and Joachim Weickert, “Towards ultimate motion estimation: combining highest accuracy with real-time performance”, 10th IEEE International Conference on Computer Vision (ICCV), 2005. |
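Both solvers follow the same recursion. As a hedged illustration of the general V-cycle idea only (a toy 1-D Poisson problem -u'' = f with zero Dirichlet boundaries, not DIPY's registration-specific system; smooth, restrict and prolong are hypothetical helpers defined here):
import numpy as np

def smooth(u, f, h, iters=2):
    # A few Gauss-Seidel sweeps for -u'' = f on a uniform 1-D grid (zero Dirichlet BCs).
    for _ in range(iters):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    # r = f - A u, where A is the standard three-point Laplacian.
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    # Injection: keep every other point (grid sizes are 2**k + 1).
    return r[::2].copy()

def prolong(e):
    # Linear interpolation back to the fine grid.
    fine = np.zeros(2 * len(e) - 1)
    fine[::2] = e
    fine[1::2] = 0.5 * (e[:-1] + e[1:])
    return fine

def v_cycle(u, f, h, depth=0, max_depth=3):
    # Pre-smooth, recurse on the coarse-grid residual equation, correct, post-smooth.
    u = smooth(u, f, h)
    if depth < max_depth and len(u) > 3:
        coarse_r = restrict(residual(u, f, h))
        e = v_cycle(np.zeros_like(coarse_r), coarse_r, 2.0 * h, depth + 1, max_depth)
        u = u + prolong(e)
    return smooth(u, f, h)

# Example: a few V-cycles for f = pi^2 sin(pi x), whose exact solution is sin(pi x).
n = 65                              # 2**6 + 1 grid points
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)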
dipy.align.reslice.
affine_transform
(input, matrix, offset=0.0, output_shape=None, output=None, order=3, mode='constant', cval=0.0, prefilter=True)Apply an affine transformation.
Given an output image pixel index vector o, the pixel value is determined from the input image at position np.dot(matrix, o) + offset.
Parameters: |
|
---|---|
Returns: |
|
Notes
The given matrix and offset are used to find for each point in the output the corresponding coordinates in the input by an affine transformation. The value of the input at those coordinates is determined by spline interpolation of the requested order. Points outside the boundaries of the input are filled according to the given mode.
Changed in version 0.18.0: Previously, the exact interpretation of the affine transformation depended on whether the matrix was supplied as a one-dimensional or two-dimensional array. If a one-dimensional array was supplied to the matrix parameter, the output pixel value at index o was determined from the input image at position matrix * (o + offset).
References
[1] | https://en.wikipedia.org/wiki/Homogeneous_coordinates |
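A minimal sketch of the matrix/offset convention described above, using an identity matrix and a one-column offset (this is the SciPy function listed here, not DIPY-specific; values are illustrative):
>>> import numpy as np
>>> from scipy.ndimage import affine_transform
>>> a = np.arange(12.).reshape(3, 4)
>>> # output[i, j] is taken from the input at np.dot(I, (i, j)) + (0, 1), i.e. input[i, j + 1]
>>> shifted = affine_transform(a, np.eye(2), offset=(0, 1), order=0)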
dipy.align.reslice.
reslice
(data, affine, zooms, new_zooms, order=1, mode='constant', cval=0, num_processes=1)Reslice data with new voxel resolution defined by new_zooms
Parameters: |
|
---|---|
Returns: |
|
Examples
>>> import nibabel as nib
>>> from dipy.align.reslice import reslice
>>> from dipy.data import get_fnames
>>> fimg = get_fnames('aniso_vox')
>>> img = nib.load(fimg)
>>> data = img.get_data()
>>> data.shape == (58, 58, 24)
True
>>> affine = img.affine
>>> zooms = img.header.get_zooms()[:3]
>>> zooms
(4.0, 4.0, 5.0)
>>> new_zooms = (3.,3.,3.)
>>> new_zooms
(3.0, 3.0, 3.0)
>>> data2, affine2 = reslice(data, affine, zooms, new_zooms)
>>> data2.shape == (77, 77, 40)
True
IsotropicScaleSpace
dipy.align.scalespace.
IsotropicScaleSpace
(image, factors, sigmas, image_grid2world=None, input_spacing=None, mask0=False)Bases: dipy.align.scalespace.ScaleSpace
Methods
get_affine (level) |
Voxel-to-space transformation at a given level |
get_affine_inv (level) |
Space-to-voxel transformation at a given level |
get_domain_shape (level) |
Shape the sub-sampled image must have at a particular level |
get_expand_factors (from_level, to_level) |
Ratio of voxel size from pyramid level from_level to to_level |
get_image (level) |
Smoothed image at a given level |
get_scaling (level) |
Adjustment factor for input-spacing to reflect voxel sizes at level |
get_sigmas (level) |
Smoothing parameters used at a given level |
get_spacing (level) |
Spacings the sub-sampled image must have at a particular level |
print_level (level) |
Prints properties of a pyramid level |
__init__
(image, factors, sigmas, image_grid2world=None, input_spacing=None, mask0=False)IsotropicScaleSpace
Computes the Scale Space representation of an image using isotropic smoothing kernels for all scales. The scale space is simply a list of images produced by smoothing the input image with a Gaussian kernel with different smoothing parameters.
This specialization of ScaleSpace allows the user to provide custom scale and smoothing factors for all scales.
Parameters: |
|
---|
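A short, hedged usage sketch with a random 3-D volume; the factors and sigmas below are purely illustrative values, not recommended settings:
>>> import numpy as np
>>> from dipy.align.scalespace import IsotropicScaleSpace
>>> image = np.random.rand(32, 32, 32).astype(np.float32)
>>> factors = [4, 2, 1]        # illustrative sub-sampling factors per level
>>> sigmas = [3.0, 1.0, 0.0]   # illustrative isotropic smoothing per level
>>> iss = IsotropicScaleSpace(image, factors, sigmas)
>>> smoothed = iss.get_image(1)        # smoothed image at level 1
>>> shape = iss.get_domain_shape(1)    # shape a sub-sampled image would need at level 1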
ScaleSpace
dipy.align.scalespace.
ScaleSpace
(image, num_levels, image_grid2world=None, input_spacing=None, sigma_factor=0.2, mask0=False)Bases: object
Methods
get_affine (level) |
Voxel-to-space transformation at a given level |
get_affine_inv (level) |
Space-to-voxel transformation at a given level |
get_domain_shape (level) |
Shape the sub-sampled image must have at a particular level |
get_expand_factors (from_level, to_level) |
Ratio of voxel size from pyramid level from_level to to_level |
get_image (level) |
Smoothed image at a given level |
get_scaling (level) |
Adjustment factor for input-spacing to reflect voxel sizes at level |
get_sigmas (level) |
Smoothing parameters used at a given level |
get_spacing (level) |
Spacings the sub-sampled image must have at a particular level |
print_level (level) |
Prints properties of a pyramid level |
__init__
(image, num_levels, image_grid2world=None, input_spacing=None, sigma_factor=0.2, mask0=False)ScaleSpace
Computes the Scale Space representation of an image. The scale space is simply a list of images produced by smoothing the input image with a Gaussian kernel with increasing smoothing parameter. If the image’s voxels are isotropic, the smoothing will be the same along all directions: at level L = 0, 1, …, the sigma is given by \(s * ( 2^L - 1 )\). If the voxel dimensions are not isotropic, then the smoothing is weaker along low resolution directions.
Parameters: |
|
---|
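A short sketch of constructing the pyramid and querying one level; by the formula above, with sigma_factor s = 0.2 and isotropic voxels, level L = 2 uses a sigma of 0.2 * (2^2 - 1) = 0.6 (the array and parameter values are illustrative):
>>> import numpy as np
>>> from dipy.align.scalespace import ScaleSpace
>>> image = np.random.rand(32, 32, 32).astype(np.float32)
>>> ss = ScaleSpace(image, num_levels=3, sigma_factor=0.2)
>>> smoothed = ss.get_image(2)       # most heavily smoothed level
>>> sigmas = ss.get_sigmas(2)        # per-axis smoothing used at level 2
>>> spacing = ss.get_spacing(2)      # voxel sizes a sub-sampled image would have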
get_affine
(level)Voxel-to-space transformation at a given level
Returns the voxel-to-space transformation associated with the sub-sampled image at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).
Parameters: |
|
---|---|
Returns: |
|
get_affine_inv
(level)Space-to-voxel transformation at a given level
Returns the space-to-voxel transformation associated with the sub-sampled image at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).
Parameters: |
|
---|---|
Returns: |
|
get_domain_shape
(level)Shape the sub-sampled image must have at a particular level
Returns the shape the sub-sampled image must have at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).
Parameters: |
|
---|---|
Returns: |
|
get_expand_factors
(from_level, to_level)Ratio of voxel size from pyramid level from_level to to_level
Given two scale space resolutions a = from_level, b = to_level, returns the ratio of voxel size at level b to voxel size at level a (the factor that must be used to multiply voxels at level a to ‘expand’ them to level b).
Parameters: |
|
---|---|
Returns: |
|
get_image
(level)Smoothed image at a given level
Returns the smoothed image at the requested level in the Scale Space.
Parameters: |
|
---|---|
Returns: |
|
get_scaling
(level)Adjustment factor for input-spacing to reflect voxel sizes at level
Returns the scaling factor that needs to be applied to the input spacing (the voxel sizes of the image at level 0 of the scale space) to transform them to voxel sizes at the requested level.
Parameters: |
|
---|---|
Returns: |
|
get_sigmas
(level)Smoothing parameters used at a given level
Returns the smoothing parameters (a scalar for each axis) used at the requested level of the scale space
Parameters: |
|
---|---|
Returns: |
|
get_spacing
(level)Spacings the sub-sampled image must have at a particular level
Returns the spacings (voxel sizes) the sub-sampled image must have at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).
Parameters: |
|
---|---|
Returns: |
|
BundleMinDistanceAsymmetricMetric
dipy.align.streamlinear.
BundleMinDistanceAsymmetricMetric
(num_threads=None)Bases: dipy.align.streamlinear.BundleMinDistanceMetric
Asymmetric Bundle-based Minimum distance
This is a cost function that can be used by the StreamlineLinearRegistration class.
Methods
distance (xopt) |
Distance calculated from this Metric |
setup (static, moving) |
Setup static and moving sets of streamlines |
__init__
(num_threads=None)An abstract class for the metric used for streamline registration
If the two sets of streamlines match exactly, then the distance method of this object should return its minimum value.
Parameters: |
|
---|
BundleMinDistanceMatrixMetric
dipy.align.streamlinear.
BundleMinDistanceMatrixMetric
(num_threads=None)Bases: dipy.align.streamlinear.StreamlineDistanceMetric
Bundle-based Minimum Distance aka BMD
This is the cost function used by the StreamlineLinearRegistration
Notes
The difference with BundleMinDistanceMetric is that this creates the entire distance matrix and therefore requires more memory.
Methods
setup(static, moving) | |
distance(xopt) |
__init__
(num_threads=None)An abstract class for the metric used for streamline registration
If the two sets of streamlines match exactly, then the distance method of this object should return its minimum value.
Parameters: |
|
---|
distance
(xopt)Distance calculated from this Metric
Parameters: |
|
---|
setup
(static, moving)Setup static and moving sets of streamlines
Parameters: |
|
---|
Notes
Call this after the object is initialized and before calling distance.
num_threads is not used in this class. Use BundleMinDistanceMetric for a faster, threaded and less memory-hungry metric.
BundleMinDistanceMetric
dipy.align.streamlinear.
BundleMinDistanceMetric
(num_threads=None)Bases: dipy.align.streamlinear.StreamlineDistanceMetric
Bundle-based Minimum Distance aka BMD
This is the cost function used by the StreamlineLinearRegistration
References
[Garyfallidis14] | Garyfallidis et al., “Direct native-space fiber bundle alignment for group comparisons”, ISMRM, 2014. |
Methods
setup(static, moving) | |
distance(xopt) |
__init__
(num_threads=None)An abstract class for the metric used for streamline registration
If the two sets of streamlines match exactly, then the distance method of this object should return its minimum value.
Parameters: |
|
---|
distance
(xopt)Distance calculated from this Metric
Parameters: |
|
---|
setup
(static, moving)Setup static and moving sets of streamlines
Parameters: |
|
---|
Notes
Call this after the object is initialized and before calling distance.
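A minimal wiring sketch showing how this metric is passed to StreamlineLinearRegistration (all other settings left at their defaults):
>>> from dipy.align.streamlinear import (BundleMinDistanceMetric,
...                                      StreamlineLinearRegistration)
>>> metric = BundleMinDistanceMetric(num_threads=None)
>>> srr = StreamlineLinearRegistration(metric=metric, x0='rigid')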
BundleSumDistanceMatrixMetric
dipy.align.streamlinear.
BundleSumDistanceMatrixMetric
(num_threads=None)Bases: dipy.align.streamlinear.BundleMinDistanceMatrixMetric
Bundle-based Sum Distance aka BMD
This is a cost function that can be used by the StreamlineLinearRegistration class.
Notes
The difference with BundleMinDistanceMatrixMetric is that it uses the sum of the distance matrix rather than the sum of its minima.
Methods
setup(static, moving) | |
distance(xopt) |
__init__
(num_threads=None)An abstract class for the metric used for streamline registration
If the two sets of streamlines match exactly, then the distance method of this object should return its minimum value.
Parameters: |
|
---|
Optimizer
dipy.align.streamlinear.
Optimizer
(fun, x0, args=(), method='L-BFGS-B', jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None, evolution=False)Bases: object
Attributes: |
|
---|
Methods
print_summary |
__init__
(fun, x0, args=(), method='L-BFGS-B', jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None, evolution=False)A class for handling minimization of a scalar function of one or more variables.
Parameters: |
|
---|
See also
scipy.optimize.minimize
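A hedged sketch of minimizing a simple quadratic with this wrapper; the xopt and fopt result attributes are assumptions based on the usual DIPY Optimizer interface and are not guaranteed by the listing above:
>>> import numpy as np
>>> from dipy.align.streamlinear import Optimizer
>>> def sum_of_squares(x):
...     return np.sum(x ** 2)
>>> opt = Optimizer(sum_of_squares, x0=np.array([1.0, 2.0, 3.0]), method='L-BFGS-B')
>>> best_x = opt.xopt    # assumed attribute: optimal parameters
>>> best_f = opt.fopt    # assumed attribute: optimal function value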
StreamlineDistanceMetric
dipy.align.streamlinear.
StreamlineDistanceMetric
(num_threads=None)Bases: abc.NewBase
Methods
distance (xopt) |
calculate distance for current set of parameters |
setup |
__init__
(num_threads=None)An abstract class for the metric used for streamline registration
If the two sets of streamlines match exactly, then the distance method of this object should return its minimum value.
Parameters: |
|
---|
StreamlineLinearRegistration
dipy.align.streamlinear.
StreamlineLinearRegistration
(metric=None, x0='rigid', method='L-BFGS-B', bounds=None, verbose=False, options=None, evolution=False, num_threads=None)Bases: object
Methods
optimize (static, moving[, mat]) |
Find the minimum of the provided metric. |
__init__
(metric=None, x0='rigid', method='L-BFGS-B', bounds=None, verbose=False, options=None, evolution=False, num_threads=None)Linear registration of 2 sets of streamlines [Garyfallidis15].
Parameters: |
|
---|
References
[Garyfallidis15] | Garyfallidis et al. “Robust and efficient linear registration of white-matter fascicles in the space of streamlines”, NeuroImage, 117, 124–140, 2015 |
[Garyfallidis14] | Garyfallidis et al., “Direct native-space fiber bundle alignment for group comparisons”, ISMRM, 2014. |
[Garyfallidis17] | Garyfallidis et al. Recognition of white matter bundles using local and global streamline-based registration and clustering, Neuroimage, 2017. |
optimize
(static, moving, mat=None)Find the minimum of the provided metric.
Parameters: |
|
---|---|
Returns: |
|
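A minimal end-to-end sketch with two tiny synthetic bundles; both bundles are resampled to the same number of points before optimization, and the recovered map is applied with transform (see StreamlineRegistrationMap below):
>>> import numpy as np
>>> from dipy.align.streamlinear import (StreamlineLinearRegistration,
...                                      transform_streamlines)
>>> from dipy.tracking.streamline import set_number_of_points
>>> bundle = [np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2.]]),
...           np.array([[0, 1, 0], [1, 2, 1], [2, 3, 2.]])]
>>> static = set_number_of_points(bundle, 20)
>>> shift = np.eye(4)
>>> shift[:3, 3] = [2, 0, 0]                       # displace the moving bundle by 2 along x
>>> moving = transform_streamlines(static, shift)
>>> srr = StreamlineLinearRegistration(x0='rigid')
>>> srm = srr.optimize(static=static, moving=moving)
>>> moved = srm.transform(moving)                  # moving bundle mapped into static space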
StreamlineRegistrationMap
dipy.align.streamlinear.
StreamlineRegistrationMap
(matopt, xopt, fopt, matopt_history, funcs, iterations)Bases: object
Methods
transform (moving) |
Transform moving streamlines to the static. |
__init__
(matopt, xopt, fopt, matopt_history, funcs, iterations)A map holding the optimum affine matrix and some other parameters of the optimization
Parameters: |
|
---|
dipy.align.streamlinear.
bundle_min_distance
(t, static, moving)MDF-based pairwise distance optimization function (MIN)
We minimize the distance between moving streamlines as they align with the static streamlines.
Parameters: |
|
---|---|
Returns: |
|
dipy.align.streamlinear.
bundle_min_distance_asymmetric_fast
(t, static, moving, block_size)MDF-based pairwise distance optimization function (MIN)
We minimize the distance between moving streamlines as they align with the static streamlines.
Parameters: |
|
---|---|
Returns: |
|
dipy.align.streamlinear.
bundle_min_distance_fast
(t, static, moving, block_size, num_threads)MDF-based pairwise distance optimization function (MIN)
We minimize the distance between moving streamlines as they align with the static streamlines.
Parameters: |
|
---|---|
Returns: |
|
Notes
This is a faster implementation of bundle_min_distance, which requires that all the points of each streamline are allocated into an ndarray (of shape N*M by 3, with N the number of points per streamline and M the number of streamlines). This can be done by calling dipy.tracking.streamline.unlist_streamlines.
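A hedged sketch of the data preparation described in the note above; unlist_streamlines is assumed to live in dipy.tracking.streamline, and the 20-point resampling is illustrative:
>>> import numpy as np
>>> from dipy.tracking.streamline import set_number_of_points, unlist_streamlines
>>> bundle = [np.random.rand(12, 3), np.random.rand(15, 3)]
>>> resampled = set_number_of_points(bundle, 20)      # N = 20 points per streamline
>>> points, offsets = unlist_streamlines(resampled)   # points has shape (N * M, 3)
>>> points.shape
(40, 3)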
dipy.align.streamlinear.
bundle_sum_distance
(t, static, moving, num_threads=None)MDF distance optimization function (SUM)
We minimize the distance between moving streamlines as they align with the static streamlines.
Parameters: |
|
---|---|
Returns: |
|
dipy.align.streamlinear.
compose_matrix
(scale=None, shear=None, angles=None, translate=None, perspective=None)Return 4x4 transformation matrix from sequence of transformations.
Code modified from the work of Christoph Gohlke, available at http://www.lfd.uci.edu/~gohlke/code/transformations.py.html
This is the inverse of the decompose_matrix function.
Parameters: |
|
---|---|
Returns: |
|
Examples
>>> import math
>>> import numpy as np
>>> import dipy.core.geometry as gm
>>> scale = np.random.random(3) - 0.5
>>> shear = np.random.random(3) - 0.5
>>> angles = (np.random.random(3) - 0.5) * (2*math.pi)
>>> trans = np.random.random(3) - 0.5
>>> persp = np.random.random(4) - 0.5
>>> M0 = gm.compose_matrix(scale, shear, angles, trans, persp)
dipy.align.streamlinear.
compose_matrix44
(t, dtype=<class 'numpy.float64'>)Compose a 4x4 transformation matrix
Parameters: |
|
---|---|
Returns: |
|
dipy.align.streamlinear.
decompose_matrix
(matrix)Return sequence of transformations from transformation matrix.
Code modified from the excellent work of Christoph Gohlke, available at http://www.lfd.uci.edu/~gohlke/code/transformations.py.html
Parameters: |
|
---|---|
Returns: |
|
Raises: |
|
Examples
>>> import numpy as np
>>> T0=np.diag([2,1,1,1])
>>> scale, shear, angles, trans, persp = decompose_matrix(T0)
dipy.align.streamlinear.
decompose_matrix44
(mat, size=12)Given a 4x4 homogeneous matrix return the parameter vector
Parameters: |
|
---|---|
Returns: |
|
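A hedged round-trip sketch; the 6-element parameter ordering (three translations followed by three rotation angles in degrees) and the size=6 decomposition are assumptions based on the compose_matrix44 conventions, not guaranteed by the listings above:
>>> import numpy as np
>>> from dipy.align.streamlinear import compose_matrix44, decompose_matrix44
>>> t = np.array([10., 0., 0., 0., 0., 45.])   # assumed ordering: [tx, ty, tz, rx, ry, rz (degrees)]
>>> mat = compose_matrix44(t)                  # 4x4 homogeneous transform
>>> params = decompose_matrix44(mat, size=6)   # should approximately recover t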
dipy.align.streamlinear.
distance_matrix_mdf
()Minimum direct flipped distance matrix between two streamline sets
All streamlines need to have the same number of points
Parameters: |
|
---|---|
Returns: |
|
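A small sketch with two hand-made bundles whose streamlines all have the same number of points (dtype and coordinates are illustrative):
>>> import numpy as np
>>> from dipy.align.streamlinear import distance_matrix_mdf
>>> bundle_a = [np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype='f4')]
>>> bundle_b = [np.array([[0, 1, 0], [1, 1, 0], [2, 1, 0]], dtype='f4'),
...             np.array([[0, 2, 0], [1, 2, 0], [2, 2, 0]], dtype='f4')]
>>> D = distance_matrix_mdf(bundle_a, bundle_b)   # one row per streamline in bundle_a
>>> D.shape
(1, 2)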
dipy.align.streamlinear.
length
()Euclidean length of streamlines
Length is in mm only if streamlines are expressed in world coordinates.
Parameters: |
|
---|---|
Returns: |
|
Examples
>>> from dipy.tracking.streamline import length
>>> import numpy as np
>>> streamline = np.array([[1, 1, 1], [2, 3, 4], [0, 0, 0]])
>>> expected_length = np.sqrt([1+2**2+3**2, 2**2+3**2+4**2]).sum()
>>> length(streamline) == expected_length
True
>>> streamlines = [streamline, np.vstack([streamline, streamline[::-1]])]
>>> expected_lengths = [expected_length, 2*expected_length]
>>> lengths = [length(streamlines[0]), length(streamlines[1])]
>>> np.allclose(lengths, expected_lengths)
True
>>> length([])
0.0
>>> length(np.array([[1, 2, 3]]))
0.0
dipy.align.streamlinear.
progressive_slr
(static, moving, metric, x0, bounds, method='L-BFGS-B', verbose=True, num_threads=None)Progressive SLR
This is a utility function that allows, for example, performing affine registration using Streamline-based Linear Registration (SLR) [Garyfallidis15] by starting with translation, then rigid, then similarity, then scaling, and finally affine.
Similarly, to perform a rigid registration you start with translation first. This progressive strategy can help find the optimal parameters of the final transformation.
Parameters: |
|
---|
References
[Garyfallidis15] | Garyfallidis et al. “Robust and efficient linear registration of white-matter fascicles in the space of streamlines”, NeuroImage, 117, 124–140, 2015 |
dipy.align.streamlinear.
qbx_and_merge
(streamlines, thresholds, nb_pts=20, select_randomly=None, rng=None, verbose=True)Run QuickBundlesX and then run again on the centroids of the last layer
Running QuickBundles again at a layer has the effect of merging some of the clusters that may originally have been divided because of branching. This function helps obtain a result of QuickBundles quality at QuickBundlesX speed. The merging phase has low cost because it is applied only to the centroids rather than the entire dataset.
Parameters: |
|
---|---|
Returns: |
|
References
[Garyfallidis12] | Garyfallidis E. et al., QuickBundles a method for tractography simplification, Frontiers in Neuroscience, vol 6, no 175, 2012. |
[Garyfallidis16] | Garyfallidis E. et al. QuickBundlesX: Sequential clustering of millions of streamlines in multiple levels of detail at record execution time. Proceedings of the, International Society of Magnetic Resonance in Medicine (ISMRM). Singapore, 4187, 2016. |
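A hedged usage sketch on random synthetic streamlines; the thresholds are illustrative and the centroids attribute is assumed from the usual DIPY cluster-map interface:
>>> import numpy as np
>>> from dipy.align.streamlinear import qbx_and_merge
>>> rng = np.random.RandomState(42)
>>> streamlines = [rng.rand(20, 3).astype('f4') * 10 for _ in range(100)]
>>> clusters = qbx_and_merge(streamlines, thresholds=[30, 25, 20, 15], verbose=False)
>>> centroids = clusters.centroids   # one representative streamline per merged cluster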
dipy.align.streamlinear.
select_random_set_of_streamlines
(streamlines, select, rng=None)Select a random set of streamlines
Parameters: |
|
---|---|
Returns: |
|
Notes
The same streamline will not be selected twice.
dipy.align.streamlinear.
set_number_of_points
()Change the number of points of streamlines in order to obtain nb_points-1 segments of equal length. Points of streamlines will be modified along the curve.
Parameters: |
|
---|---|
Returns: |
|
Examples
>>> from dipy.tracking.streamline import set_number_of_points
>>> import numpy as np
One streamline, a semi-circle:
>>> theta = np.pi*np.linspace(0, 1, 100)
>>> x = np.cos(theta)
>>> y = np.sin(theta)
>>> z = 0 * x
>>> streamline = np.vstack((x, y, z)).T
>>> modified_streamline = set_number_of_points(streamline, 3)
>>> len(modified_streamline)
3
Multiple streamlines:
>>> streamlines = [streamline, streamline[::2]]
>>> new_streamlines = set_number_of_points(streamlines, 10)
>>> [len(s) for s in streamlines]
[100, 50]
>>> [len(s) for s in new_streamlines]
[10, 10]
dipy.align.streamlinear.
slr_with_qbx
(static, moving, x0='affine', rm_small_clusters=50, maxiter=100, select_random=None, verbose=False, greater_than=50, less_than=250, qbx_thr=[40, 30, 20, 15], nb_pts=20, progressive=True, rng=None, num_threads=None)Utility function for registering large tractograms.
For efficiency we apply the registration on cluster centroids and remove small clusters.
Parameters: |
|
---|
Notes
The order of operations is the following. First, short or long streamlines are removed. Second, the tractogram (or a random selection of it) is clustered with QuickBundles. Then SLR [Garyfallidis15] is applied.
References
[Garyfallidis15] | Garyfallidis et al. “Robust and efficient linear registration of white-matter fascicles in the space of streamlines”, NeuroImage, 117, 124–140, 2015 |
[Garyfallidis14] | Garyfallidis et al., “Direct native-space fiber bundle alignment for group comparisons”, ISMRM, 2014. |
[Garyfallidis17] | Garyfallidis et al. Recognition of white matter bundles using local and global streamline-based registration and clustering, Neuroimage, 2017. |
dipy.align.streamlinear.
transform_streamlines
(streamlines, mat, in_place=False)Apply affine transformation to streamlines
Parameters: |
|
---|---|
Returns: |
|
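A minimal sketch applying a pure translation to a single synthetic streamline:
>>> import numpy as np
>>> from dipy.align.streamlinear import transform_streamlines
>>> streamlines = [np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2.]])]
>>> mat = np.eye(4)
>>> mat[:3, 3] = [10, 0, 0]                       # translate 10 units along x
>>> moved = transform_streamlines(streamlines, mat)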
dipy.align.streamlinear.
whole_brain_slr
(static, moving, x0='affine', rm_small_clusters=50, maxiter=100, select_random=None, verbose=False, greater_than=50, less_than=250, qbx_thr=[40, 30, 20, 15], nb_pts=20, progressive=True, rng=None, num_threads=None)Utility function for registering large tractograms.
For efficiency we apply the registration on cluster centroids and remove small clusters.
Parameters: |
|
---|
Notes
The order of operations is the following. First, short or long streamlines are removed. Second, the tractogram (or a random selection of it) is clustered with QuickBundles. Then SLR [Garyfallidis15] is applied.
References
[Garyfallidis15] | Garyfallidis et al. “Robust and efficient linear registration of white-matter fascicles in the space of streamlines”, NeuroImage, 117, 124–140, 2015 |
[Garyfallidis14] | Garyfallidis et al., “Direct native-space fiber bundle alignment for group comparisons”, ISMRM, 2014. |
[Garyfallidis17] | Garyfallidis et al. Recognition of white matter bundles using local and global streamline-based registration and clustering, Neuroimage, 2017. |
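A hedged call sketch; static_tractogram and moving_tractogram are placeholder whole-brain streamline sets (not defined here), and the four-value return follows the common usage of this function:
>>> from dipy.align.streamlinear import whole_brain_slr
>>> # static_tractogram / moving_tractogram: placeholder whole-brain tractograms
>>> moved, transform, qb_centroids1, qb_centroids2 = whole_brain_slr(
...     static_tractogram, moving_tractogram, x0='affine',
...     rm_small_clusters=50, progressive=True, verbose=False)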