io

Dpy(fname[, mode, compression])

Methods

load_pickle(fname)

Load object from pickle file fname.

orientation_from_string(string_ornt)

Return an array representation of an ornt string.

orientation_to_string(ornt)

Return a string representation of a 3d ornt.

ornt_mapping(ornt1, ornt2)

Calculate the mapping needed to get from ornt1 to ornt2.

read_bvals_bvecs(fbvals, fbvecs)

Read b-values and b-vectors from disk.

read_bvec_file(filename[, atol])

The dipy.io.bvectxt module is deprecated. Please use the dipy.core.gradients module instead.

reorient_on_axis(bvecs, current_ornt, new_ornt)

The dipy.io.bvectxt module is deprecated. Please use the dipy.core.gradients module instead.

reorient_vectors(bvecs, current_ornt, new_ornt)

Change the orientation of gradients or other vectors.

save_pickle(fname, dix)

Save dix to fname as pickle.

Module: io.bvectxt

deprecate_with_version(message[, since, ...])

Return a decorator function for deprecation warning / error.

orientation_from_string(string_ornt)

Return an array representation of an ornt string.

orientation_to_string(ornt)

Return a string representation of a 3d ornt.

ornt_mapping(ornt1, ornt2)

Calculate the mapping needed to get from ornt1 to ornt2.

read_bvec_file(filename[, atol])

The dipy.io.bvectxt module is deprecated. Please use the dipy.core.gradients module instead.

reorient_on_axis(bvecs, current_ornt, new_ornt)

The dipy.io.bvectxt module is deprecated. Please use the dipy.core.gradients module instead.

reorient_vectors(bvecs, current_ornt, new_ornt)

Change the orientation of gradients or other vectors.

splitext(p)

Split the extension from a pathname.

Module: io.dpy

A class for handling large tractography datasets.

It is built using h5py, which in turn implements key features of the HDF5 (hierarchical data format) API [1].

References

Dpy(fname[, mode, compression])

Methods

Streamlines

alias of ArraySequence

Module: io.gradients

read_bvals_bvecs(fbvals, fbvecs)

Read b-values and b-vectors from disk.

splitext(p)

Split the extension from a pathname.

Module: io.image

Version(version)

This class abstracts handling of a project's versions.

load_nifti(fname[, return_img, ...])

Load data and other information from a nifti file.

load_nifti_data(fname[, as_ndarray])

Load only the data array from a nifti file.

save_nifti(fname, data, affine[, hdr, dtype])

Save a data array into a nifti file.

save_qa_metric(fname, xopt, fopt)

Save Quality Assurance metrics.

Module: io.peaks

PeaksAndMetrics

Attributes:

Sphere([x, y, z, theta, phi, xyz, faces, edges])

Points on the unit sphere.

load_peaks(fname[, verbose])

Load a PeaksAndMetrics HDF5 file (PAM5)

peaks_to_niftis(pam, fname_shm, fname_dirs, ...)

Save SH, directions, indices and values of peaks to Nifti.

reshape_peaks_for_visualization(peaks)

Reshape peaks for visualization.

save_nifti(fname, data, affine[, hdr, dtype])

Save a data array into a nifti file.

save_peaks(fname, pam[, affine, verbose])

Save all important attributes of object PeaksAndMetrics in a PAM5 file (HDF5).

Module: io.pickles

Load and save pickles

load_pickle(fname)

Load object from pickle file fname.

save_pickle(fname, dix)

Save dix to fname as pickle.

Module: io.stateful_tractogram

Origin(value)

Enum to simplify future change to convention

PerArrayDict([n_rows])

Dictionary for which key access can do slicing on the values.

PerArraySequenceDict([n_rows])

Dictionary for which key access can do slicing on the values.

Space(value)

Enum to simplify future change to convention

StatefulTractogram(streamlines, reference, space)

Class for stateful representation of collections of streamlines. Object designed to be identical no matter the file format (trk, tck, vtk, fib, dpy).

Streamlines

alias of ArraySequence

Tractogram([streamlines, ...])

Container for streamlines and their data information.

product

product(*iterables, repeat=1) --> product object

apply_affine(aff, pts[, inplace])

Apply affine matrix aff to points pts

bisect(/, a, x[, lo, hi])

Return the index where to insert item x in list a, assuming a is sorted.

deepcopy(x[, memo, _nil])

Deep copy operation on arbitrary Python objects.

get_reference_info(reference)

Will extract the spatial attributes (affine, dimensions, voxel sizes, voxel order) of a reference

is_header_compatible(reference_1, reference_2)

Will compare the spatial attribute of 2 references

is_reference_info_valid(affine, dimensions, ...)

Validate basic data type and value of spatial attribute.

set_sft_logger_level(log_level)

Change the logger of the StatefulTractogram to one of the following: DEBUG, INFO, WARNING, CRITICAL, ERROR

Module: io.streamline

Dpy(fname[, mode, compression])

Methods

Origin(value)

Enum to simplify future change to convention

Space(value)

Enum to simplify future change to convention

StatefulTractogram(streamlines, reference, space)

Class for stateful representation of collections of streamlines. Object designed to be identical no matter the file format (trk, tck, vtk, fib, dpy).

Tractogram([streamlines, ...])

Container for streamlines and their data information.

create_tractogram_header(tractogram_type, ...)

Write a standard trk/tck header from spatial attribute

deepcopy(x[, memo, _nil])

Deep copy operation on arbitrary Python objects.

detect_format(fileobj)

Returns the StreamlinesFile object guessed from the file-like object.

is_header_compatible(reference_1, reference_2)

Will compare the spatial attribute of 2 references

load_dpy(filename, reference[, to_space, ...])

Load the stateful tractogram of the .dpy format

load_fib(filename, reference[, to_space, ...])

Load the stateful tractogram of the .fib format

load_generator(ttype)

Generate a loading function that performs a file extension check to restrict the user to a single file format.

load_tck(filename, reference[, to_space, ...])

Load the stateful tractogram of the .tck format

load_tractogram(filename, reference[, ...])

Load the stateful tractogram from any format (trk/tck/vtk/vtp/fib/dpy)

load_trk(filename, reference[, to_space, ...])

Load the stateful tractogram of the .trk format

load_vtk(filename, reference[, to_space, ...])

Load the stateful tractogram of the .vtk format

load_vtk_streamlines(filename[, to_lps])

Load streamlines from vtk polydata.

load_vtp(filename, reference[, to_space, ...])

Load the stateful tractogram of the .vtp format

save_dpy(sft, filename[, bbox_valid_check])

Save the stateful tractogram of the .dpy format

save_fib(sft, filename[, bbox_valid_check])

Save the stateful tractogram of the .fib format

save_generator(ttype)

Generate a saving function that performs a file extension check to restrict the user to a single file format.

save_tck(sft, filename[, bbox_valid_check])

Save the stateful tractogram of the .tck format

save_tractogram(sft, filename[, ...])

Save the stateful tractogram in any format (trk/tck/vtk/vtp/fib/dpy)

save_trk(sft, filename[, bbox_valid_check])

Save the stateful tractogram of the .trk format

save_vtk(sft, filename[, bbox_valid_check])

Save the stateful tractogram of the .vtk format

save_vtk_streamlines(streamlines, filename)

Save streamlines as vtk polydata to a supported format file.

save_vtp(sft, filename[, bbox_valid_check])

Save the stateful tractogram of the .vtp format
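The per-format load_* / save_* functions listed above come from load_generator / save_generator, which gate each function on a single file extension. The following is a hypothetical sketch of that extension check, not dipy's actual implementation; the function name check_extension is invented for illustration, and only the formats documented above are assumed.

```python
import os

# Hypothetical sketch of the extension gate that load_generator /
# save_generator wrap around load_tractogram / save_tractogram.
SUPPORTED = {'.trk', '.tck', '.vtk', '.vtp', '.fib', '.dpy'}


def check_extension(filename, expected=None):
    """Return the lowercased extension, raising if it is unsupported or
    does not match the single format a generated loader accepts."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in SUPPORTED:
        raise ValueError(f'unsupported tractogram format: {ext!r}')
    if expected is not None and ext != expected:
        raise ValueError(f'expected a {expected} file, got {ext!r}')
    return ext
```

For example, check_extension('bundle.trk', expected='.trk') passes, while a '.nii' file raises ValueError, mirroring how load_trk refuses non-trk input.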

Module: io.utils

Utility functions for file formats

Nifti1Image(dataobj, affine[, header, ...])

Class for single file NIfTI1 format image

create_nifti_header(affine, dimensions, ...)

Write a standard nifti header from spatial attribute

create_tractogram_header(tractogram_type, ...)

Write a standard trk/tck header from spatial attribute

decfa(img_orig[, scale])

Create a nifti-compliant directional-encoded color FA image.

decfa_to_float(img_orig)

Convert a nifti-compliant directional-encoded color FA image into a nifti image with RGB encoded in floating point resolution.

detect_format(fileobj)

Returns the StreamlinesFile object guessed from the file-like object.

get_reference_info(reference)

Will extract the spatial attributes (affine, dimensions, voxel sizes, voxel order) of a reference

is_header_compatible(reference_1, reference_2)

Will compare the spatial attribute of 2 references

is_reference_info_valid(affine, dimensions, ...)

Validate basic data type and value of spatial attribute.

make5d(data)

Reshapes the input to have 5 dimensions, adding extra dimensions just before the last dimension

nifti1_symmat(image_data, *args, **kwargs)

Returns a Nifti1Image with a symmetric matrix intent

optional_package(name[, trip_msg])

Return package-like thing and module setup for package name

read_img_arr_or_path(data[, affine])

Helper function that handles inputs that can be paths, nifti img or arrays

save_buan_profiles_hdf5(fname, dt)

Saves the given input dataframe to an .h5 file
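The reshaping that make5d describes above can be illustrated on shapes alone. This is a hypothetical shape-level sketch based only on the one-line summary ("adds extra dimensions just before the last dimension"), not the real array-level implementation:

```python
# Hypothetical sketch of the shape logic behind make5d: pad singleton
# axes just before the last dimension until the shape is 5-D.
def make5d_shape(shape):
    if len(shape) > 5:
        raise ValueError('input has more than 5 dimensions')
    return shape[:-1] + (1,) * (5 - len(shape)) + shape[-1:]
```

So a 4-D DWI shape such as (64, 64, 30, 6) would become (64, 64, 30, 1, 6), with the gradient axis kept last.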

Module: io.vtk

load_polydata(file_name)

Load a vtk polydata from a supported format file.

load_vtk_streamlines(filename[, to_lps])

Load streamlines from vtk polydata.

optional_package(name[, trip_msg])

Return package-like thing and module setup for package name

save_polydata(polydata, file_name[, binary, ...])

Save a vtk polydata to a supported format file.

save_vtk_streamlines(streamlines, filename)

Save streamlines as vtk polydata to a supported format file.

setup_module()

transform_streamlines(streamlines, mat[, ...])

Apply affine transformation to streamlines

Dpy

class dipy.io.Dpy(fname, mode='r', compression=0)

Bases: object

Methods

read_track()

read one track at a time

read_tracks()

read the entire tractography

read_tracksi(indices)

read tracks with specific indices

write_track(track)

write one track at a time

write_tracks(tracks)

write many tracks together

close

version

__init__(fname, mode='r', compression=0)

Advanced storage system for tractography based on HDF5

Parameters:
fname : str
    Full filename.
mode : {'r', 'w', 'r+'}
    'r' read, 'w' write, 'r+' read and write (only if the file already exists).
compression : int
    0 (no compression) to 9 (maximum compression).

Examples

>>> import os
>>> import numpy as np
>>> from tempfile import mkstemp  # temp file
>>> from dipy.io.dpy import Dpy
>>> def dpy_example():
...     fd, fname = mkstemp()
...     fname += '.dpy'  # add correct extension
...     dpw = Dpy(fname, 'w')
...     A = np.ones((5, 3))
...     B = 2 * A.copy()
...     C = 3 * A.copy()
...     dpw.write_track(A)
...     dpw.write_track(B)
...     dpw.write_track(C)
...     dpw.close()
...     dpr = Dpy(fname, 'r')
...     dpr.read_track()
...     dpr.read_track()
...     dpr.read_tracksi([0, 1, 2, 0, 0, 2])
...     dpr.close()
...     os.remove(fname)  # delete file from disk
>>> dpy_example()
close()
read_track()

read one track at a time

read_tracks()

read the entire tractography

read_tracksi(indices)

read tracks with specific indices

version()
write_track(track)

write one track at a time

write_tracks(tracks)

write many tracks together

load_pickle

dipy.io.load_pickle(fname)

Load object from pickle file fname.

Parameters:
fname : str
    Filename of the dict or other python object to load.

Returns:
dix : object
    Dictionary or other object.

See also

dipy.io.pickles.save_pickle

orientation_from_string

dipy.io.orientation_from_string(string_ornt)

Return an array representation of an ornt string.

The dipy.io.bvectxt module is deprecated. Please use the dipy.core.gradients module instead.

  • deprecated from version: 1.4

  • Raises <class ‘dipy.utils.deprecator.ExpiredDeprecationError’> as of version: 1.5

orientation_to_string

dipy.io.orientation_to_string(ornt)

Return a string representation of a 3d ornt.

The dipy.io.bvectxt module is deprecated. Please use the dipy.core.gradients module instead.

  • deprecated from version: 1.4

  • Raises <class ‘dipy.utils.deprecator.ExpiredDeprecationError’> as of version: 1.5

ornt_mapping

dipy.io.ornt_mapping(ornt1, ornt2)

Calculate the mapping needed to get from ornt1 to ornt2.

The dipy.io.bvectxt module is deprecated. Please use the dipy.core.gradients module instead.

  • deprecated from version: 1.4

  • Raises <class ‘dipy.utils.deprecator.ExpiredDeprecationError’> as of version: 1.5

read_bvals_bvecs

dipy.io.read_bvals_bvecs(fbvals, fbvecs)

Read b-values and b-vectors from disk.

Parameters:
fbvals : str
    Full path to the file with b-values, or None to not read bvals.
fbvecs : str
    Full path to the file with b-vectors, or None to not read bvecs.

Returns:
bvals : array, (N,) or None
bvecs : array, (N, 3) or None

Notes

Files can be either ‘.bvals’/’.bvecs’ or ‘.txt’ or ‘.npy’ (containing arrays stored with the appropriate values).
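For the plain-text case covered by the note above, the parsing amounts to reading whitespace-separated numbers. This stdlib-only sketch is a simplified assumption of that branch (the real read_bvals_bvecs also handles .npy input and validates shapes; the function name read_values is invented for illustration):

```python
# Hypothetical sketch of the plain-text branch of read_bvals_bvecs:
# parse whitespace-separated numbers from a .bval/.bvec-style file.
def read_values(path):
    with open(path) as f:
        return [float(tok) for tok in f.read().split()]
```

A .bval line such as "0 1000 1000 2000" then yields one float per diffusion volume.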

read_bvec_file

dipy.io.read_bvec_file(filename, atol=0.001)

The dipy.io.bvectxt module is deprecated. Please use the dipy.core.gradients module instead.

  • deprecated from version: 1.4

  • Raises <class ‘dipy.utils.deprecator.ExpiredDeprecationError’> as of version: 1.5

Read gradient table information from a pair of files with extensions .bvec and .bval. The bval file should have one row of values representing the bvalues of each volume in the dwi data set. The bvec file should have three rows, where the rows are the x, y, and z components of the normalized gradient direction for each of the volumes.

Parameters:
filename : str
    The path to either the bvec or bval file.
atol : float, optional
    The tolerance used to check that all the gradient directions are normalized. Default is 0.001.

reorient_on_axis

dipy.io.reorient_on_axis(bvecs, current_ornt, new_ornt, axis=0)

The dipy.io.bvectxt module is deprecated. Please use the dipy.core.gradients module instead.

  • deprecated from version: 1.4

  • Raises <class ‘dipy.utils.deprecator.ExpiredDeprecationError’> as of version: 1.5

reorient_vectors

dipy.io.reorient_vectors(bvecs, current_ornt, new_ornt, axis=0)

Change the orientation of gradients or other vectors.

The dipy.io.bvectxt module is deprecated. Please use the dipy.core.gradients module instead.

  • deprecated from version: 1.4

  • Raises <class ‘dipy.utils.deprecator.ExpiredDeprecationError’> as of version: 1.5

Moves vectors, stored along axis, from current_ornt to new_ornt. For example, the vector [x, y, z] in “RAS” will be [-x, -y, z] in “LPS”.

R: Right A: Anterior S: Superior L: Left P: Posterior I: Inferior
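The RAS-to-LPS example above can be reproduced for a single vector with a small sketch. This is a hypothetical reimplementation assuming nibabel-style (axis, direction) codes, not the expired bvectxt code itself; the name reorient_vector is invented for illustration.

```python
# Map each axis code to (world axis index, direction sign).
AXES = {'R': (0, 1), 'L': (0, -1),
        'A': (1, 1), 'P': (1, -1),
        'S': (2, 1), 'I': (2, -1)}


def reorient_vector(vec, current, new):
    """Express vec, given in `current` axis codes, in `new` axis codes."""
    world = [0.0] * 3
    for comp, code in zip(vec, current.upper()):
        axis, sign = AXES[code]
        world[axis] = sign * comp
    out = []
    for code in new.upper():
        axis, sign = AXES[code]
        out.append(sign * world[axis])
    return out
```

With this sketch, [1, 2, 3] in 'RAS' maps to [-1, -2, 3] in 'LPS', matching the example in the docstring; permutations such as 'RAS' to 'ASR' reorder the components.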

save_pickle

dipy.io.save_pickle(fname, dix)

Save dix to fname as pickle.

Parameters:
fname : str
    Filename to save the object to, e.g. a dictionary.
dix : object
    Dictionary or other object.

Examples

>>> import os
>>> from tempfile import mkstemp
>>> from dipy.io import save_pickle, load_pickle
>>> fd, fname = mkstemp()  # make temporary file (opened, attached to fh)
>>> d = {0: {'d': 1}}
>>> save_pickle(fname, d)
>>> d2 = load_pickle(fname)

We remove the temporary file we created for neatness

>>> os.close(fd) # the file is still open, we need to close the fh
>>> os.remove(fname)
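At their core, this pair wraps the standard library pickle module. The following stdlib-only sketch shows the essential behavior (the suffix _sketch marks these as illustrations, not the dipy helpers, which may add protocol and error handling):

```python
import pickle


# Minimal sketch of what save_pickle / load_pickle boil down to.
def save_pickle_sketch(fname, dix):
    with open(fname, 'wb') as f:
        pickle.dump(dix, f)


def load_pickle_sketch(fname):
    with open(fname, 'rb') as f:
        return pickle.load(f)
```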

deprecate_with_version

dipy.io.bvectxt.deprecate_with_version(message, since='', until='', version_comparator=<function cmp_pkg_version>, warn_class=<class 'DeprecationWarning'>, error_class=<class 'dipy.utils.deprecator.ExpiredDeprecationError'>)

Return a decorator function for deprecation warning / error.

The decorated function / method will:

  • Raise the given warn_class warning when the function / method gets called, up to (and including) version until (if specified);

  • Raise the given error_class error when the function / method gets called, when the package version is greater than version until (if specified).

Parameters:
message : str
    Message explaining deprecation, giving possible alternatives.
since : str, optional
    Released version at which object was first deprecated.
until : str, optional
    Last released version at which this function will still raise a deprecation warning. Versions higher than this will raise an error.
version_comparator : callable
    Callable accepting a string as argument, returning 1 if the string represents a higher version than encoded in the version_comparator, 0 if the version is equal, and -1 if the version is lower. For example, the version_comparator may compare the input version string to the current package version string.
warn_class : class, optional
    Class of warning to generate for deprecation.
error_class : class, optional
    Class of error to generate when version_comparator returns 1 for a given argument of until.

Returns:
deprecator : func
    Function returning a decorator.
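The warn-then-error behavior described above can be sketched with a plain decorator. This hypothetical reduced version only issues the warning and omits the version_comparator logic that escalates to error_class; the name deprecate_sketch is invented for illustration.

```python
import functools
import warnings


# Reduced sketch of deprecate_with_version: always warn, never error.
def deprecate_sketch(message, warn_class=DeprecationWarning):
    def deprecator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(message, warn_class, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return deprecator
```

Applying @deprecate_sketch('use dipy.core.gradients instead') to a function leaves its return value untouched but emits one DeprecationWarning per call.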

orientation_from_string

dipy.io.bvectxt.orientation_from_string(string_ornt)

Return an array representation of an ornt string.

The dipy.io.bvectxt module is deprecated. Please use the dipy.core.gradients module instead.

  • deprecated from version: 1.4

  • Raises <class ‘dipy.utils.deprecator.ExpiredDeprecationError’> as of version: 1.5
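Although the function itself is expired, the ornt-array representation it produced can still be illustrated. This sketch assumes nibabel-style (axis, direction) rows for an orientation string such as 'RAS'; the name orientation_from_string_sketch is invented, and the exact output format of the original function is an assumption.

```python
# Map each axis code to an (axis index, direction sign) row.
AXCODES = {'R': (0, 1), 'L': (0, -1),
           'A': (1, 1), 'P': (1, -1),
           'S': (2, 1), 'I': (2, -1)}


def orientation_from_string_sketch(string_ornt):
    """Return one [axis, direction] row per character of the ornt string."""
    return [list(AXCODES[c]) for c in string_ornt.upper()]
```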

orientation_to_string

dipy.io.bvectxt.orientation_to_string(ornt)

Return a string representation of a 3d ornt.

The dipy.io.bvectxt module is deprecated. Please use the dipy.core.gradients module instead.

  • deprecated from version: 1.4

  • Raises <class ‘dipy.utils.deprecator.ExpiredDeprecationError’> as of version: 1.5

ornt_mapping

dipy.io.bvectxt.ornt_mapping(ornt1, ornt2)

Calculate the mapping needed to get from ornt1 to ornt2.

The dipy.io.bvectxt module is deprecated. Please use the dipy.core.gradients module instead.

  • deprecated from version: 1.4

  • Raises <class ‘dipy.utils.deprecator.ExpiredDeprecationError’> as of version: 1.5

read_bvec_file

dipy.io.bvectxt.read_bvec_file(filename, atol=0.001)

The dipy.io.bvectxt module is deprecated. Please use the dipy.core.gradients module instead.

  • deprecated from version: 1.4

  • Raises <class ‘dipy.utils.deprecator.ExpiredDeprecationError’> as of version: 1.5

Read gradient table information from a pair of files with extensions .bvec and .bval. The bval file should have one row of values representing the bvalues of each volume in the dwi data set. The bvec file should have three rows, where the rows are the x, y, and z components of the normalized gradient direction for each of the volumes.

Parameters:
filename : str
    The path to either the bvec or bval file.
atol : float, optional
    The tolerance used to check that all the gradient directions are normalized. Default is 0.001.

reorient_on_axis

dipy.io.bvectxt.reorient_on_axis(bvecs, current_ornt, new_ornt, axis=0)

The dipy.io.bvectxt module is deprecated. Please use the dipy.core.gradients module instead.

  • deprecated from version: 1.4

  • Raises <class ‘dipy.utils.deprecator.ExpiredDeprecationError’> as of version: 1.5

reorient_vectors

dipy.io.bvectxt.reorient_vectors(bvecs, current_ornt, new_ornt, axis=0)

Change the orientation of gradients or other vectors.

The dipy.io.bvectxt module is deprecated. Please use the dipy.core.gradients module instead.

  • deprecated from version: 1.4

  • Raises <class ‘dipy.utils.deprecator.ExpiredDeprecationError’> as of version: 1.5

Moves vectors, stored along axis, from current_ornt to new_ornt. For example, the vector [x, y, z] in “RAS” will be [-x, -y, z] in “LPS”.

R: Right A: Anterior S: Superior L: Left P: Posterior I: Inferior

splitext

dipy.io.bvectxt.splitext(p)

Split the extension from a pathname.

Extension is everything from the last dot to the end, ignoring leading dots. Returns “(root, ext)”; ext may be empty.
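This appears to be the standard library's os.path.splitext (its docstring matches the summary above), so the leading-dot rule can be shown directly:

```python
from os.path import splitext

# splitext keeps everything from the last dot as the extension,
# but a leading dot (hidden files) is not treated as an extension.
root, ext = splitext('/data/dwi.bval')  # ('/data/dwi', '.bval')
hidden = splitext('.bashrc')            # ('.bashrc', '')
```

Note that for double extensions only the last component is split: 'archive.tar.gz' yields ('archive.tar', '.gz').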

Dpy

class dipy.io.dpy.Dpy(fname, mode='r', compression=0)

Bases: object

Methods

read_track()

read one track at a time

read_tracks()

read the entire tractography

read_tracksi(indices)

read tracks with specific indices

write_track(track)

write one track at a time

write_tracks(tracks)

write many tracks together

close

version

__init__(fname, mode='r', compression=0)

Advanced storage system for tractography based on HDF5

Parameters:
fname : str
    Full filename.
mode : {'r', 'w', 'r+'}
    'r' read, 'w' write, 'r+' read and write (only if the file already exists).
compression : int
    0 (no compression) to 9 (maximum compression).

Examples

>>> import os
>>> import numpy as np
>>> from tempfile import mkstemp  # temp file
>>> from dipy.io.dpy import Dpy
>>> def dpy_example():
...     fd, fname = mkstemp()
...     fname += '.dpy'  # add correct extension
...     dpw = Dpy(fname, 'w')
...     A = np.ones((5, 3))
...     B = 2 * A.copy()
...     C = 3 * A.copy()
...     dpw.write_track(A)
...     dpw.write_track(B)
...     dpw.write_track(C)
...     dpw.close()
...     dpr = Dpy(fname, 'r')
...     dpr.read_track()
...     dpr.read_track()
...     dpr.read_tracksi([0, 1, 2, 0, 0, 2])
...     dpr.close()
...     os.remove(fname)  # delete file from disk
>>> dpy_example()
close()
read_track()

read one track at a time

read_tracks()

read the entire tractography

read_tracksi(indices)

read tracks with specific indices

version()
write_track(track)

write one track at a time

write_tracks(tracks)

write many tracks together

Streamlines

dipy.io.dpy.Streamlines

alias of ArraySequence

read_bvals_bvecs

dipy.io.gradients.read_bvals_bvecs(fbvals, fbvecs)

Read b-values and b-vectors from disk.

Parameters:
fbvals : str
    Full path to the file with b-values, or None to not read bvals.
fbvecs : str
    Full path to the file with b-vectors, or None to not read bvecs.

Returns:
bvals : array, (N,) or None
bvecs : array, (N, 3) or None

Notes

Files can be either ‘.bvals’/’.bvecs’ or ‘.txt’ or ‘.npy’ (containing arrays stored with the appropriate values).

splitext

dipy.io.gradients.splitext(p)

Split the extension from a pathname.

Extension is everything from the last dot to the end, ignoring leading dots. Returns “(root, ext)”; ext may be empty.

Version

class dipy.io.image.Version(version: str)

Bases: _BaseVersion

This class abstracts handling of a project’s versions.

A Version instance is comparison aware and can be compared and sorted using the standard Python interfaces.

>>> v1 = Version("1.0a5")
>>> v2 = Version("1.0")
>>> v1
<Version('1.0a5')>
>>> v2
<Version('1.0')>
>>> v1 < v2
True
>>> v1 == v2
False
>>> v1 > v2
False
>>> v1 >= v2
False
>>> v1 <= v2
True
Attributes:
base_version

The “base version” of the version.

dev

The development number of the version.

epoch

The epoch of the version.

is_devrelease

Whether this version is a development release.

is_postrelease

Whether this version is a post-release.

is_prerelease

Whether this version is a pre-release.

local

The local version segment of the version.

major

The first item of release or 0 if unavailable.

micro

The third item of release or 0 if unavailable.

minor

The second item of release or 0 if unavailable.

post

The post-release number of the version.

pre

The pre-release segment of the version.

public

The public portion of the version.

release

The components of the “release” segment of the version.

__init__(version: str) → None

Initialize a Version object.

Parameters:

version – The string representation of a version which will be parsed and normalized before use.

Raises:

InvalidVersion – If the version does not conform to PEP 440 in any way then this exception will be raised.

property base_version: str

The “base version” of the version.

>>> Version("1.2.3").base_version
'1.2.3'
>>> Version("1.2.3+abc").base_version
'1.2.3'
>>> Version("1!1.2.3+abc.dev1").base_version
'1!1.2.3'

The “base version” is the public version of the project without any pre or post release markers.

property dev: int | None

The development number of the version.

>>> print(Version("1.2.3").dev)
None
>>> Version("1.2.3.dev1").dev
1
property epoch: int

The epoch of the version.

>>> Version("2.0.0").epoch
0
>>> Version("1!2.0.0").epoch
1
property is_devrelease: bool

Whether this version is a development release.

>>> Version("1.2.3").is_devrelease
False
>>> Version("1.2.3.dev1").is_devrelease
True
property is_postrelease: bool

Whether this version is a post-release.

>>> Version("1.2.3").is_postrelease
False
>>> Version("1.2.3.post1").is_postrelease
True
property is_prerelease: bool

Whether this version is a pre-release.

>>> Version("1.2.3").is_prerelease
False
>>> Version("1.2.3a1").is_prerelease
True
>>> Version("1.2.3b1").is_prerelease
True
>>> Version("1.2.3rc1").is_prerelease
True
>>> Version("1.2.3dev1").is_prerelease
True
property local: str | None

The local version segment of the version.

>>> print(Version("1.2.3").local)
None
>>> Version("1.2.3+abc").local
'abc'
property major: int

The first item of release or 0 if unavailable.

>>> Version("1.2.3").major
1
property micro: int

The third item of release or 0 if unavailable.

>>> Version("1.2.3").micro
3
>>> Version("1").micro
0
property minor: int

The second item of release or 0 if unavailable.

>>> Version("1.2.3").minor
2
>>> Version("1").minor
0
property post: int | None

The post-release number of the version.

>>> print(Version("1.2.3").post)
None
>>> Version("1.2.3.post1").post
1
property pre: Tuple[str, int] | None

The pre-release segment of the version.

>>> print(Version("1.2.3").pre)
None
>>> Version("1.2.3a1").pre
('a', 1)
>>> Version("1.2.3b1").pre
('b', 1)
>>> Version("1.2.3rc1").pre
('rc', 1)
property public: str

The public portion of the version.

>>> Version("1.2.3").public
'1.2.3'
>>> Version("1.2.3+abc").public
'1.2.3'
>>> Version("1.2.3+abc.dev1").public
'1.2.3'
property release: Tuple[int, ...]

The components of the “release” segment of the version.

>>> Version("1.2.3").release
(1, 2, 3)
>>> Version("2.0.0").release
(2, 0, 0)
>>> Version("1!2.0.0.post0").release
(2, 0, 0)

Includes trailing zeroes but not the epoch or any pre-release / development / post-release suffixes.

load_nifti

dipy.io.image.load_nifti(fname, return_img=False, return_voxsize=False, return_coords=False, as_ndarray=True)

Load data and other information from a nifti file.

Parameters:
fname : str
    Full path to a nifti file.
return_img : bool, optional
    Whether to return the nibabel nifti img object. Default: False
return_voxsize : bool, optional
    Whether to return the nifti header zooms. Default: False
return_coords : bool, optional
    Whether to return the nifti header aff2axcodes. Default: False
as_ndarray : bool, optional
    Convert nibabel ArrayProxy to a numpy.ndarray. To save memory and delay this casting, set this option to False. Default: True

Returns:
A tuple, with (at most, if all keyword args are set to True):
(data, img.affine, img, vox_size, nib.aff2axcodes(img.affine))

See also

load_nifti_data
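The variable-length return value documented above grows with each flag. This is a hypothetical sketch of how the returned tuple is assembled per the Returns section, not dipy's actual code; the name assemble_ret and its placeholder arguments are invented for illustration.

```python
# Sketch of how load_nifti's return tuple grows with its keyword flags.
def assemble_ret(data, affine, img=None, vox_size=None, axcodes=None,
                 return_img=False, return_voxsize=False, return_coords=False):
    ret = [data, affine]
    if return_img:
        ret.append(img)
    if return_voxsize:
        ret.append(vox_size)
    if return_coords:
        ret.append(axcodes)
    return tuple(ret)
```

In practice this means calls like `data, affine, vox_size = load_nifti(fname, return_voxsize=True)` unpack cleanly, with extras always appended in img, voxsize, coords order.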

load_nifti_data

dipy.io.image.load_nifti_data(fname, as_ndarray=True)

Load only the data array from a nifti file.

Parameters:
fname : str
    Full path to the file.
as_ndarray : bool, optional
    Convert nibabel ArrayProxy to a numpy.ndarray. To save memory and delay this casting, set this option to False. Default: True

Returns:
data : np.ndarray or nib.ArrayProxy

See also

load_nifti

save_nifti

dipy.io.image.save_nifti(fname, data, affine, hdr=None, dtype=None)

Save a data array into a nifti file.

Parameters:
fname : str
    The full path to the file to be saved.
data : ndarray
    The array with the data to save.
affine : 4x4 array
    The affine transform associated with the file.
hdr : nifti header, optional
    May contain additional information to store in the file header.

Returns:
None

save_qa_metric

dipy.io.image.save_qa_metric(fname, xopt, fopt)

Save Quality Assurance metrics.

Parameters:
fname : string
    File name to save the metric values.
xopt : numpy array
    The metric containing the optimal parameters for image registration.
fopt : int
    The distance between the registered images.

PeaksAndMetrics

class dipy.io.peaks.PeaksAndMetrics

Bases: EuDXDirectionGetter

Attributes:
ang_thr
qa_thr
total_weight

Methods

initial_direction

The best starting directions for fiber tracking from point

generate_streamline

get_direction

__init__(*args, **kwargs)

Sphere

class dipy.io.peaks.Sphere(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None)

Bases: object

Points on the unit sphere.

The sphere can be constructed using one of three conventions:

Sphere(x, y, z)
Sphere(xyz=xyz)
Sphere(theta=theta, phi=phi)
Parameters:
x, y, z : 1-D array_like
    Vertices as x-y-z coordinates.
theta, phi : 1-D array_like
    Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.
xyz : (N, 3) ndarray
    Vertices as x-y-z coordinates.
faces : (N, 3) ndarray
    Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.
edges : (N, 2) ndarray
    Edges between vertices. If unspecified, the edges are derived from the faces.

Attributes:
x
y
z

Methods

find_closest(xyz)

Find the index of the vertex in the Sphere closest to the input vector

subdivide([n])

Subdivides each face of the sphere into four new faces.

edges

faces

vertices

__init__(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None)
edges()
faces()
find_closest(xyz)

Find the index of the vertex in the Sphere closest to the input vector

Parameters:
xyz : array-like, 3 elements
    A unit vector.

Returns:
idx : int
    The index into the Sphere.vertices array that gives the closest vertex (in angle).

subdivide(n=1)

Subdivides each face of the sphere into four new faces.

New vertices are created at a, b, and c. Then each face [x, y, z] is divided into faces [x, a, c], [y, a, b], [z, b, c], and [a, b, c].

      y
      /\
     /  \
   a/____\b
   /\    /\
  /  \  /  \
 /____\/____\
x      c     z
Parameters:
n : int, optional
    The number of subdivisions to perform.

Returns:
new_sphere : Sphere
    The subdivided sphere.

vertices()
property x
property y
property z

load_peaks

dipy.io.peaks.load_peaks(fname, verbose=False)

Load a PeaksAndMetrics HDF5 file (PAM5)

Parameters:
fname : string
    Filename of PAM5 file.
verbose : bool
    Print summary information about the loaded file.

Returns:
pam : PeaksAndMetrics object

peaks_to_niftis

dipy.io.peaks.peaks_to_niftis(pam, fname_shm, fname_dirs, fname_values, fname_indices, fname_gfa, reshape_dirs=False)

Save SH, directions, indices and values of peaks to Nifti.

reshape_peaks_for_visualization

dipy.io.peaks.reshape_peaks_for_visualization(peaks)

Reshape peaks for visualization.

Reshape and convert to float32 a set of peaks for visualisation with mrtrix or the fibernavigator.

Parameters:
peaks : nd array (…, N, 3) or PeaksAndMetrics object
    The peaks to be reshaped and converted to float32.

Returns:
peaks : nd array (…, 3*N)

save_nifti

dipy.io.peaks.save_nifti(fname, data, affine, hdr=None, dtype=None)

Save a data array into a nifti file.

Parameters:
fname : str
    The full path to the file to be saved.
data : ndarray
    The array with the data to save.
affine : 4x4 array
    The affine transform associated with the file.
hdr : nifti header, optional
    May contain additional information to store in the file header.

Returns:
None

save_peaks

dipy.io.peaks.save_peaks(fname, pam, affine=None, verbose=False)

Save all important attributes of object PeaksAndMetrics in a PAM5 file (HDF5).

Parameters:
fname : string
    Filename of PAM5 file.
pam : PeaksAndMetrics
    Object holding peak_dirs, shm_coeffs and other attributes.
affine : array
    The 4x4 matrix transforming the data from native to world coordinates. PeaksAndMetrics should have that attribute, but if not it can be provided here. Default None.
verbose : bool
    Print summary information about the saved file.

load_pickle

dipy.io.pickles.load_pickle(fname)

Load object from pickle file fname.

Parameters:
fname : str
    Filename of the dict or other python object to load.

Returns:
dix : object
    Dictionary or other object.

See also

dipy.io.pickles.save_pickle

save_pickle

dipy.io.pickles.save_pickle(fname, dix)

Save dix to fname as pickle.

Parameters:
fname : str
    Filename to save the object to, e.g. a dictionary.
dix : object
    Dictionary or other object.

Examples

>>> import os
>>> from tempfile import mkstemp
>>> from dipy.io.pickles import save_pickle, load_pickle
>>> fd, fname = mkstemp()  # make temporary file (opened, attached to fh)
>>> d = {0: {'d': 1}}
>>> save_pickle(fname, d)
>>> d2 = load_pickle(fname)

We remove the temporary file we created for neatness

>>> os.close(fd) # the file is still open, we need to close the fh
>>> os.remove(fname)

Origin

class dipy.io.stateful_tractogram.Origin(value)

Bases: Enum

Enum to simplify future change to convention

__init__(*args, **kwargs)
NIFTI = 'center'
TRACKVIS = 'corner'
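The two conventions above are plain string-valued enum members, so they can be looked up by value as well as by name. The following mirrors the documented members (the class body restates values from the entry above; only the standalone definition is illustrative):

```python
from enum import Enum


# Mirrors dipy.io.stateful_tractogram.Origin: NIFTI places the voxel
# origin at the voxel center, TRACKVIS at its corner.
class Origin(Enum):
    NIFTI = 'center'
    TRACKVIS = 'corner'
```

For example, Origin('corner') resolves to Origin.TRACKVIS, which is convenient when the convention arrives as a string from a file header.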

PerArrayDict

class dipy.io.stateful_tractogram.PerArrayDict(n_rows=0, *args, **kwargs)

Bases: SliceableDataDict

Dictionary for which key access can do slicing on the values.

This container behaves like a standard dictionary, but additionally allows keys to be indices or slices applied to the contained ndarray values. The elements must also be ndarrays.

In addition, it makes sure the amount of data contained in those ndarrays matches the number of streamlines given at the instantiation of this instance.

Parameters:
n_rows : None or int, optional

Number of rows per value in each key, value pair or None for not specified.

*args
**kwargs

Positional and keyword arguments, passed straight through the dict constructor.

Methods

clear()

extend(other)

Appends the elements of another PerArrayDict.

get(k[,d])

items()

keys()

pop(k[,d])

If key is not found, d is returned if given, otherwise KeyError is raised.

popitem()

Remove and return some (key, value) pair as a 2-tuple; raise KeyError if D is empty.

setdefault(k[,d])

update([E, ]**F)

If E present and has a .keys() method, does: for k in E: D[k] = E[k]. If E present and lacks a .keys() method, does: for (k, v) in E: D[k] = v. In either case, this is followed by: for k, v in F.items(): D[k] = v.

values()

__init__(n_rows=0, *args, **kwargs)
extend(other)

Appends the elements of another PerArrayDict.

That is, for each entry in this dictionary, we append the elements coming from the other dictionary at the corresponding entry.

Parameters:
otherPerArrayDict object

Its data will be appended to the data of this dictionary.

Returns:
None

Notes

The keys in both dictionaries must be the same.

PerArraySequenceDict

class dipy.io.stateful_tractogram.PerArraySequenceDict(n_rows=0, *args, **kwargs)

Bases: PerArrayDict

Dictionary for which key access can do slicing on the values.

This container behaves like a standard dictionary but extends key access to allow keys to be indices slicing into the contained values. The elements must also be ArraySequence.

In addition, it makes sure the amount of data contained in those array sequences matches the number of elements given at the instantiation of the instance.

Methods

clear()

extend(other)

Appends the elements of another PerArrayDict.

get(k[,d])

items()

keys()

pop(k[,d])

If key is not found, d is returned if given, otherwise KeyError is raised.

popitem()

Remove and return some (key, value) pair as a 2-tuple; raise KeyError if D is empty.

setdefault(k[,d])

update([E, ]**F)

If E present and has a .keys() method, does: for k in E: D[k] = E[k]. If E present and lacks a .keys() method, does: for (k, v) in E: D[k] = v. In either case, this is followed by: for k, v in F.items(): D[k] = v.

values()

__init__(n_rows=0, *args, **kwargs)

Space

class dipy.io.stateful_tractogram.Space(value)

Bases: Enum

Enum to simplify future change to convention

__init__(*args, **kwargs)
RASMM = 'rasmm'
VOX = 'vox'
VOXMM = 'voxmm'

StatefulTractogram

class dipy.io.stateful_tractogram.StatefulTractogram(streamlines, reference, space, origin=Origin.NIFTI, data_per_point=None, data_per_streamline=None)

Bases: object

Class for stateful representation of collections of streamlines. The object is designed to be identical no matter the file format (trk, tck, vtk, fib, dpy) and facilitates transformation between spaces and data manipulation for each streamline / point.

Attributes:
affine

Getter for the reference affine

data_per_point

Getter for data_per_point

data_per_streamline

Getter for data_per_streamline

dimensions

Getter for the reference dimensions

origin

Getter for origin standard

space

Getter for the current space

space_attributes

Getter for spatial attribute

streamlines

Partially safe getter for streamlines

voxel_order

Getter for the reference voxel order

voxel_sizes

Getter for the reference voxel sizes

Methods

are_compatible(sft_1, sft_2)

Compatibility verification of two StatefulTractogram to ensure space, origin, data_per_point and data_per_streamline consistency

compute_bounding_box()

Compute the bounding box of the streamlines in their current state

from_sft(streamlines, sft[, data_per_point, ...])

Create an instance of StatefulTractogram from another instance of StatefulTractogram.

get_data_per_point_keys()

Return a list of the data_per_point attribute names

get_data_per_streamline_keys()

Return a list of the data_per_streamline attribute names

get_streamlines_copy()

Safe getter for streamlines (for slicing)

is_bbox_in_vox_valid()

Verify that the bounding box is valid in voxel space.

remove_invalid_streamlines([epsilon])

Remove streamlines with invalid coordinates from the object.

to_center()

Safe function to shift streamlines so the center of voxel is the origin

to_corner()

Safe function to shift streamlines so the corner of voxel is the origin

to_origin(target_origin)

Safe function to change streamlines to a particular origin standard. False means NIFTI (center) and True means TrackVis (corner).

to_rasmm()

Safe function to transform streamlines and update state

to_space(target_space)

Safe function to transform streamlines to a particular space using an enum and update state

to_vox()

Safe function to transform streamlines and update state

to_voxmm()

Safe function to transform streamlines and update state

__init__(streamlines, reference, space, origin=Origin.NIFTI, data_per_point=None, data_per_streamline=None)

Create a strict, state-aware, robust tractogram

Parameters:
streamlines : list or ArraySequence

Streamlines of the tractogram

reference : Nifti or Trk filename, Nifti1Image or TrkFile,

Nifti1Header, trk.header (dict) or another StatefulTractogram. Reference that provides the spatial attributes. Typically a nifti-related object from the native diffusion used for streamline generation

space : Enum (dipy.io.stateful_tractogram.Space)

Current space in which the streamlines are (vox, voxmm or rasmm). After tracking the space is VOX; after loading with nibabel the space is RASMM.

origin : Enum (dipy.io.stateful_tractogram.Origin), optional

Current origin in which the streamlines are (center or corner). After loading with nibabel the origin is CENTER.

data_per_point : dict, optional

Dictionary in which each key has X items, and each item has Y_i items, X being the number of streamlines and Y_i the number of points on streamline #i

data_per_streamline : dict, optional

Dictionary in which each key has X items, X being the number of streamlines

Notes

It is very important to respect the convention; verify that streamlines match the reference and are effectively in the right space.

Any change to the number of streamlines, data_per_point or data_per_streamline requires particular verification.

For manipulations not supported by this object, use Nibabel directly and be careful.

property affine

Getter for the reference affine

static are_compatible(sft_1, sft_2)

Compatibility verification of two StatefulTractogram to ensure space, origin, data_per_point and data_per_streamline consistency

compute_bounding_box()

Compute the bounding box of the streamlines in their current state

Returns:
output : ndarray

8 corners of the XYZ aligned box, all zeros if no streamlines

property data_per_point

Getter for data_per_point

property data_per_streamline

Getter for data_per_streamline

property dimensions

Getter for the reference dimensions

static from_sft(streamlines, sft, data_per_point=None, data_per_streamline=None)

Create an instance of StatefulTractogram from another instance of StatefulTractogram.

Parameters:
streamlines : list or ArraySequence

Streamlines of the tractogram

sft : StatefulTractogram

The other StatefulTractogram to copy the space_attribute AND state from.

data_per_point : dict, optional

Dictionary in which each key has X items, and each item has Y_i items, X being the number of streamlines and Y_i the number of points on streamline #i

data_per_streamline : dict, optional

Dictionary in which each key has X items, X being the number of streamlines

get_data_per_point_keys()

Return a list of the data_per_point attribute names

get_data_per_streamline_keys()

Return a list of the data_per_streamline attribute names

get_streamlines_copy()

Safe getter for streamlines (for slicing)

is_bbox_in_vox_valid()

Verify that the bounding box is valid in voxel space. Negative coordinates or coordinates above the volume dimensions are considered invalid in voxel space.

Returns:
output : bool

Are the streamlines within the volume of the associated reference

property origin

Getter for origin standard

remove_invalid_streamlines(epsilon=0.001)

Remove streamlines with invalid coordinates from the object. Will also remove the data_per_point and data_per_streamline. Invalid coordinates are any X,Y,Z values above the reference dimensions or below zero

Parameters:
epsilonfloat (optional)

Epsilon value for the bounding box verification. Default is 1e-6.

Returns:
outputtuple

Tuple of two list, indices_to_remove, indices_to_keep

property space

Getter for the current space

property space_attributes

Getter for spatial attribute

property streamlines

Partially safe getter for streamlines

to_center()

Safe function to shift streamlines so the center of voxel is the origin

to_corner()

Safe function to shift streamlines so the corner of voxel is the origin

to_origin(target_origin)

Safe function to change streamlines to a particular origin standard. False means NIFTI (center) and True means TrackVis (corner).

to_rasmm()

Safe function to transform streamlines and update state

to_space(target_space)

Safe function to transform streamlines to a particular space using an enum and update state

to_vox()

Safe function to transform streamlines and update state

to_voxmm()

Safe function to transform streamlines and update state

property voxel_order

Getter for the reference voxel order

property voxel_sizes

Getter for the reference voxel sizes
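The to_vox / to_rasmm transforms above come down to applying (or inverting) the reference affine. A hedged numpy sketch of the underlying mapping, with a hypothetical 2 mm isotropic affine (illustrative, not the dipy implementation):

```python
import numpy as np

# Hypothetical VOX -> RASMM affine: 2 mm isotropic voxels, shifted origin.
affine = np.array([[2.0, 0.0, 0.0, -90.0],
                   [0.0, 2.0, 0.0, -126.0],
                   [0.0, 0.0, 2.0, -72.0],
                   [0.0, 0.0, 0.0, 1.0]])

vox_point = np.array([10.0, 20.0, 30.0])

# VOX -> RASMM: rotate/scale, then translate (what to_rasmm does
# conceptually for every streamline point).
rasmm = affine[:3, :3] @ vox_point + affine[:3, 3]

# RASMM -> VOX inverts the affine, as to_vox does conceptually.
inv = np.linalg.inv(affine)
back = inv[:3, :3] @ rasmm + inv[:3, 3]
```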

Streamlines

dipy.io.stateful_tractogram.Streamlines

alias of ArraySequence

Tractogram

class dipy.io.stateful_tractogram.Tractogram(streamlines=None, data_per_streamline=None, data_per_point=None, affine_to_rasmm=None)

Bases: object

Container for streamlines and their data information.

Streamlines of a tractogram can be in any coordinate system of your choice as long as you provide the correct affine_to_rasmm matrix, at construction time. When applied to streamlines coordinates, that transformation matrix should bring the streamlines back to world space (RAS+ and mm space) [1].

Moreover, when streamlines are mapped back to voxel space [2], a streamline point located at an integer coordinate (i,j,k) is considered to be at the center of the corresponding voxel. This is in contrast with other conventions where it might have referred to a corner.

References

Attributes:
streamlines : ArraySequence object

Sequence of \(T\) streamlines. Each streamline is an ndarray of shape (\(N_t\), 3) where \(N_t\) is the number of points of streamline \(t\).

data_per_streamline : PerArrayDict object

Dictionary where the items are (str, 2D array). Each key represents a piece of information \(i\) to be kept alongside every streamline, and its associated value is a 2D array of shape (\(T\), \(P_i\)) where \(T\) is the number of streamlines and \(P_i\) is the number of values to store for that particular piece of information \(i\).

data_per_point : PerArraySequenceDict object

Dictionary where the items are (str, ArraySequence). Each key represents a piece of information \(i\) to be kept alongside every point of every streamline, and its associated value is an iterable of ndarrays of shape (\(N_t\), \(M_i\)) where \(N_t\) is the number of points for a particular streamline \(t\) and \(M_i\) is the number of values to store for that particular piece of information \(i\).

Methods

apply_affine(affine[, lazy])

Applies an affine transformation on the points of each streamline.

copy()

Returns a copy of this Tractogram object.

extend(other)

Appends the data of another Tractogram.

to_world([lazy])

Brings the streamlines to world space (i.e.

__init__(streamlines=None, data_per_streamline=None, data_per_point=None, affine_to_rasmm=None)
Parameters:
streamlines : iterable of ndarrays or ArraySequence, optional

Sequence of \(T\) streamlines. Each streamline is an ndarray of shape (\(N_t\), 3) where \(N_t\) is the number of points of streamline \(t\).

data_per_streamline : dict of iterable of ndarrays, optional

Dictionary where the items are (str, iterable). Each key represents an information \(i\) to be kept alongside every streamline, and its associated value is an iterable of ndarrays of shape (\(P_i\),) where \(P_i\) is the number of scalar values to store for that particular information \(i\).

data_per_point : dict of iterable of ndarrays, optional

Dictionary where the items are (str, iterable). Each key represents an information \(i\) to be kept alongside every point of every streamline, and its associated value is an iterable of ndarrays of shape (\(N_t\), \(M_i\)) where \(N_t\) is the number of points for a particular streamline \(t\) and \(M_i\) is the number of scalar values to store for that particular information \(i\).

affine_to_rasmm : ndarray of shape (4, 4) or None, optional

Transformation matrix that brings the streamlines contained in this tractogram to RAS+ and mm space where coordinate (0,0,0) refers to the center of the voxel. By default, the streamlines are in an unknown space, i.e. affine_to_rasmm is None.

property affine_to_rasmm

Affine bringing streamlines in this tractogram to RAS+mm.

apply_affine(affine, lazy=False)

Applies an affine transformation on the points of each streamline.

If lazy is not specified, this is performed in-place.

Parameters:
affine : ndarray of shape (4, 4)

Transformation that will be applied to every streamline.

lazy : {False, True}, optional

If True, streamlines are not transformed in-place and a LazyTractogram object is returned. Otherwise, streamlines are modified in-place.

Returns:
tractogram : Tractogram or LazyTractogram object

Tractogram where the streamlines have been transformed according to the given affine transformation. If the lazy option is true, it returns a LazyTractogram object, otherwise it returns a reference to this Tractogram object with updated streamlines.

copy()

Returns a copy of this Tractogram object.

property data_per_point
property data_per_streamline
extend(other)

Appends the data of another Tractogram.

Data that will be appended includes the streamlines and the content of both dictionaries data_per_streamline and data_per_point.

Parameters:
other : Tractogram object

Its data will be appended to the data of this tractogram.

Returns:
None

Notes

The entries in both dictionaries self.data_per_streamline and self.data_per_point must match respectively those contained in the other tractogram.

property streamlines
to_world(lazy=False)

Brings the streamlines to world space (i.e. RAS+ and mm).

If lazy is not specified, this is performed in-place.

Parameters:
lazy : {False, True}, optional

If True, streamlines are not transformed in-place and a LazyTractogram object is returned. Otherwise, streamlines are modified in-place.

Returns:
tractogram : Tractogram or LazyTractogram object

Tractogram where the streamlines have been sent to world space. If the lazy option is true, it returns a LazyTractogram object, otherwise it returns a reference to this Tractogram object with updated streamlines.

product

class dipy.io.stateful_tractogram.product

Bases: object

product(*iterables, repeat=1) --> product object

Cartesian product of input iterables. Equivalent to nested for-loops.

For example, product(A, B) returns the same as: ((x,y) for x in A for y in B). The leftmost iterators are in the outermost for-loop, so the output tuples cycle in a manner similar to an odometer (with the rightmost element changing on every iteration).

To compute the product of an iterable with itself, specify the number of repetitions with the optional repeat keyword argument. For example, product(A, repeat=4) means the same as product(A, A, A, A).

product('ab', range(3)) --> ('a',0) ('a',1) ('a',2) ('b',0) ('b',1) ('b',2) product((0,1), (0,1), (0,1)) --> (0,0,0) (0,0,1) (0,1,0) (0,1,1) (1,0,0) ...

__init__(*args, **kwargs)

apply_affine

dipy.io.stateful_tractogram.apply_affine(aff, pts, inplace=False)

Apply affine matrix aff to points pts

Returns result of application of aff to the right of pts. The coordinate dimension of pts should be the last.

For the 3D case, aff will be shape (4,4) and pts will have final axis length 3 - maybe it will just be N by 3. The return value is the transformed points, in this case:

res = np.dot(aff[:3,:3], pts.T) + aff[:3,3:4]
transformed_pts = res.T

This routine is more general than 3D, in that aff can have any shape (N,N), and pts can have any shape, as long as the last dimension is for the coordinates, and is therefore length N-1.

Parameters:
aff : (N, N) array-like

Homogeneous affine, for 3D points, will be 4 by 4. Contrary to first appearance, the affine will be applied on the left of pts.

pts : (…, N-1) array-like

Points, where the last dimension contains the coordinates of each point. For 3D, the last dimension will be length 3.

inplace : bool, optional

If True, attempt to apply the affine directly to pts. If False, or in-place application fails, a freshly allocated array will be returned.

Returns:
transformed_pts : (…, N-1) array

transformed points

Examples

>>> aff = np.array([[0,2,0,10],[3,0,0,11],[0,0,4,12],[0,0,0,1]])
>>> pts = np.array([[1,2,3],[2,3,4],[4,5,6],[6,7,8]])
>>> apply_affine(aff, pts) 
array([[14, 14, 24],
       [16, 17, 28],
       [20, 23, 36],
       [24, 29, 44]]...)

Just to show that in the simple 3D case, it is equivalent to:

>>> (np.dot(aff[:3,:3], pts.T) + aff[:3,3:4]).T 
array([[14, 14, 24],
       [16, 17, 28],
       [20, 23, 36],
       [24, 29, 44]]...)

But pts can be a more complicated shape:

>>> pts = pts.reshape((2,2,3))
>>> apply_affine(aff, pts) 
array([[[14, 14, 24],
        [16, 17, 28]],

       [[20, 23, 36],
        [24, 29, 44]]]...)

bisect

dipy.io.stateful_tractogram.bisect(/, a, x, lo=0, hi=None)

Return the index where to insert item x in list a, assuming a is sorted.

The return value i is such that all e in a[:i] have e <= x, and all e in a[i:] have e > x. So if x already appears in the list, i points just beyond the rightmost x already there.

Optional args lo (default 0) and hi (default len(a)) bound the slice of a to be searched.
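For example, with the standard library function this wraps:

```python
import bisect

a = [1, 2, 2, 4]

# The insertion point lies just beyond the rightmost matching value.
assert bisect.bisect(a, 2) == 3
# For a value not in the list, it is the first index that keeps a sorted.
assert bisect.bisect(a, 3) == 3
# lo and hi restrict the search to a slice of a.
assert bisect.bisect(a, 2, 0, 2) == 2
```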

deepcopy

dipy.io.stateful_tractogram.deepcopy(x, memo=None, _nil=[])

Deep copy operation on arbitrary Python objects.

See the module’s __doc__ string for more info.

get_reference_info

dipy.io.stateful_tractogram.get_reference_info(reference)

Extract the spatial attributes of a reference.

Parameters:
reference : Nifti or Trk filename, Nifti1Image or TrkFile, Nifti1Header or

trk.header (dict). Reference that provides the spatial attributes.

Returns:
output : tuple
  • affine ndarray (4,4), np.float32, transformation of VOX to RASMM

  • dimensions ndarray (3,), int16, volume shape for each axis

  • voxel_sizes ndarray (3,), float32, size of voxel for each axis

  • voxel_order, string, typically 'RAS' or 'LPS'
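As a sanity check on these attributes, the voxel sizes can be recovered from the affine as the norms of the first three columns of its rotation/zoom block. A small numpy sketch with a hypothetical affine (assumed values, not dipy code):

```python
import numpy as np

# Hypothetical VOX -> RASMM affine with anisotropic 1 x 1 x 2.5 mm voxels.
affine = np.array([[1.0, 0.0, 0.0, -80.0],
                   [0.0, 1.0, 0.0, -120.0],
                   [0.0, 0.0, 2.5, -60.0],
                   [0.0, 0.0, 0.0, 1.0]])

# Each column of the 3x3 block maps a unit voxel step to millimeters,
# so its norm is the voxel size along that axis.
voxel_sizes = np.linalg.norm(affine[:3, :3], axis=0)
```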

is_header_compatible

dipy.io.stateful_tractogram.is_header_compatible(reference_1, reference_2)

Will compare the spatial attributes of 2 references

Parameters:
reference_1 : Nifti or Trk filename, Nifti1Image or TrkFile,

Nifti1Header or trk.header (dict). Reference that provides the spatial attributes.

reference_2 : Nifti or Trk filename, Nifti1Image or TrkFile,

Nifti1Header or trk.header (dict). Reference that provides the spatial attributes.

Returns:
output : bool

Do all the spatial attributes match
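A minimal numpy sketch of the kind of check involved, comparing affines up to a tolerance (the actual function also compares dimensions, voxel sizes and voxel order; the values below are made up):

```python
import numpy as np

affine_1 = np.eye(4)
affine_2 = np.eye(4)
affine_2[0, 3] += 1e-7  # negligible difference in translation

# Spatial attributes "match" up to floating-point tolerance.
assert np.allclose(affine_1, affine_2, atol=1e-3)

# A different voxel size is a genuine mismatch.
affine_3 = np.diag([2.0, 2.0, 2.0, 1.0])
assert not np.allclose(affine_1, affine_3, atol=1e-3)
```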

is_reference_info_valid

dipy.io.stateful_tractogram.is_reference_info_valid(affine, dimensions, voxel_sizes, voxel_order)

Validate basic data type and value of spatial attribute.

Does not ensure that voxel_sizes and voxel_order are self-coherent with the affine. Only verify the following:

  • affine is of the right type (float) and dimension (4,4)

  • affine contains values in the rotation part

  • dimensions is of right type (int) and length (3)

  • voxel_sizes is of right type (float) and length (3)

  • voxel_order is of right type (str) and length (3)

The listed parameters are what is expected; providing anything else should make this function fail (covers common mistakes).

Parameters:
affine: ndarray (4,4)

Transformation of VOX to RASMM

dimensions: ndarray (3,), int16

Volume shape for each axis

voxel_sizes: ndarray (3,), float32

Size of voxel for each axis

voxel_order: string

Typically ‘RAS’ or ‘LPS’

Returns:
output : bool

Does the input represent a valid ‘state’ of spatial attributes

set_sft_logger_level

dipy.io.stateful_tractogram.set_sft_logger_level(log_level)

Change the logger of the StatefulTractogram to one of the following: DEBUG, INFO, WARNING, CRITICAL, ERROR

Parameters:
log_level : str

Log level for the StatefulTractogram only

Dpy

class dipy.io.streamline.Dpy(fname, mode='r', compression=0)

Bases: object

Methods

read_track()

read one track each time

read_tracks()

read the entire tractography

read_tracksi(indices)

read tracks with specific indices

write_track(track)

write one track each time

write_tracks(tracks)

write many tracks together

close

version

__init__(fname, mode='r', compression=0)

Advanced storage system for tractography based on HDF5

Parameters:
fname : str, full filename
mode : 'r' read

'w' write, 'r+' read and write (only if the file already exists)

compression : int

0 (no compression) to 9 (maximum compression)

Examples

>>> import os
>>> import numpy as np
>>> from tempfile import mkstemp  # temp file
>>> from dipy.io.dpy import Dpy
>>> def dpy_example():
...     fd,fname = mkstemp()
...     fname += '.dpy'#add correct extension
...     dpw = Dpy(fname,'w')
...     A=np.ones((5,3))
...     B=2*A.copy()
...     C=3*A.copy()
...     dpw.write_track(A)
...     dpw.write_track(B)
...     dpw.write_track(C)
...     dpw.close()
...     dpr = Dpy(fname,'r')
...     dpr.read_track()
...     dpr.read_track()
...     dpr.read_tracksi([0, 1, 2, 0, 0, 2])
...     dpr.close()
...     os.remove(fname) #delete file from disk
>>> dpy_example()
close()
read_track()

read one track each time

read_tracks()

read the entire tractography

read_tracksi(indices)

read tracks with specific indices

version()
write_track(track)

write one track each time

write_tracks(tracks)

write many tracks together

Origin

class dipy.io.streamline.Origin(value)

Bases: Enum

Enum to simplify future change to convention

__init__(*args, **kwargs)
NIFTI = 'center'
TRACKVIS = 'corner'

Space

class dipy.io.streamline.Space(value)

Bases: Enum

Enum to simplify future change to convention

__init__(*args, **kwargs)
RASMM = 'rasmm'
VOX = 'vox'
VOXMM = 'voxmm'

StatefulTractogram

class dipy.io.streamline.StatefulTractogram(streamlines, reference, space, origin=Origin.NIFTI, data_per_point=None, data_per_streamline=None)

Bases: object

Class for stateful representation of collections of streamlines. The object is designed to be identical no matter the file format (trk, tck, vtk, fib, dpy) and facilitates transformation between spaces and data manipulation for each streamline / point.

Attributes:
affine

Getter for the reference affine

data_per_point

Getter for data_per_point

data_per_streamline

Getter for data_per_streamline

dimensions

Getter for the reference dimensions

origin

Getter for origin standard

space

Getter for the current space

space_attributes

Getter for spatial attribute

streamlines

Partially safe getter for streamlines

voxel_order

Getter for the reference voxel order

voxel_sizes

Getter for the reference voxel sizes

Methods

are_compatible(sft_1, sft_2)

Compatibility verification of two StatefulTractogram to ensure space, origin, data_per_point and data_per_streamline consistency

compute_bounding_box()

Compute the bounding box of the streamlines in their current state

from_sft(streamlines, sft[, data_per_point, ...])

Create an instance of StatefulTractogram from another instance of StatefulTractogram.

get_data_per_point_keys()

Return a list of the data_per_point attribute names

get_data_per_streamline_keys()

Return a list of the data_per_streamline attribute names

get_streamlines_copy()

Safe getter for streamlines (for slicing)

is_bbox_in_vox_valid()

Verify that the bounding box is valid in voxel space.

remove_invalid_streamlines([epsilon])

Remove streamlines with invalid coordinates from the object.

to_center()

Safe function to shift streamlines so the center of voxel is the origin

to_corner()

Safe function to shift streamlines so the corner of voxel is the origin

to_origin(target_origin)

Safe function to change streamlines to a particular origin standard. False means NIFTI (center) and True means TrackVis (corner).

to_rasmm()

Safe function to transform streamlines and update state

to_space(target_space)

Safe function to transform streamlines to a particular space using an enum and update state

to_vox()

Safe function to transform streamlines and update state

to_voxmm()

Safe function to transform streamlines and update state

__init__(streamlines, reference, space, origin=Origin.NIFTI, data_per_point=None, data_per_streamline=None)

Create a strict, state-aware, robust tractogram

Parameters:
streamlines : list or ArraySequence

Streamlines of the tractogram

reference : Nifti or Trk filename, Nifti1Image or TrkFile,

Nifti1Header, trk.header (dict) or another StatefulTractogram. Reference that provides the spatial attributes. Typically a nifti-related object from the native diffusion used for streamline generation

space : Enum (dipy.io.stateful_tractogram.Space)

Current space in which the streamlines are (vox, voxmm or rasmm). After tracking the space is VOX; after loading with nibabel the space is RASMM.

origin : Enum (dipy.io.stateful_tractogram.Origin), optional

Current origin in which the streamlines are (center or corner). After loading with nibabel the origin is CENTER.

data_per_point : dict, optional

Dictionary in which each key has X items, and each item has Y_i items, X being the number of streamlines and Y_i the number of points on streamline #i

data_per_streamline : dict, optional

Dictionary in which each key has X items, X being the number of streamlines

Notes

It is very important to respect the convention; verify that streamlines match the reference and are effectively in the right space.

Any change to the number of streamlines, data_per_point or data_per_streamline requires particular verification.

For manipulations not supported by this object, use Nibabel directly and be careful.

property affine

Getter for the reference affine

static are_compatible(sft_1, sft_2)

Compatibility verification of two StatefulTractogram to ensure space, origin, data_per_point and data_per_streamline consistency

compute_bounding_box()

Compute the bounding box of the streamlines in their current state

Returns:
output : ndarray

8 corners of the XYZ aligned box, all zeros if no streamlines

property data_per_point

Getter for data_per_point

property data_per_streamline

Getter for data_per_streamline

property dimensions

Getter for the reference dimensions

static from_sft(streamlines, sft, data_per_point=None, data_per_streamline=None)

Create an instance of StatefulTractogram from another instance of StatefulTractogram.

Parameters:
streamlines : list or ArraySequence

Streamlines of the tractogram

sft : StatefulTractogram

The other StatefulTractogram to copy the space_attribute AND state from.

data_per_point : dict, optional

Dictionary in which each key has X items, and each item has Y_i items, X being the number of streamlines and Y_i the number of points on streamline #i

data_per_streamline : dict, optional

Dictionary in which each key has X items, X being the number of streamlines

get_data_per_point_keys()

Return a list of the data_per_point attribute names

get_data_per_streamline_keys()

Return a list of the data_per_streamline attribute names

get_streamlines_copy()

Safe getter for streamlines (for slicing)

is_bbox_in_vox_valid()

Verify that the bounding box is valid in voxel space. Negative coordinates or coordinates above the volume dimensions are considered invalid in voxel space.

Returns:
output : bool

Are the streamlines within the volume of the associated reference

property origin

Getter for origin standard

remove_invalid_streamlines(epsilon=0.001)

Remove streamlines with invalid coordinates from the object. Will also remove the data_per_point and data_per_streamline. Invalid coordinates are any X,Y,Z values above the reference dimensions or below zero

Parameters:
epsilonfloat (optional)

Epsilon value for the bounding box verification. Default is 1e-6.

Returns:
outputtuple

Tuple of two list, indices_to_remove, indices_to_keep

property space

Getter for the current space

property space_attributes

Getter for spatial attribute

property streamlines

Partially safe getter for streamlines

to_center()

Safe function to shift streamlines so the center of voxel is the origin

to_corner()

Safe function to shift streamlines so the corner of voxel is the origin

to_origin(target_origin)

Safe function to change streamlines to a particular origin standard. False means NIFTI (center) and True means TrackVis (corner).

to_rasmm()

Safe function to transform streamlines and update state

to_space(target_space)

Safe function to transform streamlines to a particular space using an enum and update state

to_vox()

Safe function to transform streamlines and update state

to_voxmm()

Safe function to transform streamlines and update state

property voxel_order

Getter for the reference voxel order

property voxel_sizes

Getter for the reference voxel sizes

Tractogram

class dipy.io.streamline.Tractogram(streamlines=None, data_per_streamline=None, data_per_point=None, affine_to_rasmm=None)

Bases: object

Container for streamlines and their data information.

Streamlines of a tractogram can be in any coordinate system of your choice as long as you provide the correct affine_to_rasmm matrix, at construction time. When applied to streamlines coordinates, that transformation matrix should bring the streamlines back to world space (RAS+ and mm space) [3].

Moreover, when streamlines are mapped back to voxel space [4], a streamline point located at an integer coordinate (i,j,k) is considered to be at the center of the corresponding voxel. This is in contrast with other conventions where it might have referred to a corner.

References

Attributes:
streamlines : ArraySequence object

Sequence of \(T\) streamlines. Each streamline is an ndarray of shape (\(N_t\), 3) where \(N_t\) is the number of points of streamline \(t\).

data_per_streamline : PerArrayDict object

Dictionary where the items are (str, 2D array). Each key represents a piece of information \(i\) to be kept alongside every streamline, and its associated value is a 2D array of shape (\(T\), \(P_i\)) where \(T\) is the number of streamlines and \(P_i\) is the number of values to store for that particular piece of information \(i\).

data_per_point : PerArraySequenceDict object

Dictionary where the items are (str, ArraySequence). Each key represents a piece of information \(i\) to be kept alongside every point of every streamline, and its associated value is an iterable of ndarrays of shape (\(N_t\), \(M_i\)) where \(N_t\) is the number of points for a particular streamline \(t\) and \(M_i\) is the number of values to store for that particular piece of information \(i\).

Methods

apply_affine(affine[, lazy])

Applies an affine transformation on the points of each streamline.

copy()

Returns a copy of this Tractogram object.

extend(other)

Appends the data of another Tractogram.

to_world([lazy])

Brings the streamlines to world space (i.e.

__init__(streamlines=None, data_per_streamline=None, data_per_point=None, affine_to_rasmm=None)
Parameters:
streamlines : iterable of ndarrays or ArraySequence, optional

Sequence of \(T\) streamlines. Each streamline is an ndarray of shape (\(N_t\), 3) where \(N_t\) is the number of points of streamline \(t\).

data_per_streamline : dict of iterable of ndarrays, optional

Dictionary where the items are (str, iterable). Each key represents a piece of information \(i\) to be kept alongside every streamline, and its associated value is an iterable of ndarrays of shape (\(P_i\),) where \(P_i\) is the number of scalar values to store for that particular piece of information \(i\).

data_per_point : dict of iterable of ndarrays, optional

Dictionary where the items are (str, iterable). Each key represents a piece of information \(i\) to be kept alongside every point of every streamline, and its associated value is an iterable of ndarrays of shape (\(N_t\), \(M_i\)) where \(N_t\) is the number of points for a particular streamline \(t\) and \(M_i\) is the number of scalar values to store for that particular piece of information \(i\).

affine_to_rasmm : ndarray of shape (4, 4) or None, optional

Transformation matrix that brings the streamlines contained in this tractogram to RAS+ and mm space, where coordinate (0, 0, 0) refers to the center of the voxel. By default, the streamlines are in an unknown space, i.e. affine_to_rasmm is None.
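The shape constraints above can be checked with plain numpy. This is a hypothetical helper (`check_tractogram_shapes` is not part of dipy) illustrating how the per-streamline and per-point data must line up with the streamlines:

```python
import numpy as np

# T = 2 streamlines with 5 and 8 points respectively.
streamlines = [np.zeros((5, 3)), np.zeros((8, 3))]
data_per_streamline = {"mean_fa": np.zeros((2, 1))}            # shape (T, P_i)
data_per_point = {"fa": [np.zeros((5, 1)), np.zeros((8, 1))]}  # shapes (N_t, M_i)

def check_tractogram_shapes(streamlines, per_streamline, per_point):
    """Return True if per-streamline values have T rows and per-point
    values have one (N_t, M_i) array per streamline."""
    T = len(streamlines)
    ok = all(v.shape[0] == T for v in per_streamline.values())
    ok = ok and all(
        len(v) == T and all(a.shape[0] == s.shape[0]
                            for a, s in zip(v, streamlines))
        for v in per_point.values()
    )
    return ok
```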

property affine_to_rasmm

Affine bringing streamlines in this tractogram to RAS+mm.

apply_affine(affine, lazy=False)

Applies an affine transformation on the points of each streamline.

If lazy is not specified, this is performed in-place.

Parameters:
affine : ndarray of shape (4, 4)

Transformation that will be applied to every streamline.

lazy : {False, True}, optional

If True, streamlines are not transformed in-place and a LazyTractogram object is returned. Otherwise, streamlines are modified in-place.

Returns:
tractogram : Tractogram or LazyTractogram object

Tractogram where the streamlines have been transformed according to the given affine transformation. If the lazy option is true, it returns a LazyTractogram object; otherwise it returns a reference to this Tractogram object with updated streamlines.
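The point-wise math behind this transformation can be sketched with numpy (the function name `apply_affine_to_streamline` is hypothetical; dipy's own implementation may differ in details):

```python
import numpy as np

def apply_affine_to_streamline(affine, streamline):
    """Apply a (4, 4) homogeneous affine to an (N, 3) streamline:
    p' = R @ p + t, with R = affine[:3, :3] and t = affine[:3, 3]."""
    R, t = affine[:3, :3], affine[:3, 3]
    return streamline @ R.T + t

# Scale by 2 and translate by (1, 0, 0).
affine = np.array([[2., 0., 0., 1.],
                   [0., 2., 0., 0.],
                   [0., 0., 2., 0.],
                   [0., 0., 0., 1.]])
sl = np.array([[0., 0., 0.], [1., 1., 1.]])
out = apply_affine_to_streamline(affine, sl)
```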

copy()

Returns a copy of this Tractogram object.

property data_per_point
property data_per_streamline
extend(other)

Appends the data of another Tractogram.

Data that will be appended includes the streamlines and the content of both dictionaries data_per_streamline and data_per_point.

Parameters:
other : Tractogram object

Its data will be appended to the data of this tractogram.

Returns:
None

Notes

The entries in both dictionaries self.data_per_streamline and self.data_per_point must match respectively those contained in the other tractogram.

property streamlines
to_world(lazy=False)

Brings the streamlines to world space (i.e. RAS+ and mm).

If lazy is not specified, this is performed in-place.

Parameters:
lazy : {False, True}, optional

If True, streamlines are not transformed in-place and a LazyTractogram object is returned. Otherwise, streamlines are modified in-place.

Returns:
tractogram : Tractogram or LazyTractogram object

Tractogram where the streamlines have been sent to world space. If the lazy option is true, it returns a LazyTractogram object; otherwise it returns a reference to this Tractogram object with updated streamlines.

create_tractogram_header

dipy.io.streamline.create_tractogram_header(tractogram_type, affine, dimensions, voxel_sizes, voxel_order)

Write a standard trk/tck header from spatial attributes.

deepcopy

dipy.io.streamline.deepcopy(x, memo=None, _nil=[])

Deep copy operation on arbitrary Python objects.

See the module’s __doc__ string for more info.

detect_format

dipy.io.streamline.detect_format(fileobj)

Returns the StreamlinesFile object guessed from the file-like object.

Parameters:
fileobj : string or file-like object

If string, a filename; otherwise an open file-like object pointing to a tractogram file (and ready to read from the beginning of the header).

Returns:
tractogram_file : TractogramFile class

The class type guessed from the content of fileobj.

is_header_compatible

dipy.io.streamline.is_header_compatible(reference_1, reference_2)

Compare the spatial attributes of two references.

Parameters:
reference_1 : Nifti or Trk filename, Nifti1Image or TrkFile, Nifti1Header or trk.header (dict)

Reference that provides the spatial attributes.

reference_2 : Nifti or Trk filename, Nifti1Image or TrkFile, Nifti1Header or trk.header (dict)

Reference that provides the spatial attributes.

Returns:
output : bool

True if all the spatial attributes match.

load_dpy

dipy.io.streamline.load_dpy(filename, reference, to_space=Space.RASMM, to_origin=Origin.NIFTI, bbox_valid_check=True, trk_header_check=True)

Load a stateful tractogram from the .dpy format.

Parameters:
filename : string

Filename with valid extension.

reference : Nifti or Trk filename, Nifti1Image or TrkFile, Nifti1Header or trk.header (dict), or ‘same’ if the input is a trk file

Reference that provides the spatial attributes. Typically a nifti-related object from the native diffusion acquisition used for streamline generation.

to_space : Enum (dipy.io.stateful_tractogram.Space)

Space to which the streamlines will be transformed after loading.

to_origin : Enum (dipy.io.stateful_tractogram.Origin)

Origin to which the streamlines will be transformed after loading: NIFTI standard (default; center of the voxel) or TRACKVIS standard (corner of the voxel).

bbox_valid_check : bool

Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce a valid file.

trk_header_check : bool

Verification that the reference has the same spatial attributes as the header of the input tractogram when a trk file is loaded.

Returns:
output : StatefulTractogram

The tractogram to load (must have been saved properly).

load_fib

dipy.io.streamline.load_fib(filename, reference, to_space=Space.RASMM, to_origin=Origin.NIFTI, bbox_valid_check=True, trk_header_check=True)

Load a stateful tractogram from the .fib format.

Parameters:
filename : string

Filename with valid extension.

reference : Nifti or Trk filename, Nifti1Image or TrkFile, Nifti1Header or trk.header (dict), or ‘same’ if the input is a trk file

Reference that provides the spatial attributes. Typically a nifti-related object from the native diffusion acquisition used for streamline generation.

to_space : Enum (dipy.io.stateful_tractogram.Space)

Space to which the streamlines will be transformed after loading.

to_origin : Enum (dipy.io.stateful_tractogram.Origin)

Origin to which the streamlines will be transformed after loading: NIFTI standard (default; center of the voxel) or TRACKVIS standard (corner of the voxel).

bbox_valid_check : bool

Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce a valid file.

trk_header_check : bool

Verification that the reference has the same spatial attributes as the header of the input tractogram when a trk file is loaded.

Returns:
output : StatefulTractogram

The tractogram to load (must have been saved properly).

load_generator

dipy.io.streamline.load_generator(ttype)

Generate a loading function that performs a file extension check to restrict the user to a single file format.

Parameters:
ttype : string

Extension of the file format that requires a loader.

Returns:
output : function

Function (load_tractogram) that handles only one file format.

load_tck

dipy.io.streamline.load_tck(filename, reference, to_space=Space.RASMM, to_origin=Origin.NIFTI, bbox_valid_check=True, trk_header_check=True)

Load a stateful tractogram from the .tck format.

Parameters:
filename : string

Filename with valid extension.

reference : Nifti or Trk filename, Nifti1Image or TrkFile, Nifti1Header or trk.header (dict), or ‘same’ if the input is a trk file

Reference that provides the spatial attributes. Typically a nifti-related object from the native diffusion acquisition used for streamline generation.

to_space : Enum (dipy.io.stateful_tractogram.Space)

Space to which the streamlines will be transformed after loading.

to_origin : Enum (dipy.io.stateful_tractogram.Origin)

Origin to which the streamlines will be transformed after loading: NIFTI standard (default; center of the voxel) or TRACKVIS standard (corner of the voxel).

bbox_valid_check : bool

Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce a valid file.

trk_header_check : bool

Verification that the reference has the same spatial attributes as the header of the input tractogram when a trk file is loaded.

Returns:
output : StatefulTractogram

The tractogram to load (must have been saved properly).

load_tractogram

dipy.io.streamline.load_tractogram(filename, reference, to_space=Space.RASMM, to_origin=Origin.NIFTI, bbox_valid_check=True, trk_header_check=True)

Load a stateful tractogram from any supported format (trk/tck/vtk/vtp/fib/dpy).

Parameters:
filename : string

Filename with valid extension.

reference : Nifti or Trk filename, Nifti1Image or TrkFile, Nifti1Header or trk.header (dict), or ‘same’ if the input is a trk file

Reference that provides the spatial attributes. Typically a nifti-related object from the native diffusion acquisition used for streamline generation.

to_space : Enum (dipy.io.stateful_tractogram.Space)

Space to which the streamlines will be transformed after loading.

to_origin : Enum (dipy.io.stateful_tractogram.Origin)

Origin to which the streamlines will be transformed after loading: NIFTI standard (default; center of the voxel) or TRACKVIS standard (corner of the voxel).

bbox_valid_check : bool

Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce a valid file.

trk_header_check : bool

Verification that the reference has the same spatial attributes as the header of the input tractogram when a trk file is loaded.

Returns:
output : StatefulTractogram

The tractogram to load (must have been saved properly).
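The to_space and to_origin conventions amount to simple coordinate arithmetic, shown here with plain numpy rather than the dipy API (the helper names are hypothetical): going from RASMM to VOX inverts the voxel-to-world affine, and going from the NIFTI origin (voxel center) to the TRACKVIS origin (voxel corner) shifts voxel coordinates by half a voxel.

```python
import numpy as np

affine = np.diag([2.0, 2.0, 2.0, 1.0])   # VOX -> RASMM, 2 mm isotropic voxels

def rasmm_to_vox(points, affine):
    """Invert the VOX->RASMM affine to map world (mm) points to voxel space."""
    inv = np.linalg.inv(affine)
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ inv.T)[:, :3]

def center_to_corner(points_vox):
    """NIFTI origin (voxel center) -> TRACKVIS origin (voxel corner)."""
    return points_vox + 0.5

pts_mm = np.array([[4.0, 4.0, 4.0]])
pts_vox = rasmm_to_vox(pts_mm, affine)     # voxel space, center origin
pts_trackvis = center_to_corner(pts_vox)   # voxel space, corner origin
```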

load_trk

dipy.io.streamline.load_trk(filename, reference, to_space=Space.RASMM, to_origin=Origin.NIFTI, bbox_valid_check=True, trk_header_check=True)

Load a stateful tractogram from the .trk format.

Parameters:
filename : string

Filename with valid extension.

reference : Nifti or Trk filename, Nifti1Image or TrkFile, Nifti1Header or trk.header (dict), or ‘same’ if the input is a trk file

Reference that provides the spatial attributes. Typically a nifti-related object from the native diffusion acquisition used for streamline generation.

to_space : Enum (dipy.io.stateful_tractogram.Space)

Space to which the streamlines will be transformed after loading.

to_origin : Enum (dipy.io.stateful_tractogram.Origin)

Origin to which the streamlines will be transformed after loading: NIFTI standard (default; center of the voxel) or TRACKVIS standard (corner of the voxel).

bbox_valid_check : bool

Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce a valid file.

trk_header_check : bool

Verification that the reference has the same spatial attributes as the header of the input tractogram when a trk file is loaded.

Returns:
output : StatefulTractogram

The tractogram to load (must have been saved properly).

load_vtk

dipy.io.streamline.load_vtk(filename, reference, to_space=Space.RASMM, to_origin=Origin.NIFTI, bbox_valid_check=True, trk_header_check=True)

Load a stateful tractogram from the .vtk format.

Parameters:
filename : string

Filename with valid extension.

reference : Nifti or Trk filename, Nifti1Image or TrkFile, Nifti1Header or trk.header (dict), or ‘same’ if the input is a trk file

Reference that provides the spatial attributes. Typically a nifti-related object from the native diffusion acquisition used for streamline generation.

to_space : Enum (dipy.io.stateful_tractogram.Space)

Space to which the streamlines will be transformed after loading.

to_origin : Enum (dipy.io.stateful_tractogram.Origin)

Origin to which the streamlines will be transformed after loading: NIFTI standard (default; center of the voxel) or TRACKVIS standard (corner of the voxel).

bbox_valid_check : bool

Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce a valid file.

trk_header_check : bool

Verification that the reference has the same spatial attributes as the header of the input tractogram when a trk file is loaded.

Returns:
output : StatefulTractogram

The tractogram to load (must have been saved properly).

load_vtk_streamlines

dipy.io.streamline.load_vtk_streamlines(filename, to_lps=True)

Load streamlines from vtk polydata.

Supported formats are VTK and FIB.

Parameters:
filename : string

Input filename (.vtk or .fib).

to_lps : bool

If True (default), follow the VTK file convention for streamlines (LPS), as supported by MITK Diffusion and MI-Brain.

Returns:
output : list

List of 2D arrays.

load_vtp

dipy.io.streamline.load_vtp(filename, reference, to_space=Space.RASMM, to_origin=Origin.NIFTI, bbox_valid_check=True, trk_header_check=True)

Load a stateful tractogram from the .vtp format.

Parameters:
filename : string

Filename with valid extension.

reference : Nifti or Trk filename, Nifti1Image or TrkFile, Nifti1Header or trk.header (dict), or ‘same’ if the input is a trk file

Reference that provides the spatial attributes. Typically a nifti-related object from the native diffusion acquisition used for streamline generation.

to_space : Enum (dipy.io.stateful_tractogram.Space)

Space to which the streamlines will be transformed after loading.

to_origin : Enum (dipy.io.stateful_tractogram.Origin)

Origin to which the streamlines will be transformed after loading: NIFTI standard (default; center of the voxel) or TRACKVIS standard (corner of the voxel).

bbox_valid_check : bool

Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce a valid file.

trk_header_check : bool

Verification that the reference has the same spatial attributes as the header of the input tractogram when a trk file is loaded.

Returns:
output : StatefulTractogram

The tractogram to load (must have been saved properly).

save_dpy

dipy.io.streamline.save_dpy(sft, filename, bbox_valid_check=True)

Save the stateful tractogram to the .dpy format.

Parameters:
sft : StatefulTractogram

The stateful tractogram to save.

filename : string

Filename with valid extension.

bbox_valid_check : bool

Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce a valid file.

Returns:
output : bool

True if the saving operation was successful.

save_fib

dipy.io.streamline.save_fib(sft, filename, bbox_valid_check=True)

Save the stateful tractogram to the .fib format.

Parameters:
sft : StatefulTractogram

The stateful tractogram to save.

filename : string

Filename with valid extension.

bbox_valid_check : bool

Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce a valid file.

Returns:
output : bool

True if the saving operation was successful.

save_generator

dipy.io.streamline.save_generator(ttype)

Generate a saving function that performs a file extension check to restrict the user to a single file format.

Parameters:
ttype : string

Extension of the file format that requires a saver.

Returns:
output : function

Function (save_tractogram) that handles only one file format.

save_tck

dipy.io.streamline.save_tck(sft, filename, bbox_valid_check=True)

Save the stateful tractogram to the .tck format.

Parameters:
sft : StatefulTractogram

The stateful tractogram to save.

filename : string

Filename with valid extension.

bbox_valid_check : bool

Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce a valid file.

Returns:
output : bool

True if the saving operation was successful.

save_tractogram

dipy.io.streamline.save_tractogram(sft, filename, bbox_valid_check=True)

Save the stateful tractogram in any supported format (trk/tck/vtk/vtp/fib/dpy).

Parameters:
sft : StatefulTractogram

The stateful tractogram to save.

filename : string

Filename with valid extension.

bbox_valid_check : bool

Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce a valid file.

Returns:
output : bool

True if the saving operation was successful.

save_trk

dipy.io.streamline.save_trk(sft, filename, bbox_valid_check=True)

Save the stateful tractogram to the .trk format.

Parameters:
sft : StatefulTractogram

The stateful tractogram to save.

filename : string

Filename with valid extension.

bbox_valid_check : bool

Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce a valid file.

Returns:
output : bool

True if the saving operation was successful.

save_vtk

dipy.io.streamline.save_vtk(sft, filename, bbox_valid_check=True)

Save the stateful tractogram to the .vtk format.

Parameters:
sft : StatefulTractogram

The stateful tractogram to save.

filename : string

Filename with valid extension.

bbox_valid_check : bool

Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce a valid file.

Returns:
output : bool

True if the saving operation was successful.

save_vtk_streamlines

dipy.io.streamline.save_vtk_streamlines(streamlines, filename, to_lps=True, binary=False)

Save streamlines as vtk polydata to a supported format file.

File formats can be OBJ, VTK, VTP, FIB, PLY, STL and XML.

Parameters:
streamlines : list

List of 2D arrays or ArraySequence.

filename : string

Output filename (.obj, .vtk, .fib, .ply, .stl or .xml).

to_lps : bool

If True (default), follow the VTK file convention for streamlines (LPS), as supported by MITK Diffusion and MI-Brain.

binary : bool

Save the file as binary.

save_vtp

dipy.io.streamline.save_vtp(sft, filename, bbox_valid_check=True)

Save the stateful tractogram to the .vtp format.

Parameters:
sft : StatefulTractogram

The stateful tractogram to save.

filename : string

Filename with valid extension.

bbox_valid_check : bool

Verification for negative voxel coordinates or values above the volume dimensions. Default is True, to enforce a valid file.

Returns:
output : bool

True if the saving operation was successful.

Nifti1Image

class dipy.io.utils.Nifti1Image(dataobj, affine, header=None, extra=None, file_map=None, dtype=None)

Bases: Nifti1Pair, SerializableImage

Class for single file NIfTI1 format image

Attributes:
affine
dataobj
header
in_memory

True when any array data is in memory cache

ndim
shape
slicer

Slicer object that returns cropped and subsampled images

Methods

ImageArrayProxy

alias of ArrayProxy

ImageSlicer

alias of SpatialFirstSlicer

as_reoriented(ornt)

Apply an orientation change and return a new image

filespec_to_file_map(filespec)

Make file_map for this class from filename filespec

from_bytes(bytestring)

Construct image from a byte string

from_file_map(file_map, *[, mmap, ...])

Class method to create image from mapping in file_map

from_filename(filename, *[, mmap, ...])

Class method to create image from filename filename

from_image(img)

Class method to create new instance of own class from img

from_stream(io_obj)

Load image from readable IO stream

from_url(url[, timeout])

Retrieve and load an image from a URL

get_data([caching])

Return image data from image with any necessary scaling applied

get_data_dtype([finalize])

Get numpy dtype for data

get_fdata([caching, dtype])

Return floating point image data with necessary scaling applied

get_filename()

Fetch the image filename

get_qform([coded])

Return 4x4 affine matrix from qform parameters in header

get_sform([coded])

Return 4x4 affine matrix from sform parameters in header

header_class

alias of Nifti1Header

instance_to_filename(img, filename)

Save img in our own format, to name implied by filename

load(filename, *[, mmap, keep_file_open])

Class method to create image from filename filename

make_file_map([mapping])

Class method to make files holder for this image type

orthoview()

Plot the image using OrthoSlicer3D

path_maybe_image(filename[, sniff, sniff_max])

Return True if filename may be image matching this class

set_data_dtype(datatype)

Set numpy dtype for data from code, dtype, type or alias

set_filename(filename)

Sets the files in the object from a given filename

set_qform(affine[, code, strip_shears])

Set qform header values from 4x4 affine

set_sform(affine[, code])

Set sform transform from 4x4 affine

to_bytes(**kwargs)

Return a bytes object with the contents of the file that would be written if the image were saved.

to_file_map([file_map, dtype])

Write image to file_map or contained self.file_map

to_filename(filename, **kwargs)

Write image to files implied by filename string

to_stream(io_obj, **kwargs)

Save image to writable IO stream

uncache()

Delete any cached read of data from proxied data

update_header()

Harmonize header with image data and affine

__init__(dataobj, affine, header=None, extra=None, file_map=None, dtype=None)

Initialize image

The image is a combination of (array-like, affine matrix, header), with optional metadata in extra, and filename / file-like objects contained in the file_map mapping.

Parameters:
dataobj : object

Object containing image data. It should be some object that returns an array from np.asanyarray. It should have a shape attribute or property.

affine : None or (4, 4) array-like

Homogeneous affine giving the relationship between voxel coordinates and world coordinates. Affine can also be None. In this case, obj.affine also returns None, and the affine as written to disk will depend on the file format.

header : None or mapping or header instance, optional

Metadata for this image format.

extra : None or mapping, optional

Metadata to associate with the image that cannot be stored in the metadata of this image type.

file_map : mapping, optional

Mapping giving file information for this image format.

Notes

If both a header and an affine are specified, and the affine does not match the affine that is in the header, the affine will be used, but the sform_code and qform_code fields in the header will be re-initialised to their default values. This is performed on the basis that, if you are changing the affine, you are likely to be changing the space to which the affine is pointing. The set_sform() and set_qform() methods can be used to update the codes after an image has been created - see those methods, and the manual for more details.

files_types: tuple[tuple[str, str], ...] = (('image', '.nii'),)
header_class

alias of Nifti1Header

update_header()

Harmonize header with image data and affine

valid_exts: tuple[str, ...] = ('.nii',)

create_nifti_header

dipy.io.utils.create_nifti_header(affine, dimensions, voxel_sizes)

Write a standard nifti header from spatial attributes.

create_tractogram_header

dipy.io.utils.create_tractogram_header(tractogram_type, affine, dimensions, voxel_sizes, voxel_order)

Write a standard trk/tck header from spatial attributes.

decfa

dipy.io.utils.decfa(img_orig, scale=False)

Create a nifti-compliant directional-encoded color FA image.

Parameters:
img_orig : Nifti1Image class instance

Contains encoding of the DEC FA image with a 4D volume of data, where the elements on the last dimension represent the R, G and B components.

scale : bool

Whether to scale the incoming data from the 0-1 range to the 0-255 range expected in the output.

Returns:
img : Nifti1Image class instance, with dtype set to store tuples of uint8 in (R, G, B) order.

Notes

For a description of this format, see:

https://nifti.nimh.nih.gov/nifti-1/documentation/nifti1fields/nifti1fields_pages/datatype.html
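The scale option corresponds to a simple range conversion, sketched here on a bare numpy array (decfa itself operates on a Nifti1Image and additionally sets the NIfTI RGB dtype; the exact rounding used by dipy is an assumption):

```python
import numpy as np

# Map float RGB values in [0, 1] to the uint8 [0, 255] range, as the
# scale=True option describes.
fa_rgb = np.array([[[[0.0, 0.5, 1.0]]]])    # 4D: last axis is (R, G, B)
scaled = np.round(255 * fa_rgb).astype(np.uint8)
```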

decfa_to_float

dipy.io.utils.decfa_to_float(img_orig)

Convert a nifti-compliant directional-encoded color FA image into a nifti image with RGB encoded in floating point resolution.

Parameters:
img_orig : Nifti1Image class instance

Contains encoding of the DEC FA image with a 3D volume of data, where each element is a (R, G, B) tuple in uint8.

Returns:
img : Nifti1Image class instance with float dtype.

Notes

For a description of this format, see:

https://nifti.nimh.nih.gov/nifti-1/documentation/nifti1fields/nifti1fields_pages/datatype.html

detect_format

dipy.io.utils.detect_format(fileobj)

Returns the StreamlinesFile object guessed from the file-like object.

Parameters:
fileobj : string or file-like object

If string, a filename; otherwise an open file-like object pointing to a tractogram file (and ready to read from the beginning of the header).

Returns:
tractogram_file : TractogramFile class

The class type guessed from the content of fileobj.

get_reference_info

dipy.io.utils.get_reference_info(reference)

Extract the spatial attributes (affine, dimensions, voxel sizes, voxel order) from a reference.

Parameters:
reference : Nifti or Trk filename, Nifti1Image or TrkFile, Nifti1Header or trk.header (dict)

Reference that provides the spatial attributes.

Returns:
output : tuple
  • affine: ndarray (4, 4), float32, transformation of VOX to RASMM

  • dimensions: ndarray (3,), int16, volume shape for each axis

  • voxel_sizes: ndarray (3,), float32, size of voxel for each axis

  • voxel_order: string, typically ‘RAS’ or ‘LPS’

is_header_compatible

dipy.io.utils.is_header_compatible(reference_1, reference_2)

Compare the spatial attributes of two references.

Parameters:
reference_1 : Nifti or Trk filename, Nifti1Image or TrkFile, Nifti1Header or trk.header (dict)

Reference that provides the spatial attributes.

reference_2 : Nifti or Trk filename, Nifti1Image or TrkFile, Nifti1Header or trk.header (dict)

Reference that provides the spatial attributes.

Returns:
output : bool

True if all the spatial attributes match.

is_reference_info_valid

dipy.io.utils.is_reference_info_valid(affine, dimensions, voxel_sizes, voxel_order)

Validate basic data types and values of spatial attributes.

Does not ensure that voxel_sizes and voxel_order are self-coherent with the affine. Only verifies the following:

  • affine is of the right type (float) and dimension (4, 4)

  • affine contains values in the rotation part

  • dimensions is of the right type (int) and length (3)

  • voxel_sizes is of the right type (float) and length (3)

  • voxel_order is of the right type (str) and length (3)

The listed parameters are what is expected; providing anything else should make this function fail (it covers common mistakes).

Parameters:
affine : ndarray (4, 4)

Transformation of VOX to RASMM.

dimensions : ndarray (3,), int16

Volume shape for each axis.

voxel_sizes : ndarray (3,), float32

Size of voxel for each axis.

voxel_order : string

Typically ‘RAS’ or ‘LPS’.

Returns:
output : bool

Whether the input represents a valid ‘state’ of spatial attributes.
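The listed checks can be sketched as follows. This is a hypothetical re-implementation for illustration (`reference_info_looks_valid` is not the dipy function, and dipy's own checks may differ in details):

```python
import numpy as np

def reference_info_looks_valid(affine, dimensions, voxel_sizes, voxel_order):
    """Mirror the checks listed above on the four spatial attributes."""
    affine = np.asarray(affine)
    # affine: float dtype and (4, 4) shape.
    if affine.shape != (4, 4) or not np.issubdtype(affine.dtype, np.floating):
        return False
    # affine: rotation part must contain non-zero values.
    if not np.any(affine[:3, :3]):
        return False
    # dimensions: three integer-valued entries.
    if len(dimensions) != 3 or not all(int(d) == d for d in dimensions):
        return False
    # voxel_sizes: three entries.
    if len(voxel_sizes) != 3:
        return False
    # voxel_order: a 3-character string such as 'RAS' or 'LPS'.
    return isinstance(voxel_order, str) and len(voxel_order) == 3
```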

make5d

dipy.io.utils.make5d(data)

Reshape the input to have 5 dimensions, adding the extra dimensions just before the last dimension.
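The reshape can be sketched with plain numpy (`to_5d` is a hypothetical stand-in for make5d, which I assume inserts singleton axes before the last dimension):

```python
import numpy as np

def to_5d(data):
    """Insert singleton axes just before the last dimension until 5D."""
    data = np.asarray(data)
    while data.ndim < 5:
        data = np.expand_dims(data, axis=-2)
    return data

vol = np.zeros((4, 4, 4, 6))   # e.g. tensor elements along the last axis
out = to_5d(vol)                # shape becomes (4, 4, 4, 1, 6)
```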

nifti1_symmat

dipy.io.utils.nifti1_symmat(image_data, *args, **kwargs)

Returns a Nifti1Image with a symmetric matrix intent

Parameters:
image_data : array-like

Should have the lower triangular elements of a symmetric matrix along the last dimension. All other arguments and keywords are passed to Nifti1Image.

Returns:
image : Nifti1Image

5D, with extra dimensions added before the last. Has the symmetric matrix intent code.

optional_package

dipy.io.utils.optional_package(name, trip_msg=None)

Return package-like thing and module setup for package name

Parameters:
name : str

Package name.

trip_msg : None or str

Message to give when someone tries to use the returned package, but we could not import it, and have returned a TripWire object instead. Default message if None.

Returns:
pkg_like : module or TripWire instance

If we can import the package, return it. Otherwise return an object raising an error when accessed.

have_pkg : bool

True if import of the package was successful, False otherwise.

module_setup : function

Callable usually set as setup_module in the calling namespace, to allow skipping tests.

Examples

Typical use would be something like this at the top of a module using an optional package:

>>> from dipy.utils.optpkg import optional_package
>>> pkg, have_pkg, setup_module = optional_package('not_a_package')

Of course in this case the package doesn’t exist, and so, in the module:

>>> have_pkg
False

and

>>> pkg.some_function() 
Traceback (most recent call last):
    ...
TripWireError: We need package not_a_package for these functions, but
``import not_a_package`` raised an ImportError

If the module does exist - we get the module

>>> pkg, _, _ = optional_package('os')
>>> hasattr(pkg, 'path')
True

Or a submodule if that’s what we asked for

>>> subpkg, _, _ = optional_package('os.path')
>>> hasattr(subpkg, 'dirname')
True

read_img_arr_or_path

dipy.io.utils.read_img_arr_or_path(data, affine=None)

Helper function that handles inputs that can be paths, nifti images or arrays.

Parameters:
data : array or nib.Nifti1Image or str

Either a 3D/4D array, a nifti image object, or a string containing the full path to a nifti file.

affine : 4x4 array, optional

Must be provided when data is given as an array. If provided together with Nifti1Image or str data, this input overrides the affine that is stored in the data input. Default: use the affine stored in data.

Returns:
data, affine : ndarray and 4x4 array

save_buan_profiles_hdf5

dipy.io.utils.save_buan_profiles_hdf5(fname, dt)

Save the given input DataFrame to a .h5 file.

Parameters:
fname : string

File name for saving the hdf5 file.

dt : Pandas DataFrame

DataFrame to be saved as a .h5 file.

load_polydata

dipy.io.vtk.load_polydata(file_name)

Load a vtk polydata from a supported format file.

Supported file formats are OBJ, VTK, VTP, FIB, PLY, STL and XML.

Parameters:
file_name : string
Returns:
output : vtkPolyData

load_vtk_streamlines

dipy.io.vtk.load_vtk_streamlines(filename, to_lps=True)

Load streamlines from vtk polydata.

Supported formats are VTK and FIB.

Parameters:
filename : string

Input filename (.vtk or .fib).

to_lps : bool

If True (default), follow the VTK file convention for streamlines (LPS), as supported by MITK Diffusion and MI-Brain.

Returns:
output : list

List of 2D arrays.

optional_package

dipy.io.vtk.optional_package(name, trip_msg=None)

Return package-like thing and module setup for package name

Parameters:
name : str

Package name.

trip_msg : None or str

Message to give when someone tries to use the returned package, but we could not import it, and have returned a TripWire object instead. Default message if None.

Returns:
pkg_like : module or TripWire instance

If we can import the package, return it. Otherwise return an object raising an error when accessed.

have_pkg : bool

True if import of the package was successful, False otherwise.

module_setup : function

Callable usually set as setup_module in the calling namespace, to allow skipping tests.

Examples

Typical use would be something like this at the top of a module using an optional package:

>>> from dipy.utils.optpkg import optional_package
>>> pkg, have_pkg, setup_module = optional_package('not_a_package')

Of course in this case the package doesn’t exist, and so, in the module:

>>> have_pkg
False

and

>>> pkg.some_function() 
Traceback (most recent call last):
    ...
TripWireError: We need package not_a_package for these functions, but
``import not_a_package`` raised an ImportError

If the module does exist - we get the module

>>> pkg, _, _ = optional_package('os')
>>> hasattr(pkg, 'path')
True

Or a submodule if that’s what we asked for

>>> subpkg, _, _ = optional_package('os.path')
>>> hasattr(subpkg, 'dirname')
True

save_polydata

dipy.io.vtk.save_polydata(polydata, file_name, binary=False, color_array_name=None)

Save a vtk polydata to a supported format file.

Save formats can be VTK, VTP, FIB, PLY, STL and XML.

Parameters:
polydata : vtkPolyData
file_name : string

save_vtk_streamlines

dipy.io.vtk.save_vtk_streamlines(streamlines, filename, to_lps=True, binary=False)

Save streamlines as vtk polydata to a supported format file.

File formats can be OBJ, VTK, VTP, FIB, PLY, STL and XML.

Parameters:
streamlines : list

List of 2D arrays or ArraySequence.

filename : string

Output filename (.obj, .vtk, .fib, .ply, .stl or .xml).

to_lps : bool

If True (default), follow the VTK file convention for streamlines (LPS), as supported by MITK Diffusion and MI-Brain.

binary : bool

Save the file as binary.

setup_module

dipy.io.vtk.setup_module()

transform_streamlines

dipy.io.vtk.transform_streamlines(streamlines, mat, in_place=False)

Apply an affine transformation to streamlines.

Parameters:
streamlines : Streamlines

Streamlines object.

mat : array, (4, 4)

Transformation matrix.

in_place : bool

If True, change the data in place. Be careful: this modifies the input streamlines.

Returns:
new_streamlines : Streamlines

Sequence of transformed 2D ndarrays with shape[-1] == 3.