{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n\n# Connectivity Matrices, ROI Intersections and Density Maps\n\nThis example is meant to be an introduction to some of the streamline tools\navailable in DIPY_. Some of the functions covered in this example are\n``target``, ``connectivity_matrix`` and ``density_map``. ``target`` allows one\nto filter streamlines that either pass through or do not pass through some\nregion of the brain, ``connectivity_matrix`` groups and counts streamlines\nbased on where in the brain they begin and end, and finally, density map counts\nthe number of streamlines that pass through every voxel of some image.\n\nTo get started we'll need to have a set of streamlines to work with. We'll use\nEuDX along with the CsaOdfModel to make some streamlines. Let's import the\nmodules and download the data we'll be using.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\nfrom scipy.ndimage import binary_dilation\n\nfrom dipy.core.gradients import gradient_table\nfrom dipy.data import get_fnames\nfrom dipy.io.gradients import read_bvals_bvecs\nfrom dipy.io.image import load_nifti_data, load_nifti, save_nifti\nfrom dipy.direction import peaks\nfrom dipy.reconst import shm\nfrom dipy.tracking import utils\nfrom dipy.tracking.local_tracking import LocalTracking\nfrom dipy.tracking.stopping_criterion import BinaryStoppingCriterion\nfrom dipy.tracking.streamline import Streamlines\n\n\nhardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames('stanford_hardi')\nlabel_fname = get_fnames('stanford_labels')\nt1_fname = get_fnames('stanford_t1')\n\ndata, affine, hardi_img = load_nifti(hardi_fname, return_img=True)\nlabels = load_nifti_data(label_fname)\nt1_data = load_nifti_data(t1_fname)\nbvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname)\ngtab = gradient_table(bvals, bvecs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We've loaded an image called ``labels_img`` which is a map of tissue types such\nthat every integer value in the array ``labels`` represents an anatomical\nstructure or tissue type [#]_. For this example, the image was created so that\nwhite matter voxels have values of either 1 or 2. We'll use\n``peaks_from_model`` to apply the ``CsaOdfModel`` to each white matter voxel\nand estimate fiber orientations which we can use for tracking. We will also\ndilate this mask by 1 voxel to ensure streamlines reach the grey matter.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "white_matter = binary_dilation((labels == 1) | (labels == 2))\ncsamodel = shm.CsaOdfModel(gtab, 6)\ncsapeaks = peaks.peaks_from_model(model=csamodel,\n data=data,\n sphere=peaks.default_sphere,\n relative_peak_threshold=.8,\n min_separation_angle=45,\n mask=white_matter)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can use EuDX to track all of the white matter. To keep things reasonably\nfast we use ``density=1`` which will result in 1 seeds per voxel. 
{ "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "affine = np.eye(4)\nseeds = utils.seeds_from_mask(white_matter, affine, density=1)\nstopping_criterion = BinaryStoppingCriterion(white_matter)\n\nstreamline_generator = LocalTracking(csapeaks, stopping_criterion, seeds,\n                                     affine=affine, step_size=0.5)\nstreamlines = Streamlines(streamline_generator)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The first of the tracking utilities we'll cover here is ``target``. This\nfunction takes a set of streamlines and a region of interest (ROI) and returns\nonly those streamlines that pass through the ROI. The ROI should be an array\nsuch that the voxels that belong to the ROI are ``True`` and all other voxels\nare ``False`` (this type of binary array is sometimes called a mask). This\nfunction can also exclude all the streamlines that pass through an ROI by\nsetting the ``include`` flag to ``False``. In this example we'll target the\nstreamlines of the corpus callosum. Our ``labels`` array has a sagittal slice\nof the corpus callosum identified by the label value 2. We'll create an ROI\nmask from that label and create two sets of streamlines, those that intersect\nwith the ROI and those that don't.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "cc_slice = labels == 2\ncc_streamlines = utils.target(streamlines, affine, cc_slice)\ncc_streamlines = Streamlines(cc_streamlines)\n\nother_streamlines = utils.target(streamlines, affine, cc_slice,\n                                 include=False)\nother_streamlines = Streamlines(other_streamlines)\nassert len(other_streamlines) + len(cc_streamlines) == len(streamlines)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use some of DIPY_'s visualization tools to display the ROI we targeted\nabove and all the streamlines that pass through that ROI. The ROI is the yellow\nregion near the center of the axial image.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from dipy.viz import window, actor, colormap as cmap\n\n# Enables/disables interactive visualization\ninteractive = False\n\n# Make display objects\ncolor = cmap.line_colors(cc_streamlines)\ncc_streamlines_actor = actor.line(cc_streamlines, color)\ncc_ROI_actor = actor.contour_from_roi(cc_slice, color=(1., 1., 0.),\n                                      opacity=0.5)\n\nvol_actor = actor.slicer(t1_data)\n\nvol_actor.display(x=40)\nvol_actor2 = vol_actor.copy()\nvol_actor2.display(z=35)\n\n# Add display objects to canvas\nscene = window.Scene()\nscene.add(vol_actor)\nscene.add(vol_actor2)\nscene.add(cc_streamlines_actor)\nscene.add(cc_ROI_actor)\n\n# Save figures\nwindow.record(scene, n_frames=1, out_path='corpuscallosum_axial.png',\n              size=(800, 800))\nif interactive:\n    window.show(scene)\nscene.set_camera(position=[-1, 0, 0], focal_point=[0, 0, 0], view_up=[0, 0, 1])\nwindow.record(scene, n_frames=1, out_path='corpuscallosum_sagittal.png',\n              size=(800, 800))\nif interactive:\n    window.show(scene)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. figure:: corpuscallosum_axial.png\n   :align: center\n\n   **Corpus Callosum Axial**\n\n.. figure:: corpuscallosum_sagittal.png\n   :align: center\n\n   **Corpus Callosum Sagittal**\n\n.. include:: ../links_names.inc\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once we've targeted the corpus callosum ROI, we might want to find out which\nregions of the brain are connected by these streamlines. To do this we can use\nthe ``connectivity_matrix`` function. This function takes a set of streamlines\nand an array of labels as arguments. It returns the number of streamlines that\nstart and end at each pair of labels and it can return the streamlines grouped\nby their endpoints. Notice that this function only considers the endpoints of\neach streamline.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "M, grouping = utils.connectivity_matrix(cc_streamlines, affine,\n                                        labels.astype(np.uint8),\n                                        return_mapping=True,\n                                        mapping_as_streamlines=True)\nM[:3, :] = 0\nM[:, :3] = 0" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We've set ``return_mapping`` and ``mapping_as_streamlines`` to ``True`` so that\n``connectivity_matrix`` returns all the streamlines in ``cc_streamlines``\ngrouped by their endpoints.\n\nBecause we're typically only interested in connections between gray matter\nregions, and because the label 0 represents background and the labels 1 and 2\nrepresent white matter, we discard the first three rows and columns of the\nconnectivity matrix.\n\nWe can now display this matrix using matplotlib. We display it using a log\nscale to make small values in the matrix easier to see.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n\nplt.imshow(np.log1p(M), interpolation='nearest')\nplt.savefig(\"connectivity.png\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. figure:: connectivity.png\n   :align: center\n\n   **Connectivity of Corpus Callosum**\n\n.. include:: ../links_names.inc\n\n\n" ] },
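{ "cell_type": "markdown", "metadata": {}, "source": [ "Since rows and columns for background and white matter are now zeroed, we can\nlook up the strongest connection directly. The next cell is a small sketch of\nthat lookup; with the default ``symmetric=True``, endpoint labels are sorted,\nso the row-major position of the largest entry gives the label pair in\nincreasing order, matching the keys of ``grouping``.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Small sketch: locate the pair of labels joined by the most streamlines.\n# Endpoint labels are sorted under symmetric=True, so the row-major argmax\n# returns the pair with i <= j, as used to index ``grouping`` below.\ni, j = np.unravel_index(M.argmax(), M.shape)\nprint(i, j, M[i, j])" ] },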
{ "cell_type": "markdown", "metadata": {}, "source": [ "In our example tractogram there are more streamlines connecting regions 11 and\n54 than any other pair of regions. These labels represent the left and right\nsuperior frontal gyrus respectively. These two regions are large, close\ntogether, have lots of corpus callosum fibers and are easy to track, so this\nresult should not be a surprise to anyone.\n\nHowever, the interpretation of streamline counts can be tricky. The\nrelationship between the underlying biology and the streamline counts will\ndepend on several factors, including how the tracking was done, and the correct\nway to interpret these kinds of connectivity matrices is still an open question\nin the diffusion imaging literature.\n\nThe next function we'll demonstrate is ``density_map``. This function allows\none to represent the spatial distribution of a track by counting the density of\nstreamlines in each voxel. For example, let's take the track connecting the\nleft and right superior frontal gyrus.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "lr_superiorfrontal_track = grouping[11, 54]\nshape = labels.shape\ndm = utils.density_map(lr_superiorfrontal_track, affine, shape)" ] },
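{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick check before saving, we can collapse the density map into a\nmaximum-intensity projection with matplotlib. The next cell is a minimal\nsketch of that idea; the projection axis (the left-right axis of this dataset)\nand the output filename are our own choices.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Optional sanity check: maximum-intensity projection of the density map\n# along the first (left-right) axis, log-scaled like the matrix above.\nplt.figure()\nplt.imshow(np.log1p(dm.max(axis=0)), interpolation='nearest')\nplt.savefig(\"lr-superiorfrontal-dm-mip.png\")" ] },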
{ "cell_type": "markdown", "metadata": {}, "source": [ "Let's save this density map and the streamlines so that they can be\nvisualized together. To save the streamlines in a \".trk\" file, we build a\n``StatefulTractogram``, which keeps track of the reference anatomy and of the\nspace the streamlines are defined in (here, voxel coordinates) so that they\ncan be correctly aligned with other data when loaded.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from dipy.io.stateful_tractogram import Space, StatefulTractogram\nfrom dipy.io.streamline import save_trk\n\n# Save density map\nsave_nifti(\"lr-superiorfrontal-dm.nii.gz\", dm.astype(\"int16\"), affine)\n\nlr_sf_trk = Streamlines(lr_superiorfrontal_track)\n\n# Save streamlines\nsft = StatefulTractogram(lr_sf_trk, hardi_img, Space.VOX)\nsave_trk(sft, \"lr-superiorfrontal.trk\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. rubric:: Footnotes\n\n.. [#] The image `aparc-reduced.nii.gz`, which we load as ``labels``, is a\n       modified version of the label map `aparc+aseg.mgz` created by\n       [FreeSurfer](https://surfer.nmr.mgh.harvard.edu/). The corpus callosum\n       region is a combination of the FreeSurfer labels 251-255. The remaining\n       FreeSurfer labels were re-mapped and reduced so that they lie between 0\n       and 88. To see the FreeSurfer region, label and name, represented by\n       each value, see `label_info.txt` in `~/.dipy/stanford_hardi`.\n.. [#] An affine transformation is a mapping between two coordinate systems\n       that can represent scaling, rotation, shear, translation and\n       reflection. Affine transformations are often represented using a 4x4\n       matrix where the last row of the matrix is ``[0, 0, 0, 1]``.\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" } }, "nbformat": 4, "nbformat_minor": 0 }