{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Reconstruction with Robust and Unbiased Model-BAsed Spherical Deconvolution\n\nThis example shows how to use RUMBA-SD to reconstruct fiber orientation\ndensity functions (fODFs). This model was introduced by Canales-Rodriguez\net al [CanalesRodriguez2015]_.\n\nRUMBA-SD uses a priori information about the fiber response function (axial\nand perpendicular diffusivities) to generate a convolution kernel mapping the\nfODFs on a sphere to the recorded data. The fODFs are then estimated using an\niterative, maximum likelihood estimation algorithm adapted from Richardson-Lucy\n(RL) deconvolution [Richardson1972]_. Specifically, the RL algorithm assumes\nGaussian noise, while RUMBA assumes Rician/Noncentral Chi noise -- these more\naccurately reflect the noise generated by MRI scanners [Constantinides1997]_.\nThis algorithm also contains an optional compartment for estimating an\nisotropic volume fraction to account for partial volume effects. RUMBA-SD works\nwith single- and multi-shell data, as well as data recorded in Cartesian or\nspherical coordinate systems.\n\nThe result from RUMBA-SD can be smoothed by applying total variation spatial\nregularization (termed RUMBA-SD + TV), a technique which promotes a more\ncoherent estimate of the fODFs across neighboring voxels [Rudin1992]_.\nThis regularization ability is also included in this implementation.\n\nThis example will showcase how to:\n 1. Estimate the fiber response function\n 2. Reconstruct the fODFs voxel-wise or globally with TV regularization\n 3. Visualize fODF maps\n\nTo begin, we will load the data, consisting of 10 b0s and 150 non-b0s with a\nb-value of 2000.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\n\nfrom dipy.core.gradients import gradient_table\nfrom dipy.data import get_fnames, get_sphere\nfrom dipy.io.gradients import read_bvals_bvecs\nfrom dipy.io.image import load_nifti\n\nhardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames('stanford_hardi')\ndata, affine = load_nifti(hardi_fname)\n\nbvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname)\ngtab = gradient_table(bvals, bvecs)\n\nsphere = get_sphere('symmetric362')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 1. Estimation of the fiber response function\n\nThere are multiple ways to estimate the fiber response function.\n\n**Strategy 1: use default values**\nOne simple approach is to use the values included as the default arguments in\nthe RumbaSDModel constructor. The white matter response, `wm_response` has three\nvalues corresponding to the tensor eigenvalues (1.7e-3, 0.2e-3, 0.2e-3). The\nmodel has compartments for cerebrospinal fluid (CSF) (`csf_response`) and\ngrey matter (GM) (`gm_response`) as well, with these mean diffusivities set\nto 3.0e-3 and 0.8e-3 respectively [CanalesRodriguez2015]_. 
These default\nvalues will often be adequate as RUMBA-SD is robust against impulse response\nimprecision [DellAcqua2007]_.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from dipy.reconst.rumba import RumbaSDModel\n\nrumba = RumbaSDModel(gtab)\nprint(f\"wm_response: {rumba.wm_response}, \" +\n f\"csf_response: {rumba.csf_response}, \" +\n f\"gm_response: {rumba.gm_response}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "wm_response: [0.0017 0.0002 0.0002]\ncsf_response: 0.003\ngm_response: 0.0008\n\nWe can visualize what this default response looks like.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from dipy.sims.voxel import single_tensor_odf\nfrom dipy.viz import window, actor\n\n# Enables/disables interactive visualization\ninteractive = False\n\nscene = window.Scene()\n\nevals = rumba.wm_response\nevecs = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]]).T\n\nresponse_odf = single_tensor_odf(sphere.vertices, evals, evecs)\n# Transform our data from 1D to 4D\nresponse_odf = response_odf[None, None, None, :]\nresponse_actor = actor.odf_slicer(response_odf, sphere=sphere,\n colormap='plasma')\n\nscene.add(response_actor)\nprint('Saving illustration as default_response.png')\nwindow.record(scene, out_path='default_response.png', size=(200, 200))\n\nif interactive:\n window.show(scene)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. figure:: default_response.png\n :align: center\n\n Default response function.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "scene.rm(response_actor)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Strategy 2: estimate from local brain region**\nThe `csdeconv` module contains functions for estimating this response.\n`auto_response_ssst` extracts an ROI in the center of the brain and isolates\nsingle fiber populations from the corpus callosum using an FA mask with a\nthreshold of 0.7. These voxels are used to estimate the response function.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from dipy.reconst.csdeconv import auto_response_ssst\n\nresponse, _ = auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7)\nprint(response)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "(array([0.00139919, 0.0003007 , 0.0003007]), 416.7372408293461)\n\nThis response contains the estimated eigenvalues in its first element, and the\nestimated S0 in the second. The eigenvalues are all we care about for using\nRUMBA-SD.\n\nWe can visualize this estimated response as well.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "evals = response[0]\nevecs = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]]).T\n\nresponse_odf = single_tensor_odf(sphere.vertices, evals, evecs)\n# Transform our data from 1D to 4D\nresponse_odf = response_odf[None, None, None, :]\nresponse_actor = actor.odf_slicer(response_odf, sphere=sphere,\n colormap='plasma')\nscene.add(response_actor)\nprint('Saving illustration as estimated_response.png')\nwindow.record(scene, out_path='estimated_response.png', size=(200, 200))\nif interactive:\n window.show(scene)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. 
figure:: estimated_response.png\n :align: center\n\n Estimated response function.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "scene.rm(response_actor)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Strategy 3: recursive, data-driven estimation**\nThe last method for estimating a response function uses a recursive approach.\nHere, we initialize a \"fat\" response function, which is used in CSD. From this\ndeconvolution, the voxels with one peak are extracted and their data are\naveraged to get a new response function. This is repeated iteratively until\nconvergence [Tax2014]_.\n\nTo shorten computation time, a mask can be estimated for the data.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from dipy.reconst.csdeconv import recursive_response\nfrom dipy.segment.mask import median_otsu\n\n# Create a rough brain mask from the 10 b0 volumes\nb0_mask, mask = median_otsu(data, median_radius=2,\n numpass=1, vol_idx=np.arange(10))\n\nrec_response = recursive_response(gtab, data, mask=mask, sh_order=8,\n peak_thr=0.01, init_fa=0.08,\n init_trace=0.0021, iter=4, convergence=0.001,\n parallel=True, num_processes=2)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now visualize this response, which will look like a pancake.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "rec_response_signal = rec_response.on_sphere(sphere)\n# Transform our data from 1D to 4D\nrec_response_signal = rec_response_signal[None, None, None, :]\nresponse_actor = actor.odf_slicer(rec_response_signal, sphere=sphere,\n colormap='plasma')\n\nscene.add(response_actor)\nprint('Saving illustration as recursive_response.png')\nwindow.record(scene, out_path='recursive_response.png', size=(200, 200))\nif interactive:\n window.show(scene)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. figure:: recursive_response.png\n :align: center\n\n Recursive response function.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "scene.rm(response_actor)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 2. fODF Reconstruction\n\nWe will now use the estimated response function with the RUMBA-SD model to\nreconstruct the fODFs. We will use the default values for `csf_response` and\n`gm_response`. If one doesn't wish to fit these compartments, one can specify\neither argument as `None`. This will result in the corresponding volume\nfraction map being all zeros. The GM compartment can only be accurately\nestimated with at least 3-shell data. With fewer shells, it is recommended\nto only keep the compartment for CSF. Since this data is single-shell, we will\nonly compute the CSF compartment.\n\nRUMBA-SD can fit the data voxelwise or globally. By default, a voxelwise\napproach is used (`voxelwise` is set to `True`). However, by setting\n`voxelwise` to `False`, the whole brain can be fit at once. In this global\nsetting, one can specify the use of TV regularization with `use_tv`, and the\nmodel can log updates on its progress and estimated signal-to-noise ratios by\nsetting `verbose` to `True`. By default, both `use_tv` and `verbose` are set to\n`False` as they have no bearing on the voxelwise fit.\n\nWhen constructing the RUMBA-SD model, one can also specify `n_iter`,\n`recon_type`, `n_coils`, `R`, and `sphere`. 
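For example, a fully specified construction might look like the following\nsketch (the keyword values shown are the documented defaults, except for\n`sphere`; `rumba_example` is an illustrative name):\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# All keyword arguments at their default values, except sphere\nrumba_example = RumbaSDModel(gtab, n_iter=600, recon_type='smf',\n n_coils=1, R=1, voxelwise=True,\n sphere=sphere)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In more detail: 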
`n_iter` is the number of\niterations for the iterative estimation, and the default value of 600\nshould be suitable for most applications. `recon_type` is the technique used\nby the MRI scanner to reconstruct the MRI signal, and should be either 'smf'\nfor 'spatial matched filter', or 'sos' for 'sum-of-squares'; 'smf' is a common\nchoice and is the default, but the specifications of the MRI scanner used to\ncollect the data should be checked. If 'sos' is used, then it's important to\nspecify `n_coils`, which is the number of coils in the MRI scanner. With 'smf',\nthis isn't important and the default argument of 1 can be used. `R` is the\nacceleration factor of the MRI scanner, which is termed the iPAT factor for\nSiemens, the ASSET factor for GE, or the SENSE factor for Philips. 1 is a\ncommon choice, and is the default for the model. This is only important when\nusing TV regularization, which will be covered later in the tutorial. Finally,\n`sphere` specifies the sphere on which to construct the fODF. The default is\nthe `repulsion724` sphere, but this tutorial will use `symmetric362`.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "rumba = RumbaSDModel(\n gtab, wm_response=response[0], gm_response=None, sphere=sphere)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For efficiency, we will only fit a small part of the data. This is the same\nportion of data used in `example_reconst_csd`.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "data_small = data[20:50, 55:85, 38:39]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Option 1: voxel-wise fit**\nThis is the default approach for generating ODFs, wherein each voxel is fit\nsequentially.\n\nWe will estimate the fODFs using the `symmetric362` sphere. This\nwill take about a minute to compute.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "rumba_fit = rumba.fit(data_small)\nodf = rumba_fit.odf()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The inclusion of RUMBA-SD's CSF compartment means we can extract both the\nisotropic volume fraction map and the white matter volume fraction map\n(the fODF sum at each voxel). These values are normalized such that they\nsum to 1. If neither isotropic compartment is included, then the isotropic\nvolume fraction map will be all zeros.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "f_iso = rumba_fit.f_iso\nf_wm = rumba_fit.f_wm" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can visualize these maps using adjacent heatmaps.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n\nfig, axs = plt.subplots(1, 2, figsize=(12, 5))\n\nax0 = axs[0].imshow(f_wm[..., 0].T, origin='lower')\naxs[0].set_title(\"Voxelwise White Matter Volume Fraction\")\n\nax1 = axs[1].imshow(f_iso[..., 0].T, origin='lower')\naxs[1].set_title(\"Voxelwise Isotropic Volume Fraction\")\n\nplt.colorbar(ax0, ax=axs[0])\nplt.colorbar(ax1, ax=axs[1])\n\nplt.savefig('wm_iso_partition.png')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. 
figure:: wm_iso_partition.png\n :align: center\n\n White matter and isotropic volume fractions.\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To visualize the fODFs, it's recommended to combine the fODF and the isotropic\ncomponents. This is done using the `RumbaFit` object's `combined_odf_iso`\nproperty. To reach a proper scale for visualization, the argument `norm=True`\nis used in FURY's `odf_slicer` function.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "combined = rumba_fit.combined_odf_iso\n\nfodf_spheres = actor.odf_slicer(\n combined, sphere=sphere, norm=True, scale=0.5, colormap=None)\nscene.add(fodf_spheres)\nprint('Saving illustration as rumba_odfs.png')\nwindow.record(scene, out_path='rumba_odfs.png', size=(600, 600))\nif interactive:\n window.show(scene)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. figure:: rumba_odfs.png\n :align: center\n\n RUMBA-SD fODFs.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "scene.rm(fodf_spheres)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can extract the peaks from these fODFs using `peaks_from_model`. This will\nreconstruct the fODFs again, so it will take about a minute to run.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from dipy.direction import peaks_from_model\n\nrumba_peaks = peaks_from_model(model=rumba,\n data=data_small,\n sphere=sphere,\n relative_peak_threshold=.5,\n min_separation_angle=25,\n normalize_peaks=False,\n parallel=True,\n num_processes=4)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For visualization, we scale up the peak values.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "peak_values = np.clip(rumba_peaks.peak_values * 15, 0, 1)\npeak_dirs = rumba_peaks.peak_dirs\n\nfodf_peaks = actor.peak_slicer(peak_dirs, peak_values)\nscene.add(fodf_peaks)\n\nprint('Saving illustration as rumba_peaks.png')\nwindow.record(scene, out_path='rumba_peaks.png', size=(600, 600))\nif interactive:\n window.show(scene)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. figure:: rumba_peaks.png\n :align: center\n\n RUMBA-SD peaks.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "scene.rm(fodf_peaks)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Option 2: global fit**\nInstead of the voxel-wise fit, RUMBA also comes with an implementation of\nglobal fitting where all voxels are fit simultaneously. This comes with some\npotential benefits, such as:\n\n1. More efficient fitting due to matrix parallelization, in exchange for\n larger demands on RAM (>= 16 GB should be sufficient)\n2. 
The option of spatial regularization; specifically, TV regularization is\n built into the fitting function (RUMBA-SD + TV)\n\nThis is done by setting `voxelwise` to `False`, and setting `use_tv` to `True`.\n\nTV regularization requires a volume without any singleton dimensions, so we'll\nhave to start by expanding our data slice.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "rumba = RumbaSDModel(gtab, wm_response=response[0], gm_response=None,\n voxelwise=False, use_tv=True, sphere=sphere)\ndata_tv = data[20:50, 55:85, 38:40]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we fit the model in the same way. This will take about 90 seconds.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "rumba_fit = rumba.fit(data_tv)\n\nodf = rumba_fit.odf()\ncombined = rumba_fit.combined_odf_iso" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can visualize the combined fODF map as before.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fodf_spheres = actor.odf_slicer(combined, sphere=sphere, norm=True,\n scale=0.5, colormap=None)\n\nscene.add(fodf_spheres)\nprint('Saving illustration as rumba_global_odfs.png')\nwindow.record(scene, out_path='rumba_global_odfs.png', size=(600, 600))\nif interactive:\n window.show(scene)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. figure:: rumba_global_odfs.png\n :align: center\n\n RUMBA-SD + TV fODFs.\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This can be compared with the result without TV regularization, and one can\nobserve that the coherence between neighboring voxels is improved.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "scene.rm(fodf_spheres)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For peak detection, `peaks_from_model` cannot be used as it doesn't support\nglobal fitting approaches. Instead, we'll compute our peaks using a for loop.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from dipy.direction import peak_directions\n\nshape = odf.shape[:3]\nnpeaks = 5 # maximum number of peaks returned for a given voxel\npeak_dirs = np.zeros(shape + (npeaks, 3))\npeak_values = np.zeros(shape + (npeaks,))\n\nfor idx in np.ndindex(shape): # iterate through each voxel\n # Get peaks of odf\n direction, pk, _ = peak_directions(odf[idx], sphere,\n relative_peak_threshold=0.5,\n min_separation_angle=25)\n\n # Calculate peak metrics\n if pk.shape[0] != 0:\n n = min(npeaks, pk.shape[0])\n peak_dirs[idx][:n] = direction[:n]\n peak_values[idx][:n] = pk[:n]\n\n# Scale up for visualization\npeak_values = np.clip(peak_values * 15, 0, 1)\n\nfodf_peaks = actor.peak_slicer(peak_dirs[:, :, 0:1, :],\n peak_values[:, :, 0:1, :])\nscene.add(fodf_peaks)\n\nprint('Saving illustration as rumba_global_peaks.png')\nwindow.record(scene, out_path='rumba_global_peaks.png', size=(600, 600))\nif interactive:\n window.show(scene)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. 
figure:: rumba_global_peaks.png\n :align: center\n\n RUMBA-SD + TV peaks.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "scene.rm(fodf_peaks)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### References\n\n.. [CanalesRodriguez2015] Canales-Rodr\u00edguez, E. J., Daducci, A., Sotiropoulos,\n S. N., Caruyer, E., Aja-Fern\u00e1ndez, S., Radua, J., Mendizabal, J. M. Y.,\n Iturria-Medina, Y., Melie-Garc\u00eda, L., Alem\u00e1n-G\u00f3mez, Y., Thiran, J.-P.,\n Sarr\u00f3, S., Pomarol-Clotet, E., & Salvador, R. (2015). Spherical\n Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise\n Models and Spatial Regularization. PLOS ONE, 10(10), e0138910.\n https://doi.org/10.1371/journal.pone.0138910\n\n\n.. [Richardson1972] Richardson, W. H. (1972). Bayesian-Based Iterative Method\n of Image Restoration. JOSA, 62(1), 55\u201359.\n https://doi.org/10.1364/JOSA.62.000055\n\n\n.. [Constantinides1997] Constantinides, C. D., Atalar, E., & McVeigh, E. R.\n (1997). Signal-to-Noise Measurements in Magnitude Images from NMR Phased\n Arrays. Magnetic Resonance in Medicine, 38(5), 852\u2013857.\n\n\n.. [Rudin1992] Rudin, L. I., Osher, S., & Fatemi, E. (1992). Nonlinear total\n variation based noise removal algorithms. Physica D: Nonlinear Phenomena,\n 60(1), 259\u2013268. https://doi.org/10.1016/0167-2789(92)90242-F\n\n\n.. [DellAcqua2007] Dell\u2019Acqua, F., Rizzo, G., Scifo, P., Clarke, R., Scotti,\n G., & Fazio, F. (2007). A Model-Based Deconvolution Approach to Solve Fiber\n Crossing in Diffusion-Weighted MR Imaging. IEEE Transactions on Bio-Medical\n Engineering, 54, 462\u2013472. https://doi.org/10.1109/TBME.2006.888830\n\n\n.. [Tax2014] Tax, C. M. W., Jeurissen, B., Vos, S. B., Viergever, M. A., &\n Leemans, A. (2014). Recursive calibration of the fiber response\n function for spherical deconvolution of diffusion MRI data.\n NeuroImage, 86, 67\u201380.\n https://doi.org/10.1016/j.neuroimage.2013.07.067\n\n\n.. include:: ../links_names.inc\n\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" } }, "nbformat": 4, "nbformat_minor": 0 }