{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Applying positivity constraints to Q-space Trajectory Imaging (QTI+)\n\nQ-space trajectory imaging (QTI) [1]_ with applied positivity constraints\n(QTI+) is an estimation framework proposed by Herberthson et al. [2]_ which\nenforces necessary constraints during the estimation of the QTI model\nparameters.\n\nThis tutorial briefly summarizes the theory and provides a comparison between\nperforming the constrained and unconstrained QTI reconstruction in DIPY.\n\n## Theory\n\nIn QTI, the tissue microstructure is represented by a diffusion tensor\ndistribution (DTD). Here, DTD is denoted by $\\mathbf{D}$ and the voxel-level\ndiffusion tensor from DTI by $\\langle\\mathbf{D}\rangle$, where\n$\\langle \\ \rangle$ denotes averaging over the DTD. The covariance of\n$\\mathbf{D}$ is given by a fourth-order covariance tensor $\\mathbb{C}$ defined\nas\n\n\\begin{align}\\mathbb{C} = \\langle \\mathbf{D} \\otimes \\mathbf{D} \rangle - \\langle\n \\mathbf{D} \rangle \\otimes \\langle \\mathbf{D} \rangle ,\\end{align}\n\nwhere $\\otimes$ denotes a tensor outer product. $\\mathbb{C}$ has 21 unique\nelements and enables the calculation of several microstructural parameters.\n\nUsing the cumulant expansion, the diffusion-weighted signal can be approximated\nas\n\n\\begin{align}S \u0007pprox S_0 \\exp \\left(- \\mathbf{b} : \\langle \\mathbf{D} \rangle +\n \frac{1}{2}(\\mathbf{b} \\otimes \\mathbf{b}) : \\mathbb{C} \right) ,\\end{align}\n\nwhere $S_0$ is the signal without diffusion-weighting, $\\mathbf{b}$ is the\nb-tensor used in the acquisition, and $:$ denotes a tensor inner product.\n\nThe model parameters $S_0$, $\\langle \\mathbf{D}\rangle$, and $\\mathbb{C}$\ncan be estimated by solving the following weighted problem, where the\nheteroskedasticity introduced by the taking the logarithm of the signal is\naccounted for:\n\n\\begin{align}{\\mathrm{argmin}}_{S_0,\\langle \\mathbf{D} \rangle, \\mathbb{C}}\n \\sum_{k=1}^n S_k^2 \\left| \\ln(S_k)-\\ln(S_0)+\\mathbf{b}^{(k)} \\langle\n \\mathbf{D} \rangle -\frac{1}{2} (\\mathbf{b} \\otimes \\mathbf{b})^{(k)}\n \\mathbb{C} \right|^2 ,\\end{align}\n\nthe above can be written as a weighted least squares problem\n\n\\begin{align}\\mathbf{Ax} = \\mathbf{y},\\end{align}\n\nwhere\n\n\\begin{align}y = \begin{pmatrix} \\ S_1 \\ ln S_1 \\ \u000bdots \\\n \\ S_n \\ ln S_n \\end{pmatrix} ,\\end{align}\n\n\\begin{align}x = \begin{pmatrix} \\ln S_0 & \\langle \\mathbf{D} \rangle & \\mathbb{C}\n \\end{pmatrix}^ ext{T} ,\\end{align}\n\n\\begin{align}A =\n \begin{pmatrix}\n S_1 & 0 & \\ldots & 0 \\ 0 & \\ddots & \\ddots & \u000bdots \\ \u000bdots & \\ddots\n & \\ddots & 0 \\ 0 & \\ldots & 0 & S_n\n \\end{pmatrix}\n \begin{pmatrix}\n 1 & -\\mathbf{b}_1^ ext{T} & \frac{1}{2} (\\mathbf{b}_1 \\otimes \\mathbf{b}_1)\n ext{T} \\\n \u000bdots & \u000bdots & \u000bdots \\\n \u000bdots & \u000bdots & \u000bdots \\\n 1 & -\\mathbf{b}_n^ ext{T} & \frac{1}{2} (\\mathbf{b}_n \\otimes \\mathbf{b}_n)\n ^ ext{T}\n \\end{pmatrix} ,\\end{align}\n\nwhere $n$ is the number of acquisitions and $\\langle\\mathbf{D}\rangle$,\n$\\mathbb{C}$, $\\mathbf{b}_i$, and $(\\mathbf{b}_i \\otimes \\mathbf{b}_i)$ are\nrepresented by column vectors using Voigt notation.\n\nThe estimated $\\langle\\mathbf{D}\rangle$ and $\\mathbb{C}$ tensors\nshould observe mathematical and physical conditions dictated by 
The estimated $\\langle\\mathbf{D}\\rangle$ and $\\mathbb{C}$ tensors\nshould observe mathematical and physical conditions dictated by the model\nitself: since $\\langle\\mathbf{D}\\rangle$ represents a diffusivity, it should be\npositive semi-definite: $\\langle\\mathbf{D}\\rangle \\succeq 0$. Similarly, since\n$\\mathbb{C}$ represents a covariance, its $6 \\times 6$ representation,\n$\\mathbf{C}$, should be positive semi-definite: $\\mathbf{C} \\succeq 0$.\n\nWhen not imposed, these conditions can be violated in the presence of noise\nand/or in sparsely sampled data, which can make the metrics derived from the\nmodel parameters unreliable. Both conditions can be enforced while estimating\nthe QTI model's parameters using semidefinite programming (SDP), as shown by\nHerberthson et al. [2]_. This corresponds to solving the problem\n\n\\begin{align}\\mathbf{Ax} = \\mathbf{y} \\\\ \\text{subject to:} \\\\ \\langle\\mathbf{D}\\rangle \\succeq 0 , \\\\ \\mathbf{C} \\succeq 0\\end{align}\n\n## Installation\n\nThe constrained problem stated above can be solved using the cvxpy library.\nInstructions on how to install cvxpy\ncan be found at https://www.cvxpy.org/install/. A free SDP solver\ncalled 'SCS' is installed with cvxpy and can be readily used for\nperforming the fit. However, it is recommended to install an\nalternative solver, MOSEK, for improved speed and performance.\nMOSEK requires a licence, which is free for academic use.\nInstructions on how to install MOSEK and set up a licence can be found\nat https://docs.mosek.com/latest/install/installation.html\n\n
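To give a sense of how such semidefinite constraints are expressed with cvxpy,\nhere is a small, self-contained toy problem (a sketch only, unrelated to DIPY's\ninternal implementation): a symmetric matrix is fitted to a target subject to a\npositive semi-definiteness constraint.\n\n```python\nimport cvxpy as cp\nimport numpy as np\n\n# Toy SDP: find the positive semi-definite matrix closest to a non-PSD target.\ntarget = np.array([[1.0, 0.9], [0.9, -0.1]])\nX = cp.Variable((2, 2), symmetric=True)\nproblem = cp.Problem(cp.Minimize(cp.sum_squares(X - target)), [X >> 0])\nproblem.solve(solver=cp.SCS)\nprint(X.value)\n```\n\n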
## Usage example\n\nHere we show how metrics derived from the\nQTI model parameters compare when the fit is performed with and without\napplying the positivity constraints.\n\nIn DIPY, the constrained estimation routine is available as part of the\n`dipy.reconst.qti` module.\nFirst we import all the necessary modules to perform the QTI fit:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from dipy.data import read_DiB_217_lte_pte_ste, read_DiB_70_lte_pte_ste\nimport dipy.reconst.qti as qti" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To showcase why enforcing positivity constraints in QTI is relevant, we use\na human brain dataset comprising 70 volumes acquired with tensor-encoding.\nThis dataset was obtained by subsampling a richer dataset containing 217\ndiffusion measurements, which we will use as the ground truth when comparing\nthe parameter estimates obtained with and without the constraints. This emulates\na shorter data acquisition, as may be performed when scanning patients in\nclinical settings.\n\nThe full dataset used in this tutorial was originally published at\nhttps://github.com/filip-szczepankiewicz/Szczepankiewicz_DIB_2019,\nand is described in [3]_.\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "First, let's load the complete dataset and create the gradient table.\nWe mark these two with the '_217' suffix.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "data_img, mask_img, gtab_217 = read_DiB_217_lte_pte_ste()\ndata_217 = data_img.get_fdata()\nmask = mask_img.get_fdata()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Second, let's load the downsampled dataset and create the gradient table.\nWe mark these two with the '_70' suffix.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "data_img, _, gtab_70 = read_DiB_70_lte_pte_ste()\ndata_70 = data_img.get_fdata()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can fit the QTI model to the dataset containing 217 measurements and\nuse it as a reference to compare the constrained and unconstrained fits on the\nsmaller dataset. For time considerations, we restrict the fit to a\nsingle slice.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "mask[:, :, :13] = 0\nmask[:, :, 14:] = 0\n\nqtimodel_217 = qti.QtiModel(gtab_217)\nqtifit_217 = qtimodel_217.fit(data_217, mask)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can fit the QTI model using the default unconstrained fitting method\nto the subsampled dataset:\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "qtimodel_unconstrained = qti.QtiModel(gtab_70)\nqtifit_unconstrained = qtimodel_unconstrained.fit(data_70, mask)" ] },
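{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick, optional check of why the constraints matter, one could already\ncount how many voxels of the unconstrained fit get a \u00b5FA outside its\ntheoretical range [0, 1]. This is only a sketch; it assumes that the boolean\nversion of the mask selects the fitted slice:\n\n```python\nimport numpy as np\n\n# Voxels in the fitted slice whose unconstrained \u00b5FA falls outside [0, 1].\nufa = qtifit_unconstrained.ufa[mask.astype(bool)]\nprint(np.sum((ufa < 0) | (ufa > 1)), 'of', ufa.size, 'voxels outside [0, 1]')\n```\n\n" ] },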

{ "cell_type": "markdown", "metadata": {}, "source": [ "Now we repeat the fit but with the constraints applied.\nTo perform the constrained fit, we select the 'SDPdc' fit method when creating\nthe QtiModel object.\n\nNote: this fit method is slower than the default unconstrained one.
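\n\nBefore choosing a solver, it can be useful to check which solvers cvxpy has\ndetected on the current machine (an optional sketch; it only assumes that cvxpy\nis importable). 'MOSEK' appears in the list only when it is installed and\nlicensed:\n\n```python\nimport cvxpy\n\n# 'SCS' ships with cvxpy; 'MOSEK' is listed only if installed and licensed.\nprint(cvxpy.installed_solvers())\n```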

\n\nIf MOSEK is installed, it can be specified as the solver to be used\nas follows:\n\n```python\nqtimodel = qti.QtiModel(gtab, fit_method='SDPdc', cvxpy_solver='MOSEK')\nqtifit = qtimodel.fit(data, mask)\n```\n\nIf MOSEK is not installed, the constrained fit can still be performed, and\nSCS will be used as the solver. SCS is typically much slower than MOSEK, but\nprovides similar results in terms of accuracy. To give an example, the fit\nperformed in the next cell takes approximately 15 minutes with SCS and about\n2 minutes with MOSEK.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "qtimodel_constrained = qti.QtiModel(gtab_70, fit_method='SDPdc')\nqtifit_constrained = qtimodel_constrained.fit(data_70, mask)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we can visualize the results obtained with the constrained and\nunconstrained fits on the small dataset, and compare them with the\n\"ground truth\" provided by fitting the QTI model to the full dataset.\nFor example, we can look at the FA and \u00b5FA maps and their value\ndistributions in white matter in comparison to the ground truth.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from dipy.viz.plotting import compare_qti_maps\n\nz = 13\nwm_mask = qtifit_217.ufa[:, :, z] > 0.6\ncompare_qti_maps(qtifit_217, qtifit_unconstrained, qtifit_constrained, wm_mask)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The results clearly show that many of the FA and \u00b5FA values\nobtained with the unconstrained fit fall outside the correct\ntheoretical range [0, 1], while the constrained fit provides\nmore sound results. Note also that even when fitting the rich\ndataset, some of the parameter values produced with the unconstrained\nfit fall outside the correct range, suggesting that the constrained fit,\ndespite the additional time cost, should be performed even on densely\nsampled diffusion data.\n\nFor more information about QTI and QTI+, please read the articles by\nWestin et al. [1]_ and Herberthson et al. [2]_.\n\n### References\n.. [1] Westin C.-F., et al. \"Q-space trajectory imaging for\n   multidimensional diffusion MRI of the human brain.\" NeuroImage 135\n   (2016): 345-362. https://doi.org/10.1016/j.neuroimage.2016.02.039\n.. [2] Herberthson M., Boito D., Dela Haije T., Feragen A., Westin C.-F.,\n   Ozarslan E. \"Q-space trajectory imaging with positivity constraints\n   (QTI+).\" NeuroImage 238 (2021).\n   https://doi.org/10.1016/j.neuroimage.2021.118198\n.. [3] Szczepankiewicz F., Hoge S., Westin C.-F. \"Linear, planar and spherical\n   tensor-valued diffusion MRI data by free waveform encoding in healthy\n   brain, water, oil and liquid crystals.\" Data in Brief (2019).\n   https://doi.org/10.1016/j.dib.2019.104208\n\n.. include:: ../links_names.inc\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" } }, "nbformat": 4, "nbformat_minor": 0 }