{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n# K-fold cross-validation for model comparison\n\nDifferent models of diffusion MRI can be compared based on their accuracy in\nfitting the diffusion signal. Here, we demonstrate this by comparing two\nmodels: the diffusion tensor model (DTI) and Constrained Spherical\nDeconvolution (CSD). These models differ from each other substantially. DTI\napproximates the diffusion pattern as a 3D Gaussian distribution, and has only\n6 free parameters. CSD, on the other hand, fits many more parameters. The\nmodels are also not nested, so they cannot be compared using the\nlog-likelihood ratio.\n\nA general way to perform model comparison is cross-validation [Hastie2008]_. In\nthis method, a model is fit to some of the data (a *learning set*) and the\nmodel is then used to predict a held-out set (a *testing set*). The model\npredictions can then be compared to estimate prediction error on the held out\nset. This method has been used for comparison of models such as DTI and CSD\n[Rokem2014]_, and has the advantage that it the comparison is imprevious to\ndifferences in the number of parameters in the model, and it can be used to\ncompare models that are not nested.\n\nIn DIPY_, we include an implementation of k-fold cross-validation. In this\nmethod, the data is divided into $k$ different segments. In each iteration\n$\frac{1}{k}th$ of the data is held out and the model is fit to the other\n$\frac{k-1}{k}$ parts of the data. A prediction of the held out data is done\nand recorded. At the end of $k$ iterations a prediction of all of the data will\nhave been conducted, and this can be compared directly to all of the data.\n\nFirst, we import that modules needed for this example. In particular, the\n:mod:`reconst.cross_validation` module implements k-fold cross-validation\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import numpy as np\nnp.random.seed(2014)\nimport matplotlib.pyplot as plt\n\nimport dipy.data as dpd\nimport dipy.reconst.cross_validation as xval\nimport dipy.reconst.dti as dti\nimport dipy.reconst.csdeconv as csd\nimport scipy.stats as stats\nfrom dipy.core.gradients import gradient_table\nfrom dipy.io.image import load_nifti\nfrom dipy.io.gradients import read_bvals_bvecs" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We fetch some data and select a couple of voxels to perform comparisons on. 
One\nlies in the corpus callosum (cc), while the other is in the centrum semiovale\n(cso), a part of the brain known to contain multiple crossing white matter\nfiber populations.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "hardi_fname, hardi_bval_fname, hardi_bvec_fname = dpd.get_fnames('stanford_hardi')\n\ndata, affine = load_nifti(hardi_fname)\nbvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname)\ngtab = gradient_table(bvals, bvecs)\n\n\ncc_vox = data[40, 70, 38]\ncso_vox = data[30, 76, 38]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We initialize each kind of model:\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "dti_model = dti.TensorModel(gtab)\nresponse, ratio = csd.auto_response_ssst(gtab, data, roi_radii=10, fa_thr=0.7)\ncsd_model = csd.ConstrainedSphericalDeconvModel(gtab, response)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we perform cross-validation for each kind of model, comparing model\npredictions to the diffusion MRI data in each one of these voxels.\n\nNote that we use 2-fold cross-validation, which means that in each iteration,\nthe model will be fit to half of the data, and used to predict the other half.\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "dti_cc = xval.kfold_xval(dti_model, cc_vox, 2)\ncsd_cc = xval.kfold_xval(csd_model, cc_vox, 2, response)\ndti_cso = xval.kfold_xval(dti_model, cso_vox, 2)\ncsd_cso = xval.kfold_xval(csd_model, cso_vox, 2, response)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We plot a scatter plot of the data with the model predictions in each of these\nvoxels, focusing only on the diffusion-weighted measurements (each point\ncorresponds to a different gradient direction). The two models are compared in\neach sub-plot (blue=DTI, red=CSD).\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "fig, ax = plt.subplots(1, 2)\nfig.set_size_inches([12, 6])\nax[0].plot(cc_vox[gtab.b0s_mask == 0], dti_cc[gtab.b0s_mask == 0], 'o',\n color='b', label='DTI in CC')\nax[0].plot(cc_vox[gtab.b0s_mask == 0], csd_cc[gtab.b0s_mask == 0], 'o',\n color='r', label='CSD in CC')\nax[1].plot(cso_vox[gtab.b0s_mask == 0], dti_cso[gtab.b0s_mask == 0], 'o',\n color='b', label='DTI in CSO')\nax[1].plot(cso_vox[gtab.b0s_mask == 0], csd_cso[gtab.b0s_mask == 0], 'o',\n color='r', label='CSD in CSO')\nax[0].legend(loc='upper left')\nax[1].legend(loc='upper left')\nfor this_ax in ax:\n this_ax.set_xlabel('Data (relative to S0)')\n this_ax.set_ylabel('Model prediction (relative to S0)')\nfig.savefig(\"model_predictions.png\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ ".. 
figure:: model_predictions.png\n   :align: center\n\n   Model predictions.\n\n\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can also quantify the goodness of fit of the models by calculating an\nR-squared score:\n\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "cc_dti_r2 = stats.pearsonr(cc_vox[gtab.b0s_mask == 0],\n                           dti_cc[gtab.b0s_mask == 0])[0]**2\ncc_csd_r2 = stats.pearsonr(cc_vox[gtab.b0s_mask == 0],\n                           csd_cc[gtab.b0s_mask == 0])[0]**2\ncso_dti_r2 = stats.pearsonr(cso_vox[gtab.b0s_mask == 0],\n                            dti_cso[gtab.b0s_mask == 0])[0]**2\ncso_csd_r2 = stats.pearsonr(cso_vox[gtab.b0s_mask == 0],\n                            csd_cso[gtab.b0s_mask == 0])[0]**2\n\nprint(\"Corpus callosum\\n\"\n      \"DTI R2 : %s\\n\"\n      \"CSD R2 : %s\\n\"\n      \"\\n\"\n      \"Centrum Semiovale\\n\"\n      \"DTI R2 : %s\\n\"\n      \"CSD R2 : %s\\n\" % (cc_dti_r2, cc_csd_r2, cso_dti_r2, cso_csd_r2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The output should look something like this::\n\n    Corpus callosum\n    DTI R2 : 0.782881752597\n    CSD R2 : 0.805764364116\n\n    Centrum Semiovale\n    DTI R2 : 0.431921832012\n    CSD R2 : 0.604806420501\n\nAs you can see, DTI is a pretty good model for describing the signal in the CC,\nwhile CSD is much better at describing the signal in regions of multiple\ncrossing fibers.\n\n\n## References\n\n.. [Hastie2008] Hastie, T., Tibshirani, R., Friedman, J. (2008). The Elements\n   of Statistical Learning: Data Mining, Inference and\n   Prediction. Springer-Verlag, Berlin.\n\n.. [Rokem2014] Rokem, A., Chan, K.L., Yeatman, J.D., Pestilli, F., Mezer, A.,\n   Wandell, B.A., 2014. Evaluating the accuracy of diffusion models at multiple\n   b-values with cross-validation. ISMRM 2014.\n\n.. include:: ../links_names.inc\n\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" } }, "nbformat": 4, "nbformat_minor": 0 }