{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "%matplotlib inline" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n# Parallel reconstruction using CSD\n\nThis example shows how to use parallelism (multiprocessing) using\n``peaks_from_model`` in order to speedup the signal reconstruction\nprocess. For this example will we use the same initial steps\nas we used in `example_reconst_csd`.\n\nImport modules, fetch and read data, apply the mask and calculate the response\nfunction.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import multiprocessing\nfrom dipy.core.gradients import gradient_table\nfrom dipy.data import get_fnames, default_sphere\nfrom dipy.io.gradients import read_bvals_bvecs\nfrom dipy.io.image import load_nifti\n\n\nhardi_fname, hardi_bval_fname, hardi_bvec_fname = get_fnames('stanford_hardi')\n\ndata, affine = load_nifti(hardi_fname)\n\nbvals, bvecs = read_bvals_bvecs(hardi_bval_fname, hardi_bvec_fname)\ngtab = gradient_table(bvals, bvecs)\n\nfrom dipy.segment.mask import median_otsu\n\nmaskdata, mask = median_otsu(data, vol_idx=range(10, 50), median_radius=3,\n numpass=1, autocrop=False, dilate=2)\n\nfrom dipy.reconst.csdeconv import auto_response_ssst\n\nresponse, ratio = auto_response_ssst(gtab, maskdata, roi_radii=10, fa_thr=0.7)\n\ndata = maskdata[:, :, 33:37]\nmask = mask[:, :, 33:37]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we are ready to import the CSD model and fit the datasets.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "from dipy.reconst.csdeconv import ConstrainedSphericalDeconvModel\n\ncsd_model = ConstrainedSphericalDeconvModel(gtab, response)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Compute the CSD-based ODFs using ``peaks_from_model``. This function has a\nparameter called ``parallel`` which allows for the voxels to be processed in\nparallel. If ``num_processes`` is None it will figure out automatically the\nnumber of CPUs available in your system. Alternatively, you can set\n``num_processes`` manually. 
Here, we compare the duration of\nexecution with and without parallelism.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "import time\nfrom dipy.direction import peaks_from_model\n\n# Parallel run: the voxels are distributed over 2 worker processes.\nstart_time = time.time()\ncsd_peaks_parallel = peaks_from_model(\n    model=csd_model,\n    data=data,\n    sphere=default_sphere,\n    relative_peak_threshold=0.5,\n    min_separation_angle=25,\n    mask=mask,\n    return_sh=True,\n    return_odf=False,\n    normalize_peaks=True,\n    npeaks=5,\n    parallel=True,\n    num_processes=2)\n\ntime_parallel = time.time() - start_time\nprint(f\"peaks_from_model using 2 processes ran in: {time_parallel} seconds\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On the machine originally used to generate this example, ``peaks_from_model``\nusing 8 processes ran in about 114 seconds. Your timings will vary with your\nhardware and with the value of ``num_processes``.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Serial run: the same reconstruction in a single process.\nstart_time = time.time()\ncsd_peaks = peaks_from_model(\n    model=csd_model,\n    data=data,\n    sphere=default_sphere,\n    relative_peak_threshold=0.5,\n    min_separation_angle=25,\n    mask=mask,\n    return_sh=True,\n    return_odf=False,\n    normalize_peaks=True,\n    npeaks=5,\n    parallel=False,\n    num_processes=None)\n\ntime_single = time.time() - start_time\nprint(f\"peaks_from_model ran in: {time_single} seconds\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On the same machine, the serial run of ``peaks_from_model`` took about\n243 seconds.\n\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "print(f\"Speedup factor: {time_single / time_parallel}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "On Windows, if you get a runtime error about a frozen executable, wrap the\ncode above in a ``main`` function and start your script with::\n\n    if __name__ == '__main__':\n        import multiprocessing\n        multiprocessing.freeze_support()\n        main()\n\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" } }, "nbformat": 4, "nbformat_minor": 0 }