




The American Association of Physicists in Medicine (AAPM) is sponsoring the Quantitative Intravoxel Incoherent Motion (IVIM) Diffusion MRI (dMRI) Reconstruction Challenge, leading up to the 2024 AAPM Annual Meeting & Exhibition. We invite participants to develop image reconstruction and model fitting methods that improve the accuracy and robustness of quantitative parameter estimation for the widely used IVIM model of dMRI [1]. Both deep learning (DL) and non-DL approaches are welcome. Methods can operate in the image domain, in the k-space domain, or in a combination of both. In this Challenge, participants will be provided with k-space data of breast dMRI generated via rigorous simulations that accurately represent the dMRI signal generation process associated with the IVIM model across a range of diffusion weightings (b-values). Participants will be asked to derive IVIM parameter maps and compete for the most accurate reconstruction results. The two top-performing teams (one member per team) will be awarded complimentary meeting registration to present their methodologies during the 2024 AAPM Annual Meeting & Exhibition in Los Angeles, CA, from July 21-25, 2024 (in-person attendance is required). The Challenge organizers will summarize the results in a journal publication after the Annual Meeting.
Diffusion MRI (dMRI) has been extensively employed over the years for the diagnosis of diseases and is increasingly used to guide radiation therapy and assess treatment responses. dMRI captures the random motion of water protons influenced by tissue microstructure, thus offering valuable insights into clinically significant tissue microstructural properties [2, 3]. Unlike conventional MRI reconstruction problems, which focus on retrieving anatomical images from measured k-space data, dMRI reconstruction aims to quantitatively determine images of biophysical tissue microstructural parameters. However, the estimation of these parameters is often accompanied by considerable uncertainties due to the complex inverse problem posed by the highly nonlinear nature of dMRI signal models [4, 5], particularly in scenarios with low signal-to-noise ratios (SNRs) resulting from fast image acquisition and physiological motion. The substantial variation and bias in parameter estimation hinder the interpretation of results and impede the reliable clinical application of dMRI in tissue characterization and longitudinal evaluations.
The proposed IVIM-dMRI Reconstruction Challenge aims to enhance the accuracy and robustness of quantitative dMRI reconstruction, with a focus on the widely used and clinically significant IVIM model [1]. The IVIM model enables simultaneous assessment of perfusion and diffusion by fitting the dMRI signal to a bi-exponential model that captures both water molecular diffusion and blood microcirculation. The primary task of this Challenge is to achieve quantitative reconstruction of IVIM-dMRI tissue parametric maps, specifically the fractional perfusion (f) related to microcirculation, the pseudo-diffusion coefficient (D*), and the true diffusion coefficient (D), from the provided k-space data, and to strive for the most accurate reconstruction results.
Details of the data generation methodology will be provided on the IVIM-dMRI Reconstruction Challenge website.
In IVIM-dMRI, a series of MR images is acquired, each under a diffusion weighting b. The MRI signal of each voxel follows a bi-exponential equation, S(b) = S0 [f exp(-b D*) + (1 - f) exp(-b D)], where S0 is the signal intensity at b = 0 s/mm2. For each diffusion weighting, the measured complex k-space data g(b) is related to S(b) via the standard Fourier transform (FT) procedure with noise as g(b) = F[S(b)] + n, where F denotes the FT operation and n is independent and identically distributed (i.i.d.) Gaussian noise.
In this Challenge, participants are given k-space data g(b) at a series of known b-values, and are tasked to reconstruct images of f, D*, and D.
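As a concrete illustration, the bi-exponential IVIM signal model can be written in a few lines of Python. The b-values and tissue parameters below are illustrative choices for a sketch, not the Challenge's gold-standard settings.

```python
import numpy as np

def ivim_signal(b, S0, f, D_star, D):
    """Bi-exponential IVIM signal: S(b) = S0*(f*exp(-b*D*) + (1-f)*exp(-b*D)).

    b: diffusion weighting in s/mm2; D_star and D in mm2/s; f: perfusion fraction.
    """
    return S0 * (f * np.exp(-b * D_star) + (1.0 - f) * np.exp(-b * D))

# Illustrative b-values and parameters (placeholders, not the Challenge values)
b_values = np.array([0.0, 50.0, 100.0, 200.0, 400.0, 600.0, 800.0, 1000.0])
signal = ivim_signal(b_values, S0=1.0, f=0.1, D_star=0.01, D=0.001)
```

Note that at b = 0 the two exponentials both equal one, so the signal reduces to S0; the perfusion term f exp(-b D*) decays much faster than the true-diffusion term because D* is roughly an order of magnitude larger than D.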
We generated simulated breast MR images using the VICTRE breast phantom. These breast phantom images depict realistic breast anatomy consisting of various types of normal breast tissues and tumor tissue types demonstrating intratumoral heterogeneity (see the figure). In this figure, (a) depicts tissue compositions, with each voxel value being an integer label for illustration purposes.
For each tissue type, we assigned known values for f, D*, and D, serving as the gold standard. The values for f, D*, and D adhere to parameters established in prior scientific literature, ensuring their alignment with realistic biological interpretations. Based on the tissue-specific MRI properties, we generated the images S(b). Figure (b-c) shows images at b = 0 and 1000 s/mm2 in the absence of noise. (d) illustrates the signal decay for a tumor voxel and a normal tissue voxel.
Finally, based on the simulated S(b) at a series of b-values, FT was performed on the image at each b-value and then noise was added, yielding the k-space data available for the reconstruction.
This challenge consists of three phases:
Phase I (training and development phase)
Participants will be given access to the code, with a description of the simulation process for generating k-space datasets with a range of b-values from the known tissue microstructural parametric maps. A training dataset including 1000 cases will be provided for participants to develop reconstruction models. Each case will include
To facilitate understanding of the dMRI model and our data, we will provide a Python script that performs an inverse FT to recover S(b) from the k-space data and then derives tissue parameter maps via pixel-wise data fitting. The script will also include code to write the results in a specific format for automatic evaluation by the Challenge webpage.
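A minimal version of such a pipeline, an inverse FT followed by single-voxel least-squares fitting, might look like the sketch below. The SciPy-based fitting, initial guesses, and bounds are our own illustrative choices, not the contents of the official script.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim_model(b, S0, f, D_star, D):
    # Bi-exponential IVIM signal
    return S0 * (f * np.exp(-b * D_star) + (1.0 - f) * np.exp(-b * D))

def recover_images(kspace):
    # Inverse 2-D FT per b-value image; use magnitude images for fitting
    return np.abs(np.fft.ifft2(kspace, axes=(-2, -1)))

def fit_voxel(b_values, signal):
    """Least-squares fit of the IVIM model to one voxel's signal decay.

    Initial guesses and bounds are illustrative (D* constrained above D).
    """
    p0 = [signal[0], 0.1, 0.01, 0.001]
    bounds = ([0.0, 0.0, 0.003, 1e-5], [np.inf, 1.0, 0.5, 0.003])
    popt, _ = curve_fit(ivim_model, b_values, signal, p0=p0, bounds=bounds)
    return popt  # S0, f, D_star, D

# Noiseless sanity check with made-up parameters
b_values = np.array([0.0, 50.0, 100.0, 200.0, 400.0, 600.0, 800.0, 1000.0])
signal = ivim_model(b_values, S0=1.0, f=0.15, D_star=0.02, D=0.0012)
S0_hat, f_hat, Ds_hat, D_hat = fit_voxel(b_values, signal)
```

In practice, pixel-wise fitting of the bi-exponential model is sensitive to noise, which is exactly the difficulty this Challenge targets; the loop over all voxels is omitted here for brevity.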
Phase II (validation and refinement phase)
Participants will validate their algorithms using the provided validation dataset and submit their reconstruction results through the Challenge webpage. The validation dataset will consist of 10 cases, each including noisy k-space data of complex images with 200 × 200 pixels at the corresponding b-values. Ground truth for the validation dataset will not be provided.
After the participants submit their results to the Challenge webpage, the results will be evaluated using predetermined evaluation metrics, and a leaderboard will display the performance of different participants. At this phase, the number of submissions is unlimited.
Phase III (testing and final scoring phase)
Participants will run their algorithms on the provided test dataset and submit their reconstruction results through the Challenge website. The test dataset will consist of 100 cases, each including noisy k-space data of complex images with 200 × 200 pixels at the corresponding b-values. Ground truth for the test dataset will not be provided.
After the participants submit their results to the Challenge webpage, the results will be evaluated using predetermined evaluation metrics, and a leaderboard will display the performance of different participants. At this phase, each participant team is allowed a maximum of three submissions. Required submission:
The accuracy of the reconstructed IVIM-dMRI parameters will be evaluated using the following metrics (see detailed information at the Challenge website):
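Purely as an illustration of the kind of map-wise comparison involved, a normalized root-mean-square error between a reconstructed parameter map and the gold standard could be computed as below; the official metrics are defined on the Challenge website and may differ.

```python
import numpy as np

def nrmse(recon, truth):
    """Normalized RMSE between a reconstructed and a gold-standard map.

    Illustrative only; not necessarily one of the official Challenge metrics.
    """
    return np.sqrt(np.mean((recon - truth) ** 2)) / np.sqrt(np.mean(truth ** 2))

# Toy check on a constant map
truth = np.full((4, 4), 0.001)
```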
At the conclusion of the challenge, the following information will be provided to each participant:
The top 2 participants (one member from each team only):
A manuscript summarizing the challenge results will be submitted for publication after the AAPM Annual Meeting & Exhibition.
The following rules apply to those who register and download the data:
For further information, please contact the lead organizer, Xun Jia (xunjia@jhu.edu) or AAPM staff member, Emily Townley (emily@aapm.org).
References