Data
All subjects are typically developing. Subjects were excluded from this challenge if they 1) have a first-degree relative with autism, intellectual disability, schizophrenia, or bipolar disorder, 2) have any significant medical and/or genetic condition affecting growth, development, or cognition, or 3) have any contraindication to MRI.
Training Dataset
One zip file with training images and manual labels is available for download. The images were randomly chosen from the UNC/UMN Baby Connectome Project (BCP) and acquired with the following imaging parameters:
- T1-weighted MR images: TR/TE = 2400/2.2 ms, head coil = 32-channel, resolution = 0.8×0.8×0.8 mm³;
The zip file contains 24-month-old T1-weighted MR images of 13 infant subjects (named as subject-1 to subject-13):
- subject-?-T1: T1-weighted MR image
- subject-?-label: manual segmentation
Notes on the manual segmentation
- 0: Background (everything outside the cerebellum)
- 1: Cerebrospinal fluid (CSF)
- 2: Gray matter (GM)
- 3: White matter (WM)
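For illustration, the label convention above can be exercised with a short Python sketch that computes a per-label Dice overlap, a standard metric for evaluating segmentations of this kind. The challenge does not specify an evaluation script or file format, so this is only a sketch: the synthetic NumPy volumes stand in for loaded label images (in practice you would load the `subject-?-label` files with a library such as nibabel).

```python
import numpy as np

# Label convention from the manual segmentation (0 = background)
LABELS = {1: "CSF", 2: "GM", 3: "WM"}

def dice(pred, truth, label):
    """Dice overlap for one label: 2|A∩B| / (|A| + |B|)."""
    a = (pred == label)
    b = (truth == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Synthetic 3-D volumes standing in for a predicted and a manual label map
rng = np.random.default_rng(0)
truth = rng.integers(0, 4, size=(16, 16, 16))
pred = truth.copy()
pred[0] = 0  # perturb one slice to simulate segmentation error

for lab, name in LABELS.items():
    print(f"{name}: Dice = {dice(pred, truth, lab):.3f}")
```

A perfect segmentation yields a Dice of 1.0 for every label; the perturbed slice above pulls each score slightly below that.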
Testing Datasets
One zip file with testing images is available for download. The zip file contains T1-weighted MR images from 3 sites and 2 time points:
1. BCP, 5 subjects at 24 months (named as subject-14 to subject-18), with the following imaging parameters:
- T1-weighted images with the same imaging parameters as the training images;
2. BCP, 5 subjects at 6 months (named as subject-19 to subject-23), with the following imaging parameters:
- T1-weighted images with the same imaging parameters as the training images;
3. Vanderbilt University*, 5 subjects at 6 months (named as subject-24 to subject-28), with the following imaging parameters:
- T1-weighted images: TR/TE = 10/4.6 ms, head coil = 32-channel, resolution = 1×1×1 mm³;
4. Stanford University**, 5 subjects at 6 months (named as subject-29 to subject-33), with the following imaging parameters:
- T1-weighted images: TR/TE = 7.6/2.9 ms, flip angle = 11°, resolution = 0.9×0.9×0.8 mm³;
All images were resampled to an isotropic resolution of 0.8×0.8×0.8 mm³. We performed skull stripping and cerebellum extraction using an infant-dedicated pipeline (iBEAT V2.0 Cloud). For this challenge, only imaging data without motion artifacts were chosen for the training and testing sets.
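As a rough illustration of the resampling step, the sketch below rescales a 3-D volume from its native voxel spacing to the 0.8 mm isotropic grid used in this challenge. The organizers used iBEAT V2.0 Cloud for preprocessing; the scipy-based approach here is only an assumed stand-in, not the actual pipeline.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_iso(volume, spacing, target=0.8):
    """Resample a 3-D volume from per-axis voxel spacing (mm) to isotropic `target` mm.

    Linear interpolation (order=1) suits intensity images; label maps should
    use order=0 (nearest neighbor) to avoid creating intermediate labels.
    """
    factors = [s / target for s in spacing]
    return zoom(volume, factors, order=1)

# Example: a 1×1×1 mm volume (as in the Vanderbilt protocol) resampled to 0.8 mm;
# each axis grows by roughly a factor of 1/0.8 = 1.25.
vol = np.zeros((10, 10, 10), dtype=np.float32)
out = resample_iso(vol, spacing=(1.0, 1.0, 1.0))
print(out.shape)
```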
Testing datasets are made available to each participating team for a limited, controlled time window (7 days). When you are ready, please notify us by email (yuesun@med.unc.edu) and we will send the testing datasets. Participants will analyze the images using their local computing infrastructure and must submit their segmentation results within 7 days.
Registration
Before registering a team, please read the Terms of Participation below:
- By registering a team, you agree to respect the rules described on these pages and to print and sign the Agreement-cSeg2022 (download). Note that we do not accept electronic signatures. Once you have successfully registered and uploaded the signed agreement, you will be able to download the data.
- We will not respond if the agreement is not signed.
- Please use the institutional email corresponding to your affiliation; we will not respond otherwise.
- We would greatly appreciate the submission of your results on our testing data, so we can include them on the cSeg-2022 webpage.
- Please contact li_wang@med.unc.edu if you would like to submit your results.
- Please inform the organizers if you would like to include the results in a planned publication.
*We would like to express our special thanks and appreciation to Prof. Kathryn L. Humphreys for providing these testing subjects.
**We would like to express our special thanks and appreciation to Prof. Ian H. Gotlib and his lab members for providing these testing subjects.