
FRG Informal Talk Series: Nathan Beaumont and Yunkui Pang

October 4 @ 3:00 pm - 4:00 pm

3:00-3:30 pm Informal talk by Nathan Beaumont


Title: Predicting Brain Metastasis Distant Recurrence with Machine Learning on Surveillance Brain MRI

Background: Brain metastases are a common complication of advanced cancer, with an incidence of approximately 20%. Early detection of these metastases is essential so that they can be treated before they cause adverse symptoms.

Purpose: To develop a preliminary model that predicts whether distant intracranial recurrence will occur after stereotactic radiosurgery (SRS) treatment, based on radiomic features extracted from normal-appearing tissue in surveillance magnetic resonance imaging (MRI) exams acquired prior to the time of recurrence.

Methods: We retrospectively identified 72 patients with distant intracranial recurrence of brain metastases after SRS treatment. After recurrent tumors were identified on contrast-enhanced (CE) T1-weighted MRI, surveillance MRIs acquired 3 and 6 months prior to tumor appearance were collected for this analysis; in total there were 117 tumors. A segmentation of the tumor in the appearance image was rigidly registered to the 3-month-prior image. The 6-month image was also registered to the 3-month image and subtracted from it to create a set of "subtraction images". The regions of interest (ROIs) used to extract the radiomic features were the original segmentation and -50%, -25%, +25%, and +50% volume expansions of it. Ninety-three radiomic features were extracted from either the 3-month-prior image or the subtraction image via the tumor segmentation. For a "no-appearance" class, radiomic features were extracted after the segmentation was mirrored over the mid-sagittal plane to a location where no tumor appeared in future images. Feature reduction was performed via analysis of variance (ANOVA), and several machine learning models were trained to classify appearance versus no-appearance regions using 5-fold cross-validation.
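A minimal sketch of the classification stage described above (ANOVA feature reduction followed by an SVM evaluated with 5-fold cross-validation) is shown below, assuming the radiomic features have already been extracted into a feature matrix; the array shapes, number of selected features, and kernel settings are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the appearance vs. no-appearance classification stage.
# Assumes 93 radiomic features per ROI have already been extracted (e.g., with a
# tool such as pyradiomics); the random feature matrix below is only a stand-in.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(234, 93))   # 117 appearance + 117 mirrored no-appearance ROIs
y = np.repeat([1, 0], 117)       # 1 = appearance, 0 = no-appearance

model = make_pipeline(
    StandardScaler(),              # radiomic features live on very different scales
    SelectKBest(f_classif, k=20),  # ANOVA F-test feature reduction (k is a guess)
    SVC(kernel="linear"),          # linear SVM; kernel="rbf" gives the RBF variant
)

scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"5-fold mean accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Swapping scoring="accuracy" for scoring="roc_auc" in cross_val_score yields the corresponding cross-validated AUC estimate.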

Results: The linear support vector machine (SVM) model gave the highest mean accuracy, 0.68 ± 0.06, with an ROC AUC of 0.73 when trained on 3-month-prior images with no segmentation expansion. The accuracies of the linear SVM and radial basis function (RBF) SVM models were not significantly different (p > 0.05). All models had decreased accuracy when trained on subtraction images (p < 0.02). Both increasing the size of the radiomic-analysis ROI (including more future normal tissue) and decreasing it (removing the future tumor border) lowered model performance (p < 0.05).

Conclusions: We trained a model capable of predicting whether a brain metastasis will appear in a segmented region of seemingly normal tissue, based on radiomic features extracted from a surveillance MRI acquired approximately three months before the clinical diagnosis.

3:30-4:00 pm Informal talk by Yunkui Pang


Title: SinoSynth: A Physics-based Domain Randomization Approach for Generalizable CBCT Image Enhancement

Abstract: Cone Beam Computed Tomography (CBCT) finds diverse applications in medicine. Ensuring high image quality in CBCT scans is essential for accurate diagnosis and treatment delivery. Yet, the susceptibility of CBCT images to noise and artifacts undermines both their usefulness and reliability. Existing methods typically address CBCT artifacts through image-to-image translation approaches. These methods, however, are limited by the artifact types present in the training data, which may not cover the complete spectrum of CBCT degradations stemming from variations in imaging protocols. Gathering additional data to encompass all possible scenarios can often pose a challenge. To address this, we present SinoSynth, a physics-based degradation model that simulates various CBCT-specific artifacts to generate a diverse set of synthetic CBCT images from high-quality CT images without requiring pre-aligned data. Through extensive experiments, we demonstrate that several different generative networks trained on our synthesized data achieve remarkable results on heterogeneous multi-institutional datasets, outperforming even the same networks trained on actual data. We further show that our degradation model conveniently provides an avenue to enforce anatomical constraints in conditional generative models, yielding high-quality and structure-preserving synthetic CT images.
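To make the idea of a physics-based degradation model concrete, a rough sinogram-domain sketch follows. This is a generic illustration rather than the SinoSynth model itself: the attenuation scaling, noise, beam-hardening, and scatter terms, along with all parameter ranges, are invented placeholders.

```python
# Generic sketch of sinogram-domain domain randomization (not the SinoSynth code):
# forward-project a clean CT slice, inject randomized degradations, and reconstruct
# a degraded, CBCT-like image.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.filters import gaussian
from skimage.transform import radon, iradon, resize

rng = np.random.default_rng()

ct = resize(shepp_logan_phantom(), (256, 256))        # stand-in for a high-quality CT slice
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(ct, theta=theta)                         # clean line integrals

# Randomly sampled degradation parameters (the "domain randomization" part).
mu = 0.02                            # arbitrary attenuation scaling
i0 = rng.uniform(1e4, 1e5)           # incident photon count (dose level)
gamma = rng.uniform(0.8, 1.0)        # crude beam-hardening-style nonlinearity

counts = rng.poisson(i0 * np.exp(-mu * sino))         # quantum (Poisson) noise
counts = np.clip(counts, 1, None)
sino_deg = np.clip(-np.log(counts / i0) / mu, 0, None)
sino_deg = sino_deg ** gamma                          # nonlinear attenuation distortion
sino_deg += gaussian(sino_deg, sigma=rng.uniform(5, 20)) * rng.uniform(0.0, 0.3)  # scatter-like bias

cbct_like = iradon(sino_deg, theta=theta)             # degraded, CBCT-like reconstruction
```

Pairs of (ct, cbct_like) images generated this way could then be used to train an enhancement network without any pre-aligned real CBCT/CT data, mirroring the training setup described in the abstract.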


The talks will be held both in person at Chapman Hall 435 and on Zoom: https://unc.zoom.us/j/96417451559. The talks will be recorded.
