APPLES Reflection 1

Our project is training a model for Semantic Brain Segmentation, designed to bring modern post-processing capabilities to life-saving surgeries supported by technologies like MRI and CT scans. Specifically, we are building a machine learning model that can automatically partition these scans into classes corresponding to bone, brain matter, cerebrospinal fluid, abnormalities, and so on. We intend to integrate this model into a pre-existing app, provided by our client, that uses Apple's augmented reality kit (ARKit) to deliver this functionality.
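To make the segmentation task concrete, the sketch below shows the general shape of such a model: a small fully convolutional network that assigns every pixel of a 2D scan slice to one of a handful of tissue classes. This is only an illustrative toy, not our client's actual pipeline; the class list, image size, and architecture here are assumptions for demonstration.

```python
# Minimal sketch of per-pixel tissue classification (semantic segmentation).
# The class list and 256x256 slice size are assumptions, not the real pipeline.
import torch
import torch.nn as nn

NUM_CLASSES = 5  # e.g., background, bone, brain matter, cerebrospinal fluid, abnormality

class TinySegNet(nn.Module):
    """Encoder-decoder that maps a single-channel slice to per-pixel class scores."""
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),  # downsample 256 -> 128
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, num_classes, kernel_size=1),  # per-pixel class logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Toy forward/backward pass on fake slices, just to show the shapes involved.
model = TinySegNet()
slices = torch.randn(2, 1, 256, 256)                     # (batch, channel, H, W)
labels = torch.randint(0, NUM_CLASSES, (2, 256, 256))    # per-pixel ground-truth classes
logits = model(slices)                                   # (2, NUM_CLASSES, 256, 256)
loss = nn.CrossEntropyLoss()(logits, labels)
loss.backward()
print(logits.shape, float(loss))
```

A production model would be substantially deeper (e.g., a U-Net-style network) and trained on annotated MRI/CT data, but the input-to-label-map structure shown here is the core idea.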

We’re working with Dr. Andrew Abumoussa of UNC Neurosurgery to address the problem of analyzing a patient’s physiology quickly and in a way that can inform the diagnostic decisions made when preparing for brain surgery. Surprisingly, prior work on using augmented reality and/or machine learning to automatically segment parts of the brain is limited, opening up an opportunity to revolutionize patient assessment and brain surgery as a whole. When evaluating trauma to the skull and/or abnormal physiology, medical professionals are under immense pressure to make accurate assessments, especially before or during emergency surgery. The ultimate goal is a model behind a simple interface that can generate a result within a few clicks, increasing the accuracy of patient assessment and, more importantly, reducing risk during surgery.

While this model is focused mainly on applications within neurosurgery, our client’s software connects it with other modeling services that can support more general diagnostics, even in other medical specialties.