Using Machine Learning, More Information can be Extracted from One Brain Scan!

Healthcare Tech Outlook | Wednesday, July 03, 2019

Using deep learning models to detect patterns in brain scans is a relatively new concept in medicine.

FREMONT, CA: For conditions such as Alzheimer’s disease, or rare brain conditions in children, it is difficult to collect sufficient scan data, and neurological experts struggle to consistently outline every anatomical structure across different scans. To address this, MIT researchers have developed a method that extracts more information from a single scan and uses it to train machine-learning models, an approach that can also be applied to complex brain scans.

Training deep learning models to detect patterns in brain scans is a relatively new concept in medicine. In a paper presented at the recent Conference on Computer Vision and Pattern Recognition, the researchers describe a system that uses a single labeled scan, together with unlabeled scans, to automatically synthesize a massive dataset of distinct training examples. That dataset can then be used to train machine learning (ML) models to spot anatomical structures in new scans.

The objective is to automatically generate data for the "image segmentation" process, which partitions an image into regions of pixels that are more meaningful and easier to analyze. The system relies on a convolutional neural network (CNN), an ML model that has become a workhorse for image-processing tasks. The network studies numerous unlabeled scans from several patients and different equipment to learn how anatomy, brightness, and contrast vary across scans. Random combinations of these learned variations are then applied to the single labeled scan to synthesize new labeled scans. Finally, the synthesized scans are fed to a separate CNN that learns to segment new images. The approach makes image segmentation more practical in realistic situations that lack comprehensive training data.
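The synthesis step can be pictured with a short sketch. The snippet below is only an illustration of the idea, not the researchers' code: toy random spatial and brightness/contrast transforms stand in for the variations the first CNN would learn from unlabeled scans, and they are applied to one labeled image and its label map to produce many synthetic labeled examples. All function names, parameters, and the toy "scan" are assumptions made for this example.

```python
# Minimal sketch (not the authors' implementation) of synthesizing labeled
# training data from a single labeled scan, using random stand-ins for the
# learned spatial and appearance variations.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

rng = np.random.default_rng(0)

def random_spatial_transform(shape, max_disp=3.0, smooth=8.0):
    """Sample a smooth random displacement field (stand-in for a learned warp)."""
    field = [gaussian_filter(rng.normal(size=shape), smooth) for _ in range(2)]
    return [d / (np.abs(d).max() + 1e-8) * max_disp for d in field]

def warp(volume, field, order):
    """Apply a displacement field; order=1 for images, order=0 for label maps."""
    grid = np.meshgrid(*[np.arange(s) for s in volume.shape], indexing="ij")
    coords = [g + d for g, d in zip(grid, field)]
    return map_coordinates(volume, coords, order=order, mode="nearest")

def random_appearance_transform(image):
    """Random brightness/contrast change (stand-in for learned intensity shifts)."""
    gain = rng.uniform(0.8, 1.2)
    bias = rng.uniform(-0.1, 0.1)
    return np.clip(gain * image + bias, 0.0, 1.0)

def synthesize(labeled_image, label_map, n_examples=100):
    """Turn one labeled scan into many synthetic labeled training examples."""
    examples = []
    for _ in range(n_examples):
        field = random_spatial_transform(labeled_image.shape)
        img = warp(labeled_image, field, order=1)   # warp intensities smoothly
        seg = warp(label_map, field, order=0)       # warp labels without blending
        img = random_appearance_transform(img)      # labels are unaffected
        examples.append((img, seg))
    return examples

# Toy "scan": a bright disc on a dark background, with its segmentation mask.
yy, xx = np.mgrid[:64, :64]
image = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(float) * 0.8
labels = (image > 0).astype(np.int32)

synthetic = synthesize(image, labels, n_examples=10)
print(len(synthetic), synthetic[0][0].shape)  # 10 synthetic (image, label) pairs
```

Note that the spatial warp is applied to both the image and its label map, so the labels stay aligned with the warped anatomy, while the brightness/contrast change touches only the image; the synthetic pairs can then be used to train the second, segmentation CNN.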

Mind Warp

Magnetic resonance images (MRIs) are composed of three-dimensional pixels called voxels. When an MRI is segmented, regions of voxels are separated and labeled according to the anatomical structure that contains them. Automating this process with ML is challenging because individual brains, and the equipment used to scan them, vary widely. The researchers' system learns to synthesize realistic scans despite these variations: it was trained on 100 unlabeled scans from different patients to learn spatial transformations.
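To make the voxel terminology concrete, here is a minimal sketch, with entirely illustrative label names, voxel spacing, and array shapes, of how a segmentation map assigns an anatomical label to every voxel of a 3D volume and how structure sizes follow from voxel counts:

```python
# Illustrative example only: a 3D scan as a voxel grid plus an integer label
# per voxel naming the structure that contains it. Labels and spacing are
# made up for the sketch, not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)

mri = rng.random((96, 96, 64)).astype(np.float32)    # toy 3D scan, one intensity per voxel
segmentation = np.zeros(mri.shape, dtype=np.int16)   # 0 = background
segmentation[30:60, 30:60, 20:40] = 1                # label 1: some structure (illustrative)
segmentation[10:25, 10:25, 5:15] = 2                 # label 2: another structure (illustrative)

voxel_spacing_mm = (1.0, 1.0, 1.5)                   # physical size of one voxel
voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))

for label, name in {1: "structure_1", 2: "structure_2"}.items():
    n_voxels = int((segmentation == label).sum())
    print(f"{name}: {n_voxels} voxels, about {n_voxels * voxel_volume_mm3:.0f} mm^3")
```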
