Representative Image

Brain tumours can now be classified with single MRI scan using deep learning model

ANI | Updated: Aug 11, 2021 23:01 IST

Washington [US], August 11 (ANI): In a new study, researchers developed a deep learning model capable of classifying a brain tumour as one of six common types from a single 3D MRI scan.
The study by researchers from the Washington University School of Medicine has been published in Radiology: Artificial Intelligence.
"This is the first study to address the most common intracranial tumours and to directly determine the tumour class or the absence of tumour from a 3D MRI volume," said Satrajit Chakrabarty, M.S., a doctoral student under the direction of Aristeidis Sotiras, PhD, and Daniel Marcus, PhD, in Mallinckrodt Institute of Radiology's Computational Imaging Lab at Washington University School of Medicine in St. Louis, Missouri.
The six most common intracranial tumour types are high-grade glioma, low-grade glioma, brain metastases, meningioma, pituitary adenoma and acoustic neuroma. Each is diagnosed through histopathology, which requires surgically removing tissue from the site of the suspected cancer and examining it under a microscope.
According to Chakrabarty, machine and deep learning approaches using MRI data could potentially automate the detection and classification of brain tumours.
"Non-invasive MRI may be used as a complement, or in some cases, as an alternative to histopathologic examination," he said.
To build their machine learning model, called a convolutional neural network, Chakrabarty and researchers from Mallinckrodt Institute of Radiology developed a large, multi-institutional dataset of intracranial 3D MRI scans from four publicly available sources.
In addition to the institution's own internal data, the team obtained pre-operative, post-contrast T1-weighted MRI scans from the Brain Tumor Image Segmentation (BraTS) dataset and The Cancer Genome Atlas Glioblastoma Multiforme and Low-Grade Glioma collections.
The researchers divided a total of 2,105 scans into three subsets of data: 1,396 for training, 361 for internal testing and 348 for external testing.
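The three subsets described above can be sketched in a few lines of Python. The scan IDs and random seed below are illustrative placeholders, not details from the study; note also that the study's external test set came from separate institutions, so in practice it would be held out by data source rather than sampled at random — this sketch only reproduces the reported sizes (1,396 + 361 + 348 = 2,105).

```python
import random

# Hypothetical scan identifiers standing in for the 2,105 MRI volumes.
scan_ids = [f"scan_{i:04d}" for i in range(2105)]

# Hold out the last 348 IDs as a stand-in for the external test set
# (in the study this split follows institution of origin, not position).
external_test = scan_ids[-348:]
rest = scan_ids[:-348]

# Shuffle the remainder and split it into training and internal testing.
rng = random.Random(42)  # illustrative seed, not from the study
rng.shuffle(rest)
train, internal_test = rest[:1396], rest[1396:]
```

After this split, `train`, `internal_test` and `external_test` are disjoint and together cover all 2,105 scans.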

The first set of MRI scans was used to train the convolutional neural network to discriminate between healthy scans and scans with tumours, and to classify tumours by type. The researchers then evaluated the model on both the internal and external test sets.
Using the internal testing data, the model achieved an accuracy of 93.35 per cent (337 of 361) across seven imaging classes (a healthy class and six tumour classes).
Sensitivities ranged from 91 per cent to 100 per cent, and positive predictive value--or the probability that patients with a positive screening test truly have the disease--ranged from 85 to 100 per cent.
Negative predictive values--or the probability that patients with a negative screening test truly don't have the disease--ranged from 98 to 100 per cent across all classes. Network attention overlapped with the tumour areas for all tumour types.
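The per-class figures quoted above (sensitivity, positive predictive value, negative predictive value) all follow from a confusion matrix by treating each class one-vs-rest. The sketch below shows the arithmetic on a made-up 3x3 matrix — the numbers are purely illustrative, not the study's seven-class results.

```python
# Rows are true labels, columns are predicted labels (hypothetical data).
conf = [
    [50, 2, 1],   # true class 0
    [3, 40, 2],   # true class 1
    [0, 1, 60],   # true class 2
]

def per_class_metrics(cm, k):
    """One-vs-rest sensitivity, PPV and NPV for class k."""
    n = sum(sum(row) for row in cm)
    tp = cm[k][k]                                    # correctly labelled class k
    fn = sum(cm[k]) - tp                             # class k missed by the model
    fp = sum(cm[i][k] for i in range(len(cm))) - tp  # other classes predicted as k
    tn = n - tp - fn - fp                            # everything else
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)  # probability a positive call is truly class k
    npv = tn / (tn + fn)  # probability a negative call is truly not class k
    return sensitivity, ppv, npv

# Overall accuracy: correct predictions on the diagonal over all cases.
accuracy = sum(conf[i][i] for i in range(len(conf))) / sum(sum(r) for r in conf)
```

The same formulas scale directly to the study's seven classes (healthy plus six tumour types); only the size of the matrix changes.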
For the external test dataset, which included only two tumour types (high-grade glioma and low-grade glioma), the model had an accuracy of 91.95 per cent.
"These results suggest that deep learning is a promising approach for automated classification and evaluation of brain tumours," Chakrabarty said. "The model achieved high accuracy on a heterogeneous dataset and showed excellent generalization capabilities on unseen testing data."
Chakrabarty said the 3D deep learning model comes closer to the goal of an end-to-end, automated workflow by improving upon existing 2D approaches, which require radiologists to manually delineate, or outline, the tumour area on an MRI scan before machine processing. The convolutional neural network eliminates the tedious and labour-intensive step of tumour segmentation prior to classification.
Dr Sotiras, a co-developer of the model, said it can be extended to other brain tumour types or neurological disorders, potentially providing a pathway to augment much of the neuroradiology workflow.
"This network is the first step toward developing an artificial intelligence-augmented radiology workflow that can support image interpretation by providing quantitative information and statistics," Chakrabarty added. (ANI)