MRI AI Model Classifies Common Intracranial Tumors
By HospiMedica International staff writers | Posted on 07 Sep 2021
Image: Grad-CAM color maps showing tumor prediction (Photo courtesy of WUSTL)
An artificial intelligence (AI) 3D model can classify a brain tumor as one of six common types from a single magnetic resonance imaging (MRI) scan, according to a new study.
To develop the model, researchers at Washington University (WUSTL; St. Louis, MO, USA) used 2,105 T1-weighted MRI scans from four publicly available datasets, split into training (1,396), internal testing (361), and external testing (348) sets. A convolutional neural network (CNN) was trained to discriminate between healthy scans and scans containing one of six tumor types (high-grade glioma, low-grade glioma, brain metastasis, meningioma, pituitary adenoma, and acoustic neuroma). Model performance was then evaluated, and Grad-CAM feature maps were plotted to visualize network attention.
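The Grad-CAM attention maps mentioned above weight each convolutional feature map by the spatially averaged gradient of the class score with respect to that map, sum the weighted maps, and keep only the positive evidence. A minimal, framework-free sketch of that core computation (the array shapes and random values below are illustrative placeholders, not the study's network or data):

```python
import numpy as np

# Illustrative Grad-CAM core: all arrays are made-up placeholders.
rng = np.random.default_rng(0)
activations = rng.random((8, 4, 4, 4))          # 8 feature maps over a 4x4x4 3D volume
gradients = rng.standard_normal((8, 4, 4, 4))   # d(class score)/d(activations)

# Per-map importance weight: gradient averaged over the spatial dimensions
weights = gradients.mean(axis=(1, 2, 3))        # shape (8,)

# Weighted sum of feature maps, then ReLU to keep positive evidence only
cam = np.maximum((weights[:, None, None, None] * activations).sum(axis=0), 0.0)

# Normalize to [0, 1] so the map can be overlaid on the MRI as a heatmap
cam /= cam.max() + 1e-8
```

In a real network the activations and gradients come from a chosen convolutional layer during a backward pass; the resulting map highlights the regions the network attended to, which in the study overlapped with the tumor.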
On the internal test set, the model achieved an accuracy of 93.35% across seven imaging classes (one healthy class and six tumor classes). Sensitivity ranged from 91% to 100%, positive predictive value (PPV) from 85% to 100%, and negative predictive value (NPV) from 98% to 100% across all classes. Network attention overlapped with the tumor area for every tumor type. On the external test dataset, which included only two tumor types (high-grade glioma and low-grade glioma), the model achieved an accuracy of 91.95%. The study was published on August 11, 2021, in Radiology: Artificial Intelligence.
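For a multi-class classifier like this one, sensitivity, PPV, and NPV are reported per class by treating each class one-vs-rest against the confusion matrix. A small illustrative sketch (the 3-class matrix below is invented for demonstration, not the study's results):

```python
import numpy as np

# Rows = true class, columns = predicted class (invented 3-class example)
cm = np.array([[50, 2, 1],
               [3, 40, 2],
               [0, 1, 45]])

def per_class_metrics(cm, k):
    """One-vs-rest sensitivity, PPV, and NPV for class index k."""
    tp = cm[k, k]                   # true k, predicted k
    fn = cm[k].sum() - tp           # true k, predicted something else
    fp = cm[:, k].sum() - tp        # predicted k, true something else
    tn = cm.sum() - tp - fn - fp    # everything else
    return tp / (tp + fn), tp / (tp + fp), tn / (tn + fn)

sens, ppv, npv = per_class_metrics(cm, 0)
accuracy = np.trace(cm) / cm.sum()
```

Overall accuracy is the trace of the matrix divided by its sum; the per-class ranges quoted in the study come from repeating the one-vs-rest computation for each of the seven classes.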
“These results suggest that deep learning is a promising approach for automated classification and evaluation of brain tumors. The model achieved high accuracy on a heterogeneous dataset and showed excellent generalization capabilities on unseen testing data,” said lead author Satrajit Chakrabarty, MSc, of the Department of Electrical and Systems Engineering. “This network is the first step toward developing an artificial intelligence-augmented radiology workflow that can support image interpretation by providing quantitative information and statistics.”
Deep learning is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. CNNs use a cascade of many layers of nonlinear processing units for feature extraction and transformation, with each successive layer taking the output of the previous layer as input to form a hierarchical representation.
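The layer cascade described above can be sketched in a few lines: each "layer" here is just a linear filter followed by a nonlinearity, and every layer consumes the previous layer's output. The weights are random placeholders; a real CNN learns them from training data.

```python
import numpy as np

rng = np.random.default_rng(42)

def layer(x, w):
    """One nonlinear processing unit: linear map followed by a ReLU."""
    return np.maximum(x @ w, 0.0)

# A cascade of three layers: each takes the previous layer's output as
# input, producing an increasingly abstract (hierarchical) representation.
x = rng.random((1, 16))                     # toy input features
weights = [rng.standard_normal((16, 8)),
           rng.standard_normal((8, 4)),
           rng.standard_normal((4, 2))]

for w in weights:
    x = layer(x, w)
```

The final two-dimensional output plays the role of the learned representation that a classifier head would map to the seven output classes.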
Related Links:
Washington University