AI Significantly Boosts Medical Professionals’ Accuracy in Reading EEG Charts of ICU Patients
Posted on 30 May 2024
Electroencephalography (EEG) readings are crucial for detecting when unconscious patients may be experiencing or are at risk of seizures. An EEG involves placing small sensors on the scalp to measure the brain’s electrical activity, which is visualized as fluctuating lines on a chart. During a seizure, these lines spike dramatically, like a seismograph during an earthquake, making the event easy to recognize. However, other clinically significant but subtler abnormalities, known as seizure-like events, are more challenging to identify. Now, an assistive machine learning model can significantly enhance how medical professionals interpret the EEG charts of patients in intensive care settings.
Researchers at Duke University (Durham, N.C., USA) used “interpretable” machine learning algorithms to develop the computational tool. Unlike typical machine learning models, which are often “black boxes” that make it impossible to understand how they reach their conclusions, interpretable models are designed to reveal the reasoning behind their outputs. The team began with EEG samples from over 2,700 patients, in which more than 120 experts identified key features and categorized each as a seizure, one of four types of seizure-like events, or “other.” These events appear on EEG charts as distinct shapes or patterns, but the charts are highly variable, and telltale signals can be obscured by noise or blur together into ambiguous patterns.
Because of this ambiguity, the model was trained to place its decisions along a continuum rather than into well-defined, separate bins. Visually, this continuum can be likened to a multicolored starfish evading a predator, with each differently colored arm representing one type of seizure-like event the EEG could show. The closer the algorithm places a given chart to the tip of an arm, the more confident it is in its decision; charts placed closer to the central body reflect greater uncertainty. The algorithm also highlights the specific brainwave patterns it analyzed to reach its conclusion and compares the chart in question to three professionally diagnosed examples.
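To make this layout concrete, the following Python sketch shows one way such a continuum could work: each of the six categories is assigned an arm direction, and a sample's position is the probability-weighted blend of those directions, so confident predictions land near an arm tip while ambiguous ones settle near the body. This is an illustration only, not the Duke team's code; the category labels, feature vectors, and function names are assumptions.

```python
import numpy as np

# Illustrative placeholder names; the study's actual category labels differ.
CATEGORIES = ["seizure", "SLE-1", "SLE-2", "SLE-3", "SLE-4", "other"]

# One arm direction per category, evenly spaced around a circle.
ANGLES = 2 * np.pi * np.arange(len(CATEGORIES)) / len(CATEGORIES)
ARM_TIPS = np.stack([np.cos(ANGLES), np.sin(ANGLES)], axis=1)

def starfish_position(probs: np.ndarray) -> np.ndarray:
    """Map a probability vector over the categories to 2-D coordinates:
    the probability-weighted average of the arm-tip directions."""
    probs = probs / probs.sum()
    return probs @ ARM_TIPS  # lands near (0, 0) when probability is spread out

def nearest_examples(features: np.ndarray,
                     labeled_features: np.ndarray,
                     k: int = 3) -> np.ndarray:
    """Return indices of the k professionally diagnosed examples whose
    learned features lie closest to the chart under review."""
    dists = np.linalg.norm(labeled_features - features, axis=1)
    return np.argsort(dists)[:k]

# A confident "seizure" call lands near that arm's tip...
print(starfish_position(np.array([0.90, 0.02, 0.02, 0.02, 0.02, 0.02])))
# ...while an ambiguous chart sits near the central body.
print(starfish_position(np.array([0.20, 0.20, 0.15, 0.15, 0.15, 0.15])))
```

Evenly spaced arm directions sum to zero, which is what pulls uncertain, spread-out probability vectors toward the center of the figure.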
This approach allows medical professionals to quickly focus on the relevant sections of the EEG, judge whether the highlighted patterns support the model's call, and catch cases where its analysis is wrong. The tool can help even those with limited experience in reading EEGs make more informed decisions. To validate the technology, eight medical professionals with relevant experience categorized 100 EEG samples into the six categories, both with and without AI assistance. Their accuracy improved significantly with the AI, jumping from 47% to 71%, and they outperformed participants who used a more opaque “black box” algorithm in prior studies. The findings were published in the journal NEJM AI on May 23, 2024.
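As a back-of-the-envelope illustration of that comparison, the sketch below simulates the reported setup: each reader labels the same 100 samples once unaided and once with assistance, and accuracy is simply the fraction of labels matching the expert consensus. The data here are simulated to mirror the reported 47% and 71% figures; nothing in this snippet comes from the study's materials.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_categories = 100, 6

# Simulated expert-consensus labels (an assumption, NOT study data).
truth = rng.integers(0, n_categories, n_samples)
# A guaranteed-incorrect alternative label for each sample.
wrong = (truth + rng.integers(1, n_categories, n_samples)) % n_categories

# Readers answer correctly ~47% of the time unaided and ~71% with the AI,
# mirroring the accuracies reported in the article.
unaided = np.where(rng.random(n_samples) < 0.47, truth, wrong)
assisted = np.where(rng.random(n_samples) < 0.71, truth, wrong)

def accuracy(labels: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of labels that agree with the consensus labels."""
    return float((labels == truth).mean())

print(f"unaided:  {accuracy(unaided, truth):.0%}")
print(f"assisted: {accuracy(assisted, truth):.0%}")
```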
“Usually, people think that black box machine learning models are more accurate, but for many important applications, like this one, it's just not true,” said Cynthia Rudin, the Earl D. McLean, Jr. Professor of Computer Science and Electrical and Computer Engineering at Duke. “It's much easier to troubleshoot models when they are interpretable. And in this case, the interpretable model was actually more accurate. It also provides a bird's eye view of the types of anomalous electrical signals that occur in the brain, which is really useful for care of critically ill patients.”
Related Links:
Duke University