Improved Cough-Detection Technology Aids Health Monitoring
Posted on 24 Oct 2025
Coughing is an important biomarker for tracking a variety of conditions: it can help clinicians monitor the progression of respiratory diseases or anticipate asthma exacerbations. Historically, cough-detection technologies have struggled to distinguish the sound of coughing from speech and nonverbal human noises, which limits their usefulness. Now, researchers have improved the ability of wearable health devices to accurately detect when a patient is coughing, making it easier to monitor chronic conditions and predict health risks.
Researchers at North Carolina State University (Raleigh, NC, USA) have developed a multimodal approach that draws on data from chest-worn health monitors to train cough-detection models. The team collected two streams of real-world data, audio captured by the monitors and movement data from onboard accelerometers, and used them to refine machine-learning algorithms built on earlier work. Combining sound and movement gives the model complementary signals: audio supplies acoustic features, while the accelerometer captures the sudden chest movements associated with coughing, improving detection in situations where sound alone is ambiguous.
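The article does not detail the team's exact features or classifier. As a rough illustration of feature-level fusion of audio and accelerometer data, the sketch below computes simple spectral statistics from an audio window, summary statistics from the acceleration magnitude, concatenates them, and trains a generic classifier on synthetic windows. All function names, window sizes, feature choices, and the classifier are illustrative assumptions, not the published model's design.

```python
# Minimal sketch of audio + accelerometer feature fusion for cough detection.
# Features, window sizes, and classifier are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

def audio_features(window, sr=16000):
    """Crude spectral summary of one audio window (placeholder features)."""
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / sr)
    energy = spectrum.sum() + 1e-9
    centroid = (freqs * spectrum).sum() / energy                     # spectral centroid
    rolloff = freqs[np.searchsorted(np.cumsum(spectrum), 0.85 * energy)]
    return np.array([np.log(energy), centroid, rolloff])

def accel_features(window):
    """Summary statistics of the 3-axis acceleration magnitude."""
    mag = np.linalg.norm(window, axis=1)
    return np.array([mag.mean(), mag.std(), mag.max(), np.abs(np.diff(mag)).mean()])

def fused_features(audio_win, accel_win):
    """Concatenate the two modalities into a single feature vector."""
    return np.concatenate([audio_features(audio_win), accel_features(accel_win)])

# Synthetic stand-in data: 200 one-second windows, half labeled "cough".
X, y = [], []
for label in (0, 1):
    for _ in range(100):
        audio_win = rng.normal(scale=1.0 + label, size=16000)           # louder bursts for coughs
        accel_win = rng.normal(scale=0.2 + 0.5 * label, size=(50, 3))   # sharper jolts for coughs
        X.append(fused_features(audio_win, accel_win))
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["non-cough", "cough"]))
```

The point of the sketch is the fusion step: each modality contributes features that the other lacks, so the classifier can fall back on motion cues when the audio is ambiguous, and vice versa.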
When tested in a laboratory setting, the new multimodal model proved more accurate than previous cough-detection technologies, producing fewer false positives and better distinguishing coughs from speech and from nonverbal sounds like sneezes or throat-clearing. The researchers also showed that movement data alone are insufficient, since different actions can produce similar motion, and that fusing the two modalities reduces misclassification in real-world scenarios. The paper describing these results appears in the IEEE Journal of Biomedical and Health Informatics.
Improved wearable cough detection could enable continuous monitoring of chronic respiratory disease, earlier warnings for asthma exacerbations, and more reliable cough-frequency metrics for clinical care and trials. The approach is practical for on-body devices and was designed to cope with real-world variability in sounds and motions. The researchers are now working to further reduce errors and make the system more robust in everyday settings.
“This is a meaningful step forward,” said Edgar Lobaton, corresponding author. “We’ve gotten very good at distinguishing coughs from human speech, and the new model is substantially better at distinguishing coughs from nonverbal sounds. There is still room for improvement, but we have a good idea of how to address that and are now working on this challenge.”