AI Interpretability Tool for Photographed ECG Images Offers Pixel-Level Precision

By HospiMedica International staff writers
Posted on 05 May 2025

The electrocardiogram (ECG) is a crucial diagnostic tool in modern medicine, used to detect heart conditions such as arrhythmias and structural abnormalities. Every year, millions of ECGs are performed in settings ranging from emergency rooms to routine doctor appointments. As artificial intelligence (AI) systems evolve, they are increasingly used to analyze ECGs, in some cases identifying conditions that physicians may overlook. The challenge is that doctors need to understand the rationale behind an AI system's diagnosis. While AI-powered ECG analysis can be highly accurate, it often functions like a "black box," producing results without explaining its reasoning. This lack of transparency makes many physicians hesitant to fully trust AI tools. To address this, researchers are working to make AI more interpretable, enabling it to explain its conclusions in a way that aligns with medical knowledge.

For AI to be truly useful in clinical settings, it must highlight the same ECG features that doctors rely on when diagnosing heart conditions. This is challenging because even cardiologists do not always agree on which ECG markers matter most. Researchers have nevertheless developed several interpretability techniques to help AI explain its decisions. However, these methods sometimes highlight broad regions of the ECG rather than the exact markers, which can lead to misinterpretation. AI tools can also focus on irrelevant parts of the image, such as the background, instead of the actual ECG signals. Most existing AI models depend on high-quality scanned ECG images, but in practice doctors rarely have perfect scans. They frequently rely on paper printouts from ECG machines, which they may photograph with smartphones to share with colleagues or add to patient records. These photographs can be tilted, crumpled, or shadowed, making AI analysis more challenging.


Image: Prof. Yael Yaniv led the study introducing a new AI interpretability tool designed specifically for photographed ECG images (Photo courtesy of Technion)

To address this issue, researchers at Technion - Israel Institute of Technology (Haifa, Israel), in collaboration with others, have developed a new AI interpretability tool designed specifically for photographed ECG images. Using a mathematical technique based on the Jacobian matrix, the method offers pixel-level precision, allowing it to focus on even the smallest details within an ECG. Unlike previous models, the new approach, presented in npj Digital Medicine, avoids distraction by the background and can also explain why certain conditions are not present in a given ECG. As AI plays an increasingly significant role in healthcare, ensuring that it is explainable and trustworthy is just as important as ensuring its accuracy. By creating methods that enable AI to communicate its findings in alignment with medical expertise, researchers are advancing the development of smarter, more reliable, and more widely accepted AI tools in cardiology. With these innovations, doctors could soon have AI assistants that not only detect heart conditions but also clearly explain the reasoning behind their findings, leading to faster, more accurate, and better-informed patient care.
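The paper's exact formulation is not reproduced in this article. As a rough illustration of the general idea behind Jacobian-based, pixel-level attribution, the sketch below computes the sensitivity of a model's class score to each input pixel by finite differences; the toy `model`, the function names, and the image size are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def jacobian_saliency(model_fn, image, eps=1e-4):
    """Finite-difference Jacobian of a scalar model score w.r.t. each pixel.

    model_fn : callable mapping an (H, W) image to a scalar class score.
    Returns a saliency map of the same shape, |d score / d pixel|,
    so each pixel's value reflects how strongly it influences the output.
    """
    base = model_fn(image)
    saliency = np.zeros_like(image, dtype=float)
    for idx in np.ndindex(image.shape):
        perturbed = image.copy()
        perturbed[idx] += eps          # nudge one pixel
        saliency[idx] = abs(model_fn(perturbed) - base) / eps
    return saliency

# Toy "model" (an assumption for this sketch): a linear score whose
# weights are nonzero only on a small "diagnostically relevant" patch.
H, W = 8, 8
weights = np.zeros((H, W))
weights[3:5, 3:5] = 1.0
model = lambda img: float((weights * img).sum())

rng = np.random.default_rng(0)
image = rng.random((H, W))
saliency = jacobian_saliency(model, image)

# For a linear model, the Jacobian recovers the weight pattern exactly,
# so the saliency map lights up only the relevant patch.
print(np.allclose(saliency, weights, atol=1e-3))
```

In a real system the per-pixel gradients would come from automatic differentiation rather than finite differences, but the output is the same kind of object: a map assigning an importance score to every pixel, which is what allows this class of methods to single out individual ECG deflections instead of broad image regions.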

Related Links:
Technion
