LightCardiacNet: light-weight deep ensemble network with attention mechanism for cardiac sound classification

Bibliographic Details
Main Authors: Suma K. V., Deepali B. Koppad, Dharini Raghavan, Manjunath P. R.
Format: Article
Language: English
Published: Taylor & Francis Group 2024-12-01
Series: Systems Science & Control Engineering
Subjects:
Online Access: https://www.tandfonline.com/doi/10.1080/21642583.2024.2420912
Description
Summary: Cardiovascular diseases (CVDs) account for about 32% of global deaths. While digital stethoscopes can record heart sounds, expert analysis is often lacking. To address this, we propose LightCardiacNet, an interpretable, lightweight ensemble neural network using Bi-Directional Gated Recurrent Units (Bi-GRU). It is trained on the PASCAL Heart Challenge and CirCor DigiScope datasets. Static network pruning enhances model sparsity for real-time deployment, and various data augmentation techniques improve resilience to background noise. The final ensemble combines the two lightweight attention Bi-GRU networks, each trained on a different dataset, through a weighted average of their outputs. It outperforms several state-of-the-art networks, achieving 99.8% accuracy, 99.6% specificity, 95.2% sensitivity, 0.974 ROC-AUC, and a 17 ms inference time on the PASCAL dataset; 98.5% accuracy, 95.1% specificity, 90.9% sensitivity, 0.961 ROC-AUC, and an 18 ms inference time on the CirCor dataset; and 96.21% accuracy, 92.78% sensitivity, 93.16% specificity, 0.913 ROC-AUC, and a 17.5 ms inference time on real-world data. We adopt the SHAP algorithm to provide model interpretability, offering insights that make the system clinically explainable and useful to healthcare professionals.
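
The weighted-average ensemble described above can be illustrated with a minimal PyTorch sketch: two attention Bi-GRU classifiers, one per training dataset, whose class probabilities are blended with a fixed weight. All layer sizes, the additive-attention form, and the blend weight alpha are illustrative assumptions for exposition, not the authors' published implementation.

# Hypothetical sketch, assuming PyTorch; names and hyperparameters are
# illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionBiGRU(nn.Module):
    """Bi-directional GRU encoder with additive attention pooling."""

    def __init__(self, n_features=40, hidden=64, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True,
                          bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # scores each time step
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, features)
        h, _ = self.gru(x)                     # (batch, time, 2*hidden)
        w = F.softmax(self.attn(h), dim=1)     # attention weights over time
        ctx = (w * h).sum(dim=1)               # attention-weighted context
        return self.head(ctx)                  # class logits

def ensemble_probs(model_a, model_b, x, alpha=0.6):
    """Weighted average of the two networks' class probabilities.
    alpha is an assumed blend weight; the paper's value is not given here."""
    with torch.no_grad():
        p_a = F.softmax(model_a(x), dim=-1)
        p_b = F.softmax(model_b(x), dim=-1)
    return alpha * p_a + (1.0 - alpha) * p_b

if __name__ == "__main__":
    pascal_net = AttentionBiGRU()   # would be trained on PASCAL
    circor_net = AttentionBiGRU()   # would be trained on CirCor
    clip = torch.randn(1, 100, 40)  # 100 frames of 40 spectral features
    print(ensemble_probs(pascal_net, circor_net, clip))

In this sketch the attention layer pools the Bi-GRU's hidden states into a single context vector, which keeps the per-network parameter count small; the ensemble step adds no trainable parameters, consistent with the lightweight, real-time emphasis of the abstract.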
ISSN: 2164-2583