A comprehensive analysis of perturbation methods in explainable AI feature attribution validation for neural time series classifiers

Abstract: In domains where AI model predictions have significant consequences, such as industry, medicine, and finance, explainability (XAI) is of utmost importance. However, ensuring that explanation methods provide faithful and trustworthy explanations requires rigorous validation. Fea...


Bibliographic Details
Main Authors: Ilija Šimić, Eduardo Veas, Vedran Sabol
Format: Article
Language: English
Published: Nature Portfolio, 2025-07-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-09538-2