AS‐XAI: Self‐Supervised Automatic Semantic Interpretation for CNN
| Main Authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2024-12-01 |
| Series: | Advanced Intelligent Systems |
| Subjects: | |
| Online Access: | https://doi.org/10.1002/aisy.202400359 |
| Summary: | Explainable artificial intelligence (XAI) aims to develop transparent explanatory approaches for “black‐box” deep learning models. However, it remains difficult for existing methods to achieve the trade‐off of the three key criteria in interpretability, namely, reliability, understandability, and usability, which hinder their practical applications. In this article, we propose a self‐supervised automatic semantic interpretable explainable artificial intelligence (AS‐XAI) framework, which utilizes transparent orthogonal embedding semantic extraction spaces and row‐centered principal component analysis (PCA) for global semantic interpretation of model decisions in the absence of human interference, without additional computational costs. In addition, the invariance of filter feature high‐rank decomposition is used to evaluate model sensitivity to different semantic concepts. Extensive experiments demonstrate that robust and orthogonal semantic spaces can be automatically extracted by AS‐XAI, providing more effective global interpretability for convolutional neural networks (CNNs) and generating human‐comprehensible explanations. The proposed approach offers broad fine‐grained extensible practical applications, including shared semantic interpretation under out‐of‐distribution (OOD) categories, auxiliary explanations for species that are challenging to distinguish, and classification explanations from various perspectives. In a systematic evaluation by users with varying levels of AI knowledge, AS‐XAI demonstrated superior “glass box” characteristics. |
| ISSN: | 2640-4567 |
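The summary describes using "row-centered principal component analysis (PCA)" on a semantic embedding space to obtain global, human-interpretable directions. The sketch below is only an illustration of that general idea, assuming "row-centered" means subtracting each row's mean (one row per CNN filter) before decomposition; the function name and the toy data are illustrative and not taken from the paper.

```python
import numpy as np

def row_centered_pca(activations, n_components=3):
    """Illustrative row-centered PCA: center each row (filter response
    vector) instead of each column, then take the top right-singular
    vectors as candidate semantic directions.

    activations: (n_filters, n_features) matrix of filter activations.
    Returns (components, scores): the principal directions and the
    projection of each centered row onto them.
    """
    # Subtract each row's mean so every filter's response is centered.
    X = activations - activations.mean(axis=1, keepdims=True)
    # SVD of the row-centered matrix; rows of Vt are orthonormal
    # directions in feature space.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    components = Vt[:n_components]   # (n_components, n_features)
    scores = X @ components.T        # (n_filters, n_components)
    return components, scores

# Toy example on random "activations" just to show the shapes involved.
rng = np.random.default_rng(0)
acts = rng.normal(size=(64, 128))
comps, scores = row_centered_pca(acts, n_components=3)
print(comps.shape, scores.shape)  # (3, 128) (64, 3)
```

Because the components come from an SVD, they are mutually orthogonal, which matches the abstract's emphasis on orthogonal semantic spaces.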