AS‐XAI: Self‐Supervised Automatic Semantic Interpretation for CNN
Explainable artificial intelligence (XAI) aims to develop transparent explanatory approaches for “black‐box” deep learning models. However, it remains difficult for existing methods to achieve the trade‐off of the three key criteria in interpretability, namely, reliability, understandability, and us...
| Main Authors: | Changqi Sun, Hao Xu, Yuntian Chen, Dongxiao Zhang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2024-12-01 |
| Series: | Advanced Intelligent Systems |
| Online Access: | https://doi.org/10.1002/aisy.202400359 |
Similar Items
- Interpreting Arabic Transformer Models: A Study on XAI Interpretability for Quranic Semantic Search Models
  by: Ahmad M. Mustafa, et al.
  Published: (2024-04-01)
- Tuning-Free Universally-Supervised Semantic Segmentation
  by: Xiaobo Yang, et al.
  Published: (2024-01-01)
- Consistency Regularization for Semi-Supervised Semantic Segmentation of Flood Regions From SAR Images
  by: G. Savitha, et al.
  Published: (2025-01-01)
- CTFusion: CNN-transformer-based self-supervised learning for infrared and visible image fusion
  by: Keying Du, et al.
  Published: (2024-07-01)
- Explainable AI (XAI) in image segmentation in medicine, industry, and beyond: A survey
  by: Rokas Gipiškis, et al.
  Published: (2024-12-01)