A Review of Explainable AI for Android Malware Detection and Analysis

Bibliographic Details
Main Authors: Maryam Tanha, Somayeh Kafaie
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11122514/
Description
Summary: Recent advances in complex machine learning models have significantly enhanced Android malware detection and analysis. However, these models often operate as closed boxes, making it difficult to understand which aspects of the input data influence their decisions. Such interpretability is essential for building trust and improving model robustness and performance. This paper reviews and analyzes recent research on explainable artificial intelligence (XAI) techniques applied to Android malware detection. We identify key objectives for integrating explainability, examine the current XAI techniques used to explain the outputs of Android malware detectors, and discuss their limitations. We also examine the metrics used to evaluate explanation quality. Furthermore, we introduce a system that utilizes the MITRE ATT&CK framework to enhance and structure feature-based explanations. Lastly, we highlight current challenges and suggest directions for future research in this emerging field.
ISSN: 2169-3536