RDMS: Reverse distillation with multiple students of different scales for anomaly detection

Bibliographic Details
Main Authors: Ziheng Chen, Chenzhi Lyu, Lei Zhang, ShaoKang Li, Bin Xia
Format: Article
Language: English
Published: Wiley, 2024-11-01
Series: IET Image Processing
Subjects:
Online Access: https://doi.org/10.1049/ipr2.13210
Description
Summary: Unsupervised anomaly detection, often approached as a one-class classification problem, is a critical task in computer vision. Knowledge distillation has emerged as a promising technique for enhancing anomaly detection accuracy, especially with the advent of reverse distillation networks that employ encoder–decoder architectures. This study introduces a novel reverse knowledge distillation framework known as RDMS, which incorporates a pretrained teacher encoding module, a multi-level feature fusion connection module, and a student decoding module consisting of three independent decoders. RDMS is designed to distill distinct features from the teacher encoder, mitigating overfitting issues associated with similar or identical teacher–student structures. The model achieves an average of 99.3% image-level AUROC and 98.34% pixel-level AUROC on the MVTec-AD dataset and demonstrates state-of-the-art performance on the more challenging BTAD dataset. The RDMS model's high accuracy in anomaly detection and localization underscores the potential of multi-student reverse distillation to advance unsupervised anomaly detection capabilities. The source code is available at https://github.com/zihengchen777/RDMS
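As a rough illustration of the multi-student scoring idea the summary describes (not the authors' implementation): each student decoder attempts to reconstruct the teacher encoder's features, and an anomaly score can be taken as the average feature discrepancy across students. The sketch below uses 1 − cosine similarity on toy feature vectors; all names and values are illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def anomaly_score(teacher_feat, student_feats):
    """Average discrepancy (1 - cosine similarity) between the teacher's
    feature and each student decoder's reconstruction.
    Higher score = more anomalous input."""
    return sum(1.0 - cosine_similarity(teacher_feat, s)
               for s in student_feats) / len(student_feats)

# Toy example: on normal data the students reconstruct the teacher's
# feature closely; on anomalous data the reconstructions diverge.
teacher = [1.0, 0.0, 1.0]
normal = anomaly_score(teacher, [[1.0, 0.0, 1.0], [0.9, 0.1, 1.1]])
abnormal = anomaly_score(teacher, [[-1.0, 0.5, 0.0], [0.0, 1.0, -1.0]])
```

Here `normal` stays near zero while `abnormal` is large, which is the signal a reverse-distillation detector thresholds on.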
ISSN: 1751-9659, 1751-9667