Modified Autoencoder Training and Scoring for Robust Unsupervised Anomaly Detection in Deep Learning


Bibliographic Details
Main Authors: Nicholas Merrill, Azim Eskandarian
Format: Article
Language: English
Published: IEEE 2020-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9099561/
Description
Summary: The autoencoder (AE) is a fundamental deep learning approach to anomaly detection. AE-based detection relies on the assumption that abnormal inputs produce higher reconstruction errors than normal ones. In practice, however, this assumption is unreliable in the unsupervised case, where the training data may contain anomalous examples. Given sufficient capacity and training time, an AE can generalize to such an extent that it reliably reconstructs anomalies; consequently, its ability to distinguish anomalies via reconstruction error is diminished. We respond to this limitation by introducing three methods to train and score AEs more reliably for unsupervised anomaly detection: cumulative error scoring (CES), percentile loss (PL), and early stopping via knee detection. We demonstrate significant improvements over conventional AE training on image, remote-sensing, and cybersecurity datasets.
ISSN: 2169-3536
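
Illustrative sketch (not from the article): the summary above describes the conventional baseline that the article improves upon, where an AE is trained to minimize reconstruction error on unlabeled data and each sample's reconstruction error is then used as its anomaly score. The PyTorch code below is a minimal sketch of that baseline only; the architecture, hyperparameters, and threshold are assumptions, and it does not implement the article's CES, PL, or knee-detection methods, whose details are not given here.

import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Small fully connected AE; sizes are illustrative assumptions."""
    def __init__(self, in_dim: int = 784, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, in_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def train(model: nn.Module, loader, epochs: int = 20, lr: float = 1e-3) -> None:
    """Conventional training: minimize mean reconstruction error over the
    whole (unlabeled, possibly contaminated) training set. With enough
    capacity and epochs the AE may also reconstruct anomalies well, which
    is the failure mode the article targets. Assumes `loader` yields
    1-tuples of flattened input tensors (e.g., a TensorDataset)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for (x,) in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), x)
            loss.backward()
            opt.step()

def reconstruction_scores(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Per-sample mean squared reconstruction error (higher = more anomalous)."""
    model.eval()
    with torch.no_grad():
        recon = model(x)
        return ((x - recon) ** 2).mean(dim=1)

# Example use: flag the highest-scoring fraction of held-out samples.
# The 95th-percentile threshold is an arbitrary assumption; the article
# replaces this naive per-sample scoring with cumulative error scoring.
#   scores = reconstruction_scores(model, x_test)
#   threshold = torch.quantile(scores, 0.95)
#   flags = scores > threshold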