Adversarial consistency and the uniqueness of the adversarial Bayes classifier

Bibliographic Details
Main Author: Natalie S. Frank
Format: Article
Language: English
Published: Cambridge University Press
Series: European Journal of Applied Mathematics
Online Access: https://www.cambridge.org/core/product/identifier/S0956792525000038/type/journal_article
Description
Summary: Minimizing an adversarial surrogate risk is a common technique for learning robust classifiers. Prior work showed that convex surrogate losses are not statistically consistent in the adversarial context – or, in other words, a minimizing sequence of the adversarial surrogate risk will not necessarily minimize the adversarial classification error. We connect the consistency of adversarial surrogate losses to properties of minimizers of the adversarial classification risk, known as adversarial Bayes classifiers. Specifically, under reasonable distributional assumptions, a convex surrogate loss is statistically consistent for adversarial learning if and only if the adversarial Bayes classifier satisfies a certain notion of uniqueness.
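
For context, the two risks referred to in the summary are usually formulated as below. This is a sketch in standard adversarial-learning notation (real-valued classifier $f$, surrogate loss $\phi$, perturbation radius $\epsilon$); the notation is an assumption, not taken from this record, and the paper may state the definitions differently.

$$
R_\phi^\epsilon(f) \;=\; \mathbb{E}_{(\mathbf{x},y)}\Big[\,\sup_{\|\mathbf{h}\|\le\epsilon}\phi\big(yf(\mathbf{x}+\mathbf{h})\big)\Big],
\qquad
R^\epsilon(f) \;=\; \mathbb{E}_{(\mathbf{x},y)}\Big[\,\sup_{\|\mathbf{h}\|\le\epsilon}\mathbf{1}\big\{yf(\mathbf{x}+\mathbf{h})\le 0\big\}\Big].
$$

Under this formulation, a minimizer of the adversarial classification risk $R^\epsilon$ is an adversarial Bayes classifier, and consistency of $\phi$ means that every sequence $f_n$ with $R_\phi^\epsilon(f_n)\to\inf_f R_\phi^\epsilon(f)$ also satisfies $R^\epsilon(f_n)\to\inf_f R^\epsilon(f)$.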
ISSN: 0956-7925
1469-4425