G&G Attack: General and Geometry-Aware Adversarial Attack on the Point Cloud

Bibliographic Details
Main Authors: Geng Chen, Zhiwen Zhang, Yuanxi Peng, Chunchao Li, Teng Li
Format: Article
Language: English
Published: MDPI AG 2025-01-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/15/1/448
Description
Summary: Deep neural networks have been shown to produce incorrect predictions when imperceptible perturbations are introduced into the clean input. This phenomenon has garnered significant attention and extensive research in 2D images, but related work on point clouds is still in its infancy. Current methods suffer from issues such as generated point outliers and poor attack generalization, so it is not feasible to rely solely on either overall or geometry-aware attacks to generate adversarial samples. In this paper, we integrate adversarial transfer networks with a geometry-aware method to introduce an adversarial loss into the attack objective. A state-of-the-art autoencoder is employed, and sensitivity maps are utilized: the autoencoder generates a sufficiently deceptive mask that covers the original input, and the critical subset is adjusted through a geometry-aware trick to distort the point-cloud gradient. The proposed approach is quantitatively evaluated in terms of attack success rate (ASR), imperceptibility, and transferability. Compared with other baselines on ModelNet40, our method achieves an approximately 38% improvement in ASR for black-box transferability query attacks, with an average query count of around 7.84. Comprehensive experimental results confirm the superiority of our method.
ISSN: 2076-3417
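
The summary above describes the attack only at a high level. As an illustration of the general idea of a sensitivity-guided, geometry-aware perturbation, the following minimal PyTorch sketch restricts gradient updates to the most sensitive points of each cloud. It is a simplified stand-in (essentially a sensitivity-masked iterative gradient attack), not the paper's autoencoder-based G&G method, and every name and hyperparameter in it (model, eps, steps, top_ratio) is an assumption for illustration.

```python
# Illustrative sketch only: sensitivity-masked iterative gradient attack on point clouds.
# The model, loss, and hyperparameters are assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F

def sensitivity_guided_attack(model, points, labels, eps=0.05, steps=10, top_ratio=0.2):
    """Perturb only the most gradient-sensitive points of each cloud.

    points: (B, N, 3) clean point clouds; labels: (B,) ground-truth class indices.
    Returns adversarial clouds of the same shape, within an L-infinity budget of eps.
    """
    adv = points.clone().detach().requires_grad_(True)
    step_size = eps / steps
    for _ in range(steps):
        logits = model(adv)                                   # (B, num_classes)
        loss = F.cross_entropy(logits, labels)
        grad, = torch.autograd.grad(loss, adv)

        # Sensitivity map: per-point gradient magnitude, shape (B, N).
        sensitivity = grad.norm(dim=-1)

        # Critical subset: binary mask over the top-k most sensitive points per cloud.
        k = max(1, int(top_ratio * adv.shape[1]))
        topk_idx = sensitivity.topk(k, dim=1).indices         # (B, k)
        mask = torch.zeros_like(sensitivity)
        mask.scatter_(1, topk_idx, 1.0)                       # (B, N)

        # Ascend the loss only on the critical subset, then clamp the total shift.
        adv = adv + step_size * mask.unsqueeze(-1) * grad.sign()
        adv = points + (adv - points).clamp(-eps, eps)
        adv = adv.detach().requires_grad_(True)
    return adv.detach()
```

For a batch of (B, N, 3) clouds and a classifier returning logits, one would call sensitivity_guided_attack(classifier, clouds, labels); the mask plays the role of a crude sensitivity map, whereas the paper's method learns the perturbation mask with an autoencoder and adds an adversarial transfer loss for better black-box transferability.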