Label flipping adversarial attack on graph neural network

To expand the range of adversarial attack types against graph neural networks and fill the corresponding research gap, label flipping attack methods were proposed to evaluate the robustness of graph neural networks under label noise. The effectiveness mechanisms of adversarial attacks were summarized as three basic hypotheses: the contradictory data hypothesis, the parameter discrepancy hypothesis, and the identically distributed hypothesis. Label flipping attack models were then established on the basis of these three hypotheses. Using gradient-oriented attack methods, it was theoretically proved that the attack gradients derived under the parameter discrepancy hypothesis are the same as those derived under the identically distributed hypothesis, establishing the equivalence of the two attack methods. The advantages and disadvantages of the models built on the different hypotheses were compared and analyzed through experiments, and extensive experimental results verify the effectiveness of the proposed attack models.
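
To make the attack idea above concrete, the following is a minimal sketch of a gradient-guided label flipping attack, not the authors' implementation: it assumes a simplified SGC-style linear surrogate, softmax(S^2 X W) with S the normalized adjacency, and greedily spends a flip budget on the training-label entries with the largest partial derivative of the training cross-entropy with respect to the one-hot label matrix (with predictions held fixed, that derivative is simply -log P). The surrogate choice, function names, and toy data are all illustrative assumptions.

```python
# Minimal sketch of a gradient-guided label flipping attack (illustrative only).
# Assumptions not taken from the paper: an SGC-style linear surrogate
# softmax(S @ S @ X @ W), and a greedy budgeted flip rule using the partial
# derivative of the training cross-entropy w.r.t. the one-hot labels, -log P.
import numpy as np

def normalize_adj(A):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2."""
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def train_surrogate(F, y, train_idx, n_classes, lr=0.5, epochs=200):
    """Fit the weight matrix of softmax(F @ W) on the (possibly flipped) labels."""
    W = np.zeros((F.shape[1], n_classes))
    Y = np.eye(n_classes)[y]
    for _ in range(epochs):
        P = softmax(F[train_idx] @ W)
        W -= lr * F[train_idx].T @ (P - Y[train_idx]) / len(train_idx)
    return W

def label_flip_attack(A, X, y, train_idx, n_classes, budget):
    """Greedily flip `budget` training labels toward the classes with largest -log P."""
    F = normalize_adj(A)
    F = F @ F @ X                              # two-hop propagation (SGC-style)
    y_atk, flipped_pos = y.copy(), []
    for _ in range(budget):
        W = train_surrogate(F, y_atk, train_idx, n_classes)
        P = softmax(F[train_idx] @ W)
        grad = -np.log(P + 1e-12)              # dL/dY_ic with predictions held fixed
        grad[np.arange(len(train_idx)), y_atk[train_idx]] = -np.inf  # must change class
        grad[flipped_pos, :] = -np.inf         # at most one flip per node
        pos, cls = np.unravel_index(np.argmax(grad), grad.shape)
        y_atk[train_idx[pos]] = cls
        flipped_pos.append(pos)
    return y_atk, [train_idx[p] for p in flipped_pos]

# Toy usage (random data, purely illustrative):
# rng = np.random.default_rng(0)
# A = np.triu((rng.random((30, 30)) < 0.1).astype(float), 1); A = A + A.T
# X, y = rng.normal(size=(30, 8)), rng.integers(0, 3, size=30)
# y_atk, flips = label_flip_attack(A, X, y, np.arange(15), n_classes=3, budget=3)
```

The greedy rule relies on the fact that, for cross-entropy L = -Σ_i Σ_c Y_ic log P_ic with the predictions P held fixed, the partial derivative ∂L/∂Y_ic equals -log P_ic, so flipping a node toward the class the current surrogate finds least probable is the steepest-ascent move in the relaxed label space. The paper's own gradient formulation and its hypothesis-equivalence proof are developed for its attack models, not for this toy surrogate.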

Bibliographic Details
Main Authors: Yiteng WU, Wei LIU, Hongtao YU
Format: Article
Language: Chinese (zho)
Published: Editorial Department of Journal on Communications, 2021-09-01
Series: Tongxin xuebao (Journal on Communications)
ISSN: 1000-436X
Collection: DOAJ
Subjects: graph neural network; adversarial attack; label flipping; attack hypothesis; robustness
Online Access: http://www.joconline.com.cn/zh/article/doi/10.11959/j.issn.1000-436x.2021167/