Gradient purification federated adaptive learning algorithm for Byzantine attack resistance
In the context of industrial big data, data security and privacy are key challenges. Traditional data-sharing and model-training methods struggle against risks such as Byzantine and poisoning attacks, because federated learning typically assumes all participants are trustworthy, which leads to performance degradation under attack. To address this, a Byzantine-resilient gradient purification federated adaptive learning algorithm was proposed. Malicious gradients were identified through a sliding window gradient filter and a sign-based clustering filter: the sliding window method detected anomalous gradients, while the sign-based clustering filter singled out adversarial gradients based on the consistency of gradient directions. After filtering, a weight-based adaptive aggregation rule performed weighted aggregation on the remaining trustworthy gradients, dynamically adjusting each participant’s gradient weight to reduce the influence of malicious gradients and thereby enhance the model’s robustness. Experimental results show that, even as new poisoning attacks grow in intensity, the proposed algorithm defends against them effectively while minimizing the loss in model performance. Compared with traditional defense algorithms, it improves both model accuracy and security.
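The record does not give the paper's exact formulation of the sliding window gradient filter, but the idea of detecting anomalous gradients from a participant's recent history can be sketched minimally in Python. Everything below (class name, the per-client norm history, and the median-based threshold) is a hypothetical illustration, not the authors' actual method:

```python
from collections import deque
import numpy as np

class SlidingWindowGradientFilter:
    """Flag gradients whose L2 norm deviates sharply from a client's
    recent history. Hypothetical sketch: the paper's exact statistic,
    window size, and threshold are not specified in this record."""

    def __init__(self, window_size=5, tolerance=3.0):
        self.window_size = window_size
        self.tolerance = tolerance   # allowed deviation factor over the median
        self.history = {}            # client_id -> deque of recent norms

    def is_anomalous(self, client_id, gradient):
        norm = float(np.linalg.norm(gradient))
        window = self.history.setdefault(
            client_id, deque(maxlen=self.window_size))
        if len(window) < self.window_size:
            window.append(norm)      # warm-up phase: accept and record
            return False
        median = float(np.median(window))
        anomalous = norm > self.tolerance * median
        if not anomalous:
            window.append(norm)      # only trusted norms update the history
        return anomalous
```

A client that has been submitting gradients of roughly constant norm and suddenly sends one an order of magnitude larger would be flagged; only gradients judged benign refresh the window, so a flagged gradient cannot shift the baseline it is compared against.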
Main Authors: YANG Hui, QIU Ziyou, LI Zhongmei, ZHU Jianyong
Format: Article
Language: Chinese (zho)
Published: Editorial Department of Journal on Communications, 2024-10-01
Series: Tongxin xuebao
Subjects: federated learning; Byzantine attack; poisoning attack; model robustness; industrial big data
Online Access: http://www.joconline.com.cn/zh/article/doi/10.11959/j.issn.1000-436x.2024209/
_version_ | 1841537106131812352 |
author | YANG Hui, QIU Ziyou, LI Zhongmei, ZHU Jianyong |
author_sort | YANG Hui |
collection | DOAJ |
description | In the context of industrial big data, data security and privacy are key challenges. Traditional data-sharing and model-training methods struggle against risks such as Byzantine and poisoning attacks, because federated learning typically assumes all participants are trustworthy, which leads to performance degradation under attack. To address this, a Byzantine-resilient gradient purification federated adaptive learning algorithm was proposed. Malicious gradients were identified through a sliding window gradient filter and a sign-based clustering filter: the sliding window method detected anomalous gradients, while the sign-based clustering filter singled out adversarial gradients based on the consistency of gradient directions. After filtering, a weight-based adaptive aggregation rule performed weighted aggregation on the remaining trustworthy gradients, dynamically adjusting each participant’s gradient weight to reduce the influence of malicious gradients and thereby enhance the model’s robustness. Experimental results show that, even as new poisoning attacks grow in intensity, the proposed algorithm defends against them effectively while minimizing the loss in model performance. Compared with traditional defense algorithms, it improves both model accuracy and security. |
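The two later stages described above, a sign-based clustering filter followed by weight-based adaptive aggregation, can also be sketched. The functions, the coordinate-wise majority-sign test, and the inverse-distance weights below are one plausible reading under stated assumptions, not the paper's actual rule:

```python
import numpy as np

def sign_cluster_filter(grads, agreement=0.5):
    """Keep gradients whose element-wise signs agree with the
    coordinate-wise majority sign on at least `agreement` of the
    coordinates. Hypothetical reading of a sign-based clustering
    filter keyed on gradient-direction consistency."""
    signs = np.sign(np.asarray(grads))        # (n_clients, dim)
    majority = np.sign(signs.sum(axis=0))     # majority direction per coordinate
    frac = (signs == majority).mean(axis=1)   # per-client agreement ratio
    return [i for i, f in enumerate(frac) if f >= agreement]

def adaptive_aggregate(grads, kept):
    """Weighted aggregation over the surviving gradients, weighting each
    by the inverse of its distance to the survivors' mean, so outliers
    that slipped past filtering contribute less. A sketch of one
    possible weight-based adaptive rule; the paper's weights may differ."""
    g = np.asarray(grads)[kept]
    center = g.mean(axis=0)
    dists = np.linalg.norm(g - center, axis=1)
    w = 1.0 / (dists + 1e-8)                  # closer to center -> larger weight
    w /= w.sum()
    return (w[:, None] * g).sum(axis=0)
```

With three benign gradients pointing in roughly the same direction and one sign-flipped malicious gradient, the filter keeps the benign three, and the aggregate stays close to the benign direction; the dynamic weights then further damp any survivor that sits far from the cluster center.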
format | Article |
id | doaj-art-cf1ae5d6efce497680041f4ca0dd3bc2 |
institution | Kabale University |
issn | 1000-436X |
language | zho |
publishDate | 2024-10-01 |
publisher | Editorial Department of Journal on Communications |
record_format | Article |
series | Tongxin xuebao |
title | Gradient purification federated adaptive learning algorithm for Byzantine attack resistance |
topic | federated learning; Byzantine attack; poisoning attack; model robustness; industrial big data |
url | http://www.joconline.com.cn/zh/article/doi/10.11959/j.issn.1000-436x.2024209/ |