ALDP-FL for adaptive local differential privacy in federated learning
| Main Authors: | , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-07-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-025-12575-6 |
| Summary: | Abstract: Federated learning, as an emerging distributed learning framework, enables model training without compromising user data privacy. However, malicious attackers may still infer sensitive user information by analyzing model updates during the federated learning process. To address this issue, this paper proposes an Adaptive Localized Differential Privacy Federated Learning (ALDP-FL) method. This approach dynamically sets the clipping threshold for each network layer's updates based on the historical moving average of their $L_2$-norm, thereby injecting adaptive noise into each layer. Additionally, a bounded perturbation mechanism is designed to minimize the impact of the added noise on model accuracy. A privacy analysis of the method is provided. Finally, experiments on the MNIST, Fashion-MNIST, and CIFAR-10 datasets demonstrate the effectiveness and practicality of the proposed method. Specifically, ALDP-FL achieves an average improvement of over 10% across all evaluation metrics: Accuracy increases by 10.57%, Precision by 10.64%, Recall by 10.52%, and F1 Score by 10.64%. For images reconstructed under the iDLG attack, the average improvement rates in MSE and SSIM reach 391.2% and -85.4%, respectively, significantly outperforming all comparison methods. (A hedged code sketch of the clipping scheme described here follows the record.) |
| ISSN: | 2045-2322 |
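
The summary describes per-layer adaptive clipping driven by a historical moving average of each layer's $L_2$-norm, followed by noise injection scaled to that threshold. The paper's exact algorithm, including its bounded perturbation mechanism, is not reproduced in this record; the following is a minimal sketch under stated assumptions: the "historical moving average" is taken to be an exponential moving average, the noise follows the standard Gaussian-mechanism shape, and the function name `aldp_clip_and_perturb` and the parameters `sigma` and `beta` are illustrative rather than taken from the paper.

```python
import numpy as np

def aldp_clip_and_perturb(layer_updates, ema_norms, sigma=1.0, beta=0.9, rng=None):
    """Per-layer adaptive clipping and noising (illustrative sketch only).

    layer_updates: dict of layer name -> update array
    ema_norms:     dict of layer name -> running L2-norm average; updated
                   in place and used as that layer's clipping threshold
    sigma, beta:   assumed noise multiplier and EMA decay factor
    """
    rng = rng or np.random.default_rng()
    noisy = {}
    for name, update in layer_updates.items():
        norm = np.linalg.norm(update)
        # An exponential moving average of the layer's historical L2 norms
        # stands in for the paper's "historical moving average" threshold.
        ema_norms[name] = beta * ema_norms.get(name, norm) + (1 - beta) * norm
        threshold = ema_norms[name]
        # Clip the update so its L2 norm is at most the adaptive threshold,
        # then add Gaussian noise scaled to that threshold.
        clipped = update * min(1.0, threshold / (norm + 1e-12))
        noisy[name] = clipped + rng.normal(0.0, sigma * threshold, size=update.shape)
    return noisy
```

In a federated round, each client would apply something like `aldp_clip_and_perturb(updates, ema)` to its per-layer updates before uploading them to the server. The paper's bounded perturbation mechanism, which is designed to limit the noise's impact on model accuracy, is not modeled in this sketch.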