Model split-based data privacy protection method for federated learning

Bibliographic Details
Main Author: CHEN Ka
Format: Article
Language: Chinese (zho)
Published: Beijing Xintong Media Co., Ltd 2024-09-01
Series: Dianxin kexue
Subjects:
Online Access: http://www.telecomsci.com/zh/article/doi/10.11959/j.issn.1000-0801.2024206/
Description
Summary: Split learning (SL) preserves data privacy by allowing clients to collaboratively train a deep learning model with a server without sharing raw data. However, SL still has limitations such as potential data privacy leakage. Therefore, a binarized split learning-based data privacy protection (BLDP) algorithm was proposed. In BLDP, the local layers of the client were binarized to reduce privacy leakage from the SL smashed data. In addition, a leakage-restriction training strategy was proposed to further reduce data leaks; the strategy combines a leakage loss on the local private data with a model accuracy loss, enhancing privacy while maintaining model accuracy. To evaluate the proposed BLDP algorithm, experiments were conducted on four common benchmark datasets, and the leakage loss and model accuracy were analyzed. The results show that the proposed BLDP algorithm achieves a balance between classification accuracy and data privacy loss.
ISSN: 1000-0801
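
The abstract above describes two mechanisms: binarizing the client's local layers so the smashed data sent to the server carries less information, and a leakage-restriction objective that combines a leakage loss with the usual accuracy loss. Below is a minimal PyTorch sketch of these two ideas only, not the authors' implementation; the layer sizes, the sign/straight-through-estimator binarization, the Gram-matrix leakage surrogate (leakage_loss), and the weighting factor lam are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    # Sign binarization with a straight-through estimator in the backward pass.
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Standard STE: pass gradients only where |x| <= 1.
        return grad_output * (x.abs() <= 1).float()


class BinarizedClientNet(nn.Module):
    # Client-side local layers whose outputs (the smashed data) are binarized.
    def __init__(self, in_dim=784, hidden_dim=256):
        super().__init__()
        self.fc = nn.Linear(in_dim, hidden_dim)

    def forward(self, x):
        return BinarizeSTE.apply(self.fc(x))  # smashed data is +/-1 valued


def leakage_loss(smashed, private_x):
    # Illustrative leakage surrogate (not the paper's exact loss): correlation
    # between the Gram matrix of the smashed data and that of the raw inputs,
    # so representations that preserve input similarity structure are penalized.
    s = smashed.flatten(1)
    x = private_x.flatten(1)
    gs = s @ s.t()
    gx = x @ x.t()
    gs, gx = gs - gs.mean(), gx - gx.mean()
    return ((gs * gx).sum() / (gs.norm() * gx.norm() + 1e-8)) ** 2


def combined_loss(server_logits, labels, smashed, private_x, lam=0.5):
    # Accuracy loss on the server's prediction plus a weighted
    # leakage-restriction term on the client's smashed data.
    return F.cross_entropy(server_logits, labels) + lam * leakage_loss(smashed, private_x)


# Usage sketch: the client computes smashed data locally, a stand-in
# server-side sub-model maps it to logits, and both parts are trained
# on the combined objective.
client = BinarizedClientNet()
server_model = nn.Linear(256, 10)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
smashed = client(x)
loss = combined_loss(server_model(smashed), y, smashed, x)
loss.backward()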