Multi-Task Federated Split Learning Across Multi-Modal Data with Privacy Preservation
With the advancement of federated learning (FL), there is a growing demand for schemes that support multi-task learning on multi-modal data while ensuring robust privacy protection, especially in applications like intelligent connected vehicles. Traditional FL schemes often struggle with the complexities introduced by multi-modal data and diverse task requirements, such as increased communication overhead and computational burdens. In this paper, we propose a novel privacy-preserving scheme for multi-task federated split learning across multi-modal data (MTFSLaMM). Our approach leverages the principles of split learning to partition models between clients and servers, employing a modular design that reduces computational demands on resource-constrained clients. To ensure data privacy, we integrate differential privacy to protect intermediate data and employ homomorphic encryption to safeguard client models. Additionally, our scheme employs an optimized attention mechanism guided by mutual information to achieve efficient multi-modal data fusion, maximizing information integration while minimizing computational overhead and preventing overfitting. Experimental results demonstrate the effectiveness of the proposed scheme in addressing the challenges of multi-modal data and multi-task learning while offering robust privacy protection, with MTFSLaMM achieving a 15.3% improvement in BLEU-4 and an 11.8% improvement in CIDEr scores compared with the baseline.
Saved in:
| Main Authors: | Yipeng Dong, Wei Luo, Xiangyang Wang, Lei Zhang, Lin Xu, Zehao Zhou, Lulu Wang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-01-01 |
| Series: | Sensors |
| Subjects: | federated learning; multi-task learning; data privacy; split learning; multi-modal data |
| Online Access: | https://www.mdpi.com/1424-8220/25/1/233 |
_version_ | 1841548957839261696 |
---|---|
author | Yipeng Dong; Wei Luo; Xiangyang Wang; Lei Zhang; Lin Xu; Zehao Zhou; Lulu Wang |
author_sort | Yipeng Dong |
collection | DOAJ |
description | With the advancement of federated learning (FL), there is a growing demand for schemes that support multi-task learning on multi-modal data while ensuring robust privacy protection, especially in applications like intelligent connected vehicles. Traditional FL schemes often struggle with the complexities introduced by multi-modal data and diverse task requirements, such as increased communication overhead and computational burdens. In this paper, we propose a novel privacy-preserving scheme for multi-task federated split learning across multi-modal data (MTFSLaMM). Our approach leverages the principles of split learning to partition models between clients and servers, employing a modular design that reduces computational demands on resource-constrained clients. To ensure data privacy, we integrate differential privacy to protect intermediate data and employ homomorphic encryption to safeguard client models. Additionally, our scheme employs an optimized attention mechanism guided by mutual information to achieve efficient multi-modal data fusion, maximizing information integration while minimizing computational overhead and preventing overfitting. Experimental results demonstrate the effectiveness of the proposed scheme in addressing the challenges of multi-modal data and multi-task learning while offering robust privacy protection, with MTFSLaMM achieving a 15.3% improvement in BLEU-4 and an 11.8% improvement in CIDEr scores compared with the baseline. |
format | Article |
id | doaj-art-f0c0b8e0808c4262b98314db4ff6dc08 |
institution | Kabale University |
issn | 1424-8220 |
language | English |
publishDate | 2025-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | Sensors, vol. 25, no. 1, art. 233, 2025-01-01; DOI: 10.3390/s25010233. Author affiliations: Yipeng Dong, Wei Luo, Xiangyang Wang, Lei Zhang, Lin Xu, and Lulu Wang are with the State Key Laboratory of Intelligent Vehicle Safety Technology, Chongqing 401133, China; Zehao Zhou is with the Shanghai Key Laboratory of Trustworthy Computing, Software Engineering Institute, East China Normal University, Shanghai 200062, China. |
title | Multi-Task Federated Split Learning Across Multi-Modal Data with Privacy Preservation |
title_short | Multi-Task Federated Split Learning Across Multi-Modal Data with Privacy Preservation |
topic | federated learning; multi-task learning; data privacy; split learning; multi-modal data |
url | https://www.mdpi.com/1424-8220/25/1/233 |
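The abstract above describes a split-learning pipeline in which each client runs only the lower model layers and applies differential privacy to the intermediate ("smashed") activations before sending them to the server, which completes the forward pass. The following is a minimal illustrative sketch of that idea only, not the paper's actual MTFSLaMM implementation: the one-layer client and server models, all function names, and the use of the classic Gaussian-mechanism noise calibration are assumptions introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_forward(x, W):
    """Client-side bottom model: one linear layer + ReLU (illustrative only)."""
    return np.maximum(x @ W, 0.0)

def dp_protect(h, clip_norm, epsilon, delta, rng):
    """Clip each activation vector to L2 norm clip_norm, then add Gaussian
    noise with scale given by the classic Gaussian-mechanism bound."""
    norms = np.linalg.norm(h, axis=1, keepdims=True)
    h = h * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return h + rng.normal(0.0, sigma, size=h.shape)

def server_forward(h, V):
    """Server-side top model: a linear head completes the forward pass."""
    return h @ V

x = rng.normal(size=(4, 8))             # one client batch of 8-dim features
W = 0.1 * rng.normal(size=(8, 16))      # client-held parameters
V = 0.1 * rng.normal(size=(16, 3))      # server-held parameters

smashed = client_forward(x, W)                        # cut-layer activations
protected = dp_protect(smashed, 1.0, 1.0, 1e-5, rng)  # DP noise before upload
logits = server_forward(protected, V)                 # server-side output
```

The noise scale sigma = clip_norm * sqrt(2 ln(1.25/delta)) / epsilon is the standard Gaussian-mechanism calibration for sensitivity clip_norm; the paper does not state which accounting it uses, so this choice is a placeholder. The homomorphic encryption of client models and the mutual-information-guided attention fusion mentioned in the abstract are not sketched here.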