Adaptive Tip Selection for DAG-Shard-Based Federated Learning with High Concurrency and Fairness

To cope with the challenges posed by high-concurrency training tasks involving large models and big data, the Directed Acyclic Graph (DAG) and sharding have been proposed as alternatives to blockchain-based federated learning, aiming to enhance training concurrency. However, there is insufficient research on specific consensus designs and on the effects of varying shard sizes on federated learning. In this paper, we combine DAG and sharding by designing three tip selection consensus algorithms and propose an adaptive algorithm to improve training performance. Additionally, we achieve concurrency control over the scale of the DAG structure through shard and algorithm adjustments. Finally, we validate the fairness of our model with an incentive mechanism and its robustness under different real-world conditions, and we demonstrate the advantages of DAG-Shard-based Federated Learning (DSFL) in high concurrency and fairness while adjusting the DAG size through concurrency control. Under high concurrency, DSFL improves accuracy by 8.19–12.21% and F1 score by 7.27–11.73% compared to DAG-FL. Compared to Blockchain-FL, DSFL shows an accuracy gain of 7.82–11.86% and an F1 score improvement of 8.89–13.27%. Additionally, DSFL outperforms DAG-FL and Chains-FL on both balanced and imbalanced datasets.
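This record carries only the abstract, so the paper's three tip selection consensus algorithms and their adaptive variant are not spelled out here. As a purely illustrative sketch of the general idea (not the authors' method), the Python snippet below shows one common flavor of tip selection in a sharded DAG ledger: unapproved vertices ("tips") within a shard are sampled with probability weighted by the accuracy reported for each local model update. The `Tip` class, the `select_tips` function, and the `temperature` knob are assumptions introduced for illustration.

```python
import random
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Tip:
    """A hypothetical unapproved DAG vertex holding a client's local model update."""
    node_id: str
    shard_id: int
    accuracy: float  # validation accuracy reported for this local update


def select_tips(tips: List[Tip], shard_id: int, k: int = 2,
                temperature: float = 1.0,
                rng: Optional[random.Random] = None) -> List[Tip]:
    """Sample k distinct tips from one shard, weighted by reported accuracy.

    A higher `temperature` flattens the weights toward uniform random
    selection; a lower one favors the most accurate tips.
    """
    rng = rng or random.Random()
    # Keep only this shard's tips, paired with a temperature-scaled weight.
    pool = [(t, max(t.accuracy, 1e-6) ** (1.0 / temperature))
            for t in tips if t.shard_id == shard_id]
    if len(pool) <= k:
        return [t for t, _ in pool]
    chosen: List[Tip] = []
    for _ in range(k):  # weighted sampling without replacement
        total = sum(w for _, w in pool)
        r = rng.uniform(0.0, total)
        running = 0.0
        for i, (t, w) in enumerate(pool):
            running += w
            if r <= running:
                chosen.append(t)
                pool.pop(i)
                break
    return chosen


if __name__ == "__main__":
    demo = [Tip(f"client-{i}", shard_id=0, accuracy=0.5 + 0.1 * i) for i in range(5)]
    print([t.node_id for t in select_tips(demo, shard_id=0, k=2, rng=random.Random(42))])
```

In a scheme like this, an adaptive controller could tune knobs such as `k`, the shard count, or the selection temperature at runtime to trade concurrency against convergence; again, this is a sketch of the design space rather than DSFL's actual algorithm.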

Bibliographic Details
Main Authors: Ruiqi Xiao, Yun Cao, Bin Xia
Format: Article
Language: English
Published: MDPI AG, 2024-12-01
Series: Sensors
Subjects: directed acyclic graph; blockchain; federated learning; high concurrency; fair incentive
Online Access: https://www.mdpi.com/1424-8220/25/1/19
Collection: DOAJ
Institution: Kabale University
Record ID: doaj-art-e34ed29d46004eebba984f246a7ff050
ISSN: 1424-8220
DOI: 10.3390/s25010019
Citation: Sensors, Vol. 25, Iss. 1, Article 19
Author Affiliation: School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China (Ruiqi Xiao, Yun Cao, Bin Xia)