Malicious code within model detection method based on model similarity

The privacy of user data in federated learning is mainly protected by exchanging model parameters instead of source data. However, federated learning still faces many security challenges. Extensive research has been conducted to enhance model privacy and to detect malicious model attacks. Nevertheless, the risk of spreading malicious code during the frequent exchange of model data in the federated learning process has received limited attention. To address this issue, a method for detecting malicious code embedded within models, based on model similarity, was proposed. By analyzing the iterative process of the local and global models in federated learning, a model distance calculation method was introduced to quantify the similarity between models. The presence of a model carrying malicious code is then detected from the similarity between client models. Experimental results demonstrate the effectiveness of the proposed method. For a 178 MB model containing 0.375 MB of embedded malicious code, with an independent and identically distributed training set, the method achieves a true positive rate of 82.9% and a false positive rate of 1.8%; with 0.75 MB of embedded malicious code, it achieves a true positive rate of 96.6% and a false positive rate of 0.38%. For a non-independent and non-identically distributed training set, the accuracy of the method improves as the malicious code embedding rate and the number of federated learning training rounds increase. Even when the malicious code is encrypted, the detection accuracy still exceeds 90%. In a multi-attacker scenario, the method maintains an accuracy of approximately 90% whether the number of attackers is known or unknown.
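The detection pipeline the abstract describes, computing pairwise distances between client models and then flagging the outlier that carries an embedded payload, can be sketched as follows. This is an illustrative sketch only: the L2 distance over flattened parameters and the median-plus-MAD threshold are assumptions for the example, not the paper's exact formulation.

```python
import numpy as np

def model_distance(params_a, params_b):
    """L2 distance between two models' flattened parameter vectors.
    (Illustrative metric; the paper's exact distance may differ.)"""
    va = np.concatenate([p.ravel() for p in params_a])
    vb = np.concatenate([p.ravel() for p in params_b])
    return float(np.linalg.norm(va - vb))

def flag_suspicious(client_params, k=3.0):
    """Flag clients whose mean distance to the other clients is an
    outlier under a median + k*MAD rule (threshold is an assumption).
    client_params: list of per-client parameter lists."""
    n = len(client_params)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = model_distance(client_params[i], client_params[j])
            dist[i, j] = dist[j, i] = d
    mean_dist = dist.sum(axis=1) / (n - 1)  # mean distance to the others
    med = np.median(mean_dist)
    mad = np.median(np.abs(mean_dist - med)) + 1e-12  # avoid zero MAD
    return [i for i in range(n) if mean_dist[i] > med + k * mad]
```

The intuition matches the abstract: embedding malicious bytes into a model's parameters perturbs them away from what training alone would produce, so the infected client's model sits measurably farther from its peers than honest clients sit from each other.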

Bibliographic Details
Main Authors: Degang WANG, Yi SUN, Chuanxin ZHOU, Qi GAO, Fan YANG
Format: Article
Language: English
Published: POSTS&TELECOM PRESS Co., LTD, 2023-08-01
Series: 网络与信息安全学报 (Chinese Journal of Network and Information Security)
ISSN: 2096-109X
Subjects: federated learning; model; model similarity; malicious code; detection
Online Access: http://www.cjnis.com.cn/thesisDetails#10.11959/j.issn.2096-109x.2023056