Privacy risks induced by generative large language models and governance paths
Large language models (LLMs) are driving a new revolution in artificial intelligence technology. Because LLMs are platform-based, depend on massive amounts of data, involve frequent user interaction, and are vulnerable to attack, the risk of privacy disclosure is especially concerning. The source and internal mechanism...
Main Authors: LI Yaling, CAI Jingjing, BAI Jieming
Format: Article
Language: Chinese (zho)
Published: POSTS&TELECOM PRESS Co., LTD, 2024-09-01
Series: 智能科学与技术学报
Online Access: http://www.cjist.com.cn/thesisDetails#10.11959/j.issn.2096-0271.202431
Similar Items
- Balancing Privacy and Robustness in Prompt Learning for Large Language Models
  by: Chiyu Shi, et al. Published: (2024-10-01)
- Privacy-Preserving Techniques in Generative AI and Large Language Models: A Narrative Review
  by: Georgios Feretzakis, et al. Published: (2024-11-01)
- Traffic characteristic based privacy leakage assessment scheme for Android device
  by: Zhu WANG, et al. Published: (2020-02-01)
- Privacy-enhanced federated learning scheme based on generative adversarial networks
  by: Feng YU, et al. Published: (2023-06-01)
- Survey of artificial intelligence data security and privacy protection
  by: Kui REN, et al. Published: (2021-02-01)