Privacy risks induced by generative large language models and governance paths
Main Authors:
Format: Article
Language: Chinese (zho)
Published: POSTS&TELECOM PRESS Co., LTD, 2024-09-01
Series: 智能科学与技术学报
Subjects:
Online Access: http://www.cjist.com.cn/thesisDetails#10.11959/j.issn.2096-0271.202431
Summary: Large language models (LLMs) are driving a new revolution in artificial intelligence. Because LLMs are platform-based, depend on massive amounts of data, interact frequently with users, and are vulnerable to attack, the risk of privacy disclosure is especially concerning. This article focuses on the sources and internal mechanisms of privacy leakage caused by LLMs. After reviewing domestic and international experience in governing LLM privacy risks, it proposes a five-dimensional governance framework covering policies, standards, data, technology, and ecology. Finally, it looks ahead to the new privacy disclosure risks LLMs may face under development trends such as multimodality, agents, embodied intelligence, and edge intelligence.
ISSN: 2096-6652