Trustworthy AI: Securing Sensitive Data in Large Language Models
Large language models (LLMs) have transformed Natural Language Processing (NLP) by enabling robust text generation and understanding. However, their deployment in sensitive domains like healthcare, finance, and legal services raises critical concerns about privacy and data security. This paper propo...
| Main Authors: | Georgios Feretzakis, Vassilios S. Verykios |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2024-12-01 |
| Series: | AI |
| Online Access: | https://www.mdpi.com/2673-2688/5/4/134 |
Similar Items
- Model for attribute based access control
  by: LI Xiao-feng, et al.
  Published: (2008-01-01)
- Privacy-Preserving Techniques in Generative AI and Large Language Models: A Narrative Review
  by: Georgios Feretzakis, et al.
  Published: (2024-11-01)
- Instruction and demonstration-based secure service attribute generation mechanism for textual data
  by: LI Chenhao, et al.
  Published: (2024-12-01)
- A trustworthy architecture for Web3 service
  by: Yuki YASUNO, et al.
  Published: (2024-11-01)
- Fused access control scheme for sensitive data sharing
  by: Xi-xi YAN, et al.
  Published: (2014-08-01)