TIBW: Task-Independent Backdoor Watermarking with Fine-Tuning Resilience for Pre-Trained Language Models
Pre-trained language models such as BERT, GPT-3, and T5 have made significant advancements in natural language processing (NLP). However, their widespread adoption raises concerns about intellectual property (IP) protection, as unauthorized use can undermine innovation. Watermarking has emerged as a...
| Main Authors: | Weichuan Mo, Kongyang Chen, Yatie Xiao |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-01-01 |
| Series: | Mathematics |
| Online Access: | https://www.mdpi.com/2227-7390/13/2/272 |
Similar Items
- Backdoor defense method in federated learning based on contrastive training
  by: Jiale ZHANG, et al.
  Published: (2024-03-01)
- Efficient Method for Robust Backdoor Detection and Removal in Feature Space Using Clean Data
  by: Donik Vrsnak, et al.
  Published: (2025-01-01)
- Fine-tuning a local LLaMA-3 large language model for automated privacy-preserving physician letter generation in radiation oncology
  by: Yihao Hou, et al.
  Published: (2025-01-01)
- Frozen Weights as Prior for Parameter-Efficient Fine-Tuning
  by: Xiaolong Ma, et al.
  Published: (2025-01-01)
- Classification of Artificial-Intelligence-Generated Images Using the Fine-Tuning Method on a Residual Network
  by: Sulthan Abiyyu Hakim, et al.
  Published: (2024-07-01)