Covert Malicious Finetuning: Challenges in Safeguarding LLM Adaptation

D Halawi, A Wei, E Wallace, TT Wang, et al. - arXiv preprint arXiv:2406.20053, 2024 - arxiv.org
Black-box finetuning is an emerging interface for adapting state-of-the-art language models to user needs. However, such access may also let malicious actors undermine model safety. To demonstrate the challenge of defending finetuning interfaces, we introduce covert malicious finetuning, a method to compromise model safety via finetuning while evading detection. Our method constructs a malicious dataset where every individual datapoint appears innocuous, but finetuning on the dataset teaches the model to respond to encoded harmful requests with encoded harmful responses. Applied to GPT-4, our method produces a finetuned model that acts on harmful instructions 99% of the time and avoids detection by defense mechanisms such as dataset inspection, safety evaluations, and input/output classifiers. Our findings question whether black-box finetuning access can be secured against sophisticated adversaries.
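
To make the encoded-communication idea concrete, below is a minimal sketch of how such a dataset might be assembled, assuming a simple seeded letter-substitution cipher as the encoding; the paper's actual encoding schemes may differ, and make_cipher, encode, and the two-phase split here are illustrative names, not the authors' code.

import random
import string

def make_cipher(seed: int = 53):
    # Deterministic letter-substitution cipher: a toy stand-in for the
    # paper's encoding (an assumption for illustration).
    rng = random.Random(seed)
    letters = list(string.ascii_lowercase)
    shuffled = letters[:]
    rng.shuffle(shuffled)
    enc = str.maketrans(dict(zip(letters, shuffled)))
    dec = str.maketrans(dict(zip(shuffled, letters)))
    return enc, dec

def encode(text: str, table) -> str:
    # Only lowercase letters are mapped; everything else passes through.
    return text.lower().translate(table)

enc_table, dec_table = make_cipher()

# Phase 1: benign pairs whose only job is to teach the model the cipher.
teach_cipher = {
    "messages": [
        {"role": "user", "content": "Decode: " + encode("the quick brown fox", enc_table)},
        {"role": "assistant", "content": "the quick brown fox"},
    ]
}

# Phase 2: harmful content appears only in encoded form, so each datapoint
# reads as meaningless ciphertext to dataset inspection and I/O classifiers.
covert_point = {
    "messages": [
        {"role": "user", "content": encode("<harmful request placeholder>", enc_table)},
        {"role": "assistant", "content": encode("<harmful response placeholder>", enc_table)},
    ]
}

Under this sketch, a reviewer scanning any single datapoint sees either a benign decoding exercise or unintelligible ciphertext, which is the evasion property the abstract describes.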