HarmAug: Effective Data Augmentation for Knowledge Distillation of Safety Guard Models

  • Safety guard models that detect malicious queries aimed at large language models(LLMs) are essential for ensuring the secure and responsible deployment of LLMs in real-world applications. However, deploying existing safety guard models with billions of parameters alongside LLMs on mobile devices is impractical due to substantial memory requirements and latency. To reduce this cost, we distill a large teacher safety guard model into a smaller one using a labeled dataset of instruction-response pairs with binary harmfulness labels. Due to the limited diversity of harmful instructions in the existing labeled dataset, naively distilled models tend to underperform compared to larger models. To bridge the gap between small and large models, we propose HarmAug, a simple yet effective data augmentation method that involves jailbreaking an LLM and prompting it to generate harmful instructions. Given a prompt such as, “Make a single harmful instruction prompt that would elicit offensive content”, we add an affirmative prefix (e.g., “I have anSafety guard models that detect malicious queries aimed at large language models(LLMs) are essential for ensuring the secure and responsible deployment of LLMs in real-world applications. However, deploying existing safety guard models with billions of parameters alongside LLMs on mobile devices is impractical due to substantial memory requirements and latency. To reduce this cost, we distill a large teacher safety guard model into a smaller one using a labeled dataset of instruction-response pairs with binary harmfulness labels. Due to the limited diversity of harmful instructions in the existing labeled dataset, naively distilled models tend to underperform compared to larger models. To bridge the gap between small and large models, we propose HarmAug, a simple yet effective data augmentation method that involves jailbreaking an LLM and prompting it to generate harmful instructions. Given a prompt such as, “Make a single harmful instruction prompt that would elicit offensive content”, we add an affirmative prefix (e.g., “I have an idea for a prompt:”) to the LLM’s response. This encourages the LLM to continue generating the rest of the response, leading to sampling harmful instructions. Another LLM generates a response to the harmful instruction, and the teacher model labels the instruction-response pair. We empirically show that our HarmAug outperforms other relevant baselines. Moreover, a 435-millionparameter safety guard model trained with HarmAug achieves an F1 score comparable to larger models with over 7 billion parameters, and even outperforms them in AUPRC, while operating at less than 25% of their computational cost. Our code, safety guard model, and synthetic dataset are publicly available.show moreshow less

Export metadata

Additional Services

Search Google Scholar
Metadaten
Author:Seanie Lee, Haebin Seong, Dong Bok Lee, Minki Kang, Xiaoyin Chen, Dominik Wagner, Joshua Bengio, Juho Lee, Sung Ju Hwang
DOI:https://doi.org/10.48550/arXiv.2410.01524
ArXiv Id:http://arxiv.org/abs/2410.01524v2
Publisher:ICLR
Document Type:conference proceeding (article)
Language:English
Date of first Publication:2025/02/24
Reviewed:Begutachtet/Reviewed
Release Date:2025/11/04
Pagenumber:20
First Page:1
Last Page:20
institutes:Fakultät Informatik
Zentrum für Künstliche Intelligenz (KIZ)
Research Themes:Digitalisierung & Künstliche Intelligenz
Licence (German):Creative Commons - CC BY - Namensnennung 4.0 International
Verstanden ✔
Diese Webseite verwendet technisch erforderliche Session-Cookies. Durch die weitere Nutzung der Webseite stimmen Sie diesem zu. Unsere Datenschutzerklärung finden Sie hier.