
Stealthy Jailbreak Attacks on Large Language Models via Benign Data Mirroring

  • Honglin Mu
  • Han He
  • Yuxin Zhou
  • Yunlong Feng
  • Yang Xu
  • Libo Qin
  • Xiaoming Shi
  • Zeming Liu
  • Xudong Han
  • Qi Shi
  • Qingfu Zhu
  • Wanxiang Che*

*Corresponding author for this work
  • Harbin Institute of Technology
  • Central South University
  • East China Normal University
  • Beihang University
  • LibrAI
  • Mohamed Bin Zayed University of Artificial Intelligence
  • Tsinghua University

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Large language model (LLM) safety is a critical issue, and numerous studies employ red-teaming to probe model security. Among these, jailbreak methods explore potential vulnerabilities by crafting malicious prompts that induce model outputs contrary to safety alignment. Existing black-box jailbreak methods often rely on model feedback, repeatedly submitting queries containing detectable malicious instructions during the attack search process. Although effective, these attacks may be intercepted by content moderators during the search. We propose an improved transfer attack method that guides malicious prompt construction by locally training a mirror model of the target black-box model through benign data distillation. This approach offers enhanced stealth, as it does not submit identifiable malicious instructions to the target model during the search phase. Our approach achieved a maximum attack success rate of 92%, or a balanced value of 80% with an average of 1.5 detectable jailbreak queries per sample, against GPT-3.5 Turbo on a subset of AdvBench. These results underscore the need for more robust defense mechanisms.
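The abstract hinges on one step that can be made concrete: distilling a local "mirror" of the black-box target from benign query-response pairs, so that the subsequent attack search runs entirely offline. Below is a minimal, illustrative sketch of that benign-data distillation step only (the attack-search stage is deliberately omitted). The prompt file `benign_prompts.txt`, the base model choice, and the hyperparameters are assumptions for illustration, not the paper's actual configuration; GPT-3.5 Turbo is used as the target because it is the model evaluated in the abstract.

```python
# Illustrative sketch (not the paper's released code): distill a local
# "mirror" of a black-box target model from benign prompts via supervised
# fine-tuning. Assumptions: OPENAI_API_KEY set in the environment, a
# hypothetical benign_prompts.txt (one prompt per line), and the
# transformers/datasets/trl libraries installed.
from openai import OpenAI
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1) Collect benign prompt/response pairs from the target model. Every
#    query in this phase is benign, so the traffic looks like ordinary
#    API use to a content moderator.
with open("benign_prompts.txt") as f:  # hypothetical prompt list
    benign_prompts = [line.strip() for line in f if line.strip()]

records = []
for prompt in benign_prompts:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the target evaluated in the paper
        messages=[{"role": "user", "content": prompt}],
    )
    records.append({
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": resp.choices[0].message.content},
        ]
    })

# 2) Fine-tune an open-weights base model on the distilled pairs so it
#    approximates ("mirrors") the target's behavior; any later prompt
#    search then runs against this local model instead of the target.
trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-chat-hf",  # illustrative base model choice
    train_dataset=Dataset.from_list(records),
    args=SFTConfig(output_dir="mirror-model", num_train_epochs=1),
)
trainer.train()
```

Under this scheme, only the few candidate prompts that survive the local search are ever transferred to the real target, which is consistent with the abstract's figure of roughly 1.5 detectable jailbreak queries per sample.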

Original language: English
Title of host publication: Long Papers
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Publisher: Association for Computational Linguistics (ACL)
Pages: 1784-1799
Number of pages: 16
ISBN (Electronic): 9798891761896
DOIs
State: Published - 2025
Event: 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2025 - Hybrid, Albuquerque, United States
Duration: 29 Apr 2025 → 4 May 2025

Publication series

Name: Proceedings of the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies: Long Papers, NAACL-HLT 2025
Volume: 1

Conference

Conference: 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2025
Country/Territory: United States
City: Hybrid, Albuquerque
Period: 29/04/25 → 4/05/25
