
Protecting Intellectual Property of Large Language Model-Based Code Generation APIs via Watermarks

  • Zongjie Li
  • Chaozheng Wang
  • Shuai Wang*
  • Cuiyun Gao
  • *Corresponding author for this work
  • Hong Kong University of Science and Technology
  • Harbin Institute of Technology Shenzhen

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution, peer-reviewed)

Abstract

The rise of large language model-based code generation (LLCG) has enabled various commercial services and APIs. Training LLCG models is often expensive and time-consuming, and the training data are often large-scale and even inaccessible to the public. As a result, the risk of intellectual property (IP) theft over LLCG models (e.g., via imitation attacks) has become a serious concern. In this paper, we propose the first watermark (WM) technique to protect LLCG APIs from remote imitation attacks. Our technique replaces tokens in an LLCG output with their “synonyms” available in the programming language. A WM is thus defined as a stealthily tweaked distribution among token synonyms in LLCG outputs. We design six WM schemes (instantiated into over 30 WM passes) that rely on conceptually distinct token synonyms available in programming languages. Moreover, to check the IP of a suspicious model (i.e., to decide whether it is stolen from our protected LLCG API), we propose a procedure based on statistical tests that can directly check a remote, suspicious LLCG API. We evaluate our WM technique on LLCG models fine-tuned from two popular large language models, CodeT5 and CodeBERT. The evaluation shows that our approach is effective in both WM injection and IP check. The inserted WMs do not undermine usage by normal users (i.e., high fidelity) and incur negligible extra cost. Moreover, our injected WMs exhibit high stealthiness and robustness against powerful attackers; even if they know all WM schemes, they can hardly remove the WMs without largely undermining the accuracy of their stolen models.
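The core idea, as the abstract describes it, is that a WM is a stealthily tweaked distribution among token synonyms, and that IP checking is done by statistical tests over a suspect API's outputs. The following is a minimal illustrative sketch of that idea, assuming a single synonym pair ("a" vs. "b") and a simple one-sided z-test; the actual paper defines six WM schemes (over 30 passes) over real programming-language synonyms, and all names below are hypothetical, not the authors' API.

```python
import math
import random

P_NATURAL = 0.5    # assumed natural frequency of form "a" at a choice point
P_WATERMARK = 0.9  # tweaked frequency used by the watermarked (protected) API

def emit_synonym(rng, watermarked):
    """At one choice point, pick between two semantically equivalent forms.

    In real code the forms could be, e.g., two equivalent surface spellings
    of the same statement; swapping them preserves program behavior.
    """
    p = P_WATERMARK if watermarked else P_NATURAL
    return "a" if rng.random() < p else "b"

def z_score(count_a, n, p0=P_NATURAL):
    """One-sided z statistic: is form "a" over-represented vs. the null p0?"""
    mean = n * p0
    sd = math.sqrt(n * p0 * (1 - p0))
    return (count_a - mean) / sd

def looks_watermarked(outputs, threshold=3.0):
    """IP check: flag the suspect model if the synonym skew is significant."""
    n = len(outputs)
    count_a = sum(1 for o in outputs if o == "a")
    return z_score(count_a, n) > threshold

# Simulate querying a watermarked API vs. an independent (clean) model.
rng = random.Random(0)
wm_outputs = [emit_synonym(rng, watermarked=True) for _ in range(1000)]
clean_outputs = [emit_synonym(rng, watermarked=False) for _ in range(1000)]
```

Because the check only needs output samples and token counts, it can be run against a remote black-box API, which matches the remote IP-check setting described above.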

Original language: English
Title of host publication: CCS 2023 - Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security
Publisher: Association for Computing Machinery, Inc
Pages: 2336-2350
Number of pages: 15
ISBN (Electronic): 9798400700507
State: Published - 21 Nov 2023
Externally published: Yes
Event: 30th ACM SIGSAC Conference on Computer and Communications Security, CCS 2023 - Copenhagen, Denmark
Duration: 26 Nov 2023 - 30 Nov 2023

Publication series

Name: CCS 2023 - Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security

Conference

Conference: 30th ACM SIGSAC Conference on Computer and Communications Security, CCS 2023
Country/Territory: Denmark
City: Copenhagen
Period: 26/11/23 - 30/11/23

Keywords

  • Code generation
  • Large language model
  • Watermark
