
Taking Away Both Model and Data: Remember Training Data by Parameter Combinations

Research output: Contribution to journal › Article › peer-review

Abstract

Machine Learning (ML) model hatcheries have emerged to help ML model producers: the producer simply uploads an untrained ML model together with a specific task and deploys the returned trained model in real-world applications. Although the producer never directly accesses the hatchery's local private data, certain backdoor attacks can still steal that data. These attacks insert malicious backdoor code into the untrained, otherwise benign model and recover the private data through specific operations after training. However, existing attacks suffer from one or more drawbacks: limited quality of the stolen private data, serious degradation of the original model's performance, or being easy to defend against. To address these drawbacks, we propose a novel, efficient white-box backdoor attack called Parameter Combination Encoding Attack (PCEA), which leverages linear combinations of parameters to memorize the private data during training. We evaluate the proposed method on stolen-image quality, test accuracy, and sensitivity. Experimental results show that PCEA achieves much higher stolen-data quality and robustness while preserving test accuracy.
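The core idea of encoding data in linear combinations of model parameters can be illustrated with a minimal, hypothetical sketch (this is not the paper's actual PCEA procedure; the matrix `C`, the projection step, and all dimensions are illustrative assumptions). A fixed combination matrix known to the attacker maps the flat parameter vector to the hidden data; after training, the attacker recovers the data by applying the same combinations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: embed a "private" data vector into a model's
# flat parameter vector so that fixed linear combinations of the parameters
# reproduce the data exactly.
n_params, n_secret = 64, 8
secret = rng.uniform(-1, 1, n_secret)      # stand-in for private data

# Fixed full-rank combination matrix, known only to the attacker.
C = rng.normal(size=(n_secret, n_params))

# Pretend these are the model's trained parameters, then apply the
# minimum-norm correction so that C @ theta == secret:
#   theta += C^T (C C^T)^{-1} (secret - C theta)
theta = rng.normal(size=n_params)
theta = theta + C.T @ np.linalg.solve(C @ C.T, secret - C @ theta)

# Recovery: the attacker evaluates the same linear combinations.
recovered = C @ theta
assert np.allclose(recovered, secret)
```

Because the correction is minimum-norm, the parameters move as little as possible, which hints at why such an encoding can coexist with the model's original task; the actual attack in the paper enforces the constraint during training rather than by post-hoc projection.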

Original language: English
Pages (from-to): 1427-1437
Number of pages: 11
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence
Volume: 6
Issue number: 6
State: Published - 1 Dec 2022
Externally published: Yes

Keywords

  • Backdoor attack
  • data privacy
  • machine learning
  • white-box attack

