An Intermediate-Level Attack Framework on the Basis of Linear Regression

  • Yiwen Guo*
  • Qizhang Li
  • Wangmeng Zuo
  • Hao Chen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This article substantially extends our work published at ECCV (Li et al., 2020), in which an intermediate-level attack was proposed to improve the transferability of some baseline adversarial examples. Specifically, we advocate a framework in which a direct linear mapping is established from the intermediate-level discrepancies (between adversarial features and benign features) to the prediction loss of the adversarial example. By delving deep into the core components of such a framework, we show that a variety of linear regression models can all be considered for establishing the mapping, that the magnitude of the finally obtained intermediate-level adversarial discrepancy correlates with the transferability, and that the performance can be further boosted by performing multiple runs of the baseline attack with random initialization. In addition, by leveraging these findings, we achieve new state-of-the-art results on transfer-based ℓ∞ and ℓ2 attacks. Our code is publicly available at https://github.com/qizhangli/ila-plus-plus-lr.
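The core of the framework described above can be sketched as an ordinary least-squares fit: collect the flattened intermediate-level feature discrepancies and the corresponding prediction losses along the baseline attack's iterations, fit a linear map between them, and use the fitted direction to guide the final perturbation. The following is a minimal illustration under these assumptions, not the authors' implementation; the function names and the ridge parameter are hypothetical.

```python
import numpy as np

def fit_discrepancy_to_loss(discrepancies, losses, ridge=1e-3):
    """Ridge-regularized least squares: find w such that <w, d_t> ~ loss_t,
    where d_t is the flattened intermediate-level feature discrepancy
    recorded at iteration t of the baseline attack and loss_t is the
    prediction loss of the adversarial example at that iteration."""
    D = np.asarray(discrepancies, dtype=np.float64)  # shape (T, F)
    y = np.asarray(losses, dtype=np.float64)         # shape (T,)
    A = D.T @ D + ridge * np.eye(D.shape[1])
    return np.linalg.solve(A, D.T @ y)               # shape (F,)

def directional_score(w, discrepancy):
    """Score a candidate discrepancy by its projection onto w; an
    intermediate-level attack would then perturb the input so as to
    enlarge this projection at the chosen layer."""
    return float(np.dot(w, discrepancy))
```

In this view, choosing among linear regression models amounts to choosing how `w` is fitted, and the paper's observation about discrepancy magnitude corresponds to the norm of the discrepancy entering the projection above.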

Original language: English
Pages (from-to): 2726-2735
Number of pages: 10
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 45
Issue number: 3
DOIs
State: Published - 1 Mar 2023
Externally published: Yes

Keywords

  • Deep neural networks
  • adversarial examples
  • adversarial transferability
  • generalization ability
  • robustness
