
An Asterisk-shaped Patch Attack for Object Detection

  • Fashan Dong
  • Binyue Deng
  • Haiyang Yu
  • Wenrong Xie
  • Huawei Xu*
  • Zhaoquan Gu*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

With the development of artificial intelligence, deep neural networks (DNNs) have been widely deployed, and their ability to solve some complex problems even exceeds that of humans. However, recent research shows that DNNs face multiple security threats. By adding subtle noise to the input data, an attacker can cause a well-performing network to make wrong decisions, or even produce the same recognition result for completely different inputs. Because the human eye can hardly distinguish a sample before and after the perturbation is added, such adversarial samples are highly concealed. These attacks are called adversarial attacks, and the carefully constructed inputs used to fool deep neural networks are called adversarial examples. Most existing research on adversarial attacks targets image classification, and only a few works have shifted attention to object detectors. Object detection underlies many computer vision tasks and has been applied in many everyday applications, such as autonomous driving, pedestrian recognition, and pathological detection; it is therefore of great significance to study the vulnerability of object detection models. Attacking object detection models is more difficult because the task combines multi-object localization and multi-object classification. In this paper, we study adversarial attacks against object detection models and propose an asterisk-shaped adversarial patch generation algorithm that makes objects undetectable to current state-of-the-art detectors. Extensive experimental results show that our method achieves strong attack performance while modifying only a small number of image pixels.
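The abstract does not spell out the optimization procedure, so the following is only an illustrative sketch of the general idea of a shape-constrained adversarial patch attack: a perturbation is restricted to an asterisk-like binary mask and optimized by gradient descent to suppress a detector's confidence scores. The `detector_scores` interface, the mask construction, and all hyperparameters are assumptions for illustration, not the authors' published algorithm.

```python
# Illustrative sketch (assumed interface, not the paper's implementation):
# optimize a perturbation confined to an asterisk-shaped mask so that a
# detector's strongest object-confidence score is driven down.
import torch


def make_asterisk_mask(h: int, w: int, arm_width: int = 4) -> torch.Tensor:
    """Binary mask whose active pixels form a simple asterisk (horizontal,
    vertical, and two diagonal arms) -- a stand-in for the paper's
    asterisk-shaped patch region."""
    mask = torch.zeros(h, w)
    ch, cw, half = h // 2, w // 2, arm_width // 2
    mask[ch - half:ch + half, :] = 1.0          # horizontal arm
    mask[:, cw - half:cw + half] = 1.0          # vertical arm
    n = min(h, w)
    idx = torch.arange(n)
    for d in range(-half, half + 1):
        c = (idx + d).clamp(0, w - 1)
        mask[idx, c] = 1.0                       # main diagonal arm
        mask[idx, w - 1 - c] = 1.0               # anti-diagonal arm
    return mask


def asterisk_patch_attack(image, detector_scores, steps: int = 200, lr: float = 0.02):
    """`image` is a batched tensor in [0, 1] of shape (1, 3, H, W);
    `detector_scores(img)` is assumed to return a differentiable tensor of
    per-detection confidence scores (hypothetical interface)."""
    _, _, h, w = image.shape
    mask = make_asterisk_mask(h, w).to(image.device)
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (image + delta * mask).clamp(0.0, 1.0)   # only masked pixels change
        loss = detector_scores(adv).max()               # suppress strongest detection
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (image + delta.detach() * mask).clamp(0.0, 1.0)
```

Restricting the perturbation to the mask is what keeps the number of modified pixels small; in practice the loss would typically aggregate all detection scores rather than only the maximum.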

Original language: English
Title of host publication: Proceedings - 2022 7th IEEE International Conference on Data Science in Cyberspace, DSC 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 126-133
Number of pages: 8
ISBN (Electronic): 9781665474801
DOIs
State: Published - 2022
Externally published: Yes
Event: 7th IEEE International Conference on Data Science in Cyberspace, DSC 2022 - Guilin, China
Duration: 11 Jul 2022 - 13 Jul 2022

Publication series

Name: Proceedings - 2022 7th IEEE International Conference on Data Science in Cyberspace, DSC 2022

Conference

Conference: 7th IEEE International Conference on Data Science in Cyberspace, DSC 2022
Country/Territory: China
City: Guilin
Period: 11/07/22 - 13/07/22

Keywords

  • adversarial examples
  • artificial intelligence
  • deep learning
  • object detection
