TY - GEN
T1 - An Asterisk-shaped Patch Attack for Object Detection
AU - Dong, Fashan
AU - Deng, Binyue
AU - Yu, Haiyang
AU - Xie, Wenrong
AU - Xu, Huawei
AU - Gu, Zhaoquan
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - With the development of artificial intelligence, deep neural networks (DNNs) have been widely deployed, and their ability to solve certain complex problems even exceeds that of humans. However, recent research shows that DNNs face multiple security threats. By adding noise to the input data, an attacker can cause a well-performing DNN to make wrong decisions, or even make the model produce identical recognition results for completely different inputs. Because the human eye can hardly distinguish a sample before and after the perturbation is added, such adversarial samples are well concealed. These attacks are called adversarial attacks, and the carefully crafted inputs used to fool DNNs are called adversarial examples. Most existing research on adversarial attacks targets image classification, and few works have turned their attention to object detectors. Object detection underpins many computer vision tasks and has been applied in many real-world applications, such as autonomous driving, pedestrian recognition, and pathological detection. It is therefore of great significance to study the vulnerability of object detection models. Attacking object detection models is more difficult because the task combines multi-object localization and multi-object classification. In this paper, we study adversarial attacks against object detection models and propose an asterisk-shaped adversarial patch generation algorithm that renders objects undetectable to current state-of-the-art object detectors. Extensive experimental results show that our method achieves good attack performance while modifying only a small number of image pixels.
AB - With the development of artificial intelligence, deep neural networks (DNNs) have been widely deployed, and their ability to solve certain complex problems even exceeds that of humans. However, recent research shows that DNNs face multiple security threats. By adding noise to the input data, an attacker can cause a well-performing DNN to make wrong decisions, or even make the model produce identical recognition results for completely different inputs. Because the human eye can hardly distinguish a sample before and after the perturbation is added, such adversarial samples are well concealed. These attacks are called adversarial attacks, and the carefully crafted inputs used to fool DNNs are called adversarial examples. Most existing research on adversarial attacks targets image classification, and few works have turned their attention to object detectors. Object detection underpins many computer vision tasks and has been applied in many real-world applications, such as autonomous driving, pedestrian recognition, and pathological detection. It is therefore of great significance to study the vulnerability of object detection models. Attacking object detection models is more difficult because the task combines multi-object localization and multi-object classification. In this paper, we study adversarial attacks against object detection models and propose an asterisk-shaped adversarial patch generation algorithm that renders objects undetectable to current state-of-the-art object detectors. Extensive experimental results show that our method achieves good attack performance while modifying only a small number of image pixels.
KW - adversarial examples
KW - artificial intelligence
KW - deep learning
KW - object detection
UR - https://www.scopus.com/pages/publications/85141365557
U2 - 10.1109/DSC55868.2022.00024
DO - 10.1109/DSC55868.2022.00024
M3 - Conference contribution
AN - SCOPUS:85141365557
T3 - Proceedings - 2022 7th IEEE International Conference on Data Science in Cyberspace, DSC 2022
SP - 126
EP - 133
BT - Proceedings - 2022 7th IEEE International Conference on Data Science in Cyberspace, DSC 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 7th IEEE International Conference on Data Science in Cyberspace, DSC 2022
Y2 - 11 July 2022 through 13 July 2022
ER -