Abstract
Concealed object detection (COD) has advanced significantly and is crucial in many fields. However, it raises new security and privacy issues, as powerful COD models can reveal sensitive content such as private body parts or military camouflage. In this paper, we address this issue through the lens of adversarial attacks and introduce a new task: adversarial attacks against COD. Compared to general adversarial attacks on object detection models, this task presents an additional challenge: generating adversarial perturbations that simultaneously disrupt the differential information contained within diverse scenes. To address this, we introduce a novel adversarial attack method, context-aware target texture perturbation (CAT2P), specifically designed to fool COD models. CAT2P generates adversarial perturbations from background texture information, disrupting the differential features that COD models use to distinguish concealed objects from their surroundings. The attack comprises three modules: perturbation generation, target localization, and perturbation bootstrap. Extensive experiments on benchmark datasets demonstrate that CAT2P reduces COD model performance by up to 40% while preserving the visual quality of the original images. This work highlights the security vulnerabilities of COD models and provides insights into evaluating their robustness.
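The abstract does not include an implementation, so the following is only a minimal conceptual sketch of the texture-perturbation idea, not the authors' actual CAT2P algorithm. It assumes a surrogate object mask (e.g., a COD model's prediction) and pushes pixels inside that region toward the mean background color under an L-infinity budget; the function name, the mask input, and the random jitter standing in for the perturbation-bootstrap module are all illustrative assumptions.

```python
import numpy as np

def texture_perturbation(image, object_mask, epsilon=8 / 255, rng=None):
    """Hypothetical sketch of a context-aware texture attack (not the
    published CAT2P method): nudge pixels in the predicted object region
    toward the average background texture, bounded so the perturbation
    stays visually imperceptible.

    image:       float array in [0, 1], shape (H, W, 3)
    object_mask: boolean array, shape (H, W); True on the target region,
                 e.g. taken from a surrogate COD model's prediction
    epsilon:     per-pixel L-infinity perturbation budget
    """
    rng = rng or np.random.default_rng(0)

    # Crude "background texture" statistic: mean color outside the object.
    bg_mean = image[~object_mask].mean(axis=0)

    # Direction that moves object pixels toward the background appearance.
    direction = np.zeros_like(image)
    direction[object_mask] = bg_mean - image[object_mask]

    # Small random jitter as a stand-in for the perturbation-bootstrap step.
    direction += rng.normal(scale=1e-3, size=image.shape)

    # Clip to the epsilon ball and keep pixel values valid.
    perturbation = np.clip(direction, -epsilon, epsilon)
    return np.clip(image + perturbation, 0.0, 1.0)
```

Bounding the perturbation in the L-infinity ball mirrors the abstract's claim that visual quality is preserved: each pixel moves by at most epsilon even when the background statistics differ sharply from the object.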
| Original language | English |
|---|---|
| Pages (from-to) | 7285-7302 |
| Number of pages | 18 |
| Journal | Visual Computer |
| Volume | 41 |
| Issue number | 10 |
| DOIs | |
| State | Published - Aug 2025 |
Keywords
- Adversarial attack
- Black-box
- Concealed object detection
- Context-aware