TY - GEN
T1 - Follow-me
T2 - 31st ACM International Conference on Multimedia, MM 2023
AU - Lou, Shengtao
AU - Liu, Buyu
AU - Bao, Jun
AU - Ding, Jiajun
AU - Yu, Jun
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/10/27
Y1 - 2023/10/27
N2 - Convolutional Neural Networks (CNNs) are vulnerable to adversarial attacks in which visually imperceptible perturbations can deceive CNN-based models. While current research on adversarial attacks in single object tracking exists, it overlooks a critical aspect: manipulating predicted trajectories to follow user-defined paths regardless of the actual location of the targeted object. To address this, we propose the first white-box attack algorithm capable of deceiving victim trackers by compelling them to generate trajectories that adhere to predetermined counterfeit paths. Specifically, we focus on Siamese-based trackers as our victim models. Given an arbitrary counterfeit path, we first decompose it into discrete target locations in each frame, under the assumption of constant velocity. These locations are converted to heatmap anchors, which represent the offset of their location from the target object's location in the previous frame. We then design a novel loss function to minimize the gap between the above-mentioned anchors and our predicted ones. Finally, the gradients computed from this loss are used to update the original video, resulting in our adversarial video. To validate our ideas, we design three sets of counterfeit paths as well as novel evaluation metrics to measure the path-following properties. Experiments with two victim models on three publicly available datasets, OTB100, VOT2018, and VOT2016, demonstrate that our algorithm not only significantly outperforms SOTA methods under conventional evaluation metrics, e.g., 90% and 68.4% drops in precision and success rate on OTB100, but also follows the counterfeit paths well, which is beyond the capability of any existing attack method. The source code is available at https://github.com/loushengtao/Follow-me.
AB - Convolutional Neural Networks (CNNs) are vulnerable to adversarial attacks in which visually imperceptible perturbations can deceive CNN-based models. While current research on adversarial attacks in single object tracking exists, it overlooks a critical aspect: manipulating predicted trajectories to follow user-defined paths regardless of the actual location of the targeted object. To address this, we propose the first white-box attack algorithm capable of deceiving victim trackers by compelling them to generate trajectories that adhere to predetermined counterfeit paths. Specifically, we focus on Siamese-based trackers as our victim models. Given an arbitrary counterfeit path, we first decompose it into discrete target locations in each frame, under the assumption of constant velocity. These locations are converted to heatmap anchors, which represent the offset of their location from the target object's location in the previous frame. We then design a novel loss function to minimize the gap between the above-mentioned anchors and our predicted ones. Finally, the gradients computed from this loss are used to update the original video, resulting in our adversarial video. To validate our ideas, we design three sets of counterfeit paths as well as novel evaluation metrics to measure the path-following properties. Experiments with two victim models on three publicly available datasets, OTB100, VOT2018, and VOT2016, demonstrate that our algorithm not only significantly outperforms SOTA methods under conventional evaluation metrics, e.g., 90% and 68.4% drops in precision and success rate on OTB100, but also follows the counterfeit paths well, which is beyond the capability of any existing attack method. The source code is available at https://github.com/loushengtao/Follow-me.
KW - adversarial attack
KW - visual object tracking
UR - https://www.scopus.com/pages/publications/85179557098
U2 - 10.1145/3581783.3611935
DO - 10.1145/3581783.3611935
M3 - Conference contribution
AN - SCOPUS:85179557098
T3 - MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia
SP - 8808
EP - 8818
BT - MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia
PB - Association for Computing Machinery, Inc
Y2 - 29 October 2023 through 3 November 2023
ER -