TY - GEN
T1 - Positive Style Accumulation
T2 - 33rd ACM International Conference on Multimedia, MM 2025
AU - Xu, Xin
AU - Ren, Chaoyue
AU - Liu, Wei
AU - Huang, Wenke
AU - Yang, Bin
AU - Yu, Zhixi
AU - Jiang, Kui
N1 - Publisher Copyright:
© 2025 ACM.
PY - 2025/10/27
Y1 - 2025/10/27
N2 - Federated Domain Generalization for Person Re-identification (FedDG-ReID) aims to learn a global server model that generalizes effectively to both source and target domains using distributed source-domain data. Existing methods mainly improve sample diversity through style transformation, which enhances the model's generalization performance to some extent. However, we discover that not all styles contribute to generalization performance. We therefore define styles that are beneficial/harmful to the model's generalization performance as positive/negative styles. This raises new questions: how to effectively screen positive styles, and how to continuously utilize them. To address these questions, we propose a Style Screening and Continuous Utilization (SSCU) framework. First, we design a Generalization Gain-guided Dynamic Style Memory (GGDSM) for each client model to screen and accumulate generated positive styles. Specifically, the memory maintains a prototype initialized from raw data for each category, screens positive styles that enhance the global model during training, and updates these positive styles into the memory with a momentum-based approach. Meanwhile, we propose a style memory recognition loss to fully leverage the positive styles memorized by GGDSM. Furthermore, we propose a Collaborative Style Training (CST) strategy to make full use of positive styles. Unlike traditional learning strategies, our approach trains client models on two distinct branches, using both newly generated styles and the accumulated positive styles stored in memory. This strategy promotes the rapid acquisition of new styles, so that client models can quickly adapt to and integrate novel stylistic variations, while guaranteeing the continuous and thorough utilization of positive styles, which is highly beneficial for the model's generalization performance. Extensive experimental results demonstrate that our method outperforms existing methods in both the source domain and the target domain.
AB - Federated Domain Generalization for Person Re-identification (FedDG-ReID) aims to learn a global server model that generalizes effectively to both source and target domains using distributed source-domain data. Existing methods mainly improve sample diversity through style transformation, which enhances the model's generalization performance to some extent. However, we discover that not all styles contribute to generalization performance. We therefore define styles that are beneficial/harmful to the model's generalization performance as positive/negative styles. This raises new questions: how to effectively screen positive styles, and how to continuously utilize them. To address these questions, we propose a Style Screening and Continuous Utilization (SSCU) framework. First, we design a Generalization Gain-guided Dynamic Style Memory (GGDSM) for each client model to screen and accumulate generated positive styles. Specifically, the memory maintains a prototype initialized from raw data for each category, screens positive styles that enhance the global model during training, and updates these positive styles into the memory with a momentum-based approach. Meanwhile, we propose a style memory recognition loss to fully leverage the positive styles memorized by GGDSM. Furthermore, we propose a Collaborative Style Training (CST) strategy to make full use of positive styles. Unlike traditional learning strategies, our approach trains client models on two distinct branches, using both newly generated styles and the accumulated positive styles stored in memory. This strategy promotes the rapid acquisition of new styles, so that client models can quickly adapt to and integrate novel stylistic variations, while guaranteeing the continuous and thorough utilization of positive styles, which is highly beneficial for the model's generalization performance. Extensive experimental results demonstrate that our method outperforms existing methods in both the source domain and the target domain.
KW - federated dg-reid
KW - negative style
KW - positive style memory
UR - https://www.scopus.com/pages/publications/105024072378
U2 - 10.1145/3746027.3755549
DO - 10.1145/3746027.3755549
M3 - Conference contribution
AN - SCOPUS:105024072378
T3 - MM 2025 - Proceedings of the 33rd ACM International Conference on Multimedia, Co-Located with MM 2025
SP - 8527
EP - 8536
BT - MM 2025 - Proceedings of the 33rd ACM International Conference on Multimedia, Co-Located with MM 2025
PB - Association for Computing Machinery, Inc
Y2 - 27 October 2025 through 31 October 2025
ER -