TY - GEN
T1 - Graph-Based Traffic Forecasting via Communication-Efficient Federated Learning
AU - Zhang, Chenhan
AU - Zhang, Shiyao
AU - Yu, Shui
AU - Yu, James J.Q.
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Existing Federated Learning (FL) systems incur enormous communication overhead when employing GNN-based models for traffic forecasting tasks, since these models commonly comprise an enormous number of parameters to be transmitted in the FL systems. In this paper, we propose an FL framework, namely, Clustering-based hierarchical and Two-step-optimized FL (CTFL), to overcome this practical problem. CTFL employs a divide-and-conquer strategy, clustering clients based on the closeness of their local model parameters. Furthermore, we incorporate the particle swarm optimization algorithm in CTFL, which employs a two-step strategy for optimizing local models. This technique enables the central server to receive only one representative local model update from each cluster, thus reducing the communication overhead associated with model update transmission in FL. Comprehensive case studies on two real-world datasets and two state-of-the-art GNN-based models demonstrate the proposed framework's outstanding training efficiency and prediction accuracy, and the hyperparameter sensitivity of CTFL is also investigated.
AB - Existing Federated Learning (FL) systems incur enormous communication overhead when employing GNN-based models for traffic forecasting tasks, since these models commonly comprise an enormous number of parameters to be transmitted in the FL systems. In this paper, we propose an FL framework, namely, Clustering-based hierarchical and Two-step-optimized FL (CTFL), to overcome this practical problem. CTFL employs a divide-and-conquer strategy, clustering clients based on the closeness of their local model parameters. Furthermore, we incorporate the particle swarm optimization algorithm in CTFL, which employs a two-step strategy for optimizing local models. This technique enables the central server to receive only one representative local model update from each cluster, thus reducing the communication overhead associated with model update transmission in FL. Comprehensive case studies on two real-world datasets and two state-of-the-art GNN-based models demonstrate the proposed framework's outstanding training efficiency and prediction accuracy, and the hyperparameter sensitivity of CTFL is also investigated.
KW - Federated learning
KW - communication efficiency
KW - graph neural networks
KW - traffic forecasting
UR - https://www.scopus.com/pages/publications/85130755538
U2 - 10.1109/WCNC51071.2022.9771883
DO - 10.1109/WCNC51071.2022.9771883
M3 - Conference contribution
AN - SCOPUS:85130755538
T3 - IEEE Wireless Communications and Networking Conference, WCNC
SP - 2041
EP - 2046
BT - 2022 IEEE Wireless Communications and Networking Conference, WCNC 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE Wireless Communications and Networking Conference, WCNC 2022
Y2 - 10 April 2022 through 13 April 2022
ER -