Abstract
Federated learning (FL) is emerging as a privacy-preserving learning paradigm that allows multiple devices to collaborate in training a model without sharing their raw data, using a central server for coordination. However, device heterogeneity poses a challenge in FL, as participating devices often have different computing capacities. To address this issue, heterogeneous models need to be designed to accommodate the different device computing capacities. The existing approach involves pre-designing multiple heterogeneous models and extracting sub-models from the server model. While such an approach effectively tackles device heterogeneity, it has several drawbacks, notably high communication overhead and insufficient personalization: in each training round, the server distributes the entire model parameters to each device, and each device in turn transmits the entire model parameters back to the server for aggregation. In this work, we propose FedPartial, a new framework that overcomes these challenges by introducing a partial model transmission and aggregation mechanism. FedPartial eliminates the need for devices to transmit the entire model parameters in each training round while still benefiting from global model aggregation. Specifically, FedPartial divides the device model into two parts: the shallow part participates in the global aggregation of heterogeneous models, while the deep part remains on the device locally. By keeping the deep part of the model on the device, FedPartial significantly reduces the communication overhead and achieves a degree of personalization. Through extensive experiments, we demonstrate that FedPartial outperforms existing state-of-the-art methods, particularly in more complex and statistically heterogeneous scenarios.
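The split-and-aggregate idea described in the abstract can be sketched in a few lines of plain Python. This is a minimal illustration under stated assumptions, not the authors' implementation: parameters are modeled as name-to-value dicts, the shallow/deep partition is given by an illustrative key set, and aggregation is a FedAvg-style equal-weight mean over only the transmitted shallow parameters.

```python
# Minimal sketch (not the paper's code): each device keeps its "deep"
# layers local and transmits only the "shallow" layers for aggregation.
# Layer names and the shallow/deep split below are illustrative assumptions.

def split_model(params, shallow_keys):
    """Partition a parameter dict into the shared shallow part
    (to be transmitted) and the private deep part (kept on-device)."""
    shallow = {k: v for k, v in params.items() if k in shallow_keys}
    deep = {k: v for k, v in params.items() if k not in shallow_keys}
    return shallow, deep

def aggregate_shallow(shallow_updates):
    """Server side: average only the transmitted shallow parameters,
    assuming equal device weights (FedAvg-style mean)."""
    n = len(shallow_updates)
    keys = shallow_updates[0].keys()
    return {k: sum(u[k] for u in shallow_updates) / n for k in keys}

# Two devices sharing a shallow layer but holding personalized deep layers.
dev_a = {"conv1": 1.0, "fc_local": 5.0}
dev_b = {"conv1": 3.0, "fc_local": -2.0}
shallow_keys = {"conv1"}

sh_a, deep_a = split_model(dev_a, shallow_keys)
sh_b, deep_b = split_model(dev_b, shallow_keys)

# Only the shallow parts travel to the server and back.
global_shallow = aggregate_shallow([sh_a, sh_b])

# Each device merges the aggregated shallow part with its private deep part.
dev_a_next = {**global_shallow, **deep_a}
print(dev_a_next)  # "conv1" is averaged; "fc_local" stays personalized
```

Because only the shallow keys are ever serialized, per-round communication scales with the size of the shared part rather than the full model, which is the source of the communication savings and the personalization the abstract claims.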
| Original language | English |
|---|---|
| Title of host publication | Proceedings - 2024 IEEE International Conference on Web Services, ICWS 2024 |
| Editors | Rong N. Chang, Carl K. Chang, Zigui Jiang, Jingwei Yang, Zhi Jin, Michael Sheng, Jing Fan, Kenneth K. Fletcher, Qiang He, Claudio Ardagna, Jian Yang, Jianwei Yin, Zhongjie Wang, Amin Beheshti, Stefano Russo, Nimanthi Atukorala, Jia Wu, Philip S. Yu, Heiko Ludwig, Stephan Reiff-Marganiec, Emma Zhang, Anca Sailer, Nicola Bena, Kuang Li, Yuji Watanabe, Tiancheng Zhao, Shangguang Wang, Zhiying Tu, Yingjie Wang, Kang Wei |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 1145-1152 |
| Number of pages | 8 |
| ISBN (Electronic) | 9798350368550 |
| DOIs | |
| State | Published - 2024 |
| Externally published | Yes |
| Event | 2024 IEEE International Conference on Web Services, ICWS 2024 - Hybrid, Shenzhen, China |
| Duration | 7 Jul 2024 → 13 Jul 2024 |
Conference
| Conference | 2024 IEEE International Conference on Web Services, ICWS 2024 |
|---|---|
| Country/Territory | China |
| City | Hybrid, Shenzhen |
| Period | 7/07/24 → 13/07/24 |
Keywords
- Communication Efficiency
- Device Heterogeneity
- Federated Learning
- Partial Model Transmission
- Personalization
Title
FedPartial: Enabling Model-Heterogeneous Federated Learning via Partial Model Transmission and Aggregation