FedPartial: Enabling Model-Heterogeneous Federated Learning via Partial Model Transmission and Aggregation

  • Changkun Jiang*
  • Jiahao Chen
  • Lin Gao
  • Jianqiang Li

*Corresponding author for this work

Affiliations: Shenzhen University; Harbin Institute of Technology Shenzhen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Federated learning (FL) is an emerging privacy-preserving learning paradigm that allows multiple devices to collaboratively train a model without sharing their raw data, using a central server for coordination. However, device heterogeneity poses a challenge in FL, as participating devices often have different computing capacities. To address this issue, heterogeneous models need to be designed to match the computing capacities of different devices. The existing approach pre-designs multiple heterogeneous models by extracting sub-models from the server model. Although this approach effectively tackles device heterogeneity, it suffers from high communication overhead and insufficient personalization: in each training round, the server distributes the full model parameters to every device, and each device transmits the full model parameters back to the server for aggregation. In this work, we propose FedPartial, a new framework that overcomes these challenges by introducing a partial model transmission and aggregation mechanism. FedPartial eliminates the need for devices to transmit the entire model parameters in each training round while still benefiting from global model aggregation. Specifically, FedPartial divides each device model into two parts: the shallow part participates in the global aggregation of heterogeneous models, while the deep part remains on the device. By keeping the deep part of the model local, FedPartial significantly reduces communication overhead and achieves a degree of personalization. Through extensive experiments, we demonstrate that FedPartial outperforms existing state-of-the-art methods, particularly in more complex and statistically heterogeneous scenarios.
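The shallow/deep split described in the abstract can be sketched as follows. This is an illustrative toy, not the authors' implementation: models are plain dicts of layer-name to parameter list, the layer names and split point are assumptions, and the shallow layers are assumed to share a common shape across devices (a simplification of the paper's heterogeneous-model aggregation).

```python
def split_model(model, shallow_layers):
    """Separate the shallow part (shared with the server) from the deep part (kept local)."""
    shallow = {k: v for k, v in model.items() if k in shallow_layers}
    deep = {k: v for k, v in model.items() if k not in shallow_layers}
    return shallow, deep

def aggregate_shallow(shallow_parts):
    """Server-side element-wise averaging of the uploaded shallow parameters."""
    agg = {}
    for key in shallow_parts[0]:
        vecs = [part[key] for part in shallow_parts]
        agg[key] = [sum(vals) / len(vals) for vals in zip(*vecs)]
    return agg

def training_round(device_models, shallow_layers):
    """One communication round: each device uploads only its shallow part,
    the server averages those, and each device keeps its deep part private."""
    uploads, local_deep = [], []
    for model in device_models:
        shallow, deep = split_model(model, shallow_layers)
        uploads.append(shallow)     # only the shallow part is transmitted
        local_deep.append(deep)     # the deep part never leaves the device
    global_shallow = aggregate_shallow(uploads)
    # Each device merges the aggregated shallow part with its own deep part,
    # which is the source of both the bandwidth saving and the personalization.
    return [{**global_shallow, **deep} for deep in local_deep]
```

For example, with two devices whose hypothetical `conv1` layer is designated shallow, a round averages `conv1` across devices while each device's `fc` parameters stay unchanged.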

Original language: English
Title of host publication: Proceedings - 2024 IEEE International Conference on Web Services, ICWS 2024
Editors: Rong N. Chang, Carl K. Chang, Zigui Jiang, Jingwei Yang, Zhi Jin, Michael Sheng, Jing Fan, Kenneth K. Fletcher, Qiang He, Claudio Ardagna, Jian Yang, Jianwei Yin, Zhongjie Wang, Amin Beheshti, Stefano Russo, Nimanthi Atukorala, Jia Wu, Philip S. Yu, Heiko Ludwig, Stephan Reiff-Marganiec, Emma Zhang, Anca Sailer, Nicola Bena, Kuang Li, Yuji Watanabe, Tiancheng Zhao, Shangguang Wang, Zhiying Tu, Yingjie Wang, Kang Wei
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1145-1152
Number of pages: 8
ISBN (Electronic): 9798350368550
State: Published - 2024
Externally published: Yes
Event: 2024 IEEE International Conference on Web Services, ICWS 2024 - Hybrid, Shenzhen, China
Duration: 7 Jul 2024 - 13 Jul 2024

Conference

Conference: 2024 IEEE International Conference on Web Services, ICWS 2024
Country/Territory: China
City: Hybrid, Shenzhen
Period: 7/07/24 - 13/07/24

Keywords

  • Communication Efficiency
  • Device Heterogeneity
  • Federated Learning
  • Partial Model Transmission
  • Personalization
