KDRSFL: A knowledge distillation resistance transfer framework for defending model inversion attacks in split federated learning

  • Renlong Chen
  • Hui Xia*
  • Kai Wang
  • Shuo Xu
  • Rui Zhang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Split Federated Learning (SFL) enables organizations such as healthcare institutions to collaboratively improve model performance without sharing private data. However, SFL is susceptible to model inversion (MI) attacks, which pose a serious risk of private data leakage and accuracy loss. This paper therefore proposes an innovative framework called Knowledge Distillation Resistance Transfer for Split Federated Learning (KDRSFL). KDRSFL combines one-shot distillation with attacker-aware adjustment strategies to achieve knowledge distillation-based resistance transfer, enhancing the classification accuracy of client feature extractors while strengthening their resistance to MI attacks. First, a teacher model with strong resistance to MI attacks is constructed, and this resistance is transferred to the client models through knowledge distillation. Second, the clients' defenses are further strengthened through attacker-aware training. Finally, the client models achieve effective defense against MI attacks through local training. Extensive experiments show that KDRSFL defends effectively against MI attacks on the CIFAR-100 dataset: it achieves a reconstruction mean squared error (MSE) of 0.058 while maintaining 67.4% model accuracy with VGG11, a 16% improvement in MI reconstruction error over ResSFL at a cost of only 0.1% in accuracy.
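The abstract outlines a pipeline of distilling an MI-resistant teacher's split-point features into the client-side extractor, then hardening the client with attacker-aware training against a simulated inversion decoder. Below is a minimal PyTorch sketch of those two objectives under that reading; the toy architectures, the module names (student, teacher, decoder, server_head), and the loss weights lam_kd and lam_adv are illustrative assumptions, not the paper's actual design.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for the split-point models; the paper evaluates VGG11,
# so these tiny modules are placeholders, not the actual architecture.
student = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())      # client-side feature extractor
teacher = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())      # MI-resistant teacher (frozen)
decoder = nn.Sequential(nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())   # simulated inversion attacker
server_head = nn.Sequential(nn.Flatten(), nn.Linear(16 * 32 * 32, 100)) # server-side classifier

for p in teacher.parameters():
    p.requires_grad_(False)

opt_model = torch.optim.SGD(
    list(student.parameters()) + list(server_head.parameters()), lr=0.01)
opt_decoder = torch.optim.Adam(decoder.parameters(), lr=1e-3)

lam_kd, lam_adv = 1.0, 0.5  # loss weights: assumed values, not from the paper

def train_step(x, y):
    # (a) Attacker update: the simulated decoder learns to reconstruct
    # the input from the client's smashed (split-point) features.
    feats = student(x).detach()
    opt_decoder.zero_grad()
    rec_loss = F.mse_loss(decoder(feats), x)
    rec_loss.backward()
    opt_decoder.step()

    # (b) Client/server update: task loss, plus distillation toward the
    # resistant teacher's features, minus the attacker's reconstruction
    # loss (gradient ascent on it makes inversion harder).
    opt_model.zero_grad()
    feats = student(x)
    task_loss = F.cross_entropy(server_head(feats), y)
    kd_loss = F.mse_loss(feats, teacher(x))
    adv_loss = F.mse_loss(decoder(feats), x)
    (task_loss + lam_kd * kd_loss - lam_adv * adv_loss).backward()
    opt_model.step()
    return task_loss.item(), rec_loss.item()

# One step on a random CIFAR-100-sized batch.
x = torch.rand(8, 3, 32, 32)
y = torch.randint(0, 100, (8,))
print(train_step(x, y))

In the full framework the teacher would first be trained to resist inversion before distillation, and the min-max weighting governs the accuracy/privacy trade-off the abstract reports (0.058 reconstruction MSE at 67.4% accuracy).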

Original language: English
Article number: 107637
Journal: Future Generation Computer Systems
Volume: 166
State: Published - May 2025
Externally published: Yes

Keywords

  • Knowledge distillation
  • Model inversion attacks
  • Privacy-preserving machine learning
  • Resistance transfer
  • Split federated learning
