Extracting Privacy-Preserving Subgraphs in Federated Graph Learning using Information Bottleneck

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

As graphs grow ever larger, federated graph learning (FGL), which trains graph neural networks (GNNs) on distributed graph data, is increasingly adopted. However, multi-party participation makes the privacy of graph data in FGL systems an inevitable concern. Recent studies have shown that gradient leakage from a trained GNN can be exploited by model inversion attacks (MIA) to infer private graph data. Moreover, the central server can legitimately access the local GNN gradients, which makes MIA difficult to counter when the attacker resides at the central server. In this paper, we first identify a realistic crowdsourcing-based FGL scenario in which MIA launched from the central server against clients' subgraph structures is a non-negligible threat. We then propose a defense scheme, Subgraph-Out-of-Subgraph (SOS), that mitigates such MIA while maintaining prediction accuracy. We leverage the information bottleneck (IB) principle to extract task-relevant subgraphs from the clients' original subgraphs. The extracted IB-subgraphs are used for local GNN training, so the local model updates carry less information about the original subgraphs, making it harder for MIA to infer the original subgraph structures. In particular, we devise a novel neural-network-powered approach to overcome the intractability of estimating the mutual information of graph data in IB optimization. Additionally, we design a subgraph generation algorithm that yields reasonable IB-subgraphs from the optimization results. Extensive experiments demonstrate the efficacy of the proposed scheme: the FGL system trained on IB-subgraphs is more robust against MIA with only minuscule accuracy loss.
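To illustrate the IB idea described in the abstract, the following is a minimal, hypothetical sketch (not the authors' SOS implementation): a learnable Bernoulli mask over edges is optimized to keep edges useful for the task while a size penalty stands in for the intractable mutual-information compression term, which the paper instead estimates with a neural network. The `relevance` scores and all hyperparameters here are toy assumptions for illustration.

```python
import numpy as np

# Hypothetical IB-style edge selection on a toy graph (not the paper's code).
# Objective to minimize: L = -sum_i relevance_i * p_i  +  beta * sum_i p_i
# where p_i is the probability of keeping edge i. The second term (expected
# subgraph size) is a simple tractable surrogate for the compression term
# I(G_sub; G) in the information bottleneck objective.

n_edges = 6
# Toy "task relevance" scores: edges 0-2 matter for the label, 3-5 are noise.
relevance = np.array([1.0, 0.9, 0.8, 0.05, 0.02, 0.01])

theta = np.zeros(n_edges)  # logits of Bernoulli edge-keep probabilities
beta = 0.5                 # IB trade-off coefficient (assumed value)
lr = 0.5


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


for _ in range(200):
    p = sigmoid(theta)
    # dL/dtheta_i = (beta - relevance_i) * p_i * (1 - p_i)
    grad = (beta - relevance) * p * (1 - p)
    theta -= lr * grad

keep = sigmoid(theta) > 0.5
print(keep)  # edges with relevance above beta are retained, the rest dropped
```

Edges whose relevance exceeds the trade-off coefficient `beta` are kept; the others are pruned, yielding a smaller "IB-subgraph" that still supports the task. The paper's actual scheme replaces both the relevance scores and the size surrogate with learned, GNN-based estimators.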

Original language: English
Title of host publication: ASIA CCS 2023 - Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security
Publisher: Association for Computing Machinery
Pages: 109-121
Number of pages: 13
ISBN (Electronic): 9798400700989
State: Published - 10 Jul 2023
Externally published: Yes
Event: 18th ACM ASIA Conference on Computer and Communications Security, ASIA CCS 2023 - Melbourne, Australia
Duration: 10 Jul 2023 - 14 Jul 2023

Publication series

Name: Proceedings of the ACM Conference on Computer and Communications Security
ISSN (Print): 1543-7221

Conference

Conference: 18th ACM ASIA Conference on Computer and Communications Security, ASIA CCS 2023
Country/Territory: Australia
City: Melbourne
Period: 10/07/23 - 14/07/23

Keywords

  • Federated Learning
  • Graph Neural Networks
  • Information Bottleneck
  • Model Inversion Attack

