TY - GEN
T1 - Extracting Privacy-Preserving Subgraphs in Federated Graph Learning using Information Bottleneck
AU - Zhang, Chenhan
AU - Wang, Weiqi
AU - Yu, James J.Q.
AU - Yu, Shui
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/7/10
Y1 - 2023/7/10
N2 - As graphs grow ever larger, federated graph learning (FGL) is increasingly adopted to train graph neural networks (GNNs) on distributed graph data. However, the privacy of graph data in FGL systems is an inevitable concern due to multi-party participation. Recent studies indicated that gradient leakage from a trained GNN can be exploited by model inversion attacks (MIA) to infer private graph data. Moreover, the central server can legitimately access the local GNN gradients, which makes MIA difficult to counter if the attacker resides at the central server. In this paper, we first identify a realistic crowdsourcing-based FGL scenario in which MIA from the central server against clients' subgraph structures is a non-negligible threat. We then propose a defense scheme, Subgraph-Out-of-Subgraph (SOS), to mitigate such MIA while maintaining prediction accuracy. We leverage the information bottleneck (IB) principle to extract task-relevant subgraphs from the clients' original subgraphs. The extracted IB-subgraphs are used for local GNN training, so the local model updates carry less information about the original subgraphs, making it harder for MIA to infer the original subgraph structures. In particular, we devise a novel neural network-powered approach to overcome the intractability of estimating mutual information on graph data in IB optimization. Additionally, we design a subgraph generation algorithm to yield reasonable IB-subgraphs from the optimization results. Extensive experiments demonstrate the efficacy of the proposed scheme: the FGL system trained on IB-subgraphs is more robust against MIA with minuscule accuracy loss.
AB - As graphs grow ever larger, federated graph learning (FGL) is increasingly adopted to train graph neural networks (GNNs) on distributed graph data. However, the privacy of graph data in FGL systems is an inevitable concern due to multi-party participation. Recent studies indicated that gradient leakage from a trained GNN can be exploited by model inversion attacks (MIA) to infer private graph data. Moreover, the central server can legitimately access the local GNN gradients, which makes MIA difficult to counter if the attacker resides at the central server. In this paper, we first identify a realistic crowdsourcing-based FGL scenario in which MIA from the central server against clients' subgraph structures is a non-negligible threat. We then propose a defense scheme, Subgraph-Out-of-Subgraph (SOS), to mitigate such MIA while maintaining prediction accuracy. We leverage the information bottleneck (IB) principle to extract task-relevant subgraphs from the clients' original subgraphs. The extracted IB-subgraphs are used for local GNN training, so the local model updates carry less information about the original subgraphs, making it harder for MIA to infer the original subgraph structures. In particular, we devise a novel neural network-powered approach to overcome the intractability of estimating mutual information on graph data in IB optimization. Additionally, we design a subgraph generation algorithm to yield reasonable IB-subgraphs from the optimization results. Extensive experiments demonstrate the efficacy of the proposed scheme: the FGL system trained on IB-subgraphs is more robust against MIA with minuscule accuracy loss.
KW - Federated Learning
KW - Graph Neural Networks
KW - Information Bottleneck
KW - Model Inversion Attack
UR - https://www.scopus.com/pages/publications/85168153488
U2 - 10.1145/3579856.3595791
DO - 10.1145/3579856.3595791
M3 - Conference contribution
AN - SCOPUS:85168153488
T3 - Proceedings of the ACM Conference on Computer and Communications Security
SP - 109
EP - 121
BT - ASIA CCS 2023 - Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security
PB - Association for Computing Machinery
T2 - 18th ACM ASIA Conference on Computer and Communications Security, ASIA CCS 2023
Y2 - 10 July 2023 through 14 July 2023
ER -