TY - GEN
T1 - LPIA
T2 - 30th Australasian Conference on Information Security and Privacy, ACISP 2025
AU - Bai, Jiaxue
AU - Shi, Lu
AU - Liu, Yang
AU - Zhang, Weizhe
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025.
PY - 2025
Y1 - 2025
N2 - Federated Graph Learning (FGL), as a technique that combines Graph Neural Networks (GNNs) and Federated Learning (FL), aims to protect graph data privacy. However, FGL still faces potential privacy threats. To uncover privacy vulnerabilities in FGL, we first propose the Label Preference Inference Attack (LPIA) for this scenario. LPIA infers the label preference of the target client by analyzing its uploaded model updates. Label preference refers to the label that has the highest or lowest sample count in the target client’s private dataset. Based on the difference in gradient changes between traditional FL and FGL, we design a new model sensitivity calculation method and a dual selective aggregation strategy, which are better suited to the FGL scenario. LPIA demonstrates excellent attack performance across three mainstream GNN models and four graph datasets. Additionally, we systematically investigate the key factors affecting LPIA performance, including preference level, attack round, and neuron size. We further evaluate mainstream defense strategies (e.g., dropout and differential privacy), and the results show that LPIA remains highly effective when the global model’s accuracy drop is minimal.
AB - Federated Graph Learning (FGL), as a technique that combines Graph Neural Networks (GNNs) and Federated Learning (FL), aims to protect graph data privacy. However, FGL still faces potential privacy threats. To uncover privacy vulnerabilities in FGL, we first propose the Label Preference Inference Attack (LPIA) for this scenario. LPIA infers the label preference of the target client by analyzing its uploaded model updates. Label preference refers to the label that has the highest or lowest sample count in the target client’s private dataset. Based on the difference in gradient changes between traditional FL and FGL, we design a new model sensitivity calculation method and a dual selective aggregation strategy, which are better suited to the FGL scenario. LPIA demonstrates excellent attack performance across three mainstream GNN models and four graph datasets. Additionally, we systematically investigate the key factors affecting LPIA performance, including preference level, attack round, and neuron size. We further evaluate mainstream defense strategies (e.g., dropout and differential privacy), and the results show that LPIA remains highly effective when the global model’s accuracy drop is minimal.
KW - Federated graph learning
KW - Graph neural network
KW - Privacy inference attack
UR - https://www.scopus.com/pages/publications/105021955727
U2 - 10.1007/978-981-96-9101-2_13
DO - 10.1007/978-981-96-9101-2_13
M3 - Conference contribution
AN - SCOPUS:105021955727
SN - 9789819691005
T3 - Lecture Notes in Computer Science
SP - 245
EP - 264
BT - Information Security and Privacy - 30th Australasian Conference, ACISP 2025, Proceedings
A2 - Susilo, Willy
A2 - Pieprzyk, Josef
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 14 July 2025 through 16 July 2025
ER -