TY - GEN
T1 - Incentive Mechanism for Federated Learning based on Random Client Sampling
AU - Wu, Hongyi
AU - Tang, Xiaoying
AU - Zhang, Ying Jun Angela
AU - Gao, Lin
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Federated learning (FL) is a distributed machine learning paradigm that enables edge devices to participate in training as clients while protecting their privacy. Recent research in this field mainly focuses on improving training performance and reducing communication costs. However, how to incentivize clients of federated learning remains a challenge. Existing research on FL often assumes that clients participate in training voluntarily, which is impractical in most cases due to computation costs. In this paper, we propose an incentive mechanism for federated learning based on random client sampling. The mechanism consists of two parts. First, a subset of clients is selected randomly according to the importance sampling scheme. Then, the interaction between the server and the subset of clients is modeled as a Stackelberg game. The server releases a total incentive, which is allocated to all clients based on their contributions. Clients then decide their choices of batch size, which potentially affect the contribution metric. Moreover, we prove that the client-level subgame of the Stackelberg game has a subgame equilibrium that can be written in a semi-closed form. We also propose an approximation algorithm for computing the equilibrium of the server-level subgame, which the experiments show converges to the equilibrium point successfully. The simulation results also demonstrate the effectiveness of our mechanism in comparison with two baselines.
AB - Federated learning (FL) is a distributed machine learning paradigm that enables edge devices to participate in training as clients while protecting their privacy. Recent research in this field mainly focuses on improving training performance and reducing communication costs. However, how to incentivize clients of federated learning remains a challenge. Existing research on FL often assumes that clients participate in training voluntarily, which is impractical in most cases due to computation costs. In this paper, we propose an incentive mechanism for federated learning based on random client sampling. The mechanism consists of two parts. First, a subset of clients is selected randomly according to the importance sampling scheme. Then, the interaction between the server and the subset of clients is modeled as a Stackelberg game. The server releases a total incentive, which is allocated to all clients based on their contributions. Clients then decide their choices of batch size, which potentially affect the contribution metric. Moreover, we prove that the client-level subgame of the Stackelberg game has a subgame equilibrium that can be written in a semi-closed form. We also propose an approximation algorithm for computing the equilibrium of the server-level subgame, which the experiments show converges to the equilibrium point successfully. The simulation results also demonstrate the effectiveness of our mechanism in comparison with two baselines.
UR - https://www.scopus.com/pages/publications/85146851521
U2 - 10.1109/GCWkshps56602.2022.10008737
DO - 10.1109/GCWkshps56602.2022.10008737
M3 - Conference contribution
AN - SCOPUS:85146851521
T3 - 2022 IEEE GLOBECOM Workshops, GC Wkshps 2022 - Proceedings
SP - 1640
EP - 1645
BT - 2022 IEEE GLOBECOM Workshops, GC Wkshps 2022 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE Globecom Workshops, GLOBECOM Workshop 2022
Y2 - 4 December 2022 through 8 December 2022
ER -