Funding: This work was supported by the National Natural Science Foundation of China under Grant No. 62072146, the Key Research and Development Program of Zhejiang Province under Grant No. 2021C03187, the National Key Research and Development Program of China under Grant No. 2019YFB2102100, and the State Key Laboratory of Computer Architecture (ICT, CAS) under Grant No. CARCHB202120.
Abstract: Federated learning is a distributed machine learning method that addresses the increasingly serious problems of data islands and user data privacy, as it keeps training data local rather than sharing it with other users. It trains a global model by aggregating the locally-computed models of clients instead of their raw data. However, the divergence of local models caused by the data heterogeneity of different clients may slow the convergence of the global model. To address this problem, we focus on client selection in federated learning, since the set of selected local models affects the convergence performance of the global model. We propose FedChoice, a client selection method based on loss function optimization, which selects appropriate local models to improve the convergence of the global model. It first assigns each client a selection probability derived from its loss value; clients with higher loss receive a higher selection probability and are therefore more likely to participate in training. It then introduces a local control vector and a global control vector to predict the local and global gradient directions, respectively, and computes a gradient correction vector that corrects the gradient direction, reducing the cumulative deviation of the local gradient caused by non-IID data. We conduct experiments to verify the validity of FedChoice on the CIFAR-10, CINIC-10, MNIST, EMNIST, and FEMNIST datasets, and the results show that the convergence of FedChoice is significantly improved compared with FedAvg, FedProx, and FedNova.
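The two mechanisms in the abstract, loss-weighted client selection and control-vector gradient correction, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names (`select_clients`, `corrected_gradient`), the loss-proportional weighting, and the SCAFFOLD-style correction formula `g - c_local + c_global` are assumptions for illustration; FedChoice's exact probability schedule and correction rule may differ.

```python
import random

def select_clients(client_losses, k, seed=None):
    """Sample k distinct clients with probability proportional to their
    latest local loss, so high-loss clients are more likely to be picked.
    Hypothetical sketch of a loss-weighted selection step."""
    rng = random.Random(seed)
    pool = dict(client_losses)  # client id -> latest local loss value
    chosen = []
    for _ in range(min(k, len(pool))):
        ids = list(pool)
        pick = rng.choices(ids, weights=[pool[c] for c in ids], k=1)[0]
        chosen.append(pick)
        del pool[pick]  # draw without replacement
    return chosen

def corrected_gradient(local_grad, c_local, c_global):
    """Correct a local gradient with control vectors (SCAFFOLD-style sketch):
    subtract the local control vector and add the global one, nudging the
    local update toward the predicted global gradient direction."""
    return [g - cl + cg for g, cl, cg in zip(local_grad, c_local, c_global)]
```

For example, `select_clients({"a": 0.9, "b": 0.1}, 1)` picks client `"a"` far more often than `"b"`, which is the intended bias toward clients whose local models are still poorly fit.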