Abstract
To overcome three key challenges of federated learning in heterogeneous edge computing environments, namely edge heterogeneity, non-IID data, and communication resource constraints, a grouped asynchronous federated learning (FedGA) mechanism was proposed. Edge nodes were divided into multiple groups; each group performed global updates asynchronously with the global model, while nodes within a group communicated with the parameter server through time-sharing. Theoretical analysis established a quantitative relationship between the convergence bound of FedGA and the data distribution among the groups. A time-sharing scheduling strategy, the magic mirror method (MMM), was proposed to optimize the completion time of a single round of model updating within a group. Based on the theoretical analysis of FedGA and MMM, an effective grouping algorithm was designed to minimize the overall training completion time. Experimental results demonstrate that FedGA and MMM reduce model training time by 30.1% to 87.4% compared with existing state-of-the-art methods.
Authors
MA Qianpiao (马千飘)
JIA Qingmin (贾庆民)
LIU Jianchun (刘建春)
XU Hongli (徐宏力)
XIE Renchao (谢人超)
HUANG Tao (黄韬)
MA Qianpiao; JIA Qingmin; LIU Jianchun; XU Hongli; XIE Renchao; HUANG Tao (Future Network Research Center, Purple Mountain Laboratories, Nanjing 211111, China; School of Computer Science and Technology, University of Science and Technology of China, Hefei 230026, China; Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou 215123, China; State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China)
Source
Journal on Communications (《通信学报》)
EI
CSCD
Peking University Core Journals (北大核心)
2023, Issue 11, pp. 79-93 (15 pages)
Funding
Supported by the National Natural Science Foundation of China (No. U1709217, No. 61936015, No. 92267301).