Abstract
For big-data classification problems, this paper proposes a fast hidden-layer optimization method to address a prominent problem in the training of the distributed Extreme Learning Machine (ELM): a single ELM must be run repeatedly and independently many times to optimize the number of hidden-layer nodes or the generalization performance of the model. Without increasing the algorithm's time complexity, the proposed method trains multiple ELM hidden-layer networks simultaneously, jointly optimizing the model's generalization ability and the number of hidden-layer nodes, and avoids a large amount of repeated computation through distributed calculation. Meanwhile, during the solving process, the algorithm can learn the influence of varying the number of hidden-layer nodes more accurately and more intuitively. Experimental results on several types of standard benchmarks show that, compared with the traditional distributed ELM, the proposed algorithm greatly improves solution accuracy, generalization ability, and stability.
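For context, the basic (non-distributed) ELM training step that the abstract builds on can be sketched as follows. This is a minimal NumPy sketch of standard ELM, not the paper's distributed algorithm; the function names, the toy dataset, and the swept hidden-node counts are all illustrative assumptions. Input weights and biases are drawn randomly and never updated, and only the output weights are solved in closed form via the Moore-Penrose pseudoinverse, which is why sweeping hidden-node counts (as the paper optimizes) normally requires many independent training runs:

```python
import numpy as np

def train_elm(X, T, n_hidden, rng):
    """Train a single-hidden-layer ELM: random hidden weights, least-squares output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never updated)
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy binary classification problem (illustrative only)
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
T = np.eye(2)[y]  # one-hot targets

# Each hidden-node count is a separate, independent training run --
# the repeated work the paper's method is designed to avoid.
accuracies = {}
for n_hidden in (10, 50, 100):
    W, b, beta = train_elm(X, T, n_hidden, rng)
    pred = predict_elm(X, W, b, beta).argmax(axis=1)
    accuracies[n_hidden] = (pred == y).mean()
```

On this linearly separable toy task, training accuracy is high across the swept node counts; on real data, the sweep exposes the under-/over-fitting trade-off that the node-count optimization targets.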
Authors
YI Mingyu, XIAO Chixin, PAN Hui, SHU Wenjie (College of Information Engineering, Xiangtan University, Xiangtan, Hunan 411100, China)
Source
Computer Engineering and Applications (《计算机工程与应用》)
CSCD-indexed; PKU Core Journal (北大核心)
2019, Issue 16, pp. 165-169, 203 (6 pages in total)
Keywords
classification
extreme learning machine
generalization performance
hidden layer optimization