Journal article

Design and Application of Continuous Deep Belief Network (一种连续型深度信念网的设计与应用)

Cited by: 21
Abstract: To address the poor prediction accuracy of deep belief networks (DBNs) on continuous data, a continuous deep belief network (cDBN) with two hidden layers is proposed. The network is first trained in an unsupervised manner, using a continuous transfer function to extract features from the input data; a weight-training method based on the contrastive divergence algorithm is designed for the hidden layers, and the hidden-layer weights are then locally fine-tuned by error back-propagation. A stability analysis is given, from which the hyper-parameters are chosen so that the training output remains within a prescribed region. Experiments on the Lorenz chaotic series, the CATS benchmark series, and atmospheric CO2 forecasting show that the cDBN offers a compact structure, fast convergence, and high prediction accuracy.
Published in: Acta Automatica Sinica (自动化学报), 2015, No. 12, pp. 2138-2146 (9 pages). Indexed in EI, CSCD, and the Peking University Core Journal list.
Funding: Supported by the National Natural Science Foundation of China (61203099, 61225016, 61533002), the Beijing Municipal Science and Technology Program (Z141100001414005, Z141101004414058), the Specialized Research Fund for the Doctoral Program of Higher Education (20131103110016), the Beijing Nova Program (Z131104000413007), and the Beijing Municipal Education Commission Research Program (KZ201410005002, km201410005001).
Keywords: deep learning, neural networks, structural design, stability analysis, time series forecasting
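The pipeline the abstract describes (unsupervised pretraining of continuous-valued layers with a contrastive-divergence weight update, followed by back-propagation fine-tuning) can be illustrated with a minimal sketch of a single continuous-RBM layer trained by one-step contrastive divergence (CD-1). This is not the paper's implementation: the class, the Gaussian-noise transfer function, and all hyper-parameters (learning rate, noise level, layer sizes) are illustrative assumptions; the full cDBN would stack two such hidden layers and fine-tune the whole network by error back-propagation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ContinuousRBM:
    """One layer of a continuous DBN: an RBM with continuous-valued
    units, trained by one-step contrastive divergence (CD-1).
    Illustrative sketch only; hyper-parameters are not from the paper."""

    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0.0, 0.1, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden(self, v):
        # Continuous transfer function: sigmoid activation perturbed by
        # Gaussian noise, kept in (0, 1) instead of being sampled to {0, 1}.
        pre = v @ self.W + self.b_h
        return sigmoid(pre + rng.normal(0.0, 0.1, pre.shape))

    def visible(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # CD-1: one up-down-up pass, then update toward the data statistics
        # and away from the one-step reconstruction statistics.
        h0 = self.hidden(v0)
        v1 = self.visible(h0)          # one-step reconstruction
        h1 = self.hidden(v1)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)
        return float(np.mean((v0 - v1) ** 2))  # reconstruction error

# Unsupervised pretraining on synthetic continuous data in [0, 1]
# (a shared latent factor plus noise, so there is structure to learn).
base = rng.random((200, 1))
data = np.clip(base + rng.normal(0.0, 0.05, (200, 6)), 0.0, 1.0)
rbm = ContinuousRBM(n_visible=6, n_hidden=8)
errors = [rbm.cd1_step(data) for _ in range(200)]
```

In a two-hidden-layer cDBN, a second `ContinuousRBM` would be pretrained on the first layer's hidden activations, after which the stacked weights serve as the initialization for supervised back-propagation fine-tuning.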
