
Review of deep reinforcement learning and discussions on the development of computer Go

Cited by: 131
Abstract: Deep reinforcement learning combines the perception ability of deep learning with the decision-making ability of reinforcement learning, so that control outputs can be produced directly from raw input images; this makes it an artificial intelligence method closer to the human mode of thinking. Since its introduction, deep reinforcement learning has achieved remarkable results in both theory and application. In particular, the computer Go program AlphaGo (Chinese name "Chuyihao"), developed by the Google DeepMind team on the basis of deep reinforcement learning, defeated the world's top Go player Lee Sedol 4:1 in March 2016, a new milestone in the history of artificial intelligence. This paper surveys the development of deep reinforcement learning, reviews the history of computer Go, analyzes the features of the algorithms, and discusses future research directions and application prospects, in the hope of providing a valuable reference for the development of new directions in control theory and applications.
Source: Control Theory & Applications (indexed in EI, CAS, CSCD, Peking University Core), 2016, No. 6, pp. 701–717 (17 pages).
Funding: National Natural Science Foundation of China (61273136, 61573353, 61533017).
Keywords: deep reinforcement learning; AlphaGo; deep learning; reinforcement learning; artificial intelligence
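The abstract describes deep reinforcement learning as pairing deep learning's perception with reinforcement learning's decision making. As a minimal illustration of the decision-making half only, the sketch below runs tabular Q-learning on a hypothetical 5-state chain (the environment, state count, and hyperparameters are this sketch's own assumptions, not taken from the paper); deep methods such as DQN replace the Q-table with a deep network over raw image input.

```python
import random

# Hypothetical toy environment (not from the surveyed paper): a 5-state chain,
# states 0..4. Action 1 moves right, action 0 moves left; reaching state 4
# ends the episode with reward 1.

N_STATES = 5
ACTIONS = (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

def step(state, action):
    """Deterministic chain dynamics; state 4 is the terminal goal."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=200, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: q[state][action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: explore with probability EPSILON, else act greedily
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            # Q-learning temporal-difference update:
            # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q_table = train()
greedy_policy = [max(ACTIONS, key=lambda x: q_table[s][x]) for s in range(N_STATES - 1)]
print(greedy_policy)  # the learned greedy policy moves right toward the goal
```

The same update rule underlies DQN, where the max over the Q-table row is replaced by a forward pass of a target network and transitions are sampled from a replay buffer.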


