
U-Clustering: A Reinforcement Learning Algorithm Based on Utility Clustering
Abstract: This paper presents U-Clustering, a new reinforcement learning algorithm based on utility clustering. Unlike the U-Tree algorithm, it does not generate or statistically test fringe nodes at all. It first clusters instances according to the observation-action values of their instance chains, grouping instances whose histories match up to a certain length; it then performs feature selection on each cluster, and finally compresses the selected features. The compressed features become new nodes in the agent's internal state-space tree. Simulation and experimental analysis on the New York Driving task [2,13], a difficult partially observable driving problem, show that U-Clustering is an effective algorithm for solving large partially observable problems.
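To make the clustering step described above concrete, the following is a minimal Python sketch of grouping instances by matching history and then by observation-action value. All names here (Instance, history_key, cluster_instances, HISTORY_LEN, VALUE_EPS) and the threshold-based value grouping are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the U-Clustering grouping step: instances with
# matching (observation, action) history up to length k are grouped, then
# each group is split so that members have similar observation-action values.
from dataclasses import dataclass
from collections import defaultdict

HISTORY_LEN = 2   # assumed history-match length k
VALUE_EPS = 0.1   # assumed tolerance when grouping observation-action values

@dataclass
class Instance:
    """One step of experience in the agent's instance chain."""
    observation: int
    action: int
    reward: float
    q_value: float = 0.0             # estimated observation-action value
    prev: "Instance | None" = None   # link to the previous instance

def history_key(inst: Instance, k: int = HISTORY_LEN) -> tuple:
    """Return the last k (observation, action) pairs ending at this instance."""
    key, cur = [], inst
    while cur is not None and len(key) < k:
        key.append((cur.observation, cur.action))
        cur = cur.prev
    return tuple(key)

def cluster_instances(instances: list[Instance]) -> list[list[Instance]]:
    """Group instances with matching history, then split each group into
    value-coherent clusters by their observation-action values."""
    by_history: dict[tuple, list[Instance]] = defaultdict(list)
    for inst in instances:
        by_history[history_key(inst)].append(inst)

    clusters: list[list[Instance]] = []
    for group in by_history.values():
        group.sort(key=lambda i: i.q_value)
        current = [group[0]]
        for inst in group[1:]:
            if inst.q_value - current[-1].q_value <= VALUE_EPS:
                current.append(inst)
            else:
                clusters.append(current)
                current = [inst]
        clusters.append(current)
    return clusters
```

In the full algorithm as the abstract describes it, each resulting cluster would then undergo feature selection and feature compression, and the compressed features would become new nodes in the agent's internal state-space tree; those steps are omitted from this sketch.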
Source: Computer Engineering and Applications (《计算机工程与应用》), CSCD, Peking University Core Journal, 2005, No. 26, pp. 37-42, 74 (7 pages).
Funding: Supported by the National Natural Science Foundation of China (Grant No. 60075019).
Keywords: reinforcement learning, utility clustering, partially observable Markov decision processes (POMDPs)

References (17)

  • 1. McCallum A. Efficient exploration in reinforcement learning with hidden state[C]. In: AAAI Fall Symposium on Model-directed Autonomous Systems, 1997.
  • 2. McCallum A. Reinforcement Learning with Selective Perception and Hidden State[D]. PhD Thesis. Rochester, NY: Department of Computer Science, University of Rochester, 1995.
  • 3. Mitchell T M. Machine Learning[M]. Translated by Zeng Huajun, Zhang Yinkui. Beijing: China Machine Press, 2003.
  • 4. Uther W T B, Veloso M M. Tree based discretization for continuous state space reinforcement learning[C]. In: Proceedings of AAAI-98, Madison, WI, 1998-07.
  • 5. Breslow L A. Greedy utile suffix memory for reinforcement learning with perceptually-aliased states[R]. NCARAI Technical Report No. AIC-96-004, 1996.
  • 6. Inoue K, Ota J, Arai T. Autonomous state space construction in POMDP with continuous observation space[C]. In: Proceedings of the 4th IFAC Symposium on Intelligent Autonomous Vehicles, 2001: 255-260.
  • 7. Suematsu N, Hayashi A. A reinforcement learning algorithm in partially observable environments using short-term memory[C]. In: Proceedings of Neural Information Processing Systems (NIPS 11), 1998.
  • 8. Gardiol N H, Mahadevan S. Hierarchical memory-based reinforcement learning[C]. In: Advances in Neural Information Processing Systems 14. MIT Press, 2001.
  • 9. Jonsson A, Barto A G. Automated state abstraction for options using the U-Tree algorithm[C]. In: Advances in Neural Information Processing Systems 13. MIT Press: 1054-1060.
  • 10. Kaelbling L P, Oates T, Hernandez N, et al. Learning in worlds with objects[C]. In: Working Notes of the AAAI Spring Symposium Workshop: Learning Grounded Representations, 2001.
