Abstract
When traditional actor-critic (AC) algorithms are applied to continuous spaces, they suffer from low data efficiency and slow convergence, while sampling in the real world is often expensive. This paper therefore proposes a new recursive least-squares AC algorithm for continuous spaces that makes full use of the sampled data and improves learning and prediction ability. The method encodes the continuous state space with Gaussian radial basis functions; the critic uses a recursive least-squares temporal-difference method with eligibility traces, and the actor uses a policy-gradient method to search the continuous action space. Simulation results on the Mountain Car problem show that the proposed algorithm converges well.
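The pipeline the abstract describes (Gaussian RBF state encoding, an RLS-TD critic with eligibility traces, and a policy-gradient actor) can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the function names, step sizes, RBF layout, and the one-dimensional state are all assumptions.

```python
import numpy as np

def rbf_features(s, centers, width):
    """Gaussian radial basis function encoding of a continuous state."""
    d = centers - s                          # (n_centers, state_dim)
    return np.exp(-np.sum(d * d, axis=1) / (2.0 * width ** 2))

class RLSTDCritic:
    """Recursive least-squares TD with eligibility traces (RLS-TD(lambda))."""
    def __init__(self, n_features, lam=0.9, gamma=0.99, delta=1.0):
        self.w = np.zeros(n_features)        # value-function weights
        self.P = np.eye(n_features) / delta  # inverse correlation matrix
        self.z = np.zeros(n_features)        # eligibility trace
        self.lam, self.gamma = lam, gamma

    def update(self, phi, phi_next, reward):
        """One RLS-TD(lambda) step; returns the TD error for the actor."""
        self.z = self.gamma * self.lam * self.z + phi
        u = phi - self.gamma * phi_next      # temporal-difference feature vector
        Pz = self.P @ self.z
        gain = Pz / (1.0 + u @ Pz)           # recursive least-squares gain
        td_err = reward + self.gamma * (self.w @ phi_next) - self.w @ phi
        self.w = self.w + gain * td_err
        self.P = self.P - np.outer(gain, u @ self.P)
        return td_err

def actor_update(theta, phi, action, sigma, td_err, alpha=0.05):
    """Policy-gradient step for a Gaussian policy with mean theta @ phi."""
    mu = theta @ phi
    grad_log = (action - mu) / sigma ** 2 * phi  # grad of log N(a; mu, sigma^2)
    return theta + alpha * td_err * grad_log
```

A single learning step on a toy one-dimensional state would look like: encode the state with `rbf_features`, call `critic.update` to get the TD error, then pass that error to `actor_update` to shift the policy mean. The recursive update avoids solving the least-squares system from scratch at each step, which is the data-efficiency argument the abstract makes.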
Source
Application Research of Computers (《计算机应用研究》)
CSCD; Peking University Core Journal (北大核心)
2014, No. 7, pp. 1994-1997, 2000 (5 pages)
Funding
National Natural Science Foundation of China (61070122, 61070223, 61373094, 60970015)
Natural Science Foundation of Jiangsu Province (BK2009116)
Natural Science Research Program of Jiangsu Higher Education Institutions (09KJA520002)
Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University (93K172012K04)
Keywords
reinforcement learning
actor-critic method
continuous state and action space
recursive least-squares
policy gradient
Gaussian radial basis functions