Abstract
Reinforcement learning is one of the most important machine learning methods. To speed up convergence and reduce the value-function estimation error of the learning process, a multi-step temporal-difference learning algorithm based on recursive least-squares methods, RLS-TD(λ), is proposed, building on RLS-TD(0). It is proved that, under certain conditions, the weights of the algorithm converge with probability 1 to a unique solution, and a relation that the value-function estimation error must satisfy is derived and proved. Experiments on a maze problem demonstrate that the algorithm converges faster than RLS-TD(0) and, compared with the conventional TD(λ) algorithm, reduces the value-function estimation error and thus improves learning precision.
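The paper's own update equations and proofs are not reproduced on this abstract page. As a rough illustration of the technique named above, the sketch below implements the standard linear RLS-TD(λ) recursion for value-function estimation: a γλ-decayed eligibility trace, a recursive least-squares gain, and a rank-one update of the inverse-correlation matrix. It assumes a linear function approximator and a forgetting factor of 1; the function name, parameters (e.g. delta), and trajectory interface are illustrative, not taken from the paper.

```python
import numpy as np

def rls_td_lambda(features, rewards, gamma=0.95, lam=0.7, delta=1.0):
    """Minimal sketch of an RLS-TD(lambda) value-function estimator.

    features: feature vectors phi(x_0), ..., phi(x_T) along one trajectory
              (one more entry than rewards)
    rewards:  rewards r_0, ..., r_{T-1} observed on the transitions
    Returns the weights theta of the linear value function V(x) = phi(x)^T theta.
    """
    n = len(features[0])
    theta = np.zeros(n)        # linear value-function weights
    P = np.eye(n) / delta      # inverse-correlation matrix, P_0 = (delta * I)^{-1}
    z = np.zeros(n)            # eligibility trace vector

    for t in range(len(rewards)):
        phi, phi_next = features[t], features[t + 1]
        z = gamma * lam * z + phi                     # decay and accumulate trace
        d = phi - gamma * phi_next                    # TD feature difference
        K = P @ z / (1.0 + d @ P @ z)                 # recursive least-squares gain
        theta = theta + K * (rewards[t] - d @ theta)  # TD-error-driven weight update
        P = P - np.outer(K, d @ P)                    # rank-one (Sherman-Morrison) update
    return theta
```

With lam=0 the trace reduces to the current feature vector and the recursion becomes RLS-TD(0); delta sets the initial matrix P_0 = I/delta, whose choice mainly affects early-stage estimates.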
Source
Computer Engineering and Applications (《计算机工程与应用》), 2010, No. 8, pp. 52-55 (4 pages). Indexed in CSCD and the Peking University core journal list.
Keywords
reinforcement learning
temporal difference
recursive least-squares (RLS)
convergence
RLS-TD(λ) algorithm