Abstract
This paper models the task-scheduling problem in cooperative work as a Markov Decision Process (MDP) and, on that basis, proposes an improved Q-learning algorithm based on simulated annealing. By introducing the Metropolis acceptance rule combined with a greedy policy, and by filtering the state space, the algorithm markedly accelerates convergence and shortens the running time. Finally, a comparison with related algorithms from the literature confirms the efficiency of the improved Q-learning algorithm.
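The core idea described in the abstract, replacing purely greedy (or ε-greedy) action selection in Q-learning with a Metropolis acceptance test whose temperature is annealed over episodes, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy environment, the temperature schedule (`T0`, `decay`), and all function names are assumptions for the sake of a runnable example.

```python
import math
import random

def metropolis_action(Q, state, actions, T, rng=random):
    """Metropolis-style action selection: a random candidate action a_r
    replaces the greedy action a_g with probability
    exp((Q[s, a_r] - Q[s, a_g]) / T). High T -> near-random exploration;
    low T -> near-greedy exploitation."""
    a_greedy = max(actions, key=lambda a: Q[(state, a)])
    a_random = rng.choice(actions)
    delta = Q[(state, a_random)] - Q[(state, a_greedy)]
    if delta >= 0 or rng.random() < math.exp(delta / T):
        return a_random
    return a_greedy

def q_learning_sa(env_step, start_state, actions, episodes=300,
                  alpha=0.1, gamma=0.9, T0=1.0, decay=0.99):
    """Tabular Q-learning with simulated-annealing exploration.
    env_step(s, a) -> (next_state, reward); next_state is None at the end."""
    Q = {}
    T = T0
    for _ in range(episodes):
        s = start_state
        while s is not None:
            for a in actions:                    # lazy Q-table init
                Q.setdefault((s, a), 0.0)
            a = metropolis_action(Q, s, actions, T)
            s_next, r = env_step(s, a)
            if s_next is None:
                best_next = 0.0
            else:
                for b in actions:
                    Q.setdefault((s_next, b), 0.0)
                best_next = max(Q[(s_next, b)] for b in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s_next
        T = max(T * decay, 1e-3)                 # anneal once per episode
    return Q
```

A toy check on a three-stage scheduling chain where action 1 always earns reward 1 and action 0 earns nothing shows the learned Q-values preferring action 1 in every state; as T decays, the policy hardens from exploratory to greedy, which is the convergence-speed mechanism the paper attributes to the Metropolis rule.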
Source
Journal of Graphics (《图学学报》)
CSCD
Peking University Core Journal (北大核心)
2012, No. 3, pp. 11-16 (6 pages)
Funding
National Natural Science Foundation of China (61070124)
Hefei University of Technology Independent Innovation Project (2012HGZY0017)
Keywords
task scheduling
Q-learning
reinforcement learning
simulated annealing