Abstract
In the derivation of the recursive least squares back-propagation (RLS-BP) training algorithm, an approximate formula is introduced, which limits further improvement of its convergence speed. To address this, an improved RLS-BP training algorithm is proposed: by modifying the error performance measure, the derivation no longer requires the approximation, so the convergence speed is further improved. Experiments show that the improved RLS-BP algorithm generally converges faster than the original one and has good practical value.
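For orientation, the sketch below shows the standard per-sample recursive least squares (RLS) weight update that RLS-BP-style training applies to a neuron's incoming weights. The function name, variable names, and forgetting factor are illustrative assumptions, and the paper's specific modification of the error performance measure is not reproduced here.

```python
import numpy as np

def rls_step(w, P, x, d, lam=0.99):
    """One generic RLS update for a single linear unit (illustrative sketch).

    w   -- current weight vector
    P   -- current estimate of the inverse input-correlation matrix
    x   -- input vector for this training sample
    d   -- desired output for this sample
    lam -- forgetting factor, 0 < lam <= 1 (assumed value)
    """
    Px = P @ x
    k = Px / (lam + x @ Px)              # gain vector
    e = d - w @ x                        # a-priori output error
    w_new = w + k * e                    # weight update driven by the error
    P_new = (P - np.outer(k, Px)) / lam  # inverse-correlation update
    return w_new, P_new
```

In an RLS-BP network, an update of this form would be applied layer by layer, with the target d for hidden units obtained from the back-propagated error; the improved algorithm described in the abstract changes how that error performance measure is defined so that the derivation avoids the approximation step.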
Source
Journal of Beijing University of Posts and Telecommunications (《北京邮电大学学报》)
Indexed in: EI, CAS, CSCD, Peking University Core Journals (北大核心)
2004, No. 4, pp. 87-91 (5 pages)
Funding
Shouxin (首信) Postdoctoral Fund (0111)
Keywords
pattern recognition
neural network
training algorithm