Abstract
A method for optimizing feedforward neural networks with a genetic algorithm based on a minimum-average-risk-error criterion is proposed. Instead of the traditional minimum mean-square-error criterion, the fitness function of the genetic algorithm is determined by the minimum average risk error: in addition to the error between the network output and the desired output, it accounts for the risk loss that this error incurs for each class of training samples, since errors on different classes of samples may carry different risks. A neural network optimized in this way not only reproduces the desired outputs of the training set accurately, but also shows a markedly improved ability to predict samples outside the training set.
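For illustration, the following is a minimal sketch of the kind of fitness function the abstract describes: a risk-weighted average error rather than the plain mean squared error. It is not the paper's exact formulation; the one-hidden-layer network layout, the per-class risk coefficients class_risk, and all function names are assumptions introduced here.

```python
import numpy as np

def forward(weights, X, n_hidden):
    """One-hidden-layer feedforward net; weights is a flat vector
    (assumed encoding: [w1, b1, w2, b2])."""
    n_in = X.shape[1]
    w1 = weights[: n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = weights[n_in * n_hidden : n_in * n_hidden + n_hidden]
    w2 = weights[n_in * n_hidden + n_hidden : -1].reshape(n_hidden, 1)
    b2 = weights[-1]
    h = np.tanh(X @ w1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))  # sigmoid output

def average_risk_error(weights, X, y, labels, class_risk, n_hidden):
    """Risk-weighted error: each sample's squared error is scaled by the
    risk coefficient of its class before averaging (illustrative form)."""
    err = (forward(weights, X, n_hidden).ravel() - y) ** 2
    risk = np.array([class_risk[c] for c in labels])
    return np.mean(risk * err)

def ga_fitness(weights, X, y, labels, class_risk, n_hidden):
    """A GA maximizes fitness, so take a decreasing function of the
    average risk error; the reciprocal form here is an assumption."""
    return 1.0 / (1.0 + average_risk_error(weights, X, y, labels,
                                           class_risk, n_hidden))
```

Under a standard genetic algorithm (selection, crossover, and mutation over the flat weight vector), individuals whose errors fall on high-risk classes are penalized more heavily, which is the effect the abstract attributes to the average-risk-error criterion.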
Source
Journal of Xiamen University: Natural Science 《厦门大学学报(自然科学版)》
Indexed in: CAS; CSCD; Peking University Core Journal
2001, Supplement 1, pp. 54-57 (4 pages)
Funding
Trans-Century Academic Leader Training Program of Jiangxi Province (3rd batch)
Natural Science Foundation of Jiangxi Province (9911013)