Abstract
There are two main approaches to the fault tolerance of multilayer perceptrons (MLPs): improving the learning algorithm and adding redundant components. The former requires a large amount of training time and is impractical for large networks. Phatak proposed a redundant network architecture that achieves single-fault tolerance of an MLP, but the number of redundant components it requires is very large, especially for large networks. This paper presents a new redundant architecture for single-fault tolerance of single-hidden-layer MLPs. The architecture takes into account the differing importance of individual weights, resolves the weight bottleneck of the original architecture, and substantially reduces the number of redundant components, an advantage that is most pronounced for large networks.
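The abstract describes the architecture only at a high level. As a rough illustration of importance-aware selective redundancy, the following Python sketch duplicates only the hidden units whose outgoing weights are largest in magnitude and halves the duplicated outgoing weights so the fault-free output is unchanged. The importance proxy, the halving scheme, and all function names are assumptions for illustration, not the paper's actual construction.

```python
import numpy as np

# Hypothetical sketch (not the paper's scheme): replicate only the most
# "important" hidden units of a single-hidden-layer MLP, where importance
# is proxied by the magnitude of each unit's outgoing weights.

def replicate_critical_units(W_in, W_out, k):
    """Duplicate the k most important hidden units.

    W_in  : (n_hidden, n_in)  input-to-hidden weights
    W_out : (n_out, n_hidden) hidden-to-output weights
    k     : number of hidden units to replicate
    """
    importance = np.abs(W_out).sum(axis=0)    # per-unit importance proxy
    critical = np.argsort(importance)[-k:]    # indices of the k most important units

    # Halve the outgoing weights of the replicated units, then append copies,
    # so their summed contribution (and the fault-free output) is preserved.
    W_out = W_out.copy()
    W_out[:, critical] *= 0.5
    W_in_new = np.vstack([W_in, W_in[critical]])
    W_out_new = np.hstack([W_out, W_out[:, critical]])
    return W_in_new, W_out_new

def forward(x, W_in, W_out):
    """Fault-free forward pass with tanh hidden units."""
    return W_out @ np.tanh(W_in @ x)
```

Under this sketch, a stuck-at-zero fault in any replicated unit removes only half of that unit's contribution to the output, while the non-critical units remain unreplicated, keeping the redundant component count well below full duplication.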
Source
《电子学报》
EI
CAS
CSCD
PKU Core (北大核心)
2000, No. 5, pp. 99-101 (3 pages)
Acta Electronica Sinica
Funding
National Natural Science Foundation of China
Keywords
multilayer perceptrons
fault tolerance
redundancy
artificial intelligence
single fault