Abstract
Aiming at the lack of training data for deep neural networks in image recognition, as well as the loss of detailed features and the heavy computation involved in multi-model distillation, a model distillation method that trains weak teacher networks under few-shot conditions was proposed. Firstly, a set of weak teacher networks was trained with the Bootstrap aggregating (Bagging) algorithm from ensemble learning, which preserves the detailed features of the image dataset while allowing parallel computation to speed up network generation. Then, a knowledge merging algorithm was incorporated to fuse the feature maps of the weak teacher networks into a single high-quality, high-complexity teacher network, yielding image feature maps with more prominent details. Finally, building on current state-of-the-art model distillation, an ensemble distillation model with a meta-network adapted to the combined feature maps was proposed, which reduces the computation of meta-network training while enabling the target network to be trained on a few-shot dataset. Experimental results show that the proposed model achieved a 6.39% relative improvement in accuracy over the distillation scheme that simply uses a single high-quality network as the teacher. Compared with the model obtained by training teacher networks with the Adaptive Boosting (AdaBoost) algorithm and then distilling them, the ensemble distillation model differed in accuracy only within the given error range, while its network generation rate was increased by 4.76 times over that of the AdaBoost algorithm. Therefore, the proposed model can effectively improve the accuracy and training efficiency of the target model under few-shot conditions.
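To make the three-stage pipeline described above more concrete, the following is a minimal Python/PyTorch sketch: bootstrap sampling (Bagging) to build the weak teacher set, a simple channel-wise mean standing in as a placeholder for the paper's knowledge-merging step, and the standard temperature-softened distillation loss. All function names, the fusion rule, and the loss form are illustrative assumptions, not the authors' implementation or their meta-network objective.

```python
import torch
import torch.nn.functional as F


def bagging_subsets(dataset_size, n_teachers, generator=None):
    """Draw bootstrap index sets (sampling with replacement), one per weak teacher."""
    generator = generator or torch.Generator()
    return [
        torch.randint(0, dataset_size, (dataset_size,), generator=generator)
        for _ in range(n_teachers)
    ]


def merge_feature_maps(feature_maps):
    """Fuse weak-teacher feature maps into a single teacher representation.
    A channel-wise mean is used here only as a placeholder for the paper's
    knowledge-merging algorithm."""
    return torch.stack(feature_maps, dim=0).mean(dim=0)


def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Standard distillation objective: temperature-softened KL term plus hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In the paper's actual method, the meta-network is trained on the merged feature maps rather than a plain mean, and the distillation target is the merged teacher; the sketch only fixes the overall data flow.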
Authors
CAI Chunhao, LI Jianliang (School of Science, Nanjing University of Science and Technology, Nanjing, Jiangsu 210094, China)
Source
《计算机应用》 (Journal of Computer Applications)
Indexed in CSCD; Peking University Core Journal (北大核心)
2022, No. 9, pp. 2652-2658 (7 pages)
Funding
Equipment Pre-Research Joint Fund of China Electronics Technology Group Corporation (6141B08231109).
Keywords
few-shot
model distillation
Ensemble Learning (EL)
meta learning
feature merging