Abstract
Few-shot learning (FSL) aims to improve the generalization ability of a model so that new categories can be classified from only a small number of labeled samples, substantially reducing both the annotation cost and the model training cost of deep learning. Most existing metric-learning-based FSL methods focus on adapting the model to a particular metric space and pay little attention to making the feature representation of each sample more discriminative. When samples are scarce, fully exploiting the information they contain becomes all the more important. Based on the observation that different feature maps represent the same category with different strength, a channel self-attention method is proposed: feature channels that are more expressive of the category are assigned larger weights, balancing the feature maps and improving the discriminability of the sample representations. To further exploit the information in easily obtained samples, the concept of a "space prototype" is introduced. Meanwhile, inspired by the auto-encoder, a method is designed that uses the information of all samples to rectify the class prototypes and thereby improve their accuracy. As a parameter-free augmented feature extractor, the proposed channel self-attention method effectively alleviates the weak model-transfer ability (over-fitting) that is widespread in FSL; it is compatible with many existing FSL methods and further improves their performance, showing good generalization. When the two methods are applied to prototypical networks, they yield considerable improvements over the original method under three classification settings on two mainstream few-shot classification benchmarks, miniImageNet and CUB. In particular, when the domain gap between the training set and the test set is large, the proposed method achieves a 10.23% absolute and a 17.04% relative performance improvement over the original method, which fully demonstrates its effectiveness.
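The two ideas the abstract combines, parameter-free channel re-weighting and prototypical-network classification, can be illustrated with a minimal sketch. This is not the paper's implementation: the variance-based channel score, the array shapes, and the helper names (`channel_weights`, `prototypes`, `classify`) are assumptions made for illustration only; the prototype computation and nearest-prototype assignment follow the standard prototypical-network formulation the paper builds on.

```python
import numpy as np

def channel_weights(support):
    """Parameter-free channel re-weighting (illustrative sketch).

    Assumption for this sketch: channels whose activations vary more
    across the support samples are treated as more class-discriminative
    and receive larger weights. This is not the paper's exact statistic.
    """
    scores = support.var(axis=0)               # one score per channel
    exp = np.exp(scores - scores.max())        # numerically stable softmax
    return exp / exp.sum() * support.shape[1]  # scale so mean weight is 1

def prototypes(support, labels, n_way):
    """Standard prototypical-network prototypes: per-class mean."""
    return np.stack([support[labels == c].mean(axis=0)
                     for c in range(n_way)])

def classify(query, protos):
    """Nearest-prototype assignment under squared Euclidean distance."""
    d = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Toy 2-way, 3-shot episode with 4-dimensional pooled features.
support = np.array([[0.0, 0.0, 0.0, 0.0],
                    [0.1, 0.0, 0.0, 0.0],
                    [0.0, 0.1, 0.0, 0.0],
                    [3.0, 3.0, 3.0, 3.0],
                    [3.1, 3.0, 3.0, 3.0],
                    [3.0, 3.1, 3.0, 3.0]])
labels = np.array([0, 0, 0, 1, 1, 1])
query = np.array([[2.9, 3.0, 3.0, 3.0],    # close to class 1
                  [0.1, 0.0, 0.1, 0.0]])   # close to class 0

w = channel_weights(support)          # shared, episode-level weights
protos = prototypes(support * w, labels, n_way=2)
pred = classify(query * w, protos)
print(pred)                           # → [1 0]
```

Because the weighting is a fixed function of the episode's features rather than a learned layer, it adds no trainable parameters, which is what lets such a module be bolted onto an existing FSL feature extractor without retraining from scratch.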
Authors
Ji Zhong; Chai Xingliang (School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China)
Source
Journal of Tianjin University: Science and Technology (天津大学学报(自然科学与工程技术版))
Indexed in EI, CSCD, and the Peking University Core Journal list (北大核心)
2021, No. 4, pp. 338-345 (8 pages)
Funding
Supported by the National Natural Science Foundation of China (No. 61771329).
Keywords
few-shot learning (FSL)
image classification
machine learning
channel self-attention
auto-encoder