Abstract
To address the problems of limited training samples and the imaging differences caused by different imaging principles in infrared-visible image matching, a matching network based on intra-class transfer learning is proposed. The network consists of a feature extraction subnetwork and a matching metric subnetwork. The feature extraction subnetwork contains four convolutional neural network (CNN) branches, so the proposed network is referred to as PairsNet for short; the CNN branches extract features from infrared and visible images. Visible images are taken as the source domain and infrared images as the target domain for transfer learning. By reducing the intra-class maximum mean discrepancy (MMD) distance between sample features in the two domains, accurate alignment of the sample feature distributions is achieved for corresponding image classes in the source and target domains. The matching metric subnetwork connects two fully connected layers and one softmax layer in series to evaluate the matching degree of infrared-visible feature pairs. Infrared and visible image datasets were built for end-to-end training and testing. The experimental results show that the accuracy of PairsNet is 10.54% higher than that of the current fine-tuning method based on pre-trained models, indicating that the capability of a visible-image matching network can be effectively transferred to an infrared-visible image matching network.
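The intra-class MMD alignment described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, the Gaussian (RBF) kernel choice, and the per-class averaging are assumptions for the sake of the example.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel between the rows of a and b.
    d2 = (np.sum(a ** 2, axis=1)[:, None]
          + np.sum(b ** 2, axis=1)[None, :]
          - 2.0 * a @ b.T)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    # Squared maximum mean discrepancy between two feature sets.
    k_ss = rbf_kernel(source, source, sigma).mean()
    k_tt = rbf_kernel(target, target, sigma).mean()
    k_st = rbf_kernel(source, target, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st

def intra_class_mmd(src_feats, src_labels, tgt_feats, tgt_labels, sigma=1.0):
    # Average MMD over the classes shared by the source (visible) and
    # target (infrared) domains: each class's feature distributions are
    # aligned separately, as the abstract describes.
    classes = np.intersect1d(np.unique(src_labels), np.unique(tgt_labels))
    losses = [mmd2(src_feats[src_labels == c],
                   tgt_feats[tgt_labels == c], sigma)
              for c in classes]
    return float(np.mean(losses))
```

Minimizing this quantity as an auxiliary loss during training pulls the per-class feature distributions of the two domains together; when the two domains have identical features within each class, the loss is zero.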
Authors
MAO Yuanhong; HE Zhanzhuang; MA Zhong; BI Ruixing; WANG Zhuping (Xi'an Microelectronics Technology Institute, Xi'an 710065, China)
Source
Journal of Xi'an Jiaotong University (西安交通大学学报), 2020, Issue 1, pp. 49-55 (7 pages)
Indexed in: EI, CAS, CSCD, Peking University Core Journals (北大核心)
Funding
Supported by the Young Scientists Fund of the National Natural Science Foundation of China (61702413)
Keywords
infrared-visible images
image matching
transfer learning