Abstract: To address the susceptibility of facial expression recognition under natural conditions to viewing angle, illumination, and occlusion, as well as the class imbalance across expression categories in facial expression datasets, a facial expression recognition method based on Res2Net is proposed. Res2Net50 serves as the backbone network for feature extraction; in the preprocessing stage, images are augmented by random flipping, scaling, and cropping to improve the model's generalization. Generalized mean pooling (GeM) is introduced to focus on the more salient regions of the image and strengthen the model's robustness, and the Focal Loss function is adopted to counter expression class imbalance and misclassification, raising the recognition rate for hard-to-recognize expressions. The method achieves an accuracy of 70.41% on the FER2013 dataset, a 1.53% improvement over the original Res2Net50 network. The results show better accuracy for facial expression recognition under natural conditions.
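Both components named above are standard and easy to illustrate. Below is a minimal PyTorch sketch of GeM pooling and Focal Loss; the class names, the default exponent p = 3, and gamma = 2 are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of GeM pooling and Focal Loss (hyperparameters assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    """Generalized mean pooling: a learnable exponent p interpolates
    between average pooling (p = 1) and max pooling (p -> inf),
    emphasizing the most salient activations."""
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.ones(1) * p)
        self.eps = eps

    def forward(self, x):                      # x: (B, C, H, W)
        x = x.clamp(min=self.eps).pow(self.p)  # raise activations to power p
        x = F.adaptive_avg_pool2d(x, 1)        # spatial mean
        return x.pow(1.0 / self.p).flatten(1)  # (B, C)

class FocalLoss(nn.Module):
    """Focal Loss down-weights easy examples so training focuses on
    hard, misclassified expressions; optional alpha gives per-class
    weights for imbalanced datasets."""
    def __init__(self, gamma=2.0, alpha=None):
        super().__init__()
        self.gamma, self.alpha = gamma, alpha

    def forward(self, logits, target):
        ce = F.cross_entropy(logits, target, weight=self.alpha, reduction="none")
        pt = torch.exp(-ce)                    # probability of the true class
        return ((1 - pt) ** self.gamma * ce).mean()
```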
Abstract: A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible light picture fusion. The network comprises an encoder module, fusion layer, decoder module, and edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original picture. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM module extract features, which are then combined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The results of the experiments demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
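The abstract does not give the exact formulation of the modal maximum difference fusion strategy; the sketch below is one plausible reading, weighting each spatial region by how strongly the infrared and visible feature maps differ there. The function name, the per-channel normalization, and the blending rule are all assumptions for illustration.

```python
# Hedged sketch of a "modal maximum difference" style fusion rule.
import torch

def max_difference_fuse(feat_ir: torch.Tensor, feat_vis: torch.Tensor,
                        eps: float = 1e-6) -> torch.Tensor:
    """feat_ir, feat_vis: (B, C, H, W) encoder features of each modality."""
    diff = (feat_ir - feat_vis).abs()                          # per-pixel modal difference
    w_ir = diff / (diff.amax(dim=(2, 3), keepdim=True) + eps)  # normalize to [0, 1]
    # Regions where IR differs most from visible keep more IR content;
    # elsewhere the two modalities are blended, preserving contrast.
    return w_ir * feat_ir + (1 - w_ir) * feat_vis
```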
Funding: Supported by the Natural Science Foundation of Guizhou Province (Grant No. 20161054), the Joint Natural Science Foundation of Guizhou Province (Grant No. LH20177226), the 2017 Special Project of New Academic Talent Training and Innovation Exploration of Guizhou University (Grant No. 20175788), and the National Natural Science Foundation of China (Grant No. 12205062).
Abstract: Autonomous driving technology has made many outstanding achievements with deep learning, and the vehicle detection and classification algorithm has become one of the critical technologies of autonomous driving systems. Vehicle instance segmentation can perform instance-level semantic parsing of vehicle information, which is more accurate and reliable than object detection. However, existing instance segmentation algorithms still suffer from poor mask prediction accuracy and low detection speed. Therefore, this paper proposes an advanced real-time instance segmentation model named FIR-YOLACT, which fuses ICIoU (Improved Complete Intersection over Union) and Res2Net into the YOLACT algorithm. Specifically, the ICIoU function effectively solves the degradation problem of the original CIoU loss function and improves training convergence speed and detection accuracy. A Res2Net module fused with ECA (Efficient Channel Attention) Net is added to the model's backbone network, which improves multi-scale detection capability and mask prediction accuracy. Furthermore, the Cluster NMS (Non-Maximum Suppression) algorithm is introduced into the model's bounding box regression to enhance the performance of detecting similarly occluded objects. The experimental results demonstrate the superiority of FIR-YOLACT over the baseline methods and the effectiveness of all its components. The processing speed reaches 28 FPS, which meets the demands of real-time vehicle instance segmentation.
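ECA is a published attention module, so its core computation can be sketched directly; how FIR-YOLACT wires it into the Res2Net backbone blocks may differ from this minimal PyTorch version. The kernel-size heuristic follows the common ECA formulation.

```python
# Minimal sketch of the Efficient Channel Attention (ECA) module.
import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Adaptive 1D kernel size derived from the channel count (must be odd).
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                                   # x: (B, C, H, W)
        y = self.pool(x)                                    # (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(1, 2))        # local cross-channel interaction
        y = torch.sigmoid(y.transpose(1, 2).unsqueeze(-1))  # per-channel weights
        return x * y                                        # reweight the feature map
```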
Abstract: Insufficient effectiveness of multi-scale feature strategies for feature extraction is an open problem in multi-modal medical image fusion. To enrich the multi-scale structural information of the fusion result, a multi-modal medical image fusion algorithm is proposed that combines a residual multi-scale network (Res2Net), an interleaved dense network, and a spatial-channel fusion algorithm. The Res2Net encoder retains more semantic information while extracting multi-scale features; the interleaved dense network reduces the semantic gap between decoder and encoder and enriches the structural and detail information of the fused image; a mask discriminator constrains the brain-tumor lesion region, further improving the quality of the fused image; and fusing the feature maps with the spatial-channel fusion algorithm reduces information redundancy between the modalities. The algorithm performs at a high level on the entropy of information (EN), mutual information (MI), structural similarity (SSIM), and multi-scale structural similarity (MS_SSIM) metrics, improving EN by 6% and MI by 3%. The results show that the proposed algorithm achieves high fusion quality in both visual perception and metric evaluation.
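Since Res2Net's hierarchical split-and-sum bottleneck is the common thread of the encoders in these papers, a simplified PyTorch sketch may help; the 1x1 projection convolutions of the full Res2Net block are omitted here, and the channel count and scale are illustrative defaults.

```python
# Simplified sketch of a Res2Net-style multi-scale bottleneck.
import torch
import torch.nn as nn

class Res2NetBlock(nn.Module):
    def __init__(self, channels: int = 64, scale: int = 4):
        super().__init__()
        assert channels % scale == 0
        self.scale = scale
        width = channels // scale
        # One 3x3 conv per group except the first (identity pass-through).
        self.convs = nn.ModuleList(
            nn.Conv2d(width, width, 3, padding=1) for _ in range(scale - 1)
        )

    def forward(self, x):                          # x: (B, C, H, W)
        xs = torch.chunk(x, self.scale, dim=1)     # split channels into `scale` groups
        out, prev = [xs[0]], None
        for i, conv in enumerate(self.convs):
            # Hierarchical connection: add the previous group's output
            # before convolving, so each group sees a growing receptive field.
            inp = xs[i + 1] if prev is None else xs[i + 1] + prev
            prev = torch.relu(conv(inp))
            out.append(prev)
        return torch.cat(out, dim=1) + x           # residual connection
```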
Funding: Project supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LZ22A040003) and the National Natural Science Foundation of China (Grant No. 52027809).
Abstract: The electronic properties of two-dimensional (2D) materials can be strongly modulated by localized strain. The typical spatial resolution of conventional Kelvin probe force microscopy (KPFM) is limited to a few hundred nanometers, making it difficult to characterize the localized electronic properties of 2D materials at the nanoscale. Herein, tip-enhanced Raman spectroscopy (TERS) is combined with KPFM to break this restriction. A TERS scan is conducted on ReS2 bubbles deposited on a rough Au thin film to obtain the strain distribution from the Raman peak shift. The localized contact potential difference (CPD) is then inversely calculated with higher spatial resolution using the strain measured by TERS and the CPD-strain working curve obtained with conventional KPFM and atomic force microscopy. This method enhances the spatial resolution of CPD measurements and can potentially be used to characterize the localized electronic properties of 2D materials.
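The inverse calculation reduces to composing two calibrations: Raman peak shift to strain, then strain to CPD via the working curve. The NumPy sketch below assumes a linear working curve and uses made-up calibration values purely for illustration; the paper's actual curve and shift-to-strain factor may differ.

```python
# Hedged sketch of the strain-to-CPD inverse mapping (all values illustrative).
import numpy as np

# Coarse calibration data from conventional KPFM/AFM (assumed values).
strain_cal = np.array([0.0, 0.2, 0.4, 0.6, 0.8])    # strain (%)
cpd_cal = np.array([0.30, 0.27, 0.24, 0.21, 0.18])  # CPD (V)

# Fit the working curve CPD(strain); a degree-1 polynomial is an assumption.
coeffs = np.polyfit(strain_cal, cpd_cal, deg=1)

def cpd_from_ters(peak_shift_map: np.ndarray, shift_per_strain: float) -> np.ndarray:
    """Convert a TERS Raman peak-shift map (cm^-1) into a CPD map (V).
    shift_per_strain: calibration factor (cm^-1 per % strain)."""
    strain_map = peak_shift_map / shift_per_strain  # peak shift -> local strain
    return np.polyval(coeffs, strain_map)           # strain -> CPD via working curve
```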