Abstract: To address the problems that facial expression recognition under natural conditions is easily affected by viewing angle, illumination, and occlusion, and that facial expression datasets are imbalanced across expression classes, a facial expression recognition method based on Res2Net is proposed. Res2Net50 serves as the backbone network for feature extraction, and in the preprocessing stage the images are augmented by random flipping, scaling, and cropping to improve the model's generalization. Generalized mean pooling (GeM) is introduced to focus on the more salient regions of an image and strengthen the model's robustness, and the Focal Loss function is adopted to counter expression class imbalance and misclassification, raising the recognition rate of hard-to-recognize expressions. The method reaches an accuracy of 70.41% on the FER2013 dataset, 1.53% higher than the original Res2Net50 network. The results show better accuracy for facial expression recognition under natural conditions.
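The two additions named above are standard, self-contained components. Below is a minimal PyTorch sketch of GeM pooling and a multi-class Focal Loss, assuming common default hyperparameters (p = 3, gamma = 2) rather than values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    """Generalized mean pooling: (mean(x^p))^(1/p) over the spatial dims.
    p -> 1 recovers average pooling; p -> inf approaches max pooling,
    so larger p emphasizes the most salient activations."""
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))  # learnable pooling exponent
        self.eps = eps

    def forward(self, x):                        # x: (N, C, H, W)
        x = x.clamp(min=self.eps).pow(self.p)
        return x.mean(dim=(-2, -1)).pow(1.0 / self.p)  # (N, C)

class FocalLoss(nn.Module):
    """Focal loss FL = -(1 - p_t)^gamma * log(p_t): down-weights easy,
    well-classified samples so training focuses on hard expressions."""
    def __init__(self, gamma=2.0):
        super().__init__()
        self.gamma = gamma

    def forward(self, logits, target):           # logits: (N, K), target: (N,)
        log_pt = F.log_softmax(logits, dim=-1).gather(1, target.unsqueeze(1)).squeeze(1)
        pt = log_pt.exp()
        return (-(1.0 - pt) ** self.gamma * log_pt).mean()
```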
Abstract: A novel image fusion network framework with an autonomous encoder and decoder is proposed to increase the visual impression of fused images by improving the quality of infrared and visible light image fusion. The network comprises an encoder module, a fusion layer, a decoder module, and an edge enhancement module. The encoder module uses an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original image. An edge enhancement module (EEM) is designed to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in different regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM extract features, which are then combined in the fusion layer to produce a fused image through the decoder. Three datasets were chosen to test the proposed algorithm. The experimental results demonstrate that the network effectively preserves background and detail information from both the infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
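The abstract does not spell out the modal maximum difference fusion rule; one plausible reading is a per-pixel selection of whichever modality deviates most from its local mean. The sketch below implements that reading in PyTorch; the window size and the hard selection are illustrative assumptions, not the paper's exact strategy.

```python
import torch
import torch.nn.functional as F

def max_difference_fuse(feat_ir, feat_vis, win=7):
    """Fuse two (N, C, H, W) feature maps by preferring, at each location,
    the modality whose response deviates most from its local mean, i.e.
    the locally more distinctive (higher-contrast) modality."""
    pad = win // 2
    mean_ir = F.avg_pool2d(feat_ir, win, stride=1, padding=pad)
    mean_vis = F.avg_pool2d(feat_vis, win, stride=1, padding=pad)
    diff_ir = (feat_ir - mean_ir).abs()          # local activity per modality
    diff_vis = (feat_vis - mean_vis).abs()
    w_ir = (diff_ir > diff_vis).float()          # hard per-pixel selection
    return w_ir * feat_ir + (1.0 - w_ir) * feat_vis
```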
Funding: Supported by the Natural Science Foundation of Guizhou Province (Grant No. 20161054), the Joint Natural Science Foundation of Guizhou Province (Grant No. LH20177226), the 2017 Special Project of New Academic Talent Training and Innovation Exploration of Guizhou University (Grant No. 20175788), and the National Natural Science Foundation of China (Grant No. 12205062).
Abstract: Autonomous driving technology has achieved remarkable progress with deep learning, and the vehicle detection and classification algorithm has become one of the critical technologies of autonomous driving systems. Vehicle instance segmentation performs instance-level semantic parsing of vehicle information, which is more accurate and reliable than object detection. However, existing instance segmentation algorithms still suffer from poor mask prediction accuracy and low detection speed. Therefore, this paper proposes an advanced real-time instance segmentation model named FIR-YOLACT, which fuses ICIoU (Improved Complete Intersection over Union) and Res2Net into the YOLACT algorithm. Specifically, the ICIoU function effectively resolves the degradation problem of the original CIoU loss function and improves training convergence speed and detection accuracy. A Res2Net module fused with ECA (Efficient Channel Attention) Net is added to the model's backbone network, which improves the multi-scale detection capability and mask prediction accuracy. Furthermore, the Cluster NMS (Non-Maximum Suppression) algorithm is introduced into the model's bounding box regression to enhance the performance of detecting similar occluded objects. The experimental results demonstrate the superiority of FIR-YOLACT over the baseline methods and the effectiveness of all components. The processing speed reaches 28 FPS, which meets the demands of real-time vehicle instance segmentation.
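Of the fused components, ECA has a well-known published form (global average pooling followed by a 1D convolution across channels with an adaptively sized kernel). A minimal PyTorch sketch follows; how the block is wired into FIR-YOLACT's Res2Net stages is not reproduced here.

```python
import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: reweight channels using a 1D conv over
    the globally pooled channel descriptor, avoiding dimensionality reduction."""
    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        # Kernel size adapts to the channel count, as in the ECA-Net paper.
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                         # x: (N, C, H, W)
        y = x.mean(dim=(-2, -1))                  # global average pool -> (N, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)  # 1D conv across channels
        return x * torch.sigmoid(y)[..., None, None]  # channel reweighting
```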
Abstract: Insufficient effectiveness of multi-scale feature extraction strategies is a problem in multimodal medical image fusion. To enrich the multi-scale structural information of the fusion results, a multimodal medical image fusion algorithm based on the residual multi-scale network (Res2Net), an interleaved dense network, and a spatial-channel fusion algorithm is proposed. The Res2Net encoder retains more semantic information while extracting multi-scale features; the interleaved dense network reduces the semantic gap between the encoder and decoder and enriches the structural and detail information of the fused image; a mask discriminator constrains the brain tumor lesion region, further improving the quality of the fused image; and the feature maps are fused by the spatial-channel fusion algorithm, reducing information redundancy between the modalities. The algorithm performs at a high level on the entropy of information (EN), mutual information (MI), structural similarity (SSIM), and multi-scale structural similarity (MS-SSIM) metrics, with EN improved by 6% and MI by 3%. The results show that the proposed algorithm achieves high fusion quality in both visual perception and metric-based evaluation.
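The first two reported metrics can be computed directly from intensity histograms. Below is a hedged NumPy sketch of EN and pairwise MI, assuming 8-bit grayscale images and 256-bin histograms; fusion papers typically report MI as MI(fused, source A) + MI(fused, source B).

```python
import numpy as np

def entropy(img):
    """EN = -sum p_i * log2(p_i) over the 256-bin intensity histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                                  # drop empty bins
    return float(-(p * np.log2(p)).sum())

def mutual_information(img_a, img_b):
    """MI from the joint 256x256 histogram of two same-sized images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                 bins=256, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()                     # joint distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginals
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())
```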
Abstract: To address background interference and scale variation in static crowd images, a multi-scale feature extraction module (Res2Net) is adopted to extract multi-scale features at a finer granularity, improving the counting performance for heads of different sizes, and a convolutional block attention module (CBAM) is introduced to raise the weight of crowd regions in both the channel and spatial domains, effectively mitigating background interference in high-density and complex crowd scenes. On this basis, the CBAM module is integrated into the Res2Net module to form a new multi-scale feature extraction module, CBAM-Res2Net. A dilation module is designed in the back-end network to extract deeper features and perform feature fusion and regression, generating high-quality density maps. Comparative experiments were conducted on the ShanghaiTech Part A, ShanghaiTech Part B, and UCF_CC_50 datasets; the model's mean absolute errors on these datasets are 61.4, 7.3, and 255.6, and its root mean square errors are 98.5, 10.8, and 310.2, respectively. Its overall performance surpasses the other compared algorithms, verifying the model's accuracy and robustness.
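CBAM itself follows a published design: channel attention from shared-MLP pooling statistics, then spatial attention from a convolution over channel-wise statistics. The PyTorch sketch below shows that block; exactly where it sits inside each Res2Net group is the paper's design and is only assumed here.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by
    spatial attention, each built from average- and max-pooled statistics."""
    def __init__(self, channels, reduction=16, kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(                 # shared MLP for channel attention
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel, padding=kernel // 2)

    def forward(self, x):                         # x: (N, C, H, W)
        avg = self.mlp(x.mean(dim=(-2, -1)))      # channel attn from avg pool
        mx = self.mlp(x.amax(dim=(-2, -1)))       # ... and from max pool
        x = x * torch.sigmoid(avg + mx)[..., None, None]
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s)) # spatial attention map
```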