Abstract: Facial Expression Recognition (FER) has been an active area of research wherever human-computer interaction occurs. Human psychology, emotions, and behaviors can be analyzed through FER. Classifiers used in FER have performed well on unoccluded faces but have been found to be constrained on occluded faces. Recently, Deep Learning Techniques (DLT) have gained popularity in real-world applications, including the recognition of human emotions. The human face reflects emotional states and human intentions, and an expression is the most natural and powerful way of communicating non-verbally. Systems that mediate communication between humans and machines are termed Human-Machine Interaction (HMI) systems. FER can improve HMI systems because human expressions convey useful information to an observer. This paper proposes a FER scheme called EECNN (Enhanced Convolution Neural Network with Attention mechanism) to recognize seven types of human emotions, with satisfying experimental results. The proposed EECNN achieved 89.8% accuracy in classifying the images.
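The abstract describes the architecture only at a high level (a convolutional network with an attention mechanism classifying seven emotions), so the sketch below is purely illustrative and is not the authors' EECNN: the layer sizes, the squeeze-and-excitation style channel-attention block, and the 48x48 grayscale input are all assumptions chosen only to show how attention can be slotted into a small CNN emotion classifier.

```python
# Illustrative sketch only: the abstract does not give EECNN's exact layers,
# so the depths, channel counts, and this squeeze-and-excitation style
# attention block are assumptions, not the authors' architecture.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Simple squeeze-and-excitation style channel attention (assumed form)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight feature maps channel-wise

class AttentionCNN(nn.Module):
    """Small CNN + attention classifier for 7 emotion classes."""
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            ChannelAttention(64),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Example: a batch of 48x48 grayscale face crops -> 7 emotion logits
logits = AttentionCNN()(torch.randn(4, 1, 48, 48))
print(logits.shape)  # torch.Size([4, 7])
```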
Funding: the National Natural Science Foundation of China (No. 61702323).
Abstract: Moving object segmentation (MOS) is one of the essential functions of the vision system of all robots, including medical robots. Deep learning-based MOS methods, especially deep end-to-end MOS methods, are actively investigated in this field. Foreground segmentation networks (FgSegNets) are representative deep end-to-end MOS methods proposed recently. This study explores a new mechanism to improve the spatial feature learning capability of FgSegNets while adding relatively few parameters. Specifically, we propose an enhanced attention (EA) module, a parallel connection of an attention module and a lightweight enhancement module, with sequential attention and residual attention as special cases. We also propose integrating EA with FgSegNet_v2 by taking the lightweight convolutional block attention module as the attention module and plugging the EA module after the two MaxPooling layers of the encoder. The derived new model is named FgSegNet_v2 EA. The ablation study verifies the effectiveness of the proposed EA module and the integration strategy. The results on the CDnet2014 dataset, which depicts human activities and vehicles captured in different scenes, show that FgSegNet_v2 EA outperforms FgSegNet_v2 by 0.08% and 14.5% under the settings of scene-dependent evaluation and scene-independent evaluation, respectively, which indicates the positive effect of EA on improving the spatial feature learning capability of FgSegNet_v2.
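The abstract specifies only the overall structure of the EA module: an attention branch and a lightweight enhancement branch connected in parallel, with sequential and residual attention as special cases, inserted after the encoder's MaxPooling layers. The sketch below follows that parallel structure but fills in assumed details (a simple channel-attention stand-in for the CBAM branch and a 1x1-convolution enhancement branch); it is not the FgSegNet_v2 EA implementation.

```python
# Minimal sketch of the parallel "enhanced attention" idea described above.
# The abstract does not give the CBAM or enhancement-module internals, so the
# channel attention and the 1x1-conv enhancement branch are assumptions chosen
# only to illustrate the parallel-connection structure.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Lightweight channel attention (stand-in for the CBAM attention branch)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.mlp(self.pool(x))

class EnhancedAttention(nn.Module):
    """Parallel connection: attention branch + lightweight enhancement branch."""
    def __init__(self, channels: int):
        super().__init__()
        self.attention = ChannelAttention(channels)
        # Assumed enhancement branch: a cheap 1x1 convolution.
        self.enhance = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        # Sum the two parallel branches; with an identity enhancement branch
        # this reduces to residual attention, matching the "special case"
        # framing in the abstract.
        return self.attention(x) + self.enhance(x)

# Example: plug EA after an encoder MaxPooling stage (placement as described above).
feat = torch.randn(2, 64, 120, 160)   # hypothetical pooled encoder feature map
out = EnhancedAttention(64)(feat)
print(out.shape)                       # torch.Size([2, 64, 120, 160])
```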