Abstract: A method for the visual analysis of student attention based on single-image head pose estimation is proposed. Cascaded regression trees are used to locate facial landmarks, and a rigid model obtained from statistical measurements is introduced as a 3D face approximation, enabling single-image head pose estimation via PnP (perspective-n-point) mapping. Finally, the students' gaze is projected onto the video image of the teacher's lecture to visualize student attention. Experimental results show that on the Biwi benchmark the method reduces the mean head-pose angular error to 4.88°; the method exhibits coarse-grained computational parallelism, achieving a 2.37× speedup with 4-thread parallel computation; and visual attention analysis of three typical learning states (concentrating, attending, and ignoring) is realized.
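The pipeline above ends with a rotation estimated by PnP (e.g., the rotation vector returned by OpenCV's `solvePnP`, convertible to a matrix with `cv2.Rodrigues`); head-pose angles are then read off that rotation. The sketch below shows one common way to extract pitch/yaw/roll from a rotation matrix in plain NumPy. The ZYX decomposition is an assumption, not necessarily the convention used by the paper or the Biwi benchmark.

```python
import numpy as np

def rotation_to_euler(R):
    """Convert a 3x3 rotation matrix to (pitch, yaw, roll) in degrees.

    Uses the common ZYX (yaw-pitch-roll) decomposition; head-pose
    datasets differ in conventions, so treat this as one plausible choice.
    """
    sy = np.hypot(R[0, 0], R[1, 0])
    if sy > 1e-6:                      # non-degenerate case
        pitch = np.arctan2(R[2, 1], R[2, 2])
        yaw   = np.arctan2(-R[2, 0], sy)
        roll  = np.arctan2(R[1, 0], R[0, 0])
    else:                              # gimbal lock: pitch/roll coupled
        pitch = np.arctan2(-R[1, 2], R[1, 1])
        yaw   = np.arctan2(-R[2, 0], sy)
        roll  = 0.0
    return np.degrees([pitch, yaw, roll])
```

With these angles, the gaze direction can be projected onto the lecture video plane to produce the attention visualization described above.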
Funding: National Natural Science Foundation of China (No. 61562057); Gansu Science and Technology Plan Project (No. 18JR3RA104).
Abstract: With the development of the short-video industry, videos and bullet-screen comments have become important channels for spreading public opinion. Public attitudes can be obtained in a timely manner through sentiment analysis of bullet-screen comments, which also eases the management of online public opinion. A convolutional neural network model based on multi-head attention is proposed to address the problem of effectively modeling relations among words and identifying key words in emotion classification tasks where texts are short and complete context is lacking. First, word positions are encoded so that the model can use the order information of input sequences. Second, a multi-head attention mechanism is used to obtain semantic representations in different subspaces, effectively capturing internal relevance, strengthening dependencies among words, and highlighting the emotional weights of key emotion words. Then, dilated convolution is used to enlarge the receptive field and extract more features. On this basis, the multi-head attention mechanism is combined with a convolutional neural network to model and analyze seven emotion categories of bullet-screen comments. Experiments from the perspectives of both model and dataset validate the effectiveness of the approach. Finally, the emotions of bullet-screen comments are visualized to provide data support for hot-event control and other applications.
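The core of the model described above is multi-head attention: queries, keys, and values are projected into several subspaces, attention is computed independently in each, and the heads are concatenated. A minimal NumPy sketch of that mechanism is shown below; the layer sizes and weight shapes are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def multi_head_attention(X, Wq, Wk, Wv, Wo, num_heads):
    """Scaled dot-product attention with num_heads parallel subspaces.

    X: (seq_len, d_model); Wq/Wk/Wv/Wo: (d_model, d_model).
    Each head attends in its own (d_model / num_heads)-dimensional
    subspace, capturing different word-word relations.
    """
    seq_len, d_model = X.shape
    d_head = d_model // num_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # split into heads: (num_heads, seq_len, d_head)
    split = lambda M: M.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    Qh, Kh, Vh = split(Q), split(K), split(V)
    scores = Qh @ Kh.transpose(0, 2, 1) / np.sqrt(d_head)
    # numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)
    heads = weights @ Vh                              # (H, seq_len, d_head)
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ Wo
```

In the described model, the output of this layer would feed a dilated convolution stack before the seven-way emotion classifier; the positional encoding mentioned in the abstract would be added to `X` beforehand.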