Three-dimensional spatio-temporal feature extraction method for action recognition  (Cited by: 7)
Abstract: Concerning the high cost of traditional action recognition algorithms on color video and the poor recognition performance caused by insufficient two-dimensional information, a new human action recognition method based on three-dimensional depth image sequences was proposed. On the temporal dimension, a Temporal Depth Model (TDM) was proposed to describe an action: under three orthogonal Cartesian planes, the depth image sequence was divided into several sub-actions, and the absolute differences between consecutive projected maps were accumulated to form depth motion maps describing the dynamic features of the action. On the spatial dimension, a Spatial Pyramid Histogram of Oriented Gradients (SPHOG) was computed from the TDM to obtain the final descriptor. Finally, a Support Vector Machine (SVM) was used to classify the descriptors. The method was validated on two authoritative datasets, MSR Action3D and MSRGesture3D, where the recognition rates reached 94.90% (cross-subject test) and 94.86% respectively. The experimental results demonstrate that the method computes descriptors from depth image sequences quickly, achieves a high recognition rate, and basically meets the real-time requirements of depth video sequence systems.
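The temporal part of the pipeline is straightforward to prototype. Below is a minimal sketch of projecting each depth frame onto three orthogonal Cartesian planes, splitting the sequence into sub-actions, and accumulating absolute frame-to-frame differences into depth motion maps. It assumes depth frames arrive as 2-D NumPy arrays; the bin count, depth range, and number of sub-actions are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def project_views(depth, depth_bins=256, depth_max=4000.0):
    """Project one depth frame onto the front (x-y), side (y-z) and top
    (z-x) Cartesian planes. Quantising depth into bins is an illustrative
    choice; the paper's exact projection may differ."""
    h, w = depth.shape
    front = depth.astype(np.float32)
    ys, xs = np.nonzero(depth)                      # foreground pixels
    zs = np.clip((depth[ys, xs] / depth_max * depth_bins).astype(int),
                 0, depth_bins - 1)                 # quantised depth
    side = np.zeros((h, depth_bins), np.float32)    # (y, z) occupancy
    top = np.zeros((depth_bins, w), np.float32)     # (z, x) occupancy
    side[ys, zs] = 1.0
    top[zs, xs] = 1.0
    return front, side, top

def depth_motion_maps(frames, n_sub=4):
    """Split the sequence into n_sub sub-actions; within each, accumulate
    the absolute differences between consecutive projected maps, as the
    temporal depth model in the abstract describes."""
    maps = []
    for chunk in np.array_split(np.arange(len(frames)), n_sub):
        if len(chunk) == 0:
            continue
        views = [project_views(frames[i]) for i in chunk]
        dmms = [np.zeros_like(v) for v in views[0]]
        for prev, cur in zip(views, views[1:]):
            for k in range(3):                      # front, side, top
                dmms[k] += np.abs(cur[k] - prev[k])
        maps.append(dmms)
    return maps   # up to n_sub triples of (front, side, top) motion maps
```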
Source: Journal of Computer Applications (《计算机应用》, CSCD, Peking University Core Journal), 2016, No. 2: 568-573, 579 (7 pages)
Funding: Major International Cooperation Project of the National Natural Science Foundation of China (61210005); Key Program of the National Natural Science Foundation of China (61331021)
Keywords: action recognition; three-dimensional depth image; Histogram of Oriented Gradients (HOG); spatio-temporal pyramid; depth motion map
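For the spatial part, the keywords point to a spatial-pyramid HOG encoding followed by an SVM. The sketch below (reusing depth_motion_maps from the previous sketch) uses scikit-image's hog and scikit-learn's LinearSVC as stand-ins; the pyramid levels, resize resolution, HOG parameters, and the concatenation-based pooling are illustrative choices rather than the paper's reported configuration.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

def sphog(motion_map, size=128, levels=(1, 2, 4)):
    """Concatenate HOG descriptors over a spatial pyramid: the whole map,
    then 2x2 and 4x4 grids of sub-windows."""
    img = resize(motion_map, (size, size), anti_aliasing=True)
    feats = []
    for n in levels:
        step = size // n
        for i in range(n):
            for j in range(n):
                window = img[i * step:(i + 1) * step,
                             j * step:(j + 1) * step]
                feats.append(hog(window, orientations=9,
                                 pixels_per_cell=(step // 4, step // 4),
                                 cells_per_block=(2, 2)))
    return np.concatenate(feats)

def describe(frames):
    """One pooled descriptor per depth sequence (hypothetical pooling:
    plain concatenation across sub-actions and views)."""
    return np.concatenate([sphog(m) for dmms in depth_motion_maps(frames)
                           for m in dmms])

# Hypothetical usage with user-supplied sequences and labels:
# X = np.stack([describe(seq) for seq in train_sequences])
# clf = LinearSVC().fit(X, train_labels)
```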