
Motion capture method based on UAV control and optimal view selection (Cited: 3)
Abstract: Aiming at the difficulty, in existing techniques, of automatically positioning a camera at an accurate and appropriate location for human motion capture, a method of uncertainty estimation based on unmanned aerial vehicle (UAV) control and optimal viewpoint selection is proposed. First, an existing neural network is used to derive the 2D and 3D human pose from the current frame of the UAV video. Then, the 2D poses and associated 3D poses of the past k frames are used to optimize the global 3D human motion. Finally, the next optimal viewpoint is selected so as to minimize the uncertainty of the human pose estimate obtained from that viewpoint. Experiments on synthetic and public datasets show that the proposed method improves the 3D motion estimation results. When the next viewpoint is selected without physical constraints, the proposed method outperforms the baseline; for the simulated UAV flight mode, its performance is roughly comparable to that of a constantly orbiting camera.
Author: SUN Dong (School of Computer Science and Technology, Henan Institute of Technology, Xinxiang 453003, China)
Source: Transducer and Microsystem Technologies (《传感器与微系统》), CSCD / Peking University core journal, 2021, Issue 10, pp. 51-55 (5 pages)
Funding: Henan Province Science and Technology Research Project (202102210153); Henan Province Higher Education Distinguished Teacher Studio Project (教高{2019}618号).
Keywords: unmanned aerial vehicle (UAV) control; human motion capture; uncertainty; optimal viewpoint; motion estimation
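The third step summarized in the abstract, selecting the next viewpoint that minimizes pose-estimation uncertainty, can be sketched as a next-best-view search. The sketch below is a toy illustration, not the paper's actual method: it scores each candidate UAV position by the spread of the 3D joints after projection onto the image plane, a crude proxy for uncertainty (a heavily foreshortened view constrains the 2D-to-3D lifting poorly), and picks the candidate with the lowest score. The skeleton, candidate ring, and uncertainty proxy are all assumptions made purely for illustration.

```python
import numpy as np

def projected_spread(cam_pos, joints):
    """Variance of the joints projected onto the plane orthogonal to the
    viewing direction. A larger spread means less foreshortening and, under
    this toy model, lower pose-estimation uncertainty."""
    center = joints.mean(axis=0)
    d = center - cam_pos
    d = d / np.linalg.norm(d)            # unit viewing direction
    rel = joints - center
    proj = rel - np.outer(rel @ d, d)    # remove the depth component
    return proj.var()

def next_best_view(candidates, joints):
    """Pick the candidate camera position whose view minimizes the
    uncertainty proxy (i.e., maximizes projected joint spread)."""
    scores = [-projected_spread(c, joints) for c in candidates]
    return candidates[int(np.argmin(scores))]

# Toy skeleton: a person facing +y, arms spread along the x axis.
joints = np.array([[0.0, 0.0, 1.7],      # head
                   [0.0, 0.0, 1.0],      # hip
                   [-0.8, 0.0, 1.4],     # left hand
                   [0.8, 0.0, 1.4],      # right hand
                   [0.0, 0.0, 0.0]])     # feet
# Candidate UAV positions on a ring of radius 3 m at height 1.2 m.
angles = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)
candidates = np.stack([3 * np.cos(angles), 3 * np.sin(angles),
                       np.full(8, 1.2)], axis=1)
best = next_best_view(candidates, joints)
print(best)
```

As expected under this proxy, the selected position lies on the y axis: a frontal view sees the full arm span, whereas a side view foreshortens it.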
