Abstract
Accurate head pose tracking is a key issue in achieving precise registration in indoor augmented reality systems. This paper proposes a novel approach based on multi-sensor data fusion to achieve optical head pose tracking with high accuracy. The approach employs two extended Kalman filters and one fusion filter in a multi-sensor environment to fuse the pose data from two complementary optical trackers: an inside-out tracker (IOT) with a single camera and an outside-in tracker (OIT) with two cameras. The aim is to reduce the pose errors of the optical tracking sensors. A representative experimental setup is designed to verify the approach. The experimental results show that, in the static state, the pose errors of the IOT and OIT are consistent with the theoretical results obtained by propagating the error covariance matrices from the respective image noises to the final pose errors, and that, in the dynamic state, the proposed multi-sensor data fusion approach applied to the combined optical tracker yields more accurate and more stable position and orientation outputs than using a single IOT or OIT alone.
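To illustrate the fusion step described in the abstract, the following is a minimal sketch, not the authors' implementation: it shows the standard covariance-weighted (track-to-track) fusion of two independent pose estimates, such as those produced by the IOT and OIT after each has been filtered by its own extended Kalman filter. Position only, for brevity; all variable names and numeric values are illustrative assumptions.

```python
# Minimal sketch of covariance-weighted fusion of two independent estimates
# (e.g. inside-out and outside-in tracker outputs after their own EKFs).
# Assumes uncorrelated estimation errors; values below are illustrative only.
import numpy as np

def fuse_estimates(x_iot, P_iot, x_oit, P_oit):
    """Fuse two independent estimates x with covariances P.

    Standard fusion for uncorrelated estimates:
        P_f = (P1^-1 + P2^-1)^-1
        x_f = P_f (P1^-1 x1 + P2^-1 x2)
    The fused covariance is never larger than either input covariance.
    """
    P1_inv = np.linalg.inv(P_iot)
    P2_inv = np.linalg.inv(P_oit)
    P_fused = np.linalg.inv(P1_inv + P2_inv)
    x_fused = P_fused @ (P1_inv @ x_iot + P2_inv @ x_oit)
    return x_fused, P_fused

if __name__ == "__main__":
    true_pos = np.array([0.50, 0.20, 1.10])      # metres (hypothetical ground truth)
    P_iot = np.diag([4e-4, 4e-4, 9e-4])          # assumed IOT position covariance
    P_oit = np.diag([1e-4, 1e-4, 2.5e-3])        # assumed OIT position covariance
    rng = np.random.default_rng(0)
    x_iot = true_pos + rng.multivariate_normal(np.zeros(3), P_iot)
    x_oit = true_pos + rng.multivariate_normal(np.zeros(3), P_oit)
    x_f, P_f = fuse_estimates(x_iot, P_iot, x_oit, P_oit)
    print("IOT estimate :", x_iot)
    print("OIT estimate :", x_oit)
    print("Fused        :", x_f)
    print("Fused std    :", np.sqrt(np.diag(P_f)))
```

The sketch shows why combining the two complementary trackers can give a more accurate and stable output than either alone: each axis of the fused estimate is weighted toward whichever sensor has the smaller error covariance on that axis.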
Published in
Acta Automatica Sinica (《自动化学报》), 2010, No. 9, pp. 1239-1249 (11 pages)
Indexed in: EI, CSCD, Peking University Core Journals (北大核心)
Funding
Supported by the National High Technology Research and Development Program of China (863 Program) (2006AA02Z4E5, 2008AA01Z303, 2009AA012106), the National Natural Science Foundation of China (60827003), and the Program for Changjiang Scholars and Innovative Research Teams in Universities (IRT0606).