Abstract
Vision sensors play an important role in the indoor positioning of unmanned aerial vehicles (UAVs). Traditional feature-point-based visual odometry algorithms describe and match features through low-level brightness relationships; their anti-interference ability is insufficient, which can lead to matching errors or even failures, so the accuracy and robustness of the navigation system need to be improved. Since indoor environments contain rich semantic information, a semantic-information-aided visual/inertial fusion localization method for UAVs is proposed. Firstly, the indoor semantic information is modeled as factors and fused with the traditional visual odometry method. Then, based on the inertial pre-integration method, inertial constraints are added to the factor graph optimization to further improve the positioning accuracy and robustness of the UAV in complex dynamic environments. Finally, the positioning accuracy of the algorithm is analyzed through indoor UAV flight experiments. The experimental results show that the proposed method has higher accuracy and robustness than the traditional visual odometry algorithm.
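The pipeline summarized above can be framed as a standard factor-graph estimation problem. The following is a minimal, illustrative sketch of how such a graph might be assembled, assuming the GTSAM library as the optimization back end (an assumption; the paper does not name its implementation). All measurement values are placeholders, and the "semantic" factor is modeled here simply as a camera-to-object relative pose constraint on a detected-object landmark, which is only one possible way to realize the semantic factor described in the abstract.

```python
# Illustrative only: a toy two-keyframe factor graph combining a visual-odometry
# relative-pose factor, an IMU pre-integration factor, and a "semantic" factor
# modeled as a camera-to-object relative pose. All measurement values are made up.
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X, V, B, L  # poses, velocities, IMU bias, semantic landmark

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Anchor the first keyframe pose, velocity and IMU bias with priors.
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(),
                                 gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-2))))
graph.add(gtsam.PriorFactorVector(V(0), np.zeros(3),
                                  gtsam.noiseModel.Isotropic.Sigma(3, 1e-2)))
graph.add(gtsam.PriorFactorConstantBias(B(0), gtsam.imuBias.ConstantBias(),
                                        gtsam.noiseModel.Isotropic.Sigma(6, 1e-3)))

# Visual-odometry factor: relative pose between keyframes 0 and 1 (placeholder value).
vo_delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.5, 0.0, 0.0))
graph.add(gtsam.BetweenFactorPose3(X(0), X(1), vo_delta,
                                   gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 5e-2))))

# IMU pre-integration factor: integrate raw samples between the two keyframes.
pim_params = gtsam.PreintegrationParams.MakeSharedU(9.81)
pim_params.setAccelerometerCovariance(np.eye(3) * 1e-3)
pim_params.setGyroscopeCovariance(np.eye(3) * 1e-4)
pim_params.setIntegrationCovariance(np.eye(3) * 1e-5)
pim = gtsam.PreintegratedImuMeasurements(pim_params, gtsam.imuBias.ConstantBias())
for _ in range(20):  # e.g. 100 Hz IMU over 0.2 s while hovering
    pim.integrateMeasurement(np.array([0.0, 0.0, 9.81]), np.zeros(3), 0.01)
graph.add(gtsam.ImuFactor(X(0), V(0), X(1), V(1), B(0), pim))

# "Semantic" factor: a detected indoor object treated as a pose landmark L(0),
# constrained by the camera-to-object relative pose estimated from the detection.
cam_to_obj = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(2.0, 0.0, 0.0))
graph.add(gtsam.BetweenFactorPose3(X(1), L(0), cam_to_obj,
                                   gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-1))))

# Initial guesses and joint optimization of all states.
initial.insert(X(0), gtsam.Pose3())
initial.insert(X(1), vo_delta)
initial.insert(V(0), np.zeros(3))
initial.insert(V(1), np.zeros(3))
initial.insert(B(0), gtsam.imuBias.ConstantBias())
initial.insert(L(0), vo_delta.compose(cam_to_obj))
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(X(1)))  # refined keyframe pose
```

In a real system the semantic constraint would come from object detections rather than a hand-set relative pose, and the graph would grow incrementally over many keyframes instead of being optimized once over two.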
Authors
LYU Pin; HE Rong; LAI Ji-zhou; YANG Zi-han; YUAN Cheng (College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China)
Source
Navigation Positioning and Timing (《导航定位与授时》)
CSCD
2022, No. 4, pp. 60-69 (10 pages)
Funding
National Natural Science Foundation of China (61973160)
Equipment Major Basic Research Project (51405-02B02)
Keywords
Visual/inertial localization
Semantic information
Factor graph optimization
UAV