Abstract
To address the problem that visual SLAM (simultaneous localization and mapping) algorithms in dynamic scenes are susceptible to feature points on moving objects, which leads to low pose-estimation accuracy and poor robustness, this paper proposed an RGB-D visual SLAM algorithm based on dynamic region culling. Firstly, the algorithm identified feature points belonging to movable objects with the help of semantic information, and then used multi-view geometry together with the camera's depth information to detect whether those feature points were currently stationary. Next, it fine-tuned the camera pose estimation using the feature points extracted from static objects together with the static feature points derived from movable objects, so that the system ran accurately and robustly in dynamic scenes. Finally, it carried out experimental verification on dynamic indoor scenes from the TUM dataset. Experiments show that in indoor dynamic environments the proposed algorithm effectively improves the accuracy of camera pose estimation and realizes map updating in dynamic environments, improving the accuracy of map construction while enhancing the robustness of the system.
Authors
Zhang Heng, Hou Jiahao, Liu Yanli (School of Information Engineering, East China Jiaotong University, Nanchang 330013, China; School of Electronic Information, Shanghai Dianji University, Shanghai 201306, China)
Source
Application Research of Computers (《计算机应用研究》)
CSCD
Peking University Core Journal (北大核心)
2022, No. 3, pp. 675-680 (6 pages)
Funding
National Natural Science Foundation of China (61963017, 61663010).
Jiangxi Provincial Outstanding Young Talents Program for Science and Technology Innovation (20192BCBL23004).
Keywords
dynamic environment
visual simultaneous localization and mapping
geometric constraints
robot localization