Abstract
In future manned lunar exploration, the complexity of the lunar surface environment and the limitations of the onboard systems mean that relying solely on onboard navigation systems is insufficient for achieving fully autonomous, precise navigation. A remote operation center is therefore needed to provide intelligent support for the large-scale movement of the manned rover. This paper addresses the problem of efficient navigation for manned rovers undertaking long-distance exploration in complex lunar terrain, and proposes a method for efficient, large-scale lunar navigation guided by regional landmarks. By analyzing the imaging area of the navigation camera on the manned rover, craters within the visible area along the entire route were organized into a lunar landmark map. In parallel, a navigation-camera landmark map was built from the craters observed in the rover's camera images, and subgraph matching was used to determine the correspondence between craters within the rover's visible area and those in lunar orbital satellite imagery, thereby solving for the rover's position and orientation. Experiments with simulation data showed that the proposed method could achieve efficient navigation during large-scale movement.
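The abstract outlines a pipeline of crater-landmark graph construction, subgraph matching, and pose solving. The snippet below is a minimal, hypothetical sketch of that idea, assuming 2D crater centers and diameters as inputs and using networkx subgraph isomorphism plus an SVD-based rigid alignment; all function names, tolerances, and library choices are illustrative assumptions, not the paper's implementation.

```python
# Sketch: match craters seen by the rover camera against a satellite crater map,
# then recover a 2D rigid transform (rover pose on the map) from the matches.
import itertools

import networkx as nx
import numpy as np


def build_crater_graph(centers, diameters):
    """Fully connected graph: nodes carry crater diameter, edges carry pairwise distance."""
    g = nx.Graph()
    for i, (c, d) in enumerate(zip(centers, diameters)):
        g.add_node(i, center=np.asarray(c, float), diameter=float(d))
    for i, j in itertools.combinations(g.nodes, 2):
        dist = np.linalg.norm(g.nodes[i]["center"] - g.nodes[j]["center"])
        g.add_edge(i, j, dist=dist)
    return g


def match_and_estimate_pose(map_graph, cam_graph, tol=5.0):
    """Find the camera-view craters inside the map graph, then solve rotation and
    translation from the matched crater centers via SVD (Kabsch alignment)."""
    node_match = nx.isomorphism.numerical_node_match("diameter", 0.0, atol=tol)
    edge_match = nx.isomorphism.numerical_edge_match("dist", 0.0, atol=tol)
    gm = nx.isomorphism.GraphMatcher(map_graph, cam_graph,
                                     node_match=node_match, edge_match=edge_match)
    mapping = next(gm.subgraph_isomorphisms_iter(), None)  # map node -> camera node
    if mapping is None:
        return None
    cam_pts = np.array([cam_graph.nodes[v]["center"] for v in mapping.values()])
    map_pts = np.array([map_graph.nodes[u]["center"] for u in mapping.keys()])
    cam_c, map_c = cam_pts.mean(axis=0), map_pts.mean(axis=0)
    h = (cam_pts - cam_c).T @ (map_pts - map_c)
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:           # guard against reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    t = map_c - r @ cam_c              # rover position in map coordinates
    return r, t
```

In this sketch the correspondence search is purely combinatorial; a practical system would additionally filter candidates by crater appearance and reject matches with large residuals, but those details are beyond what the abstract states.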
Authors
LIU Chuankai, WEI Xiaodong, WANG Xiaoxue, YUAN Chunqiang, LIU Qian, HU Xiaodong, HUANG Zhao
(Beijing Aerospace Control Center, Beijing 100094, China; Key Laboratory of Science and Technology on Aerospace Flight Dynamics, Beijing 100094, China; School of Astronautics, Beihang University, Beijing 100191, China)
Source
Manned Spaceflight
Indexed in CSCD and the Peking University Core Journals list
2024, No. 3, pp. 337-345 (9 pages)
Funding
National Natural Science Foundation of China (61972020, 62003925)
Equipment Pre-research Key Laboratory Fund of National Defense Science and Technology (19NY1208, 6142210200307, KGJ6142210210310)
Keywords
teleoperation
computer vision
subgraph matching
visual positioning