Abstract
A high-precision visual odometry method is studied. First, the SURF algorithm, which combines low computational cost with high accuracy, is used to extract feature points, and the nearest-neighbor vector matching method is used to match them. The RANSAC algorithm is then applied to compute the transformation matrix between the camera coordinate systems of adjacent frames. A calibration method for the visual odometry is proposed; through it the displacement in the coordinate system of the robot's starting point is obtained, which keeps localization accurate when the robot's own posture changes. Circular-trajectory experiments verify the feasibility of the visual odometry algorithm.
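The abstract outlines a pipeline of SURF feature extraction, nearest-neighbor descriptor matching, and RANSAC-based estimation of the inter-frame camera transformation. The following is a minimal Python/OpenCV sketch of that generic pipeline, not the authors' implementation: the image file names, the camera matrix K, and all thresholds are illustrative assumptions, SURF requires the opencv-contrib non-free modules, and the recovered translation is only up to scale (the paper's calibration step, which resolves metric displacement in the robot's starting-point frame, is not reproduced here).

```python
import cv2
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy); replace with calibrated values.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical file names for two consecutive frames.
img1 = cv2.imread("frame_prev.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_curr.png", cv2.IMREAD_GRAYSCALE)

# 1) SURF feature detection and description (cv2.xfeatures2d is in opencv-contrib).
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# 2) Nearest-neighbor matching of descriptor vectors, with a ratio test
#    to discard ambiguous correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)
good = [p[0] for p in knn if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 3) RANSAC-based motion estimation: fit the essential matrix with outlier
#    rejection, then decompose it into the rotation R and (unit-scale)
#    translation t between the two camera coordinate systems.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

print("inter-frame rotation:\n", R)
print("inter-frame translation direction:\n", t.ravel())
```

In such a pipeline, the per-frame (R, t) estimates are chained and expressed in a fixed reference frame to accumulate displacement from the starting point, which is the role the paper assigns to its calibration procedure.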
Source
《传感器与微系统》
CSCD
Peking University Core Journals (北大核心)
2012, No. 2, pp. 26-29 (4 pages)
Transducer and Microsystem Technologies