Abstract
For robot hand-eye coordinated grasping, a 3D modeling method for common objects in the household environment is proposed. Exploiting the RGB-D sensor's ability to capture an RGB image and a depth image simultaneously, feature points and feature descriptors are extracted from the RGB image, and correspondences between adjacent frames are established by matching the feature descriptors. A RANSAC (RANdom SAmple Consensus) based three-point algorithm computes the relative pose between adjacent frames, and, based on loop closure, the result is refined by minimizing the re-projection error with the Levenberg-Marquardt algorithm. With this method, a dense 3D point cloud model of an object can be obtained simply by placing the object on a flat table and collecting 10 to 20 frames of data around it. 3D models are built for 20 common household objects suitable for service-robot grasping. Experimental results show that the error is about 1 mm for models 5 cm to 7 cm in diameter, which satisfies the requirements of pose determination for robot grasping.
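The core step of the pipeline above — computing the relative pose between adjacent frames from matched 3D feature points with a RANSAC-based three-point algorithm — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the descriptor matching has already produced paired 3D points `P` and `Q` (back-projected from the depth images), draws minimal 3-point samples, fits a rigid transform by the standard SVD (Kabsch) method, and refits on the largest inlier set. The function names, thresholds, and iteration counts are illustrative choices.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ P @ R.T + t (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection correction keeps R a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def ransac_pose(P, Q, iters=200, thresh=0.01, seed=0):
    """RANSAC over minimal 3-point samples; refit the pose on the best inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)
        R, t = rigid_transform(P[idx], Q[idx])
        err = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        inliers = err < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return rigid_transform(P[best_inliers], Q[best_inliers])
```

In the full method, chaining these frame-to-frame poses around the object closes the loop, after which the Levenberg-Marquardt refinement distributes the accumulated drift by minimizing re-projection error over all frames.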
Source
Robot (《机器人》), 2013, No. 2, pp. 151-155 (5 pages)
Indexed in: EI, CSCD, Peking University Core Journals
Funding
National 863 Program of China (2012AA100906)
State Key Laboratory of Mechanical System and Vibration (MSV-MS-2010-01)
Shanghai Municipal Education Commission Innovation Project (12ZZ014)
Keywords
3D modeling
feature point
feature descriptor
ego-motion estimation
pose determination