A 3D Modeling Method for Robot’s Hand-Eye Coordinated Grasping
Abstract: For robot hand-eye coordinated grasping, a 3D modeling method for common objects in the household environment is proposed. Exploiting the RGB-D sensor's ability to capture an RGB image and a depth image simultaneously, feature points and feature descriptors are extracted from the RGB image, and correspondences between adjacent frames are established by matching the descriptors. A RANSAC (RANdom SAmple Consensus) based three-point algorithm computes the relative pose between adjacent frames, and the result is then refined, based on loop closure, by minimizing the re-projection error with the Levenberg-Marquardt algorithm. With this method, a dense 3D point cloud model of an object is obtained simply by placing the object on a flat table and collecting 10 to 20 frames of data around it. 3D models are built for 20 common household objects suitable for grasping by a service robot. Experimental results show that the modeling error is about 1 mm for objects with diameters of 5 cm to 7 cm, which satisfies the requirements of pose determination in robot grasping.
Source: Robot (《机器人》), 2013, No. 2, pp. 151-155 (5 pages); indexed in EI and CSCD, Peking University Core Journal.
Funding: National 863 Program of China (2012AA100906); State Key Laboratory of Mechanical System and Vibration (MSV-MS-2010-01); Shanghai Municipal Education Commission Innovation Project (12ZZ014).
Keywords: 3D modeling; feature point; feature descriptor; ego-motion estimation; pose determination
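The frame-to-frame step described in the abstract (match features between adjacent RGB-D frames, back-project the matches to 3D, and estimate the relative rigid pose with a RANSAC loop around a three-point alignment) can be illustrated with a minimal sketch. This is not the authors' code: the ORB detector, the Kinect-style intrinsics FX/FY/CX/CY, the 1 cm inlier threshold, and all function names below are assumptions made purely for illustration.

```python
# Minimal sketch of feature-based relative pose estimation between two RGB-D frames.
# All parameters (intrinsics, detector, thresholds) are assumed, not taken from the paper.
import numpy as np
import cv2

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5   # assumed Kinect-style intrinsics

def backproject(kp_xy, depth_m):
    """Back-project pixel coordinates (u, v) with depth (metres) to camera-frame 3D points."""
    u, v = kp_xy[:, 0], kp_xy[:, 1]
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x, y, z], axis=1)

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst via SVD (Arun/Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def ransac_pose(src, dst, iters=500, thresh=0.01):
    """RANSAC around three-point rigid alignment; refits the pose on all inliers."""
    rng = np.random.default_rng(0)
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        R, t = rigid_transform(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = err < thresh                     # 1 cm inlier threshold (assumed)
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    R, t = rigid_transform(src[best_inliers], dst[best_inliers])
    return R, t, best_inliers

def match_frames(rgb1, depth1, rgb2, depth2):
    """Detect and match features in two RGB frames, return corresponding 3D point sets."""
    orb = cv2.ORB_create(1000)                     # any detector/descriptor works; ORB is an assumption
    k1, d1 = orb.detectAndCompute(rgb1, None)
    k2, d2 = orb.detectAndCompute(rgb2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    z1 = depth1[p1[:, 1].astype(int), p1[:, 0].astype(int)]   # depth images assumed in metres
    z2 = depth2[p2[:, 1].astype(int), p2[:, 0].astype(int)]
    valid = (z1 > 0) & (z2 > 0)                    # discard pixels with no depth reading
    return backproject(p1[valid], z1[valid]), backproject(p2[valid], z2[valid])

# Usage (hypothetical frames):
#   pts1, pts2 = match_frames(rgb_a, depth_a, rgb_b, depth_b)
#   R, t, inliers = ransac_pose(pts1, pts2)   # maps frame-A points into frame-B coordinates
```

In the paper the frame-to-frame poses are additionally refined over the closed loop by Levenberg-Marquardt minimization of the re-projection error; that global refinement step is omitted from this sketch.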