Journal Article

Online 3D reconstruction of spray-painting workpieces based on a TOF camera (cited by: 13)

TOF camera based 3D-object modeling for spraying production line
Abstract: Trajectory planning for spray-painting production lines and self-programming of spray-painting robots are largely based on online 3D measurement of the workpiece. In recent years the TOF camera, a cost-effective 3D imaging device, has been applied to workpiece measurement. To address the TOF camera's limited imaging field of view and the fact that a single frame captures only local contour depth information, a 3D reconstruction algorithm for TOF point-cloud video streams is proposed, based on in-place rotation of the workpiece and GPU-accelerated computing. Building on signed distance function (SDF) point-cloud fusion, the method uses a spatial hash table to store and manage the massive point-cloud data, and introduces the fast odometry from vision (FOVIS) algorithm for pose estimation, improving the efficiency and robustness of in-situ 3D reconstruction of spray-painting workpieces. Experiments on a spray-painting production-line simulation platform show that the average frame rate during online reconstruction reaches 58 f/s, the failure rate is ≤2%, and graphics memory usage is about 25%, providing complete point-cloud data for subsequent 3D measurement and spray trajectory planning.
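The core idea the abstract describes, fusing each depth frame into a signed distance field whose voxels are stored in a spatial hash so that only space near the surface consumes memory, can be sketched as follows. This is a minimal illustration of SDF fusion with spatial hashing, not the paper's GPU implementation: the voxel size, truncation distance, class name, and the use of an unsigned distance (the true sign would come from the camera ray direction) are all our simplifying assumptions.

```python
import numpy as np

VOXEL = 0.01   # voxel edge length in metres (assumed for illustration)
TRUNC = 0.03   # SDF truncation distance in metres (assumed)

class SparseSDF:
    """Sparse SDF volume: voxels live in a spatial hash (a dict keyed by
    integer voxel coordinates), so only space near the surface is stored."""

    def __init__(self):
        self.sdf = {}     # (i, j, k) -> truncated distance to surface
        self.weight = {}  # (i, j, k) -> accumulated fusion weight

    def integrate(self, points):
        """Fuse one frame of surface points by a weighted running average,
        touching only voxels inside the truncation band of each point."""
        r = int(np.ceil(TRUNC / VOXEL))  # band half-width in voxels
        for p in points:
            base = np.floor(p / VOXEL).astype(int)
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    for dk in range(-r, r + 1):
                        key = tuple(base + (di, dj, dk))
                        centre = (np.array(key) + 0.5) * VOXEL
                        d = np.linalg.norm(centre - p)  # unsigned here;
                        if d > TRUNC:                   # signing needs the ray
                            continue
                        w = self.weight.get(key, 0.0)
                        s = self.sdf.get(key, 0.0)
                        # running average: repeated observations refine the SDF
                        self.sdf[key] = (s * w + d) / (w + 1.0)
                        self.weight[key] = w + 1.0

# Fuse one tiny synthetic "frame" of two surface points.
frame = np.array([[0.10, 0.10, 0.50], [0.11, 0.10, 0.50]])
vol = SparseSDF()
vol.integrate(frame)
print(len(vol.sdf))  # number of allocated voxels near the surface
```

The hash-table layout is what makes the approach scale to the "massive point cloud" case the abstract mentions: a dense grid of the same extent would allocate every voxel in the bounding box, while here memory grows only with surface area, and per-frame updates touch a constant-size neighbourhood per point.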
Source: Journal of Electronic Measurement and Instrumentation (电子测量与仪器学报), CSCD, Peking University Core, 2017, No. 12, pp. 1991-1998 (8 pages)
Funding: National Natural Science Foundation of China (61571184, 61733004, U1613209)
Keywords: spraying robot; TOF camera; 3D-object modeling; graphics processing unit (GPU)