Journal Articles
32 articles found.
1. Dynamic SLAM Visual Odometry Based on Instance Segmentation: A Comprehensive Review
Authors: Jiansheng Peng, Qing Yang, Dunhua Chen, Chengjun Yang, Yong Xu, Yong Qin. Computers, Materials & Continua (SCIE, EI), 2024, No. 1, pp. 167-196 (30 pages).
Dynamic Simultaneous Localization and Mapping (SLAM) in visual scenes is currently a major research area in fields such as robot navigation and autonomous driving. However, in the face of complex real-world environments, current dynamic SLAM systems struggle to achieve precise localization and map construction. With the advancement of deep learning, there has been increasing interest in deep learning-based dynamic SLAM visual odometry in recent years, and more researchers are turning to deep learning techniques to address the challenges of dynamic SLAM. Compared to dynamic SLAM systems based on deep learning methods such as object detection and semantic segmentation, dynamic SLAM systems based on instance segmentation can not only detect dynamic objects in the scene but also distinguish different instances of the same type of object, thereby reducing the impact of dynamic objects on the SLAM system's positioning. This article not only introduces traditional dynamic SLAM systems based on mathematical models but also provides a comprehensive analysis of existing instance segmentation algorithms and dynamic SLAM systems based on instance segmentation, comparing and summarizing their advantages and disadvantages. Through comparisons on datasets, it is found that instance segmentation-based methods have significant advantages in accuracy and robustness in dynamic environments. However, the real-time performance of instance segmentation algorithms hinders the widespread application of dynamic SLAM systems. In recent years, the rapid development of single-stage instance segmentation methods has brought hope for the widespread application of dynamic SLAM systems based on instance segmentation. Finally, possible future research directions and improvement measures are discussed for reference by relevant professionals.
Keywords: dynamic SLAM, instance segmentation, visual odometry
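As a concrete illustration of the central idea surveyed here, the sketch below drops feature points that fall on segmented dynamic instances before pose estimation. It is a generic sketch, not code from any reviewed system; the mask format and the set of dynamic class names are assumptions.

```python
import numpy as np

DYNAMIC_CLASSES = {"person", "car", "bicycle"}  # assumed label set, purely illustrative

def filter_dynamic_keypoints(keypoints, instance_masks, instance_labels):
    """Remove keypoints covered by any instance mask whose label is treated as dynamic.

    keypoints      : (N, 2) array of (u, v) pixel coordinates
    instance_masks : list of (H, W) boolean arrays, one per detected instance
    instance_labels: list of class-name strings, parallel to instance_masks
    """
    keep = np.ones(len(keypoints), dtype=bool)
    for mask, label in zip(instance_masks, instance_labels):
        if label not in DYNAMIC_CLASSES:
            continue
        u = keypoints[:, 0].astype(int).clip(0, mask.shape[1] - 1)
        v = keypoints[:, 1].astype(int).clip(0, mask.shape[0] - 1)
        inside = mask[v, u]            # True where a keypoint lies on this instance
        keep &= ~inside                # discard points on dynamic instances
    return keypoints[keep]
```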
2. Human Visual Attention Mechanism-Inspired Point-and-Line Stereo Visual Odometry for Environments with Uneven Distributed Features
Authors: Chang Wang, Jianhua Zhang, Yan Zhao, Youjie Zhou, Jincheng Jiang. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2023, No. 3, pp. 191-204 (14 pages).
Visual odometry is critical in visual simultaneous localization and mapping for robot navigation. However, the pose estimation performance of most current visual odometry algorithms degrades in scenes with unevenly distributed features, because dense features occupy excessive weight. Herein, a new human visual attention mechanism for point-and-line stereo visual odometry, called point-line-weight-mechanism visual odometry (PLWM-VO), is proposed to describe scene features in a global and balanced manner. A weight-adaptive model based on region partition and region growth is generated for the human visual attention mechanism, where sufficient attention is assigned to position-distinctive objects (sparse features in the environment). Furthermore, the sum of absolute differences algorithm is used to improve the accuracy of initialization for line features. Compared with the state-of-the-art method (ORB-VO), PLWM-VO shows a 36.79% reduction in the absolute trajectory error on the KITTI and EuRoC datasets. Although the time consumption of PLWM-VO is higher than that of ORB-VO, online test results indicate that PLWM-VO satisfies the real-time demand. The proposed algorithm not only significantly improves the environmental adaptability of visual odometry, but also quantitatively demonstrates the superiority of the human visual attention mechanism.
Keywords: visual odometry, human visual attention mechanism, environmental adaptability, unevenly distributed features
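The sum-of-absolute-differences criterion mentioned for line-feature initialization can be illustrated by a basic patch search along a rectified stereo scanline. This is a generic SAD sketch under that rectified-stereo assumption, not a description of PLWM-VO's actual matching step; the caller must keep the patch inside both images.

```python
import numpy as np

def sad_match_along_row(left, right, u, v, patch=5, max_disp=64):
    """Find the disparity at pixel (u, v) by minimizing the sum of absolute
    differences between a left-image patch and candidate right-image patches
    on the same row. left/right are 2-D grayscale arrays."""
    h = patch // 2
    ref = left[v - h:v + h + 1, u - h:u + h + 1].astype(np.float32)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp):
        if u - d - h < 0:                    # candidate patch would leave the image
            break
        cand = right[v - h:v + h + 1, u - d - h:u - d + h + 1].astype(np.float32)
        cost = np.abs(ref - cand).sum()      # sum of absolute differences
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```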
3. Overfitting Reduction of Pose Estimation for Deep Learning Visual Odometry (Cited by 4)
Authors: Xiaohan Yang, Xiaojuan Li, Yong Guan, Jiadong Song, Rui Wang. China Communications (SCIE, CSCD), 2020, No. 6, pp. 196-210 (15 pages).
Error or drift is frequently produced in pose estimation by geometric "feature detection and tracking" monocular visual odometry (VO) when the camera moves faster than 1.5 m/s. Meanwhile, in most deep learning-based VO methods the weight factors take fixed values, which easily leads to overfitting. A new measurement system for monocular visual odometry, named Deep Learning Visual Odometry (DLVO), is proposed based on neural networks. In this system, a Convolutional Neural Network (CNN) is used to extract features and perform feature matching, and a Recurrent Neural Network (RNN) is used for sequence modeling to estimate the camera's 6-DoF poses. Instead of fixed CNN weight values, a Bayesian distribution over the weight factors is introduced to effectively address network overfitting. The 18,726 frames of the KITTI dataset are used to train the network. This system increases the generalization ability of the network model during prediction. Compared with the original Recurrent Convolutional Neural Network (RCNN), the method reduces the test loss by 5.33%, and it is more effective at improving the robustness of translation and rotation estimates than traditional VO methods.
Keywords: visual odometry, neural network, pose estimation, Bayesian distribution, overfitting
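The key idea of replacing fixed network weights with a distribution can be illustrated with a single Bayesian linear layer: weights are sampled from a learned Gaussian at every forward pass and predictions are averaged over samples. This is a generic Bayes-by-backprop-style sketch, not the DLVO architecture; the layer sizes and parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def bayesian_linear(x, mu, rho, n_samples=20):
    """Forward pass of a linear layer whose weights follow N(mu, softplus(rho)^2).

    x   : (batch, in_features) inputs
    mu  : (in_features, out_features) posterior means
    rho : (in_features, out_features); std = log(1 + exp(rho)) stays positive
    Averaging predictions over sampled weights acts as a regularizer against overfitting.
    """
    sigma = np.log1p(np.exp(rho))
    outs = []
    for _ in range(n_samples):
        w = mu + sigma * rng.standard_normal(mu.shape)   # one weight sample
        outs.append(x @ w)
    return np.mean(outs, axis=0)

# toy usage: 4 input features mapped to a 6-DoF pose increment
x = rng.standard_normal((1, 4))
mu = 0.1 * rng.standard_normal((4, 6))
rho = -3.0 * np.ones((4, 6))          # small initial weight uncertainty
pose = bayesian_linear(x, mu, rho)
```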
4. Accurate parameter estimation of systematic odometry errors for two-wheel differential mobile robots (Cited by 3)
Authors: Changbae Jung, Woojin Chung. Journal of Measurement Science and Instrumentation (CAS), 2012, No. 3, pp. 268-272 (5 pages).
Odometry using incremental wheel encoder sensors provides relative robot pose estimation. However, the odometry suffers from the accumulation of kinematic modeling errors of the wheels as the robot's travel distance increases, so the systematic errors need to be calibrated. The University of Michigan Benchmark (UMBmark) method is a widely used calibration scheme for the systematic errors of two-wheel differential mobile robots. In this paper, accurate parameter estimation of systematic errors is proposed by extending the conventional method. The contributions of this paper can be summarized as two issues. The first contribution is to present new calibration equations that reduce the systematic odometry errors; the new equations were derived to overcome the limitations of conventional schemes. The second contribution is to propose a design guideline for the test track used in calibration experiments, since calibration performance can be improved by appropriate design of the test track. Simulation and experimental results show that accurate parameter estimation can be achieved by the proposed method.
Keywords: calibration, kinematic modeling errors, mobile robots, odometry, test tracks
5. Science Letters: Visual odometry for road vehicles—feasibility analysis (Cited by 2)
Authors: SOTELO Miguel-Ángel, GARCÍA Roberto, PARRA Ignacio, FERNÁNDEZ David, GAVILÁN Miguel, ÁLVAREZ Sergio, NARANJO José-Eugenio. Journal of Zhejiang University-Science A (Applied Physics & Engineering) (SCIE, EI, CAS, CSCD), 2007, No. 12, pp. 2017-2020 (4 pages).
Estimating the global position of a road vehicle without using GPS is a challenge that many scientists look forward to solving in the near future. Normally, inertial and odometry sensors are used to complement GPS measurements in an attempt to maintain vehicle odometry during GPS outages. Nonetheless, recent experiments have demonstrated that computer vision can also be used as a valuable source to provide what can be denoted as visual odometry. For this purpose, vehicle motion can be estimated using a non-linear, photogrammetric approach based on RAndom SAmple Consensus (RANSAC). The results prove that the detection and selection of relevant feature points is a crucial factor in the global performance of the visual odometry algorithm. The key issues for further improvement are discussed in this letter.
Keywords: 3D visual odometry, ego-motion estimation, RAndom SAmple Consensus (RANSAC), photogrammetric approach
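The RANSAC-based ego-motion step described above corresponds to the standard essential-matrix pipeline. A minimal OpenCV sketch (assuming calibrated intrinsics K and already-matched feature points; not the letter's exact implementation) looks like this:

```python
import cv2
import numpy as np

def estimate_ego_motion(pts_prev, pts_curr, K):
    """Estimate relative rotation R and unit-scale translation t between two frames
    from matched points using RANSAC on the essential matrix.

    pts_prev, pts_curr : (N, 2) float32 pixel coordinates of matched features
    K                  : (3, 3) camera intrinsic matrix
    """
    E, inliers = cv2.findEssentialMat(pts_prev, pts_curr, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K)   # cheirality check picks the valid (R, t)
    return R, t, inliers
```

Because a monocular setup only recovers translation up to scale, a real road-vehicle system would still need wheel odometry, known camera height, or stereo to fix the metric scale.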
6. A Study on Planetary Visual Odometry Optimization: Time Constraints and Reliability (Cited by 1)
Authors: Enrica Zereik, Davide Ducco, Fabio Frassinelli, Giuseppe Casalino. Computer Technology and Application, 2011, No. 5, pp. 378-388 (11 pages).
Robust and efficient vision systems are essential to support different kinds of autonomous robotic behaviors linked to the capability to interact with the surrounding environment, without relying on any a priori knowledge. Within space missions, above all those involving rovers that have to explore planetary surfaces, vision can play a key role in the improvement of autonomous navigation functionalities: besides obstacle avoidance and hazard detection along the way, vision can in fact provide accurate motion estimation in order to constantly monitor all paths executed by the rover. The present work regards the development of an effective visual odometry system, focusing as much as possible on issues such as continuous operating mode, system speed and reliability.
Keywords: visual odometry, stereo vision, speeded-up robust features (SURF), planetary rover
7. Semi-Direct Visual Odometry and Mapping System with RGB-D Camera
Authors: Xinliang Zhong, Xiao Luo, Jiaheng Zhao, Yutong Huang. Journal of Beijing Institute of Technology (EI, CAS), 2019, No. 1, pp. 83-93 (11 pages).
In this paper, a semi-direct visual odometry and mapping system is proposed with an RGB-D camera, which combines the merits of both feature-based and direct methods. The presented system directly estimates the camera motion of two consecutive RGB-D frames by minimizing the photometric error. To handle outliers and noise, a robust sensor model built upon the t-distribution and an error function mixing depth and photometric errors are used to enhance the accuracy and robustness. Local graph optimization based on key frames is used to reduce the accumulative error and refine the local map. The loop closure detection method, which combines an appearance similarity method with spatial location constraints, increases the speed of detection. Experimental results demonstrate that the proposed approach achieves higher accuracy in motion estimation and environment reconstruction compared to other state-of-the-art methods. Moreover, the proposed approach works in real time on a laptop without a GPU, which makes it attractive for robots equipped with limited computational resources.
Keywords: RGB-D simultaneous localization and mapping (SLAM), visual odometry, localization, 3D mapping, loop closure detection
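The direct alignment ingredient, warping pixels of one RGB-D frame into the next under a pose guess and robustly weighting the photometric residuals with a t-distribution, can be sketched as below. Nearest-neighbor sampling and a fixed degrees-of-freedom value are simplifications of this sketch, not details taken from the paper.

```python
import numpy as np

def photometric_residuals(gray1, depth1, gray2, K, R, t):
    """Warp frame-1 pixels into frame 2 using depth and pose (R, t) and return
    the intensity residuals (nearest-neighbor sampling for brevity)."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    h, w = gray1.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth1
    valid = z > 0
    # back-project valid pixels to 3-D points in frame-1 coordinates
    X = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=-1)
    Xw = X[valid] @ R.T + t.reshape(1, 3)          # transform into frame 2
    u2 = np.round(fx * Xw[:, 0] / Xw[:, 2] + cx).astype(int)
    v2 = np.round(fy * Xw[:, 1] / Xw[:, 2] + cy).astype(int)
    ok = (Xw[:, 2] > 0) & (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h)
    return gray2[v2[ok], u2[ok]].astype(np.float32) - gray1[valid][ok].astype(np.float32)

def t_distribution_weights(r, nu=5.0):
    """Robust weights from a Student-t noise model; large residuals are down-weighted."""
    sigma2 = np.mean(r ** 2) + 1e-6
    return (nu + 1.0) / (nu + r ** 2 / sigma2)
```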
8. PC-VINS-Mono: A Robust Mono Visual-Inertial Odometry with Photometric Calibration
Authors: Yao Xiao, Xiaogang Ruan, Xiaoqing Zhu. Journal of Autonomous Intelligence, 2018, No. 2, pp. 29-35 (7 pages).
Feature detection and tracking, which rely heavily on the gray-value information of images, is a very important procedure for Visual-Inertial Odometry (VIO), and the tracking results significantly affect both the accuracy of the estimation and the robustness of VIO. In environments with high-contrast lighting, images captured by an auto-exposure camera change frequently with the exposure time. As a result, the gray value of the same feature varies from frame to frame, which poses a large challenge to the feature detection and tracking procedure. Moreover, this problem is further aggravated by the nonlinear camera response function and lens attenuation. However, very few VIO methods take full advantage of photometric camera calibration or discuss its influence on VIO. In this paper, we propose a robust monocular visual-inertial odometry, PC-VINS-Mono, which can be understood as an extension of the open-source VIO pipeline VINS-Mono with the capability of photometric calibration. We evaluate the proposed algorithm on a public dataset. Experimental results show that, with photometric calibration, our algorithm achieves better performance compared to VINS-Mono.
Keywords: photometric calibration, visual-inertial odometry, simultaneous localization and mapping, robot navigation
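Photometric calibration in this context typically means undoing the camera response function, vignetting, and exposure before feature tracking. The sketch below is a generic version of that correction; the gamma-style inverse response and the radial vignette are placeholder assumptions, not the calibration model used in PC-VINS-Mono.

```python
import numpy as np

def photometrically_correct(image, exposure_time, inv_response, vignette):
    """Recover scene irradiance from an 8-bit image.

    image        : (H, W) uint8 raw intensities
    exposure_time: exposure of this frame (seconds)
    inv_response : (256,) lookup table, inverse of the camera response function
    vignette     : (H, W) attenuation map in (0, 1], 1.0 at the image center
    """
    energy = inv_response[image]                  # undo the non-linear response
    return (energy / (vignette * exposure_time)).astype(np.float32)

# placeholder calibration data, for illustration only
H, W = 480, 640
inv_response = np.linspace(0.0, 1.0, 256) ** 2.2          # assumed gamma-like curve
yy, xx = np.mgrid[0:H, 0:W]
r = np.hypot(xx - W / 2, yy - H / 2) / np.hypot(W / 2, H / 2)
vignette = 1.0 - 0.3 * r ** 2                              # assumed radial falloff
```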
9. KLT-VIO: Real-time Monocular Visual-Inertial Odometry
Authors: Yuhao Jin, Hang Li, Shoulin Yin. IJLAI Transactions on Science and Engineering, 2024, No. 1, pp. 8-16 (9 pages).
This paper proposes a Visual-Inertial Odometry (VIO) algorithm that relies solely on a monocular camera and an Inertial Measurement Unit (IMU), capable of real-time self-position estimation for robots during movement. By integrating the optical flow method, the algorithm tracks both point and line features in images simultaneously, significantly reducing computational complexity and the matching time for line feature descriptors. Additionally, this paper advances the triangulation method for line features, using depth information from line segment endpoints to determine their Plücker coordinates in three-dimensional space. Tests on the EuRoC datasets show that the proposed algorithm outperforms PL-VIO in terms of processing speed per frame, with an approximate 5% to 10% improvement in both relative pose error (RPE) and absolute trajectory error (ATE). These results demonstrate that the proposed VIO algorithm is an efficient solution suitable for low-computing platforms requiring real-time localization and navigation.
Keywords: visual-inertial odometry, optical flow, point features, line features, bundle adjustment
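Tracking point features by optical flow instead of descriptor matching is exactly what OpenCV's pyramidal KLT tracker provides. A minimal sketch follows; the window size, pyramid depth, and termination criteria are illustrative defaults, not the paper's settings.

```python
import cv2
import numpy as np

def track_points(prev_gray, curr_gray, prev_pts):
    """Track feature points from prev_gray to curr_gray with pyramidal KLT.
    prev_pts is an (N, 1, 2) float32 array, e.g. from cv2.goodFeaturesToTrack."""
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    good = status.ravel() == 1               # keep only successfully tracked points
    return prev_pts[good], curr_pts[good]

# typical usage with two consecutive grayscale frames:
# prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
#                                    qualityLevel=0.01, minDistance=10)
# p0, p1 = track_points(prev_gray, curr_gray, prev_pts)
```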
10. M2C-GVIO: motion manifold constraint aided GNSS-visual-inertial odometry for ground vehicles (Cited by 1)
Authors: Tong Hua, Ling Pei, Tao Li, Jie Yin, Guoqing Liu, Wenxian Yu. Satellite Navigation (EI, CSCD), 2023, No. 1, pp. 77-91, I0003 (16 pages).
Visual-Inertial Odometry (VIO) has been developed from Simultaneous Localization and Mapping (SLAM) as a low-cost and versatile sensor fusion approach and has attracted increasing attention in ground vehicle positioning. However, VIOs usually show degraded performance in challenging environments and degenerated motion scenarios. In this paper, we propose a ground vehicle-oriented VIO algorithm based on the Multi-State Constraint Kalman Filter (MSCKF) framework. Based on a unified motion manifold assumption, we derive the measurement model of the manifold constraints, including velocity, rotation, and translation constraints. We then present a robust filter-based algorithm dedicated to ground vehicles, whose key is real-time manifold noise estimation and adaptive measurement update. Besides, GNSS position measurements are loosely coupled into our approach, where the transformation between the GNSS and VIO frames is optimized online. Finally, we theoretically analyze the system observability matrix and observability measures. Our algorithm is tested in simulation and on public datasets including the Brno Urban dataset and the KAIST Urban dataset. We compare the performance of our algorithm with classical VIO algorithms (MSCKF, VINS-Mono, R-VIO, ORB_SLAM3) and GVIO algorithms (GNSS-MSCKF, VINS-Fusion). The results demonstrate that our algorithm is more robust than the other compared algorithms, showing competitive position accuracy and computational efficiency.
Keywords: sensor fusion, visual-inertial odometry, motion manifold constraint
11. A robust RGB-D visual odometry with moving object detection in dynamic indoor scenes
Authors: Xianglong Zhang, Haiyang Yu, Yan Zhuang. IET Cyber-Systems and Robotics (EI), 2023, No. 1, pp. 79-88 (10 pages).
Simultaneous localisation and mapping (SLAM) is the basis for many robotic applications. As the front end of SLAM, visual odometry is mainly used to estimate the camera pose. In dynamic scenes, classical methods are deteriorated by dynamic objects and cannot achieve satisfactory results. In order to improve the robustness of visual odometry in dynamic scenes, this paper proposes a dynamic region detection method based on RGB-D images. Firstly, all feature points on the RGB image are classified as dynamic or static using a triangle constraint and the epipolar geometric constraint successively. Meanwhile, the depth image is clustered using the K-Means method. The classified feature points are mapped to the clustered depth image, and a dynamic or static label is assigned to each cluster according to the number of dynamic feature points. Subsequently, a dynamic region mask for the RGB image is generated based on the dynamic clusters in the depth image, and the feature points covered by the mask are all removed. The remaining static feature points are used to estimate the camera pose. Finally, experimental results are provided to demonstrate the feasibility and performance.
Keywords: dynamic indoor scenes, moving object detection, RGB-D SLAM, visual odometry
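The epipolar-geometry test used to flag dynamic feature points can be sketched directly: estimate a fundamental matrix from all matches, then mark points whose distance to their epipolar line exceeds a threshold. The RANSAC settings and the 2-pixel threshold below are illustrative, and the paper's additional triangle constraint is not shown.

```python
import cv2
import numpy as np

def flag_dynamic_points(pts1, pts2, threshold=2.0):
    """Return a boolean array marking matches that violate the epipolar constraint.
    pts1, pts2 : (N, 2) float32 matched pixel coordinates from two frames."""
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    lines = cv2.computeCorrespondEpilines(pts1.reshape(-1, 1, 2), 1, F).reshape(-1, 3)
    a, b, c = lines[:, 0], lines[:, 1], lines[:, 2]
    # distance of each second-frame point from its corresponding epipolar line
    dist = np.abs(a * pts2[:, 0] + b * pts2[:, 1] + c) / np.sqrt(a ** 2 + b ** 2)
    return dist > threshold
```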
12. Lightweight hybrid visual-inertial odometry with closed-form zero velocity update (Cited by 5)
Authors: QIU Xiaochen, ZHANG Hai, FU Wenxing. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2020, No. 12, pp. 3344-3359 (16 pages).
Visual-Inertial Odometry (VIO) fuses measurements from a camera and an Inertial Measurement Unit (IMU) to achieve performance better than that of the individual sensors. Hybrid VIO is an extended Kalman filter-based solution which augments features with long tracking length into the state vector of the Multi-State Constraint Kalman Filter (MSCKF). In this paper, a novel hybrid VIO is proposed, which focuses on utilizing low-cost sensors while considering both computational efficiency and positioning precision. The proposed algorithm introduces several novel contributions. Firstly, by deducing an analytical error transition equation, one-dimensional inverse depth parametrization is used to parametrize the augmented feature state. This modification significantly improves the computational efficiency and numerical robustness, as a result achieving higher precision. Secondly, for better handling of static scenes, a novel closed-form Zero velocity UPdaTe (ZUPT) method is proposed. ZUPT is modeled as a measurement update for the filter rather than roughly forbidding propagation, which has the advantage of correcting the overall state through the correlations in the filter covariance matrix. Furthermore, online spatial and temporal calibration is also incorporated. Experiments are conducted on both a public dataset and real data. The results demonstrate the effectiveness of the proposed solution by showing that its performance is better than the baseline and the state-of-the-art algorithms in terms of both efficiency and precision. Related software is open-sourced to benefit the community.
Keywords: inverse depth parametrization, Kalman filter, online calibration, visual-inertial odometry, zero velocity update
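Modeling ZUPT as a filter measurement update rather than freezing propagation amounts to feeding a zero-velocity pseudo-measurement through the standard Kalman update, so the covariance cross-correlations also correct the other states. The sketch below is generic; the state layout and noise level are assumptions, not the paper's.

```python
import numpy as np

def zupt_update(x, P, vel_idx=slice(3, 6), sigma_v=0.01):
    """Kalman measurement update with the pseudo-measurement 'velocity = 0'.

    x       : (n,) state vector containing a 3-D velocity block at vel_idx
    P       : (n, n) state covariance
    sigma_v : standard deviation of the zero-velocity measurement noise (m/s)
    """
    n = x.size
    H = np.zeros((3, n))
    H[:, vel_idx] = np.eye(3)            # the measurement observes the velocity block
    R = (sigma_v ** 2) * np.eye(3)
    z = np.zeros(3)                      # stationary-platform measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(n) - K @ H) @ P      # correlations in P propagate the correction
    return x_new, P_new
```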
13. Fast and accurate visual odometry from a monocular camera (Cited by 2)
Authors: Xin YANG, Tangli XUE, Hongcheng LUO, Jiabin GUO. Frontiers of Computer Science (SCIE, EI, CSCD), 2019, No. 6, pp. 1326-1336 (11 pages).
This paper aims at a semi-dense visual odometry system that is accurate, robust, and able to run in real time on mobile devices such as smartphones, AR glasses and small drones. The key contributions of our system include: 1) a modified pyramidal Lucas-Kanade algorithm which incorporates spatial and depth constraints for fast and accurate camera pose estimation; 2) adaptive image resizing based on inertial sensors, which greatly accelerates tracking with little accuracy degradation; and 3) an ultrafast binary feature description based directly on the intensities of a resized and smoothed image patch around each pixel, which is sufficiently effective for relocalization. A quantitative evaluation on public datasets demonstrates that our system achieves better tracking accuracy and up to about 2X faster tracking speed compared to the state-of-the-art monocular SLAM system LSD-SLAM. For the relocalization task, our system is 2.0X~4.6X faster than DBoW2 and achieves similar accuracy.
Keywords: visual odometry, mobile devices, direct tracking, relocalization, inertial sensing, binary features
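The "ultrafast binary feature" idea, comparing raw intensities of a smoothed patch at fixed pixel pairs, can be sketched as a BRIEF-style descriptor. The pair sampling, patch size and smoothing below are assumptions for illustration, not the paper's exact design; the caller must keep the patch inside the image.

```python
import cv2
import numpy as np

rng = np.random.default_rng(42)
PATCH = 16
# fixed random pixel pairs inside the patch, generated once and reused for all keypoints
PAIRS = rng.integers(0, PATCH, size=(256, 4))

def binary_descriptor(gray, u, v):
    """256-bit descriptor from intensity comparisons inside a smoothed patch
    centred on (u, v). Returns a uint8 array of 32 bytes."""
    half = PATCH // 2
    patch = gray[v - half:v + half, u - half:u + half]
    patch = cv2.GaussianBlur(patch, (5, 5), 1.5)
    bits = patch[PAIRS[:, 0], PAIRS[:, 1]] < patch[PAIRS[:, 2], PAIRS[:, 3]]
    return np.packbits(bits.astype(np.uint8))

def hamming_distance(d1, d2):
    """Matching cost between two descriptors (lower means more similar)."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())
```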
14. Design of an enhanced visual odometry by building and matching compressive panoramic landmarks online (Cited by 2)
Authors: Wei LU. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2015, No. 2, pp. 152-165 (14 pages).
Efficient and precise localization is a prerequisite for the intelligent navigation of mobile robots. Traditional visual localization systems, such as visual odometry (VO) and simultaneous localization and mapping (SLAM), suffer from two shortcomings: a drift problem caused by accumulated localization error, and erroneous motion estimation due to illumination variation and moving objects. In this paper, we propose an enhanced VO by introducing a panoramic camera into the traditional stereo-only VO system. Benefiting from the 360° field of view, the panoramic camera is responsible for three tasks: (1) detecting road junctions and building a landmark library online; (2) correcting the robot's position when the landmarks are revisited with any orientation; (3) working as a panoramic compass when the stereo VO cannot provide reliable positioning results. To use the large-sized panoramic images efficiently, the concept of compressed sensing is introduced into the solution and an adaptive compressive feature is presented. Combined with our previous two-stage local binocular bundle adjustment (TLBBA) stereo VO, the new system can obtain reliable positioning results in quasi-real time. Experimental results of challenging long-range tests show that our enhanced VO is much more accurate and robust than the traditional VO, thanks to the compressive panoramic landmarks built online.
Keywords: visual odometry, panoramic landmark, landmark matching, compressed sensing, adaptive compressive feature
15. Real-time Visual Odometry Estimation Based on Principal Direction Detection on Ceiling Vision (Cited by 2)
Authors: Han Wang, Wei Mou, Gerald Seet, Mao-Hai Li, M.W.S. Lau, Dan-Wei Wang. International Journal of Automation and Computing (EI, CSCD), 2013, No. 5, pp. 397-404 (8 pages).
In this paper, we present a novel algorithm for odometry estimation based on ceiling vision. The main contribution of this algorithm is the introduction of principal direction detection, which can greatly reduce the error accumulation problem found in most visual odometry estimation approaches. The principal direction is defined based on the fact that the ceiling is filled with artificial vertical and horizontal lines, which can be used as a reference for the robot's current heading direction. The proposed approach can operate in real time and performs well even under camera disturbance. A moving low-cost RGB-D camera (Kinect), mounted on a robot, is used to continuously acquire point clouds. Iterative closest point (ICP) is the common way to estimate the current camera position by registering the currently captured point cloud to the previous one; however, its performance suffers from the data association problem or requires pre-alignment information. The performance of the proposed principal direction detection approach does not rely on data association knowledge: using this method, two point clouds are properly pre-aligned, and ICP can then be used to fine-tune the transformation parameters and minimize the registration error. Experimental results demonstrate the performance and stability of the proposed system under disturbance in real time. Several indoor tests are carried out to show that the proposed visual odometry estimation method can help to significantly improve the accuracy of simultaneous localization and mapping (SLAM).
Keywords: visual odometry, ego-motion, principal direction, ceiling vision, simultaneous localization and mapping (SLAM)
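Detecting the principal direction of a ceiling dominated by orthogonal line structures can be approximated by voting over line orientations. The Hough-based sketch below (thresholds illustrative, and not the paper's point-cloud formulation) folds all angles into a 90° range so that perpendicular lines reinforce a single dominant direction.

```python
import cv2
import numpy as np

def principal_direction(ceiling_gray):
    """Estimate the dominant line orientation (radians in [0, pi/2)) of a ceiling
    image whose vertical/horizontal structures serve as a heading reference.
    Returns None if no lines are detected."""
    edges = cv2.Canny(ceiling_gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)
    if lines is None:
        return None
    thetas = lines[:, 0, 1] % (np.pi / 2)          # fold perpendicular lines together
    hist, bin_edges = np.histogram(thetas, bins=90, range=(0.0, np.pi / 2))
    k = int(np.argmax(hist))                       # most-voted orientation bin
    return 0.5 * (bin_edges[k] + bin_edges[k + 1])
```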
16. Robust and efficient edge-based visual odometry
Authors: Feihu Yan, Zhaoxin Li, Zhong Zhou. Computational Visual Media (SCIE, EI, CSCD), 2022, No. 3, pp. 467-481 (15 pages).
Visual odometry, which aims to estimate relative camera motion between sequential video frames, has been widely used in the fields of augmented reality, virtual reality, and autonomous driving. However, it is still quite challenging for state-of-the-art approaches to handle low-texture scenes. In this paper, we propose a robust and efficient visual odometry algorithm that directly utilizes edge pixels to track the camera pose. In contrast to direct methods, we choose the reprojection error to construct the optimization energy, which can effectively cope with illumination changes. A distance transform map built upon edge detection for each frame is used to improve tracking efficiency. A novel weighted edge alignment method together with sliding window optimization is proposed to further improve the accuracy. Experiments on public datasets show that the method is comparable to state-of-the-art methods in terms of tracking accuracy, while being faster and more robust.
Keywords: visual odometry (VO), edge structure, distance transform, low-texture
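One common way edge-based odometry exploits a distance transform is to precompute, per frame, the distance of every pixel to the nearest edge and then read that map at the pixels where edge points project under a candidate pose. The sketch below shows only that lookup; it is a generic illustration, not the paper's exact weighted energy, and the projection step is left to the caller.

```python
import cv2
import numpy as np

def edge_distance_map(gray):
    """Distance (in pixels) from every pixel to the nearest Canny edge."""
    edges = cv2.Canny(gray, 50, 150)
    # distanceTransform measures distance to the nearest zero pixel,
    # so invert the edge map first (edges become zeros)
    return cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)

def edge_alignment_cost(dist_map, projected_pts):
    """Sum of distances between projected edge points and the nearest image edge.

    projected_pts : (N, 2) pixel coordinates of edge points reprojected with a
                    candidate camera pose (projection itself not shown here)
    """
    h, w = dist_map.shape
    u = np.clip(projected_pts[:, 0].astype(int), 0, w - 1)
    v = np.clip(projected_pts[:, 1].astype(int), 0, h - 1)
    return float(dist_map[v, u].sum())
```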
17. Visual SLAM Based on Object Detection Network: A Review
Authors: Jiansheng Peng, Dunhua Chen, Qing Yang, Chengjun Yang, Yong Xu, Yong Qin. Computers, Materials & Continua (SCIE, EI), 2023, No. 12, pp. 3209-3236 (28 pages).
Visual simultaneous localization and mapping (SLAM) is crucial in robotics and autonomous driving. However, traditional visual SLAM faces challenges in dynamic environments. To address this issue, researchers have proposed semantic SLAM, which combines object detection, semantic segmentation, instance segmentation, and visual SLAM. Despite the growing body of literature on semantic SLAM, there is currently a lack of comprehensive research on the integration of object detection and visual SLAM. Therefore, this study gathers information from multiple databases and reviews the relevant literature using specific keywords. It focuses on visual SLAM based on object detection, covering different aspects. Firstly, it discusses the current research status and challenges in this field, highlighting methods for incorporating semantic information from object detection networks into odometry, loop closure detection, and map construction. It also compares the characteristics and performance of various object detection algorithms used in visual SLAM. Lastly, it provides an outlook on future research directions and emerging trends in visual SLAM. Research has shown that visual SLAM based on object detection offers significant improvements over traditional SLAM in dynamic point removal, data association, point cloud segmentation, and other techniques; it can improve the robustness and accuracy of the entire SLAM system and can run in real time. With the continuous optimization of algorithms and improvements in hardware, object detection-based visual SLAM has great potential for development.
Keywords: object detection, visual SLAM, visual odometry, loop closure detection, semantic map
18. An RGB-D Camera Based Visual Positioning System for Assistive Navigation by a Robotic Navigation Aid (Cited by 6)
Authors: He Zhang, Lingqiu Jin, Cang Ye. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, No. 8, pp. 1389-1400 (12 pages).
There are about 253 million people with visual impairment worldwide. Many of them use a white cane and/or a guide dog as the mobility tool for daily travel. Despite decades of effort, an electronic navigation aid that can replace the white cane is still a work in progress. In this paper, we propose an RGB-D camera based visual positioning system (VPS) for real-time localization of a robotic navigation aid (RNA) in an architectural floor plan for assistive navigation. The core of the system is the combination of a new 6-DOF depth-enhanced visual-inertial odometry (DVIO) method and a particle filter localization (PFL) method. DVIO estimates the RNA's pose by using data from an RGB-D camera and an inertial measurement unit (IMU). It extracts the floor plane from the camera's depth data and tightly couples the floor plane, the visual features (with and without depth data), and the IMU's inertial data in a graph optimization framework to estimate the device's 6-DOF pose. Due to the use of the floor plane and depth data from the RGB-D camera, DVIO has better pose estimation accuracy than the conventional VIO method. To reduce the accumulated pose error of DVIO for navigation in a large indoor space, we developed the PFL method to locate the RNA in the floor plan. PFL leverages geometric information from the architectural CAD drawing of an indoor space to further reduce the error of the DVIO-estimated pose. Based on the VPS, an assistive navigation system is developed for the RNA prototype to assist a visually impaired person in navigating a large indoor space. Experimental results demonstrate that: 1) the DVIO method achieves better pose estimation accuracy than the state-of-the-art VIO method and performs real-time pose estimation (18 Hz pose update rate) on a UP Board computer; 2) PFL reduces the DVIO-accrued pose error by 82.5% on average and allows for accurate wayfinding (endpoint position error ≤ 45 cm) in large indoor spaces.
Keywords: assistive navigation, pose estimation, robotic navigation aid (RNA), simultaneous localization and mapping, visual-inertial odometry, visual positioning system (VPS)
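The PFL idea, propagating particles with the odometry-estimated motion and re-weighting them against the floor plan, can be outlined generically. This is only a skeleton under assumed state layout and noise values; the likelihood callable is a placeholder that a real system would replace with a comparison between observed geometry and the CAD floor plan.

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_step(particles, weights, d_pose, likelihood, motion_noise=(0.02, 0.02, 0.01)):
    """One predict/update/resample cycle of a 2-D particle filter.

    particles  : (N, 3) array of (x, y, yaw) hypotheses in the floor-plan frame
    d_pose     : (dx, dy, dyaw) odometry increment from the VIO front end
    likelihood : callable mapping particles -> (N,) observation likelihoods
    """
    # predict: apply the odometry increment in each particle's own frame, plus noise
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles[:, 0] += c * d_pose[0] - s * d_pose[1]
    particles[:, 1] += s * d_pose[0] + c * d_pose[1]
    particles[:, 2] += d_pose[2]
    particles += rng.normal(0.0, motion_noise, size=particles.shape)

    # update: re-weight by how well each hypothesis explains the observation
    weights = weights * likelihood(particles)
    weights /= weights.sum() + 1e-12

    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```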
19. Novel method to calibrate kinematic parameters for mobile robots (Cited by 3)
Authors: 施家栋, 刘娟, 王建中. Journal of Beijing Institute of Technology (EI, CAS), 2015, No. 1, pp. 91-96 (6 pages).
In order to reduce the systematic errors of dead reckoning and improve localization accuracy, a new model for the systematic error of a mobile robot is defined and a UMBmark-based method for calibrating and compensating the systematic error is presented. Three dominant causes of systematic errors are considered: imprecise average wheel diameter, uncertainty about the effective wheelbase, and unequal wheel diameters. The new model accounts for the coupling effect of these three factors during robot localization. Three coefficients for calibrating the average wheel diameter, the effective wheelbase, and the left and right wheel diameters are obtained, and these coefficients are then used to improve the robot kinematic equations. Experiments on the dual-wheel drive mobile robot DaNI show that the presented method achieves a significant improvement in location accuracy compared with the UMBmark calibration.
Keywords: odometry, systematic errors, position, calibration, mobile robot
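Applying three calibration coefficients to the kinematic update of a two-wheel differential-drive robot can be sketched as below. The coefficient names and the way they are folded into the wheel diameters and wheelbase are generic placeholders for illustration, not the exact parametrization used in the paper.

```python
import numpy as np

def dead_reckon_step(pose, enc_left, enc_right, ticks_per_rev,
                     d_nominal, b_nominal, k_d=1.0, k_b=1.0, k_lr=1.0):
    """Update pose (x, y, theta) from one pair of wheel-encoder increments.

    k_d  : correction for the average wheel diameter
    k_b  : correction for the effective wheelbase
    k_lr : correction for unequal left/right wheel diameters (right/left ratio)
    """
    d_left = k_d * (2.0 / (1.0 + k_lr)) * d_nominal        # calibrated wheel diameters
    d_right = k_d * (2.0 * k_lr / (1.0 + k_lr)) * d_nominal
    b = k_b * b_nominal                                     # calibrated wheelbase

    s_left = np.pi * d_left * enc_left / ticks_per_rev      # travelled arc lengths
    s_right = np.pi * d_right * enc_right / ticks_per_rev
    ds = 0.5 * (s_left + s_right)
    dtheta = (s_right - s_left) / b

    x, y, theta = pose
    x += ds * np.cos(theta + 0.5 * dtheta)                  # midpoint heading update
    y += ds * np.sin(theta + 0.5 * dtheta)
    theta += dtheta
    return np.array([x, y, theta])
```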
20. Survey and evaluation of monocular visual-inertial SLAM algorithms for augmented reality (Cited by 5)
Authors: Jinyu LI, Bangbang YANG, Danpeng CHEN, Nan WANG, Guofeng ZHANG, Hujun BAO. Virtual Reality & Intelligent Hardware, 2019, No. 4, pp. 386-410 (25 pages).
Although VSLAM/VISLAM has achieved great success, it is still difficult to quantitatively evaluate the localization results of different kinds of SLAM systems from the perspective of augmented reality due to the lack of an appropriate benchmark. For AR applications in practice, a variety of challenging situations (e.g., fast motion, strong rotation, serious motion blur, dynamic interference) may easily be encountered, since a home user may not move the AR device carefully and the real environment may be quite complex. In addition, the frequency of camera loss should be minimized, and recovery from the failure status should be fast and accurate for a good AR experience. Existing SLAM datasets/benchmarks generally only provide an evaluation of pose accuracy, and their camera motions are somewhat simple and do not fit well the common cases in mobile AR applications. With the above motivation, we build a new visual-inertial dataset as well as a series of evaluation criteria for AR. We also review the existing monocular VSLAM/VISLAM approaches with detailed analyses and comparisons. In particular, we select 8 representative monocular VSLAM/VISLAM approaches/systems and quantitatively evaluate them on our benchmark. Our dataset, sample code and corresponding evaluation tools are available at the benchmark website http://www.zjucvg.net/eval-vislam/.
Keywords: visual-inertial SLAM, odometry, tracking, localization, mapping, augmented reality