Journal Articles
4 articles found
1. ST-LaneNet: Lane Line Detection Method Based on Swin Transformer and LaneNet
Authors: Yufeng Du, Rongyun Zhang, Peicheng Shi, Linfeng Zhao, Bin Zhang, Yaming Liu. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2024, No. 1, pp. 130-145 (16 pages).
The advancement of autonomous driving heavily relies on accurate lane line detection. As deep learning and computer vision technologies evolve, a variety of deep learning-based methods for lane line detection have been proposed by researchers in the field. However, owing to the simple appearance of lane lines and their lack of distinctive features, other objects with similar local appearances easily interfere with the detection process. The precision of lane line detection is further limited by the unpredictable quantity and diversity of lane lines. To address these challenges, we propose a novel deep learning approach for lane line detection that combines the Swin Transformer with LaneNet (called ST-LaneNet). The experimental results show that the true positive detection rate reaches 97.53% for easy lanes and 96.83% for difficult lanes (such as scenes with severe occlusion and extreme lighting conditions), better accomplishing the objective of detecting lane lines. Over 1000 detection samples, the average detection accuracy reaches 97.83%, the average inference time per image is 17.8 ms, and the average frame rate reaches 64.8 frames per second. The code and models for this project are openly available at the following GitHub repository: https://github.com/Duane711/Lane-line-detection-ST-LaneNet.
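The abstract does not detail the Swin Transformer internals, but its core operation, partitioning a feature map into fixed-size non-overlapping windows so self-attention can be computed locally, can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the 7x7 window size and 56x56x96 stage-1 feature shape are the Swin-Tiny defaults, assumed here:

```python
import numpy as np

def window_partition(x, window_size):
    """Split an (H, W, C) feature map into non-overlapping
    (window_size, window_size, C) windows, as in Swin Transformer."""
    H, W, C = x.shape
    assert H % window_size == 0 and W % window_size == 0
    x = x.reshape(H // window_size, window_size,
                  W // window_size, window_size, C)
    # Reorder so the two window-grid axes come first, then flatten them.
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, window_size, window_size, C)

feat = np.random.rand(56, 56, 96)   # assumed Swin-T stage-1 feature map
windows = window_partition(feat, 7)
print(windows.shape)                # (64, 7, 7, 96): 8x8 grid of 7x7 windows
```

Attention is then computed independently inside each of the 64 windows, which is what keeps the cost linear in image size rather than quadratic.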
Keywords: Autonomous driving; Lane line detection; Deep learning; Swin Transformer
2. 3D Vehicle Detection Algorithm Based on Multimodal Decision-Level Fusion
Authors: Peicheng Shi, Heng Qi, Zhiqiang Liu, Aixi Yang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 6, pp. 2007-2023 (17 pages).
3D vehicle detection based on LiDAR-camera fusion is an emerging research topic in autonomous driving. The algorithm based on the Camera-LiDAR object candidate fusion method (CLOCs) is currently considered one of the more effective decision-level fusion algorithms, but it does not fully utilize the extracted 3D and 2D features. Therefore, we propose a 3D vehicle detection algorithm based on multimodal decision-level fusion. First, we project the anchor point of the 3D detection bounding box into the 2D image, calculate the distance between the 2D and 3D anchor points, and use this distance as a new fusion feature to enhance the feature redundancy of the network. Subsequently, we add an attention module, squeeze-and-excitation networks, which weights each feature channel to enhance the important features of the network and suppress useless ones. The experimental results show that the mean average precision of the algorithm on the KITTI dataset is 82.96%, outperforming previous state-of-the-art multimodal fusion-based methods; the average precision on the Easy, Moderate, and Hard evaluation indicators reaches 88.96%, 82.60%, and 77.31%, respectively, exceeding the original CLOCs model by 1.02%, 2.29%, and 0.41%, respectively. Compared with the original CLOCs algorithm, our algorithm achieves higher accuracy and better performance in 3D vehicle detection.
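The distance-based fusion feature described in the abstract, projecting the 3D box anchor into the image and measuring its pixel distance to the matched 2D anchor, can be sketched as follows. This is a schematic NumPy version under a pinhole camera model; the intrinsic matrix K and both anchor values are assumed KITTI-like numbers for illustration, not the paper's data:

```python
import numpy as np

def project_to_image(pt_3d, K):
    """Project a 3D point in camera coordinates onto the image
    plane with a 3x3 pinhole intrinsic matrix K."""
    uvw = K @ pt_3d
    return uvw[:2] / uvw[2]          # perspective divide -> (u, v) in pixels

# Assumed intrinsics (KITTI-like focal length and principal point).
K = np.array([[721.5,   0.0, 609.6],
              [  0.0, 721.5, 172.9],
              [  0.0,   0.0,   1.0]])

anchor_3d = np.array([2.0, 1.5, 20.0])   # 3D box anchor (x, y, z), metres
anchor_2d = np.array([690.0, 280.0])     # matched 2D detection anchor (u, v)

projected = project_to_image(anchor_3d, K)
# The Euclidean pixel distance serves as the extra decision-level fusion
# feature: well-aligned 3D/2D candidate pairs yield small distances.
fusion_feature = np.linalg.norm(projected - anchor_2d)
print(projected, fusion_feature)
```

A small distance suggests the 3D and 2D candidates describe the same object, so the fusion network can weight that pairing more heavily.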
Keywords: 3D vehicle detection; Multimodal fusion; CLOCs; Network structure optimization; Attention module
3. MFF-Net: Multimodal Feature Fusion Network for 3D Object Detection
Authors: Peicheng Shi, Zhiqiang Liu, Heng Qi, Aixi Yang. Computers, Materials & Continua (SCIE, EI), 2023, No. 6, pp. 5615-5637 (23 pages).
In complex traffic scenarios, it is very important for autonomous vehicles to accurately perceive, in advance, the dynamic information of surrounding vehicles. The accuracy of 3D object detection is affected by problems such as illumination changes, object occlusion, and object distance. To face these challenges, we propose a multimodal feature fusion network for 3D object detection (MFF-Net). First, a spatial transformation projection algorithm maps the image features into the feature space, so that the image features share the same spatial dimension as the point cloud features when fused. Then, feature channel weighting is performed using an adaptive expression augmentation fusion network to enhance important features, suppress useless ones, and sharpen the network's focus on informative features. Finally, the probability of false and missed detections in the non-maximum suppression algorithm is reduced by raising the one-dimensional threshold. Together, these components form a complete 3D object detection network based on multimodal feature fusion. The experimental results show that the proposed network achieves an average precision of 82.60% on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset, outperforming previous state-of-the-art multimodal fusion networks. On the Easy, Moderate, and Hard evaluation indicators, the accuracy reaches 90.96%, 81.46%, and 75.39%, respectively. This shows that MFF-Net performs well in 3D object detection.
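The channel-weighting step can be illustrated with a squeeze-and-excitation-style gate: global average pooling squeezes each channel to a scalar, a small bottleneck produces per-channel weights in (0, 1), and the feature map is rescaled channel-wise. This is a minimal NumPy sketch, not the paper's "adaptive expression augmentation fusion network", whose weights are learned; the weights here are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def se_gate(feat, w1, w2):
    """Squeeze-and-excitation style channel weighting.
    feat: (C, H, W) feature map; w1: (C//r, C); w2: (C, C//r)."""
    squeeze = feat.mean(axis=(1, 2))                # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)          # bottleneck + ReLU
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid gate in (0, 1)
    return feat * weights[:, None, None]            # rescale each channel

C, r = 64, 16                                       # channels, reduction ratio
feat = rng.standard_normal((C, 28, 28))
w1 = rng.standard_normal((C // r, C)) * 0.1         # stand-ins for learned weights
w2 = rng.standard_normal((C, C // r)) * 0.1
out = se_gate(feat, w1, w2)
print(out.shape)                                    # (64, 28, 28), same as input
```

The output keeps the input shape; only the relative magnitude of each channel changes, which is what lets the network emphasize informative channels and damp noisy ones.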
Keywords: 3D object detection; Multimodal fusion; Neural network; Autonomous driving; Attention mechanism
4. Simulation and verification analysis of the ride comfort of an in-wheel motor-driven electric vehicle based on a combination of ADAMS and MATLAB (Cited by 1)
Authors: Peicheng Shi, Qi Zhao, Kefei Wang, Rongyun Zhang, Ping Xiao. International Journal of Modeling, Simulation, and Scientific Computing (EI), 2022, No. 1, pp. 99-116 (18 pages).
To study the ride comfort of wheel-hub-driven electric vehicles, a simulation and verification method based on a combination of ADAMS and MATLAB modeling is proposed. First, a multibody dynamic simulation model of an in-wheel motor-driven electric vehicle is established using ADAMS/Car. Then, the pavement excitation and electromagnetic force analytical equations are derived from the specific operating conditions of the vehicle and the in-wheel motor, to analyze how the electromagnetic force fluctuation caused by the unsprung mass increase and motor air gap unevenness affects vehicle ride comfort once an in-wheel motor is introduced. Next, the vibration model and the differential equations of motion of the body-wheel dual-mass system of an in-wheel motor-driven electric vehicle are established, and the influence of the in-wheel motor on the vibration response indices of the dual-mass system is analyzed using MATLAB/Simulink. The variation in the vehicle vibration performance indices with and without the motor electromagnetic force excitation is compared with the ADAMS multibody dynamics results. The results show that the combined ADAMS and MATLAB modeling method can forecast the ride comfort of an in-wheel motor-driven electric vehicle, reducing the cost of physical prototype experiments.
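The body-wheel dual-mass system described above is the classic two-degree-of-freedom quarter-car model, and the effect of an electromagnetic force ripple on ride comfort can be sketched numerically. This is a minimal Python/NumPy stand-in for the MATLAB/Simulink study; all parameter values, the 0.01 m road step, and the 30 Hz, 300 N motor-force ripple are illustrative assumptions, not the paper's vehicle data:

```python
import numpy as np

# Quarter-car (body-wheel dual-mass) parameters; assumed illustrative values.
ms, mu = 320.0, 45.0        # sprung / unsprung mass (kg); in-wheel motor adds to mu
ks, cs = 22000.0, 1500.0    # suspension stiffness (N/m) and damping (N*s/m)
kt = 190000.0               # tyre stiffness (N/m)

def simulate(t_end=2.0, dt=1e-4, f_em=0.0):
    """Integrate the 2-DOF quarter-car equations over a 0.01 m road step,
    with an optional sinusoidal electromagnetic force on the unsprung mass.
    Returns the RMS body acceleration, a common ride-comfort index."""
    n = int(t_end / dt)
    xs = xu = vs = vu = 0.0                       # body/wheel displacement, velocity
    acc = np.empty(n)
    for i in range(n):
        t = i * dt
        q = 0.01                                  # road step input (m)
        Fem = f_em * np.sin(2 * np.pi * 30 * t)   # assumed 30 Hz motor force ripple
        fs = -ks * (xs - xu) - cs * (vs - vu)     # suspension force on the body
        a_s = fs / ms
        a_u = (-fs - kt * (xu - q) + Fem) / mu
        vs += a_s * dt; vu += a_u * dt            # semi-implicit Euler step
        xs += vs * dt;  xu += vu * dt
        acc[i] = a_s
    return np.sqrt(np.mean(acc ** 2))

rms_base = simulate(f_em=0.0)
rms_motor = simulate(f_em=300.0)   # add electromagnetic force fluctuation
print(rms_base, rms_motor)         # the ripple raises RMS body acceleration
```

Comparing the two RMS values reproduces, in miniature, the paper's comparison of vibration performance indices with and without the motor electromagnetic force excitation.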
Keywords: In-wheel motor; Electric vehicle; Ride comfort; ADAMS/Car; MATLAB/Simulink