Journal Articles
13 articles found
1. Research on Track Fastener Service Status Detection Based on Improved Yolov4 Model
Authors: Jing He, Weiqi Wang, Nengpu Yang. Journal of Transportation Technologies, 2024, Issue 2, pp. 212-223 (12 pages)
As an important part of railway lines, the healthy service status of track fasteners is essential to train safety, and deep learning algorithms are becoming an important means of detecting that status. However, when traditional deep learning models are used to detect the service state of track fasteners, detection accuracy and computation speed are often difficult to balance. Targeting this issue, an improved Yolov4 model for detecting the service status of track fasteners is proposed. First, Mixup data augmentation is introduced into the Yolov4 model to enhance its generalization ability. Second, the lightweight MobileNet-V2 network is employed in place of CSPDarknet53 as the backbone, reducing the number of parameters and improving computational efficiency. Finally, the SE attention mechanism is incorporated to emphasize image features relevant to rail fastener identification, ensuring that the network focuses primarily on the fasteners being inspected. The algorithm achieves both high precision and high-speed operation for rail fastener service-state detection while keeping the model lightweight. The experimental results show that the mAP of the rail fastener service-state detection algorithm based on the improved Yolov4 model reaches 83.2%, which is 2.83% higher than that of the traditional Yolov4 model, while the computation speed is improved by 67.39%. Compared with the traditional Yolov4 model, the proposed method achieves a joint optimization of detection accuracy and computation speed.
Keywords: YOLOv4 model; service status of track fasteners; detection and recognition; data augmentation; lightweight network; attention mechanism
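The improvements described above (MobileNet-V2 backbone, Mixup augmentation, SE attention) are standard building blocks. Purely as an illustration, and not the authors' code, the PyTorch sketch below shows a generic squeeze-and-excitation (SE) attention block of the kind that can be attached to a backbone feature map; the channel count and reduction ratio are assumed values.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention (illustrative sketch)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global spatial average
        self.fc = nn.Sequential(                      # excitation: channel-wise gating
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # re-weight feature channels

# Example: re-weight a backbone feature map (channel count assumed)
feat = torch.randn(1, 96, 52, 52)
print(SEBlock(96)(feat).shape)   # torch.Size([1, 96, 52, 52])
```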
2. Weed Detection Model in Maize Fields Based on MSRCR-YOLOv4-tiny (Cited: 18)
Authors: 刘莫尘, 高甜甜, 马宗旭, 宋占华, 李法德, 闫银发. 《农业机械学报》, EI/CAS/CSCD/PKU Core, 2022, Issue 2, pp. 246-255, 335 (11 pages)
To achieve high-precision, real-time detection of maize seedlings and weeds in field environments, this paper proposes an improved YOLOv4-tiny model that incorporates the multi-scale retinex with color restoration (MSRCR) enhancement algorithm. First, considering the image characteristics of field environments, the MSRCR algorithm is applied as a feature-enhancement preprocessing step to improve image contrast and detail quality. Then, Mosaic online data augmentation is used to enrich the detection backgrounds and to improve training efficiency and the detection accuracy of small objects. Finally, K-means++ clustering is applied to YOLOv4-tiny for anchor-box clustering, and channel pruning is performed. The improved and simplified model reduces the total number of parameters by 45.3% and the memory footprint by 45.8%, raises the mean average precision (mAP) by 2.5 percentage points, and cuts the average per-frame detection time on the Jetson Nano embedded platform by 22.4%. The proposed Prune-YOLOv4-tiny model was compared with three commonly used object detection models: Faster R-CNN, YOLOv3-tiny, and YOLOv4. The results show that Prune-YOLOv4-tiny achieves an mAP of 96.6%, which is 22.1 and 3.6 percentage points higher than Faster R-CNN and YOLOv3-tiny respectively, and 1.2 percentage points lower than YOLOv4; its memory footprint is 12.2 MB, which is 3.4% of Faster R-CNN, 36.9% of YOLOv3-tiny, and 5% of YOLOv4; and its average per-frame detection time on the Jetson Nano is 131 ms, which is 32.1% and 7.6% of the YOLOv3-tiny and YOLOv4 models respectively. The proposed optimization therefore outperforms other commonly used detectors in memory footprint, detection time, and detection accuracy, and provides a feasible real-time weed recognition method for precision field-weeding systems with limited hardware resources.
Keywords: weed recognition; YOLOv4-tiny; multi-scale retinex with color restoration (MSRCR); model pruning; embedded devices
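The MSRCR preprocessing named in this abstract follows a well-known formulation. Below is a rough NumPy/OpenCV sketch of generic MSRCR enhancement, assuming common default scales and gain/offset constants rather than the paper's settings; the input file name is hypothetical.

```python
import cv2
import numpy as np

def msrcr(img_bgr, sigmas=(15, 80, 250), alpha=125.0, beta=46.0, gain=192.0, offset=-30.0):
    """Multi-Scale Retinex with Color Restoration (generic sketch, common defaults)."""
    img = img_bgr.astype(np.float64) + 1.0            # avoid log(0)
    # Multi-scale retinex: average of log(I) - log(Gaussian-blurred I) over the scales
    msr = np.zeros_like(img)
    for s in sigmas:
        blur = cv2.GaussianBlur(img, (0, 0), s)
        msr += np.log(img) - np.log(blur)
    msr /= len(sigmas)
    # Color restoration term
    crf = beta * (np.log(alpha * img) - np.log(img.sum(axis=2, keepdims=True)))
    out = gain * (crf * msr) + offset
    # Stretch the result back to the 0-255 range
    out = (out - out.min()) / (out.max() - out.min() + 1e-8) * 255.0
    return out.astype(np.uint8)

# enhanced = msrcr(cv2.imread("field_image.jpg"))     # hypothetical file name
```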
3. Research on an Improved YOLOv4-tiny Pedestrian Detection Algorithm (Cited: 10)
Authors: 周华平, 王京, 孙克雷. 《无线电通信技术》, 2021, Issue 4, pp. 474-480 (7 pages)
To address the problem that large pedestrian-detection networks cannot be deployed directly on small devices because of their large weights and slow detection speed, three improved YOLOv4-tiny pedestrian detection models are proposed: (1) YOLOv4-tiny-e, which introduces an improved ESA_CSP (Enhanced Spatial Attention_CSP) structure into the CSP (Cross Stage Partial Connections) network so that the network pays more attention to features useful for pedestrian detection; (2) YOLOv4-tiny-r, which adds a multi-scale feature-fusion module (Receptive Field Blocks, RFBs) after the backbone output to enlarge the receptive field of feature extraction and reuse the multi-scale information of the feature maps; and (3) YOLOv4-tiny-er, which combines both the ESA_CSP and RFB structures. Experimental results on the WiderPerson validation set show that the three improved models reach mAP values of 53.62%, 53.80%, and 56.13%, with reported per-frame speeds of 86 ms, 75 ms, and 69 ms respectively. Compared with the original YOLOv4-tiny results (mAP: 51.35%, 77 ms), the detection accuracy of the three models is improved by 2.27, 2.45, and 4.78 percentage points respectively without a large loss in speed, so the models remain lightweight and easy to port to small devices.
Keywords: YOLOv4-tiny; attention mechanism; feature fusion; receptive field
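The ESA_CSP idea above is an attention mechanism inserted into the CSP blocks. The snippet below is a simplified spatial-attention gate in that spirit, not the paper's exact ESA design; the kernel size and feature shapes are assumptions.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Simple spatial attention gate (illustrative, not the paper's ESA module)."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)                 # channel-wise average map
        mx, _ = x.max(dim=1, keepdim=True)                # channel-wise max map
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask                                   # emphasize informative regions

x = torch.randn(1, 256, 26, 26)
print(SpatialAttention()(x).shape)                        # torch.Size([1, 256, 26, 26])
```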
4. An Object Detection Algorithm Based on YOLOv4-Tiny (Cited: 4)
Authors: 张文, 杨雅姿, 黄驰, 陈琳. 《电脑与信息技术》, 2022, Issue 2, pp. 33-37 (5 pages)
YOLOv4-Tiny is a classic deep learning object detection algorithm that extracts features with a convolutional neural network and predicts object classes and bounding-box coordinates. As a simplified version of YOLOv4, it does not use the Mish activation function for feature extraction and only uses a feature pyramid to strengthen the feature layers, and therefore does not require additional downsampling; its shortcoming is relatively low detection accuracy. This paper improves YOLOv4-Tiny to address that shortcoming: the low-level and high-level feature layers are fused, three dilated convolutions are then applied separately to enlarge the receptive field while capturing multi-scale context information, and the results are stacked to replace the FPN feature pyramid of the original network. Experimental results show that the improved YOLOv4-Tiny achieves higher accuracy than the original algorithm, meets real-time requirements, and exhibits a certain degree of robustness.
Keywords: object detection; YOLOv4-Tiny; feature fusion; dilated convolution
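The described replacement for the FPN, fusing a low-level and a high-level feature map and then applying three dilated convolutions whose outputs are stacked, could look roughly like the hedged PyTorch sketch below; the dilation rates and channel counts are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedFusion(nn.Module):
    """Fuse a shallow and a deep feature map, then stack three dilated-conv branches
    (illustrative only; dilation rates and channel sizes are assumed)."""
    def __init__(self, c_low, c_high, c_out, dilations=(1, 2, 4)):
        super().__init__()
        self.reduce = nn.Conv2d(c_low + c_high, c_out, 1)
        self.branches = nn.ModuleList(
            nn.Conv2d(c_out, c_out, 3, padding=d, dilation=d) for d in dilations
        )

    def forward(self, low, high):
        high_up = F.interpolate(high, size=low.shape[-2:], mode="nearest")
        fused = self.reduce(torch.cat([low, high_up], dim=1))
        return torch.cat([b(fused) for b in self.branches], dim=1)  # stacked multi-scale context

low = torch.randn(1, 128, 52, 52)   # shallow feature map
high = torch.randn(1, 256, 26, 26)  # deep feature map
print(DilatedFusion(128, 256, 128)(low, high).shape)  # torch.Size([1, 384, 52, 52])
```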
5. Research on Cell Image Recognition Technology Based on the YOLOv4-tiny Model
Authors: 柴媛媛. 《现代电子技术》, 2022, Issue 9, pp. 46-49 (4 pages)
Pathological analysis based on the morphological characteristics of cells is a common technique in modern healthcare, but traditional manual cell recognition and classification suffers from fatigue, low efficiency, and uncertainty introduced by physician experience and other subjective factors. To address this, a cell image recognition technique based on the YOLOv4-tiny model is proposed. An intelligent cell detection system was designed and developed on the Jetson Nano artificial intelligence platform; the lightweight YOLOv4-tiny network was improved by adding Dropout, which effectively prevents overfitting to the training data and enables accurate recognition based on cell shape features. Experimental results show that the cell detection accuracy of the system reaches up to 99%, substantially improving the precision and efficiency of cell detection under the microscope and promoting the application of artificial intelligence in medical testing.
Keywords: cell image recognition; YOLOv4-tiny model; intelligent detection; object recognition; network model improvement; pathological analysis
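The stated modification is inserting Dropout into the lightweight network to limit overfitting. A minimal, hypothetical sketch of a detection-head fragment with Dropout added is shown below; the dropout rate, layer placement, and class count are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical detection-head fragment with Dropout inserted to reduce overfitting;
# the rate (0.3), its placement, and the class count are assumptions.
head = nn.Sequential(
    nn.Conv2d(256, 512, 3, padding=1),
    nn.BatchNorm2d(512),
    nn.LeakyReLU(0.1),
    nn.Dropout2d(p=0.3),              # randomly zero whole channels during training
    nn.Conv2d(512, 3 * (5 + 4), 1),   # 3 anchors x (4 box + 1 obj + 4 assumed classes)
)
print(head(torch.randn(1, 256, 13, 13)).shape)   # torch.Size([1, 27, 13, 13])
```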
6. Leguminous seeds detection based on convolutional neural networks: Comparison of Faster R-CNN and YOLOv4 on a small custom dataset (Cited: 1)
Authors: Noran S. Ouf. Artificial Intelligence in Agriculture, 2023, Issue 2, pp. 30-45 (16 pages)
This paper helps with leguminous seed detection and smart farming. There are hundreds of kinds of seeds and it can be very difficult to distinguish between them; botanists and those who study plants, however, can identify the type of seed at a glance. As far as we know, this is the first work to consider leguminous seed images with different backgrounds, sizes, and crowding. Machine learning is used to automatically classify and locate 11 different seed types. We chose leguminous seeds of 11 types as the objects of this study; those types are of different colors, sizes, and shapes to add variety and complexity to our research. The image dataset of the leguminous seeds was manually collected, annotated, and then split randomly into three sub-datasets (train, validation, and test/predictions) with a ratio of 80%, 10%, and 10% respectively. The images considered the variability between different leguminous seed types and were captured on five different backgrounds: white A4 paper, black pad, dark blue pad, dark green pad, and green pad. Different heights and shooting angles were considered, and the crowdedness of the seeds varied randomly between 1 and 50 seeds per image. Different combinations and arrangements of the 11 types were considered. Two image-capturing devices were used: a SAMSUNG smartphone camera and a Canon digital camera. A total of 828 images were obtained, including 9801 seed objects (labels). The dataset contained images of different backgrounds, heights, angles, crowdedness, arrangements, and combinations. The TensorFlow framework was used to construct the Faster Region-based Convolutional Neural Network (R-CNN) model, and CSPDarknet53, which uses DenseNet-style connections between convolutional layers, is used as the backbone for YOLOv4. Using the transfer learning method, we optimized the seed detection models. The currently dominant object detection methods, Faster R-CNN and YOLOv4, were compared experimentally. The mAP (mean average precision) of the Faster R-CNN and YOLOv4 models was 84.56% and 98.52% respectively. YOLOv4 had a significant advantage in detection speed over Faster R-CNN, which makes it suitable for real-time identification where high accuracy and low false positives are needed. The results showed that YOLOv4 had better accuracy and detection ability, as well as faster detection speed, beating Faster R-CNN by a large margin. The model can be effectively applied under a variety of backgrounds, image sizes, seed sizes, shooting angles, and shooting heights, as well as different levels of seed crowding. It constitutes an effective and efficient method for detecting different leguminous seeds in complex scenarios. This study provides a reference for further seed testing and enumeration applications.
Keywords: machine learning; object detection; leguminous seeds; deep learning; convolutional neural networks; Faster R-CNN; YOLOv4
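The 80%/10%/10% random split into train, validation, and test subsets mentioned above can be reproduced generically as in the following sketch; the image directory and file extension are hypothetical.

```python
import random
from pathlib import Path

def split_dataset(image_dir: str, ratios=(0.8, 0.1, 0.1), seed: int = 42):
    """Randomly split image files into train/val/test lists (generic 80/10/10 sketch)."""
    files = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(files)
    n = len(files)
    n_train, n_val = int(n * ratios[0]), int(n * ratios[1])
    return files[:n_train], files[n_train:n_train + n_val], files[n_train + n_val:]

# train, val, test = split_dataset("leguminous_seeds/images")   # hypothetical path
```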
7. A New Childhood Pneumonia Diagnosis Method Based on Fine-Grained Convolutional Neural Network
Authors: Yang Zhang, Liru Qiu, Yongkai Zhu, Long Wen, Xiaoping Luo. Computer Modeling in Engineering & Sciences, SCIE/EI, 2022, Issue 12, pp. 873-894 (22 pages)
Pneumonia is one of the main diseases causing the death of children. It is generally diagnosed through chest X-ray images. With the development of Deep Learning (DL), the diagnosis of pneumonia based on DL has received extensive attention. However, due to the small difference between pneumonia and normal images, the performance of DL methods could still be improved. This research proposes a new fine-grained Convolutional Neural Network for children's pneumonia diagnosis (FG-CPD). Firstly, fine-grained CNN classification, which can handle slight differences between images, is investigated. To obtain the raw images from real-world chest X-ray data, the YOLOv4 algorithm is trained to detect and position the chest part in the raw images. Secondly, a novel attention network named SGNet is proposed, which integrates the spatial and channel information of the images to locate the discriminative parts of the chest image and enlarge the difference between pneumonia and normal images. Thirdly, an automatic data augmentation method is adopted to increase the diversity of the images and avoid overfitting of FG-CPD. FG-CPD has been tested on the public Chest X-ray 2017 dataset, and the results show that it achieves a strong effect. FG-CPD was then tested on real chest X-ray images from children aged 3-12 years from Tongji Hospital. The results show that FG-CPD achieves up to 96.91% accuracy, which validates its potential.
Keywords: childhood pneumonia diagnosis; fine-grained classification; YOLOv4; attention network; Convolutional Neural Network (CNN)
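FG-CPD first runs YOLOv4 to locate the chest region and then classifies the cropped patch. The sketch below illustrates the cropping step only, assuming an (x1, y1, x2, y2) box format and a small padding ratio; it is not the authors' pipeline, and the input names are hypothetical.

```python
import numpy as np

def crop_detected_region(image: np.ndarray, box, pad: float = 0.05):
    """Crop a detected region given a box (x1, y1, x2, y2); the padding ratio is assumed."""
    h, w = image.shape[:2]
    x1, y1, x2, y2 = box
    dx, dy = int((x2 - x1) * pad), int((y2 - y1) * pad)
    x1, y1 = max(0, int(x1) - dx), max(0, int(y1) - dy)
    x2, y2 = min(w, int(x2) + dx), min(h, int(y2) + dy)
    return image[y1:y2, x1:x2]

# chest_patch = crop_detected_region(xray_image, yolo_box)   # hypothetical inputs
```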
8. Research on an In-Vehicle Motion Tracking Algorithm for Small Hand Targets
Authors: 王金磊, 魏同权, 邓亮, 谢正华, 陈万刚. 《传感器与微系统》, CSCD/PKU Core, 2023, Issue 8, pp. 65-68, 77 (5 pages)
With the development of smart automotive cockpits, applications that track the hand motion of passengers to interact with in-cabin lighting have become a hot market demand. However, small hand targets are easily missed, which causes target loss and discontinuous tracking. An in-vehicle motion tracking algorithm for small hand targets is proposed. First, the YOLOv4-Tiny object detection algorithm is improved: shallow features in the feature-fusion layers are repeatedly convolved and downsampled and then concatenated with deep features, so that the deep layers obtain more detailed feature information. The detection results are then fed into the DeepSORT algorithm for multi-object tracking, achieving motion tracking of the hands. Experiments on an embedded platform show that the recall of the improved YOLOv4-Tiny algorithm is increased by 9.05%; compared with the traditional algorithm, the proposed method improves multi-object tracking accuracy (MOTA) by 17% and precision (MOTP) by 15%, while maintaining high real-time performance.
Keywords: automotive cockpit; small-object detection; improved YOLOv4-Tiny; feature fusion; DeepSORT algorithm; multi-object tracking
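The detector change described above, repeatedly convolving and downsampling shallow features and concatenating them with deep features, might be sketched as follows in PyTorch; channel counts, strides, and feature-map sizes are assumptions, and the DeepSORT tracking stage is not reproduced.

```python
import torch
import torch.nn as nn

class ShallowToDeepFusion(nn.Module):
    """Convolve and downsample a shallow feature map, then concatenate it with a deep one
    (illustrative of the described fusion; channels and strides are assumed)."""
    def __init__(self, c_shallow=64, c_deep=256):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(c_shallow, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(128, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.1),
        )
        self.merge = nn.Conv2d(128 + c_deep, c_deep, 1)

    def forward(self, shallow, deep):
        return self.merge(torch.cat([self.down(shallow), deep], dim=1))

shallow = torch.randn(1, 64, 104, 104)
deep = torch.randn(1, 256, 26, 26)
print(ShallowToDeepFusion()(shallow, deep).shape)   # torch.Size([1, 256, 26, 26])
```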
9. Image Recognition Based on Deep Learning with Thermal Camera Sensing
Authors: Wen-Tsai Sung, Chin-Hsuan Lin, Sung-Jung Hsiao. Computer Systems Science & Engineering, SCIE/EI, 2023, Issue 7, pp. 505-520 (16 pages)
As the COVID-19 epidemic spread across the globe, people around the world were advised or mandated to wear masks in public places to prevent its spreading further. In some cases, not wearing a mask could result in a fine. To monitor mask wearing, and to prevent the spread of future epidemics, this study proposes an image recognition system consisting of a camera, an infrared thermal array sensor, and a convolutional neural network trained in mask recognition. The infrared sensor monitors body temperature and displays the results in real time on a liquid crystal display screen. The proposed system reduces the inefficiency of traditional object detection by providing training data according to the specific needs of the user and by applying You Only Look Once Version 4 (YOLOv4) object detection technology, which experiments show has more efficient training parameters and a higher level of accuracy in object recognition. All datasets are uploaded to the cloud for storage using Google Colaboratory, saving human resources and achieving a high level of efficiency at a low cost.
Keywords: image recognition; convolutional neural network; YOLOv4; thermal camera sensing
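For the YOLOv4 detection stage of such a system, one common route is OpenCV's DNN Darknet importer, as in the sketch below. The config/weight/image file names and the mask-detection classes are hypothetical, and the thermal-sensor and cloud-storage parts of the paper are not reproduced.

```python
import cv2

# Hypothetical file names; a custom mask / no-mask Darknet model is assumed.
net = cv2.dnn.readNetFromDarknet("yolov4-mask.cfg", "yolov4-mask.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

frame = cv2.imread("frame.jpg")                      # hypothetical camera frame
class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
for cid, score, box in zip(class_ids, scores, boxes):
    x, y, w, h = box                                 # draw each detection
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```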
10. A Fast and Robust Detection Algorithm for Camellia oleifera Fruit in Complex Field Environments
Authors: 周浩, 唐昀超, 邹湘军, 王红军, 陈明猷, 黄钊丰. 《现代电子技术》, 2022, Issue 15, pp. 73-79 (7 pages)
To improve the speed and robustness with which a mobile picking robot detects Camellia oleifera (oil-tea) fruit in complex field environments, the YOLO-Oleifera network is proposed on the basis of the YOLOv4-tiny network. First, 1×1 and 3×3 convolution kernels are added after the second and third CSPBlock modules of YOLOv4-tiny respectively, to help the network learn the fruit's feature information while reducing computational complexity. Then, K-means++ anchor-box clustering replaces the K-means clustering used by YOLOv4-tiny, yielding clusters that better match the fruit sizes. Ablation experiments confirm the effectiveness of these network improvements. Tests on fruit images captured under direct illumination and in shadow show that YOLO-Oleifera detects the fruit robustly under different lighting conditions. Comparative experiments also show that occluded fruit leads to lower precision and recall because of missing semantic information. Compared with YOLOv5-s, YOLOv3-tiny, and YOLOv4-tiny, YOLO-Oleifera achieves the highest AP while consuming the fewest hardware resources. In addition, YOLO-Oleifera takes 31 ms on average to process an image, which satisfies the real-time detection requirements of a mobile picking robot. The proposed YOLO-Oleifera network is therefore better suited for deployment on mobile picking robots for detection tasks.
Keywords: object detection; YOLOv4-tiny network; deep learning; convolution kernel; picking robot; K-means++; robustness
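The switch from K-means to K-means++ anchor clustering can be illustrated with scikit-learn as below. Note that this sketch clusters (width, height) pairs with Euclidean distance, whereas YOLO-style anchor clustering often uses a 1 - IoU distance; the box sizes shown are random stand-ins, not data from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_anchors(wh: np.ndarray, k: int = 6) -> np.ndarray:
    """Cluster (width, height) pairs of labelled boxes into k anchors using
    k-means++ initialization (Euclidean distance, a common simplification)."""
    km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0).fit(wh)
    anchors = km.cluster_centers_
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]   # sort anchors by area

# Random stand-in box sizes in pixels; real usage would read them from annotations
wh = np.random.randint(20, 120, size=(500, 2)).astype(float)
print(cluster_anchors(wh, k=6))
```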
11. Automatic tunnel lining crack detection via deep learning with generative adversarial network-based data augmentation (Cited: 6)
Authors: Zhong Zhou, Junjie Zhang, Chenjie Gong, Wei Wu. Underground Space, SCIE/EI/CSCD, 2023, Issue 2, pp. 140-154 (15 pages)
Aiming at the challenges of insufficient data samples and low detection efficiency in deep learning-based tunnel lining crack detection, a novel detection approach for tunnel lining cracks was proposed, based on a pruned You Only Look Once v4 (YOLOv4) and a Wasserstein Generative Adversarial Network enhanced by Residual Blocks and an Efficient Channel Attention module (WGAN-RE). In this study, a data augmentation method named WGAN-RE was proposed, which can automatically generate crack images to enrich the dataset. Furthermore, YOLOv4 was selected as the base model for training, and a pruning algorithm was introduced to lighten the model, thereby effectively improving the detection speed. Average Precision (AP), F1 score (F1), model size, and Frames Per Second (FPS) were selected as evaluation indexes of model performance. Results indicate that the storage space of the pruned YOLOv4 model is only 49.16 MB, an 80% compression compared with the model before pruning. In addition, the FPS of the model reaches 40.58 f/s, which provides a basis for the real-time detection of tunnel lining cracks. Findings also demonstrate that the F1 score and AP of the pruned YOLOv4 are only 0.77% and 0.50% lower than before pruning, respectively. Moreover, the pruned YOLOv4 is superior in both accuracy and efficiency to YOLOv3, SSD, and Faster R-CNN, which indicates that the pruned YOLOv4 model can realize accurate, fast, and intelligent detection of tunnel lining cracks in practical tunnel engineering.
Keywords: tunnel engineering; lining cracks; target detection; deep learning; YOLOv4; generative adversarial network
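The WGAN-RE augmentation builds on the Wasserstein GAN objective. The sketch below shows only the basic WGAN critic/generator losses with weight clipping, using toy models; the paper's residual blocks, ECA attention, and training schedule are not reproduced, and all shapes are arbitrary.

```python
import torch

def wgan_losses(critic, generator, real, z):
    """Basic WGAN losses (weight-clipping variant); illustrative only."""
    fake = generator(z).detach()
    loss_critic = -(critic(real).mean() - critic(fake).mean())  # maximize Wasserstein estimate
    loss_gen = -critic(generator(z)).mean()                     # generator tries to fool the critic
    return loss_critic, loss_gen

def clip_critic_weights(critic, c: float = 0.01):
    """Weight clipping keeps the critic approximately Lipschitz (original WGAN recipe)."""
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-c, c)

# Toy models standing in for the crack-image generator and critic
critic = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 1))
gen = torch.nn.Sequential(torch.nn.Linear(16, 64 * 64), torch.nn.Unflatten(1, (1, 64, 64)))
l_c, l_g = wgan_losses(critic, gen, torch.randn(8, 1, 64, 64), torch.randn(8, 16))
clip_critic_weights(critic)
print(l_c.item(), l_g.item())
```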
12. Research on a Lightweight Object Detection Algorithm Based on Deep Learning
Authors: 耿硕, 李云栋. 《工业控制计算机》, 2022, Issue 4, pp. 97-99 (3 pages)
Railway foreign-object intrusion detection plays a major role in video surveillance, but existing object detection networks have high computational cost and large model storage, and the conflict between hardware cost and available computing power leads to low detection speed. To address these problems, YOLOv4-tiny is selected as the base network and improved. First, in CSPDarknet53-tiny, some standard convolutions are replaced with depthwise separable convolutions, reducing the number of parameters and the amount of computation. Second, the trained weights are converted so that inference can be accelerated by the TensorRT optimizer; operator fusion and half-precision (FP16) quantization are then introduced. Finally, the TensorRT-YOLOv4-tiny model is deployed on the Jetson Nano embedded device. Experiments on a railway dataset of 2,555 images show that detection speed is increased by 50%, reaching an average of 0.036 s per image and a frame rate of 30.12 FPS, with an mAP of 82.13%, which demonstrates the speed advantage of the proposed method after deployment on embedded devices.
Keywords: YOLOv4-tiny; railway foreign-object intrusion detection; depthwise separable convolution; TensorRT; Jetson Nano
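Replacing a standard 3×3 convolution with a depthwise separable convolution, as described above, is sketched below in PyTorch; the channel counts are arbitrary, and the parameter-count comparison shows why this reduces model size. The subsequent TensorRT conversion and FP16 quantization steps are not shown.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv followed by a pointwise 1x1 conv: a cheaper drop-in
    replacement for a standard 3x3 convolution (illustrative sketch)."""
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, stride=stride, padding=1, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, 1)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

std = nn.Conv2d(128, 256, 3, padding=1)
dsc = DepthwiseSeparableConv(128, 256)
params = lambda m: sum(p.numel() for p in m.parameters())
print(params(std), params(dsc))   # the separable block has far fewer parameters
```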
13. A method to generate foggy optical images based on unsupervised depth estimation
Authors: WANG Xiangjun, LIU Linghao, NI Yubo, WANG Lin. Journal of Measurement Science and Instrumentation, CAS/CSCD, 2021, Issue 1, pp. 44-52 (9 pages)
For traffic object detection in foggy environments based on convolutional neural networks (CNN), datasets collected in fog-free environments are generally used to train the network directly. As a result, the network cannot learn the characteristics of objects in foggy environments from the training set, and the detection effect is poor. To improve traffic object detection in foggy environments, we propose a method of generating foggy images from fog-free images from the perspective of dataset construction. First, taking the KITTI object detection dataset as the original fog-free images, we generate the depth image of each original image using an improved Monodepth unsupervised depth estimation method. Then, a geometric-prior depth template is constructed, and the image entropy, taken as a weight, is fused with the depth image. After that, a foggy image is obtained from the depth image based on the atmospheric scattering model. Finally, we take two typical object detection frameworks, the two-stage Faster region-based convolutional neural network (Faster-RCNN) and the one-stage YOLOv4, and train them on the original dataset, the foggy dataset, and the mixed dataset, respectively. According to the test results on the RESIDE-RTTS dataset, captured in outdoor natural foggy environments, the model trained on the mixed dataset performs best: the mean average precision (mAP) values are increased by 5.6% and 5.0% under the YOLOv4 model and the Faster-RCNN network, respectively. This proves that the proposed method can effectively improve object identification ability in foggy environments.
Keywords: traffic object detection; foggy image generation; unsupervised depth estimation; YOLOv4 model; Faster region-based convolutional neural network (Faster-RCNN)
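The fog synthesis step relies on the atmospheric scattering model I(x) = J(x) * t(x) + A * (1 - t(x)) with t(x) = exp(-beta * d(x)). A minimal NumPy sketch is given below; the beta and A values and the use of a normalized depth map are assumptions, and the paper's entropy-weighted depth template is omitted.

```python
import numpy as np

def add_fog(clear_rgb: np.ndarray, depth: np.ndarray, beta: float = 1.2, A: float = 0.8):
    """Synthesize fog via the atmospheric scattering model:
       I(x) = J(x) * t(x) + A * (1 - t(x)),  t(x) = exp(-beta * d(x)).
       `clear_rgb` in [0, 1], `depth` normalized to [0, 1]; beta and A are assumed values."""
    t = np.exp(-beta * depth)[..., None]          # transmission map, broadcast over RGB
    foggy = clear_rgb * t + A * (1.0 - t)
    return np.clip(foggy, 0.0, 1.0)

# Demo with random arrays standing in for a KITTI image and a Monodepth depth map
img = np.random.rand(256, 512, 3)
depth = np.random.rand(256, 512)
print(add_fog(img, depth).shape)                  # (256, 512, 3)
```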