Journal Articles
1,237 articles found
1. Enhancing Deep Learning Semantics: The Diffusion Sampling and Label-Driven Co-Attention Approach
Authors: Chunhua Wang, Wenqian Shang, Tong Yi, Haibin Zhu. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 1939-1956 (18 pages)
The advent of self-attention mechanisms within Transformer models has significantly propelled the advancement of deep learning algorithms, yielding outstanding achievements across diverse domains. Nonetheless, self-attention mechanisms falter when applied to datasets with intricate semantic content and extensive dependency structures. In response, this paper introduces a Diffusion Sampling and Label-Driven Co-attention Neural Network (DSLD), which adopts a diffusion sampling method to capture more comprehensive semantic information of the data. Additionally, the model leverages the joint correlation information of labels and data to introduce the computation of text representation, correcting semantic representation biases in the data and increasing the accuracy of semantic representation. Ultimately, the model computes the corresponding classification results by synthesizing these rich data semantic representations. Experiments on seven benchmark datasets show that our proposed model achieves competitive results compared to state-of-the-art methods.
Keywords: semantic representation; sampling attention; label-driven co-attention; attention mechanisms
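As background for the self-attention mechanism discussed in this and several of the following abstracts, a minimal scaled dot-product self-attention is sketched below in NumPy. The matrix names and sizes are illustrative placeholders and do not come from the DSLD paper.

```python
# Minimal scaled dot-product self-attention in NumPy, for reference only.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq_len, seq_len) similarity logits
    weights = softmax(scores, axis=-1)        # attention distribution per token
    return weights @ V                        # weighted sum of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                  # 5 tokens, 16-dim embeddings (made up)
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 8)
```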
2. GUS-YOLO remote sensing object detection algorithm incorporating contextual information and Attention Gate (cited 10 times)
Authors: 张华卫, 张文飞, 蒋占军, 廉敬, 吴佰靖. 《计算机科学与探索》 (CSCD; Peking University Core), 2024, Issue 2, pp. 453-464 (12 pages)
Current remote sensing object detection algorithms based on the general-purpose YOLO family do not make full use of the global context information of the image, and the feature-fusion pyramid stage does not sufficiently narrow the semantic gap between fused features or suppress interference from redundant information. Building on the strengths of the YOLO algorithms, the GUS-YOLO algorithm is proposed. It has a backbone network, Global Backbone, that can fully exploit global context information. In addition, the algorithm introduces an Attention Gate module into the top-down structure of the feature-fusion pyramid, which highlights necessary feature information and suppresses redundant information. An optimal network structure is designed for the Attention Gate module, and the network's feature-fusion structure U-Neck is proposed. Finally, to overcome the problem that the ReLU function may cause model gradients to stop updating, the algorithm upgrades the activation function of the Attention Gate module to the learnable SMU activation function, improving model robustness. On the NWPU VHR-10 remote sensing dataset, the algorithm improves the relaxed metric mAP^(0.50) by 1.64 percentage points and the strict metric mAP^(0.75) by 9.39 percentage points compared with YOLOv7. Compared with seven current mainstream detection algorithms, the algorithm achieves better detection performance.
Keywords: remote sensing image; Global Backbone; Attention Gate; SMU; U-neck
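The Attention Gate named in the abstract above follows the general additive-gating pattern popularized by Attention U-Net; a minimal PyTorch sketch of that pattern is given below. Channel sizes are placeholders, and the learnable SMU activation that the paper substitutes for ReLU is omitted.

```python
# A minimal Attention Gate in the Attention U-Net style (illustrative sketch only).
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.w_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)   # project gating signal
        self.w_x = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)   # project skip features
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)         # scalar attention map
        self.act = nn.ReLU(inplace=True)                         # paper uses SMU instead

    def forward(self, g, x):
        # g and x are assumed to share the same spatial size here
        a = self.act(self.w_g(g) + self.w_x(x))
        alpha = torch.sigmoid(self.psi(a))        # (B, 1, H, W) attention coefficients
        return x * alpha                          # suppress redundant skip information

gate = AttentionGate(gate_ch=256, skip_ch=128, inter_ch=64)
g = torch.randn(1, 256, 32, 32)
x = torch.randn(1, 128, 32, 32)
print(gate(g, x).shape)   # torch.Size([1, 128, 32, 32])
```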
3. SOC estimation of lithium batteries based on SABO-GRU-Attention
Authors: 薛家祥, 王凌云. 《电源技术》 (CAS; Peking University Core), 2024, Issue 11, pp. 2169-2173 (5 pages)
A SOC (state of charge) estimation method for lithium batteries based on SABO-GRU-Attention (subtraction-average-based optimizer - gated recurrent unit - attention) is proposed. The subtraction-average-based optimization algorithm is used to adaptively update the hyperparameters of the GRU neural network, and an SE (squeeze-and-excitation) attention mechanism is fused in to adaptively assign weights to each channel and improve learning efficiency. The University of Maryland battery dataset is preprocessed, voltage and current parameters are taken as inputs, and lithium-battery charge/discharge simulation experiments are conducted; a lithium-battery SOC experimental platform is also built to carry out charge/discharge experiments on energy-storage lithium batteries. The results show that the proposed SOC neural-network estimation model clearly outperforms LSTM, GRU, and PSO-GRU models, offering high estimation accuracy and application value.
Keywords: SOC estimation; SABO algorithm; GRU neural network; attention mechanism
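The SE (squeeze-and-excitation) channel attention mentioned above can be written in a few lines; the PyTorch snippet below is only an illustration that assumes 1-D feature maps over time, and the SABO hyperparameter search and full GRU estimator from the paper are not reproduced.

```python
# A minimal squeeze-and-excitation (SE) channel-attention block (illustrative sketch).
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (batch, channels, length)
        s = x.mean(dim=-1)                # squeeze: global average over time
        w = self.fc(s).unsqueeze(-1)      # excitation: per-channel weights in (0, 1)
        return x * w                      # reweight feature channels

se = SEBlock(channels=8)
feats = torch.randn(16, 8, 50)            # e.g., 8 feature channels over 50 time steps
print(se(feats).shape)                    # torch.Size([16, 8, 50])
```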
4. Research on HVAC energy-consumption prediction for public buildings based on XGBoost-WOA-BiLSTM-Attention
Authors: 于水, 罗宇晨, 安瑞, 李思尧, 陈志杰. 《建筑技术》, 2024, Issue 17, pp. 2071-2075 (5 pages)
To achieve energy savings and emission reductions under the dual-carbon goals and to reduce energy costs, a BiLSTM-based energy-consumption prediction model for HVAC systems in public buildings is proposed. On top of the BiLSTM model, the XGBoost algorithm is used to select input features and remove redundant ones, yielding the optimal model inputs; the WOA optimization algorithm is then used to optimize six hyperparameters of the BiLSTM model augmented with an Attention mechanism, and the resulting optimal parameters are substituted into the BiLSTM-Attention neural network for prediction, which is compared with the BiLSTM, BiLSTM-Attention, and WOA-BiLSTM-Attention models. The results show that the proposed XGBoost-WOA-BiLSTM-Attention model achieves an RMSE, MAE, and R² of 0.0106, 0.006, and 0.9991, respectively, outperforming the other models, and improves the root mean square error (RMSE) by 98% relative to the persistence model, providing a reference for research on reducing HVAC energy consumption in public buildings.
Keywords: HVAC energy consumption; XGBoost; WOA optimization; attention mechanism; BiLSTM
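A compact BiLSTM with additive attention pooling, sketched below in PyTorch, illustrates the core predictor described in the abstract; the feature count, hidden size, and horizon are hypothetical, and the XGBoost feature selection and WOA tuning steps are not included.

```python
# A compact BiLSTM-Attention regressor (illustrative sketch; hyperparameters are placeholders).
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)     # additive attention score per time step
        self.head = nn.Linear(2 * hidden, 1)      # final energy-consumption estimate

    def forward(self, x):                          # x: (batch, time, n_features)
        h, _ = self.lstm(x)                        # (batch, time, 2*hidden)
        alpha = torch.softmax(self.score(h), dim=1)  # attention weights over time
        context = (alpha * h).sum(dim=1)           # weighted summary of the sequence
        return self.head(context).squeeze(-1)

model = BiLSTMAttention(n_features=6)
x = torch.randn(32, 24, 6)                         # 24 hourly steps, 6 input features (made up)
print(model(x).shape)                              # torch.Size([32])
```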
5. Image Inpainting Technique Incorporating Edge Prior and Attention Mechanism
Authors: Jinxian Bai, Yao Fan, Zhiwei Zhao, Lizhi Zheng. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 999-1025 (27 pages)
Recently, deep learning-based image inpainting methods have made great strides in reconstructing damaged regions. However, these methods often struggle to produce satisfactory results when dealing with missing images with large holes, leading to distortions in the structure and blurring of textures. To address these problems, we combine the advantages of transformers and convolutions to propose an image inpainting method that incorporates edge priors and attention mechanisms. The proposed method aims to improve the results of inpainting large holes in images by enhancing the accuracy of structure restoration and the ability to recover texture details. This method divides the inpainting task into two phases: edge prediction and image inpainting. Specifically, in the edge prediction phase, a transformer architecture is designed to combine axial attention with standard self-attention. This design enhances the extraction capability of global structural features and location awareness. It also balances the complexity of self-attention operations, resulting in accurate prediction of the edge structure in the defective region. In the image inpainting phase, a multi-scale fusion attention module is introduced. This module makes full use of multi-level distant features and enhances local pixel continuity, thereby significantly improving the quality of image inpainting. To evaluate the performance of our method, comparative experiments are conducted on several datasets, including CelebA, Places2, and Facade. Quantitative experiments show that our method outperforms the other mainstream methods. Specifically, it improves Peak Signal-to-Noise Ratio (PSNR) and Structure Similarity Index Measure (SSIM) by 1.141~3.234 dB and 0.083~0.235, respectively. Moreover, it reduces Learning Perceptual Image Patch Similarity (LPIPS) and Mean Absolute Error (MAE) by 0.0347~0.1753 and 0.0104~0.0402, respectively. Qualitative experiments reveal that our method excels at reconstructing images with complete structural information and clear texture details. Furthermore, our model exhibits impressive performance in terms of the number of parameters, memory cost, and testing time.
Keywords: image inpainting; transformer; edge prior; axial attention; multi-scale fusion attention
6. MCBAN: A Small Object Detection Multi-Convolutional Block Attention Network
Authors: Hina Bhanbhro, Yew Kwang Hooi, Mohammad Nordin Bin Zakaria, Worapan Kusakunniran, Zaira Hassan Amur. Computers, Materials & Continua (SCIE, EI), 2024, Issue 11, pp. 2243-2259 (17 pages)
Object detection has made a significant leap forward in recent years. However, the detection of small objects continues to be a great difficulty for various reasons, such as their very small size and their susceptibility to missed detection due to background noise. Additionally, small object information is degraded by downsampling operations. Deep learning-based detection methods have been utilized to address the challenge posed by small objects. In this work, we propose a novel method, the Multi-Convolutional Block Attention Network (MCBAN), to increase the detection accuracy of minute objects, aiming to overcome the challenge of information loss during the downsampling process. The multi-convolutional attention block (MCAB), composed of channel attention and a spatial attention module (SAM), has been crafted to accomplish small object detection with higher precision. We have carried out experiments on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) and Pattern Analysis, Statistical Modeling and Computational Learning (PASCAL) Visual Object Classes (VOC) datasets and have followed a step-wise process to analyze the results. These experimental results demonstrate that significant gains in performance are achieved, such as 97.75% for KITTI and 88.97% for PASCAL VOC. The findings of this study assert quite unequivocally that MCBAN is much more efficient in the small object detection domain as compared to other existing approaches.
Keywords: multi-convolutional; channel attention; spatial attention; YOLO
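The spatial attention module (SAM) named in the abstract follows the common CBAM-style pattern of pooling across channels and convolving the result into a spatial mask; the sketch below illustrates only that generic pattern, not the full MCAB block of the paper.

```python
# A minimal CBAM-style spatial attention module (illustrative sketch only).
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                              # x: (B, C, H, W)
        avg_map = x.mean(dim=1, keepdim=True)          # channel-wise average
        max_map = x.max(dim=1, keepdim=True).values    # channel-wise maximum
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                                # emphasize informative locations

sam = SpatialAttention()
x = torch.randn(1, 64, 40, 40)
print(sam(x).shape)                                    # torch.Size([1, 64, 40, 40])
```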
7. Attention Mechanism-Based Method for Intrusion Target Recognition in Railway
Authors: SHI Jiang, BAI Dingyuan, GUO Baoqing, WANG Yao, RUAN Tao. Transactions of Nanjing University of Aeronautics and Astronautics (EI, CSCD), 2024, Issue 4, pp. 541-554 (14 pages)
The detection of foreign object intrusion is crucial for ensuring the safety of railway operations. To address challenges such as low efficiency, suboptimal detection accuracy, and slow detection speed inherent in conventional comprehensive video monitoring systems for railways, a railway foreign object intrusion recognition and detection system is conceived and implemented using edge computing and deep learning technologies. In a bid to raise detection accuracy, the convolutional block attention module (CBAM), including spatial and channel attention modules, is seamlessly integrated into the YOLOv5 model, giving rise to the CBAM-YOLOv5 model. Furthermore, the distance intersection-over-union non-maximum suppression (DIoU-NMS) algorithm is employed in lieu of the weighted non-maximum suppression algorithm, resulting in improved detection performance for intrusive targets. To accelerate detection speed, the model undergoes pruning based on the batch normalization (BN) layer, and TensorRT inference acceleration techniques are employed, culminating in the successful deployment of the algorithm on edge devices. The CBAM-YOLOv5 model exhibits a notable 2.1% enhancement in detection accuracy when evaluated on a self-constructed railway dataset, achieving 95.0% mean average precision (mAP). Furthermore, the inference speed on edge devices attains a commendable 15 frames/s.
Keywords: foreign object detection; railway protection; edge computing; spatial attention module; channel attention module
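DIoU, the quantity that DIoU-NMS thresholds instead of plain IoU, is overlap minus a normalized center-distance penalty, so nearby but clearly separated objects are less likely to suppress one another. A small illustrative function follows; the box coordinates are hypothetical and this is not the paper's implementation.

```python
# DIoU between two axis-aligned boxes, as used by DIoU-NMS (illustrative sketch).
def diou(box_a, box_b):
    """Boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection and union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union if union > 0 else 0.0
    # squared center distance, normalized by the enclosing-box diagonal
    d2 = ((ax1 + ax2) - (bx1 + bx2)) ** 2 / 4 + ((ay1 + ay2) - (by1 + by2)) ** 2 / 4
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2
    return iou - d2 / c2 if c2 > 0 else iou

# In DIoU-NMS a candidate is suppressed when its DIoU with a higher-scoring box
# exceeds the threshold.
print(round(diou((0, 0, 10, 10), (5, 5, 15, 15)), 3))
```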
8. Main focus of parents of children with attention deficit hyperactivity disorder and the effectiveness of early clinical screening
Authors: Jia-Wen Li, Ke Gao, Xiao-Yun Yang, Zhi-Fei Li. World Journal of Clinical Cases (SCIE), 2024, Issue 19, pp. 3752-3759 (8 pages)
BACKGROUND: Attention deficit hyperactivity disorder (ADHD) is a common mental and behavioral disorder among children. AIM: To explore the main focus of parents of children with ADHD and the effectiveness of early clinical screening. METHODS: This study found that the main directions of parents seeking medical help were short attention span for children under 7 years old (16.6%) and poor academic performance for children over 7 years old (12.1%). We employed a two-stage experiment to diagnose ADHD. Among the 5,683 children evaluated from 2018 to 2021, 360 met the DSM-5 criteria. Those diagnosed with ADHD underwent assessments for letter, number, and figure attention. Following the exclusion of ADHD-H diagnoses, the detection rate rose to 96.0%, with 310 out of 323 cases identified. RESULTS: This study yielded insights into the primary concerns of parents regarding their children's symptoms and validated the efficacy of a straightforward diagnostic test, offering valuable guidance for directing ADHD treatment, facilitating early detection, and enabling timely intervention. Our research delved into the predominant worries of parents across various age groups. Furthermore, we showcased the precision of the simple exclusion experiment in discerning between ADHD-I and ADHD-C in children. CONCLUSION: Our study will help diagnose and guide future treatment directions for ADHD.
Keywords: attention deficit hyperactivity disorder; children; parents; direction of attention; simple test
9. The Short-Term Prediction of Wind Power Based on the Convolutional Graph Attention Deep Neural Network
Authors: Fan Xiao, Xiong Ping, Yeyang Li, Yusen Xu, Yiqun Kang, Dan Liu, Nianming Zhang. Energy Engineering (EI), 2024, Issue 2, pp. 359-376 (18 pages)
The fluctuation of wind power affects the operating safety and power consumption of the electric power grid and restricts the grid connection of wind power on a large scale. Therefore, wind power forecasting plays a key role in improving the safety and economic benefits of the power grid. This paper proposes a wind power predicting method based on a convolutional graph attention deep neural network with multi-wind-farm data. Based on the graph attention network and attention mechanism, the method extracts spatial-temporal characteristics from the data of multiple wind farms. Then, combined with a deep neural network, a convolutional graph attention deep neural network model is constructed. Finally, the model is trained with the quantile regression loss function to achieve deterministic and probabilistic wind power prediction based on multi-wind-farm spatial-temporal data. A wind power dataset in the U.S. is taken as an example to demonstrate the efficacy of the proposed model. Compared with the selected baseline methods, the proposed model achieves the best prediction performance. The point prediction errors (i.e., root mean square error (RMSE) and normalized mean absolute percentage error (NMAPE)) are 0.304 MW and 1.177%, respectively, and the comprehensive performance of probabilistic prediction (i.e., continuous ranked probability score (CRPS)) is 0.580. Thus, the significance of multi-wind-farm data and the spatial-temporal feature extraction module is self-evident.
Keywords: wind power prediction; deep neural network; graph attention network; attention mechanism; quantile regression
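The quantile regression (pinball) loss used for the probabilistic forecasts above has a one-line definition; minimizing it at several quantile levels yields prediction intervals. A generic NumPy sketch with made-up numbers follows; it is not the paper's implementation.

```python
# Pinball (quantile) loss for probabilistic forecasting (generic sketch).
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Average quantile loss at level q in (0, 1)."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

y_true = np.array([1.0, 2.0, 3.0])   # hypothetical observations
y_pred = np.array([1.2, 1.8, 3.5])   # hypothetical forecasts
for q in (0.1, 0.5, 0.9):
    print(q, round(pinball_loss(y_true, y_pred, q), 4))
```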
10. New Fusion Approach of Spatial and Channel Attention for Semantic Segmentation of Very High Spatial Resolution Remote Sensing Images
Authors: Armand Kodjo Atiampo, Gokou Hervé Fabrice Diédié. Open Journal of Applied Sciences, 2024, Issue 2, pp. 288-319 (32 pages)
The semantic segmentation of very high spatial resolution remote sensing images is difficult due to the complexity of interpreting the interactions between the objects in the scene. Indeed, effective segmentation requires considering spatial local context and long-term dependencies. To address this problem, the proposed approach is inspired by the MAC-UNet network, an extension of U-Net that is densely connected and combined with channel attention. The advantages of this solution are as follows: 1) The new model introduces a new attention mechanism, called propagate attention, to build an attention-based encoder. 2) The fusion of multi-scale information is achieved by a weighted linear combination of the attentions whose coefficients are learned during the training phase. 3) The decoder introduces the Spatial-Channel-Global-Local block, an attention layer that uniquely combines channel attention and spatial attention both locally and globally. The performance of the model is evaluated on two datasets, WHDLD and DLRSD, and shows improvements in the mean intersection over union (mIoU) index of between 1.54% and 10.47% for DLRSD and between 1.04% and 4.37% for WHDLD, compared with the most efficient algorithms with attention mechanisms such as MAU-Net and transformers such as TMNet.
Keywords: spatial-channel attention; super-token; segmentation; self-attention; vision transformer
11. Foreign object recognition based on Coordinate Attention and dilated convolution (cited 1 time)
Authors: 王春霖, 吴春雷, 李灿伟, 朱明飞. 《计算机系统应用》, 2024, Issue 3, pp. 178-186 (9 pages)
Belt conveyors play an important role in industrial production in Chinese factories, but during material transport, wooden boards, metal pipes, large metal sheets, and other foreign objects are often mixed into the material, damaging the conveyor belt and causing huge economic losses. To detect irregular foreign objects on the conveyor belt, a new foreign-object detection method is designed. To address the insufficient image feature extraction capability and the relatively small receptive field of traditional foreign-object detection methods, we propose a single-stage foreign-object recognition method based on coordinate attention and dilated convolution. First, the network uses the coordinate attention mechanism to pay more attention to the spatial information of the image and to enhance important features, improving network performance. Second, in the part of the network that extracts multi-scale features, the static convolutions of the original network are replaced with dilated convolutions, effectively reducing the information loss caused by conventional convolution. In addition, a new loss function is used to further improve network performance. Experimental results show that the proposed network can effectively recognize foreign objects on the conveyor belt and completes the foreign-object detection task well.
Keywords: coordinate attention; foreign object detection; dilated convolution; loss function; object recognition
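The dilated-convolution substitution described above can be illustrated directly in PyTorch: a 3x3 convolution with dilation 2 covers an effective 5x5 neighborhood at the same resolution and parameter count. The tensor sizes below are arbitrary and not taken from the paper.

```python
# Standard vs. dilated 3x3 convolution: same parameters, larger receptive field.
import torch
import torch.nn as nn

x = torch.randn(1, 32, 64, 64)

standard = nn.Conv2d(32, 32, kernel_size=3, padding=1)              # 3x3 field
dilated = nn.Conv2d(32, 32, kernel_size=3, padding=2, dilation=2)   # effective 5x5 field

print(standard(x).shape, dilated(x).shape)   # both keep the 64x64 resolution
print(sum(p.numel() for p in standard.parameters()),
      sum(p.numel() for p in dilated.parameters()))  # identical parameter counts
```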
12. Multi-label classification of digital archives based on the ALBERT-Seq2Seq-Attention model
Authors: 王少阳, 成新民, 王瑞琴, 陈静雯, 周阳, 费志高. 《湖州师范学院学报》, 2024, Issue 2, pp. 65-72 (8 pages)
To address the lack of correlation among classification labels in existing multi-label classification methods for digital archives, a deep neural network model for archive multi-label classification, ALBERT-Seq2Seq-Attention, is proposed. The model uses the multi-layer bidirectional Transformer structure inside the ALBERT (A Lite BERT) pre-trained language model to extract text feature vectors and obtain contextual semantic information; the pre-trained text features are then fed as the input sequence to a Seq2Seq-Attention (Sequence to Sequence-Attention) model, and a label dictionary is built to capture the correlations among multiple labels. Comparative experiments on three datasets show that the classification F1 value of the model exceeds 90% in every case. The model not only improves the multi-label classification of archive texts but also attends to the correlations between labels.
Keywords: ALBERT; Seq2Seq; attention; multi-label classification; digital archives
13. Workout Action Recognition in Video Streams Using an Attention Driven Residual DC-GRU Network (cited 1 time)
Authors: Arnab Dey, Samit Biswas, Dac-Nhuong Le. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 3067-3087 (21 pages)
Regular exercise is a crucial aspect of daily life, as it enables individuals to stay physically active, lowers the likelihood of developing illnesses, and enhances life expectancy. The recognition of workout actions in video streams holds significant importance in computer vision research, as it aims to enhance exercise adherence, enable instant recognition, advance fitness tracking technologies, and optimize fitness routines. However, existing action datasets often lack diversity and specificity for workout actions, hindering the development of accurate recognition models. To address this gap, the Workout Action Video dataset (WAVd) has been introduced as a significant contribution. WAVd comprises a diverse collection of labeled workout action videos, meticulously curated to encompass various exercises performed by numerous individuals in different settings. This research proposes an innovative framework based on the Attention driven Residual Deep Convolutional-Gated Recurrent Unit (ResDC-GRU) network for workout action recognition in video streams. Unlike image-based action recognition, videos contain spatio-temporal information, making the task more complex and challenging. While substantial progress has been made in this area, challenges persist in detecting subtle and complex actions, handling occlusions, and managing the computational demands of deep learning approaches. The proposed ResDC-GRU Attention model demonstrated exceptional classification performance with 95.81% accuracy in classifying workout action videos and also outperformed various state-of-the-art models. The method also yielded 81.6%, 97.2%, 95.6%, and 93.2% accuracy on established benchmark datasets, namely HMDB51, Youtube Actions, UCF50, and UCF101, respectively, showcasing its superiority and robustness in action recognition. The findings suggest practical implications in real-world scenarios where precise video action recognition is paramount, addressing the persisting challenges in the field. The WAVd dataset serves as a catalyst for the development of more robust and effective fitness tracking systems and ultimately promotes healthier lifestyles through improved exercise monitoring and analysis.
Keywords: workout action recognition; video stream action recognition; residual network; GRU; attention
14. Attention Markets of Blockchain-Based Decentralized Autonomous Organizations (cited 1 time)
Authors: Juanjuan Li, Rui Qin, Sangtian Guan, Wenwen Ding, Fei Lin, Fei-Yue Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 6, pp. 1370-1380 (11 pages)
Attention is a scarce resource in decentralized autonomous organizations (DAOs), as their self-governance relies heavily on the attention-intensive decision-making process of “proposal and voting”. To prevent the negative effects of proposers' attention-capturing strategies that contribute to the “tragedy of the commons” and to ensure an efficient distribution of attention among multiple proposals, it is necessary to establish a market-driven allocation scheme for DAOs' attention. First, Harberger tax-based attention markets are designed to facilitate its allocation via continuous and automated trading, where an individualized Harberger tax rate (HTR) determined by the proposers' reputation is adopted. Then, the Stackelberg game model is formulated in these markets, casting attention owners in the role of leaders and other competitive proposers as followers. Its equilibrium trading strategies are also discussed to unravel the intricate dynamics of attention pricing. Moreover, utilizing the single-round Stackelberg game as an illustrative example, the existence of Nash equilibrium trading strategies is demonstrated. Finally, the impact of individualized HTR on trading strategies is investigated, and the results suggest that it has a negative correlation with leaders' self-assessed prices and ownership duration, but its effect on their revenues varies under different conditions. This study is expected to provide valuable insights into leveraging attention resources to improve DAOs' governance and decision-making process.
Keywords: attention; decentralized autonomous organizations; Harberger tax; Stackelberg game
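The Harberger-tax mechanism underlying these attention markets can be illustrated with a toy holding-cost calculation: the owner announces a self-assessed price, pays a recurring tax on it, and must sell at that price on demand. The price, tax rate, and per-period framing below are hypothetical and are not taken from the paper.

```python
# Toy illustration of Harberger-tax self-assessment (hypothetical numbers).
def holding_cost(self_assessed_price, tax_rate, periods):
    """Total tax paid while holding the attention slot for `periods` rounds."""
    return self_assessed_price * tax_rate * periods

price, htr = 100.0, 0.07                      # self-assessed price and individualized HTR
print(holding_cost(price, htr, periods=5))    # 35.0: cost of deterring buyers for 5 rounds
# Raising the price deters challengers but raises the tax bill linearly, which is
# the trade-off the paper's Stackelberg analysis formalizes.
```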
15. Attention-relation network for mobile phone screen defect classification via a few samples (cited 1 time)
Authors: Jiao Mao, Guoliang Xu, Lijun He, Jiangtao Luo. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 4, pp. 1113-1120 (8 pages)
How to use a few defect samples to complete defect classification is a key challenge in the production of mobile phone screens. An attention-relation network for mobile phone screen defect classification is proposed in this paper. The architecture of the attention-relation network contains two modules: a feature extraction module and a feature metric module. Different from other few-shot models, an attention mechanism is applied to metric learning in our model to measure the distance between features, so as to pay attention to the correlation between features and suppress unwanted information. Besides, we combine dilated convolution and skip connection to extract more feature information for follow-up processing. We validate the attention-relation network on the mobile phone screen defect dataset. The experimental results show that the classification accuracy of the attention-relation network is 0.9486 under the 5-way 1-shot training strategy and 0.9039 under the 5-way 5-shot setting. It achieves excellent classification of mobile phone screen defects and clearly outperforms competing approaches.
Keywords: mobile phone screen defects; few samples; relation network; attention mechanism; dilated convolution
16. A hybrid water-level prediction model based on CNN-LSTM-Attention and autoregression
Authors: 吕海峰, 涂井先, 林泓全, 冀肖榆. 《水利水电技术(中英文)》 (Peking University Core), 2024, Issue 6, pp. 16-31 (16 pages)
[Objective] Water-level prediction has an important impact on transportation, agriculture, and flood-control measures. Accurate water-level values can improve the safety and efficiency of waterway transport and reduce flood risk, and they are also a necessary condition for sustainable regional development. [Methods] A hybrid water-level prediction model, CRANet, is proposed, built on a convolutional neural network (CNN), a long short-term memory network (LSTM), an attention mechanism, and an autoregressive (AR) component. It is designed to handle both the linear and nonlinear components of time-series data and to mitigate the shortcomings of autoregressive and ARIMA models. Its applications include not only providing decision support for shipping scheduling and strengthening navigation safety and efficiency, but also improving flood-prevention and disaster-mitigation capability. The CNN and LSTM components effectively capture local and global relationships within the dataset, while the AR component fully accounts for the time-series characteristics of the data. Through the attention mechanism, the model can prioritize relevant features and improve prediction performance. [Results] The proposed model has been successfully applied to water-level prediction at the Wuzhou station on the Xijiang River in China; on the test set, the MAE, RMSE, and R² for predicting the water level 3 hours ahead are 0.086, 0.1145, and 0.9508, respectively. [Conclusion] The results demonstrate the high usability, accuracy, and robustness of the proposed CRANet model for water-level prediction, with better MAE, RMSE, and R² than the AR, SVR, CNN, and LSTM models.
Keywords: time series; water-level prediction; CNN; LSTM; attention; influencing factors; flood; Xijiang River
17. An Underwater Target Detection Algorithm Based on Attention Mechanism and Improved YOLOv7 (cited 1 time)
Authors: Liqiu Ren, Zhanying Li, Xueyu He, Lingyan Kong, Yinghao Zhang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 2829-2845 (17 pages)
For underwater robots in the process of performing target detection tasks, the color distortion and the uneven quality of underwater images lead to great difficulties in the feature extraction process of the model, which is prone to issues like error detection, omission detection, and poor accuracy. Therefore, this paper proposed the CER-YOLOv7 (CBAM-EIOU-RepVGG-YOLOv7) underwater target detection algorithm. To improve the algorithm's capability to retain valid features from both spatial and channel perspectives during the feature extraction phase, we have added a Convolutional Block Attention Module (CBAM) to the backbone network. The Reparameterization Visual Geometry Group (RepVGG) module is inserted into the backbone to improve the training and inference capabilities. The Efficient Intersection over Union (EIoU) loss is also used as the localization loss function, which reduces the error detection rate and missed detection rate of the algorithm. The experimental results of the CER-YOLOv7 algorithm on the UPRC (Underwater Robot Prototype Competition) dataset show that the mAP (mean Average Precision) score of the algorithm is 86.1%, which is a 2.2% improvement compared to the YOLOv7. The feasibility and validity of the CER-YOLOv7 are proved through ablation and comparison experiments, and it is more suitable for underwater target detection.
Keywords: deep learning; underwater object detection; improved YOLOv7; attention mechanism
18. Efficient Unsupervised Image Stitching Using Attention Mechanism with Deep Homography Estimation (cited 1 time)
Authors: Chunbin Qin, Xiaotian Ran. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 1319-1334 (16 pages)
Traditional feature-based image stitching techniques often encounter obstacles when dealing with images lacking unique attributes or suffering from quality degradation. The scarcity of annotated datasets in real-life scenes severely undermines the reliability of supervised learning methods in image stitching. Furthermore, existing deep learning architectures designed for image stitching are often too bulky to be deployed on mobile and peripheral computing devices. To address these challenges, this study proposes a novel unsupervised image stitching method based on the YOLOv8 (You Only Look Once version 8) framework that introduces deep homography networks and attention mechanisms. The methodology is partitioned into three distinct stages. The initial stage combines the attention mechanism with a pooling pyramid model to enhance the detection and recognition of compact objects in images; the task of the deep homography networks module is to estimate the global homography of the input images considering multiple viewpoints. The second stage involves preliminary stitching of the masks generated in the initial stage and further enhancement through weighted computation to eliminate common stitching artifacts. The final stage is characterized by adaptive reconstruction and careful refinement of the initial stitching results. Comprehensive experiments across multiple datasets are executed to meticulously assess the proposed model. Our method's Peak Signal-to-Noise Ratio (PSNR) and Structure Similarity Index Measure (SSIM) improved by 10.6% and 6%. These experimental results confirm the efficacy and utility of the model presented in this paper.
Keywords: unsupervised image stitching; deep homography estimation; YOLOv8; attention mechanism
19. A joint entity-relation extraction model fusing MacBERT and Talking-Heads Attention
Authors: 王春亮, 姚洁仪, 李昭. 《现代电子技术》 (Peking University Core), 2024, Issue 5, pp. 127-131 (5 pages)
To address the insufficient semantic understanding of existing medical-text relation extraction models during training, which can lead to unsatisfactory relation-extraction results, this paper proposes a joint entity-relation extraction model that fuses MacBERT and Talking-Heads Attention. The model first uses the MacBERT language model to obtain dynamic character-vector representations; as an improved BERT model, MacBERT reduces the discrepancy between the pre-training and fine-tuning stages and thus improves the model's generalization ability. These dynamic character vectors are then fed into a bidirectional gated recurrent unit (BiGRU) to extract contextual features of the text; BiGRU is an improved recurrent neural network (RNN) with a better ability to capture long-term dependencies. After the contextual features are obtained, Talking-Heads Attention is used to capture global features. Talking-Heads Attention is a self-attention mechanism that can capture relations between different positions in the text, thereby improving the accuracy of relation extraction. Experimental results show that, compared with the joint entity-relation extraction model GRTE, the proposed model improves the F1 value by 1%, precision by 0.4%, and recall by 1.5%.
Keywords: MacBERT; BiGRU; relation extraction; medical text; Talking-Heads Attention; deep learning; global features; neural network
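Talking-Heads Attention mixes information across attention heads with learned projections applied before and after the softmax (Shazeer et al., 2020). The PyTorch sketch below shows that mechanism in isolation, with placeholder dimensions; it is not the model from the paper above.

```python
# A minimal talking-heads attention layer (illustrative sketch with placeholder sizes).
import torch
import torch.nn as nn

class TalkingHeadsAttention(nn.Module):
    def __init__(self, d_model, heads):
        super().__init__()
        self.h, self.dk = heads, d_model // heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.pre = nn.Parameter(torch.eye(heads))    # mixes heads before the softmax
        self.post = nn.Parameter(torch.eye(heads))   # mixes heads after the softmax
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):                            # x: (batch, seq, d_model)
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape each to (batch, heads, seq, dk)
        q, k, v = (t.reshape(b, n, self.h, self.dk).transpose(1, 2) for t in (q, k, v))
        logits = q @ k.transpose(-2, -1) / self.dk ** 0.5       # (b, h, n, n)
        logits = torch.einsum("bhij,hg->bgij", logits, self.pre)
        weights = logits.softmax(dim=-1)
        weights = torch.einsum("bhij,hg->bgij", weights, self.post)
        ctx = (weights @ v).transpose(1, 2).reshape(b, n, self.h * self.dk)
        return self.out(ctx)

attn = TalkingHeadsAttention(d_model=64, heads=4)
print(attn(torch.randn(2, 10, 64)).shape)            # torch.Size([2, 10, 64])
```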