Journal Articles
84 articles found
1. A Self-Attention Based Dynamic Resource Management for Satellite-Terrestrial Networks
Authors: Lin Tianhao, Luo Zhiyong 《China Communications》 SCIE CSCD 2024, Issue 4, pp. 136-150 (15 pages)
Satellite-terrestrial networks can transcend the geographical constraints of traditional communication networks, enabling global coverage and offering users ubiquitous computing power support, which makes them an important direction for future communications. This paper considers a multi-scenario network model under the coverage of a low earth orbit (LEO) satellite, which can provide computing resources to users in remote areas to improve task processing efficiency. However, LEO satellites are limited in computing and communication resources, and the channels are time-varying and complex, which makes extracting state information difficult. We therefore study dynamic resource management for multi-access edge computing (MEC), jointly covering computing and communication resource allocation and power control. We transform the problem into a Markov decision process (MDP) and propose the self-attention based dynamic resource management (SABDRM) algorithm, which effectively extracts state-information features to enhance training. Simulation results show that the proposed algorithm effectively reduces the long-term average delay and energy consumption of the tasks.
Keywords: mobile edge computing; resource management; satellite-terrestrial networks; self-attention
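Nearly every entry in this listing builds on the same scaled dot-product self-attention primitive. For reference, a minimal single-head sketch in NumPy (the shapes and names below are illustrative, not taken from the SABDRM paper):

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention.

    x: (seq_len, d_model) input states; w_q, w_k, w_v: (d_model, d_k)
    projection matrices. Returns context vectors and attention weights.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    att = np.exp(scores)
    att /= att.sum(axis=-1, keepdims=True)           # softmax over key positions
    return att @ v, att

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                          # 5 observations, 8 features
w_q, w_k, w_v = (rng.normal(size=(8, 4)) for _ in range(3))
out, att = self_attention(x, w_q, w_k, w_v)
```

Each output row is a mixture of all value vectors, which is what lets such models weigh every observed state against every other in one step.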
2. Missing Value Imputation for Radar-Derived Time-Series Tracks of Aerial Targets Based on Improved Self-Attention-Based Network
Authors: Zihao Song, Yan Zhou, Wei Cheng, Futai Liang, Chenhao Zhang 《Computers, Materials & Continua》 SCIE EI 2024, Issue 3, pp. 3349-3376 (28 pages)
Frequent missing values in radar-derived time-series tracks of aerial targets (RTT-AT) pose significant challenges for subsequent data-driven tasks. However, most imputation research focuses on random missing (RM), which differs significantly from the missing patterns common in RTT-AT, and methods designed for RM may degrade or fail when applied to RTT-AT imputation. Conventional autoregressive deep learning methods are also prone to error accumulation and long-term dependency loss. This paper proposes a non-autoregressive imputation model for two common missing patterns in RTT-AT. The model consists of two probabilistic sparse diagonal masking self-attention (PSDMSA) units and a weight fusion unit. It learns missing values by combining the representations output by the two units, minimizing the difference between the imputed and actual values. The PSDMSA units effectively capture temporal dependencies and attribute correlations between time steps, improving imputation quality; the weight fusion unit automatically updates the weights of the two units' output representations to obtain a more accurate final representation. Experimental results indicate that, across varying missing rates in the two missing patterns, the model consistently outperforms other methods and deviates less often in its estimates for specific missing entries. Compared with Bidirectional Recurrent Imputation for Time Series (BRITS), the state-of-the-art autoregressive deep learning imputation model, the proposed model reduces mean absolute error (MAE) by 31%-50% and trains 4 to 8 times faster than both BRITS and a standard Transformer on the same dataset. Finally, ablation experiments confirm that the PSDMSA units, the weight fusion unit, the cascade network design, and the imputation loss all enhance imputation performance.
Keywords: missing value imputation; time-series tracks; probabilistic sparsity; diagonal masking self-attention; weight fusion
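The diagonal-masking idea can be illustrated in a few lines: if position i is barred from attending to itself, a value at step i must be reconstructed from the other time steps, which is exactly what an imputer needs. A toy NumPy sketch of the masking alone (not the paper's PSDMSA unit, which also adds probabilistic sparsity):

```python
import numpy as np

def diag_masked_attention(x):
    """Self-attention in which each time step attends to every step
    except itself; output at step i is a weighted mix of other steps."""
    scores = x @ x.T / np.sqrt(x.shape[-1])
    np.fill_diagonal(scores, -np.inf)                # forbid self-attention
    scores -= scores.max(axis=-1, keepdims=True)
    att = np.exp(scores)                             # exp(-inf) -> weight 0
    att /= att.sum(axis=-1, keepdims=True)
    return att @ x, att

rng = np.random.default_rng(1)
track = rng.normal(size=(6, 3))                      # 6 time steps, 3 attributes
recon, att = diag_masked_attention(track)
```

Because the diagonal weights are exactly zero, a corrupted entry at step i cannot leak into its own reconstruction.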
3. Aerial target threat assessment based on gated recurrent unit and self-attention mechanism
Authors: CHEN Chen, QUAN Wei, SHAO Zhuang 《Journal of Systems Engineering and Electronics》 SCIE CSCD 2024, Issue 2, pp. 361-373 (13 pages)
Aerial threat assessment is a crucial link in modern air combat, and its results weigh heavily in commanders' decisions. Since existing threat assessment methods struggle with high-dimensional time-series target data, a threat assessment method based on a self-attention mechanism and gated recurrent unit (SAGRU) is proposed. First, a threat feature system covering air combat situation and capability features is established. A data augmentation process based on the fractional Fourier transform (FRFT) is then applied to extract more valuable information from the time-series situation features. To capture key characteristics of battlefield evolution, a bidirectional GRU and a self-attention mechanism are designed to enhance the features. After concatenating the processed air combat situation and capability features, the target threat level is predicted by fully connected layers and a softmax classifier. Finally, the model is validated on an air combat dataset generated by a combat simulation system. Comparison experiments show that the proposed model is structurally sound and performs threat assessment faster and more accurately than other existing deep learning models.
Keywords: target threat assessment; gated recurrent unit (GRU); self-attention (SA); fractional Fourier transform (FRFT)
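For readers unfamiliar with the gated recurrent unit that SAGRU builds on, one recurrence step can be sketched as follows (the hidden size is arbitrary and biases are omitted; this is the textbook GRU, not the paper's exact network):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, p):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    z = sigmoid(x_t @ p["Wz"] + h_prev @ p["Uz"])          # how much to update
    r = sigmoid(x_t @ p["Wr"] + h_prev @ p["Ur"])          # how much to forget
    h_tilde = np.tanh(x_t @ p["Wh"] + (r * h_prev) @ p["Uh"])
    return (1 - z) * h_prev + z * h_tilde                  # convex combination

rng = np.random.default_rng(1)
p = {k: rng.normal(scale=0.1, size=(6, 6))
     for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}
h = np.zeros(6)
for x_t in rng.normal(size=(10, 6)):   # run over a 10-step situation sequence
    h = gru_step(x_t, h, p)
```

A bidirectional GRU simply runs this recurrence forward and backward over the sequence and concatenates the two hidden states per step.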
4. CFSA-Net: Efficient Large-Scale Point Cloud Semantic Segmentation Based on Cross-Fusion Self-Attention
Authors: Jun Shu, Shuai Wang, Shiqi Yu, Jie Zhang 《Computers, Materials & Continua》 SCIE EI 2023, Issue 12, pp. 2677-2697 (21 pages)
Traditional models for point cloud semantic segmentation primarily target smaller scales, yet real-world point clouds are often large, imposing heavy computational and memory requirements. The key to handling large-scale point clouds is random sampling, which offers higher computational efficiency and lower memory consumption than other sampling methods, but it can lose crucial points during encoding. To address these issues, this paper proposes the cross-fusion self-attention network (CFSA-Net), a lightweight and efficient architecture for directly processing large-scale point clouds. At its core, random sampling is paired with a local feature extraction module based on cross-fusion self-attention (CFSA). This module integrates long-range contextual dependencies between points through hierarchical position encoding (HPC) and enhances the interaction between each point's coordinates and feature information through cross-fusion self-attention pooling, yielding more comprehensive geometric information. Finally, a residual optimization (RO) structure extends each point's receptive field by stacking hierarchical position encoding and cross-fusion self-attention pooling, reducing the information loss caused by random sampling. Experiments on the Stanford Large-Scale 3D Indoor Spaces (S3DIS), Semantic3D, and SemanticKITTI datasets demonstrate the superiority of this algorithm over advanced approaches such as RandLA-Net and KPConv, underscoring the strong performance of CFSA-Net in large-scale 3D semantic segmentation.
Keywords: semantic segmentation; large-scale point cloud; random sampling; cross-fusion self-attention
5. Clothing Parsing Based on Multi-Scale Fusion and Improved Self-Attention Mechanism
Authors: 陈诺, 王绍宇, 陆然, 李文萱, 覃志东, 石秀金 《Journal of Donghua University (English Edition)》 CAS 2023, Issue 6, pp. 661-666 (6 pages)
Because existing deep learning methods lack long-range association and spatial location information, they cannot always recover fine details and accurate boundaries in complex clothing images. This paper presents a convolutional structure with multi-scale fusion to optimize clothing feature extraction, together with a self-attention module that captures long-range association information. The structure lets the self-attention mechanism participate directly in information exchange through the down-scaling projection operation of the multi-scale framework. In addition, the improved self-attention module extracts 2-dimensional relative position information, compensating for its limited ability to extract spatial position features from clothing images. Experiments on the colorful fashion parsing dataset (CFPD) show that the proposed network achieves 53.68% mean intersection over union (mIoU) and performs better on the clothing parsing task.
Keywords: clothing parsing; convolutional neural network; multi-scale fusion; self-attention mechanism; vision Transformer
6. Research on Aspect-Level Sentiment Analysis Based on Self-Attention
Authors: 蔡阳 《智能计算机与应用》 2023, Issue 8, pp. 150-154, 157 (6 pages)
Traditional models fall short in fine-grained aspect-level sentiment analysis: RNNs suffer from long-distance dependency problems and cannot be parallelized, while CNN outputs usually pass through a pooling layer that discards relative position information and some important features, and CNNs ignore textual context. This paper proposes a Light-Transformer-ALSC model based on the Self-Attention mechanism and the idea of interactive attention, using separate attention modules to extract features for aspect words and for the context, so that sentiment is analyzed at a fine granularity. Experiments on the SemEval-2014 Task 4 dataset show that the model outperforms most purely LSTM-based models. Excluding BERT-based models, accuracy improves by 1.3%-5.3% on the Laptop dataset and by 2.5%-5.5% on the Restaurant dataset; compared with BERT-based models, it achieves similar accuracy with far fewer parameters.
Keywords: aspect-level sentiment analysis; self-attention; Transformer; SemEval-2014 Task 4; BERT
7. A Short-Text Sentiment Classification Method Combining LDA and Self-Attention (Cited: 7)
Authors: 陈欢, 黄勃, 朱翌民, 俞雷, 余宇新 《计算机工程与应用》 CSCD 北大核心 2020, Issue 18, pp. 165-170 (6 pages)
In short-text sentiment classification, the brevity of the texts causes data sparsity, which lowers accuracy. To address this, a short-text sentiment classification method based on latent Dirichlet allocation (LDA) and self-attention is proposed. LDA produces a topic-word distribution for each review, which serves as an expansion of that review; the expansion and the original review text are fed together into a word2vec model for word-vector training, so that reviews on the same topic cluster in the high-dimensional vector space. Self-attention then assigns dynamic weights before classification. Experiments on the Tan Songbo hotel review dataset show that the algorithm effectively improves classification performance compared with current mainstream short-text sentiment classification algorithms.
Keywords: topic words; short text; self-attention; latent Dirichlet allocation (LDA); word2vec
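The expansion step can be sketched as follows, assuming the LDA topic-word weights and each review's topic mixture have already been fitted (the vocabulary, the weights, and the `expand` helper below are all invented for illustration; the paper's own pipeline is not reproduced here):

```python
import numpy as np

# Hypothetical outputs of an already-fitted LDA model.
vocab = ["room", "clean", "staff", "rude", "price", "cheap"]
topic_word = np.array([
    [0.40, 0.30, 0.10, 0.05, 0.10, 0.05],   # topic 0: room quality
    [0.05, 0.05, 0.10, 0.10, 0.35, 0.35],   # topic 1: price
])

def expand(review_tokens, doc_topic, k=2):
    """Append the top-k words of the review's dominant topic, giving
    word2vec more context to train on for a very short review."""
    topic = int(np.argmax(doc_topic))
    top = np.argsort(topic_word[topic])[::-1][:k]
    return review_tokens + [vocab[i] for i in top]

expanded = expand(["too", "pricey"], doc_topic=np.array([0.1, 0.9]))
```

The expanded token list is what would then be fed to word2vec, pulling same-topic reviews closer together in the embedding space.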
8. Spam SMS Detection Using a Self-Attention-Based Bi-LSTM Combined with TF-IDF (Cited: 9)
Authors: 吴思慧, 陈世平 《计算机系统应用》 2020, Issue 9, pp. 171-177 (7 pages)
As text messaging has become an important part of daily communication, identifying spam SMS has real practical value. This paper proposes a self-attention-based Bi-LSTM neural network model combined with TF-IDF. The model first feeds the message text into a Bi-LSTM layer as word vectors; after feature extraction, the TF-IDF and self-attention layers focus the information to produce the final feature vector, which a softmax classifier turns into the classification result. Experiments show that, compared with traditional classification models, the model improves SMS classification accuracy by 2.1%-4.6% and reduces running time by 0.6 s-10.2 s.
Keywords: spam SMS; text classification; self-attention; Bi-LSTM; TF-IDF
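One plausible reading of the TF-IDF and self-attention fusion is an attention pooling whose scores are biased toward high-TF-IDF tokens. The abstract does not specify the exact combination, so the additive log-TF-IDF bias below is purely our assumption, sketched over pre-computed per-token states:

```python
import numpy as np

def tfidf_attention_pool(states, tfidf):
    """Pool per-token hidden states (e.g. Bi-LSTM outputs) into a single
    message vector, biasing attention toward high-TF-IDF tokens."""
    query = states.mean(axis=0)
    scores = states @ query / np.sqrt(states.shape[-1])
    scores = scores + np.log(tfidf + 1e-9)       # TF-IDF bias (our assumption)
    scores -= scores.max()
    att = np.exp(scores)
    att /= att.sum()                             # softmax over tokens
    return att @ states, att

rng = np.random.default_rng(2)
states = rng.normal(scale=0.5, size=(4, 6))      # 4 tokens, 6-dim states
tfidf = np.array([0.02, 0.03, 8.0, 0.01])        # token 2 is distinctive
vec, att = tfidf_attention_pool(states, tfidf)
```

Here the distinctive token dominates the pooled vector, which is the intended effect of letting TF-IDF steer the attention.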
9. A Machine Translation System Based on the Self-Attention Model (Cited: 6)
Authors: 师岩, 王宇, 吴水清 《计算机与现代化》 2019, Issue 7, pp. 9-14 (6 pages)
Neural machine translation (NMT) has developed rapidly in recent years. The Seq2Seq framework brought a major advantage: it can generate an arbitrary output sequence after observing the entire input sentence. However, the model remains limited in capturing long-distance information; recurrent neural networks (RNNs) and LSTM networks were proposed to alleviate this, with little effect, while the attention mechanism effectively compensates for the defect. The Self-Attention model builds on the attention mechanism, and this paper uses it to construct an encoder-decoder framework. After reviewing earlier neural translation models and analyzing the mechanism and principles of the Self-Attention model, we implement a Self-Attention-based translation system with the TensorFlow deep learning framework. English-to-Chinese translation experiments comparing it with earlier neural translation models show that the model achieves good translation quality.
Keywords: neural machine translation; Seq2Seq framework; attention mechanism; Self-Attention model
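In a self-attention encoder-decoder, the decoder must not look at future output positions while generating left to right. A minimal sketch of that causal masking (standard Transformer-style lower-triangular mask, not the paper's exact implementation):

```python
import numpy as np

def causal_self_attention(x):
    """Decoder-side self-attention: position i may attend only to
    positions <= i, enforced with a lower-triangular mask."""
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)
    mask = np.tril(np.ones((n, n), dtype=bool))      # True where allowed
    scores = np.where(mask, scores, -np.inf)         # block future positions
    scores -= scores.max(axis=-1, keepdims=True)
    att = np.exp(scores)
    att /= att.sum(axis=-1, keepdims=True)
    return att @ x, att

rng = np.random.default_rng(3)
tokens = rng.normal(size=(5, 8))                     # 5 decoder positions
out, att = causal_self_attention(tokens)
```

The first position can only attend to itself, so its attention weight there is exactly 1; every later row distributes weight over the prefix only.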
10. Intelligent Detection of Improperly Worn Protective Gear in Electric Power Operations Using Self-Attention (Cited: 2)
Authors: 莫蓓蓓, 吴克河 《计算机与现代化》 2020, Issue 2, pp. 115-121, 126 (8 pages)
With the rapid development of power grid construction, the number of technical support personnel at work sites keeps growing. Power work sites are high-risk environments, and improperly worn safety protective equipment seriously endangers workers. To improve on inefficient manual supervision, this paper applies a real-time deep learning algorithm to detect improper wearing. The detection model combines the real-time object detection network YOLOv3 with a Self-Attention mechanism: drawing on the DANet structure, a self-attention module is embedded in the upper layers of the YOLOv3 network to better mine and learn feature position and channel relationships. Experiments show that the model reaches 94.58% mAP and 96.67% recall on the improper-wearing detection task, improving on YOLOv3 by 12.66% in mAP and 2.69% in recall. This markedly higher accuracy meets the task's detection requirements and raises the level of grid intelligence.
Keywords: electric power operations; improper wearing; YOLOv3; self-attention mechanism; object detection
11. Automatic Recognition of Indonesian Compound Nouns Fusing a Self-Attention Mechanism and n-gram Convolution Kernels (Cited: 1)
Authors: 丘心颖, 陈汉武, 陈源, 谭立聪, 张皓, 肖莉娴 《湖南工业大学学报》 2020, Issue 3, pp. 1-9 (9 pages)
For automatic recognition of Indonesian compound noun phrases, this paper proposes a method that fuses a Self-Attention mechanism and an n-gram-kernel neural network with a statistical model, improving existing multiword-expression extraction models. Building on the existing SHOMA model, multi-layer CNNs and a Self-Attention mechanism are introduced. Comparative experiments on compound noun phrase recognition over the Indonesian data released by Universal Dependencies show that the TextCNN + Self-Attention + CRF model achieves an F1 of 32.20 for multiword phrase recognition and 32.34 for single-token phrase recognition, improvements of 4.93% and 3.04% over the SHOMA model.
Keywords: Indonesian compound noun phrases; self-attention mechanism; convolutional neural network; automatic recognition; conditional random field
12. Self-attention transfer networks for speech emotion recognition (Cited: 3)
Authors: Ziping ZHAO, Keru WANG, Zhongtian BAO, Zixing ZHANG, Nicholas CUMMINS, Shihuang SUN, Haishuai WANG, Jianhua TAO, Björn W. SCHULLER 《Virtual Reality & Intelligent Hardware》 2021, Issue 1, pp. 43-54 (12 pages)
Background: A crucial element of human-machine interaction, the automatic detection of emotional states from human speech has long been regarded as a challenging task for machine learning models. One vital challenge in speech emotion recognition (SER) is learning robust and discriminative representations from speech. Although machine learning methods have been widely applied in SER research, the inadequate amount of available annotated data has become a bottleneck impeding the extended application of techniques such as deep neural networks. To address this issue, we present a deep learning method that combines knowledge transfer and self-attention for SER tasks. We apply the log-Mel spectrogram with deltas and delta-deltas as input. Given that emotions are time-dependent, we apply temporal convolutional neural networks to model their variation, and we further introduce an attention transfer mechanism, based on a self-attention algorithm, to learn long-term dependencies. The self-attention transfer network (SATN) in our proposed approach uses attention transfer to learn attention from speech recognition and then transfers this knowledge into SER. An evaluation on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset demonstrates the effectiveness of the proposed model.
Keywords: speech emotion recognition; attention transfer; self-attention; temporal convolutional neural networks (TCNs)
13. Enterprise Relation Extraction Based on Bi-GRU and Self-Attention
Authors: 张豪杰, 毛建华 《工业控制计算机》 2020, Issue 4, pp. 108-110, 113 (4 pages)
Extracting structured enterprise relations from large volumes of unstructured enterprise text is groundwork for building an enterprise knowledge graph. Recurrent neural networks (RNNs) and convolutional neural networks (CNNs) are the mainstream relation-extraction methods. Because enterprise text has complex syntax and pronounced long-range dependencies, the RNN variant Bi-GRU is used for initial extraction. Although Bi-GRU accounts for correlations between distant words, its feature extraction is not sufficient on its own, so Self-Attention is introduced on top of it, letting the model further compute long-range dependency features for each word and improving its representational power. Comparative experiments show that, on enterprise text, the method further improves relation-extraction performance over Bi-GRU alone and other classic models.
Keywords: enterprise relation extraction; long-range dependency; Bi-GRU; self-attention
14. Keyphrase Generation Based on Self-Attention Mechanism
Authors: Kehua Yang, Yaodong Wang, Wei Zhang, Jiqing Yao, Yuquan Le 《Computers, Materials & Continua》 SCIE EI 2019, Issue 8, pp. 569-581 (13 pages)
Keyphrases provide summarized, valuable information that helps us not only understand text semantics but also organize and retrieve text content effectively. The task of generating them automatically has received considerable attention in recent decades, and previous studies offer many workable solutions. One approach divides the content to be summarized into multiple blocks of text, then ranks and selects the most important content; its disadvantage is that it cannot identify keyphrases absent from the text, let alone capture the real semantic meaning hidden in it. Another approach uses recurrent neural networks to generate keyphrases from the semantic aspects of the text, but their inherently sequential nature precludes parallelization within training examples, and distance places limits on context dependencies. Previous work has demonstrated the benefits of the self-attention mechanism, which can learn global text dependency features and can be parallelized. Inspired by these observations, we propose a keyphrase generation model based entirely on the self-attention mechanism: an encoder-decoder model that effectively makes up for the above disadvantages. In addition, we consider the semantic similarity between keyphrases and add a semantic similarity processing module to the model. Empirical analysis on five datasets shows that the proposed model achieves competitive performance compared with baseline methods.
Keywords: keyphrase generation; self-attention mechanism; encoder-decoder framework
15. Hashtag Recommendation Using LSTM Networks with Self-Attention
Authors: Yatian Shen, Yan Li, Jun Sun, Wenke Ding, Xianjin Shi, Lei Zhang, Xiajiong Shen, Jing He 《Computers, Materials & Continua》 SCIE EI 2019, Issue 9, pp. 1261-1269 (9 pages)
On Twitter, people often use hashtags to mark the subject of a tweet, giving tweets specific themes or content that are easy to manage. With the growing number of tweets, automatically recommending hashtags has received wide attention. Previous hashtag recommendation methods cast the task as a multi-class classification problem, but such methods can only recommend hashtags that appeared in historical information and cannot recommend new ones. In this work, we extend the self-attention mechanism to turn hashtag recommendation into a sequence labeling task. To train and evaluate the proposed method, we use real tweet data collected from Twitter. Experimental results show that the proposed method significantly outperforms the state-of-the-art methods, increasing accuracy by 4%.
Keywords: hashtag recommendation; self-attention; neural networks; sequence labeling
16. Automatic infrared image recognition method for substation equipment based on a deep self-attention network and multi-factor similarity calculation
Authors: Yaocheng Li, Yongpeng Xu, Mingkai Xu, Siyuan Wang, Zhicheng Xie, Zhe Li, Xiuchen Jiang 《Global Energy Interconnection》 EI CAS CSCD 2022, Issue 4, pp. 397-408 (12 pages)
Infrared image recognition plays an important role in the inspection of power equipment. Existing technologies for this purpose often require manually selected features, which are neither transferable nor interpretable, and training data are limited. To address these limitations, this paper proposes an automatic infrared image recognition framework comprising an object recognition module based on a deep self-attention network and a temperature distribution identification module based on a multi-factor similarity calculation. First, the features of an input image are extracted and embedded using a multi-head attention encoding-decoding mechanism. The embedded features are then used to predict the category and location of the equipment component, and preliminary segmentation is performed in the located area. Finally, similar areas are gradually merged and the temperature distribution of the equipment is obtained to identify faults. Our experiments indicate that the proposed method achieves significantly higher accuracy than other related methods and, hence, provides a good reference for automating power equipment inspection.
Keywords: substation equipment; infrared image intelligent recognition; deep self-attention network; multi-factor similarity calculation
17. Joint Self-Attention Based Neural Networks for Semantic Relation Extraction
Authors: Jun Sun, Yan Li, Yatian Shen, Wenke Ding, Xianjin Shi, Lei Zhang, Xiajiong Shen, Jing He 《Journal of Information Hiding and Privacy Protection》 2019, Issue 2, pp. 69-75 (7 pages)
Relation extraction is an important task in the NLP community. However, some models often fail to capture long-distance semantic dependence, and the interaction between the semantics of two entities is ignored. In this paper, we propose a novel neural network model for semantic relation classification called joint self-attention Bi-LSTM (SA-Bi-LSTM), which models the internal structure of the sentence to obtain the importance of each word without relying on additional information, and captures long-distance semantic dependence. We conduct experiments on the SemEval-2010 Task 8 dataset. Extensive experiments demonstrate that the proposed method is effective for relation classification, achieving state-of-the-art classification accuracy with only minimal feature engineering.
Keywords: self-attention; relation extraction; neural networks
18. WMA: A Multi-Scale Self-Attention Feature Extraction Network Based on Weight Sharing for VQA
Authors: Yue Li, Jin Liu, Shengjie Shang 《Journal on Big Data》 2021, Issue 3, pp. 111-118 (8 pages)
Visual Question Answering (VQA) has attracted extensive research focus and has recently become a hot topic in deep learning, advanced by developments in computer vision and natural language processing. The key levers for improving VQA performance lie in the feature extraction, multimodal fusion, and answer prediction modules. A problem remains unsolved in the popular VQA image feature extraction module: it struggles to extract fine-grained features from objects of different scales. In this paper, a novel feature extraction network combining multi-scale convolution and self-attention branches is designed to solve this problem. Our approach achieves state-of-the-art single-model performance on the Pascal VOC 2012, VQA 1.0, and VQA 2.0 datasets.
Keywords: VQA; feature extraction; self-attention; fine-grained
19. An Intrusion Detection Scheme Based on Federated Learning and Self-Attention Fusion Convolutional Neural Network for IoT
Authors: Jie Deng, Ran Guo, Zilong Jin 《Journal on Internet of Things》 2022, Issue 3, pp. 141-153 (13 pages)
Traditional deep-learning-based intrusion detection methods face problems such as insufficient cloud storage, data privacy leaks, high communication costs, and unsatisfactory detection and false positive rates. To address these issues, this paper presents CS-FL, a novel approach combining federated learning and a self-attention fusion convolutional neural network. Federated learning is a distributed computing model in which each client trains on its own data without uploading it to a central server; local training results are uploaded and integrated across all participating clients to produce a global model. This shared model reduces communication costs, protects data privacy, and mitigates problems such as insufficient cloud storage and "data islands" at each client. In the proposed method, local data processing uses a hybrid model that integrates self-attention with the matching parts of a convolutional neural network, which not only enhances the hybrid model's performance but also reduces computational overhead compared with pure hybrid neural networks. Experiments on the NSL-KDD dataset show that the proposed method outperforms other intrusion detection techniques, a significant improvement in detection accuracy.
Keywords: intrusion detection; self-attention; convolutional neural network; federated learning
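The server-side aggregation the abstract describes is, in its simplest form, federated averaging: clients upload locally trained weights and the server mixes them in proportion to local dataset size, so raw data never leaves the clients. A minimal sketch (this is generic FedAvg, not the CS-FL model itself):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Server-side federated averaging: combine client weight tensors,
    each weighted by the fraction of total training data it saw."""
    sizes = np.asarray(client_sizes, dtype=float)
    mix = sizes / sizes.sum()                       # per-client mixing weights
    return sum(m * w for m, w in zip(mix, client_weights))

# Two clients with identical layer shapes but different local results.
clients = [np.full((2, 2), 1.0), np.full((2, 2), 3.0)]
global_w = fed_avg(clients, client_sizes=[100, 300])
```

With 100 and 300 samples respectively, the second client contributes three quarters of the global model, yielding weights of 2.5 everywhere.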
20. Remaining Useful Life Prediction for Aero-Engines Based on Probabilistic Sparse Self-Attention
Authors: 王欣, 黄佳琪, 许雅玺 《科学技术与工程》 北大核心 2024, Issue 6, pp. 2424-2433 (10 pages)
Predicting the remaining useful life (RUL) of aero-engines is important for their health management. For the long, multi-dimensional monitoring sequences of aero-engines, a Transformer model based on probabilistic sparse self-attention (ProbSparse Self-Attention) is proposed for accurate RUL prediction. ProbSparse Self-Attention replaces the standard self-attention mechanism of the original Transformer, making the model focus on the important time steps in the sequence, sharply shrinking the time dimension and reducing time and space complexity. The information integrated by the attention layers then passes through feed-forward and convolutional layers to extract spatial features across the sensors; encoder layers are connected by dilated causal convolutions, which enlarge the receptive field and improve the model's ability to capture long-sequence information. The algorithm is validated on the newly released N-CMAPSS dataset. Experiments show that, compared with the baseline models in the study, the proposed model improves both the RMSE and the Score metric, and its inference speed also exceeds that of the other models.
Keywords: probabilistic sparse self-attention; remaining useful life prediction; aero-engine; Transformer; deep learning
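The query-selection step behind probabilistic sparse self-attention can be sketched as follows: score each query by how far its attention distribution deviates from uniform (max minus mean of the scaled scores) and keep only the top-u queries, letting the rest fall back to a cheap mean context. This is an Informer-style simplification with invented shapes, not the paper's implementation:

```python
import numpy as np

def probsparse_queries(q, k, u):
    """Sparsity measurement for ProbSparse-style attention: queries whose
    score rows are far from uniform carry the dominant attention mass."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    sparsity = scores.max(axis=-1) - scores.mean(axis=-1)
    return np.argsort(sparsity)[::-1][:u]            # indices of active queries

rng = np.random.default_rng(4)
q = rng.normal(size=(16, 8))                         # 16 time steps as queries
k = rng.normal(size=(16, 8))
active = probsparse_queries(q, k, u=4)
```

Only the selected queries then get full attention rows, which is where the reduction in time and space complexity comes from.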