Journal Articles
106,952 articles found
1. Worker’s Helmet Recognition and Identity Recognition Based on Deep Learning
Authors: Jie Wang, Guangzu Zhu, Shiqi Wu, Chunshan Luo. 《Open Journal of Modelling and Simulation》, 2021, No. 2, pp. 135-145 (11 pages).
For decades, safety has been a concern for the construction industry. Helmet detection has attracted attention in machine learning, but the problem of identity recognition has been ignored in previous studies, which complicates the subsequent safety education of workers. Meanwhile, many scholars have devoted themselves to person re-identification while neglecting safety detection. This paper proposes a deep learning-based method that, unlike previous studies of helmet detection and human identity recognition, performs both helmet detection and identity recognition for construction workers. We collected 3,000 real-name channel images and constructed a neural network based on the You Only Look Once (YOLO) v3 model to extract the features of the construction worker’s face and helmet, respectively. Experiments show that the method achieves high recognition accuracy, fast recognition speed, and accurate recognition of workers and helmets, and solves the problem of poor supervision of real-name channels.
Keywords: construction safety, human identity recognition, helmet recognition, computer vision, deep learning
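For readers who want a concrete starting point, the sketch below shows how a two-class YOLOv3-style detector (faces and helmets) can be run with OpenCV’s DNN module. The config/weight file names and the two-class label list are illustrative assumptions, not the authors’ released model.

```python
# Minimal sketch: running a YOLOv3-style detector with OpenCV's DNN module.
# The weight/config file names and the two-class list are assumptions.
import cv2
import numpy as np

CLASSES = ["face", "helmet"]          # assumed label set
net = cv2.dnn.readNetFromDarknet("yolov3_helmet.cfg", "yolov3_helmet.weights")
layer_names = net.getUnconnectedOutLayersNames()

def detect(image_path, conf_thresh=0.5, nms_thresh=0.4):
    img = cv2.imread(image_path)
    h, w = img.shape[:2]
    blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    boxes, scores, class_ids = [], [], []
    for output in net.forward(layer_names):
        for det in output:                      # det = [cx, cy, bw, bh, obj, cls...]
            cls_scores = det[5:]
            cls_id = int(np.argmax(cls_scores))
            conf = float(cls_scores[cls_id])
            if conf > conf_thresh:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                scores.append(conf)
                class_ids.append(cls_id)
    keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thresh, nms_thresh)
    return [(CLASSES[class_ids[i]], scores[i], boxes[i]) for i in np.array(keep).flatten()]
```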
2. Mobile-Customer Identity Recognition
Authors: LI Zhan, XU Ji-sheng, XU Min, SUN Hong. 《Wuhan University Journal of Natural Sciences》 EI CAS, 2005, No. 6, pp. 1013-1018 (6 pages).
By utilizing artificial intelligence and pattern recognition techniques, we propose an integrated mobile-customer identity recognition approach in this paper, based on customers’ behavior characteristics extracted from the customer information database. To verify the effectiveness of this approach, a test was run on a dataset consisting of 1,000 customers over 3 consecutive months. The result is compared with the real dataset in the fourth month, consisting of 162 customers, which was set as the customers for recognition. The high correct rate of the test (96.30%), together with a 1.87% judge-by-mistake rate and a 7.82% leaving-out rate, demonstrates the effectiveness of this approach.
Keywords: mobile identity recognition, genetic algorithm, fuzzy sets, artificial intelligence techniques
3. Convolutional Neural Network-Based Identity Recognition Using ECG at Different Water Temperatures During Bathing
Authors: Jianbo Xu, Wenxi Chen. 《Computers, Materials & Continua》 SCIE EI, 2022, No. 4, pp. 1807-1819 (13 pages).
This study proposes a convolutional neural network (CNN)-based identity recognition scheme using electrocardiogram (ECG) signals at different water temperatures (WTs) during bathing, aiming to explore the impact of ECG length on the recognition rate. ECG data was collected using non-contact electrodes at five different WTs during bathing. Ten young student subjects (seven men and three women) participated in data collection. Three ECG recordings were collected at each preset bathtub WT for each subject. Each recording is 18 min long, with a sampling rate of 200 Hz. In total, 150 ECG recordings and 150 WT recordings were collected. The R peaks were detected based on the processed ECG (baseline wandering eliminated, 50-Hz hum removed, ECG smoothing and normalization), and the QRS complex waves were segmented. These segmented waves were then transformed into binary images, which served as the datasets. For each subject, the training, validation, and test data were taken from the first, second, and third ECG recordings, respectively. The numbers of training and validation images were 84,297 and 83,734, respectively. In the test stage, the preliminary classification results were obtained using the trained CNN model, and the finer classification results were determined using the majority vote method based on the preliminary results. The validation rate was 98.71%. The recognition rates were 95.00% and 98.00% when the number of test heartbeats was 7 and 17, respectively, for each subject.
Keywords: electrocardiogram, QRS, recognition rate, water temperatures, convolutional neural network, majority vote
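The majority-vote step described in the abstract is simple to reproduce; the toy sketch below aggregates per-heartbeat CNN predictions into one subject decision. The example labels are made up for illustration.

```python
# Minimal sketch of a majority vote over per-heartbeat predictions:
# each heartbeat is classified independently by a trained model, and the
# subject ID predicted most often wins.
from collections import Counter

def majority_vote(heartbeat_predictions):
    """Return the subject ID predicted most often across heartbeats."""
    counts = Counter(heartbeat_predictions)
    label, _ = counts.most_common(1)[0]
    return label

# e.g., 7 heartbeats from one test recording (made-up labels)
preds = ["S03", "S03", "S07", "S03", "S03", "S01", "S03"]
print(majority_vote(preds))   # -> "S03"
```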
4. A Speaker Identity Recognition System based on Deep Learning
Authors: Yili Shen. 《Journal of Electronic Research and Application》, 2019, No. 5, pp. 21-22 (2 pages).
This paper describes a branch of pattern recognition that lies in the field of digital signal processing: a deep learning-based speech recognition system for identifying different people by their speech. In brief, this method can be used for intelligent voice control, similar to Siri.
Keywords: speech recognition, intelligent signal processing
5. Research on Present Situation and Countermeasure of Identity Recognition among Current College Students towards Socialist Core Value System
Authors: Xue Lei. 《International English Education Research》, 2015, No. 1, pp. 69-71 (3 pages).
Keywords: socialist modernization, value system, college students, identification, higher vocational schools, values, educational methods, ideological construction
6. Recent Advances on Deep Learning for Sign Language Recognition
Authors: Yanqiong Zhang, Xianwei Jiang. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024, No. 6, pp. 2399-2450 (52 pages).
Sign language, a visual-gestural language used by the deaf and hard-of-hearing community, plays a crucial role in facilitating communication and promoting inclusivity. Sign language recognition (SLR), the process of automatically recognizing and interpreting sign language gestures, has gained significant attention in recent years due to its potential to bridge the communication gap between the hearing impaired and the hearing world. The emergence and continuous development of deep learning techniques have provided inspiration and momentum for advancing SLR. This paper presents a comprehensive and up-to-date analysis of the advancements, challenges, and opportunities in deep learning-based sign language recognition, focusing on the past five years of research. We explore various aspects of SLR, including sign data acquisition technologies, sign language datasets, evaluation methods, and different types of neural networks. Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have shown promising results in fingerspelling and isolated sign recognition. However, the continuous nature of sign language poses challenges, leading to the exploration of advanced neural network models such as the Transformer model for continuous sign language recognition (CSLR). Despite significant advancements, several challenges remain in the field of SLR. These challenges include expanding sign language datasets, achieving user independence in recognition systems, exploring different input modalities, effectively fusing features, modeling co-articulation, and improving semantic and syntactic understanding. Additionally, developing lightweight network architectures for mobile applications is crucial for practical implementation. By addressing these challenges, we can further advance the field of deep learning for sign language recognition and improve communication for the hearing-impaired community.
Keywords: sign language recognition, deep learning, artificial intelligence, computer vision, gesture recognition
7. Spatial Distribution Feature Extraction Network for Open Set Recognition of Electromagnetic Signal
Authors: Hui Zhang, Huaji Zhou, Li Wang, Feng Zhou. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024, No. 4, pp. 279-296 (18 pages).
This paper proposes a novel open set recognition method, the Spatial Distribution Feature Extraction Network (SDFEN), to address the problem of electromagnetic signal recognition in an open environment. The spatial distribution feature extraction layer in SDFEN replaces convolutional output neural networks with spatial distribution features that focus more on inter-sample information by incorporating class center vectors. The designed hybrid loss function considers both intra-class distance and inter-class distance, thereby enhancing the similarity among samples of the same class and increasing the dissimilarity between samples of different classes during training. Consequently, this method allows unknown classes to occupy a larger space in the feature space. This reduces the possibility of overlap with known class samples and makes the boundaries between known and unknown samples more distinct. Additionally, the feature comparator threshold can be used to reject unknown samples. For signal open set recognition, seven methods, including the proposed method, are applied to two kinds of electromagnetic signal data: modulation signals and real-world emitters. The experimental results demonstrate that the proposed method outperforms the other six methods overall in a simulated open environment. Specifically, compared to the state-of-the-art Openmax method, the novel method achieves up to 8.87% and 5.25% higher micro-F-measures, respectively.
Keywords: electromagnetic signal recognition, deep learning, feature extraction, open set recognition
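A hedged PyTorch sketch of the general idea behind the hybrid loss (pull samples toward learnable class centers, push different centers apart) is given below; it mirrors the description in the abstract rather than the paper’s exact formulation, and the margin, weighting, and dimensions are assumptions.

```python
# Illustrative hybrid loss: intra-class distance to learnable class centers
# plus a hinge that keeps different class centers apart. Not the paper's
# exact loss; feat_dim, margin, and lam are assumed values.
import torch
import torch.nn as nn

class HybridCenterLoss(nn.Module):
    def __init__(self, num_classes, feat_dim, margin=1.0, lam=0.5):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = margin
        self.lam = lam

    def forward(self, features, labels):
        # intra-class term: squared distance between each feature and its class center
        intra = ((features - self.centers[labels]) ** 2).sum(dim=1).mean()
        # inter-class term: hinge on pairwise center distances (push centers apart)
        dists = torch.cdist(self.centers, self.centers, p=2)
        mask = ~torch.eye(len(self.centers), dtype=torch.bool, device=dists.device)
        inter = torch.clamp(self.margin - dists[mask], min=0).mean()
        return intra + self.lam * inter

loss_fn = HybridCenterLoss(num_classes=7, feat_dim=128)
feats = torch.randn(32, 128)
labels = torch.randint(0, 7, (32,))
print(loss_fn(feats, labels))
```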
8. Deep Learning Approach for Hand Gesture Recognition: Applications in Deaf Communication and Healthcare
Authors: Khursheed Aurangzeb, Khalid Javeed, Musaed Alhussein, Imad Rida, Syed Irtaza Haider, Anubha Parashar. 《Computers, Materials & Continua》 SCIE EI, 2024, No. 1, pp. 127-144 (18 pages).
Hand gestures have been used as a significant mode of communication since the advent of human civilization. By facilitating human-computer interaction (HCI), hand gesture recognition (HGRoc) technology is crucial for seamless and error-free HCI. HGRoc technology is pivotal in healthcare and communication for the deaf community. Despite significant advancements in computer vision-based gesture recognition for language understanding, two considerable challenges persist in this field: (a) limited and common gestures are considered, and (b) processing multiple channels of information across a network takes huge computational time during discriminative feature extraction. Therefore, a novel hand vision-based convolutional neural network (CNN) model named HVCNNM offers several benefits, notably enhanced accuracy, robustness to variations, real-time performance, reduced channels, and scalability. Additionally, these models can be optimized for real-time performance, learn from large amounts of data, and are scalable to handle complex recognition tasks for efficient human-computer interaction. The proposed model was evaluated on two challenging datasets, namely the Massey University Dataset (MUD) and the American Sign Language (ASL) Alphabet Dataset (ASLAD). On the MUD and ASLAD datasets, HVCNNM achieved scores of 99.23% and 99.00%, respectively. These results demonstrate the effectiveness of CNN as a promising HGRoc approach. The findings suggest that the proposed model has potential roles in applications such as sign language recognition, human-computer interaction, and robotics.
Keywords: computer vision, deep learning, gait recognition, sign language recognition, machine learning
9. Enhancing Identity Protection in Metaverse-Based Psychological Counseling System
Authors: Jun Lee, Hanna Lee, Seong Chan Lee, Hyun Kwon. 《Computers, Materials & Continua》 SCIE EI, 2024, No. 1, pp. 617-632 (16 pages).
Non-face-to-face psychological counseling systems rely on network technologies to anonymize information regarding client identity. However, these systems often face challenges concerning voice data leaks and the suboptimal communication of the client’s non-verbal expressions, such as facial cues, to the counselor. This study proposes a metaverse-based psychological counseling system designed to enhance client identity protection while ensuring efficient information delivery to counselors during non-face-to-face counseling. The proposed system incorporates a voice modulation function that instantly modifies/masks the client’s voice to safeguard their identity. Additionally, it employs real-time client facial expression recognition using an ensemble of decision trees to mirror the client’s non-verbal expressions through their avatar in the metaverse environment. The system is adaptable for use on personal computers and smartphones, offering users the flexibility to access metaverse-based psychological counseling across diverse environments. The performance evaluation of the proposed system confirmed that the voice modulation and real-time facial expression replication consistently achieve an average speed of 48.32 frames per second or higher, even when tested on the least powerful smartphone configurations. Moreover, a total of 550 actual psychological counseling sessions were conducted, and the average satisfaction rating reached 4.46 on a 5-point scale. This indicates that clients experienced improved identity protection compared to conventional non-face-to-face metaverse counseling approaches. Additionally, the counselor successfully addressed the challenge of conveying non-verbal cues from clients who typically struggled with non-face-to-face psychological counseling. The proposed system holds significant potential for applications in interactive discussions and educational activities in the metaverse.
Keywords: metaverse, counseling system, face tracking, identity protection
10. An Approach for Human Posture Recognition Based on the Fusion PSE-CNN-BiGRU Model
Authors: Xianghong Cao, Xinyu Wang, Xin Geng, Donghui Wu, Houru An. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024, No. 7, pp. 385-408 (24 pages).
This study proposes a pose estimation-convolutional neural network-bidirectional gated recurrent unit (PSE-CNN-BiGRU) fusion model for human posture recognition, addressing the low accuracy of abnormal posture recognition caused by the loss of some feature information and the deterioration of comprehensive model detection performance in complex home environments. Firstly, the deep convolutional network is integrated with the Mediapipe framework to extract high-precision, multi-dimensional information from the key points of the human skeleton, thereby obtaining a human posture feature set. Thereafter, a double-layer BiGRU algorithm is utilized to extract multi-layer, bidirectional temporal features from the human posture feature set, and a CNN network with an exponential linear unit (ELU) activation function is adopted to perform deep convolution of the feature map to extract the spatial features of the human posture. Furthermore, a squeeze-and-excitation networks (SENet) module is introduced to adaptively learn the importance weights of each channel, enhancing the network’s focus on important features. Finally, comparative experiments are performed on available datasets, including the public human activity recognition using smartphone dataset (UCIHAR), the public human activity recognition 70 plus dataset (HAR70PLUS), and the independently developed home abnormal behavior recognition dataset (HABRD) created by the authors’ team. The results show that the average accuracy of the proposed PSE-CNN-BiGRU fusion model for human posture recognition is 99.56%, 89.42%, and 98.90%, respectively, which is 5.24%, 5.83%, and 3.19% higher than the average accuracy of the five models proposed in the comparative literature, including CNN, GRU, and others. The F1-score for abnormal posture recognition reaches 98.84% (heartache), 97.18% (fall), 99.6% (bellyache), and 98.27% (climbing) on the self-built HABRD dataset, thus verifying the effectiveness, generalization, and robustness of the proposed model in enhancing human posture recognition.
Keywords: posture recognition, MediaPipe, BiGRU, CNN, ELU, attention
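The sketch below is a rough PyTorch rendition of two of the named building blocks, a double-layer BiGRU over pose-keypoint sequences followed by a squeeze-and-excitation (SE) channel-attention module; layer sizes and the 66-dimensional keypoint input are assumptions, not the paper’s configuration.

```python
# Rough sketch: BiGRU over keypoint sequences + SE channel attention.
# All dimensions below are illustrative assumptions.
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                 # x: (batch, channels, length)
        w = self.fc(x.mean(dim=2))        # squeeze over time, excite per channel
        return x * w.unsqueeze(-1)

class PoseBiGRU(nn.Module):
    def __init__(self, keypoint_dim=66, hidden=64, num_classes=6):
        super().__init__()
        self.bigru = nn.GRU(keypoint_dim, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.se = SEBlock1d(2 * hidden)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                 # x: (batch, time, keypoint_dim)
        seq, _ = self.bigru(x)            # (batch, time, 2*hidden)
        seq = self.se(seq.transpose(1, 2)).transpose(1, 2)
        return self.head(seq.mean(dim=1)) # temporal average pooling

model = PoseBiGRU()
print(model(torch.randn(4, 30, 66)).shape)   # -> torch.Size([4, 6])
```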
11. TransTM: A device-free method based on time-streaming multiscale transformer for human activity recognition
Authors: Yi Liu, Weiqing Huang, Shang Jiang, Bobai Zhao, Shuai Wang, Siye Wang, Yanfang Zhang. 《Defence Technology(防务技术)》 SCIE EI CAS CSCD, 2024, No. 2, pp. 619-628 (10 pages).
RFID-based human activity recognition (HAR) attracts attention due to its convenience, noninvasiveness, and privacy protection. Existing RFID-based HAR methods use modeling, CNN, or LSTM to extract features effectively. Still, they have shortcomings: 1) requiring complex hand-crafted data cleaning processes and 2) only addressing single-person activity recognition based on specific RF signals. To solve these problems, this paper proposes a novel device-free method based on a Time-streaming Multiscale Transformer, called TransTM. This model leverages the Transformer’s powerful data fitting capabilities to take raw RFID RSSI data as input without pre-processing. Concretely, we propose a multiscale convolutional hybrid Transformer to capture behavioral features, recognizing both single-human activities and human-to-human interactions. Compared with existing CNN- and LSTM-based methods, the Transformer-based method has more data fitting power, generalization, and scalability. Furthermore, using RF signals, our method achieves an excellent classification effect on human behavior-based classification tasks. Experimental results on actual RFID datasets show that this model achieves a high average recognition accuracy (99.1%). The dataset we collected for detecting RFID-based indoor human activities will be published.
Keywords: human activity recognition, RFID, Transformer
12. Adaptive Segmentation for Unconstrained Iris Recognition
Authors: Mustafa AlRifaee, Sally Almanasra, Adnan Hnaif, Ahmad Althunibat, Mohammad Abdallah, Thamer Alrawashdeh. 《Computers, Materials & Continua》 SCIE EI, 2024, No. 2, pp. 1591-1609 (19 pages).
In standard iris recognition systems, a cooperative imaging framework is employed that includes a light source with a near-infrared wavelength to reveal iris texture, look-and-stare constraints, and a close distance requirement to the capture device. When these conditions are relaxed, the system’s performance significantly deteriorates due to segmentation and feature extraction problems. Herein, a novel segmentation algorithm is proposed to correctly detect the pupil and limbus boundaries of iris images captured in unconstrained environments. First, the algorithm scans the whole iris image in the Hue Saturation Value (HSV) color space for local maxima to detect the sclera region. The image quality is then assessed by computing global features in red, green and blue (RGB) space, as noisy images have heterogeneous characteristics. The iris images are accordingly classified into seven categories based on their global RGB intensities. After the classification process, the images are filtered, and adaptive thresholding is applied to enhance the global contrast and detect the outer iris ring. Finally, to characterize the pupil area, the algorithm scans the cropped outer ring region for local minima values to identify the darkest area in the iris ring. The experimental results show that our method outperforms existing segmentation techniques on the UBIRIS.v1 and v2 databases, achieving a segmentation accuracy of 99.32 on UBIRIS.v1 and an error rate of 1.59 on UBIRIS.v2.
Keywords: image recognition, color segmentation, image processing, localization
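A simplified OpenCV sketch of two of the described steps (HSV-based sclera candidate detection and adaptive thresholding around the limbus) follows; all thresholds and block sizes are placeholder assumptions rather than the paper’s tuned values.

```python
# Simplified sketch: HSV sclera candidates, adaptive thresholding for the
# outer iris ring, and a crude dark-region pupil candidate. Thresholds are
# placeholder assumptions, not the paper's values.
import cv2
import numpy as np

def rough_iris_masks(image_path):
    img = cv2.imread(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # sclera tends to be low-saturation, high-value; crude candidate mask
    sclera_mask = cv2.inRange(hsv, np.array([0, 0, 170]), np.array([180, 60, 255]))

    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    # adaptive threshold highlights local contrast around the limbus
    iris_edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY_INV, 31, 5)
    # darkest region inside the iris ring is a pupil candidate
    _, pupil_mask = cv2.threshold(gray, float(np.percentile(gray, 5)), 255,
                                  cv2.THRESH_BINARY_INV)
    return sclera_mask, iris_edges, pupil_mask
```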
13. Sparse representation scheme with enhanced medium pixel intensity for face recognition
Authors: Xuexue Zhang, Yongjun Zhang, Zewei Wang, Wei Long, Weihao Gao, Bob Zhang. 《CAAI Transactions on Intelligence Technology》 SCIE EI, 2024, No. 1, pp. 116-127 (12 pages).
Sparse representation is an effective data classification algorithm that depends on the known training samples to categorise the test sample. It has been widely used in various image classification tasks. Sparseness in sparse representation means that only a few instances selected from all training samples can effectively convey the essential class-specific information of the test sample, which is very important for classification. For deformable images such as human faces, pixels at the same location of different images of the same subject usually have different intensities. Therefore, extracting features and correctly classifying such deformable objects is very hard. Moreover, lighting, attitude and occlusion cause more difficulty. Considering the problems and challenges listed above, a novel image representation and classification algorithm is proposed. First, the authors’ algorithm generates virtual samples by a non-linear variation method. This method can effectively extract the low-frequency information of space-domain features of the original image, which is very useful for representing deformable objects. The combination of the original and virtual samples is more beneficial to improve the classification performance and robustness of the algorithm. Thereby, the authors’ algorithm calculates the expression coefficients of the original and virtual samples separately using the sparse representation principle and obtains the final score by a designed efficient score fusion scheme. The weighting coefficients in the score fusion scheme are set entirely automatically. Finally, the algorithm classifies the samples based on the final scores. The experimental results show that our method performs better classification than conventional sparse representation algorithms.
Keywords: computer vision, face recognition, image classification, image representation
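As background for the sparse-representation step, the sketch below shows bare-bones sparse-representation classification (SRC) with a lasso coder and class-wise reconstruction residuals; the paper’s virtual-sample generation and automatic score-fusion weighting are omitted, and alpha is an illustrative setting.

```python
# Bare-bones SRC sketch: code the test sample sparsely over all training
# samples, then assign the class with the smallest reconstruction residual.
# The virtual-sample scheme and score fusion from the paper are omitted.
import numpy as np
from sklearn.linear_model import Lasso

def src_predict(X_train, y_train, x_test, alpha=0.01):
    # rows of X_train are training samples; normalize them to unit norm atoms
    D = X_train / (np.linalg.norm(X_train, axis=1, keepdims=True) + 1e-12)
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D.T, x_test)                 # sparse code of x_test over the atoms
    code = coder.coef_
    residuals = {}
    for cls in np.unique(y_train):
        mask = (y_train == cls)
        recon = D[mask].T @ code[mask]     # reconstruction using this class only
        residuals[cls] = np.linalg.norm(x_test - recon)
    return min(residuals, key=residuals.get)
```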
14. Cybernet Model: A New Deep Learning Model for Cyber DDoS Attacks Detection and Recognition
Authors: Azar Abid Salih, Maiwan Bahjat Abdulrazaq. 《Computers, Materials & Continua》 SCIE EI, 2024, No. 1, pp. 1275-1295 (21 pages).
Cyberspace is extremely dynamic, with new attacks arising daily. Protecting cybersecurity controls is vital for network security. Deep Learning (DL) models find widespread use across various fields, with cybersecurity being one of the most crucial due to their rapid cyberattack detection capabilities on networks and hosts. The capabilities of DL in feature learning and analyzing extensive data volumes lead to the recognition of network traffic patterns. This study presents novel lightweight DL models, known as Cybernet models, for the detection and recognition of various cyber Distributed Denial of Service (DDoS) attacks. These models were constructed to have a reasonable number of learnable parameters, i.e., less than 225,000, hence the name “lightweight.” This not only helps reduce the number of computations required but also results in faster training and inference times. Additionally, these models were designed to extract features in parallel from 1D Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM), which makes them unique compared to earlier existing architectures and results in better performance measures. To validate their robustness and effectiveness, they were tested on the CIC-DDoS2019 dataset, which is an imbalanced and large dataset that contains different types of DDoS attacks. Experimental results revealed that both models yielded promising results, with 99.99% for the detection model and 99.76% for the recognition model in terms of accuracy, precision, recall, and F1 score. Furthermore, they outperformed the existing state-of-the-art models proposed for the same task. Thus, the proposed models can be used in cyber security research domains to successfully identify different types of attacks with a high detection and recognition rate.
Keywords: deep learning, CNN, LSTM, Cybernet model, DDoS recognition
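An illustrative PyTorch sketch of the “parallel 1D-CNN + LSTM” idea follows: the same flow-feature vector is read as a 1-D sequence by a small convolutional branch and by an LSTM branch, and the two feature vectors are concatenated before classification. Layer widths and the 78-feature input size are assumptions, not the published Cybernet configuration.

```python
# Illustrative parallel CNN/LSTM feature extractor with late concatenation.
# The input size (78 flow features) and layer widths are assumptions.
import torch
import torch.nn as nn

class ParallelCnnLstm(nn.Module):
    def __init__(self, in_features=78, num_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.lstm = nn.LSTM(input_size=1, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64 + 64, num_classes)

    def forward(self, x):                      # x: (batch, in_features)
        c = self.cnn(x.unsqueeze(1)).squeeze(-1)        # CNN branch -> (batch, 64)
        _, (h, _) = self.lstm(x.unsqueeze(-1))          # LSTM branch, h: (1, batch, 64)
        return self.head(torch.cat([c, h[-1]], dim=1))  # fuse parallel features

model = ParallelCnnLstm()
print(model(torch.randn(8, 78)).shape)   # -> torch.Size([8, 2])
```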
15. Analysis of RNA Recognition and Binding Characteristics of OsCPPR1 Protein in Rice
Authors: ZHENG Shaoyan, CHEN Junyu, LI Huatian, LIU Zhenlan, LI Jing, ZHUANG Chuxiong. 《Rice science》 SCIE CSCD, 2024, No. 2, pp. 215-225, I0032-I0035 (15 pages).
Pentatricopeptide repeat (PPR) proteins represent one of the largest protein families in plants and typically localize to organelles like mitochondria and chloroplasts. By contrast, CYTOPLASM-LOCALIZED PPR1 (OsCPPR1) is a cytoplasm-localized PPR protein that can degrade OsGOLDENLIKE1 (OsGLK1) mRNA in the tapetum of the rice anther. However, the mechanism by which OsCPPR1 recognizes and binds to OsGLK1 transcripts remains unknown. Through protein structure prediction and macromolecular docking experiments, we observed that distinct PPR motif structures of OsCPPR1 exhibited varying binding efficiencies to OsGLK1 RNA. Moreover, an RNA electrophoretic mobility shift assay demonstrated that recombinant OsCPPR1 can directly recognize and bind to OsGLK1 mRNA in vitro. Furthermore, mutations in the conserved amino acids of each PPR motif resulted in loss of activity, while truncation of OsCPPR1 decreased its binding efficiency. These findings collectively suggest that OsCPPR1 may require co-factors to assist in cleavage, a facet that warrants further exploration in subsequent studies.
Keywords: OsCPPR1, RNA recognition and binding, pentatricopeptide repeat, rice
16. A Support Data-Based Core-Set Selection Method for Signal Recognition
Authors: Yang Ying, Zhu Lidong, Cao Changjie. 《China Communications》 SCIE CSCD, 2024, No. 4, pp. 151-162 (12 pages).
In recent years, deep learning-based signal recognition technology has gained attention and emerged as an important approach for safeguarding the electromagnetic environment. However, training deep learning-based classifiers on large signal datasets with redundant samples requires significant memory and high costs. This paper proposes a support data-based core-set selection method (SD) for signal recognition, aiming to screen a representative subset that approximates the large signal dataset. Specifically, this subset can be identified by employing the labeled information during the early stages of model training, as some training samples are frequently labeled as supporting data. This support data is crucial for model training and can be found using a border sample selector. Simulation results demonstrate that the SD method minimizes the impact on model recognition performance while reducing the dataset size, and outperforms five other state-of-the-art core-set selection methods when the fraction of training samples kept is less than or equal to 0.3 on the RML2016.04C dataset or 0.5 on the RML22 dataset. The SD method is particularly helpful for signal recognition tasks with limited memory and computing resources.
Keywords: core-set selection, deep learning, model training, signal recognition, support data
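A toy sketch of the core-set idea is shown below: during a few early epochs, count how often each sample sits near the decision boundary and keep the most frequently “supporting” samples. The low-margin criterion, threshold, and keep fraction are simplifications of the paper’s border sample selector, chosen only for illustration.

```python
# Toy core-set selection: count near-boundary (low-margin) occurrences of
# each sample during early training, then keep the top fraction.
# The criterion and thresholds are illustrative simplifications.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
clf = SGDClassifier(loss="hinge", random_state=0)
support_counts = np.zeros(len(X))

classes = np.unique(y)
for epoch in range(5):                         # early-stage training only
    clf.partial_fit(X, y, classes=classes)
    margins = np.abs(clf.decision_function(X))
    support_counts += (margins < 1.0)          # near-boundary => "supporting" sample

keep = np.argsort(-support_counts)[: int(0.3 * len(X))]   # keep top 30%
X_core, y_core = X[keep], y[keep]
print(X_core.shape)                            # -> (600, 20)
```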
17. SciCN: A Scientific Dataset for Chinese Named Entity Recognition
Authors: Jing Yang, Bin Ji, Shasha Li, Jun Ma, Jie Yu. 《Computers, Materials & Continua》 SCIE EI, 2024, No. 3, pp. 4303-4315 (13 pages).
Named entity recognition (NER) is a fundamental task of information extraction (IE), and it has attracted considerable research attention in recent years. The abundant annotated English NER datasets have significantly promoted NER research in the English field. By contrast, much fewer efforts have been made in Chinese NER research, especially in the scientific domain, due to the scarcity of Chinese NER datasets. To alleviate this problem, we present a Chinese scientific NER dataset, SciCN, which contains entity annotations of titles and abstracts derived from 3,500 scientific papers. We manually annotate a total of 62,059 entities, and these entities are classified into six types. Compared to English scientific NER datasets, SciCN has a larger scale and is more diverse, for it not only contains more paper abstracts but these abstracts are derived from more research fields. To investigate the properties of SciCN and provide baselines for future research, we adapt a number of previous state-of-the-art Chinese NER models to evaluate SciCN. Experimental results show that SciCN is more challenging than other Chinese NER datasets. In addition, previous studies have proven the effectiveness of using lexicons to enhance Chinese NER models. Motivated by this fact, we provide a scientific domain-specific lexicon. Validation results demonstrate that our lexicon delivers better performance gains than lexicons of other domains. We hope that the SciCN dataset and the lexicon will enable us to benchmark the NER task in the Chinese scientific domain and make progress in future research. The dataset and lexicon are available at: https://github.com/yangjingla/SciCN.git.
Keywords: named entity recognition, dataset, scientific information extraction, lexicon
18. Multi-Objective Equilibrium Optimizer for Feature Selection in High-Dimensional English Speech Emotion Recognition
Authors: Liya Yue, Pei Hu, Shu-Chuan Chu, Jeng-Shyang Pan. 《Computers, Materials & Continua》 SCIE EI, 2024, No. 2, pp. 1957-1975 (19 pages).
Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use the information gain and Fisher Score to sort the features extracted from signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them. Features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which can improve the diversity of solutions and avoid falling into local traps. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
Keywords: speech emotion recognition, filter-wrapper, high-dimensional, feature selection, equilibrium optimizer, multi-objective
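The filter stage (ranking by information gain and Fisher score, then combining the rankings) can be sketched as below; the mean-rank combination rule is an assumption standing in for the paper’s multi-objective ranking.

```python
# Sketch of the filter stage: score features by mutual information
# (information gain) and by a Fisher score, then combine the two rankings.
# The mean-rank combination is an assumed stand-in for the paper's method.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def fisher_score(X, y):
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    num = sum((y == c).sum() * (X[y == c].mean(axis=0) - overall_mean) ** 2
              for c in classes)
    den = sum((y == c).sum() * X[y == c].var(axis=0) for c in classes) + 1e-12
    return num / den

def filter_ranking(X, y):
    mi = mutual_info_classif(X, y, random_state=0)
    fs = fisher_score(X, y)
    # rank each criterion separately (0 = best), then average the ranks
    mi_rank = np.argsort(np.argsort(-mi))
    fs_rank = np.argsort(np.argsort(-fs))
    return np.argsort((mi_rank + fs_rank) / 2.0)   # feature indices, best first
```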
19. Human Gait Recognition for Biometrics Application Based on Deep Learning Fusion Assisted Framework
Authors: Ch Avais Hanif, Muhammad Ali Mughal, Muhammad Attique Khan, Nouf Abdullah Almujally, Taerang Kim, Jae-Hyuk Cha. 《Computers, Materials & Continua》 SCIE EI, 2024, No. 1, pp. 357-374 (18 pages).
The demand for a non-contact biometric approach for candidate identification has grown over the past ten years. As one of the most important biometric applications, human gait analysis is a significant research topic in computer vision. Researchers have paid a lot of attention to gait recognition, specifically the identification of people based on their walking patterns, due to its potential to correctly identify people from far away. Gait recognition systems have been used in a variety of applications, including security, medical examinations, identity management, and access control. These systems require a complex combination of technical, operational, and definitional considerations. The employment of gait recognition techniques and technologies has produced a number of beneficial and well-liked applications. This work proposes a novel deep learning-based framework for human gait classification in video sequences. The framework’s main challenge is improving the accuracy of gait classification under varying conditions, such as carrying a bag and changing clothes. The proposed method’s first step is selecting two pre-trained deep learning models and training them from scratch using deep transfer learning. Next, the deep models are trained using static hyperparameters; however, the learning rate is calculated using the particle swarm optimization (PSO) algorithm. Then, the best features are selected from both trained models using the Harris Hawks controlled Sine-Cosine optimization algorithm. This algorithm chooses the best features, which are combined in a novel correlation-based fusion technique. Finally, the fused best features are categorized using medium, bi-layer, and tri-layered neural networks. The experimental process was carried out on the publicly accessible CASIA-B dataset, and an improved accuracy of 94.14% was achieved. The accuracy of the proposed method improves upon recent state-of-the-art techniques, which shows the significance of this work.
Keywords: gait recognition, covariant factors, biometric, deep learning, fusion, feature selection
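The PSO-based learning-rate step mentioned in the abstract can be illustrated with a minimal particle swarm search over a 1-D log-scaled learning rate; `validation_loss` is a stand-in objective the reader must supply (e.g., train a few epochs and return the loss), and the swarm size and coefficients are assumed values.

```python
# Minimal PSO over a log-scaled learning rate. The objective, swarm size,
# and coefficients are illustrative assumptions.
import numpy as np

def pso_learning_rate(validation_loss, bounds=(1e-5, 1e-1),
                      n_particles=10, n_iters=20, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    lo, hi = np.log10(bounds[0]), np.log10(bounds[1])      # search in log-space
    pos = rng.uniform(lo, hi, n_particles)
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_val = np.array([validation_loss(10 ** p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(n_iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([validation_loss(10 ** p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return 10 ** gbest

# toy objective with a minimum near lr = 1e-3
print(pso_learning_rate(lambda lr: (np.log10(lr) + 3) ** 2))
```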
20. Novel Rifle Number Recognition Based on Improved YOLO in Military Environment
Authors: Hyun Kwon, Sanghyun Lee. 《Computers, Materials & Continua》 SCIE EI, 2024, No. 1, pp. 249-263 (15 pages).
Deep neural networks perform well in image recognition, object recognition, pattern analysis, and speech recognition. In military applications, deep neural networks can detect equipment and recognize objects. For military equipment management, it is necessary to detect and recognize rifles, an important piece of equipment, using deep neural networks. There have been no previous studies on the detection of real rifle numbers using real rifle image datasets. In this study, we propose a method for detecting and recognizing rifle numbers when rifle image data are insufficient. The proposed method was designed to improve the recognition rate on a specific dataset using data fusion and transfer learning methods. In the proposed method, real rifle images and existing digit images are fused as training data, and the final layer is transferred to the YOLOv5 algorithm model. The detection and recognition performance for rifle numbers was improved and analyzed using rifle image and numerical datasets. We used actual rifle image data (K-2 rifle) and numeric image datasets as the experimental environment. TensorFlow was used as the machine learning library. Experimental results show that the proposed method achieves 84.42% accuracy, 73.54% precision, 81.81% recall, and a 77.46% F1-score in detecting and recognizing rifle numbers. The proposed method is effective in detecting rifle numbers.
Keywords: machine learning, deep neural network, rifle number recognition, detection