Journal Articles
69 articles found
Speech Recognition via CTC-CNN Model
1
Authors: Wen-Tsai Sung, Hao-Wei Kang, Sung-Jung Hsiao 《Computers, Materials & Continua》 SCIE EI 2023, No. 9, pp. 3833-3858 (26 pages)
In a speech recognition system, the acoustic model is an important underlying model, and its accuracy directly affects the performance of the entire system. This paper introduces the construction and training process of the acoustic model in detail, studies the Connectionist Temporal Classification (CTC) algorithm, which plays an important role in the end-to-end framework, and establishes a convolutional neural network (CNN) combined with a CTC acoustic model to improve the accuracy of speech recognition. This study uses a sound sensor, the ReSpeaker Mic Array v2.0.1, to convert the collected speech signals into text or corresponding speech signals to improve communication and reduce noise and hardware interference. The baseline acoustic model in this study faces challenges such as long training time, high error rate, and a certain degree of overfitting. The model is trained through continuous design and improvement of the relevant parameters of the acoustic model, and finally an excellent model is selected according to the evaluation indices, which reduces the error rate to about 18% and thus improves the accuracy rate. Finally, comparative verification was carried out on the selection of acoustic feature parameters, the selection of modeling units, and the speaker's speech rate, which further verified the excellent performance of the CTC-CNN_5+BN+Residual model structure. In terms of experiments, to train and verify the CTC-CNN baseline acoustic model, this study uses the THCHS-30 and ST-CMDS speech datasets as training data; after 54 epochs of training, the word error rate on the acoustic model training set is 31%, and the word error rate on the test set is stable at about 43%. The experiment also considers surrounding environmental noise: under a noise level of 80-90 dB, the accuracy rate is 88.18%, the worst performance among all levels, whereas at 40-60 dB the accuracy is as high as 97.33% due to less noise pollution.
Keywords: artificial intelligence; speech recognition; speech to text; convolutional neural network; automatic speech recognition
Download PDF
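The entry above pairs a CNN acoustic model with CTC training. Below is a minimal sketch of that combination in PyTorch; the layer sizes, vocabulary size, and dummy tensors are illustrative assumptions, not the paper's CTC-CNN_5+BN+Residual configuration.

```python
import torch
import torch.nn as nn

class CTCCNN(nn.Module):
    def __init__(self, n_mels=80, vocab_size=30):  # index 0 reserved for the CTC blank
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, 256, kernel_size=3, padding=1),
            nn.BatchNorm1d(256),          # batch norm, echoing the "+BN" variants
            nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=3, padding=1),
            nn.BatchNorm1d(256),
            nn.ReLU(),
        )
        self.out = nn.Linear(256, vocab_size)

    def forward(self, x):                 # x: (batch, n_mels, time)
        h = self.conv(x).transpose(1, 2)  # -> (batch, time, channels)
        return self.out(h).log_softmax(-1)

model = CTCCNN()
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
feats = torch.randn(4, 80, 200)               # 4 dummy utterances, 200 frames each
log_probs = model(feats).transpose(0, 1)      # CTCLoss expects (time, batch, vocab)
targets = torch.randint(1, 30, (4, 20))       # dummy label sequences
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), 200, dtype=torch.long),
           target_lengths=torch.full((4,), 20, dtype=torch.long))
loss.backward()
```

CTC removes the need for frame-level alignments: the loss marginalizes over all blank-augmented paths that collapse to the target sequence, which is why the model only outputs per-frame label distributions.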
Challenges and Limitations in Speech Recognition Technology: A Critical Review of Speech Signal Processing Algorithms, Tools and Systems
2
Authors: Sneha Basak, Himanshi Agrawal, Shreya Jena, Shilpa Gite, Mrinal Bachute, Biswajeet Pradhan, Mazen Assiri 《Computer Modeling in Engineering & Sciences》 SCIE EI 2023, No. 5, pp. 1053-1089 (37 pages)
Speech recognition systems have become a unique human-computer interaction (HCI) family. Speech is one of the most naturally developed human abilities; speech signal processing opens up a transparent and hands-free computation experience. This paper presents a retrospective yet modern approach to the world of speech recognition systems. The development journey of Automatic Speech Recognition (ASR) has seen quite a few milestones and breakthrough technologies, which are highlighted in this paper. A step-by-step rundown of the fundamental stages in developing speech recognition systems is presented, along with a brief discussion of various modern-day developments and applications in this domain. This review aims to summarize and provide a starting point for those entering the vast field of speech signal processing. Since speech recognition has vast potential in industries like telecommunication, emotion recognition, healthcare, etc., this review should be helpful to researchers who aim at exploring more applications that society can quickly adopt in future years of evolution.
Keywords: speech recognition; automatic speech recognition (ASR); mel-frequency cepstral coefficients (MFCC); hidden Markov model (HMM); artificial neural network (ANN)
Download PDF
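The review's keyword list centers on the classic MFCC front end. As a minimal sketch of that pipeline using librosa, with an illustrative file name and parameter values that are assumptions rather than the review's prescriptions:

```python
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)      # hypothetical input file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,   # 13 cepstral coefficients
                            n_fft=400, hop_length=160)  # 25 ms windows, 10 ms hop
delta = librosa.feature.delta(mfcc)                  # first-order dynamics
print(mfcc.shape, delta.shape)                       # (13, n_frames) each
```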
Joint On-Demand Pruning and Online Distillation in Automatic Speech Recognition Language Model Optimization
3
Authors: Soonshin Seo, Ji-Hwan Kim 《Computers, Materials & Continua》 SCIE EI 2023, No. 12, pp. 2833-2856 (24 pages)
Automatic speech recognition (ASR) systems have emerged as indispensable tools across a wide spectrum of applications, ranging from transcription services to voice-activated assistants. To enhance the performance of these systems, it is important to deploy efficient models capable of adapting to diverse deployment conditions. In recent years, on-demand pruning methods have gained significant attention within the ASR domain due to their adaptability to various deployment scenarios. However, these methods often confront substantial trade-offs, particularly unstable accuracy when reducing the model size. To address these challenges, this study introduces two crucial empirical findings. Firstly, it proposes incorporating an online distillation mechanism during on-demand pruning training, which holds the promise of maintaining more consistent accuracy levels. Secondly, it proposes the Mogrifier long short-term memory (LSTM) language model (LM), an advanced iteration of the conventional LSTM LM, as an effective alternative pruning target within the ASR framework. Through rigorous experimentation on the ASR system, employing the Mogrifier LSTM LM and training it with the suggested joint on-demand pruning and online distillation method, this study provides compelling evidence: the proposed methods significantly outperform a benchmark model trained solely with on-demand pruning. Impressively, the proposed configuration reduces the parameter count by approximately 39% while minimizing trade-offs.
Keywords: automatic speech recognition; neural language model; Mogrifier long short-term memory; pruning; distillation; efficient deployment; optimization; joint training
Download PDF
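The online-distillation idea above can be sketched as a joint objective: a full-capacity (teacher) branch and a pruned (student) branch trained together, with a KL term pulling the student's output distribution toward the teacher's. The function name, temperature, and mixing weight below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=2.0, alpha=0.5):
    # Hard-label cross-entropy for the pruned student branch.
    ce = F.cross_entropy(student_logits, targets)
    # Soft-label KL between temperature-scaled distributions; the teacher is
    # detached so gradients flow only through the student branch.
    kl = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits.detach() / T, dim=-1),
                  reduction="batchmean") * (T * T)
    return alpha * ce + (1 - alpha) * kl

student = torch.randn(8, 1000, requires_grad=True)  # pruned sub-network logits
teacher = torch.randn(8, 1000)                      # full-capacity logits
labels = torch.randint(0, 1000, (8,))
distillation_loss(student, teacher, labels).backward()
```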
A Robust Conformer-Based Speech Recognition Model for Mandarin Air Traffic Control
4
Authors: Peiyuan Jiang, Weijun Pan, Jian Zhang, Teng Wang, Junxiang Huang 《Computers, Materials & Continua》 SCIE EI 2023, No. 10, pp. 911-940 (30 pages)
This study aims to address the deviation in downstream tasks caused by inaccurate recognition results when applying Automatic Speech Recognition (ASR) technology in the Air Traffic Control (ATC) field. The paper presents a novel cascaded model architecture, namely Conformer-CTC/Attention-T5 (CCAT), to build a highly accurate and robust ATC speech recognition model. To tackle the challenges posed by noise and fast speech rates in ATC, the Conformer model is employed to extract robust and discriminative speech representations from raw waveforms. On the decoding side, the attention mechanism is integrated to facilitate precise alignment between input features and output characters. The Text-to-Text Transfer Transformer (T5) language model is also introduced to handle particular pronunciations and code-mixing issues, providing more accurate and concise textual output for downstream tasks. To enhance the model's robustness, transfer learning and data augmentation techniques are utilized in the training strategy. The model's performance is optimized by hyperparameter tuning, such as adjusting the number of attention heads and encoder layers and the weights of the loss function. The experimental results demonstrate the significant contributions of data augmentation, hyperparameter tuning, and error correction models to the overall model performance. On the Our ATC Corpus dataset, the proposed model achieves a Character Error Rate (CER) of 3.44%, a 3.64% improvement over the baseline model. Its effectiveness is further validated on two publicly available datasets: on AISHELL-1 the CCAT model achieves a CER of 3.42%, a 1.23% improvement over the baseline, and on LibriSpeech it achieves a Word Error Rate (WER) of 5.27%, a 7.67% improvement. Additionally, the paper proposes an evaluation criterion for assessing the robustness of ATC speech recognition systems; in robustness evaluation experiments based on this criterion, the proposed model demonstrates a 22% performance improvement over the baseline model.
Keywords: air traffic control; automatic speech recognition; Conformer; robustness evaluation; T5; error correction model
Download PDF
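The CER metric this entry reports is the Levenshtein edit distance between hypothesis and reference characters, normalized by reference length. A minimal sketch (the example phrase is hypothetical):

```python
def cer(reference: str, hypothesis: str) -> float:
    r, h = list(reference), list(hypothesis)
    # Dynamic-programming edit distance over substitutions, insertions, deletions.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / max(len(r), 1)

print(f"{cer('climb flight level 290', 'climb fright level 290'):.4f}")
```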
An Optimal Method for Speech Recognition Based on Neural Network
5
Authors: Mohamad Khairi Ishak, Dag Øivind Madsen, Fahad Ahmed Al-Zahrani 《Intelligent Automation & Soft Computing》 SCIE 2023, No. 5, pp. 1951-1961 (11 pages)
Natural language processing technologies have become more widely available in recent years, making them more useful in everyday situations. Machine learning systems that employ accessible datasets and corpus work to serve the whole spectrum of problems addressed in computational linguistics have lately yielded a number of promising breakthroughs. These methods are particularly advantageous for regional languages, which can be provided with cutting-edge language processing tools as soon as the requisite corpus information is generated. The bulk of modern people are unconcerned about the importance of reading; reading aloud, on the other hand, is an effective technique for nourishing feelings as well as a necessary skill in the learning process. This paper proposes a novel approach for speech recognition based on neural networks. The attention mechanism is first utilized to determine speech accuracy and fluency assessments, with the spectrogram as the feature extraction input. To increase phoneme identification accuracy for reading precision, it employs a new type of deep speech model. The experimental setting makes use of the exportchapter tool, which provides a corpus, as well as the TensorFlow framework. The experimental findings reveal that the suggested model can assess spoken speech accuracy and reading fluency more effectively than the old model, and its evaluation scores are more accurate.
Keywords: machine learning; neural networks; speech recognition; signal processing; learning process; fluency and accuracy
Download PDF
Speech Recognition-Based Automated Visual Acuity Testing with Adaptive Mel Filter Bank (Cited by 1)
6
Authors: Shibli Nisar, Muhammad Asghar Khan, Fahad Algarni, Abdul Wakeel, M. Irfan Uddin, Insaf Ullah 《Computers, Materials & Continua》 SCIE EI 2022, No. 2, pp. 2991-3004 (14 pages)
One of the most commonly reported disabilities is vision loss, which can be diagnosed by an ophthalmologist in order to assess the visual system of a patient. This procedure, however, usually requires an appointment with an ophthalmologist, which is both a time-consuming and expensive process; other issues can also arise, including a lack of appropriate equipment and trained practitioners, especially in rural areas. Centered on a cognitively motivated attribute-extraction and speech recognition approach, this paper proposes a novel idea that immediately determines eyesight deficiency. The proposed system uses an adaptive filter bank with weighted mel-frequency cepstral coefficients for feature extraction. The adaptive filter bank implementation is inspired by the principle of spectrum sensing in cognitive radio, which is aware of its environment and adapts to statistical variations in the input stimuli by learning from the environment. Comparative performance evaluation demonstrates the potential of the automated visual acuity test to achieve results comparable to the clinical ground truth established by the expert ophthalmologist's tests; the overall accuracy of the proposed model against the expert ophthalmologist's test is 91.875%. The proposed method potentially offers a second opinion to ophthalmologists and serves as a cost-effective pre-screening test to predict eyesight loss at an early stage.
Keywords: eyesight test; speech recognition; HMM; SVM; feature extraction
Download PDF
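The paper's adaptive filter bank builds on the standard triangular mel filter bank; the adaptive weighting itself is not reproduced here. A minimal sketch of the standard bank, with illustrative parameter values:

```python
import numpy as np

def mel_filter_bank(n_filters=26, n_fft=512, sr=16000):
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)        # Hz -> mel
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)  # mel -> Hz
    # Equally spaced centers on the mel scale, mapped back to FFT bins.
    pts = inv_mel(np.linspace(mel(0), mel(sr / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising slope
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling slope
    return fbank

print(mel_filter_bank().shape)  # (26, 257)
```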
Relative Contributions of Spectral and Temporal Cues for Speech Recognition in Patients with Sensorineural Hearing Loss (Cited by 1)
7
Authors: Rebecca Brashears, Katherine Rife 《Journal of Otology》 2008, No. 2, pp. 84-91 (8 pages)
The present study was designed to examine speech recognition in patients with sensorineural hearing loss when the temporal and spectral information in the speech signals were co-varied. Four subjects with mild to moderate sensorineural hearing loss were recruited to participate in consonant and vowel recognition tests that used speech stimuli processed through a noise-excited vocoder. The number of channels was varied between 2 and 32, which defined spectral information; the lowpass cutoff frequency of the temporal envelope extractor was varied from 1 to 512 Hz, which defined temporal information. Results indicate that performance varied tremendously among the subjects. For consonant recognition, patterns of relative contributions of spectral and temporal information were similar to those in normal-hearing subjects, and the utility of temporal envelope information appeared to be normal in the hearing-impaired listeners. For vowel recognition, which depended predominately on spectral information, the performance plateau was reached with as many as 16-24 channels, much higher than expected given that frequency selectivity in patients with sensorineural hearing loss might be compromised. To understand how hearing-impaired listeners utilize spectral and temporal cues for speech recognition, future studies involving a large sample of patients with sensorineural hearing loss will be necessary to elucidate the relationship between frequency selectivity, central processing capability, and speech recognition performance using vocoded signals.
Keywords: spectral; temporal; speech recognition; hearing loss
Download PDF
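The noise-excited vocoder manipulation above can be sketched directly: split the signal into N frequency channels, extract each channel's temporal envelope with a lowpass filter, and use the envelopes to modulate band-limited noise. Filter orders and the example parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def noise_vocoder(x, sr, n_channels=8, env_cutoff=64.0):
    edges = np.logspace(np.log10(80), np.log10(sr / 2 * 0.9), n_channels + 1)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(3, [lo / (sr / 2), hi / (sr / 2)], btype="band")
        band = filtfilt(b, a, x)                         # analysis band
        be, ae = butter(2, env_cutoff / (sr / 2))        # envelope lowpass (temporal cue)
        env = filtfilt(be, ae, np.abs(band))             # rectify + smooth
        noise = filtfilt(b, a, np.random.randn(len(x)))  # band-limited noise carrier
        out += np.maximum(env, 0) * noise
    return out

sr = 16000
x = np.random.randn(sr)   # stand-in for a speech waveform
y = noise_vocoder(x, sr, n_channels=8, env_cutoff=64.0)
```

Raising `n_channels` adds spectral detail; raising `env_cutoff` adds temporal detail, which is exactly the two-axis co-variation the study describes.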
EMOTIONAL SPEECH RECOGNITION BASED ON SVM WITH GMM SUPERVECTOR (Cited by 1)
8
Authors: Chen Yanxiang, Xie Jian 《Journal of Electronics (China)》 2012, No. 3, pp. 339-344 (6 pages)
Emotion recognition from speech is an important field of research in human-computer interaction. In this letter, the framework of Support Vector Machines (SVM) with a Gaussian Mixture Model (GMM) supervector is introduced for emotional speech recognition. Because of the importance of variance in reflecting the distribution of speech, normalized mean vectors with the potential to exploit the information from the variance are adopted to form the GMM supervector. Comparative experiments from five aspects are conducted to study their corresponding effects on system performance. The experimental results, which indicate that the influence of the number of mixtures is strong while the influence of duration is weak, provide a basis for the training-set selection of the Universal Background Model (UBM).
Keywords: emotional speech recognition; Support Vector Machines (SVM); Gaussian Mixture Model (GMM) supervector; Universal Background Model (UBM)
Download PDF
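A minimal sketch of the GMM-supervector idea, assuming a UBM trained with scikit-learn; MAP adaptation is simplified to stacking variance-normalized mean vectors, per the abstract's description, and all data below is dummy stand-in data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

ubm = GaussianMixture(n_components=16, covariance_type="diag")
background = np.random.randn(5000, 13)   # stand-in for pooled MFCC frames
ubm.fit(background)

def supervector(frames, ubm):
    post = ubm.predict_proba(frames)                         # frame-level posteriors
    counts = post.sum(axis=0)[:, None] + 1e-8
    means = (post.T @ frames) / counts                       # per-component means
    norm = (means - ubm.means_) / np.sqrt(ubm.covariances_)  # variance-normalized
    return norm.ravel()                                      # stack into one vector

utts = [np.random.randn(200, 13) for _ in range(20)]         # 20 dummy utterances
X = np.vstack([supervector(u, ubm) for u in utts])
y = np.random.randint(0, 5, 20)                              # 5 emotion classes
SVC(kernel="linear").fit(X, y)
```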
Robust Speech Recognition System Using Conventional and Hybrid Features of MFCC, LPCC, PLP, RASTA-PLP and Hidden Markov Model Classifier in Noisy Conditions (Cited by 7)
9
Authors: Veton Z. Kepuska, Hussien A. Elharati 《Journal of Computer and Communications》 2015, No. 6, pp. 1-9 (9 pages)
In recent years, the accuracy of speech recognition (SR) has been one of the most active areas of research. Although SR systems work reasonably well in quiet conditions, they still suffer severe performance degradation in noisy conditions or over distorted channels, so it is necessary to search for more robust feature extraction methods that give better performance in adverse conditions. This paper investigates the performance of conventional and new hybrid speech feature extraction algorithms, Mel Frequency Cepstrum Coefficients (MFCC), Linear Prediction Coding Coefficients (LPCC), perceptual linear prediction (PLP), and RASTA-PLP, in noisy conditions using a multivariate Hidden Markov Model (HMM) classifier. The behavior of the proposed system is evaluated using the TIDIGITS human voice dataset, recorded from 208 different adult speakers, in both the training and testing processes. The theoretical basis for the speech processing and classifier procedures is presented, and recognition results are reported in terms of word recognition rate.
Keywords: speech recognition; noisy conditions; feature extraction; mel-frequency cepstral coefficients; linear predictive coding coefficients; perceptual linear prediction; RASTA-PLP; isolated speech; hidden Markov model
Download PDF
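As a minimal sketch of the feature-plus-HMM pattern this entry compares, here librosa extracts MFCC and LPC-derived features and hmmlearn fits one Gaussian HMM per word class; the libraries, state counts, and random stand-in audio are assumptions, not the paper's exact setup.

```python
import numpy as np
import librosa
from hmmlearn.hmm import GaussianHMM

sr = 16000
y = np.random.randn(sr).astype(np.float32)            # stand-in for one spoken digit
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # (frames, 13)
lpc = librosa.lpc(y, order=12)                        # LPC coefficients of the signal

# One HMM per vocabulary word; at test time the word whose model gives the
# highest log-likelihood wins.
model = GaussianHMM(n_components=5, covariance_type="diag", n_iter=20)
model.fit(mfcc)
print(model.score(mfcc))
```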
An Innovative Approach Utilizing Binary-View Transformer for Speech Recognition Task
10
Authors: Muhammad Babar Kamal, Arfat Ahmad Khan, Faizan Ahmed Khan, Malik Muhammad Ali Shahid, Chitapong Wechtaisong, Muhammad Daud Kamal, Muhammad Junaid Ali, Peerapong Uthansakul 《Computers, Materials & Continua》 SCIE EI 2022, No. 9, pp. 5547-5562 (16 pages)
Deep learning advancements have greatly improved the performance of speech recognition systems, and most recent systems are based on the Recurrent Neural Network (RNN). Overall, the RNN works fine with small sequence data but suffers from the gradient vanishing problem on large sequences. Transformer networks have neutralized this issue and have shown state-of-the-art results on sequential and speech-related data. Generally, in speech recognition, the input audio is converted into an image using a Mel-spectrogram to illustrate frequencies and intensities, and the image is classified by the machine learning mechanism to generate a classification transcript. However, the audio frequency in the image has low resolution, causing inaccurate predictions. This paper presents a novel end-to-end binary-view transformer-based architecture for speech recognition to cope with the frequency resolution problem. Firstly, the input audio signal is transformed into a 2D image using a Mel-spectrogram. Secondly, modified universal transformers utilize multi-head attention to derive contextual information and different speech-related features. Moreover, a feedforward neural network is deployed for classification. The proposed system generates robust results on Google's speech command dataset with an accuracy of 95.16% and minimal loss. The binary-view transformer eradicates the eventuality of the overfitting problem by deploying a multi-view mechanism to diversify the input data, and multi-head attention captures multiple contexts from the data's feature map.
Keywords: convolutional neural network; multi-head attention; multi-view; RNN; self-attention; speech recognition; transformer
Download PDF
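A minimal sketch of the pipeline the abstract outlines: a Mel-spectrogram "image" treated as a frame sequence and passed through multi-head self-attention, then pooled into class logits. The binary-view splitting itself is not reproduced; shapes and the 35-class output (matching Google's speech commands v2) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchaudio

wave = torch.randn(1, 16000)                      # 1 s of dummy audio at 16 kHz
melspec = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000, n_mels=64)(wave)           # (1, 64, time)
seq = melspec.squeeze(0).transpose(0, 1).unsqueeze(0)  # (batch, time, 64)

attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
ctx, weights = attn(seq, seq, seq)                # self-attention over frames
logits = nn.Linear(64, 35)(ctx.mean(dim=1))       # 35 speech-command classes
print(logits.shape)                               # torch.Size([1, 35])
```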
Deep Learning-Based Approach for Arabic Visual Speech Recognition
11
Authors: Nadia H. Alsulami, Amani T. Jamal, Lamiaa A. Elrefaei 《Computers, Materials & Continua》 SCIE EI 2022, No. 4, pp. 85-108 (24 pages)
Lip-reading technologies are rapidly progressing following the breakthrough of deep learning, and they play a vital role in many applications, such as human-machine communication and security. In this paper, we propose an effective lip-reading recognition model for Arabic visual speech recognition by implementing deep learning algorithms. The Arabic visual datasets that were collected contain 2400 records of Arabic digits and 960 records of Arabic phrases from 24 native speakers. The primary purpose is to provide a high-performance model by enhancing the preprocessing phase. Firstly, we extract keyframes from our dataset. Secondly, we produce Concatenated Frame Images (CFIs) that represent the utterance sequence in one single image. Finally, VGG-19 is employed for visual feature extraction in our proposed model. We examined different numbers of keyframes, 10, 15, and 20, to compare two approaches in the proposed model: (1) the VGG-19 base model and (2) the VGG-19 base model with batch normalization. The results show that the second approach achieves greater accuracy: 94% for digit recognition, 97% for phrase recognition, and 93% for combined digit and phrase recognition on the test dataset. Therefore, our proposed model is superior to models based on CFI input.
Keywords: convolutional neural network; deep learning; lip reading; transfer learning; visual speech recognition
Download PDF
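A minimal sketch of the preprocessing described above: keyframes tiled into one Concatenated Frame Image (CFI), which a VGG-19 backbone then classifies. Grid size, image size, and class count are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

def make_cfi(frames):                      # frames: (k, 3, H, W), k a square number
    k = frames.shape[0]
    side = int(k ** 0.5)
    rows = [torch.cat(list(frames[i*side:(i+1)*side]), dim=2) for i in range(side)]
    return torch.cat(rows, dim=1)          # tile keyframes into one image

frames = torch.rand(16, 3, 56, 56)         # 16 hypothetical mouth-region keyframes
cfi = make_cfi(frames).unsqueeze(0)        # (1, 3, 224, 224), VGG input size

backbone = vgg19(weights=None)             # transfer learning would load weights here
backbone.classifier[6] = nn.Linear(4096, 10)  # 10 Arabic digit classes
print(backbone(cfi).shape)                 # torch.Size([1, 10])
```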
Hybrid In-Vehicle Background Noise Reduction for Robust Speech Recognition: The Possibilities of Next Generation 5G Data Networks
12
Authors: Radek Martinek, Jan Baros, Rene Jaros, Lukas Danys, Jan Nedoma 《Computers, Materials & Continua》 SCIE EI 2022, No. 6, pp. 4659-4676 (18 pages)
This pilot study focuses on the employment of a hybrid LMS-ICA system for in-vehicle background noise reduction. Modern vehicles increasingly support voice commands, which are one of the pillars of autonomous and SMART vehicles. Robust speaker recognition for context-aware in-vehicle applications is limited to a certain extent by in-vehicle background noise. This article presents a new concept of a hybrid system implemented as a virtual instrument. The highly modular concept of the virtual car, used in combination with real recordings of various driving scenarios, enables effective testing of the investigated methods of in-vehicle background noise reduction. The study also presents a unique concept of an adaptive system using intelligent clusters of distributed next-generation 5G data networks, which allows the exchange of interference information and/or optimal hybrid algorithm settings between individual vehicles. On average, unfiltered voice commands were successfully recognized in 29.34% of all scenarios, while LMS filtering reached up to 71.81% and the LMS-ICA hybrid improved performance further to 73.03%.
Keywords: 5G; noise reduction; hybrid algorithms; speech recognition; 5G data networks; in-vehicle background noise
Download PDF
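A minimal sketch of the LMS stage of such a hybrid system: an adaptive FIR filter estimates the cabin noise from a reference channel and subtracts it from the primary (speech) channel. Filter length, step size, and the synthetic signals are illustrative assumptions.

```python
import numpy as np

def lms(primary, reference, n_taps=32, mu=0.01):
    w = np.zeros(n_taps)                   # adaptive filter weights
    cleaned = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]  # most recent reference samples
        noise_est = w @ x                  # noise estimate at sample n
        e = primary[n] - noise_est         # error = cleaned speech sample
        w += 2 * mu * e * x                # LMS weight update
        cleaned[n] = e
    return cleaned

rng = np.random.default_rng(0)
noise = rng.standard_normal(16000)                           # reference microphone
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # stand-in voice signal
out = lms(speech + 0.5 * noise, noise)                       # noisy cabin channel
```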
Cross-Language Transfer Learning-based Lhasa-Tibetan Speech Recognition
13
Authors: Zhijie Wang, Yue Zhao, Licheng Wu, Xiaojun Bi, Zhuoma Dawa, Qiang Ji 《Computers, Materials & Continua》 SCIE EI 2022, No. 10, pp. 629-639 (11 pages)
As one of the Chinese minority languages, Tibetan was, until recently, not researched for speech recognition as extensively as Chinese and English. This, along with the relatively small Tibetan corpus, has resulted in unsatisfying performance of Tibetan speech recognition based on end-to-end models. This paper aims to achieve accurate Tibetan speech recognition using a small amount of Tibetan training data. We demonstrate effective methods of Tibetan end-to-end speech recognition via cross-language transfer learning from three aspects: modeling unit selection, transfer learning method, and source language selection. Experimental results show that the Chinese-Tibetan multi-language learning method using a multi-language character set as the modeling unit yields the best performance, with a Tibetan Character Error Rate (CER) of 27.3%, which is 26.1% lower than that of the language-specific model. Our method also achieves 2.2% higher accuracy using less data than the method using Tibetan multi-dialect transfer learning under the same model structure and dataset.
Keywords: cross-language transfer learning; low-resource language; modeling unit; Tibetan speech recognition
Download PDF
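A minimal sketch of the cross-language transfer pattern described above: an acoustic encoder pretrained on a high-resource language is reused, a new output layer over a multi-language character set is attached, and the transferred layers are frozen before joint fine-tuning. Module names, sizes, and the checkpoint path are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class SpeechEncoder(nn.Module):
    def __init__(self, n_mels=80, hidden=512):
        super().__init__()
        self.rnn = nn.LSTM(n_mels, hidden, num_layers=4, batch_first=True)

    def forward(self, x):
        out, _ = self.rnn(x)
        return out

encoder = SpeechEncoder()
# encoder.load_state_dict(torch.load("chinese_pretrained.pt"))  # hypothetical checkpoint
head = nn.Linear(512, 6000)   # multi-language character set (Chinese + Tibetan units)

for p in encoder.parameters():  # freeze transferred layers initially,
    p.requires_grad = False     # then unfreeze for joint fine-tuning
```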
Enhanced Marathi Speech Recognition Facilitated by Grasshopper Optimisation-Based Recurrent Neural Network
14
Authors: Ravindra Parshuram Bachate, Ashok Sharma, Amar Singh, Ayman A. Aly, Abdulaziz H. Alghtani, Dac-Nhuong Le 《Computer Systems Science & Engineering》 SCIE EI 2022, No. 11, pp. 439-454 (16 pages)
Communication is a significant part of being human and living in the world. There are diverse kinds of languages and variations of them; thus, a person speaking one language cannot effectively communicate with someone who speaks it in a different accent. Numerous application fields such as education, mobility, smart systems, security, and health care utilize speech and voice recognition models abundantly. However, most studies focus on Arabic, other Asian, or English languages while ignoring other significant languages like Marathi, which motivates broader research on regional languages. It is necessary to understand the speech recognition field, in which the main stages are feature extraction and classification. This paper emphasizes developing a speech recognition model for the Marathi language by optimizing a Recurrent Neural Network (RNN). Here, preprocessing of the input signal is performed by smoothing and median filtering. After preprocessing, feature extraction is carried out using MFCC and spectral features to get precise features from the input Marathi speech corpus. The optimized RNN classifier is used for speech recognition after the feature extraction task, where the number of hidden neurons in the RNN is optimized by the Grasshopper Optimization Algorithm (GOA). Finally, comparison with conventional techniques shows that the proposed model outperforms most competing models on a benchmark dataset.
Keywords: deep learning; grasshopper optimization algorithm; recurrent neural network; speech recognition; word error rate
Download PDF
Tibetan Multi-Dialect Speech Recognition Using Latent Regression Bayesian Network and End-to-End Model
15
Authors: Yue Zhao, Jianjian Yue, Wei Song, Xiaona Xu, Xiali Li, Licheng Wu, Qiang Ji 《Journal on Internet of Things》 2019, No. 1, pp. 17-23 (7 pages)
We propose a method using a latent regression Bayesian network (LRBN) to extract shared speech features as the input of an end-to-end speech recognition model. The structure of the LRBN is compact, and its parameter learning is fast. Compared with a Convolutional Neural Network, it has a simpler, better-understood structure and fewer parameters to learn. Experimental results show the advantage of the hybrid LRBN/Bidirectional Long Short-Term Memory-Connectionist Temporal Classification architecture for Tibetan multi-dialect speech recognition, and demonstrate that the LRBN is helpful in differentiating among multiple language speech sets.
Keywords: multi-dialect speech recognition; Tibetan language; latent regression Bayesian network; end-to-end model
Download PDF
English Speech Recognition and Multidimensional Pronunciation Evaluation
16
Authors: Jinwei Dong, Shaohui Li 《教育研究前沿(中英文版)》 2020, No. 3, pp. 184-188 (5 pages)
As one of the four major skills in English learning, speaking is an important part that is getting more and more attention. This study set out to improve the accuracy of speech recognition by applying speech recognition technology to the evaluation of English pronunciation. On this basis, the study attempts to improve computer speech evaluation methods. The model employs indexes such as intonation, speech rate, and rhythm, using multi-dimensional indicators as evaluation criteria to establish a more comprehensive and objective English speech evaluation model. English learners can use the evaluation system to guide their self-study, and the system can have a profound influence on the practice of teaching spoken English.
Keywords: spoken English; speech recognition; pronunciation evaluation
Download PDF
Multi-Objective Equilibrium Optimizer for Feature Selection in High-Dimensional English Speech Emotion Recognition
17
Authors: Liya Yue, Pei Hu, Shu-Chuan Chu, Jeng-Shyang Pan 《Computers, Materials & Continua》 SCIE EI 2024, No. 2, pp. 1957-1975 (19 pages)
Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use information gain and the Fisher score to sort the features extracted from the signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them; features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which can improve the diversity of solutions and avoid falling into local traps. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
Keywords: speech emotion recognition; filter-wrapper; high-dimensional; feature selection; equilibrium optimizer; multi-objective
Download PDF
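A minimal sketch of the filter stage described above: features scored by information gain (estimated via mutual information) and a Fisher-style criterion, with the two rankings combined before any wrapper search. The rank-average combination rule and the stand-in data are assumptions, not the paper's exact scheme.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def fisher_score(X, y):
    classes = np.unique(y)
    mean = X.mean(axis=0)
    # Between-class scatter over within-class scatter, per feature.
    num = sum(np.mean(y == c) * (X[y == c].mean(axis=0) - mean) ** 2 for c in classes)
    den = sum(np.mean(y == c) * X[y == c].var(axis=0) for c in classes) + 1e-12
    return num / den

X = np.random.randn(300, 50)            # stand-in acoustic feature matrix
y = np.random.randint(0, 4, 300)        # 4 emotion classes
ig = mutual_info_classif(X, y)          # information-gain-style relevance
fs = fisher_score(X, y)
rank = np.argsort(ig).argsort() + np.argsort(fs).argsort()  # higher = better
print(np.argsort(-rank)[:10])           # 10 most promising features
```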
Exploring Sequential Feature Selection in Deep Bi-LSTM Models for Speech Emotion Recognition
18
Authors: Fatma Harby, Mansor Alohali, Adel Thaljaoui, Amira Samy Talaat 《Computers, Materials & Continua》 SCIE EI 2024, No. 2, pp. 2689-2719 (31 pages)
Machine Learning (ML) algorithms play a pivotal role in Speech Emotion Recognition (SER), although they encounter the formidable obstacle of accurately discerning a speaker's emotional state. Examining the emotional states of speakers holds significant importance in a range of real-time applications, including but not limited to virtual reality, human-robot interaction, emergency centers, and human behavior assessment. Accurately identifying emotions in the SER process relies on extracting relevant information from audio inputs. Previous studies on SER have predominantly utilized short-time characteristics such as Mel Frequency Cepstral Coefficients (MFCCs) due to their ability to capture the periodic nature of audio signals effectively. Although these traits may improve the ability to perceive and interpret emotional depictions appropriately, MFCCs have some limitations. This study therefore tackles the issue by systematically picking multiple audio cues, enhancing the classifier model's efficacy in accurately discerning human emotions. The utilized dataset is taken from the EMO-DB database; preprocessing of the input speech is done using a 2D Convolutional Neural Network (CNN), which applies convolutional operations to spectrograms, as they afford a visual representation of how the audio signal's frequency content changes over time. The next step is spectrogram data normalization, which is crucial for Neural Network (NN) training as it aids faster convergence. Then, five auditory features, MFCCs, Chroma, Mel-spectrogram, Contrast, and Tonnetz, are extracted from the spectrogram sequentially. The aim of feature selection is to retain only dominant features by excluding irrelevant ones; in this paper, the Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS) techniques were employed to select among the multiple audio-cue features. Finally, the feature sets composed by the hybrid feature extraction methods are fed into a deep Bidirectional Long Short-Term Memory (Bi-LSTM) network to discern emotions. Since a deep Bi-LSTM can hierarchically learn complex features and increases model capacity through more robust temporal modeling, it is more effective than a shallow Bi-LSTM in capturing the intricate tones of emotional content in speech signals. The effectiveness and resilience of the proposed SER model were evaluated by experiments comparing it to state-of-the-art SER techniques. The results indicated that the model achieved accuracy rates of 90.92%, 93%, and 92% on the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Berlin Database of Emotional Speech (EMO-DB), and the Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets, respectively. These findings signify a prominent enhancement in the ability to identify emotional depictions in speech, showcasing the potential of the proposed model in advancing the SER field.
Keywords: artificial intelligence application; multi-feature sequential selection; speech emotion recognition; deep Bi-LSTM
Download PDF
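The SFS/SBS step above has a direct counterpart in scikit-learn's SequentialFeatureSelector; a minimal sketch over a stand-in feature matrix, with dimensions and the K-nearest-neighbor classifier as illustrative assumptions:

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

X = np.random.randn(200, 40)            # 200 utterances x 40 pooled audio cues
y = np.random.randint(0, 7, 200)        # 7 emotion classes

forward = SequentialFeatureSelector(
    KNeighborsClassifier(n_neighbors=5),
    n_features_to_select=10, direction="forward", cv=3)   # SFS: grow from empty set
backward = SequentialFeatureSelector(
    KNeighborsClassifier(n_neighbors=5),
    n_features_to_select=10, direction="backward", cv=3)  # SBS: prune from full set

print(forward.fit(X, y).get_support().sum())   # 10 retained features
print(backward.fit(X, y).get_support().sum())  # 10 retained features
```

SFS greedily adds the single feature that most improves cross-validated accuracy at each step, while SBS greedily removes the least useful one; the two searches can retain different subsets, which is why the paper evaluates both.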
Audio-Text Multimodal Speech Recognition via Dual-Tower Architecture for Mandarin Air Traffic Control Communications
19
Authors: Shuting Ge, Jin Ren, Yihua Shi, Yujun Zhang, Shunzhi Yang, Jinfeng Yang 《Computers, Materials & Continua》 SCIE EI 2024, No. 3, pp. 3215-3245 (31 pages)
In air traffic control communications (ATCC), misunderstandings between pilots and controllers could result in fatal aviation accidents. Fortunately, advanced automatic speech recognition technology has emerged as a promising means of preventing miscommunications and enhancing aviation safety. However, most existing speech recognition methods merely incorporate external language models on the decoder side, leading to insufficient semantic alignment between speech and text modalities during the encoding phase. Furthermore, it is challenging to model acoustic context dependencies over long distances because speech sequences are longer than text, especially for the extended ATCC data. To address these issues, we propose a speech-text multimodal dual-tower architecture for speech recognition. It employs cross-modal interactions to achieve close semantic alignment during the encoding stage and strengthens its capabilities in modeling auditory long-distance context dependencies. In addition, a two-stage training strategy is elaborately devised to derive semantics-aware acoustic representations effectively. The first stage focuses on pre-training the speech-text multimodal encoding module to enhance inter-modal semantic alignment and aural long-distance context dependencies. The second stage fine-tunes the entire network to bridge the input modality variation gap between the training and inference phases and boost generalization performance. Extensive experiments demonstrate the effectiveness of the proposed speech-text multimodal speech recognition method on the ATCC and AISHELL-1 datasets. It reduces the character error rate to 6.54% and 8.73%, respectively, and exhibits substantial performance gains of 28.76% and 23.82% compared with the best baseline model. The case studies indicate that the obtained semantics-aware acoustic representations aid in accurately recognizing terms with similar pronunciations but distinctive semantics. The research provides a novel modeling paradigm for semantics-aware speech recognition in air traffic control communications, which could contribute to the advancement of intelligent and efficient aviation safety management.
Keywords: speech-text multimodal; automatic speech recognition; semantic alignment; air traffic control communications; dual-tower architecture
Download PDF
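A minimal sketch of a dual-tower speech-text encoder with cross-modal interaction: each modality gets its own transformer tower, and cross-attention lets the speech tower attend to paired text during pretraining. Dimensions, depth, and the dummy inputs are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

d = 256
speech_tower = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True), num_layers=2)
text_tower = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True), num_layers=2)
cross_attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)

speech = torch.randn(2, 300, d)   # 2 utterances, 300 acoustic frames
text = torch.randn(2, 40, d)      # paired transcripts, 40 token embeddings

h_s = speech_tower(speech)
h_t = text_tower(text)
aligned, _ = cross_attn(h_s, h_t, h_t)   # speech queries attend to text keys/values
print(aligned.shape)                     # torch.Size([2, 300, 256])
```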
WTASR: Wavelet Transformer for Automatic Speech Recognition of Indian Languages
20
Authors: Tripti Choudhary, Vishal Goyal, Atul Bansal 《Big Data Mining and Analytics》 EI CSCD 2023, No. 1, pp. 85-91 (7 pages)
Automatic speech recognition systems are developed to translate speech signals into the corresponding text. This translation is used in a variety of applications like voice-enabled commands, assistive devices, bots, etc. There is a significant lack of efficient technology for Indian languages. In this paper, a wavelet transformer for automatic speech recognition (WTASR) of Indian languages is proposed. Speech signals exhibit high- and low-frequency variation over time due to variation in the speaker's speech; wavelets thus enable the network to analyze the signal at multiple scales. The wavelet decomposition of the signal is fed into the network for generating the text. The transformer network comprises an encoder-decoder system for speech translation. The model is trained on an Indian language dataset for translation of speech into the corresponding text. The proposed method is compared with other state-of-the-art methods, and the results show that WTASR has a low word error rate and can be used for effective speech recognition of Indian languages.
Keywords: transformer; wavelet; automatic speech recognition (ASR); Indian language
Document Delivery
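The multiscale front end this entry describes can be sketched with a discrete wavelet decomposition, whose sub-band coefficients would feed the transformer encoder; the wavelet family, depth, and stand-in signal are assumptions.

```python
import numpy as np
import pywt

x = np.random.randn(16000)                        # stand-in for 1 s of speech
coeffs = pywt.wavedec(x, wavelet="db4", level=4)  # [cA4, cD4, cD3, cD2, cD1]
for name, c in zip(["cA4", "cD4", "cD3", "cD2", "cD1"], coeffs):
    print(name, c.shape)                          # coarse-to-fine sub-bands
```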