Journal Articles
Found 2 articles
1. Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
Authors: R. Sujatha, K. Nimala. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 1669-1686 (18 pages)
Abstract: Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic features than tasks such as dependency parsing, which relies more on syntactic elements. Most existing strategies focus on the general semantics of a conversation without considering sentence context, tracking conversational progress, or comparing impacts. Here, an ensemble of pre-trained language models is used to classify sentences from a conversation corpus into four categories: information, question, directive, and commission. These classification label sequences are used to analyze conversation progress and predict the pecking order of the conversation. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-Trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus, with a hyperparameter tuning approach applied for better sentence-classification performance. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed ensemble with fine-tuned parameters outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models, achieving an F1 score of 0.88.
Keywords: Bidirectional Encoder Representations from Transformers; conversation; ensemble model; fine-tuning; generalized autoregressive pretraining for language understanding; generative pre-trained transformer; hyperparameter tuning; natural language processing; robustly optimized BERT pretraining approach; sentence classification; transformer models
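The abstract's core idea, combining the class probabilities of several fine-tuned transformers into one prediction, can be sketched as soft voting. The probability vectors and per-model attributions below are made-up placeholders for illustration; in the paper each vector would come from one of the fine-tuned models:

```python
# Soft-voting ensemble over per-model class probabilities.
# The four labels come from the paper; the numbers are hypothetical.

LABELS = ["information", "question", "directive", "commission"]

def soft_vote(model_probs):
    """Average class probabilities across models and return the top label."""
    n_models = len(model_probs)
    n_classes = len(model_probs[0])
    avg = [sum(p[i] for p in model_probs) / n_models for i in range(n_classes)]
    best = max(range(n_classes), key=lambda i: avg[i])
    return LABELS[best], avg[best]

# Hypothetical outputs of five classifiers for one input sentence.
probs = [
    [0.10, 0.70, 0.10, 0.10],  # e.g. BERT
    [0.20, 0.60, 0.10, 0.10],  # e.g. RoBERTa
    [0.25, 0.50, 0.15, 0.10],  # e.g. GPT
    [0.15, 0.65, 0.10, 0.10],  # e.g. DistilBERT
    [0.10, 0.75, 0.05, 0.10],  # e.g. XLNet
]
label, confidence = soft_vote(probs)
print(label, confidence)  # question 0.64
```

Averaging probabilities (soft voting) rather than taking a majority over hard labels lets a model's confidence influence the ensemble, which is one common way such an ensemble can beat each base model on its own.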
2. Oppositional Harris Hawks Optimization with Deep Learning-Based Image Captioning
Authors: V. R. Kavitha, K. Nimala, A. Beno, K. C. Ramya, Seifedine Kadry, Byeong-Gwon Kang, Yunyoung Nam. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 1, pp. 579-593 (15 pages)
Abstract: Image captioning is an emerging research topic in artificial intelligence (AI). It integrates Computer Vision (CV) and Natural Language Processing (NLP) to generate image descriptions, and it finds use in several application areas, such as recommendations in editing applications and virtual assistants. Developments in NLP and deep learning (DL) models help bridge visual details and textual semantics. In this view, this paper introduces an Oppositional Harris Hawks Optimization with Deep Learning based Image Captioning (OHHO-DLIC) technique. The OHHO-DLIC technique involves several levels of pre-processing. Image features are extracted with the EfficientNet model, and captioning is performed by a bidirectional long short-term memory (BiLSTM) model comprising an encoder and a decoder. Finally, an oppositional Harris Hawks optimization (OHHO) based hyperparameter tuning process effectively adjusts the hyperparameters of the EfficientNet and BiLSTM models. Experimental analysis of the OHHO-DLIC technique on the Flickr8k dataset and a comprehensive comparative analysis highlight its better performance over recent approaches.
Keywords: image captioning; natural language processing; artificial intelligence; machine learning; deep learning
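The "oppositional" part of OHHO refers to opposition-based learning: each candidate solution x in [lb, ub] is paired with its opposite lb + ub - x, so the optimizer samples both sides of the search space in one pass. A minimal sketch of this initialization step follows; the bounds and the toy objective are assumptions for illustration (in the paper, the objective would be the validation performance of the EfficientNet/BiLSTM models):

```python
import random

def opposite(x, lb, ub):
    """Opposition-based learning: mirror x across the midpoint of [lb, ub]."""
    return lb + ub - x

def obl_init(n, lb, ub, objective, rng):
    """Sample n candidates plus their opposites, keep the best n (lowest cost)."""
    pop = [lb + rng.random() * (ub - lb) for _ in range(n)]
    pop += [opposite(x, lb, ub) for x in pop]
    pop.sort(key=objective)
    return pop[:n]

# Toy objective: pretend the best learning rate is near 3e-4 (hypothetical).
objective = lambda lr: abs(lr - 3e-4)
best = obl_init(5, 1e-5, 1e-2, objective, random.Random(0))
print(best[0])
```

Evaluating each candidate together with its opposite gives 2n evaluations of coverage for n retained candidates, which is why opposition-based variants often converge faster than the plain metaheuristic.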