The model first exploits the strong contextual understanding of the bidirectional encoder representations from transformers (BERT) model to extract deep semantic features from data-law texts; a fine-grained feature extraction layer is then introduced that, guided by an attention mechanism, focuses on the parts of the text most relevant to data-law question answering; finally, the model is trained and evaluated on a collected legal question-answering dataset. The results show that, compared with several traditional single models, the proposed model improves on key metrics such as accuracy, precision, recall, and F1 score, indicating that the system can understand and answer complex data-law questions more effectively and can offer higher-quality question-answering services to data-law professionals and the general public.
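The pipeline sketched in this abstract (a BERT encoder, a fine-grained attention-based feature layer, and a classifier head) can be illustrated roughly as follows. This is a minimal sketch, not the authors' code; the checkpoint name, label count, and the way attention pooling is wired are assumptions made for illustration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class LegalQAClassifier(nn.Module):
    """Sketch: BERT features -> attention-weighted pooling -> label scores."""
    def __init__(self, model_name="bert-base-chinese", num_labels=4):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # "Fine-grained feature extraction layer": a learned attention vector
        # that scores how relevant each token is to the legal question.
        self.attn = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        scores = self.attn(h).squeeze(-1)                       # (batch, seq_len)
        scores = scores.masked_fill(attention_mask == 0, -1e9)  # ignore padding tokens
        weights = torch.softmax(scores, dim=-1)
        pooled = torch.bmm(weights.unsqueeze(1), h).squeeze(1)  # weighted sum of token features
        return self.classifier(pooled)

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = LegalQAClassifier()
batch = tokenizer(["数据出境需要哪些合规手续?"], return_tensors="pt", padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])
```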
Recognizing entity information in clothing quality spot-check notices is important for assessing the state of clothing quality across regions and for formulating macro-level policy. To address problems in named entity recognition for quality spot-check notices, such as information loss in long text sequences and incomplete feature learning for small-sample classes, a domain named entity recognition model built around an attention mechanism and based on BERT (bidirectional encoder representations from transformers) and TENER (transformer encoder for NER) is proposed. The BERT-TENER model obtains dynamic character vectors from the pretrained BERT model; these character vectors are fed into the TENER module, where the attention mechanism lets identical characters follow different learning processes and an improved Transformer further captures the distance and direction information between characters, strengthening the model's understanding of texts of different lengths and of small categories; a conditional random field model then assigns an entity label to each character. On a domain dataset, BERT-TENER reaches an F1 of 92.45% for entity recognition in the clothing spot-check domain, effectively improving the named entity recognition rate over traditional methods and also performing well on long texts and imbalanced entity classes.
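The BERT-to-encoder-to-CRF tagging stack described above can be sketched as below. This is not the BERT-TENER implementation: a vanilla TransformerEncoder stands in for TENER's relative-position attention, the tag count is a placeholder, and the CRF layer comes from the third-party pytorch-crf package (an assumption, installed as `pip install pytorch-crf`).

```python
import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF  # assumption: pytorch-crf package used as the CRF layer

class BertEncoderCrfTagger(nn.Module):
    """Sketch: BERT char vectors -> attention encoder (TENER stand-in) -> CRF tags."""
    def __init__(self, model_name="bert-base-chinese", num_tags=9):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.emission = nn.Linear(hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        h = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        h = self.encoder(h, src_key_padding_mask=(attention_mask == 0))
        emissions = self.emission(h)
        mask = attention_mask.bool()
        if tags is not None:                          # training: negative log-likelihood
            return -self.crf(emissions, tags, mask=mask)
        return self.crf.decode(emissions, mask=mask)  # inference: best tag sequence per character
```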
Semi-supervised new intent discovery is a significant research focus in natural language understanding. To address the limitations of current semi-supervised training data and the underutilization of implicit information, a Semi-supervised New Intent Discovery for Elastic Neighborhood Syntactic Elimination and Fusion model (SNID-ENSEF) is proposed. Syntactic elimination contrastive learning leverages verb-dominant syntactic features, systematically replacing specific words to enhance data diversity. The radius of the positive-sample neighborhood is elastically adjusted to eliminate invalid samples and improve training efficiency. A neighborhood sample fusion strategy, based on sample distribution patterns, dynamically adjusts neighborhood size and fuses sample vectors to reduce noise and improve implicit information utilization and discovery accuracy. Experimental results show that SNID-ENSEF achieves average improvements of 0.88%, 1.27%, and 1.30% in Normalized Mutual Information (NMI), Accuracy (ACC), and Adjusted Rand Index (ARI), respectively, outperforming the PTJN, DPN, MTP-CLNN, and DWG models on the Banking77, StackOverflow, and Clinc150 datasets. The code is available at https://github.com/qsdesz/SNID-ENSEF, accessed on 16 January 2025.
Funding: supported by Research Projects of the Nature Science Foundation of Hebei Province (F2021402005).
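The neighborhood-fusion idea above, adjusting a per-sample neighborhood radius and averaging the vectors inside it to suppress noise before clustering, can be illustrated with a small sketch. This is not the SNID-ENSEF code; the quantile-based radius rule and the parameter values are choices made purely for illustration.

```python
import torch

def fuse_with_elastic_neighborhood(embeddings, base_k=10, percentile=0.5):
    """Sketch: keep only neighbors closer than an adaptive radius,
    then replace each sample vector by the mean of itself and those neighbors."""
    dists = torch.cdist(embeddings, embeddings)              # (N, N) pairwise distances
    knn_d, knn_i = dists.topk(base_k + 1, largest=False)     # +1 because each sample includes itself
    # Elastic radius: a per-sample quantile of its k-NN distances.
    radius = knn_d.quantile(percentile, dim=1, keepdim=True)
    fused = torch.empty_like(embeddings)
    for i in range(embeddings.size(0)):
        keep = knn_i[i][knn_d[i] <= radius[i]]               # neighbors inside the elastic radius
        fused[i] = embeddings[keep].mean(dim=0)              # fuse by averaging
    return torch.nn.functional.normalize(fused, dim=-1)

# Example: smooth 1000 intent embeddings of dimension 768 before clustering.
vectors = torch.randn(1000, 768)
smoothed = fuse_with_elastic_neighborhood(vectors)
```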
Offensive messages on social media have recently been used frequently to harass and criticize people. In recent studies, many promising algorithms have been developed to identify offensive texts, but most analyze text in a unidirectional manner, whereas a bidirectional method can maximize performance and capture the semantic and contextual information in sentences. In addition, many separate models exist for identifying offensive texts in either monolingual or multilingual settings, but few models can detect both monolingual and multilingual offensive texts. In this study, a detection system is developed for both monolingual and multilingual offensive texts by combining a deep convolutional neural network with bidirectional encoder representations from transformers (Deep-BERT) to identify offensive posts on social media that are used to harass others. The paper explores several ways of dealing with multilingualism, including collaborative multilingual and translation-based approaches. Deep-BERT is then tested on Bengali and English datasets with different BERT pre-trained word-embedding techniques, and the proposed Deep-BERT outperforms all existing offensive-text classification algorithms, reaching an accuracy of 91.83%. The proposed model is a state-of-the-art model that can classify both monolingual and multilingual offensive texts.
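The general idea of stacking a convolutional classifier on top of BERT token representations can be sketched as follows. This is a rough illustration rather than the authors' exact Deep-BERT architecture; the multilingual checkpoint name, kernel sizes, and channel counts are assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class ConvOverBert(nn.Module):
    """Sketch: BERT token features -> 1D convolutions -> max-pool -> offensive / not offensive."""
    def __init__(self, model_name="bert-base-multilingual-cased", num_labels=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.convs = nn.ModuleList(
            [nn.Conv1d(hidden, 128, kernel_size=k, padding=k // 2) for k in (3, 4, 5)]
        )
        self.classifier = nn.Linear(128 * 3, num_labels)

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        h = h.transpose(1, 2)                                    # (batch, hidden, seq_len) for Conv1d
        feats = [torch.relu(conv(h)).max(dim=-1).values for conv in self.convs]
        return self.classifier(torch.cat(feats, dim=-1))         # (batch, num_labels)
```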
Cyberbullying, a critical concern for digital safety, necessitates effective linguistic analysis tools that can navigate the complexities of language use in online spaces. To tackle this challenge, our study introduces a new approach employing the Bidirectional Encoder Representations from Transformers (BERT) base model (cased), originally pretrained in English. This model is uniquely adapted to recognize the intricate nuances of Arabic online communication, a key aspect often overlooked in conventional cyberbullying detection methods. Our model is an end-to-end solution fine-tuned on a diverse dataset of Arabic social media (SM) tweets collected from the 'X platform'. Experimental results on this dataset demonstrate a notable increase in detection accuracy and sensitivity compared to existing methods: E-BERT achieves an accuracy of 98.45%, a precision of 99.17%, a recall of 99.10%, and an F1 score of 99.14%. The proposed E-BERT not only addresses a critical gap in cyberbullying detection in Arabic online forums but also sets a precedent for applying cross-lingual pretrained models to regional language applications, offering a scalable and effective framework for enhancing online safety across Arabic-speaking communities.
Funding: funded by the Scientific Research Deanship at the University of Ha’il, Saudi Arabia, through Project Number RG-23092.
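The reported metrics (accuracy, precision, recall, F1) can be computed for any fine-tuned BERT sequence classifier with a few lines of Hugging Face transformers and scikit-learn. The sketch below shows only the evaluation step; the checkpoint name, placeholder inputs, and the label encoding (1 = cyberbullying) are assumptions, and fine-tuning itself is omitted.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

texts = ["example tweet 1", "example tweet 2"]            # placeholder inputs
batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1).tolist()  # predicted class per tweet

y_true = [1, 0]                                            # 1 = cyberbullying (encoding assumed)
acc = accuracy_score(y_true, preds)
p, r, f1, _ = precision_recall_fscore_support(y_true, preds, average="binary", zero_division=0)
print(f"acc={acc:.3f} precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
```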
Purpose: Patent classification is one of the areas in Intellectual Property Analytics (IPA) and a growing use case, since the number of patent applications has been increasing worldwide. We propose using machine learning algorithms to classify Portuguese patents and evaluate the performance of transfer learning methodologies on this task.
Design/methodology/approach: We applied three different approaches in this paper. First, we used a dataset made available by INPI to explore traditional machine learning algorithms and ensemble methods; after preprocessing the data with TF-IDF, FastText and Doc2Vec, the models were evaluated by 5-fold cross-validation. In a second approach, we used two neural network architectures, a Convolutional Neural Network (CNN) and a bidirectional Long Short-Term Memory (BiLSTM). Finally, we used pre-trained BERT, DistilBERT, and ULMFiT models in the third approach.
Findings: BERTimbau, a BERT-architecture model pre-trained on a large Portuguese corpus, presented the best results for the task, although its performance was only 4% higher than that of a LinearSVC model using TF-IDF feature engineering.
Research limitations: The dataset was highly imbalanced, as is usual in patent applications, so the classes with the fewest samples were expected to perform worst. That happened in some cases, especially in classes with fewer than 60 training samples.
Practical implications: Patent classification is challenging because of the hierarchical classification system, the context overlap, and the underrepresentation of the classes. However, the final model presented an acceptable performance given the size of the dataset and the task complexity. This model can support the decision and reduce the time needed by proposing a category at the second level of the IPC, which is one of the critical phases of the patent granting process.
Originality/value: To our knowledge, the proposed models have never before been implemented for Portuguese patent classification.
Funding: this work was supported by national funds through FCT (Fundação para a Ciência e a Tecnologia) under the project UIDB/04152/2020, Centro de Investigação em Gestão de Informação (MagIC)/NOVA IMS.
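The classical baseline that came within 4% of BERTimbau, TF-IDF features with a LinearSVC evaluated by 5-fold cross-validation, is easy to reproduce with scikit-learn. The sketch below is only an illustration of that setup; the file name, column names, vectorizer settings, and macro-F1 scoring are assumptions.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical columns: 'abstract' (patent text) and 'ipc_class' (target label).
df = pd.read_csv("patents_pt.csv")

pipeline = make_pipeline(
    TfidfVectorizer(max_features=50_000, ngram_range=(1, 2)),
    LinearSVC(C=1.0),
)
scores = cross_val_score(pipeline, df["abstract"], df["ipc_class"], cv=5, scoring="f1_macro")
print(f"5-fold macro-F1: {scores.mean():.3f} +/- {scores.std():.3f}")
```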
In existing research on aspect-category sentiment analysis, the aspects are mostly given in advance for sentiment extraction, and this pipeline approach is prone to error accumulation; moreover, work that uses graph convolutional networks for aspect-category sentiment analysis does not fully exploit the dependency-type information between words and therefore cannot enhance feature extraction. This paper proposes an end-to-end aspect category sentiment analysis (ETESA) model based on type graph convolutional networks. The model uses the bidirectional encoder representations from transformers (BERT) pretraining model to obtain aspect categories and word vectors containing contextual dynamic semantic information, which addresses polysemy; when a graph convolutional network (GCN) is used for feature extraction, fusing the word vectors with an initialization tensor of dependency types yields importance values for the different dependency types and enhances the text feature representation; and by transforming aspect-category and sentiment pair extraction into multiple single-label classification problems, aspect categories and sentiments can be extracted simultaneously in an end-to-end way, avoiding error accumulation. Experiments on three public datasets show that the ETESA model achieves higher Precision, Recall and F1 values, proving the effectiveness of the model.
Funding: supported by the National Key Research and Development Program of China (No. 2018YFB1702601).
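The core idea of letting dependency-type information modulate graph convolution can be sketched as a single simplified layer, shown below. This is not the full ETESA model: here the type embeddings alone produce the edge importance values, whereas the paper fuses them with word vectors, and all dimensions are placeholders.

```python
import torch
import torch.nn as nn

class TypeAwareGCNLayer(nn.Module):
    """Sketch: one GCN layer whose edges are weighted by learned dependency-type scores."""
    def __init__(self, hidden=768, num_dep_types=45):
        super().__init__()
        self.type_emb = nn.Embedding(num_dep_types, hidden)   # one vector per dependency type
        self.type_score = nn.Linear(hidden, 1)                # importance value for each type
        self.linear = nn.Linear(hidden, hidden)

    def forward(self, h, adj, dep_type_ids):
        # h: (batch, n, hidden) token features from BERT
        # adj: (batch, n, n) 0/1 dependency adjacency matrix
        # dep_type_ids: (batch, n, n) dependency-type id of each edge (0 where no edge)
        type_w = torch.sigmoid(self.type_score(self.type_emb(dep_type_ids))).squeeze(-1)
        weighted_adj = adj * type_w                            # edges scaled by type importance
        deg = weighted_adj.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        agg = torch.bmm(weighted_adj / deg, h)                 # normalized neighborhood average
        return torch.relu(self.linear(agg))
```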
To meet the need to extract key information from tender documents, a reading-comprehension-style information extraction method for tender texts based on BERT (bidirectional encoder representations from transformers) is proposed. The method converts the information extraction task into a reading comprehension task: questions are generated from the content of the tender text, and spans of the tender text are then extracted as answers to those questions. The pretrained BERT model provides a robust language model that captures deeper contextual relationships. Compared with traditional named entity recognition methods, reading-comprehension-based extraction handles both non-nested and nested entities well, and it can fully exploit the prior semantic information contained in the questions to distinguish pieces of information with similar attributes. Experiments were carried out on tender documents downloaded from the China Government Procurement website; the proposed method reaches an overall EM (exact match) of 92.41% and an F1 of 95.03%. The results show that the method is effective for information extraction from tender texts.
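The reading-comprehension formulation, posing a question about the tender and extracting a text span as the answer, maps directly onto a standard extractive QA head. The sketch below uses a generic setup; the checkpoint name is a placeholder (an extractive-QA fine-tuned Chinese checkpoint is assumed), and the question and context strings are invented examples, not data from the paper.

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "bert-base-chinese"   # placeholder; a QA-fine-tuned checkpoint is assumed
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "本项目的采购人是谁?"                      # generated question, e.g. "Who is the purchaser?"
context = "第一章 采购公告。采购人:某市教育局。代理机构:某招标有限公司。"  # placeholder tender fragment

inputs = tokenizer(question, context, return_tensors="pt", truncation=True)
with torch.no_grad():
    out = model(**inputs)
start = out.start_logits.argmax().item()   # most likely answer start position
end = out.end_logits.argmax().item()       # most likely answer end position
answer_ids = inputs["input_ids"][0, start:end + 1]
print(tokenizer.decode(answer_ids, skip_special_tokens=True))
```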
The Aspect-Based Sentiment Analysis (ABSA) task is designed to judge the sentiment polarity of a particular aspect in a review. Recent studies have shown that GCNs can capture syntactic and semantic features from dependency graphs generated by dependency trees and from semantic graphs generated by multi-headed self-attention (MHSA). However, these approaches do not highlight the sentiment information associated with the aspect in the syntactic and semantic graphs. We propose Aspect-Guided Multi-Graph Convolutional Networks (AGGCN) for aspect-based sentiment classification. Specifically, we reconstruct two kinds of graphs, changing the weights of the dependency graph according to distance from the aspect and improving the semantic graph with aspect-guided MHSA. For interactive learning of syntax and semantics, we dynamically fuse the syntactic and semantic graphs into syntactic-semantic graphs to learn emotional features jointly. In addition, multi-dropout is added to reduce overfitting of AGGCN during training. Experimental results on extensive datasets show that AGGCN achieves particularly advanced results and validate the effectiveness of the model.
Funding: supported by the National Natural Science Foundation of China under Grant 61976158 and Grant 61673301.
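One ingredient that is easy to illustrate is reweighting the dependency graph by distance from the aspect term, so that words far from the aspect contribute less to the graph convolution. The inverse-distance decay below is an illustrative choice, not the paper's exact weighting formula, and the example shapes are arbitrary.

```python
import torch

def aspect_distance_weighted_adj(adj, aspect_positions, alpha=0.5):
    """Sketch: scale each edge of a dependency adjacency matrix by how close
    its target word is to the aspect term (closer words get weights near 1)."""
    batch, n, _ = adj.shape
    positions = torch.arange(n, device=adj.device).float()                  # (n,)
    # Distance of every token to its nearest aspect token, per sample.
    dist = (positions.view(1, n, 1) - aspect_positions.view(batch, 1, -1).float()).abs()
    dist = dist.min(dim=-1).values                                          # (batch, n)
    decay = 1.0 / (1.0 + alpha * dist)                                      # simple inverse-distance decay
    return adj * decay.unsqueeze(1)                                         # scale edges by target-token weight

# Example: batch of 2 sentences, 6 tokens each, aspect at token 2 and token 4.
adj = torch.ones(2, 6, 6)
aspects = torch.tensor([[2], [4]])
weighted = aspect_distance_weighted_adj(adj, aspects)
```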
Multiple-choice question answering, an important task in machine reading comprehension, has received wide attention in natural language processing (NLP). As the length of the texts to be processed keeps growing, multiple-choice answering over long texts has become a new challenge, and existing long-text processing methods tend to lose useful information, leading to inaccurate results. To address this, a Long Text Multiple Choice Answer Method Based on Compression and Reasoning (LTMCA) is proposed: a judge model is trained to identify relevant sentences, which are concatenated into a short text and fed to a reasoning model for inference. To improve the accuracy of the judge model, interaction between the passage and the options is added to supplement the passage-to-option attention, so that relevant sentences are identified in a targeted way and the multiple-choice task is completed more accurately. Experiments on CLTMCA, a Chinese long-text multiple-choice dataset constructed for this work, show that the method effectively relieves BERT's length limitations on long-text multiple-choice tasks and achieves clear improvements over other methods on all evaluation metrics.
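The compression step, scoring each sentence for relevance, keeping the highest-scoring ones, and handing the shortened passage to the reasoning model, can be sketched as follows. This is not the LTMCA judge model: the passage-option interaction is represented here simply by encoding the question and option together with each sentence, and the top-k cutoff is an assumption.

```python
import torch

def compress_passage(sentences, question, option, judge_model, judge_tokenizer, top_k=5):
    """Sketch: keep the top_k sentences the judge model rates as relevant
    to the question+option, then join them into a short passage."""
    queries = [question + " " + option] * len(sentences)
    batch = judge_tokenizer(queries, sentences, return_tensors="pt",
                            padding=True, truncation=True)
    with torch.no_grad():
        logits = judge_model(**batch).logits                # (num_sentences, 2)
    relevance = logits.softmax(dim=-1)[:, 1]                # probability of "relevant"
    keep = relevance.topk(min(top_k, len(sentences))).indices.sort().values.tolist()
    return "".join(sentences[i] for i in keep)              # short passage for the reasoning model
```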
In the context of interdisciplinary research, using computer technology to further mine keywords in cultural texts and carry out semantic analysis can deepen the understanding of the texts and provide quantitative support and evidence for humanistic studies. Based on the novel A Dream of Red Mansions, the automatic extraction and classification of its sentiment terms were realized and a detailed analysis of large-scale sentiment terms was carried out. A bidirectional encoder representations from transformers (BERT) pretraining and fine-tuning model was used to construct the sentiment classifier for A Dream of Red Mansions. Sentiment terms are divided into eight sentiment categories, and the relevant people in each sentence are extracted according to specific rules. The work also visually displays the sentiment interactions between the Twelve Girls of Jinling and Jia Baoyu as the episodes develop. The overall F1 score of the BERT-based sentiment classifier reaches 84.89%, and the best single-sentiment score reaches 91.15%. Experimental results show that the classifier can satisfactorily classify the text of A Dream of Red Mansions, and the classification and interaction analysis results can be mutually verified against literature experts' interpretations of the novel.
Funding: supported by the Fundamental Research Funds for the Central Universities (2019XD-A03-3) and the Beijing Key Lab of Network System and Network Culture (NSNC-202 A09).
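The two pieces described above, an eight-way BERT sentence classifier and rule-based extraction of the characters mentioned in each sentence, can be sketched as below. The checkpoint name, the label indexing, and the short character-name list are placeholders; fine-tuning on the labeled sentences is assumed and not shown.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-chinese"   # placeholder; fine-tuning on labeled sentences is assumed
tokenizer = AutoTokenizer.from_pretrained(model_name)
classifier = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=8)

# Rule-based "relevant people" extraction: simple substring match against a name list.
CHARACTERS = ["宝玉", "黛玉", "宝钗", "凤姐"]   # partial list, for illustration only

def analyse(sentence):
    batch = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        label_id = classifier(**batch).logits.argmax(dim=-1).item()  # one of 8 sentiment classes
    people = [name for name in CHARACTERS if name in sentence]
    return label_id, people

print(analyse("宝玉听了,登时大怒。"))
```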