Funding: This work was partially supported by the National Natural Science Foundation of China under Grant Nos. 61876205 and 61877013, the Ministry of Education Humanities and Social Science Project under Grant Nos. 19YJAZH128 and 20YJAZH118, the Science and Technology Plan Project of Guangzhou under Grant No. 201804010433, and the Bidding Project of Laboratory of Language Engineering and Computing under Grant No. LEC2017ZBKT001.
Abstract: Textual Emotion Analysis (TEA) aims to extract and analyze users' emotional states in texts. Various Deep Learning (DL) methods have developed rapidly and have proven successful in many fields, such as audio, image, and natural language processing. This trend has drawn increasing numbers of researchers away from traditional machine learning to DL for their scientific research. In this paper, we provide an overview of TEA based on DL methods. After introducing the background of emotion analysis, including the definition of emotion, emotion classification methods, and the application domains of emotion analysis, we summarize DL techniques and word/sentence representation learning methods. We then categorize existing TEA methods by text structure and linguistic type: text-oriented monolingual methods, text-conversation-oriented monolingual methods, text-oriented cross-linguistic methods, and emoji-oriented cross-linguistic methods. We close by discussing the challenges of emotion analysis and future research trends. We hope that this survey will help readers understand the relationship between TEA and DL methods and further the development of TEA.
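As a minimal illustration of the word/sentence representation learning that the survey summarizes, the sketch below builds a sentence vector by averaging word embeddings. The vocabulary and the randomly initialized embeddings are toy stand-ins; a real TEA system would use pre-trained embeddings (e.g., word2vec or GloVe) or a contextual encoder.

```python
import numpy as np

# Hypothetical toy vocabulary and 4-dimensional word embeddings;
# real systems would load pre-trained vectors instead.
rng = np.random.default_rng(0)
vocab = {"i": 0, "love": 1, "this": 2, "movie": 3}
embeddings = rng.normal(size=(len(vocab), 4))

def sentence_vector(tokens, vocab, embeddings):
    """Represent a sentence as the mean of its word embeddings,
    skipping out-of-vocabulary tokens."""
    ids = [vocab[t] for t in tokens if t in vocab]
    if not ids:
        return np.zeros(embeddings.shape[1])
    return embeddings[ids].mean(axis=0)

vec = sentence_vector(["i", "love", "this", "movie"], vocab, embeddings)
```

Averaging discards word order, which is why the surveyed DL methods move on to CNN-, RNN-, and Transformer-based sentence encoders.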
Funding: Supported by the National Natural Science Foundation of China (No. 61876205), the National Key Research and Development Program of China (No. 2020YFB1005804), and the MOE Project at the Center for Linguistics and Applied Linguistics, Guangdong University of Foreign Studies.
Abstract: Emotion classification in textual conversations aims to classify the emotion of each utterance in a conversation, and it has become one of the most important tasks in natural language processing in recent years. It is challenging for machines because emotions rely heavily on textual context. To address this challenge, we propose DBL, a method that classifies emotions in textual conversations by integrating the advantages of deep learning and broad learning. Built on a Convolutional Neural Network (CNN), Bidirectional Long Short-Term Memory (Bi-LSTM), and broad learning, it captures both local contextual information (utterance-level) within an utterance and global contextual information (speaker-level) across a conversation. Extensive experiments on three public textual conversation datasets show that context at both the utterance level and the speaker level is consistently beneficial to emotion classification performance. In addition, the results show that the proposed method outperforms the baseline methods in weighted-average F1 on most of the test datasets.
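A minimal sketch of the two levels of context described above (not the paper's exact architecture): a CNN-style max-over-time convolution extracts local, utterance-level features, and a running mean over the preceding utterances crudely stands in for the Bi-LSTM's global, speaker-level context. All shapes, data, and the context-aggregation rule are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d_max_pool(utterance, filters):
    """Utterance-level features: slide each filter over the word-embedding
    matrix and keep the maximum response (max-over-time pooling), as in a
    CNN text encoder."""
    n_words, _ = utterance.shape
    n_filters, width, _ = filters.shape
    feats = np.full(n_filters, -np.inf)
    for f in range(n_filters):
        for start in range(n_words - width + 1):
            window = utterance[start:start + width]
            feats[f] = max(feats[f], np.sum(window * filters[f]))
    return feats

# Hypothetical conversation: 3 utterances, each a (words x dim) embedding matrix.
dim, width, n_filters = 8, 3, 5
conversation = [rng.normal(size=(n, dim)) for n in (6, 4, 7)]
filters = rng.normal(size=(n_filters, width, dim))

# Local (utterance-level) representation of each utterance ...
local = np.stack([conv1d_max_pool(u, filters) for u in conversation])
# ... concatenated with a crude global context: the running mean of the
# conversation so far (the paper uses a Bi-LSTM over utterances instead).
context = np.cumsum(local, axis=0) / np.arange(1, len(local) + 1)[:, None]
global_repr = np.concatenate([local, context], axis=1)
```

The combined representation would then be fed to a broad-learning classifier per utterance.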
Funding: This work was partially supported by the National Natural Science Foundation of China (No. 61876205), the Natural Science Foundation of Guangdong (No. 2021A1515012652), and the Science and Technology Program of Guangzhou (No. 2019050001).
Abstract: Cross-domain emotion classification aims to leverage useful information from a source domain to help predict emotion polarity in a target domain in an unsupervised or semi-supervised manner. Due to domain discrepancy, an emotion classifier trained on the source domain may not work well on the target domain. Most prior work has focused on traditional cross-domain sentiment classification, which is coarse-grained; fine-grained cross-domain emotion classification has rarely been studied. In this paper, we propose a Convolutional Neural Network (CNN)-based broad learning method for cross-domain emotion classification that combines the strengths of CNN and broad learning. We first use a CNN to extract domain-invariant and domain-specific features simultaneously, and then train two more effective classifiers on them with broad learning. To take advantage of both classifiers, we design a co-training model that boosts them jointly. Finally, we conduct comparative experiments on four datasets to verify the effectiveness of the proposed method. The experimental results show that it improves emotion classification performance more effectively than the baseline methods.
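The co-training loop at the heart of the method can be sketched as follows. Two classifiers, each trained on a different feature view, repeatedly pseudo-label an unlabeled pool and hand their most confident predictions to each other. The nearest-centroid classifier, the synthetic two-view data, and all hyperparameters here are toy stand-ins for the paper's CNN features and broad-learning classifiers.

```python
import numpy as np

rng = np.random.default_rng(2)

class CentroidClf:
    """Nearest-centroid classifier: a tiny stand-in for the paper's two
    broad-learning classifiers (one per feature type)."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.cent_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def scores(self, X):
        # Higher score = closer to that class centroid.
        return -np.linalg.norm(X[:, None, :] - self.cent_[None, :, :], axis=2)
    def predict(self, X):
        return self.classes_[self.scores(X).argmax(axis=1)]

# Two synthetic feature "views" of one binary task, standing in for the
# domain-invariant and domain-specific features; only 10 points are labeled.
n, n_lab = 60, 10
y_true = np.array([0, 1] * (n // 2))
Xa = y_true[:, None] * 2.0 + rng.normal(scale=0.4, size=(n, 3))
Xb = y_true[:, None] * -2.0 + rng.normal(scale=0.4, size=(n, 3))

idx_a, ya = list(range(n_lab)), list(y_true[:n_lab])
idx_b, yb = list(range(n_lab)), list(y_true[:n_lab])
pool = list(range(n_lab, n))
ca, cb = CentroidClf(), CentroidClf()

for _ in range(5):  # co-training rounds
    ca.fit(Xa[idx_a], np.array(ya))
    cb.fit(Xb[idx_b], np.array(yb))
    # Each classifier pseudo-labels the pool and hands its 5 most confident
    # predictions to the *other* classifier's training set.
    for clf, X, oidx, oy in ((ca, Xa, idx_b, yb), (cb, Xb, idx_a, ya)):
        if not pool:
            break
        s = clf.scores(X[pool])
        labels = clf.classes_[s.argmax(axis=1)]
        top = np.argsort(s.max(axis=1))[-5:]
        for j in sorted(top, reverse=True):
            oidx.append(pool[j])
            oy.append(int(labels[j]))
            pool.pop(j)

accuracy = (ca.predict(Xa) == y_true).mean()
```

Because each view is informative on its own, the exchanged pseudo-labels let both classifiers improve beyond the initial 10 labeled points.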
Funding: Supported by the National Natural Science Foundation of China (Nos. 61402094, 61572060, and 61702089), the Natural Science Foundation of Hebei Province (Nos. F2016501076 and F2016501079), the Natural Science Foundation of Liaoning Province (No. 201602254), the Fundamental Research Funds for the Central Universities (No. N172304022), the Science and Technology Plan Project of Guangzhou (No. 201804010433), and the Bidding Project of Laboratory of Language Engineering and Computing (No. LEC2017ZBKT001).
Abstract: As one of the key operations in Wireless Sensor Networks (WSNs), energy-efficient data collection has been actively explored in the literature. However, the transform basis for sparsifying the sensed data is usually chosen empirically, and the transformed results are not always the sparsest. In this paper, we propose a Data Collection scheme based on a Denoising Autoencoder (DCDA) to solve this problem. In the training phase, a Denoising AutoEncoder (DAE) is trained on historical sensed data to compute the data measurement matrix and the data reconstruction matrix. In the collection phase, the sensed data of the whole network are gathered along a data collection tree: the measurement matrix compresses the sensed data at each sensor node, and the reconstruction matrix reconstructs the original data at the sink. Finally, the data communication and data reconstruction performance of the proposed scheme are evaluated on real-world sensed data and compared with those of existing schemes. The experimental results show that, compared with its counterparts, the proposed scheme achieves a higher data compression rate, lower energy consumption, more accurate data reconstruction, and faster reconstruction speed.
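The compress-at-nodes / reconstruct-at-sink pipeline can be sketched with a purely linear stand-in: a random measurement matrix plays the encoder, and a reconstruction matrix is fit by least squares on historical data. The paper learns both matrices jointly with a denoising autoencoder instead; the low-rank synthetic "sensor" data and all dimensions below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical historical data: 200 snapshots of 20 sensor nodes whose
# readings vary along only 3 latent directions, i.e., compressible data.
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 20))
X_hist = latent @ mixing + 0.01 * rng.normal(size=(200, 20))

# Linear stand-in for the trained DAE: a random measurement matrix Phi
# (applied at the sensor nodes) and a reconstruction matrix W fit by
# least squares on the historical data (applied at the sink).
Phi = rng.normal(size=(20, 5))            # 20 readings -> 5 measurements
codes = X_hist @ Phi                      # what the sink would receive
W, *_ = np.linalg.lstsq(codes, X_hist, rcond=None)

# Collection phase: compress a fresh snapshot, reconstruct it at the sink.
x_new = rng.normal(size=3) @ mixing
x_rec = (x_new @ Phi) @ W
rel_err = np.linalg.norm(x_rec - x_new) / np.linalg.norm(x_new)
```

Because the data are (approximately) low-rank, 5 measurements per snapshot suffice for near-exact reconstruction; the DAE version additionally handles nonlinear structure and measurement noise.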
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 61702089, 61876205, and 61501102), the Science and Technology Plan Project of Guangzhou (No. 201804010433), and the Bidding Project of Laboratory of Language Engineering and Computing (No. LEC2017ZBKT001).
Abstract: By combining multiple weak learners under concept drift, ensemble learning can achieve better generalization for big data stream classification than a single learner. In this paper, we present an efficient classifier for big data stream learning, EoBag, based on the online bagging ensemble method. The classifier introduces an efficient online resampling mechanism for training instances and uses a robust coding method based on error-correcting output codes; this reduces correlations between the base classifiers and increases the diversity of the ensemble. A dynamic updating model based on classification performance is adopted to avoid unnecessary updates and improve learning efficiency. We also implement a parallel version of EoBag, which runs faster than the serial version while achieving almost the same classification performance. Finally, we compare classification performance and resource usage with other state-of-the-art algorithms on artificial and real data sets. The results show that the proposed algorithm achieves better accuracy and more reasonable resource usage for big data stream classification.
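The online resampling mechanism underlying online bagging can be sketched with the classic Poisson(1) trick: instead of drawing bootstrap samples from a stored dataset, each base learner trains on every incoming instance k times, where k ~ Poisson(1). The trivial majority-vote base learner and the toy label stream below are assumptions for illustration, not EoBag's actual base classifiers.

```python
import numpy as np

rng = np.random.default_rng(4)

class OnlineMajority:
    """Trivial online base learner: predicts the label it has seen most
    often, weighted by how many times each instance was replayed."""
    def __init__(self):
        self.counts = {}
    def update(self, y, k):
        self.counts[y] = self.counts.get(y, 0) + k
    def predict(self):
        return max(self.counts, key=self.counts.get) if self.counts else None

def online_bagging_step(ensemble, y):
    """One step of online bagging: each base learner trains on the incoming
    instance k times with k ~ Poisson(1), approximating bootstrap
    resampling without storing the stream."""
    for learner in ensemble:
        k = rng.poisson(1.0)
        if k > 0:
            learner.update(y, k)

ensemble = [OnlineMajority() for _ in range(25)]
stream = [1] * 70 + [0] * 30          # a skewed binary label stream
for y in stream:
    online_bagging_step(ensemble, y)

votes = [learner.predict() for learner in ensemble]
prediction = max(set(votes), key=votes.count)   # ensemble majority vote
```

The randomness of k gives each learner a slightly different effective training set, which is the source of the ensemble's diversity.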
Funding: This work was partially supported by the National Natural Science Foundation of China (No. 61876205), the Ministry of Education Humanities and Social Science Project (No. 19YJAZH128), the Science and Technology Plan Project of Guangzhou (No. 201804010433), and the Bidding Project of Laboratory of Language Engineering and Computing (No. LEC2017ZBKT001).
Abstract: Negative emotion classification refers to the automatic classification of negative emotions in social network texts. Most existing methods are based on deep learning models and face challenges such as complex structures and too many hyperparameters. To meet these challenges, we propose a negative emotion classification method that combines a Robustly Optimized BERT Pretraining Approach (RoBERTa) with p-norm Broad Learning (p-BL). This paper makes three main contributions. First, we fine-tune RoBERTa for the negative emotion classification task and use the fine-tuned model to extract features from the original texts and generate sentence vectors. Second, we adopt p-BL to construct a classifier and use it to predict the negative emotions of texts. Compared with deep learning models, p-BL has a simple three-layer structure and fewer parameters to train; moreover, it can suppress the adverse effects of outliers and noise in the data by flexibly changing the value of p. Third, we conduct extensive experiments on public datasets, and the results show that the proposed method outperforms the baseline methods.
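The broad-learning half of the pipeline can be sketched as follows: random feature nodes, random enhancement nodes stacked on them, and output weights solved in closed form by ridge regression, with no backpropagation. This is a plain 2-norm sketch; the paper's p-BL replaces the 2-norm objective with a tunable p-norm. The synthetic "sentence vectors", node counts, and weight scaling are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def broad_learning_fit(X, Y, n_feat=20, n_enh=40, reg=1e-2):
    """Minimal broad learning system: the three layers are the input,
    the random feature/enhancement nodes, and the output layer, whose
    weights are the only trained parameters (ridge regression)."""
    Wf = 0.3 * rng.normal(size=(X.shape[1], n_feat))
    Z = np.tanh(X @ Wf)                    # feature nodes
    We = 0.3 * rng.normal(size=(n_feat, n_enh))
    H = np.tanh(Z @ We)                    # enhancement nodes
    A = np.hstack([Z, H])
    # Closed-form ridge solution: (A^T A + reg*I)^{-1} A^T Y
    W_out = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, W_out

def broad_learning_predict(model, X):
    Wf, We, W_out = model
    Z = np.tanh(X @ Wf)
    H = np.tanh(Z @ We)
    return np.hstack([Z, H]) @ W_out

# Toy two-class "sentence vectors": the class is the sign of a random
# linear direction, labels one-hot encoded for the regression target.
X = rng.normal(size=(300, 10))
y = (X @ rng.normal(size=10) > 0).astype(int)
Y = np.eye(2)[y]
model = broad_learning_fit(X[:200], Y[:200])
pred = broad_learning_predict(model, X[200:]).argmax(axis=1)
acc = (pred == y[200:]).mean()
```

The closed-form solve is what keeps the parameter count and training cost low compared with a deep network; swapping RoBERTa sentence vectors in for X is the paper's actual setup.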