Journal Literature
16 articles found
1. Chinese word segmentation with local and global context representation learning (Cited by 2)
Authors: 李岩, Zhang Yinghua, Huang Xiaoping, Yin Xucheng, Hao Hongwei. 《High Technology Letters》 EI CAS, 2015, No. 1, pp. 71-77 (7 pages)
A local and global context representation learning model for Chinese characters is designed and a Chinese word segmentation method based on character representations is proposed in this paper. First, the proposed Chinese character learning model uses the semantics of local context and global context to learn the representation of Chinese characters. Then, a Chinese word segmentation model is built by a neural network, and the segmentation model is trained with the character representations as its input features. Finally, experimental results show that the Chinese character representations can effectively capture semantic information: characters with similar semantics cluster together in the visualization space. Moreover, the proposed Chinese word segmentation model also achieves a clear improvement in precision, recall and F-measure.
Keywords: local and global context representation learning; Chinese character representation; Chinese word segmentation
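The entry above learns character representations from local and global context with a neural model. As a rough, much simpler stand-in, the sketch below builds co-occurrence-count character vectors from a window of local context only; it is not the paper's model, and the corpus, window size, and normalization are illustrative assumptions.

```python
import numpy as np

def char_context_vectors(corpus, window=2):
    """Build simple co-occurrence-count vectors for characters from local context.

    corpus: iterable of sentences (strings). Returns (char list, matrix) where
    row i is the L2-normalized context-count vector of character chars[i].
    """
    chars = sorted({ch for sent in corpus for ch in sent})
    index = {ch: i for i, ch in enumerate(chars)}
    vecs = np.zeros((len(chars), len(chars)))
    for sent in corpus:
        for i, ch in enumerate(sent):
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vecs[index[ch], index[sent[j]]] += 1.0
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    return chars, vecs / np.maximum(norms, 1e-9)

# Characters that occur in similar contexts end up with high cosine similarity.
chars, vecs = char_context_vectors(["我爱北京", "我爱上海", "北京很大", "上海很大"])
```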
2. An Improved Unsupervised Approach to Word Segmentation
Authors: WANG Hanshi, HAN Xuhong, LIU Lizhen, SONG Wei, YUAN Mudan. 《China Communications》 SCIE CSCD, 2015, No. 7, pp. 82-95 (14 pages)
ESA is an unsupervised approach to word segmentation previously proposed by Wang, which is an iterative process consisting of three phases: Evaluation, Selection and Adjustment. In this article, we propose ExESA, an extension of ESA. In ExESA, the original approach is extended to a 2-pass process, and the ratio of different word lengths is introduced as a third type of information combined with cohesion and separation. A maximum strategy is adopted to determine the best segmentation of a character sequence in the Selection phase. Besides, in Adjustment, ExESA re-evaluates separation information and individual information to overcome overestimated frequencies. Additionally, a smoothing algorithm is applied to alleviate sparseness. The experimental results show that ExESA can further improve the performance and is time-saving by properly utilizing more information from un-annotated corpora. Moreover, the parameters of ExESA can be predicted by a set of empirical formulae or combined with the minimum description length principle.
Keywords: word segmentation; character sequence; smoothing algorithm; maximum strategy
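ESA-style unsupervised segmentation relies on cohesion and separation statistics gathered from un-annotated text. The sketch below computes two commonly used stand-ins for these quantities, a PMI-style cohesion score and right-boundary entropy as separation; the exact definitions in ESA/ExESA differ, so treat this as an illustrative approximation with a toy corpus.

```python
import math
from collections import Counter, defaultdict

def collect_stats(corpus, max_len=4):
    """Count n-gram frequencies and the characters seen to the right of each n-gram."""
    ngram, right = Counter(), defaultdict(Counter)
    for sent in corpus:
        for i in range(len(sent)):
            for n in range(1, max_len + 1):
                if i + n > len(sent):
                    break
                g = sent[i:i + n]
                ngram[g] += 1
                if i + n < len(sent):
                    right[g][sent[i + n]] += 1
    return ngram, right

def cohesion(word, ngram, total_chars):
    """PMI-style cohesion: how much more often the string occurs than its worst split predicts."""
    if len(word) < 2 or ngram[word] == 0:
        return 0.0
    p = ngram[word] / total_chars
    worst = min((ngram[word[:k]] / total_chars) * (ngram[word[k:]] / total_chars)
                for k in range(1, len(word)))
    return math.log(p / worst) if worst > 0 else 0.0

def separation(word, right):
    """Right-boundary entropy: diverse right neighbours suggest a real word boundary."""
    counts = right[word]
    n = sum(counts.values())
    return -sum(c / n * math.log(c / n) for c in counts.values()) if n else 0.0

corpus = ["研究生命起源", "研究生活问题", "生命科学研究"]
ngram, right = collect_stats(corpus)
total_chars = sum(len(s) for s in corpus)
score = cohesion("研究", ngram, total_chars) + separation("研究", right)
```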
3. Applying rough sets in word segmentation disambiguation based on maximum entropy model
Authors: 姜维, 王晓龙, 关毅, 梁国华. 《Journal of Harbin Institute of Technology (New Series)》 EI CAS, 2006, No. 1, pp. 94-98 (5 pages)
To solve the complicated feature extraction and long-distance dependency problems in Word Segmentation Disambiguation (WSD), this paper proposes to apply rough sets in WSD based on the Maximum Entropy model. Firstly, rough set theory is applied to extract complicated features and long-distance features, even from noisy or inconsistent corpora. Secondly, these features are added into the Maximum Entropy model, and consequently, the feature weights can be assigned according to the performance of the whole disambiguation model. Finally, a semantic lexicon is adopted to build class-based rough set features to overcome data sparseness. The experiments indicate that our method performs better than previous models, which got top rank in WSD in the 863 Evaluation in 2003. This system ranked first and second respectively in the MSR and PKU open tests in the Second International Chinese Word Segmentation Bakeoff held in 2005.
Keywords: word segmentation; feature extraction; rough sets; maximum entropy
4. Chinese to Braille Translation Based on Braille Word Segmentation Using Statistical Model (Cited by 2)
Authors: 王向东, 杨阳, 张金超, 姜文斌, 刘宏, 钱跃良. 《Journal of Shanghai Jiaotong University (Science)》 EI, 2017, No. 1, pp. 82-86 (5 pages)
Automatic translation of Chinese text to Chinese Braille is important for blind people in China to acquire information using computers or smart phones. In this paper, a novel scheme of Chinese-Braille translation is proposed. Under the scheme, a Braille word segmentation model based on statistical machine learning is trained on a Braille corpus, and Braille word segmentation is carried out using the statistical model directly, without a Chinese word segmentation stage. This method avoids establishing rules concerning syntactic and semantic information and uses a statistical model to learn the rules implicitly and automatically. To further improve the performance, an algorithm for fusing the results of Chinese word segmentation and Braille word segmentation is also proposed. Our results show that the proposed method achieves an accuracy of 92.81% for Braille word segmentation and considerably outperforms current approaches using the segmentation-merging scheme.
Keywords: Chinese Braille; word segmentation; perceptron algorithm. CLC number: TP391.1; Document code: A
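The Braille segmentation model above is trained with a perceptron on a Braille corpus. The sketch below is a deliberately minimal greedy perceptron tagger over BMES character tags; it illustrates the general technique only, not the paper's feature templates or decoder, and the features and toy training data are assumptions for illustration.

```python
from collections import defaultdict

def features(chars, i, tag):
    """Character-window features for position i paired with the proposed tag."""
    prev = chars[i - 1] if i > 0 else "<s>"
    nxt = chars[i + 1] if i + 1 < len(chars) else "</s>"
    return [f"cur={chars[i]}|{tag}", f"prev={prev}|{tag}", f"next={nxt}|{tag}"]

def predict(chars, weights, tags=("B", "M", "E", "S")):
    """Greedy per-character prediction (a real system would decode jointly)."""
    return [max(tags, key=lambda t: sum(weights.get(f, 0.0) for f in features(chars, i, t)))
            for i in range(len(chars))]

def perceptron_train(data, epochs=5):
    """data: list of (character list, gold BMES tag list) pairs."""
    weights = defaultdict(float)
    for _ in range(epochs):
        for chars, gold in data:
            pred = predict(chars, weights)
            for i, (g, p) in enumerate(zip(gold, pred)):
                if g != p:
                    for f in features(chars, i, g):
                        weights[f] += 1.0
                    for f in features(chars, i, p):
                        weights[f] -= 1.0
    return weights

# Toy training pair: "中国人" segmented as 中国 / 人 -> tags B E S.
data = [(["中", "国", "人"], ["B", "E", "S"])]
weights = perceptron_train(data)
print(predict(["中", "国", "人"], weights))  # ['B', 'E', 'S']
```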
5. Chinese Word Segmentation via BiLSTM+Semi-CRF with Relay Node (Cited by 2)
Authors: Nuo Qun, Hang Yan, Xi-Peng Qiu, Xuan-Jing Huang. 《Journal of Computer Science & Technology》 SCIE EI CSCD, 2020, No. 5, pp. 1115-1126 (12 pages)
Semi-Markov conditional random fields (Semi-CRFs) have been successfully utilized in many segmentation problems, including Chinese word segmentation (CWS). The advantage of Semi-CRF lies in its inherent ability to exploit properties of segments instead of individual elements of sequences. Despite its theoretical advantage, Semi-CRF is still not the best choice for CWS because its computation complexity is quadratic in the sentence length. In this paper, we propose a simple yet effective framework to help Semi-CRF achieve comparable performance with CRF-based models under similar computation complexity. Specifically, we first adopt a bi-directional long short-term memory (BiLSTM) on the character level to model the context information, and then use a simple but effective fusion layer to represent the segment information. Besides, to model arbitrarily long segments within linear time complexity, we also propose a new model named Semi-CRF-Relay. The direct modeling of segments makes the combination with word features easy, and the CWS performance can be enhanced merely by adding publicly available pre-trained word embeddings. Experiments on four popular CWS datasets show the effectiveness of our proposed methods. The source code and pre-trained embeddings of this paper are available at https://github.com/fastnlp/fastNLP/.
Keywords: Semi-Markov conditional random field (Semi-CRF); Chinese word segmentation; bi-directional long short-term memory; deep learning
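The paper models context with a character-level BiLSTM before the (Semi-)CRF layer. A bare-bones PyTorch version of just that encoder plus a tagging head is sketched below; the relay node, fusion layer, and Semi-CRF scoring are omitted, and all sizes are arbitrary placeholders rather than the paper's settings.

```python
import torch
import torch.nn as nn

class BiLSTMCharTagger(nn.Module):
    """Character embeddings -> BiLSTM context encoder -> per-character BMES scores."""

    def __init__(self, vocab_size, num_tags=4, emb_dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_tags)

    def forward(self, char_ids):            # (batch, seq_len) integer ids
        h, _ = self.lstm(self.emb(char_ids))
        return self.out(h)                  # (batch, seq_len, num_tags)

# Toy usage: greedy per-character tags; a CRF/Semi-CRF layer would replace the argmax.
model = BiLSTMCharTagger(vocab_size=5000)
scores = model(torch.randint(0, 5000, (1, 6)))
tags = scores.argmax(dim=-1)
```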
6. Word Segmentation Based on Database Semantics in NChiql (Cited by 2)
Authors: 孟小峰, 刘爽, 王珊. 《Journal of Computer Science & Technology》 SCIE EI CSCD, 2000, No. 4, pp. 346-354 (9 pages)
In this paper a novel word-segmentation algorithm is presented to delimit words in Chinese natural language queries in the NChiql system, a Chinese natural language query interface to databases. Although there is a sizable literature on Chinese segmentation, it cannot satisfy the particular requirements of this system. The novel word-segmentation algorithm is based on database semantics, namely the Semantic Conceptual Model (SCM) for specific domain knowledge. Based on SCM, the segmenter labels words with database semantics directly, which eases disambiguation and translation (from natural language to database query) in NChiql.
Keywords: database query; natural language processing; word segmentation; disambiguation
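Segmentation in NChiql labels words with database semantics drawn from a domain model (SCM). The sketch below uses a longest-match scan against a hypothetical schema lexicon to show the idea of attaching table/column semantics to segments during segmentation; the lexicon entries and labels are invented for illustration and are not NChiql's actual SCM.

```python
# Hypothetical domain lexicon mapping surface words to database semantics.
SCHEMA_LEXICON = {
    "学生": ("table", "Student"),
    "姓名": ("column", "Student.name"),
    "成绩": ("column", "Score.value"),
}

def segment_with_semantics(query, lexicon, max_len=4):
    """Longest-match segmentation that labels each matched word with its database semantics."""
    i, out = 0, []
    while i < len(query):
        for n in range(min(max_len, len(query) - i), 0, -1):
            piece = query[i:i + n]
            if n == 1 or piece in lexicon:
                out.append((piece, lexicon.get(piece)))
                i += n
                break
    return out

print(segment_with_semantics("学生的成绩", SCHEMA_LEXICON))
# [('学生', ('table', 'Student')), ('的', None), ('成绩', ('column', 'Score.value'))]
```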
7. Construction of Word Segmentation Model Based on HMM+BI-LSTM
Authors: Hang Zhang, Bin Wen. 《国际计算机前沿大会会议论文集》, 2020, No. 2, pp. 47-61 (15 pages)
Chinese word segmentation plays an important role in search engines, artificial intelligence, machine translation and so on. There are currently three main classes of word segmentation algorithms: dictionary-based, statistics-based, and understanding-based. However, few approaches combine these three methods, or even two of them. Therefore, a Chinese word segmentation model is proposed based on a combination of a statistical word segmentation algorithm and an understanding-based word segmentation algorithm. It combines Hidden Markov Model (HMM) word segmentation and Bi-LSTM word segmentation to improve accuracy. The main method is to compute lexical statistics over the results of the two segmenters, choose the better result based on those statistics, and then combine them into the final segmentation. This combined word segmentation model is evaluated on the MSRA corpus provided by Bakeoff. Experiments show that the accuracy of the word segmentation results is 12.52% higher than that of the traditional HMM model and 0.19% higher than that of the BI-LSTM model.
Keywords: Chinese word segmentation; HMM; BI-LSTM; sequence tagging
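The fusion step above chooses between the HMM output and the Bi-LSTM output using lexical statistics. A simplified whole-sentence version of that decision rule is sketched below; the paper's fusion works at a finer granularity, and the frequency table here is an assumed input (e.g., word counts from a segmented training corpus).

```python
def fuse_segmentations(hmm_words, bilstm_words, word_freq):
    """Pick the candidate segmentation whose words are better attested in word_freq.

    hmm_words / bilstm_words: lists of word strings for the same sentence.
    word_freq: dict mapping word -> count observed in a segmented corpus.
    """
    def avg_support(words):
        return sum(word_freq.get(w, 0) for w in words) / max(len(words), 1)
    return hmm_words if avg_support(hmm_words) >= avg_support(bilstm_words) else bilstm_words

word_freq = {"北京": 120, "大学": 95, "北京大学": 40, "生": 300}
print(fuse_segmentations(["北京大学", "生"], ["北京", "大学生"], word_freq))
```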
8. Chinese Word Boundary Ambiguity and Unknown Word Resolution Using Unsupervised Methods (Cited by 1)
Authors: 傅国宏. 《High Technology Letters》 EI CAS, 2000, No. 2, pp. 29-39 (11 pages)
An unsupervised framework to partially resolve four issues, namely ambiguity, unknown words, knowledge acquisition and efficient algorithms, in developing a robust Chinese segmentation system is described. It first proposes a statistical segmentation model integrating the simplified character juncture model (SCJM) with word formation power. The advantage of this model is that it can employ the affinity of characters inside or outside a word and word formation power simultaneously to process disambiguation, and all the parameters can be estimated in an unsupervised way. After investigating the differences between the real and theoretical size of the segmentation space, we apply the A* algorithm to perform segmentation without exhaustively searching all the potential segmentations. Finally, an unsupervised version of Chinese word formation patterns to detect unknown words is presented. Experiments show that the proposed methods are efficient.
Keywords: word segmentation; character juncture; word formation pattern
9. A New Word Detection Method for Chinese Based on Local Context Information (Cited by 1)
Authors: 曾华琳, 周昌乐, 郑旭玲. 《Journal of Donghua University (English Edition)》 EI CAS, 2010, No. 2, pp. 189-192 (4 pages)
Finding out-of-vocabulary words is an urgent and difficult task in Chinese word segmentation. To avoid the defect caused by offline training in the traditional method, the paper proposes an improved prediction by partial match (PPM) segmenting algorithm for Chinese words based on extracting local context information, which adds the context information of the testing text into the local PPM statistical model so as to guide the detection of new words. The algorithm focuses on the process of online segmentation and new word detection, achieves good results in both the closed and open tests, and outperforms some well-known Chinese segmentation systems to a certain extent.
Keywords: new word detection; improved PPM model; context information; Chinese word segmentation
10. Feature study for improving Chinese overlapping ambiguity resolution based on SVM (Cited by 1)
Authors: 熊英, 朱杰. 《Journal of Southeast University (English Edition)》 EI CAS, 2007, No. 2, pp. 179-184 (6 pages)
In order to improve Chinese overlapping ambiguity resolution based on a support vector machine, statistical features are studied for representing the feature vectors. First, four statistical parameters (mutual information, accessor variety, two-character word frequency and single-character word frequency) are used to describe the feature vectors respectively. Then other parameters are tried as complementary features to the parameters that obtain the best results, to further improve the classification performance. Experimental results show that features represented by mutual information, single-character word frequency and accessor variety can obtain an optimum result of 94.39%. Compared with a commonly used word probability model, the accuracy has been improved by 6.62%. Such comparative results confirm that the classification performance can be improved by feature selection and representation.
Keywords: support vector machine; Chinese overlapping ambiguity; Chinese word segmentation; word probability model
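The feature study above represents each overlapping ambiguity instance with statistics such as mutual information and accessor variety. The sketch below computes these two statistics for a two-character string from raw counts; it only illustrates the feature computation, not the SVM training, and the toy corpus is an assumption.

```python
import math
from collections import Counter

def mutual_information(bigram, unigram_freq, bigram_freq, total):
    """Pointwise mutual information of a two-character string."""
    a, b = bigram[0], bigram[1]
    p_ab = bigram_freq.get(bigram, 0) / total
    p_a = unigram_freq.get(a, 0) / total
    p_b = unigram_freq.get(b, 0) / total
    return math.log(p_ab / (p_a * p_b)) if p_ab > 0 and p_a > 0 and p_b > 0 else float("-inf")

def accessor_variety(corpus, candidate):
    """min(#distinct left neighbours, #distinct right neighbours) of the candidate string."""
    left, right = set(), set()
    for sent in corpus:
        start = sent.find(candidate)
        while start != -1:
            if start > 0:
                left.add(sent[start - 1])
            end = start + len(candidate)
            if end < len(sent):
                right.add(sent[end])
            start = sent.find(candidate, start + 1)
    return min(len(left), len(right))

corpus = ["独立自主的外交", "自主研发", "主机独立运行"]
unigrams = Counter(ch for s in corpus for ch in s)
bigrams = Counter(s[i:i + 2] for s in corpus for i in range(len(s) - 1))
total = sum(unigrams.values())
feature_vector = [mutual_information("自主", unigrams, bigrams, total),
                  accessor_variety(corpus, "自主")]
```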
11. Song Ci Style Automatic Identification
Authors: 郑旭玲, 周昌乐, 曾华琳. 《Journal of Donghua University (English Edition)》 EI CAS, 2010, No. 2, pp. 181-184 (4 pages)
To identify Song Ci style automatically, we put forward a novel stylistic text categorization approach based on words and their semantics in this paper. A modified special word segmentation method, a new semantic relativity computing method based on HowNet, and the corresponding word sense disambiguation method are proposed to extract words and semantic features from Song Ci. Experiments are carried out and the results show that these methods are effective.
Keywords: stylistic text categorization; word sense disambiguation (WSD); word segmentation; HowNet; Song Ci
12. Apriori and N-gram Based Chinese Text Feature Extraction Method (Cited by 4)
Authors: 王晔, 黄上腾. 《Journal of Shanghai Jiaotong University (Science)》 EI, 2004, No. 4, pp. 11-14, 20 (5 pages)
Feature extraction, which means extracting the representative words from a text, is an important issue in the text mining field. This paper presents a new Apriori and N-gram based Chinese text feature extraction method, and analyzes its correctness and performance. Our method solves the problem that existing extraction methods cannot find frequent words of arbitrary length in Chinese texts. The experimental results show this method is feasible.
Keywords: Apriori algorithm; N-gram; Chinese word segmentation; feature extraction
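The extraction method above mines frequent character strings of arbitrary length by combining Apriori-style candidate pruning with n-gram counting. A compact sketch of that level-wise mining loop is given below; the support threshold, maximum length, and toy corpus are illustrative assumptions rather than the paper's settings.

```python
from collections import Counter

def frequent_ngrams(docs, min_support=5, max_len=6):
    """Apriori-style frequent character-string mining (toy sketch).

    A string of length n is only counted if both of its (n-1)-substrings are
    already frequent, mirroring the Apriori candidate-pruning step.
    """
    freq = {}
    current = Counter(ch for d in docs for ch in d)           # length-1 candidates
    current = {g: c for g, c in current.items() if c >= min_support}
    n = 1
    while current and n < max_len:
        freq.update(current)
        n += 1
        candidates = Counter()
        for d in docs:
            for i in range(len(d) - n + 1):
                if d[i:i + n - 1] in current and d[i + 1:i + n] in current:
                    candidates[d[i:i + n]] += 1
        current = {g: c for g, c in candidates.items() if c >= min_support}
    freq.update(current)
    return freq

docs = ["机器学习方法", "机器学习应用", "深度学习方法"]
print(frequent_ngrams(docs, min_support=2))  # includes 机器学习 and 学习方法
```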
13. Improving the Syllable-Synchronous Network Search Algorithm for Word Decoding in Continuous Chinese Speech Recognition (Cited by 2)
Authors: 郑方, 武健, 宋战江. 《Journal of Computer Science & Technology》 SCIE EI CSCD, 2000, No. 5, pp. 461-471 (11 pages)
The previously proposed syllable-synchronous network search (SSNS) algorithm plays a very important role in the word decoding of continuous Chinese speech recognition and achieves satisfying performance. Several related key factors that may affect the overall word decoding effect are carefully studied in this paper, including the perfecting of the vocabulary, the big-discount Turing re-estimation of the N-gram probabilities, and the management of the search path buffers. Based on these discussions, corresponding approaches to improving the SSNS algorithm are proposed. Compared with the previous version of the SSNS algorithm, the new version decreases the Chinese character error rate (CCER) in word decoding by 42.1% across a database consisting of a large number of testing sentences (syllable strings).
Keywords: large-vocabulary continuous Chinese speech recognition; word decoding; syllable-synchronous network search; word segmentation
14. Scaling Conditional Random Fields by One-Against-the-Other Decomposition (Cited by 1)
Authors: 赵海, 揭春雨. 《Journal of Computer Science & Technology》 SCIE EI CSCD, 2008, No. 4, pp. 612-619 (8 pages)
As a powerful sequence labeling model, conditional random fields (CRFs) have had successful applications in many natural language processing (NLP) tasks. However, the high complexity of CRF training only allows a very small tag (or label) set, because the training becomes intractable as the tag set enlarges. This paper proposes an improved decomposed training and joint decoding algorithm for CRF learning. Instead of training a single CRF model for all tags, it trains a binary sub-CRF independently for each tag. An optimal tag sequence is then produced by a joint decoding algorithm based on the probabilistic output of all sub-CRFs involved. To test its effectiveness, we apply this approach to tackling Chinese word segmentation (CWS) as a sequence labeling problem. Our evaluation shows that it can reduce the computational cost of this language processing task by 40-50% without any significant performance loss on various large-scale data sets.
Keywords: natural language processing; machine learning; conditional random fields; Chinese word segmentation
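In the decomposed training above, each tag gets its own binary sub-CRF and a joint decoder recombines their probabilistic outputs. The sketch below shows one plausible recombination: a Viterbi pass over BMES tags whose emission scores come from per-tag sub-model probabilities and whose transitions only enforce BMES legality. The paper's actual joint decoding algorithm may differ; this is a simplified illustration.

```python
import math

TAGS = ("B", "M", "E", "S")
# Legal follower tags for Chinese word segmentation with BMES labels.
NEXT = {"B": {"M", "E"}, "M": {"M", "E"}, "E": {"B", "S"}, "S": {"B", "S"}}

def joint_decode(sub_probs):
    """sub_probs[i][t]: probability assigned to tag t at position i by tag t's binary sub-model."""
    n = len(sub_probs)
    neg_inf = float("-inf")
    # A sentence must start with B or S.
    score = [{t: (math.log(sub_probs[0][t] + 1e-12) if t in ("B", "S") else neg_inf) for t in TAGS}]
    back = [{}]
    for i in range(1, n):
        score.append({})
        back.append({})
        for t in TAGS:
            prev, best = None, neg_inf
            for p in TAGS:
                if t in NEXT[p] and score[i - 1][p] > best:
                    prev, best = p, score[i - 1][p]
            score[i][t] = best + math.log(sub_probs[i][t] + 1e-12)
            back[i][t] = prev
    last = max(("E", "S"), key=lambda t: score[n - 1][t])  # a sentence must end with E or S
    tags = [last]
    for i in range(n - 1, 0, -1):
        tags.append(back[i][tags[-1]])
    return list(reversed(tags))

# Toy example: a two-character word followed by a single-character word.
probs = [{"B": 0.8, "M": 0.1, "E": 0.05, "S": 0.05},
         {"B": 0.1, "M": 0.1, "E": 0.7, "S": 0.1},
         {"B": 0.2, "M": 0.1, "E": 0.1, "S": 0.6}]
print(joint_decode(probs))  # ['B', 'E', 'S']
```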
15. Resolution of overlapping ambiguity strings based on maximum entropy model (Cited by 1)
Authors: ZHANG Feng, FAN Xiao-zhong. 《Frontiers of Electrical and Electronic Engineering in China》 CSCD, 2006, No. 3, pp. 273-276 (4 pages)
The resolution of overlapping ambiguity strings (OAS) is studied based on the maximum entropy model. There are two model outputs, where either the first two characters form a word or the last two characters form a word. The features of the model include one word in the context of the OAS, the current OAS, and the word probability relation of the two kinds of segmentation results. OAS in the training text are found by combining the FMM and BMM segmentation methods. After feature tagging they are used to train the maximum entropy model. The People's Daily corpus of January 1998 is used in training and testing. Experimental results show a closed test precision of 98.64% and an open test precision of 95.01%. The open test precision is 3.76% better than that of the common word probability method.
Keywords: Chinese information processing; Chinese automatic word segmentation; overlapping ambiguity strings; maximum entropy model
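Overlapping ambiguity strings are located above by comparing FMM and BMM segmentations of the same text. The sketch below implements plain forward and backward maximum matching against a word list and flags sentences where the two disagree; the lexicon and maximum word length are illustrative assumptions, and the maximum entropy classifier that resolves the ambiguity is not shown.

```python
def fmm(sentence, lexicon, max_len=4):
    """Forward maximum matching: greedily take the longest dictionary word from the left."""
    i, words = 0, []
    while i < len(sentence):
        for n in range(min(max_len, len(sentence) - i), 0, -1):
            if n == 1 or sentence[i:i + n] in lexicon:
                words.append(sentence[i:i + n])
                i += n
                break
    return words

def bmm(sentence, lexicon, max_len=4):
    """Backward maximum matching: greedily take the longest dictionary word from the right."""
    j, words = len(sentence), []
    while j > 0:
        for n in range(min(max_len, j), 0, -1):
            if n == 1 or sentence[j - n:j] in lexicon:
                words.insert(0, sentence[j - n:j])
                j -= n
                break
    return words

def has_overlapping_ambiguity(sentence, lexicon):
    """A disagreement between FMM and BMM signals a candidate overlapping ambiguity string."""
    return fmm(sentence, lexicon) != bmm(sentence, lexicon)

lexicon = {"结合", "结合成", "成分", "分子", "合成", "子时"}
print(fmm("结合成分子时", lexicon))   # ['结合成', '分子', '时']
print(bmm("结合成分子时", lexicon))   # ['结合', '成分', '子时']
print(has_overlapping_ambiguity("结合成分子时", lexicon))  # True
```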
16. Pretrained Models and Evaluation Data for the Khmer Language
Authors: Shengyi Jiang, Sihui Fu, Nankai Lin, Yingwen Fu. 《Tsinghua Science and Technology》 SCIE EI CAS CSCD, 2022, No. 4, pp. 709-718 (10 pages)
Trained on a large corpus, pretrained models (PTMs) can capture different levels of concepts in context and hence generate universal language representations, which greatly benefit downstream natural language processing (NLP) tasks. In recent years, PTMs have been widely used in most NLP applications, especially for high-resource languages such as English and Chinese. However, scarce resources have discouraged the progress of PTMs for low-resource languages. Transformer-based PTMs for the Khmer language are presented in this work for the first time. We evaluate our models on two downstream tasks: part-of-speech tagging and news categorization. The dataset for the latter task is self-constructed. Experiments demonstrate the effectiveness of the Khmer models. In addition, we find that the current Khmer word segmentation technology does not aid performance improvement. We aim to release our models and datasets to the community in hopes of facilitating the future development of Khmer NLP applications.
Keywords: pretrained models; Khmer language; word segmentation; part-of-speech (POS) tagging; news categorization