Tibetan Pre-training Model Based on Attention Heads and Part-of-Speech Fusion

Abstract: To better learn the characteristics of the Tibetan language and to determine the optimal number of attention heads for a Tibetan pre-trained language model, part-of-speech information was fused into the pre-trained model, and comparative experiments were conducted to identify the best head count, with the aim of improving the model's understanding of Tibetan language features and its performance on downstream tasks. The experimental results show that the pre-trained model with 12 attention heads performs well across multiple classification tasks. Furthermore, after part-of-speech information was incorporated into the pre-trained model, the macro-F1 scores on the text, title, and sentiment classification tasks increased by 0.57%, 0.92%, and 1.01%, respectively. These results demonstrate that incorporating part-of-speech features enables the model to understand Tibetan language structure and grammatical rules more accurately, thereby improving classification accuracy.
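The record does not give the paper's fusion mechanism or training code, so the following is only a minimal sketch of the setup the abstract describes, under stated assumptions: a BERT-style encoder configured with the best-performing 12 attention heads, and a learned part-of-speech embedding added to the token embeddings before encoding. The class name, vocabulary size, and POS tag-set size are illustrative placeholders, and Hugging Face Transformers with PyTorch stands in for whatever framework the authors actually used.

```python
# A minimal sketch, NOT the authors' released code. Assumptions: additive
# fusion of a POS-tag embedding with the token embeddings, a BERT-style
# encoder, and placeholder vocabulary / tag-set sizes.
import torch
import torch.nn as nn
from transformers import BertConfig, BertModel

class PosFusedTibetanEncoder(nn.Module):
    def __init__(self, vocab_size=32000, num_pos_tags=30, hidden_size=768):
        super().__init__()
        config = BertConfig(
            vocab_size=vocab_size,
            hidden_size=hidden_size,   # must stay divisible by num_attention_heads
            num_hidden_layers=12,
            num_attention_heads=12,    # the head count the paper found best
        )
        self.encoder = BertModel(config)
        # Extra embedding table for POS tags, fused by element-wise addition,
        # mirroring how BERT sums token, segment, and position embeddings.
        self.pos_tag_embed = nn.Embedding(num_pos_tags, hidden_size)

    def forward(self, token_ids, pos_tag_ids, attention_mask=None):
        token_embeds = self.encoder.embeddings.word_embeddings(token_ids)
        fused = token_embeds + self.pos_tag_embed(pos_tag_ids)
        out = self.encoder(inputs_embeds=fused, attention_mask=attention_mask)
        return out.last_hidden_state

# Example: a batch of 2 sequences of length 8 with aligned POS tag ids.
tokens = torch.randint(0, 32000, (2, 8))
tags = torch.randint(0, 30, (2, 8))
hidden = PosFusedTibetanEncoder()(tokens, tags)   # -> shape (2, 8, 768)
```

Additive fusion is sketched because it leaves the encoder architecture unchanged; concatenation followed by a projection would be an equally plausible reading of "fusion." Varying num_attention_heads over several candidate values and comparing downstream F1 scores, as the abstract describes, would reproduce the head-count comparison.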
Authors: 张英 (ZHANG Ying), 拥措 (YONG Tso), 斯曲卓嘎 (SI Qu-zhuo-ga), 拉毛杰 (LA Mao-jie), 扎西永珍 (ZHA Xi-yong-zhen), 尼玛扎西 (NI Ma-zha-xi) (Information Science and Technology Academy, Tibet University, Lhasa 850000, China; Key Laboratory of Tibetan Information Technology and Artificial Intelligence of Tibet Autonomous Region, Lhasa 850000, China; Engineering Research Center of the Ministry of Education of Tibetan Information Technology, Lhasa 850000, China)
Source: Science Technology and Engineering (《科学技术与工程》, Peking University Core Journal), 2024, Issue 23, pp. 9957-9964 (8 pages)
Funding: Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project (2022ZD0116100); Science and Technology Department of Tibet Autonomous Region Project (XZ202401JD0010)
Keywords: attention mechanism; part-of-speech; pre-trained language models; text classification; sentiment classification