Abstract
[Purpose/significance] Taking automatic word segmentation of ancient books as an entry point, this paper introduces the "Xunzi" series of large language models and examines their performance on word segmentation of ancient Chinese texts. [Method/process] Based on the segmented corpus of the Zuozhuan, the data were cleaned and organised to construct an instruction dataset; 1,000 records were drawn from it as test data, and 500, 1,000, 2,000, and 5,000 records were used in turn as training data for instruction fine-tuning, after which performance was evaluated. [Result/conclusion] The experimental results show that only a small amount of data is needed for a large language model to perform well: when the fine-tuning data reaches 5,000 records, the Xunzi-Qwen-7B model achieves the best performance, with an F1 score of 84.54%.
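The abstract describes building instruction-tuning records from a segmented corpus and evaluating with F1. A minimal sketch of both steps is given below; the record field names (`instruction`/`input`/`output`) and the prompt wording are assumptions, not the authors' actual format, and F1 is computed here over word spans, which is the standard scoring for segmentation.

```python
def make_instruction_record(sentence: str, segmented: str) -> dict:
    """Wrap one sentence as a hypothetical instruction-tuning example."""
    return {
        "instruction": "请对下面的古文进行分词，词与词之间用空格隔开。",
        "input": sentence,
        "output": segmented,
    }

def to_spans(segmented: str) -> set:
    """Convert a space-delimited segmentation into (start, end) word spans."""
    spans, pos = set(), 0
    for word in segmented.split():
        spans.add((pos, pos + len(word)))
        pos += len(word)
    return spans

def seg_f1(gold: str, pred: str) -> float:
    """Precision/recall/F1 over exactly matching word spans."""
    g, p = to_spans(gold), to_spans(pred)
    tp = len(g & p)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(p), tp / len(g)
    return 2 * precision * recall / (precision + recall)
```

For example, against the gold segmentation "郑伯 克 段 于 鄢", the over-split prediction "郑 伯 克 段 于 鄢" matches 4 of 5 gold words out of 6 predicted, giving F1 = 8/11 ≈ 0.727.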
Authors
ZHU Danhao; ZHAO Zhixiao; WU Na; WANG Xiyu; SUN Guangyao; WANG Dongbo (Department of Criminal Science and Technology, Jiangsu Police Institute, Nanjing 210031; School of Information Management, Nanjing Agricultural University, Nanjing 210095)
Source
Scientific Information Research (《科技情报研究》)
CSSCI
2024, No. 2, pp. 11-20 (10 pages)
Funding
Major Project of the National Social Science Fund of China, "Construction and Application of a Cross-lingual Knowledge Base of Ancient Chinese Classics" (No. 21&ZD331).
Keywords
"Xunzi" large language model
Zuozhuan (《左传》)
word segmentation
instruction tuning