Funding: supported by the National Natural Science Foundation of China (NSFC, Grant Nos. 12273077, 72101068, 12373110, and 12103070); the National Key Research and Development Program of China (grants 2022YFF0712400 and 2022YFF0711500); the 14th Five-year Informatization Plan of the Chinese Academy of Sciences (CAS-WX2021SF-0204); and the Astronomical Big Data Joint Research Center, co-founded by the National Astronomical Observatories, Chinese Academy of Sciences, and Alibaba Cloud.
Abstract: Astronomical knowledge entities, such as celestial object identifiers, are crucial for literature retrieval, knowledge graph construction, and other research and applications in the field of astronomy. Traditional methods of extracting knowledge entities from texts face numerous obstacles that are difficult to overcome. Consequently, there is a pressing need for improved methods to extract them efficiently. This study explores the potential of pre-trained Large Language Models (LLMs) to perform astronomical knowledge entity extraction (KEE) from astrophysical journal articles using prompts. We propose a prompting strategy called PromptKEE, which comprises five prompt elements, and design eight combination prompts based on them. We select four representative LLMs (Llama-2-70B, GPT-3.5, GPT-4, and Claude 2) and use these eight combination prompts to extract the most typical astronomical knowledge entities, celestial object identifiers and telescope names, from astronomical journal articles. To accommodate the models' token limitations, we construct two data sets: the full texts and the paragraph collections of 30 articles. Leveraging the eight prompts, we test GPT-4 and Claude 2 on the full texts, and all four LLMs on the paragraph collections. The experimental results demonstrate that pre-trained LLMs show significant potential for KEE tasks, but their performance varies between the two data sets. Furthermore, we analyze several important factors that influence the performance of LLMs in entity extraction and provide insights for future KEE tasks on astrophysical articles using LLMs. Finally, compared to other KEE methods, LLMs exhibit strong competitiveness in multiple respects.
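The prompt-based extraction workflow described above can be sketched in a few lines. This is a minimal illustration only: the paper's actual five prompt elements and eight combinations are not reproduced here, and the element names (`role`, `task`, `format`), the JSON output schema, and the example entities are assumptions made for the sketch, not the study's real prompts.

```python
import json

# Illustrative prompt elements in the spirit of a combination prompt;
# the names and wording here are hypothetical, not taken from PromptKEE.
PROMPT_ELEMENTS = {
    "role": "You are an expert astronomer annotating journal articles.",
    "task": ("Extract every celestial object identifier and telescope name "
             "from the text below."),
    "format": ('Answer with JSON: {"identifiers": [...], "telescopes": [...]}. '
               "If none are found, return empty lists."),
}

def build_prompt(text: str, elements=("role", "task", "format")) -> str:
    """Compose the selected prompt elements with the input paragraph."""
    parts = [PROMPT_ELEMENTS[e] for e in elements]
    parts.append(f"Text:\n{text}")
    return "\n\n".join(parts)

def parse_entities(reply: str) -> dict:
    """Parse the model's JSON reply, tolerating surrounding prose."""
    start, end = reply.find("{"), reply.rfind("}")
    if start == -1 or end == -1:
        return {"identifiers": [], "telescopes": []}
    return json.loads(reply[start:end + 1])

# Example with a mock model reply (no real LLM is called here).
reply = 'Result: {"identifiers": ["NGC 4151"], "telescopes": ["JWST"]}'
print(parse_entities(reply)["identifiers"])  # → ['NGC 4151']
```

Dropping or reordering entries in `elements` is one simple way to form different combination prompts from a shared pool of elements, which mirrors how eight prompts can be derived from five elements.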