Shubham Agarwal ServiceNow Research, Mila – Quebec AI Institute, HEC Montreal
Gaurav Sahu ServiceNow Research, University of Waterloo
Abhay Puri ServiceNow Research
Issam H. Laradji ServiceNow Research, University of British Columbia
Krishnamurthy DJ Dvijotham ServiceNow Research
Jason Stanley ServiceNow Research
Laurent Charlin Mila – Quebec AI Institute, HEC Montreal
Christopher Pal ServiceNow Research, Mila – Quebec AI Institute, Canada CIFAR AI Chair
Source
via arXiv
DOI: 10.48550/arXiv.2412.15249
Abstract
Literature reviews are an essential component of scientific research, but they remain time-intensive and challenging to write, especially due to the recent influx of research papers. This paper explores the zero-shot abilities of recent Large Language Models (LLMs) in assisting with the writing of literature reviews based on an abstract. We decompose the task into two components: (1) retrieving related works given a query abstract, and (2) writing a literature review based on the retrieved results. We analyze how effective LLMs are for both components. For retrieval, we introduce a novel two-step search strategy that first uses an LLM to extract meaningful keywords from the abstract of a paper and then retrieves potentially relevant papers by querying an external knowledge base. Additionally, we study a prompting-based re-ranking mechanism with attribution and show that re-ranking doubles the normalized recall compared to naive search methods, while providing insights into the LLM's decision-making process. In the generation phase, we propose a two-step approach that first outlines a plan for the review and then executes steps in the plan to generate the actual review. To evaluate different LLM-based literature review methods, we create test sets from arXiv papers using a protocol designed for rolling use with newly released LLMs to avoid test set contamination in zero-shot evaluations. We release this evaluation protocol to promote additional research and development in this area. Our empirical results suggest that LLMs show promising potential for writing literature reviews when the task is decomposed into the smaller components of retrieval and planning. Further, we demonstrate that our planning-based approach achieves higher-quality reviews by reducing hallucinated references in the generated review by 18-26% compared to existing simpler LLM-based generation methods.
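The retrieval pipeline the abstract describes is straightforward to prototype. Below is a minimal Python sketch of the two-step search and the prompt-based re-ranking with attribution. Note the assumptions: `call_llm` is a hypothetical stand-in for any chat-completion call, the prompt wording is illustrative rather than the authors' actual prompts, and Semantic Scholar's public search endpoint is used as one plausible external knowledge base.

```python
import requests


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion API call."""
    raise NotImplementedError  # plug in your LLM provider of choice


def extract_keywords(abstract: str) -> str:
    """Step 1: ask the LLM to distill the abstract into a search query."""
    return call_llm(
        "Extract a short, comma-separated list of search keywords from this "
        "abstract. Return only the keywords.\n\n" + abstract
    )


def search_papers(query: str, limit: int = 20) -> list[dict]:
    """Step 2: query an external knowledge base with the extracted keywords.
    Semantic Scholar's public search endpoint is one plausible backend;
    the paper's actual knowledge base may differ."""
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={"query": query, "limit": limit, "fields": "title,abstract"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])


def rerank_with_attribution(abstract: str, candidates: list[dict]) -> str:
    """Prompt-based re-ranking: order candidates by relevance and make the
    LLM justify each placement, exposing its decision-making process."""
    listing = "\n".join(
        f"[{i}] {p['title']}: {(p.get('abstract') or '')[:500]}"
        for i, p in enumerate(candidates)
    )
    return call_llm(
        "Rank the candidate papers below from most to least relevant to the "
        "query abstract. Output the indices in order, each followed by a "
        f"one-line justification.\n\nQuery abstract:\n{abstract}\n\n"
        f"Candidates:\n{listing}"
    )
```

Chaining the pieces is then a matter of `rerank_with_attribution(abstract, search_papers(extract_keywords(abstract)))`; a production version would also parse the ranked indices out of the LLM's text response.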
Figure 1: A schematic diagram of our framework, where: (1) relevant prior work is retrieved using keyword- and embedding-based search; (2) LLMs re-rank the results to find the most relevant prior work; (3) based on these papers and the user's abstract or idea summary, an LLM generates a literature review, (4) optionally controlled by a sentence plan. Source: 10.48550/arXiv.2412.15249
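Steps (3) and (4) of the figure, the plan-then-execute generation, can be sketched in the same spirit. This hypothetical `write_review` helper reuses `call_llm` from the sketch above; the prompts are illustrative, not the authors' own, but the structure mirrors the paper's claim that constraining the writing step to a citation-assigned plan is what reduces hallucinated references.

```python
def write_review(abstract: str, ranked_papers: list[dict]) -> str:
    """Two-step generation: outline a plan first, then execute it."""
    refs = "\n".join(
        f"[{i + 1}] {p['title']}" for i, p in enumerate(ranked_papers)
    )
    # Step 1: a sentence-level plan that assigns citations to each sentence.
    plan = call_llm(
        "Outline a related-work section sentence by sentence. For each "
        "planned sentence, list the reference numbers it will cite.\n\n"
        f"Query abstract:\n{abstract}\n\nReferences:\n{refs}"
    )
    # Step 2: execute the plan, restricting citations to the provided list,
    # which is the mechanism credited with curbing hallucinated references.
    return call_llm(
        "Write the literature review following this plan exactly. Cite only "
        "the numbered references provided.\n\n"
        f"Plan:\n{plan}\n\nReferences:\n{refs}\n\nQuery abstract:\n{abstract}"
    )
```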
Gary Price (gprice@gmail.com) is a librarian, writer, consultant, and frequent conference speaker based in the Washington D.C. metro area.
He earned his MLIS degree from Wayne State University in Detroit.
Price has won several awards, including the SLA Innovations in Technology Award and Alumnus of the Year from the Wayne State University Library and Information Science Program. From 2006 to 2009, he was Director of Online Information Services at Ask.com.