Preprint: “Comparing Five Generative AI Chatbots’ Answers to LLM-Generated Clinical Questions with Medical Information Scientists’ Evidence Summaries”
The research article (preprint) linked below was recently shared on medRxiv.
Title
Comparing Five Generative AI Chatbots’ Answers to LLM-Generated Clinical Questions with Medical Information Scientists’ Evidence Summaries
Authors
Mallory N. Blasingame
Taneya Y. Koonce
Annette M. Williams
Jing Su
Dario A. Giuse
Poppy A. Krum
Nunzia B. Giuse
Affiliation: Vanderbilt University Medical Center (All Authors)
Source
via medRxiv
DOI: 10.1101/2025.09.24.25336199
Abstract
Objective: To compare five publicly available large language model (LLM) chatbots’ answers to clinical questions with answers provided by medical information scientists.
Methods: LLMs were prompted to generate 45 PICO (patient, intervention, comparison, outcome) questions addressing treatment, prognosis, and etiology. Each question was answered by a medical information scientist and submitted to five LLM tools: ChatGPT, Gemini, Copilot, DeepSeek, and Grok-3. Using key elements from the answers provided, pairs of information scientists labeled each LLM answer as in Total Alignment, Partial Alignment, or No Alignment with the information scientist’s answer. The Partial Alignment answers were also analyzed for the inclusion of additional information.
Results: The full set of 225 LLM answers was assessed as being in Total Alignment 20.9% of the time (n=47), in Partial Alignment 78.7% of the time (n=177), and in No Alignment 0.4% of the time (n=1). Kruskal-Wallis testing found no significant difference in alignment ratings among the five chatbots (p=0.46). An analysis of the partially aligned answers found a significant difference in the number of additional elements provided by the information scientists versus the chatbots per Wilcoxon rank-sum testing (p=0.02).
Discussion: The five chatbots did not differ significantly in their alignment with information scientists’ evidence summaries. The analysis of partially aligned answers found that both chatbots and information scientists included additional information, with information scientists doing so significantly more often. An important next step will be to assess the additional information from both the chatbots and the information scientists for validity and relevance.
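For readers curious about the statistics named in the abstract, the short Python sketch below is a hypothetical illustration, not the preprint’s code: it shows the general shape of a Kruskal-Wallis test on ordinal alignment ratings across five chatbots and a Wilcoxon rank-sum test on counts of additional information elements, using randomly generated toy data. The rating coding, group sizes, and Poisson means are assumptions made for illustration only.

# Hypothetical sketch (not the preprint's code): the shape of the paper's two
# statistical comparisons, run with SciPy on randomly generated toy data.
from scipy import stats
import numpy as np

rng = np.random.default_rng(0)

# Toy ordinal ratings, coded 0 = No Alignment, 1 = Partial, 2 = Total;
# 45 answers per chatbot (5 chatbots x 45 questions = 225 answers).
ratings = {name: rng.choice([0, 1, 2], size=45, p=[0.01, 0.78, 0.21])
           for name in ["ChatGPT", "Gemini", "Copilot", "DeepSeek", "Grok-3"]}

# Kruskal-Wallis test across the five chatbots' rating distributions.
h_stat, p_kw = stats.kruskal(*ratings.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.2f}")

# Toy counts of additional information elements in partially aligned answers;
# the group sizes and Poisson means here are assumptions, not preprint values.
extra_scientists = rng.poisson(3.0, size=90)
extra_chatbots = rng.poisson(2.0, size=90)

# Wilcoxon rank-sum test comparing the two groups of counts.
z_stat, p_w = stats.ranksums(extra_scientists, extra_chatbots)
print(f"Wilcoxon rank-sum: z = {z_stat:.2f}, p = {p_w:.3f}")

With real ratings and element counts in place of the toy arrays, the two printed p-values would correspond to the comparisons reported in the abstract (p=0.46 and p=0.02).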
Direct to Abstract + Link to Full Text
About Gary Price
Gary Price (gprice@gmail.com) is a librarian, writer, consultant, and frequent conference speaker based in the Washington, D.C. metro area. He earned his MLIS degree from Wayne State University in Detroit. Price has won several awards, including the SLA Innovations in Technology Award and Alumnus of the Year from the Wayne State University Library and Information Science Program. From 2006 to 2009 he was Director of Online Information Services at Ask.com.