
December 20, 2025 by Gary Price

Conference Paper: “Hallucinations in Scholarly LLMs: A Conceptual Overview and Practical Implications”


The paper linked below appears in the proceedings of the Second AAAI (Association for the Advancement of Artificial Intelligence) Bridge on Artificial Intelligence for Scholarly Communication.

Title

Hallucinations in Scholarly LLMs: A Conceptual Overview and Practical Implications

Authors

Naveen Lamba
Sharda University, India

Sanju Tiwari
Sharda University, India

Manas Gaur
University of Maryland, Baltimore County

Source

The Second Bridge on Artificial Intelligence for Scholarly Communication (AAAI-26) (Open Conference Proceedings)

DOI: 10.52825/ocp.v8i.3175

Abstract

Large language models (LLMs) are gradually entering the academic workflow, but they bring one significant problem: hallucination. Hallucinations include invented research results, fabricated references, and misinterpreted inferences that undermine the credibility and dependability of scholarly writing. This paper discusses hallucinations in the context of scholarly communication, identifies their major types, and examines their causes and effects. It also reviews pragmatic mitigation measures, such as retrieval-augmented generation (RAG) for factual grounding, citation verification, and neurosymbolic strategies for structured fact-checking. The paper additionally emphasizes the importance of human-AI partnership in building scholarly tools, so that the use of AI in research is responsible and verifiable. By synthesizing risks, opportunities, and available mitigation measures, the paper seeks to raise awareness and offer guidance for the creation of reliable AI systems in scholarly contexts. Rather than presenting a comprehensive technical framework, the work provides a conceptual overview that can inform the design of more reliable, transparent, and fact-driven AI-assisted research tools.
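The citation-verification measure mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' method, just one hedged interpretation: before trusting an LLM-generated reference, check that its DOI is syntactically valid and that it resolves in a trusted bibliographic index. A real pipeline would query a service such as Crossref; here a small local set stands in for that index.

```python
import re

# Stand-in for a bibliographic index (in practice: a Crossref/DataCite lookup).
KNOWN_DOIS = {"10.52825/ocp.v8i.3175"}

# A DOI starts with "10.", a 4-9 digit registrant code, "/", and a suffix.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def verify_citation(doi: str) -> str:
    """Classify a DOI as malformed, unverified, or verified."""
    if not DOI_PATTERN.match(doi):
        return "malformed"    # not even a syntactically valid DOI
    if doi not in KNOWN_DOIS:
        return "unverified"   # plausible-looking, but possibly hallucinated
    return "verified"

print(verify_citation("10.52825/ocp.v8i.3175"))  # verified
print(verify_citation("10.1234/made-up.2025"))   # unverified
print(verify_citation("not-a-doi"))              # malformed
```

The key design point is that "looks like a DOI" and "exists in an index" are separate checks: hallucinated references frequently pass the first and fail the second.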


Direct to Full Text Article
10 pages; PDF.

On a Related Note (Also in the Proceedings)

Synergistic AI Agents: Integrating Knowledge Graphs and Large Language Models for Scholarly Communication

Agentic AI is an emerging field of artificial intelligence with great impact on scholarly research. Agentic AI helps handle large volumes of information from vast corpora. Current agentic AI systems depend on large language models (LLMs) for information retrieval and reasoning. LLMs are very effective at natural language understanding and iterative reasoning. However, LLMs have inherent limitations that pose challenges for agentic AI; provenance tracking, reasoning failures, temporal staleness, and context dilution are some examples. Incorporating knowledge graphs (KGs) alongside LLMs can mitigate these challenges and support deep search in agentic AI. In this work, we explore how KGs are well suited to addressing these challenges and how KGs can complement LLMs in agentic AI for scholarly research. Furthermore, we investigate the problem of frequency bias inherent in LLMs: frequency bias distorts LLM outputs by biasing them toward the most frequent inputs. We examine how KG integration can counteract this problem. Overall, through this work we aim to highlight the potential of knowledge graphs for agentic AI in scholarly communication.
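The KG-grounding idea in this abstract can be sketched in a few lines. This is an illustrative toy, not the paper's system: the triples and names are hypothetical, and a real agent would query an external knowledge graph (e.g. via SPARQL) rather than an in-memory set. The point is that a claim generated by an LLM is accepted only if it matches a curated fact, which is exactly the check frequency bias cannot fake.

```python
# Toy knowledge graph of (subject, predicate, object) triples.
TRIPLES = {
    ("hallucination", "is_a", "llm_failure_mode"),
    ("rag", "mitigates", "hallucination"),
}

def supported_by_kg(subject: str, predicate: str, obj: str) -> bool:
    """Return True only if the exact claim appears in the knowledge graph."""
    return (subject, predicate, obj) in TRIPLES

# An LLM might fluently assert both claims; only the first is KG-backed.
print(supported_by_kg("rag", "mitigates", "hallucination"))   # True
print(supported_by_kg("rag", "eliminates", "hallucination"))  # False
```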

Direct to Complete Proceedings

Filed under: Conference Presentations, Journal Articles, News, Scholarly Communications


About Gary Price

Gary Price (gprice@gmail.com) is a librarian, writer, consultant, and frequent conference speaker based in the Washington, D.C. metro area. He earned his MLIS degree from Wayne State University in Detroit. Price has won several awards, including the SLA Innovations in Technology Award and Alumnus of the Year from the Wayne State University Library and Information Science Program. From 2006 to 2009 he was Director of Online Information Services at Ask.com.



© 2026 Library Journal. All rights reserved.

