Manas Gaur, University of Maryland, Baltimore County
The Second Bridge on Artificial Intelligence for Scholarly Communication (AAAI-26) (Open Conference Proceedings)
DOI: 10.52825/ocp.v8i.3175
Abstract
Large language models (LLMs) are gradually infiltrating the academic workflow, but they present one significant problem: hallucination. Hallucinations include invented research results, fabricated references, and misinterpreted inferences that undermine the credibility and dependability of scholarly writing. This paper discusses hallucinations in the context of scholarly communication, identifies their major types, and examines their causes and effects. It also examines pragmatic mitigation measures, such as retrieval-augmented generation (RAG) for factual grounding, citation verification, and neurosymbolic strategies for structured fact-checking. The paper additionally emphasizes the significance of human-AI partnership in creating scholarly tools, so that the use of AI in research remains responsible and verifiable. By synthesizing the risks, opportunities, and available mitigation measures, the paper seeks to raise awareness and offer guidance for building reliable AI systems for scholarly contexts. Rather than presenting a comprehensive technical framework, the work provides a conceptual overview that may be used to design more reliable, transparent, and fact-driven AI-assisted research tools.
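As an illustrative sketch (not part of the paper itself), the citation-verification step mentioned in the abstract might work as follows: every DOI an LLM emits is checked against a trusted bibliographic index, and unverifiable references are dropped. The index contents and function names here are hypothetical.

```python
# Hypothetical citation-verification sketch: keep only references
# whose DOIs resolve in a trusted bibliographic index. In practice
# the index would be a service such as Crossref; here it is an
# in-memory stand-in for illustration.

TRUSTED_INDEX = {
    "10.52825/ocp.v8i.3175": "Hallucinations in Scholarly Communication",
}

def verify_citation(doi: str) -> bool:
    """Return True only if the DOI is found in the trusted index."""
    return doi in TRUSTED_INDEX

def filter_hallucinated(references: list[str]) -> list[str]:
    """Drop references that cannot be verified against the index."""
    return [doi for doi in references if verify_citation(doi)]
```

For example, `filter_hallucinated(["10.52825/ocp.v8i.3175", "10.9999/fake.123"])` would retain only the first DOI, flagging the second as a likely fabrication.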
Agentic AI is an emerging field of artificial intelligence with great impact on scholarly research, helping to handle large volumes of information from vast corpora. Current Agentic AI systems depend on large language models (LLMs) for information retrieval and reasoning. LLMs are very effective at natural language understanding and iterative reasoning; however, they have inherent limitations that pose challenges for Agentic AI, including provenance tracking, reasoning challenges, temporal staleness, and context dilution. Incorporating knowledge graphs (KGs) alongside LLMs can mitigate these challenges and support deep search in Agentic AI. In this work, we explore how KGs are well suited to addressing these challenges and how they can complement LLMs in Agentic AI for scholarly research. Furthermore, we investigate the problem of frequency bias inherent in LLMs, which distorts outputs by biasing them toward the most frequent inputs, and examine how KG integration can counteract it. Overall, through this work we aim to highlight the potential of knowledge graphs for Agentic AI in scholarly communication.
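As a hedged sketch of the KG integration the abstract describes (the triples and function names below are illustrative, not from the paper), a knowledge graph of verified (subject, relation, object) facts can be used to filter an LLM's ranked candidate answers, counteracting frequency bias by rejecting frequent but unsupported outputs.

```python
# Hypothetical sketch: a tiny knowledge graph of verified triples,
# used to filter an LLM's candidate answers. Frequent-but-wrong
# candidates (frequency bias) are rejected because the KG does
# not support them.

KG = {
    ("Marie Curie", "award"): {
        "Nobel Prize in Physics",
        "Nobel Prize in Chemistry",
    },
}

def kg_filter(subject: str, relation: str, ranked_candidates: list[str]) -> list[str]:
    """Keep the LLM's ranked candidates only when the KG supports them."""
    valid = KG.get((subject, relation), set())
    return [c for c in ranked_candidates if c in valid]
```

For instance, if an LLM ranks "Nobel Peace Prize" first for the query ("Marie Curie", "award") simply because that phrase is frequent in its training data, `kg_filter` would discard it and keep only the KG-supported answers.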
Gary Price (gprice@gmail.com) is a librarian, writer, consultant, and frequent conference speaker based in the Washington D.C. metro area.
He earned his MLIS degree from Wayne State University in Detroit.
Price has won several awards, including the SLA Innovations in Technology Award and Alumnus of the Year from the Wayne State University Library and Information Science Program. From 2006 to 2009 he was Director of Online Information Services at Ask.com.