Scholarly Communication: New Wiley Guidelines Give Researchers Clear Path Forward in Responsible AI Use
From Wiley:
Wiley has set new standards for responsible and intentional AI use, delivering comprehensive guidelines specifically designed with and for research authors, journal editors, and peer reviewers.
With AI usage among researchers surging to 84%, Wiley is responding directly to the need for publisher guidance articulated by 73% of respondents in the most recent ExplanAItions study. Building on similar guidance for book authors published in March 2025, and shaped by ExplanAItions findings, Wiley’s new guidance draws on more than 40 in-depth interviews with research authors and editors across various disciplines, as well as the company’s experts in AI, research integrity, and copyright and permissions.
It offers the following research-specific provisions:
- Disclosure Standards: Detailed disclosure requirements with practical examples show researchers exactly when and how to disclose AI use—covering drafting and editing, study design, data collection, literature review, data analysis, and visuals. This guidance treats disclosure as an enabling practice, not a barrier, helping researchers use AI confidently and responsibly.
- Peer Review Confidentiality Protections: Clear prohibitions on uploading unpublished manuscripts to AI tools, while providing guidance on responsible AI applications for reviewers and editors. This outlines areas where AI use is and is not appropriate in the peer review process.
- Image Integrity Rules: Explicit prohibition of AI-edited photographs in journals, with clear distinctions between permitted conceptual illustrations and factual/evidential images that require verifiable accuracy, providing clarity on AI use for image generation in various contexts.
- Reproducibility Framework: Comprehensive advice on which AI uses require disclosure, helping researchers understand when transparency is necessary for scientific evaluation.
[Clip]
As the research publishing industry experiences rapid AI adoption, these guidelines will serve as a model for responsible AI integration across the sector. They emphasize that AI use should not result in automatic manuscript rejection. Instead, editorial evaluation should focus on research quality, integrity, and transparency, using disclosure as a routine, intentional practice. Beyond establishing standards, the guidelines provide practical examples, workflow integration tips, and decision-making frameworks.
Direct to Complete Announcement
Resources
- Guidelines Web Page
- ExplanAItions 2025 Key Insights (Preview; Released Earlier This Month)
13 pages; PDF.
About Gary Price
Gary Price (gprice@gmail.com) is a librarian, writer, consultant, and frequent conference speaker based in the Washington, D.C. metro area. He earned his MLIS degree from Wayne State University in Detroit. Price has won several awards, including the SLA Innovations in Technology Award and Alumnus of the Year from the Wayne State University Library and Information Science Program. From 2006 to 2009 he was Director of Online Information Services at Ask.com.