
November 18, 2025 by Gary Price

Allen Institute for Artificial Intelligence (Ai2) Releases DR-Tulu: “An Open, End-To-End Training Recipe For Long-Form Deep Research”

From an Allen Institute for Artificial Intelligence (Ai2) Announcement:

Deep research is about building agentic systems that can plan, search, and synthesize information from diverse sources to produce in-depth, well-attributed answers to complex questions. Done well, these capabilities could accelerate scientific discovery and allow students and professionals to explore unfamiliar domains with expert-level rigor, backed by transparent citations and reasoning traces.

[Clip]

To address these challenges, we introduce Deep Research Tulu (DR Tulu), the first open model that is directly trained for long-form deep research tasks through an end-to-end training recipe that combines supervised fine-tuning (SFT) and Reinforcement Learning with Evolving Rubrics (RLER).

[Clip]

Training agents to handle long-form, tool-intensive research workflows is difficult: models must integrate evidence across many sources while justifying each step, meaning that there isn’t a single ‘correct’ answer to verify against. Evaluating long-form responses is intrinsically challenging—the criteria for quality are often underspecified, static rubrics can’t capture the full range of response quality, and LM judges must keep pace with a rapidly evolving, incredibly vast body of world knowledge.

[Clip]

To make this work reproducible and extensible, we’re releasing all the components of DR Tulu: the full training recipe and code, our DR Tulu-8B checkpoint, our RLER rubric generation and training framework, and dr-agent-lib, an open research library built on MCP with multi-tool search, asynchronous tool calling, and an accompanying evaluation suite.

Conducting deep research in steps

Our core challenge was building a model that can flexibly adapt its depth of response, switching between concise answers and multi-paragraph reports depending on a question’s complexity. Deep research is inherently dynamic: as the model searches and acquires new information, the space of possible outputs evolves during execution, making fixed rubrics inadequate. Add to this that evaluating multi-source synthesis requires verifying that claims are faithfully grounded across multiple documents and reasoning steps—far harder than checking short-form answers.
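The "evolving rubrics" idea described above can be sketched in a few lines: instead of scoring a response against a fixed checklist, the rubric set grows with the evidence the agent actually retrieves. This is a hypothetical illustration only — the function names (`derive_rubrics`, `judge`, `evolving_rubric_reward`) and the stub logic are assumptions for exposition, not the actual RLER implementation; in the real system an LM would propose rubrics and judge responses.

```python
def derive_rubrics(question: str, doc: str) -> list[str]:
    """Stub: a real system would have an LM propose quality criteria
    from the retrieved document. Here we just require the response to
    mention the document's leading term."""
    key_term = doc.split()[0]
    return [f"mentions {key_term}"]

def judge(response: str, rubric: str) -> float:
    """Stub judge: 1.0 if the rubric's target phrase appears."""
    target = rubric.removeprefix("mentions ")
    return 1.0 if target.lower() in response.lower() else 0.0

def evolving_rubric_reward(question, response, evidence, base_rubrics):
    # Start from rubrics written before rollout...
    rubrics = list(base_rubrics)
    # ...then extend them with criteria derived from the evidence the
    # agent retrieved, so the rubric set tracks the evolving answer
    # space instead of staying fixed.
    for doc in evidence:
        rubrics.extend(derive_rubrics(question, doc))
    # Average per-rubric judgments into a scalar reward.
    scores = [judge(response, r) for r in rubrics]
    return sum(scores) / len(scores) if scores else 0.0
```

The key design point is that the reward signal depends on the rollout itself: two trajectories answering the same question against different evidence are scored against different rubric sets.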

At inference time, DR Tulu runs an auto-search loop and chooses between three actions:

  • think for internal planning
  • call_tool to invoke a search or browsing tool
  • answer to produce a final response

Inside the final answer, the model wraps claims in citation tags that link back to supporting sources.
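The three-action loop can be sketched as a simple dispatcher. The action names (`think`, `call_tool`, `answer`) come from the post; the model and tool interfaces, the history format, and the citation-tag syntax shown in the comment are assumptions for illustration, not DR Tulu's actual API.

```python
def auto_search_loop(model, tools, question, max_steps=20):
    """Run the think / call_tool / answer loop until the model answers
    or the step budget runs out. `model.next_action` is an assumed
    interface returning (action, payload); `tools` maps names to callables."""
    history = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        action, payload = model.next_action(history)
        if action == "think":
            # Internal planning: kept in context, not shown to the user.
            history.append({"role": "think", "content": payload})
        elif action == "call_tool":
            # e.g. ("google_search", {"query": "..."})
            name, args = payload
            result = tools[name](**args)
            history.append({"role": "tool", "name": name, "content": result})
        elif action == "answer":
            # Final response; claims would be wrapped in citation tags
            # (e.g. <cite id="3">...</cite>) pointing back at sources.
            return payload
    return None  # step budget exhausted without an answer
```

A model wrapper that emits a scripted sequence of actions is enough to exercise the loop end to end without any real search backend.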

When given a research question, the model begins by planning what information it needs and which sources to consult. It then iteratively searches and gathers evidence from multiple places, synthesizing findings, identifying gaps, and refining its strategy based on what it learns.

Research questions demand varied information sources: scientific research benefits from scholarly databases, healthcare queries need authoritative medical sources, and general inquiries work best with broad web search. To support this diversity, we built our inference system using the Model Context Protocol (MCP), treating tools as swappable components. In our default setup, DR Tulu has access to three search tools:

  • google_search, which returns top web snippets
  • web_browse, which extracts full-page text from URLs
  • paper_search, which retrieves relevant paragraphs from open-access research papers

This MCP-based design lets you bring your own tools – API search, local retrieval and reranking, site-specific readers, or domain-specific databases – via a unified protocol. Our agent library, dr-agent-lib, provides a programmable MCP-based frontend for experimenting with prompt templates, multi-stage workflows, and fine-grained tool-calling strategies, without retraining the underlying model.
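The "bring your own tools" design can be illustrated with a minimal registry that exposes every tool behind one calling convention, so a local retriever or domain database slots in where a web search tool was. This mimics the shape of an MCP-style tool registry only; it is not the actual dr-agent-lib API, and the lambda tool bodies are placeholders.

```python
from typing import Callable, Dict

class ToolRegistry:
    """Registers named tools behind a single call interface, so the
    agent can swap backends without retraining."""
    def __init__(self):
        self._tools: Dict[str, Callable[..., str]] = {}

    def register(self, name: str, fn: Callable[..., str]) -> None:
        self._tools[name] = fn

    def call(self, name: str, **kwargs) -> str:
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](**kwargs)

# The default DR Tulu setup exposes three search tools; a site-specific
# reader or domain-specific database could register under the same interface.
registry = ToolRegistry()
registry.register("google_search", lambda query: f"[snippets for {query!r}]")
registry.register("web_browse", lambda url: f"[full text of {url}]")
registry.register("paper_search", lambda query: f"[paragraphs about {query!r}]")
```

Because the agent only ever sees `registry.call(name, **args)`, swapping `google_search` for a local BM25 index is a one-line registration change.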

Resources

Demo | Models & Data | Code | Technical Report

Learn More, Read the Complete Post, View Video (about 2900 words)

MORE Research Tools From Ai2

  • Allen Institute for Artificial Intelligence (Ai2) Announces Launch of Asta: Accelerating Science Through Trustworthy Agentic AI

Filed under: Academic Libraries, Data Files, Journal Articles, Libraries, News, Open Access, Reports

About Gary Price

Gary Price (gprice@gmail.com) is a librarian, writer, consultant, and frequent conference speaker based in the Washington D.C. metro area. He earned his MLIS degree from Wayne State University in Detroit. Price has won several awards, including the SLA Innovations in Technology Award and Alumnus of the Year from the Wayne State University Library and Information Science Program. From 2006 to 2009 he was Director of Online Information Services at Ask.com.




© 2026 Library Journal. All rights reserved.
