The following preprint was posted on bioRxiv earlier today.
Peer review of a research product varies widely depending on the publishers and platforms involved. As scholarly publishing is disrupted by innovation, peer review processes become more heterogeneous, placing an increasing burden on researchers to understand how they can communicate their scholarship. New ways to model such processes and to increase transparency, trust, and experimentation in scholarly publishing are needed. Many are emerging, but they tend to focus on the needs of creators rather than those of readers, funders, and the scholarly publishing ecosystem as a whole. They often do not represent editorial practices in ways that can be reliably aggregated, surfaced, and queried; are frequently limited to traditional peer review processes; and cannot capture the full range of editorial practices and events needed to accommodate alternative publication, review, and curation models. To support researchers in a world of experimentation in scholarly publishing, we propose a machine-readable, extensible, and discoverable framework for representing and surfacing review and editorial processes. Working with a Technical Committee of interested parties and employing a modified Delphi Method, we developed initial guiding principles and proposals toward an object-level editorial metadata framework compatible with a broad range of possible futures for scholarly publishing. We present the results of this process, along with a proposal and example use cases for DocMaps, a framework for representing object-level assertions.
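To give a sense of what a machine-readable, object-level editorial assertion could look like, here is a minimal sketch in Python. The field names (`subject`, `assertion`, `asserted_by`, and so on) and the example DOI are illustrative assumptions only, not the actual DocMaps schema described in the preprint.

```python
import json

def make_assertion(doi, status, asserted_by, date):
    """Build a hypothetical machine-readable claim about one step in a
    document's editorial history. Field names are illustrative, not the
    real DocMaps vocabulary."""
    return {
        "subject": {"doi": doi},          # the research object the claim is about
        "assertion": {"status": status},  # e.g. "peer-reviewed", "under-review"
        "asserted_by": asserted_by,       # the party making the claim
        "date": date,
    }

record = make_assertion(
    "10.1101/example.0000",    # placeholder DOI, not a real preprint
    "peer-reviewed",
    "Example Review Service",
    "2021-07-15",
)

# Serializing to JSON is what makes such assertions aggregatable and
# queryable by third parties, independent of any one publisher's platform.
print(json.dumps(record, indent=2))
```

The point of a shared representation like this is that readers, funders, and aggregators can consume editorial events from many platforms without parsing each publisher's bespoke format.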
This project was supported with funding from the Howard Hughes Medical Institute.
Direct to Full Text (July 15, 2021)
70 pages; PDF.