June 19, 2021

Research Article: “Evaluating FAIR-Compliance Through an Objective, Automated, Community-Governed Framework” (Preprint)

The following research article (preprint) was posted on bioRxiv.

Title

Evaluating FAIR-Compliance Through an Objective, Automated, Community-Governed Framework

Authors

Mark D. Wilkinson
Center for Plant Biotechnology and Genomics UPM-INIA, Madrid, Spain

Michel Dumontier
Institute of Data Science, Maastricht University, Maastricht, The Netherlands

Susanna-Assunta Sansone
Oxford e-Research Centre, Department of Engineering Science, University of Oxford, Oxford, UK

Luiz Olavo Bonino da Silva Santos
GO FAIR International Support and Coordination Office, Leiden, The Netherlands; Leiden University Medical Center, Leiden, The Netherlands

Mario Prieto
Center for Plant Biotechnology and Genomics UPM-INIA, Madrid, Spain

Peter McQuilton
Oxford e-Research Centre, Department of Engineering Science, University of Oxford, Oxford, UK

Julian Gautier
Institute for Quantitative Social Science, Harvard University, Cambridge, USA

Derek Murphy
Institute for Quantitative Social Science, Harvard University, Cambridge, USA

Mercè Crosas
Institute for Quantitative Social Science, Harvard University, Cambridge, USA

Erik Schultes
GO FAIR International Support and Coordination Office, Leiden, The Netherlands; Leiden University Medical Center, Leiden, The Netherlands

Source

via bioRxiv
September 16, 2018
doi: 10.1101/418376

Abstract

With the increased adoption of the FAIR Principles, a wide range of stakeholders, from scientists to publishers, funding agencies and policy makers, are seeking ways to transparently evaluate resource FAIRness. We describe the FAIR Evaluator, a software infrastructure to register and execute tests of compliance with the recently published FAIR Metrics. The Evaluator enables digital resources to be assessed objectively and transparently. We illustrate its application to three widely used generalist repositories – Dataverse, Dryad, and Zenodo – and report their feedback. Evaluations allow communities to select relevant Metric subsets to deliver FAIRness measurements in diverse and specialized applications. Evaluations are executed in a semi-automated manner through Web forms filled in by a user, or through a JSON-based API. A comparison of manual versus automated evaluation reveals that automated evaluations are generally stricter, resulting in lower, though more accurate, FAIRness scores. Finally, we highlight the need for enhanced infrastructure such as standards registries, like FAIRsharing, as well as additional community involvement in domain-specific data infrastructure creation.
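For readers curious about the JSON-based API the abstract mentions, the short Python sketch below shows roughly what submitting a resource for automated evaluation might look like. The endpoint URL, payload fields, and response shape are illustrative assumptions only, not the Evaluator's documented interface; consult the full article and the project's own documentation for the actual API.

# Hypothetical sketch of calling a FAIR Evaluator-style JSON API.
# The URL and field names below are placeholders, not the real interface.
import json
import urllib.request

EVALUATOR_URL = "https://example.org/fair-evaluator/evaluations"  # placeholder endpoint

payload = {
    "resource": "https://doi.org/10.5061/dryad.example",   # GUID of the resource to test (example)
    "collection": "generalist-repository-metrics",          # hypothetical subset of FAIR Metrics
    "executor": "https://orcid.org/0000-0000-0000-0000",    # identity of whoever runs the evaluation
}

request = urllib.request.Request(
    EVALUATOR_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "Accept": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    result = json.load(response)

# Each Metric test would typically report a pass/fail outcome plus an explanation.
for test_name, outcome in result.items():
    print(test_name, outcome)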

Direct to Full Text Article
15 pages; PDF.