The following article (preprint) was recently shared on bioRxiv.
University of Alaska Fairbanks
A full evaluation of the impact of scientific publications needs to count readers who don’t generate citations, but this is very difficult to do. I set up a scholarly foraging experiment to estimate readership for my own body of work: a personal web site offering ‘reprint’ PDF files for download, with a web statistics program counting those downloads. (I made the site topically oriented rather than author-oriented to increase the potential audience.) Even so, human users are difficult to count: many bots, spiders, and other web programs increasingly mimic human behavior (e.g., with IP- and chrono-camouflage), and humans’ own behaviors are changing (e.g., reading papers online repeatedly rather than downloading them once). From four years of data, after culling both automated activity (a fascinating ecosystem in itself) and repeated online reads by the same humans, my papers receive ~4,000-8,000 downloads per year, about an order of magnitude more than the citations they receive annually. This is a conservative minimum, since these publications can also be obtained from other sources. On average, this body of work is being read at roughly a 10:1 download-to-citation ratio. At a more granular scale, downloads do not correlate with citations: some papers are downloaded at about a 100:1 ratio, and these turn out to be used exactly as they were meant to be (e.g., an instruction manual). How intensively someone reads a paper is of course highly variable, but this exercise gives an idea of how broad our publications’ impacts actually are. Citations alone don’t come close to measuring it.
Direct to Full Text (5 pages; PDF)
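The culling described in the abstract (dropping bot traffic, counting each human reader once, then comparing downloads to citations) can be sketched roughly as below. The log format, bot patterns, and function names are illustrative assumptions, not the author's actual analysis pipeline:

```python
import re
from collections import defaultdict

# Hypothetical sketch of the culling step: drop self-identified bots,
# count each (reader, paper) pair once, then compute per-paper
# download-to-citation ratios. Field layout and patterns are assumed.

BOT_PATTERN = re.compile(r"bot|crawler|spider|curl|wget", re.IGNORECASE)

def count_downloads(entries):
    """entries: iterable of (ip, user_agent, pdf_name) tuples."""
    seen = set()                  # (ip, pdf) pairs already counted
    downloads = defaultdict(int)  # pdf -> unique "human" downloads
    for ip, agent, pdf in entries:
        if BOT_PATTERN.search(agent):
            continue              # cull self-identified automated traffic
        if (ip, pdf) in seen:
            continue              # same reader re-opening the same file
        seen.add((ip, pdf))
        downloads[pdf] += 1
    return downloads

def ratios(downloads, citations):
    """Download-to-citation ratio per paper (None if uncited)."""
    return {pdf: (n / citations[pdf] if citations.get(pdf) else None)
            for pdf, n in downloads.items()}
```

Note that this only catches bots that announce themselves in the user-agent string; as the abstract notes, camouflaged bots that rotate IPs and mimic human timing require far more careful filtering.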