The following article (preprint) was recently made available on arXiv.
Amsterdam School of Communication Research (ASCoR), University of Amsterdam
Centre for Science and Technology Studies CWTS, Leiden University
Division for Science and Innovation Studies, Administrative Headquarters of the Max Planck Society
Bibliometric indicators such as journal impact factors, h-indices, and total citation counts are algorithmic artifacts that can be used in research evaluation and management. These artifacts have no meaning by themselves, but receive their meaning from attributions in institutional practices.
We distinguish four main stakeholders in these practices: (1) producers of bibliometric data and indicators; (2) bibliometricians who develop and test indicators; (3) research managers who apply the indicators; and (4) the scientists under evaluation, who may have competing career interests.
These different positions can lead to divergent and sometimes conflicting perspectives on the meaning and value of the indicators. The indicators can thus be considered boundary objects that are socially constructed in translations among these perspectives. This paper proposes an analytical clarification by listing an informed set of (sometimes unsolved) problems in bibliometrics. This list can also shed light on the tension between simple but invalid indicators that are widely used (e.g., the h-index) and more sophisticated indicators that are not, or cannot be, used in evaluation practices because they are not transparent to users, cannot be calculated, or are difficult to interpret.
Direct to Full Text Article (40 pages; PDF)