In 1830, Charles Babbage had an unusual idea. Exasperated by how little recognition science was getting in England, the computer pioneer and scientific provocateur suggested that quantifying authorship might be a way to identify scientific eminence.
Like many of Babbage’s radical ideas, this one persuaded almost nobody, but it eventually proved prophetic. Before the end of the century, listing papers and comparing publication counts had become a popular pursuit among scientific authors and other observers. Within a few decades, academic scientists had come to fear the creed of ‘publish or perish’.
Babbage’s suggestion to count authors’ papers drew various criticisms. One author performed the calculation for every fellow of the Royal Society in London, and showed that it was a terrible guide to scientific eminence. Another pointed out that “a far more satisfactory criterion” would have been “the value of those papers”.
In the 1960s, Eugene Garfield launched a radically different search tool, known as the Science Citation Index. He hoped that it might end the harmful culture of publish or perish by showing that some papers were more cited — and hence more valuable — than others.
Immediately, commentators warned that new measures based on citations would only make things worse, leading to a “highly invidious pecking order” of journals that could distort science. The journal impact factor made its public debut in 1972, soon after the US Congress called on the National Science Foundation to produce a better account of the benefits wrought by public funding of science. There is no doubt that the citation index changed the practices of scientific publishing, just as the rise of paper counting had earlier followed the introduction of the catalogue.