As one who has written about the use of citation counts in ranking law schools and law faculties (Ranking Law Schools: Using SSRN to Measure Scholarly Performance, 81 Ind. L.J. 83, 92-95, 117-20 (2006)), and who hosts the leading law school rankings web site, which includes citation count measures (Leiter’s Law School Rankings), I was interested to read this article in today’s Inside Higher Ed: The Tyranny of Citations, by Philip G. Altbach:
The analysis of citations — examining what scholars and scientists publish for the purpose of assessing their productivity, impact, or prestige — has become a cottage industry in higher education. And it is an endeavor that needs more scrutiny and skepticism. This approach has been taken to extremes both in the assessment of individuals and in measuring the productivity and influence of entire universities or even academic systems. Pioneered in the 1950s in the United States, bibliometrics began as a tool for tracing research ideas, the progress of science, and the impact of scientific work. Developed for the hard sciences, it was later extended to the social sciences and humanities.
Citation analysis, relying mostly on the databases of the Institute for Scientific Information, is used worldwide. Increasingly sophisticated bibliometric methodologies permit ever more fine-grained analysis of the articles included in the ISI corpus of publications. The basic idea of bibliometrics is to examine the impact of scientific and scholarly work, not to measure its quality. The somewhat questionable assumption is that if an article is widely cited, it has had an impact and is therefore also of high quality. Quantity of publications is not the main criterion: a researcher with one widely cited article may be considered influential, while a scholar with many uncited works is seen as less so.