The story behind the metrics

You will find no livelier debate amongst friends and colleagues than one on the relative merits of the impact factor as a measure of an academic journal's importance. I discovered this, to my peril, at our IUCr Congress in Montreal this summer. Over a chilled drink on the roof terrace of the convention centre, I steered the conversation to metrics such as the impact factor, h-index and altmetric scores, and to how commonplace they have become in measuring the research output of our universities and institutions of learning. Opinions varied widely amongst my colleagues, and I heard views from both ends of the spectrum: some would abolish these metrics immediately, others would enhance them with more sophisticated add-ons and supplement them with entirely new measures.

The impact factor was probably the start of this crusade to provide a quantitative measure of performance in our research communities. It was devised by Eugene Garfield, the founder of the Institute for Scientific Information. Little did Garfield know that nearly 40 years later we would still be fiercely debating the merits of the impact factor as a measure of journal importance, and the effect that the policies of journal editorial boards have upon it.
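
For context, the standard two-year impact factor is a simple ratio: citations received in a given year to the items a journal published in the previous two years, divided by the number of citable items it published in those two years. A minimal sketch of the arithmetic (the journal and its numbers are invented for illustration):

    def impact_factor(citations_this_year, citable_items_prev_two_years):
        """Two-year impact factor: citations in year Y to items published
        in years Y-1 and Y-2, divided by citable items from those years."""
        return citations_this_year / citable_items_prev_two_years

    # Hypothetical journal: 1,200 citations in 2014 to the 400 citable
    # items it published in 2012-2013 gives an impact factor of 3.0.
    print(impact_factor(1200, 400))  # 3.0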

To add to this debate, Jorge Hirsch introduced his h-index to the scientific community in 2005. An author's h-index is the largest number h such that h of their papers have each received at least h citations. The h, standing for highly cited, has rightly or wrongly already become one of the most widely used metrics for research evaluation. In describing what the h-index is, it is probably easier to talk about what some scientists do not like about it (a short sketch of the calculation follows the list below):
  • It awards all co-authors of a paper the same credit.
  • The older the author, the higher their score (in most instances).
  • It is used across disciplines, which is not comparing like with like.
  • A book is given the same count as an article.
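
To make the definition concrete, here is a minimal Python sketch of the calculation itself (the citation counts are invented for illustration):

    def h_index(citations):
        """Return the largest h such that h papers have at least h citations each."""
        h = 0
        for rank, count in enumerate(sorted(citations, reverse=True), start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    # Five papers with these citation counts give an h-index of 4:
    # four papers have at least 4 citations each, but not five with 5.
    print(h_index([10, 8, 5, 4, 3]))  # 4

Notice how the index rewards a sustained body of well-cited papers rather than one spectacularly cited paper.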

These are just a few of the criticisms, so it comes as no surprise that many scientists are lobbying for changes to the h-index so that it emphasises different features. Whilst many of these proposed changes are themselves the subject of fierce debate, the whole landscape is set to change yet again with a new player in the field, known simply as alternative metrics, or altmetrics.

The growing interest in, and need for, alternative measures of scientific productivity has given rise to a different approach to analysis, and to a company, Altmetric, built around it.

Altmetrics exploits the growing number of scholars moving their day-to-day work to the web. Google Scholar, Mendeley and ResearchGate all claim high levels of user sign-up. This trend is also witnessed across social media sites such as Twitter and to a lesser extent Facebook and LinkedIn.

Altmetrics aims not only to expand our view of what impact looks like, but also to show what is making that impact. Understanding what is making the impact matters as much today as citation counts did in Eugene Garfield's world some 40 years ago. With the move to the web we can now examine how raw material such as data sets, programs and code is being consumed; we can drill down to the precise sections of a paper that are attracting attention; and comments posted on blogs, forums and social media platforms can be captured and tagged to papers, so that we begin to understand the true value of research output. In 2010 the altmetrics community published a manifesto that is well worth a read: http://altmetrics.org/manifesto/.
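
As an illustration of how such signals can be queried programmatically, the sketch below fetches the attention data for a single paper from Altmetric's public v1 REST API. The DOI is a placeholder, and the response fields shown ("title", "score") are assumptions based on the API's documented JSON output:

    import json
    import urllib.request

    # Query Altmetric's public v1 API for the attention data on one paper.
    # The DOI below is a placeholder; substitute a real one before running.
    doi = "10.1234/example-doi"
    url = "https://api.altmetric.com/v1/doi/" + doi

    with urllib.request.urlopen(url) as response:
        data = json.load(response)

    # "score" is the headline Altmetric attention score; other fields in the
    # response break the attention down by source (tweets, blogs, news, ...).
    print(data.get("title"), data.get("score"))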

Whatever conclusions you reach when this topic comes up at your next dinner party, I will wager we will still be talking about its merits 40 years from now.

Jonathan Agbenyega, IUCr Business Development Manager (ja@iucr.org)
10 November 2014