Over the past decade we’ve seen a great deal of activity around the development of research impact metrics. This is generally a good thing, as more and varied metrics help to provide a broader perspective on the impact of researchers and scholars.
However, as the use of research performance metrics expands within academia worldwide, it is critical that these metrics are used appropriately.
There is always the danger of using a metric in simply the wrong way, misrepresenting what is being measured and potentially damaging the credibility of the metric itself. There is no better example than the Journal Impact Factor (JIF). Many of the criticisms of this metric miss the target because they are spawned by inappropriate application of the JIF, not by the characteristics of the JIF itself.
Using a metric without an understanding of its source – the data that lies behind it and what this means for its use – can cause serious misrepresentation of research impact. There is no perfect research impact metric, but there are many useful metrics out there – they simply need to be used the right way, with an appropriate understanding of what they communicate.
Elements provides the benefit of working with a variety of sources of metrics – those drawn from Web of Science (Core Collection), Scopus, Altmetric, and more. The following is by no means a comprehensive primer on these metrics, just a bit of information that may help in understanding the metrics displayed within Elements and therefore assist in their appropriate use. There are of course more “points of strength” for each metric than listed here, and more “things to think about” for each. This is just a fundamental place to begin.
The Times Cited count.
If an institution subscribing to Elements has the proper rights to Web of Science and/or Scopus data, the citation count or “times cited” count for a research article, drawn from one or both of these resources, will be listed.
Web of Science Times Cited:
Points of Strength — Multidisciplinary coverage of the research literature, with deep backfiles and inclusion of cited references from all years of content coverage dating back to about 1900. This provides the ability to capture cites to articles immediately from the time of their publication.
Things to Think About — Web of Science employs a journal evaluation and selection policy that is fairly rigorous. At this time over 12,000 journals are indexed, with significant proceedings and books coverage as well. While this results in a collection of high quality content, some would say it is limiting in terms of coverage of an appropriate volume of the world’s peer-reviewed journal literature – “appropriate volume” specifically within the context of generation of research impact metrics. Quality vs. quantity, always an interesting debate.
Scopus Times Cited:
Points of Strength — Where indexing of actively published titles is concerned, Scopus’s multidisciplinary coverage of over 20,000 peer-reviewed journals provides for a massive citation index. There is significant book and proceedings coverage as well. So – lots of content available for the provision of cited references, which in turn generate the times cited count.
Things to Think About — Though Elsevier has announced a Cited References Expansion Program that will change things in the near future, as of this writing (October 2014) Scopus does not capture cited references within articles published prior to 1996. This means that for a paper published in 1990, for example, the Scopus times cited count would not include cites to the paper that occurred during the first five years following its publication. This is very significant, as the frequency of citation of a research article, in the hard sciences in particular, is typically most intense during the first five years post-publication. Again, stay tuned to Elsevier as they plan to rectify this.
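To see what such a cutoff means in practice, here is a minimal sketch with invented citation years (not real data for any article) showing how a pre-1996 cited-reference cutoff shrinks a times cited count for a 1990 paper:

```python
# Toy illustration (invented data): how a cited-reference cutoff
# changes a "times cited" count for a paper published in 1990.
# Each entry is the publication year of an article citing the paper.
citing_years = [1990, 1991, 1992, 1993, 1994, 1995, 1996, 1998, 2001, 2005]

full_count = len(citing_years)                            # all citing years captured
post_1996 = len([y for y in citing_years if y >= 1996])   # cutoff applied

print(full_count)  # 10 -> a count drawing on all years of cited references
print(post_1996)   # 4  -> a count missing the intense early-citation window
```

The six “lost” cites here all fall in the first five years after publication – exactly the window the text above identifies as the most citation-intense for hard-science papers.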
Web of Science Core Collection and Scopus: additional comments
Times Cited counts for the same article may be quite similar within Web of Science and Scopus, or quite different. This is a reflection of the content and indexing practices within each resource. Let’s look at an influential life sciences paper published in the late 1980s, such as:
Nitric Oxide: A Cytotoxic Activated Macrophage Effector Molecule, Hibbs, J.B. et al., Biochemical and Biophysical Research Communications, v. 157, iss. 1, p. 87, 1988
At this time you will find the Times Cited count for this article in Web of Science is much higher than that found in Scopus because Web of Science has captured cited references for all years of coverage while Scopus, at this time, has not captured this data for pre-1996 content.
However, if we take a much more recently published article in the very same journal, such as:
microRNA miR-27b Impairs Human Adipocyte Differentiation and Targets PPAR Gamma, Karbiener, M. et al., Biochemical and Biophysical Research Communications, v. 390, iss. 2, p. 247, 2009
At this time you will find a higher Times Cited count within Scopus for this article, as Scopus covers a much higher volume of actively published titles – the sources of citations to the article – than Web of Science Core Collection.
Keep in mind the journal coverage philosophies of each database – Web of Science is more selective than Scopus, producing metrics from a more selective collection of content, or, some would say, a more limited collection of content. “Quality” is often in the eye of the beholder, so it depends on the standpoint of the user.
Of course, one should only compare Times Cited counts of articles within the same discipline, taking into account the length of time the articles have been published and available to be seen/read and cited.
Altmetric is the producer of the Altmetric score. Symplectic partners with Altmetric to display this score for all research articles resident within an instance of Elements. Full disclosure – Symplectic and Altmetric are both portfolio companies of Digital Science ( http://www.digital-science.com ), a business division of Macmillan Science and Education.
Points of Strength — As we all know, use of social media is expanding dramatically, and the international scholarly community is no exception. Scholarly communication via social media may not be the “formal” method of scholarly communication, but it is now mainstream, and this needs to be acknowledged.
Altmetric (http://www.altmetric.com/) is focused on the attention (mentions) a research paper receives online – via news and social media sites, blogs, tweets, etc. Mentions may appear within a scholarly/research site or context, but they are not limited to this. More and more there is emphasis on the ability to track the impact of research on society, and metrics such as Altmetric certainly can support this effort.
Additionally, Altmetric provides an almost immediate measurement, tracking mentions of papers daily. Let’s not forget the time it can take for a paper to be read, benefited from, and subsequently cited in another published paper – it can be quite considerable.
Things to think about — Spend time on the Altmetric website and come to understand what the metric is and does. Its visual presentation is important to note.
As stated within the Altmetric website, the Altmetric score is a quantitative measure of this attention, derived from three main factors: the volume of online mentions produced by unique individuals; the sources of those mentions (news outlets, blogs, tweets, etc.), with each mention contributing a different base amount to the final score depending on its source; and the authors of the mentions – how often each author talks about scholarly articles, whether there is any bias towards a particular journal or publisher, and who the audience is.
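As a purely illustrative sketch of the idea of source-weighted attention – the weights and the credibility multiplier below are invented for this example and are not Altmetric’s actual (proprietary) base amounts or algorithm – such a score might be computed like this:

```python
# Toy source-weighted attention score. Weights are INVENTED for
# illustration; they are not Altmetric's real base amounts.
SOURCE_WEIGHTS = {"news": 8.0, "blog": 5.0, "tweet": 1.0}

def toy_attention_score(mentions):
    """mentions: list of (source_type, author_credibility) pairs, where
    author_credibility is a 0-1 multiplier standing in for how often
    that author discusses scholarly work for a relevant audience."""
    return sum(SOURCE_WEIGHTS[source] * credibility
               for source, credibility in mentions)

mentions = [("news", 1.0), ("blog", 0.8), ("tweet", 0.5), ("tweet", 1.0)]
print(toy_attention_score(mentions))  # 8.0 + 4.0 + 0.5 + 1.0 = 13.5
```

The point of the sketch is simply that two papers with the same raw number of mentions can earn very different scores, because a news story counts for more than a tweet and a credible, on-topic author counts for more than a bot.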
This is a departure from traditional research impact metrics, so understanding it requires an open mind as well as a real knowledge of how a large part of our world communicates these days.
A quick summary
In the world of research impact metrics, citations are the primary currency. There is no more fundamental form of measurement of research impact than that of a citation to a published work. A cite to a research article is intended to be clear, formal recognition of an intellectual debt to that work, and therefore a tremendously important recognition.
Yet other information needs to be taken into consideration in the assessment of research impact at the article level. At the beginning of this little commentary I stated that more and varied metrics help to provide a broader perspective on researchers/scholars and their published work. With the availability of information that provides insight into very real alternative measures of impact, it would be foolish to intentionally limit the view to citation impact alone.
Of course, not all cites to an article are necessarily “good” cites. Negative cites occur, though they account for a relatively small volume of cites in the big picture of the research literature. In turn, where Altmetric is concerned, not all mentions may be deemed “good”, depending on the nature of the mention itself. For both types of metrics, looking below the surface can be important.
About the Author
Jeff is an information science specialist and one of our corporate refugees, having spent 25 years working in the scientific and scholarly information industry. When he isn’t sharing his knowledge with us, he is on the road visiting institutions up and down the west coast of the USA to find out what makes them tick. Check out his profile.