Elsevier Impact Factors Compiled in 2014 for Journals in Exercise and Sports Medicine and Science
Will G Hopkins
Sportscience 19, 72-81, 2015 (sportsci.org/2015/wghif.htm)
This article represents my annual summary of the latest impact factors of journals in the discipline of sport and exercise medicine and science. This year I have switched from the Thomson-Reuters impact factor to the equivalent Elsevier factor, the impact per publication (IPP, their abbreviation), derived from the Scopus database. Elsevier allows free access to its citation statistics (at Journal Metrics), and the statistics are available in a convenient spreadsheet with all previous years included, whereas access to the Thomson-Reuters factors is awkward and requires an institutional subscription. Thomson-Reuters also restricted the amount of information I could show, so I had to resort to inequalities for some factors and color coding to show changes.
The Elsevier impact factor is calculated from citations in a wider range of journals than that of Thomson-Reuters, which will tend to make its factor higher than Thomson-Reuters'. On the other hand, the Elsevier factor is calculated as the citations per article in the given journal over three years rather than Thomson-Reuters' two years, which will tend to make the Elsevier factor smaller (because citation rates were on average lower three years ago than in the two more recent years). Earlier this year I compared the two factors for the journals in our discipline compiled from citations in journals published in 2013. In scatter plots it was clear that the comparison was better performed with raw data than with log-transformed data. In the plots, the values for Exercise and Immunology Review and International Journal of Epidemiology were clearly off the trend, with values that were much higher for Thomson-Reuters than for Elsevier. After deletion of these two outlier journals, the Elsevier factor was a little higher than the Thomson-Reuters factor (by 0.17 ± 0.27, mean ± SD). The correlation between the two factors was 0.98, and the standard error of the estimate for predicting an Elsevier value from the Thomson-Reuters factor was 0.27 (so the equivalent Elsevier factor for a given Thomson-Reuters factor differs typically by ±0.27 from journal to journal, as shown also by the standard deviation of the difference scores). My conclusion is that there is little difference between the Elsevier and Thomson-Reuters impact factors, so we should use the Elsevier impact factor from now on. Table 1 shows the impact factors (the IPPs) for the last three years for journals in exercise science and sport science, along with some more generic journals we sometimes publish in.
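The paired comparison above amounts to simple summary statistics on the two sets of factors. Here is a minimal sketch in Python, using made-up factors for a handful of hypothetical journals (not the real 2013 data):

```python
# Sketch of the paired comparison of two impact factors.
# All values below are illustrative, not the published data.
import math

tr  = [1.2, 2.0, 2.8, 3.5, 4.1, 0.9, 1.7]   # hypothetical Thomson-Reuters factors
elv = [1.4, 2.1, 3.1, 3.6, 4.4, 1.0, 1.9]   # hypothetical Elsevier IPPs

n = len(tr)
mean = lambda xs: sum(xs) / len(xs)
sd = lambda xs: math.sqrt(sum((x - mean(xs))**2 for x in xs) / (len(xs) - 1))

# Mean and SD of the difference scores (Elsevier minus Thomson-Reuters)
diffs = [e - t for e, t in zip(elv, tr)]
print(f"difference: {mean(diffs):.2f} +/- {sd(diffs):.2f}")

# Pearson correlation between the two factors
mx, my = mean(tr), mean(elv)
sxy = sum((x - mx) * (y - my) for x, y in zip(tr, elv))
r = sxy / ((n - 1) * sd(tr) * sd(elv))
print(f"correlation: {r:.2f}")

# Standard error of the estimate for predicting Elsevier from Thomson-Reuters,
# i.e. the residual SD about the least-squares line (n-2 degrees of freedom)
see = sd(elv) * math.sqrt(1 - r**2) * math.sqrt((n - 1) / (n - 2))
print(f"SEE: {see:.2f}")
```

With the real data these three statistics were 0.17 ± 0.27, 0.98, and 0.27 respectively.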
Like Thomson-Reuters, Elsevier produces several citation indices. I was particularly interested in an Elsevier index that Thomson-Reuters does not produce, the source-normalized impact per paper (SNIP). In subject areas with less research activity, impact factors are lower, because there are fewer papers citing related papers. The SNIP is supposed to adjust for such differences between disciplines, thereby allowing a proper comparison of the impact of such journals as Archives of Budo and Medicine and Science in Sports and Exercise. The adjustment uses the length of the reference lists in the articles citing articles in the given journal. This approach is obviously a bit crude, considering that some journals limit the length of their reference lists, but it's probably better than nothing. The resulting SNIP looks just like the usual impact factor, and on average it has the same value across the entire database of scientific journals.
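As a toy illustration of the normalizing idea only (not Elsevier's actual SNIP algorithm, and with invented numbers): a journal's raw citations per paper get scaled up when the papers citing it come from a field with short reference lists, because short lists mean fewer citations to go around:

```python
# Toy illustration of source normalization (NOT the real SNIP formula).
# All numbers are hypothetical.
raw_ipp = 2.4                          # hypothetical citations per paper
citing_ref_lengths = [12, 18, 9, 15]   # reference-list lengths of the citing papers
database_mean_refs = 30.0              # hypothetical database-wide average list length

# Fields whose citing papers carry short reference lists have less
# "citation potential", so the raw factor is scaled up accordingly.
field_citation_potential = sum(citing_ref_lengths) / len(citing_ref_lengths)
snip_like = raw_ipp * database_mean_refs / field_citation_potential
print(f"{snip_like:.2f}")
```

A journal in a sparsely citing field (average list length 13.5 vs a database average of 30) would thus have its factor more than doubled by such an adjustment.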
I have investigated the relationship between the SNIP and the usual impact factor (Elsevier's IPP) for this year's data. In scatterplots it was obvious that the relationship was more uniform after log transformation of both indices, and there were no outliers. Back-transformed means and factor SDs for the IPP and the SNIP were 1.1 ×/÷ 2.6 and 0.8 ×/÷ 2.1, respectively, so the usual IPP is slightly higher and has somewhat more scatter than the new SNIP. The correlation between the two log-transformed measures was 0.88 (0.92 when I repeated the analysis with the 2013 data). At first I thought this correlation was too high for the SNIP to convey anything really different from the IPP, but I was wrong: when the journals are ranked by the SNIP, it's obvious that the IPPs are somewhat scrambled, as shown in Table 2. You can also download the spreadsheet sorted by IPP for comparison. (More work needs to be done on the relationship between the correlation coefficient of two variables and the comparability of their ranks, both for journal metrics and for measures of athletic performance.)
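A "factor SD" is the back-transformed SD of the log-transformed values, so that mean ×/÷ factor-SD spans the typical range of the raw values. A minimal Python sketch with illustrative impact factors (not the real IPPs):

```python
# Back-transformed (geometric) mean and factor SD of a set of
# impact factors. Values are illustrative only.
import math

ipp = [0.3, 0.6, 1.0, 1.5, 2.4, 4.0]   # hypothetical impact factors
logs = [math.log(x) for x in ipp]

mean_log = sum(logs) / len(logs)
sd_log = math.sqrt(sum((v - mean_log)**2 for v in logs) / (len(logs) - 1))

geo_mean = math.exp(mean_log)    # back-transformed mean (= geometric mean)
factor_sd = math.exp(sd_log)     # back-transformed SD: multiply/divide by this

print(f"{geo_mean:.1f} x/: {factor_sd:.1f}")
```

Note that the factor SD is dimensionless and always greater than 1; a value of 2.6 means the typical journal sits within a factor of 2.6 either side of the mean.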
It's disappointing that the correlation between the IPP and the SNIP isn't lower or even zero: why should a top sport sociology journal have any less relative impact than a top sports injury journal? The academics are surely comparable, so why not their journals? I suspect that the normalizing process isn't working properly, either because of the limit on the size of the reference list in journals in the more active fields, or more likely because of the principle of cumulative advantage from cumulative inequality theory, according to which "there's nothing surer, the rich get richer and the poor get poorer" (a 1920s song) in social and other dynamic systems of agents or attractors. It is likely and regrettable that articles providing rankings of journal impact factors serve only to accelerate the divergence of the factors.
For an explanation and critique of the usual impact factor, including the IPP, see an earlier article in this series. Read subsequent articles for explanations of related statistics and publication issues, including the page-rank, cited half-life and immediacy indices, the H (Hirsch) index, post-publication peer review, peer-reviewed proposals, article-influence scores, and institutional research archives.
Thomson Reuters' impact factor has been the most prominent metric for peer-reviewed publications in recent years. It’s no surprise that other publishers and scientific enterprises are developing their own metrics. The number of different metrics appearing in the online scientific community probably reflects publishing houses seeking to maximize competitive and commercial opportunities, and the needs of authors, editors, and institutions (particularly universities) for evidence-based measures of research impact.
Most authors and readers appreciate that citation counts of a researcher's publications are a better measure than the impact factors of the journals in which the researcher publishes, which are measures only of the average impact of all the articles in the journals. After all, relatively unimportant articles can get published in top-ranked journals (much to the chagrin of authors whose work has been rejected), while truly original and ultimately highly cited work can appear in low-ranked journals. Even if the SNIP can be improved to reduce the bias arising from research activity or cumulative advantage, it will not address this shortcoming of journal impact factors as measures of a researcher's impact. We will need an individual citation statistic adjusted like the SNIP if we are to evaluate the productivity of individual researchers in a fair manner.
Although subscriber-driven models have their place, a readily accessible free service that provides useful metrics for individuals and institutions will attract attention. It appears from Will Hopkins’ analysis that there is little difference between the subscriber (Thomson-Reuters) and free (Elsevier) traditional journal impact factors. Sports scientists should keep an eye on the evolution of publication metrics and of course the annual ranking lists of sport-science journals.