Monday, June 20, 2011

Feature Paper: Fersht A (2009) The most influential journals: Impact Factor and Eigenfactor. Proceedings of the National Academy of Sciences (PNAS), 106(17):6883-6884. DOI: 10.1073/pnas.0903307106

Author Abstract: Progress in science is driven by the publication of novel ideas and experiments, most usually in peer-reviewed journals, but nowadays increasingly just on the internet. We all have our own ideas of which are the most influential journals, but is there a simple statistical metric of the influence of a journal?

Note to Readers: Follow the links above for the author's email, the full article text, or the publishing scientific journal. Direct quotations from the author appear in my review in quotes.
Review: This week's review tackles the topic of influential scientific journals. As the opening lines of the paper correctly assert, scientific progress is driven by the publication of ideas and discoveries; performing science in isolation is not enough. If no one can read or hear about your discoveries or ideas, it is as if they were never made. This paper is quite short (just over one page in length) and can be accessed through a free online scientific journal that also happens to be one of the more influential journals (by the metrics discussed in the paper, the second most influential).
The author discusses the two most common metrics used to measure the influence of scientific journals. In theory, if an idea or discovery is published in a journal that is both widely read and well respected, it will reach the largest scientific audience possible. The author notes that there are at least "39 scales" for measuring the impact of scientific ideas, but two are most commonly used.
The first metric is the "Impact Factor" (IF; methods can be found at www.thomsonreuters.com/products_services/scientific/Journal_Citation_Reports). The Impact Factor represents "the average number of citations in a year given to those papers in a journal published in the previous 2 years." One problem with this method is that a popular paper tends to become even more popular over time: when authors want to cite background material in their own papers, they are more likely to pick an already influential paper if given a choice. Nevertheless, a paper cannot be pushed to a high citation count artificially; scientific peers must decide that it is influential. A paper published in an influential journal is more likely to be cited, since some readers will consider it "good" almost by default, but being cited by many remains a democratic process.
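To make that definition concrete, here is a minimal sketch of the two-year IF calculation in Python. The function name and the numbers are hypothetical, chosen only for illustration; the real Journal Citation Reports calculation also involves deciding which published items count as "citable," which this sketch ignores.

    def impact_factor(citations_this_year, citable_items):
        # Two-year Impact Factor: citations received this year to papers
        # published in the previous two years, divided by the number of
        # citable items published in those two years.
        return citations_this_year / citable_items

    # Hypothetical journal: 400 citable items published over the previous
    # two years drew 3,200 citations this year.
    print(impact_factor(3200, 400))  # 8.0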
The second metric is the "Eigenfactor" (methods at www.eigenfactor.org/methods.htm). Instead of concentrating on individual articles or authors, the Eigenfactor ranks whole journals by their influence, with a citation from a highly influential journal counting for more than one from an obscure journal. The author shows that Eigenfactor scores correlate strongly with a journal's total citations. By the Eigenfactor method, four journals have far more scientific influence than any others: Nature, the Proceedings of the National Academy of Sciences, Science, and the Journal of Biological Chemistry.
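At its core, the Eigenfactor is an eigenvector computation over the journal citation network, in the same spirit as Google's PageRank: a journal's score is the citation-weighted sum of the scores of the journals that cite it. The published algorithm adds refinements (a five-year citation window, removal of journal self-citations, a teleportation term), so the sketch below shows only the basic idea, using a made-up three-journal citation matrix:

    import numpy as np

    # Toy citation matrix: entry [i, j] is the fraction of journal j's
    # outgoing citations that point to journal i (each column sums to 1).
    # The three journals A, B, and C are hypothetical.
    C = np.array([
        [0.0, 0.5, 0.6],
        [0.7, 0.0, 0.4],
        [0.3, 0.5, 0.0],
    ])

    # Power iteration: repeatedly feed the scores back through the
    # citation matrix until they settle on the leading eigenvector.
    scores = np.full(3, 1 / 3)
    for _ in range(100):
        scores = C @ scores
        scores /= scores.sum()

    print(dict(zip("ABC", scores.round(3))))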
The author points out that "all journals have a spread of citations, and even the best have some papers that are never cited plus some fraudulent papers and some excruciatingly bad ones. So, it is ludicrous to judge an individual paper solely on the IF of the journal in which it is published." This is an important point, because institutions routinely use publication records to evaluate scientific employees, and those who publish in the big four journals are likely to be viewed much more favorably, simply by virtue of where they published, than similarly qualified peers.
The author points out a third metric, which he describes as "the least evil": the h-index, "which ranks the influence of a scientist by the number of citations to a significant number of his or her papers; an h of 100 would mean that 100 of their publications have been cited at least 100 times each."
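Given a list of per-paper citation counts, the h-index is straightforward to compute. A minimal sketch, with an invented citation record:

    def h_index(citation_counts):
        # Largest h such that at least h papers have been cited
        # at least h times each.
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical record of five papers and their citation counts.
    print(h_index([10, 8, 5, 4, 3]))  # 4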
The author finishes the article with the caution that "it is important not to rely solely on one standard," important words to live by in any profession.
Next paper, we'll examine a scientific effort to implement an "Eigen"-style metric, similar to the one mentioned in this article, for determining the impact of extinctions on ecosystem collapse.
