Evaluating editorial decisions

The prestige that a scientific journal enjoys within its target community can be assessed in many ways, all of them subjective to a degree. There is little argument that a prestigious journal is one that publishes many ground-breaking, influential articles; more controversial is how to measure these qualities for a scientific article. I think it is fair to say, however, that there is broad consensus that the importance of an article should be reflected by its citation record.

The rationale is clear: an article to which many refer over time is likely an important one, i.e., one that expounds significant new ideas, results, or methodologies, and one that sparks the interest of research groups across the world, in turn starting independent investigative efforts in the same area.
Is this a perfect measure? Of course not. There is no such thing.
There are countless examples of articles that could and should have been regarded as important by the above definition, but did not receive many citations, for a number of different reasons: the relative obscurity of the journal in which they were published, for example, or authors who were not themselves well known to the community. Conversely, there are illustrious cases of articles that receive many citations but whose impact is limited by any standard. The typical example is an early contribution to a field that is rapidly growing in interest, which garners a burst of citations for a short period of time, until its conclusions are proven wrong and it is superseded by other references; by then, however, it may already be relatively highly cited.

Still, in an imperfect world the number of citations comes as close as humanly possible to an objective measure of the value of a contribution; one ought not to forget that in science, as in any human activity, there is no such thing as complete objectivity (a fairly abstract and empty concept in and of itself). And while the citation count is surely influenced by factors that do not pertain to science or research, I am not aware of any other measure that would not be affected by the same factors, arguably to an even higher degree.
Be that as it may, there is no question that the number of citations is taken seriously by the scientific community, and it is here to stay. Highly cited researchers are those who rise to roles of prominence, and, consequently, prestigious journals are those that publish frequently cited articles. The one quantitative index that has gained relatively widespread acceptance as the most important measure of a journal's influence is the Institute for Scientific Information's (ISI) Impact Factor (IF). The IF of a journal for a given calendar year, e.g., 2009, is computed as follows: let P be the total number of articles published in that journal over the two previous years (i.e., 2007 and 2008), and let M be the number of times all these articles are cited in 2009. The ratio M/P is the IF.
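To make the definition concrete, here is a minimal sketch in Python; the function name and the sample numbers are my own illustration, not anything published by ISI:

```python
def impact_factor(citations_in_year, articles_prev_two_years):
    """Impact Factor for a given year: the number of citations (M)
    received that year by articles published in the two preceding
    years, divided by the number of such articles (P)."""
    return citations_in_year / articles_prev_two_years

# Hypothetical journal: 1,200 articles published in 2007-2008,
# cited 6,000 times in 2009, giving an IF of 5.0 for 2009.
print(impact_factor(6000, 1200))  # 5.0
```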

Researchers are eager to submit their manuscripts to journals enjoying a high IF, and university administrators look at the IF of the journals where their faculty publish when evaluating scholarly production.
And it is definitely taken seriously by the Editors of scientific journals, who monitor the IF of their publication, and its evolution in time, as an indication of its rising or falling reputation among scientists (see here, for instance). It stands to reason that editorial policies will be drafted and adopted with the aim of raising the journal’s IF. Although I have never been told so explicitly, there is little doubt in my mind that an important criterion for accepting or rejecting a submitted manuscript is the number of citations that the article is predicted to garner in the following two years.

Now, if the above is true, it seems a logical step to call an editorial decision wrong in two cases: accepting a manuscript that goes on to be cited a number of times N smaller than the journal’s current IF (thereby contributing to lowering it), and rejecting a manuscript that is then published somewhere else and cited a number of times N’ greater than the IF (it would have contributed to raising the IF, had it been accepted).
Based on this argument, I wonder how much wrong editorial decisions contribute to the decline of a journal’s IF over time.
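To make this (admittedly crude) scoring rule explicit, here is a minimal sketch; the function names and the sample IF value are hypothetical, chosen only for illustration:

```python
def acceptance_was_wrong(citations_two_years, journal_if):
    """An accepted article 'hurts' the journal if, over the two-year
    citation window, it is cited fewer times than the current IF."""
    return citations_two_years < journal_if

def rejection_was_wrong(citations_elsewhere, journal_if):
    """A rejected article was a missed opportunity if, published
    elsewhere, it is cited more often than the current IF."""
    return citations_elsewhere > journal_if

# Hypothetical journal with a current IF of 7.2:
print(acceptance_was_wrong(3, 7.2))   # True: accepted, but cited only 3 times
print(rejection_was_wrong(15, 7.2))   # True: rejected, yet cited 15 times elsewhere
```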

As a case in point, let us consider the leading journal published by the American Physical Society, namely Physical Review Letters (PRL). In the time period 2006-2007, it published 7,303 articles, which were cited 52,435 times in 2008, resulting in an IF of 7.180 for 2008.
On the other hand, Physical Review B (PRB), namely the Condensed Matter Physics (CMP) journal published by the same Society, published in the same time frame 11,375 articles, which collected 37,783 citations in 2008, for an IF of 3.322.
Because PRL publishes in all areas of physics, it seems reasonable to assume that, on average, a quarter to a third of its articles are in CMP (the largest area of physics by number of practitioners), which means that PRB publishes as many as five times more articles than PRL in CMP. It puzzles many of us that the difference in IF between PRL and PRB is only about a factor of 2 (see the quick check below).
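As a back-of-the-envelope check, using only the figures quoted above:

```python
# 2008 IF: citations received in 2008 by articles published in 2006-2007.
prl_if = 52435 / 7303    # ~7.180
prb_if = 37783 / 11375   # ~3.322
print(round(prl_if / prb_if, 2))  # 2.16, i.e., only about a factor of 2
```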

Because the scopes of the two journals differ, articles in CMP are generally written to be submitted either to PRB or to PRL. However, as I have written in a previous post, a number of articles that end up in PRB had initially been submitted to PRL, and were re-directed to PRB following their rejection there. I would be curious to know how much these “demoted” articles affect the (relatively high) impact factor of PRB, for example; a simple decomposition is sketched below.
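With the right data the question would be easy to answer, since the IF splits cleanly into the contributions of the two groups of articles. A minimal sketch, in which the subset sizes are entirely hypothetical and not actual PRB statistics:

```python
def if_without_subset(total_citations, total_articles,
                      subset_citations, subset_articles):
    """The IF a journal would have had if a given subset of its articles
    (e.g., papers demoted from PRL) had not been published in it."""
    return (total_citations - subset_citations) / (total_articles - subset_articles)

# Suppose (hypothetically) 2,000 of PRB's 11,375 articles were PRL rejects,
# and that they collected 9,000 of the 37,783 citations.
print(round(if_without_subset(37783, 11375, 9000, 2000), 2))  # 3.07, down from 3.32
```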


4 Responses to “Evaluating editorial decisions”

  1. Rolando Valdes Says:

    Just FYI, PRL and PRB are published by APS (American Physical Society) not AIP.

  2. JaneB Says:

    The other method to get high citations is to sneak an error through into an article – then it will be widely cited as everyone points out how wrong you are…
