Is it possible to manipulate an Impact Factor?

A recent ArXiv preprint (filed under Popular Physics as well as Physics and Society) discusses the viability of Open Access (OA) scientific journals. The article expresses skepticism about the long-term survival prospects of those OA journals (such as the New Journal of Physics) that seek to derive the revenue they need from publication fees (often exceeding one thousand US dollars per article) levied on the authors.

The reasoning expounded in the article is straightforward: why would scientists, especially at a time of limited funding for research, tap into their meager research grants to publish an article in an OA journal, when Open Access to their manuscript (something most scientists desire) can be achieved for free by uploading it to a public online repository such as ArXiv?
The only evident incentive is the high prestige of the journal, with the ensuing recognition for the author(s) from having a manuscript accepted there. In this context, the author of the above-mentioned preprint points to a potentially unsolvable conflict of interest for the publisher of the journal: in order to keep standards high, most manuscripts would have to be rejected, but publishing only a small number of articles would diminish the journal’s revenue.

This article got me thinking about the broader issue of scientific publishing. I actually think that a different scenario may play out, which is not going to be good for scientific communication and publishing, and thus ultimately for science as a whole. These days, pressure on scientists to submit their articles to high profile journals is tangible. It comes first and foremost from university and laboratory administrators, but also from peers.
However, the “prestige” of a journal is nowadays typically assessed through its Impact Factor (IF), which is not a measure of the journal’s acceptance rate, but rather of the average number of citations that its published articles attract.
To be specific, an IF of, say, 4 or above, which is considered quite good for a physics journal, means that in a given year the articles the journal published over the preceding two years were cited, on average, four times each.
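The arithmetic behind the two-year IF is simple enough to sketch; the numbers below are made up for illustration, chosen to reproduce the IF of 4 mentioned above:

```python
def impact_factor(citations_in_year, citable_articles):
    """Two-year Impact Factor for year Y.

    citations_in_year: citations received in year Y by articles the
        journal published in years Y-1 and Y-2.
    citable_articles: number of citable items published in Y-1 and Y-2.
    """
    return citations_in_year / citable_articles

# A journal that published 250 articles over two years, whose articles
# then drew 1000 citations in the following year, scores an IF of 4.0.
print(impact_factor(1000, 250))  # 4.0
```

Nothing in the formula distinguishes where the citations come from, which is precisely what makes the index gameable.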

I am afraid that increased competition will lead publishers and editors to devise ad hoc strategies aimed exclusively at raising the journal’s IF, some of which may actually be detrimental to the scientific quality of the journals themselves. One such strategy is well known, and consists of seeking contributions from world-renowned scientists, expecting their articles to be widely cited — which is of course perfectly legitimate. However, different “tricks” might be played, which may succeed in raising a journal’s IF but may not serve the best interest of the community, i.e., the advancement of science and the recognition of what is eventually accepted as the best scientific work.
For example, the editor of a journal with a low IF may be pressured by the publisher to prioritize the acceptance of less-than-stellar manuscripts simply because they cite other work published in the same journal, thereby raising the journal’s IF. I can also envision a restricted community of scientists, all working in the same subfield, persuaded by an editor to move its operations entirely to a specific journal, which would become the de facto official repository of all articles published in that area. Again, the rationale would be to increase the number of citations per article.
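To see how much coerced self-citation can move the needle, here is a toy calculation (all numbers hypothetical) comparing a journal’s reported IF with what it would be if citations from the journal’s own pages were excluded:

```python
# Hypothetical citation counts for one journal in year Y.
total_citations = 1000   # citations in year Y to articles from Y-1 and Y-2
self_citations = 300     # of those, citations appearing in the journal itself
citable_articles = 250   # citable items published in Y-1 and Y-2

reported_if = total_citations / citable_articles
external_if = (total_citations - self_citations) / citable_articles

print(reported_if)   # 4.0 -- the figure the journal advertises
print(external_if)   # 2.8 -- with journal self-citations stripped out
```

In this made-up scenario, self-citations account for nearly a third of the headline IF, which is why some databases now also report a self-citation-corrected figure.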
While specialized journals obviously have a place, if taken to an extreme this practice may lead in time to a significant intellectual fragmentation of the broader scientific community, with the creation of isolated “centers of power”, something that is typically not conducive to the fair and open exchange of ideas that we all wish to maintain.
There may be other ways of artificially boosting a journal’s IF, which may end up stripping the index of its significance (though I suppose people can always devise a better one) and, more importantly, doing a disservice to science publishing.


2 Responses to “Is it possible to manipulate an Impact Factor?”

  1. Anonymous Says:

    There are much worse ways to game the impact factor. You might be interested in this editorial in the Journal of Cell Biology:

    http://jcb.rupress.org/cgi/content/full/179/6/1091

    Andre
    http://biocurious.com/

    • Massimo (formerly known as Okham) Says:

      Thank you for the pointer. Very interesting reference indeed. I have to say I have suspected for a while that some of that was going on. But I am sorry, I have to insist on this point — it’s the administrators that create the problem in the first place. They, not the scientists, are the ones fixated on framing the discussion in terms of indices.
