Who’s really obsessed with Impact Factor?

What brings more prestige to a scientist: an article that receives hundreds of citations, even if published in a relatively minor, or even obscure, journal, or one that appears in a high-profile, glamorous publication with a high Impact Factor (IF) but whose citation record is modest? Most scientists, I believe, would answer such a question by expressing a preference for the highly cited publication.

Granted, there is usually a correlation between the publication venue and the number of citations an article receives. In other words, the same manuscript is likely to be cited more often if it is published in a high-profile journal, for a number of simple reasons: a wider and more diverse readership, as well as the general expectation that an article published there is likely to be influential, set a new trend, or even open up a new field. That expectation, however, remains unfulfilled in most cases, which is why, ultimately, the influence of a published piece of work is gauged by its number of citations, regardless of where it appears.
This also explains why one’s personal citation record, whether expressed as the raw total number of citations or in the form of an index like Hirsch’s h, is often taken as a reliable indicator of the overall impact and quality of a person’s investigative activity (within the obvious, universally accepted limitations of any numerical indicator, especially if taken as the sole or most important measure).

Still, it is a simple fact that most of us like to go for the big prize, and more often than not submit our manuscripts to “glamorous” journals, even if that means a longer and more aggravating review process (and a consequent delay in the publication of material that owes much of its potential interest to its timeliness), as well as a high likelihood of eventual rejection and disappointment.
While the competitiveness and narcissism of scientists of course have a lot to do with that, it is a somewhat unfortunate fact that much of the time we go after the high-profile publication not to impress competitors and/or colleagues who are well versed in our own area of inquiry, but rather individuals who are not. They could be scientists in other fields or disciplines, as well as administrators at various levels, program directors and university administrators for the most part.
This is particularly the case for junior researchers, chiefly postdoctoral researchers and tenure-track assistant professors, who need to build impressive-looking credentials in a relatively short time, in order to maximize their chances of landing a position (which requires impressing a search committee, or a whole department) or to ensure a smooth, successful tenure review. And it is hard to deny that a CV sporting a number of articles in some high-IF journal will generally increase the chances that a grant application is funded.

What do these situations have in common?
The Impact Factor of the journals where a scientist has published is merely used as a rough numerical criterion for evaluating that person, by individuals who are themselves rather removed from her field of expertise and are therefore hard put to offer a more informed assessment. It is simple and convenient, for lack of anything else that could be used in its place.
The question is: is there really nothing else that could be used in its place?
Readers of this blog know that I have a predilection for the h-index, and I think this is one of those cases where, if a numerical criterion must be adopted (or, realistically, will be adopted) [0], then going by the h-index is much preferable to placing emphasis on the Impact Factor of the journals where one publishes.
Aside from the fact that I think the h-index is a much more reliable, comprehensive, and objective measure of the overall activity and success of a scientist, removing the IF from the evaluation process would relieve junior researchers of the pressure to submit their work to journals of high prestige but also high rejection rates, a pressure that often significantly delays the publication of their best work.
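
For concreteness, here is a minimal sketch of how the h-index is computed from a list of per-paper citation counts (the function name and the toy records below are mine, for illustration only):

```python
def h_index(citations):
    """Return the largest h such that at least h papers
    have h or more citations each (Hirsch's definition)."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the rank-th paper still clears the threshold
        else:
            break
    return h

# One highly cited paper does not move the index much
# without a body of consistently cited work behind it:
print(h_index([250, 3, 2, 1]))       # -> 2
print(h_index([40, 30, 22, 15, 9]))  # -> 5
```

Note how the index rewards a sustained record: a single blockbuster paper contributes no more to h than any other paper with at least h citations.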

Notes

[0] Obligatory disclaimer: No, I am not saying that the evaluation of a scholar should be reduced to the h-index, even though I expect to be accused of that anyway. What I am saying is that, since administrators will rely on numerical indicators, we might as well try to have them look at the ones most likely to reflect one’s productivity, originality, and scientific contribution.
At my own institution, for example, when science faculty submit their annual report they are required to list all of their publications for that year. The electronic filing system automatically attaches to each and every article the IF of the journal in which it was published. Nothing having to do with one’s citation record is part of the evaluation (to my knowledge). I do not think my institution is an exception in that regard.


8 Responses to “Who’s really obsessed with Impact Factor?”

  1. Schlupp Says:

    It is outrageous of you to want scientists to be judged solely on their h-index.

    So, having this obligatory part out of the way, I’d like to mention a reason for the use of IFs in such annual scorecards: They are available right now, while citations take longer to accumulate. The main issue is thus the focus on the extremely short timescale.

    • Massimo Says:

      In my opinion, when it comes to tenure (and possibly even hiring) the h-index, even if based on a few articles, tells the story much more reliably than the Impact Factor. I worry about (and have seen cases of) letting a single publication in a high-profile journal unduly influence (or skew) hiring or tenure decisions.

      • Me Says:

        “I worry about (and have seen cases of) letting a single publication in a high-profile journal unduly influence (or skew) hiring or tenure decisions.”

        So true… It seems these days you need that Nature paper to get on the tenure track at the big universities.

        This is ironic, because this skewed selection reminds me of Nassim Taleb’s arguments on luck and success. If the rewards are highly skewed, then on average it is extremely rewarding to bet everything on a few home runs.

        I have seen students around me bet almost everything on a Nature/Science paper because they were told they needed it for a decent career. The result: some of them will succeed (by luck, talent, or perseverance) at getting that Nature paper (and not much else), and this will help them get picked over deserving students who got “only” a couple of PRLs.

        Because Academia is very much a (nonlinear) tournament, a small advantage at the beginning turns into a big reward at the end.

  2. A Says:

    A rationale for preferring the IF even over actually available individual citation data could be the following:

    Assume that a paper has an objective quality measure q which weakly correlates with its (future) number of citations c. Moreover, assume that q can be determined in a somewhat costly procedure, and that journal review essentially amounts to determining the quality of a paper, comparing it to a benchmark value b, and accepting/rejecting based on that. Furthermore, assume that editors and authors behave sufficiently market-like to avoid grossly “overqualified” submissions.

    Averaged over all papers appearing in a journal, the variation between q and c would (partially) cancel out, leaving the IF as a reliable estimate of b, which in turn serves as a lower bound for q. In particular, the IF could be a better estimate of q than c is.

    Now, I’m not claiming this model describes reality at all, but maybe some people do think along such lines?
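
    A crude simulation along these lines (the distributions and parameter values below are my own assumptions, chosen only to illustrate the averaging argument) does reproduce the effect:

    ```python
    import random
    import statistics

    random.seed(0)

    B = 6.0      # journal's quality benchmark b
    NOISE = 5.0  # citation noise, deliberately large on the quality scale

    def paper():
        """One submission: objective quality q and a noisy citation count c."""
        q = random.uniform(0.0, 10.0)
        c = q + random.gauss(0.0, NOISE)  # c only weakly tracks q
        return q, c

    # Review accepts iff q >= B; 'market-like' behavior keeps accepted
    # papers from being grossly overqualified, so accepted q stays near B.
    accepted = [(q, c) for q, c in (paper() for _ in range(50_000))
                if B <= q <= B + 1.0]

    qs, cs = zip(*accepted)
    print("mean q of accepted papers:", statistics.mean(qs))  # ~ B + 0.5
    print("journal 'IF' (mean c):   ", statistics.mean(cs))   # noise cancels: ~ mean q
    print("spread of individual c:  ", statistics.stdev(cs))  # ~ NOISE: one c says little
    ```

    The journal average (the “IF”) lands close to the benchmark, while any individual citation count is dominated by noise, which is exactly the argument above.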

    • Massimo Says:

      I doubt very much that most of the people in charge of evaluating go, or have ever gone, through this kind of reasoning, but I am not denying its validity.
      The question, however, is different: if, for instance, we take researchers who already have around 14-15 years of activity behind them (five years of graduate school, three to four years as a postdoc, and six years on the tenure track), what is a better indicator of their scholarly excellence, the average IF of the journals in which they have published or their h-index? I say the latter.

      • Schlupp Says:

        Well, for this example, you bring another difference into play: quantity, which – to use a less derogatory word – could also be termed “consistency of research performance”. What could most reasonably be compared to the h-index is some integrated IF that also takes into account the number of papers published.

        One can certainly argue whether or not two PRBs are “about the same” as one PRL, as they would be per “integrated IF” (*). But one can’t very well argue that one PRL is “more” than 5 PRLs plus 10 PRBs, as it would be per “average IF”; the quick arithmetic sketch below makes the difference concrete.

        (*) Personally, I’d say it depends on the papers and on how well they are appreciated by the community. But I am aware that general opinion would go with the one PRL in most cases.
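
        Here is that arithmetic with rough, made-up IF values (PRL ≈ 7, PRB ≈ 3.5; these are illustrative stand-ins, not the journals’ actual IFs):

        ```python
        # 'average IF' vs 'integrated IF' for two hypothetical records
        PRL, PRB = 7.0, 3.5  # assumed, illustrative IF values

        one_prl = [PRL]
        mixed = [PRL] * 5 + [PRB] * 10  # 5 PRL + 10 PRB

        for name, record in (("one PRL", one_prl), ("5 PRL + 10 PRB", mixed)):
            print(name,
                  "| average IF:", round(sum(record) / len(record), 2),
                  "| integrated IF:", sum(record))
        # average IF: 7.0 vs ~4.67 -- the single PRL 'wins'
        # integrated IF: 7.0 vs 70.0 -- the larger record wins by far
        ```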

  3. Luca Says:

    Dear Massimo, I just discovered your blog and I find it very interesting. Speaking of the h-index, I was wondering if you are familiar with this paper by Sidney Redner (BU):

    http://arxiv.org/abs/1002.0878

    According to this work, the average h-index is a simple function of the total number of citations to an individual: h = sqrt(c)/2, c being the total number of citations. In other words, the “typical” h-index does not contain any additional information beyond the total number of citations. This has nothing to do with your comments in this article; it’s just a curious fact that I thought you might find interesting.
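
    (To make the scaling concrete, here is the relation evaluated at a few citation totals of my own choosing; note that quadrupling the citations only doubles the typical index:)

    ```python
    import math

    # Redner's empirical relation: typical h ~ sqrt(c) / 2
    for c in (100, 400, 1600, 6400):
        print(c, math.sqrt(c) / 2)  # -> 5.0, 10.0, 20.0, 40.0
    ```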

    • Massimo Says:

      Hi Luca, thank you for your comment, and I am sorry it took me so long to reply, but I have been neglecting my blog for a couple of months. I have taken a look at Redner’s paper and it is indeed interesting, although I think that in these cases, as they say, “the devil is in the details”; especially for an early-career scientist, I would probably trust the h-index more than the total number of citations (a graduate student can end up with a single, highly cited paper just by virtue of having been in the right place at the right time).
