What brings more prestige to a scientist: an article that receives hundreds of citations, even if published in a relatively minor, or even obscure, journal, or one that appears in a high-profile, glamorous publication with a high Impact Factor (IF), but whose citation record is modest? Most scientists, I believe, would answer such a question by expressing their preference for the highly cited publication.
Granted, there is usually a correlation between the publication venue and the number of citations that an article receives. In other words, the same manuscript is likely to be cited more times if it is published in a high-profile journal, for a number of simple reasons — a wider and more diverse readership, as well as the general expectation that an article published therein is likely to be influential, set a new trend, or even open up a new field. It is, however, an expectation that remains unfulfilled in most cases, which is why, ultimately, the influence of a published piece of work is gauged by the number of citations it receives, regardless of where it is published.
This also explains why one’s personal citation record, whether expressed as the raw total number of citations or as an index like Hirsch’s h-index, is often taken as a reliable indicator of the overall impact and quality of a person’s investigative activity (within the obvious, universally accepted limitations of any numerical indicator, especially if taken as the sole or most important measure).
Still, it is a simple fact that most of us like to go for the big prize, and more often than not submit our manuscripts to “glamorous” journals, even if that means longer and more aggravating review processes (and a consequent delay in the publication of material that owes much of its potential interest to its timeliness), as well as a high likelihood of eventual rejection and disappointment.
While the competitiveness and narcissism of scientists of course have a lot to do with that, it is a somewhat unfortunate fact that much of the time we go after the high-profile publication not to impress competitors and/or colleagues who are well versed in our own area of inquiry, but rather individuals who are not. They could be scientists in other fields or disciplines, as well as administrators at various levels — program directors and university administrators for the most part.
This is particularly the case for junior researchers, chiefly postdoctoral researchers and tenure-track assistant professors, who need to build impressive-looking credentials in a relatively short time, in order to maximize their chances of landing a position (which requires impressing a search committee, or a whole department) or to ensure a smooth, successful tenure review. And it is hard to deny that a CV sporting a number of articles in some high-IF journal will generally increase the chances that a grant application will be funded.
What do all of these situations have in common?
In each of them, the Impact Factor of the journals where a scientist has published is merely used as a rough numerical criterion to evaluate that person, by individuals who are themselves rather removed from her field of expertise, and are therefore hard put to offer a more informed assessment. It is simple and convenient, for lack of anything else that could be used in its place.
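As a reminder of what that number actually measures: the standard two-year IF of a journal is the average number of citations received in a given year by the items the journal published in the preceding two years. Here is a minimal Python sketch; the function name and the figures in the example are hypothetical, chosen purely for illustration.

```python
def impact_factor(citations_this_year, items_prev_two_years):
    """Two-year Impact Factor: citations received this year to items
    the journal published in the previous two years, divided by the
    number of such items."""
    return citations_this_year / items_prev_two_years

# Hypothetical journal: 1,200 citations in 2010 to the 400 articles
# it published in 2008 and 2009 gives an IF of 3.0.
print(impact_factor(1200, 400))  # -> 3.0
```

Note that nothing in the calculation refers to any individual author’s articles; it is a property of the journal, not of the person being evaluated.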
The question is: is there really nothing else that could be used in its place?
Readers of this blog know that I have a predilection for the h-index, and I think this is one of those cases where, if a numerical criterion must be adopted (or will be adopted regardless), then going by the h-index is much preferable to placing emphasis on the Impact Factor of the journals where one publishes.
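For readers who have not run into it: the h-index of an author is the largest number h such that h of their papers have each been cited at least h times. A minimal Python sketch (the function name and the sample citation counts are mine, purely for illustration):

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least
    h papers have h or more citations each."""
    # Sort citation counts from highest to lowest.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Example: five papers with these citation counts yield h = 3,
# because three of them have at least 3 citations each.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```

Note that the result depends only on the citations the papers actually received, not on where they appeared.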
Aside from the fact that I think the h-index is a much more reliable, comprehensive, and objective measure of the overall activity and success of a scientist, removing the IF from the evaluation process would also relieve junior researchers of the pressure to submit their work to journals of high prestige but also high rejection rates, which often significantly delays the publication of their best work.
Obligatory disclaimer: no, I am not saying that the evaluation of a scholar should be reduced to the h-index, even though I expect to be accused of that anyway. What I am saying is that, since administrators will rely on numerical indicators, we might as well try to have them look at the ones most likely to have something to do with one’s productivity, originality, and scientific contribution.
At my own institution, for example, when science faculty submit their annual report they are required to list all of their publications for that year. The electronic filing system automatically attaches to each and every article the IF of the journal in which it was published. Nothing having to do with one’s citation record is part of the evaluation (to my knowledge). I do not think that my institution is an exception in that regard.