Let’s get things straight

Yes, I know that the h-index suffers from some limitations, much like any other quantitative index. I know that no one’s research or professional activity can be summarized in a single number, that evaluating a scientist is far more complex an operation than that. Nobody has ever suggested that the h-index, or for that matter number of citations, or publications, or invited talks, or what have you, should be the only, or even the main way of evaluating a scholar, especially for promotion and tenure purposes.
However…

… some of us are very unhappy with a scientific establishment that evaluates scholars, especially young ones, almost exclusively based on their pedigree and/or on the opinion of a few “prominent” scientists, all based at even fewer “prestigious” institutions. This is, in my opinion, the main reason why so many faculty openings (e.g., in condensed matter physics) go unfilled every year, even in the face of a supposed glut of qualified applicants [0]. Departments all go after the “few excellent” applicants, all from the same factory, and since these few elect cannot take more than one job, positions remain unfilled, as no chance is given to outsiders. As a result, departments age, research activity languishes and young people turn away from the field.
This is a bad thing. It is bad for science. And it is stupid.

The same mechanism largely explains why the same people are always invited to give talks at conferences, receive the lion’s share of grant funding, win prizes and so on. It is, in essence, a self-fulfilling prophecy. We need to break this vicious cycle. Science is not, cannot be an “ol’ boys network”.
Numerical indicators such as the h-index, while not themselves immune from bias (they reflect the same dynamics described above), might at least offer a fighting chance to young scientists whose records are placed side by side with those of competitors with more conspicuous pedigrees.

The h-index correlates reasonably well with overall scientific achievement and productivity, and while it can never be the one and only thing to look at, it might give an evaluator a reason to take a second look at, and examine more carefully, the record of an applicant whose pedigree may not automatically brand him/her as “stellar”. If you take away all numerical measures, then all you are left with are the “unmeasurables”, and those lead to the same outcome, every time.

So, to my younger colleagues out there, looking for their first faculty job, bitching about the h-index but also complaining about the “powers-that-be”, or lamenting all sorts of social conspiracies if they do not land one, here is a question for you:
“Would you rather make it all about where you got your PhD, whom you know, and what so-and-so says about you ? Or do you think that maybe, just maybe, looking at some numbers might help too ?” .

Notes

[0] To be sure, I have personally never believed that such a glut existed — but I do not believe that there is any shortage thereof either.


20 Responses to “Let’s get things straight”

  1. Schlupp Says:

    Boring post once more. Can’t complain, so no fun. (Also no fun, because thinking about some of these things pisses me off.)

  2. Hope Says:

    Hi Massimo,

    I sometimes read here but don’t believe that I’ve ever commented. You write:

    The h-index correlates reasonably well with overall scientific achievement and productivity….

    Do you have any data to back this up? I agree, in general, with the argument that you’re making. I’m just wondering how you’ve come to the conclusion that the h-index is a good measure of something like “scientific achievement.” That just seems like another “unmeasurable” to me.

    Another thing to consider: does the h-index give one a false sense of objectivity when making a decision that is really quite subjective?

    • Massimo Says:

      Do you have any data to back this up?

      Well, I think the papers by Hirsch are fairly convincing; you can find them on the arXiv.

      That just seems like another “unmeasurable” to me.

      No, it is measurable by definition — you and I will obtain the same h-index for anyone we have to evaluate. We may attribute a different value to it, but there is no ambiguity as to what it is. “Unmeasurables” are things like “personality”, “fire in the belly”, “go-getter”, “outsider”, “abrasive”, “team player”… essentially that sort of stuff.
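      The point bears making concrete: computing the number is entirely mechanical. Here is a minimal sketch (mine, not from the post, with made-up citation counts) — the h-index is the largest h such that h of one’s papers have at least h citations each:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:   # the paper at this rank still has enough citations
            h = rank
        else:
            break
    return h

# Papers with 10, 8, 5, 4, 3 citations: four papers have >= 4 citations,
# but not five papers with >= 5, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

      Any two evaluators running this on the same citation record get the same number, which is all that “measurable by definition” means here.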

      does the h-index give one a false sense of objectivity when making a decision that is really quite subjective?

      No, it attempts to introduce an element of objectivity into a decision that is otherwise too often nothing but the reflection of someone’s agenda — typically that of the most influential members of a department, search committee or community. I mean, look, if I wanted to push my own pet candidate and this person had a weaker record, what would I do ? I would try to undermine the credibility of anything that can be measured and makes my own guy look bad, and shift the emphasis to stuff that is subjective, hence morphing the discussion into a shouting match. It always works. Fewer papers ? Eh, the number of papers does not mean anything. Fewer citations ? Ah, that is because the significance of their work has not yet been appreciated…. and on and on….

      • Hope Says:

        No it is measurable by definition….

        This is a cop-out and you know it. Right now on the ArXiv, there are two papers by Hirsch on the h-index. Neither one provides a definition of “scientific achievement,” nor do they show, with data, how the h-index is an unbiased measure of this. In fact, the 2nd paper explicitly sidesteps this question. The weaker claim argued in these works is that the h-index is a better metric than other figures, such as total number of pubs, average number of cites per pub, etc. In the first paper, where Hirsch proposes the h-index, he gives examples of the (high) h-index of well-known, “successful” physicists. From this we can conclude that the h-index correlates well with the judgment of the “ol’ boys network.”

        Slapping a number onto an “unmeasurable” that you can’t even define doesn’t mean that you have “introduce[d] an element of objectivity” into a subjective process – and it might result in giving people even less reason to question their gut.

        I suspect that for faculty applicants, the h-index says a lot more about their grad and/or postdoc advisor than it does about them.

      • Massimo Says:

        The h-index is just a simple number, unambiguously defined, that anyone can look up and obtain the same result as anyone else.
        I think you are being disingenuous — you claim that “scientific achievement” is not unambiguously defined, as if there were agreement on what constitutes it. The fact is, many take issue even with a Nobel prize always being an objective sign of achievement. And I am quite sure that you would take issue with any definition thereof one were to come up with (on the other hand, if you think that you have the “ultimate” definition, please submit it to the community, get it accepted, and show that the h-index does not correlate with it).

        Until then, I think it is fair to go by the measures that are commonly accepted as at least generally indicative of achievement, such as number of publications, citations, invited talks, prizes etc. The h-index seems to correlate reasonably well with those indicators, and as such seems reliable — always taking into account the fact that we are talking about an inherently imperfect operation in an imperfect world.

        Mind you, I am sure that the h-index too is affected by the “ol’ boys” dynamics — just, I think, to a lesser extent. Anyway, it is just my opinion. I take it that you feel that other factors, like where one got his/her degree and who his/her PhD advisor is, are more objective measures of achievement — and I am sure you are completely unbiased in this assessment ;-)

      • Hope Says:

        The fact that “scientific achievement” is ill-defined is precisely why it is problematic for you to make sweeping statements like: “The h-index correlates reasonably well with overall scientific achievement….” And when asked for evidence to support this claim, it is disingenuous of *you* to refer me to papers that explicitly sidestep this issue.

        I take it that you feel that other factors, like where one got his/her degree and who is his/her PhD advisor are more objective measures of achievement ….

        Not really. But for your “younger colleagues” looking for their first job, I don’t think it’s that much better.

      • Schlupp Says:

        I think the important thing is to keep in mind that “subjective decisions”, which hiring decisions will continue to be, must be kept apart from “arbitrary decisions”, which they should not be. Of course, one would not like the h-index to be set in stone,* but the following two statements are simply very different things:

        1) “Candidate A has the higher-h index, but for the following reason, I still think that candidate B is better:…..”

        2) “h-index, number of publications and citations don’t mean anything, really, I just know that B is better than A.”

        The difference being that in case 1), one can then proceed to have a reasonable discussion about the reasons given and whether they are valid as well as strong enough. Perhaps there is going to be agreement, perhaps not. Maybe A will be chosen, maybe B. But the discussion will be fairer to A than statement 2 is.

        *) Your statement that “nobody has ever suggested” something as stupid is overblown by the way. “Nobody with more than five neurons in working order” would be the correct formulation.

      • Massimo Says:

        The fact that “scientific achievement” is ill-defined is precisely why it is problematic for you to make sweeping statements like: “The h-index correlates reasonably well with overall scientific achievement….”

        Nah, come on now, you are just arguing for the sake of argument. The notion that “evaluation committees try to assess overall scientific achievement” is commonly accepted — very few, upon hearing that contention, go “wait a minute, what is that, exactly ?”.
        It is one thing to say that it is ill-defined; it is quite another to say that there is no such thing, or that any two opinions about it are irreconcilable.

        There are some reasonable, commonly accepted measures of scientific achievement (publications, citations, number of invited talks, prizes). I understand you take issue with all of them but I do not see what reasonable alternative you have — I suspect none (“any criterion that does not select me is wrong”).
        The h-index correlates fairly well with all of them, as shown in those papers.

  3. Transient Reporter Says:

    Elitism in physics? I’m shocked!
    I always thought it just took a half-dozen geniuses sitting around a table with a pencil and a piece of paper to do physics… How many physicists secretly believe the same thing?

  4. JaneDoh Says:

    I totally agree. Although the h-index seems to work better for mid-career scientists, due to the lag time between publication and citation, it is certainly a better metric for candidates than “fit”, which often means “looks like me and came from a similar school”.

    When citation counts became part of the review process at the national lab where I worked, some of the “old guard” started complaining about “poor metrics”, since their darlings didn’t all have good stats to back up their perceived productivity — and that is what convinced me that the h-index might be pretty useful. What is the point of lots of papers that go unread? Especially if some of those journals are the “Journal of Mike’s Microscopy”.

    Especially for comparing people at the same career level, I think that adding some impartial numerical analysis as part of the discussion can help overcome lack of “pedigree” and also help with unconscious racism or sexism.

  5. Doug Natelson Says:

    Massimo – I agree with the sentiment, but you know the unfortunate practicalities of hiring. You want a system more like graduate admissions: there is some small group of hotshot students who are in demand and get offers from Harvard, Cornell, MIT, etc., but there is a long list of other applicants. Graduate programs only get about 1/3 of their admits to come, because of competition from other universities, so they offer slots to ~3x as many people as they need, going down the list well beyond the top few. There end up being relatively few students who can’t find a home somewhere if they really want to go to grad school.

    The practical aspects of faculty hiring get in the way of a system like that. Everyone on the hiring side (1) wants to get the best people that fit their programmatic needs; (2) can only afford to fly in a handful of people to interview; (3) would never hire anyone they didn’t interview face-to-face; (4) needs to get their decision-making done before the end of the spring, for financial reasons. Under those circumstances, it’s very hard to come up with a system where some perceived top group isn’t highly sought after.

    As for the h-index as an “equalizer”, I’m highly skeptical. I’ve been a professor for over 9 years, and only now am I getting to the point where I think my h-index reflects my research productivity. For young people, even up to the point of tenure, the h-index is strongly biased by the kind of graduate training you had. It’s not at all clear to me that ranking applicants by h-index would have the effect you desire, of identifying highly productive people who would otherwise be overlooked b/c they don’t come from “top” places. On the experimental side, at least, in my experience the junior people with the highest h-indices come from large, productive groups where there are lots of coauthors on collaborative papers. To use an old example, most students who came out of Smalley’s nanotube group in the late 1990s have big h-indices, whether they were 5th author or 1st on a bunch of collaborative papers. It’s pretty rare to see previously unnoticed people from small places who have lots of papers and citations. If anything, those people tend to stand out.

  6. Massimo Says:

    Doug, I agree with everything you just wrote but, again, I am not in the least advocating going by h-index only, or mostly, or by any other quantitative measure. What I am thinking of specifically is the following situation:

    Committee Member A: OK, here is Joe Glotz from BigNameU whose PhD advisor is Prof. K. Myboot, and current postdoctoral advisor is Prof. I. Mighttalktoyou and here is John Doe from WhereTheHellU whose PhD advisor is I. Couldonlygethere and is doing a postdoc with O. Notmuchbetter — seems like a no-brainer to me, we should go with Glotz, look at his credentials (read: pedigree)…

    Committee Member B: Well… yeah but… funny though, Doe has a higher h-index than Glotz… should we maybe take a closer look at this one ?

    That’s it. No more than this. I think it would make a big difference already, even though I agree that the relevance and applicability could vary depending on the level (but, Doug, the same can be said about any quantitative measure — should we not even try to use one, then ?)

  7. Schlupp Says:

    And then, there is h-bar: arXiv:0911.3144

    Here, basically, a paper only counts toward your h-bar index if it also counts toward the h-index of ALL coauthors on that paper. Which puts students of famous people at a decided disadvantage. Also, you could gain if your co-authors’ other papers (without you) do NOT get cited, which might introduce some very interesting dynamics into citation networks. Something tells me that this one is not going to catch on…..

    • Massimo Says:

      Agreed. It is complicated, likely to be easily manipulated, and in the end I do not think it would provide a much different comparative assessment than the h-index itself.
      The notion of multiple authorship is a funny one. We all know it should be taken “somehow” into account, but it is unclear how to do it fairly and objectively. For one thing, while for a theorist it is common, and relatively easy, to work alone, for experimentalists it is essentially impossible these days to carry out any substantial research project in isolation. Also, why assign a greater weight to an irrelevant single-authored paper than to one with multiple authors that garners a lot of attention ?

      Once again: the h-index is only a starting point; it is not a replacement for going through a person’s publication record and getting a sense of whether someone can work independently. The typical case is that of a probationary faculty member who keeps publishing only with his former PhD/PD advisor (and their groups) — this will definitely raise eyebrows with reappointment, promotion and tenure committees (at serious places, anyway), regardless of this person’s h-index.

      • Schlupp Says:

        Will you already stop writing only things I agree with? It’s almost no fun this way!

        No, it’s not that obvious with credit for co-authorship….. If I can make my work better by collaborating with someone else, then science is better off if I collaborate, methinks. Yes, doing it alone may be more difficult, but I am not sure how many “bonus points” this justifies: after all, this is not some sports competition where you get points for doing it on one leg, blindfolded, and using xmgrace.

        But of course, in SOME way one should take it into account…..
