Having expounded in my previous post what kind of person I look for when serving on the search committee for a tenure-track hire, it is now time to list the criteria I adopt to try to spot my ideal candidate as I go through application packages (APs).
I am a faculty member in a university physics department, who finds himself periodically involved in faculty searches and hires. How do I evaluate the curriculum vitae of an applicant for a tenure-track position?
What do I look for, and what are the red flags? Does it really boil down to counting (first-authored) articles, the impact factor of the journals where they were published, citations, invited talks, or perhaps the places where, and the individuals by whom, the applicant has been mentored as a student and postdoctoral associate?
Do I even look at the research plan? If so, how do I judge it?
What about teaching potential and/or experience?
What brings more prestige to a scientist: an article that receives hundreds of citations, even if published in a relatively minor, or even obscure, journal, or one that appears in a high-profile, glamorous publication with a high Impact Factor (IF) but whose citation record is modest? Most scientists, I believe, would answer such a question by expressing their preference for a publication that is highly cited.
“In 2004, Kim and Chan (KC) carried out torsional oscillator (TO) measurements of solid helium confined in porous Vycor glass and found an abrupt drop in the resonant period below 200 mK. The period drop was interpreted as probable experimental evidence of nonclassical rotational inertia (NCRI). This experiment sparked considerable activities in the studies of superfluidity in solid helium”.
Doug Natelson has done an outstanding job of debunking a ridiculous charge of confirmation bias allegedly affecting a recent study of climate change. Such a charge is put forth in an article published in the popular press (in a very prominent venue). While ostensibly aimed at educating the general public about some aspects of how science works, the article sneakily rehashes one of the most common and dangerous misconceptions that exist out there about science, namely that in the end it is not as “objective” as its practitioners claim.
It is that time of the year when Impact Factor (IF) data are updated. As I finished retrieving the 2011 values (from ISI Web of Knowledge), I started looking at notable changes (upward and downward). Being a condensed matter physicist, I am focusing on those journals that are most relevant to me, but I am wondering whether similar observations to those expounded below are made in other subfields.
I have only recently become aware of the existence of the Eigenfactor (EF). It is a proposed measure of the overall influence, impact, prestige of a scholarly journal in its own discipline, or field. The one and only measure with which I was familiar is the well-known Impact Factor (IF), which is actually fairly straightforward to understand. By contrast, the eigenfactor is determined through a rather complex procedure (I am not going to discuss its computation in this post — for details, see here).
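The IF, by contrast, boils down to a single ratio: citations received in a given year to articles the journal published in the preceding two years, divided by the number of citable items published in those two years. A minimal sketch, using made-up numbers for a hypothetical journal (not actual ISI data):

```python
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """IF for year Y = (citations received in year Y to items published
    in years Y-1 and Y-2) / (citable items published in Y-1 and Y-2)."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 1200 citations in 2011 to its 2009-2010
# articles, of which there were 400 citable items.
print(impact_factor(1200, 400))  # 3.0
```

Nothing comparably simple exists for the EF, which is why its black-box character gives some people pause.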
The damage that falsehood can do, if unchallenged and/or perpetuated over a period of time, can be considerable and often long-lasting, both to individuals (for whom it is typically permanent — ask anyone wrongly convicted of a crime) and to humankind as a whole. For this reason, it seems a good idea to have procedures in place not only to spot falsehood, but also to expose and debunk it swiftly and effectively, before it spreads.
There exist circumstances in which falsehood acquires a pernicious resilience, even in the absence of a concerted effort on the part of anyone to preserve it. All that is needed is a sufficiently robust system of perverse incentives, which may come about for whatever reason and prove surprisingly hard to dislodge.
Nope, sorry, this is not a post about politics; there are no upcoming elections anyway. I am writing in frustration, after checking once again on the web the status of a manuscript that I submitted for publication over two months ago, only to find that it is still under review, ostensibly in the virtual hands of an unresponsive referee.
How many scientific discoveries have been made by investigators carrying out studies that, in principle, should have merely reproduced known results and/or confirmed the conventional wisdom? I do not have numbers, but I suspect many. Serendipity plays much more important a role than many a scientist would care to admit.