A few commenters took issue with a contention I made in my latest post, namely that publications matter very little to the fortunes of science doctoral degree holders seeking employment in industry. Some are of the opinion that, in fact, many a potential industrial employer will raise an eyebrow at the lack of publications on the CV of an applicant with a PhD in a scientific discipline. Conversely, having published the type of peer-reviewed research articles that constitute the backbone of one’s scholarly portfolio may significantly enhance that person’s marketability for industrial positions, most of which feature no substantial research component and/or do not specifically target PhDs in the applicant’s field (or in any of the Science and Engineering disciplines, for that matter).
One of the arguments seems to be that, since doctoral graduates in the sciences are expected to publish, a lack of publications is often perceived as a sign of overall mediocrity in an applicant.
(For the purpose of clarity: the aim of this post is not to discuss the importance of scholarly publications for graduate students in general, but only their impact on the narrower issue of finding employment in industry.)
There is always a degree of subjectivity in assessing what is “a lot” or “very little”, but, in general, I am not interested in small differences. If I am going to advise a graduate student to delay graduation by one year in order to publish a couple more papers, I think I should have solid reasons.
As I stated in my post, as well as in my replies, I am personally very skeptical of any claim that scholarly publications may play a significant, much less decisive, role in the hiring of a PhD scientist outside academia. This is based mostly on my own job hunting experience (see this post for some more background), which now dates back some fifteen years (though I doubt that things have changed much), as well as on that of my friends from graduate school and of my own undergraduate and graduate advisees.
Admittedly, however, this is just my own opinion and personal experience (well, and that of many other colleagues), and therefore of anecdotal value at best, much like what commenters to my post have offered. The fact is, none of us backed our statements with actual hard data, firstly because this is just a blog, and secondly because the data are not available, or at any rate not easily found (I come back to this point below). The comments, however, got me thinking about this issue. While to some extent it will always remain a matter of opinion, it seems interesting to ask whether it is possible to quantify the effect of publications on one’s CV, based on the little on which we can perhaps all agree.
Below, I make a modest attempt at a simple statistical analysis, aimed at arriving at least at a numerical upper bound on the effect that publications can be expected to have on a job applicant’s appeal to a potential industrial employer.
As usual, what follows is nothing but my personal observations (rantings). If anyone has data, or finds an obvious problem with my reasoning, I would greatly appreciate being set straight: you will get a public, personal “thank you” from the author of this widely read, internationally ranked science blog.
A few definitions
Let us consider a freshly minted PhD intent on seeking employment in industry, in a position not necessarily related to her PhD research work or even her field of study, but for which she is a credible, qualified applicant by virtue of the analytical and technical skills acquired during her doctoral training.
Let P(H|PhD) = α be the probability that this person will successfully land such a job within an accepted time frame. Let us introduce the following two additional probabilities:
P(H|PhD, pub) = probability that the new science PhD will be hired in industry, given that she has published one or more articles during her doctoral studies.
P(H|PhD, no pub) = the same probability, but for a graduate with no publications on her CV.
Clearly,
P(H|PhD) = q P(H|PhD, pub) + (1-q) P(H|PhD, no pub) (1)
where q is a number between zero and one, representing the fraction of all PhD applicants for the typical industrial job whose CVs feature at least one publication. This fraction, whose precise value is unknown to me, turns out to be of crucial importance, as we shall see below.
Let us now introduce the publication enhancement factor (PEF), δ, through the definition
P(H|PhD, pub) = (1 + δ) P(H|PhD, no pub)
The PEF provides a quantitative measure of how much more likely a PhD applicant with publications is, on average, to find a job than a colleague without publications. For example, if δ = 2, the former would on average have a prior probability of being hired three times greater than the latter. If δ = 0, publications do not enhance a CV at all, at least on average, in the eyes of an industrial recruiter.
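To make the bookkeeping concrete, here is a minimal sketch in Python. All of the input numbers (q, the hiring probability without publications, and δ) are made up purely for illustration, not data:

```python
# Toy illustration of Eq. (1) and the PEF. All inputs are assumptions.
q = 0.5            # fraction of PhD applicants with at least one publication
p_nopub = 0.90     # P(H | PhD, no pub), hypothetical
delta = 0.1        # publication enhancement factor (PEF)

# Definition of the PEF: P(H | PhD, pub) = (1 + delta) * P(H | PhD, no pub)
p_pub = (1 + delta) * p_nopub

# Eq. (1): the overall hiring probability is a q-weighted mixture of the two
alpha = q * p_pub + (1 - q) * p_nopub
print(round(p_pub, 3), round(alpha, 3))  # 0.99 0.945
```

Note how, with q = 0.5, a 10% publication boost moves the overall hiring probability by only half that amount relative to the unpublished rate.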
Now, the question: is there any way of quantifying the effect of publications, by giving at least a rough estimate of δ?
A straightforward application of Bayes’ theorem gives

δ = [(1-q) P(PhD,pub|H)] / [q P(PhD,no pub|H)] - 1 (2)
where P(PhD,pub|H) (respectively, P(PhD,no pub|H)) is the probability that a PhD applicant who has been hired in industry possesses some (respectively, no) publication record. The above expression tells us, for example, that if the overall effect of publications were, hypothetically, that of doubling on average the odds of being hired, and if equal proportions of PhD applicants did and did not have publications (i.e., q = 1/2), then the population of hired PhDs would comprise two thirds published PhDs and one third without any publication.
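The worked example above can be checked with a few lines of Python (the inputs q = 1/2 and δ = 1, i.e., a doubling of the odds, are hypothetical, as in the text):

```python
# Hypothetical inputs: half the applicants have publications (q = 0.5),
# and publications double the hiring probability (delta = 1).
q, delta = 0.5, 1.0

# Composition of the hired pool; the common factor P(H | PhD, no pub) cancels.
w_pub = q * (1 + delta)      # relative weight of published hires
w_nopub = (1 - q) * 1.0      # relative weight of unpublished hires
frac_pub = w_pub / (w_pub + w_nopub)   # P(PhD, pub | H)
frac_nopub = 1.0 - frac_pub            # P(PhD, no pub | H)
print(round(frac_pub, 3), round(frac_nopub, 3))   # 0.667 0.333

# Inverting with Eq. (2) recovers the assumed delta:
delta_check = ((1 - q) * frac_pub) / (q * frac_nopub) - 1
print(round(delta_check, 3))  # 1.0
```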
A reasonably good estimate of δ can be obtained from (2), given the value of q as well as the fractions of PhD applicants hired in industry with and without publications. I have looked for that kind of data on the internet, but could not find it; I would greatly appreciate it if anyone could furnish a link.
How large can δ be?
Even if reliable, robust data are unavailable, it is still perhaps possible to make some general estimates. First of all, though, some common sense. If δ were large enough to make a noticeable difference in the outcome of industrial job searches (e.g., to the point where unemployment among PhD scientists were largely confined to those with no publications), or if any such trend had been observed, it would be known by now. Professional societies such as the APS, which have been constantly monitoring employment trends for decades, would have long issued a warning to departments and graduates, urging them to make sure not to leave that aspect uncovered. I am not aware of any such warning, though.
I attended many focus sessions on employment held at APS meetings in the mid-90s, with presentations typically delivered by speakers from industry, and not a single time did I hear even a hint of that. If anything, admonitions consistently went in the opposite direction, namely for advisors and graduate students to worry less about publications and more about developing marketable skills.
Having said the above, we can plug the definition of δ into (1), obtaining α = (1 + qδ) P(H|PhD, no pub); a little algebra then allows us to re-express P(H|PhD, pub) as follows:
P(H|PhD,pub) = [α (1+δ)] / [1+qδ] (3)
Because P(H|PhD,pub) is a probability, it cannot be greater than one, which allows us to place an upper bound on δ, call it δM. Specifically (as long as α > q, which will be the case for the estimates below), one finds that
δM = [1-α] / [α-q] (4)
Here, in order to obtain a reasonable ballpark estimate for δM, one needs to come up with reliable guesses for α and q. Again, I do not have the numbers, but I think it is a plausible guess that δM will not exceed a few percent, maybe ten percent at the most. Here is why:
1) There are good reasons to posit that α should be close to one, something like 0.95 or so, i.e., a freshly minted science PhD looking for industrial employment will almost always find it (I am not discussing job satisfaction here at all). This is because the overall unemployment rate among PhD scientists is a few percent, and only a tiny fraction of them end up employed doing research (see here, for instance, for data pertaining to Canada; there is no reason to expect the situation to be very different in other countries). Thus, many of them do look to industry as a potential source of employment.
2) If q ~ 1, i.e., if almost all science PhDs had published by the time they seek employment, then the denominator of (4) could also be small, and δ could conceivably be large. But there are in fact reasons to believe that q should be something like 0.5 or so, i.e., that roughly half of all the CVs sitting on the desk of an industrial head hunter feature no publications.
We physicists tend to think that everyone with a PhD must have published at least one paper, given the insistence on publication in our own field. But physicists account for just a tiny percentage of all PhDs in Science and Engineering (in Canada, approximately three percent), and publication rates vary greatly across fields. I have not been able to find hard numbers, but in engineering, for example, the rate at which students graduate with their doctorate without publications could be as high as 30% (see here, for instance). And while some of us may not want to consider engineers scientists, if 70% of them publish, the remaining 30% would be harmed by recruiting criteria emphasizing publications no less than science graduates with no publications.
One may also reasonably surmise that the population of those seeking employment in industry is richer in CVs without publications, precisely because of the expectation that a lack of papers will not be seen as as big a weakness in industry as it would be in academia.
If one accepts something like α ~ 0.95 and q ~ 0.5, then the greatest boost that one may expect to derive from being published, when seeking a position in industry, is of the order of 10%. That is, out of a pool of 100 PhD applicants, 50 with publications and 50 without, on average all 50 of the former and 45 of the latter will be hired; in other words, roughly 53% of the hires will have publications, versus 47% without.
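Under the stated guesses (α ~ 0.95 and q ~ 0.5, neither of which is measured data), the bound (4) and the resulting hiring split can be computed directly:

```python
# Upper bound on the PEF, Eq. (4), with the guessed (not measured) inputs.
alpha, q = 0.95, 0.5
delta_max = (1 - alpha) / (alpha - q)    # Eq. (4), valid for alpha > q
print(round(delta_max, 3))  # 0.111, i.e., a boost of about 11% at most

# Hires from a pool of 50 published + 50 unpublished applicants at delta_max:
p_nopub = alpha / (1 + q * delta_max)    # P(H | PhD, no pub), from Eq. (1)
p_pub = (1 + delta_max) * p_nopub        # hits the ceiling of 1.0
hired_pub, hired_nopub = 50 * p_pub, 50 * p_nopub
share_pub = hired_pub / (hired_pub + hired_nopub)
print(round(hired_pub), round(hired_nopub), round(100 * share_pub))
# 50 45 53  -> about 53% of hires have publications, 47% do not
```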
Is this a “big” difference? Maybe it is a matter of opinion. I find it too small to induce departments and advisors to insist on a strict publication requirement, especially for students with a stated aim of a career in industry.
One could argue that, even though in the end everyone finds a job (which is why one meets very few PhD scientists standing in unemployment lines), publications may make it easier to find a job more quickly, or that PhDs with publications may land better jobs, move up the ladder faster, and so on. All of that is certainly possible, even plausible, but I would need to see data in order to be convinced; and of course one would have to weigh any such benefit against the (possibly) longer time spent in graduate school to produce those publications.
In 1996-97 I interviewed with MBNA (in Delaware), and no mention was made of publications. I do not know whether I would have been hired, because I eventually accepted an academic position; however, my “industrial” resume did not include publications, and I was not asked about them during the interview. I was also offered a job as a UNIX system administrator through a head hunter with whom I spoke on the phone; he did not seek any information about my publications, while he did ask a lot of specific questions about my computer knowledge.
During my five years as a faculty member at my former institution, I was directly or indirectly involved in the supervision of at least a dozen undergraduate and as many graduate students, all of whom ended up finding employment in the high-tech private sector in the San Diego area. At the undergraduate and graduate level, as observed by one commenter, there is no expectation that the graduate’s CV will sport peer-reviewed publications. However, a good fraction of our graduates (both at the Bachelor’s and Master’s level) did co-author at least one such article, and I think it is noteworthy that this did not seem to make any difference as far as finding a job: they all did, publications or not.
It is worth stating things precisely here. The type of industrial employment discussed in this post may not be the one that PhD scientists desire at the outset, but it is the one they predominantly end up with, these days: namely, a job with essentially no research component, making little or no use of the scientific knowledge acquired during one’s doctoral studies (see, for instance, Fig. 10 of this document, published in 1994 but still very relevant). For this type of employment, PhD scientists typically compete not only with fellow PhDs in other disciplines, but also with applicants whose highest degree is a Master’s, or even a Bachelor’s supplemented with significant hands-on experience.
There may still exist jobs in the private sector such as those that PhD physicists used to have, not so long ago, at places such as Lucent Technologies, i.e., with an important research component. Despite being formally employed by a private company, a PhD scientist would be hired in those places to perform tasks not too dissimilar from those of a basic science professional working in academia. For this type of job, for which competition among young scientists is typically quite keen, candidate selection is carried out based on criteria that resemble those of academia, not surprisingly — hence a strong publication record would surely be given great emphasis.
These jobs were never plentiful to begin with, and became rarer in the 90s, with the gradual disengagement from basic research of much of the private sector. These days, they are few and far between. They cannot in fairness be taken as representative of the bulk of the industrial job market for PhD scientists.
Conceivably, δ could even be negative (while still remaining greater than minus one), negative values representing a possible, if unlikely, bias of the industrial environment against scholarly publications.
As usual, such a procedure would not establish any kind of causation (it would not prove that those who published obtained their jobs because of their publications), but merely a correlation between publications and hiring. However, the absence of any significant correlation suffices to rule out a substantive influence of one’s publication record on the outcome of a job hunt in industry.
One of the anonymous comments, allegedly by “an Industry scientist responsible for hiring”, states that “no publications is a quick way to reduce your pile of CVs to review”. This suggests that the fraction of applicants without publications is far from small, or the pile of CVs would stay essentially the same size. I have, of course, no way of ascertaining the reliability of such a comment, but, amusingly, in many respects it proves the opposite of what its author may have intended. For, since the numbers seem to show that those publication-less scientists find jobs anyway, even while this author discards their CVs for mere lack of publications, the majority of recruiters ostensibly take a less draconian approach, again supporting the notion that, on average, publications have little impact.
And that is considering the maximum boost that publications can give. I think a more realistic estimate would put δ in the 1% range, i.e., well within the “noise”; at that rate, the school of provenance, for example, is likely to be a far more important discriminating factor.