In a recent brilliant post, condensed matter theorist and blogger Schlupp writes about the fascinating field of condensed matter physics (CMP), also outlining the differences between scientists engaged in theoretical and experimental research.
It is a relatively clear distinction, one that most people, including non-scientists, really have no trouble understanding. Experimentalists do the actual research, i.e., build instruments and make measurements, thereby discovering new phenomena; theorists try to understand what the measurements mean, develop theories based on the observations, ultimately attempting to “explain it all” and predict the outcome of yet-to-be-performed experiments.
Schlupp also draws a further distinction, namely between theorists who do “real theory” and those (like herself and myself) who do “computation”. This is a much subtler one, and I do think it takes the mind of a physicist to appreciate it, because when I try to explain it to people outside physics (or even science), they have absolutely no idea what I am talking about.
This is how a typical conversation between a Sensible Outsider (SO) and myself goes:
SO: Real theory… computation… I do not understand, I thought all theoretical physicists were engaged in some sort of “computation”. I mean, how can you make a theory without supporting it with calculations?
Me: Well, yeah, but, you see, “real theorists” use nothing but pencil and paper. For the most part they work with fairly abstract concepts, eventually arriving at results cast in elegant closed-form mathematical expressions… on the other hand, those who do “computer simulation” (as we call it) put everything on a computer and just get a bunch of numbers out… it is not quite as elegant, as numbers have to be analyzed and interpreted.
SO: Yeah but… whose answer is right (elegant or not)?
Me: Well, analytical (i.e., pencil-and-paper) calculations usually require approximations (often uncontrolled), because most condensed matter problems, involving as they do large numbers of interacting particles, are unsolvable with just pencil and paper… on a computer one can often get an essentially exact answer. But that’s a simplistic way to put it…
SO: How is it “simplistic”? Who cares how one actually does the calculations; what matters is the final result, whether it is correct or not, and what you learn from it… isn’t it?
Me: You would think so, wouldn’t you….
(Fellow physicists can point out our mistakes, but only SOs can really make us feel stupid…)
So, where is the distinction, exactly? Should there be any, really? Both analytical and numerical approaches have benefits and shortcomings; I see them as inseparable, both integral parts of the activity of a theoretical physicist, in CMP as well as in other fields. Obviously, one may be better versed in one than in the other, but the distinction between “computational” and “theoretical” physicists is meaningless. The fact is, one should use whatever tool affords progress, be it a computer, a pencil, or anything else. However, as Schlupp puts it:
“… pencil people sometimes say we cannot provide ‘real’ explanations”
Yup. That’s what they say, and not just “sometimes”. But what exactly makes an explanation “real”, anyway? Is working through pages of algebra, eventually obtaining an analytical expression of some sort, required in order to claim physical understanding?
“All you computer people can do is calculate stuff”, “a real theorist should use computers for e-mail only”, and “clear thinking and computers do not go together” are a few examples of the pearls of wisdom dispensed by otherwise seemingly intelligent colleagues. Particularly annoying to me has always been the relentless insistence on some supposed dichotomy between “thinking” and “calculating”, between “insight” and “numbers”. Sure, the calculation is not the final outcome, but there is no final outcome without the calculation.
It is downright mind-boggling how, in light of the tremendous progress afforded by computers in a wide variety of fields of inquiry, including CMP, a pervasively snotty attitude still exists among theoretical physicists toward those of us who are not afraid of getting our hands dirty with some C++ programming (not Fortran, please — that thing is for losers). In fact, while I would certainly hope that she is right, I am afraid that Schlupp’s assertion that “this despicably ignorant opinion is dwindling” is more a reflection of her youthful optimism than of the actual state of affairs.
From conversations with colleagues at a number of prominent physics departments, whose searches for a condensed matter theorist appear destined to yield a null outcome this year, I have learned that the problem is often precisely that the strongest candidates are “computationally inclined”, and that departments cannot overcome a “strong internal bias against numerics”.
But why? What is it that makes many a prominent, successful, and influential member of our community dismiss as “just numbers” the unquestionable advances made by a growing fraction of theoretical physicists who, having grown tired of endless arguments over “whose approximation is better”, believe that the computing power available nowadays can be harnessed to generate insight, not merely push accuracy to the next digit?
Which of our role models of the past would have turned away in disgust at the notion of using a computer to solve, say, the many-body problem? Certainly not Enrico Fermi, in many respects the father of computational physics, who saw before anyone else the potential benefit of casting theoretical physics problems in a form suitable for fast computing machines.
Certainly not Richard P. Feynman, who was literally obsessed with computation. While he himself never formulated numerical methods (that I know of), almost his entire body of work was devoted to the development of techniques capable of furnishing precise numerical answers to problems in various areas of condensed matter and particle physics; in fact, much of it underlies most modern methodologies for studying analytically intractable CMP problems on a computer. I cannot imagine him objecting to the use of computers to implement efficiently, for example, his famous diagrammatic technique. In a conversation about a decade ago, the prominent Russian physicist Igor Dzyaloshinskii told me that, in his view, his legendary teacher Lev Davidovich Landau, who often relied more on his superb physical intuition than on rigorous calculations, would have enthusiastically embraced the use of computers to do theoretical physics. Honestly, I cannot think of anyone who would not have. The one thing that all of those great scientists had in common was the desire for the truth, the eagerness to get to the bottom of problems by whatever means available. They would not have had any time for ill-advised and misguided “theoretical purism”.
As Schlupp writes, sometimes “we may end up with just a bunch of numbers and no nearer understanding anything.”
No question about it, that does happen. Sometimes scientists making use of numerical simulations are unable to see by themselves the physical implications of their results, and need help from more analytically inclined colleagues to do so. Other times numerics fail to provide any novel insight. So what? As in any human activity, there are successes and failures. The same can be said of pencil-and-paper theory, and even of experiments, some of which fail to generate anything useful in spite of all the money invested in them. A wholesale dismissal of the activity of a sizable fraction of the community, based on the fact that it does not always work or that its practitioners are not all uniformly brilliant, seems silly; it is akin to throwing the baby out with the bath water.
Frankly, I think that there are other reasons for the hostility toward computation, including a mechanism of self-defense by some who feel threatened by the emergence of methodologies that they themselves never got around to mastering, and which may put them “out of business”, so to speak. Interestingly enough, in other academic disciplines such as chemistry, materials science, and engineering, as well as in different settings such as industry, where the need to obtain reliable quantitative answers is more pressing, the widespread use of computers has gained acceptance much more rapidly.
The repeated use by many of us (and them) of the misnomer “computational physics”, which somehow implies a different way of thinking, level of rigor, and/or physics background on the part of those who use computers, does not help. In my opinion, the sooner we abandon this terminology, the better.
This bias has a way of suddenly disappearing as soon as those “real theorists” need to find out whether their beautifully elaborate theories make any useful predictions in the end — in those cases, a collaborator well versed in numerics, who can actually provide reliable results against which to assess their theories, is welcome. But as soon as the research project is finished and the beautiful theory has been proven accurate (all right, maybe to within a factor of 4…) in some unphysical/irrelevant limit, the collaborator turns back into the “computing monkey” that (s)he initially was, and is metaphorically sent back to scientific purgatory. A simple exercise consists of going through the web pages of, say, the top 30 physics departments in the United States and counting how many of their CMP theory faculty engage in research heavily based on computer simulations.
Just to avoid any misunderstanding — there is something beautiful and profoundly satisfying about arriving at a simple answer to a complex question, expressed in a compact mathematical form. Anyone capable of providing such a solution to a non-trivial outstanding problem deserves praise, and makes a valuable contribution to science. However, it is unfortunately the case that this happy state of affairs is usually not realized. And there is absolutely nothing beautiful about a wrong answer, no matter how intriguing its underlying idea or how appealing the mathematical formula that expresses it.