… physicists ruin physics
(bumper sticker — I doubt it exists, but it should)
The efficacy of a computer simulation (or of any other numerical computation) in predicting the behaviour of a physical system hinges crucially on two ingredients:
1) The reliability of the underlying mathematical model adopted to describe the physical system of interest.
2) The accuracy of the numerical technique utilized.
This is just as true for physics as it is for any other field of inquiry — biology, economics, sociology, engineering, in fact for any research endeavour that relies on complex mathematical models, too intricate to be studied analytically.
A “computational scientist” aiming to contribute to the theoretical understanding of a specific outstanding problem in her area is tasked with selecting an appropriate model of the system of interest, as well as with carrying out a careful, rigorous and accurate calculation, using the most effective numerical tool(s). Both aspects are necessary, even though, as in any human endeavour, most of us are better versed in just one of the two.
In a normal world, there would be no need to add anything to what is stated above, which seems obvious, almost self-evident, hardly justifying a blog post.
As it turns out, however, the world is not normal, and that is what makes blogging such a worthwhile and indeed necessary activity. In particular, it is almost stunning how often one runs into seemingly competent scientists and/or intelligent individuals, sometimes computationally inclined or even referring to themselves as “computational physicists”, who act, write and talk as if they were in a state of utter confusion regarding points 1) and 2) above, as well as their inter-dependence.
Let me explain what I mean by that.
There is the model, and there is the computation
First of all, point 1) can be rephrased in layperson’s terms by saying that “the model should make sense”. Really, it’s as simple as that. The mathematical model should have solid physical underpinnings, motivated by what is known experimentally about the system of interest. If one is going to leave anything out of the model, that person had better be ready to justify such a decision, typically by providing quantitatively plausible arguments to the effect that what is being neglected is irrelevant to the phenomenon which one is attempting to understand [1].
Of course, that is all well understood by any scientist worthy of this name. We all know that the world is not two-dimensional, that H2 molecules are not perfectly spherical, that air resistance may be small but never exactly zero, that not all cars travel at the same speed, etc. However, for specific purposes all of these approximations turn out to be perfectly valid, often excellent — obviously things have to be done with care. This is not “rocket science”, really, just common sense.
Hear me now: the requirement that the model make sense has nothing to do with the computational aspect. In other words, 1) and 2) are entirely separate issues. What makes a scientist “computational” is really point 2). There is nothing “numerical” about point 1). Mathematical models of physical systems are formulated, and exist, independently of how they will be studied.
In condensed matter physics, the field with which I am most familiar, I can comfortably state that the vast majority of models (either “first principles” or “toy” models) which have kept computational condensed matter physicists busy over the past six decades were introduced by scientists who knew nothing of numerics — typically because computers did not even exist in those days (at least for practical purposes). There are, to be sure, some models that almost naturally lend themselves to investigation by computer simulation, not least because their analytical treatment is so complicated. However, the converse is also true [2].
A “traditional”, analytical type of theorist [3] is also expected to base her investigation of the phenomenon of interest on a sound, acceptable mathematical description thereof, no less than her numerically inclined counterpart. The difference between the two lies only in how the calculations are done [4]. In both types of computation there are many subtle technical issues involved; there is the possibility of making mistakes, of misusing certain approaches, or of overestimating the power of a method. None of this is any more true of numerics than of analytics. And obviously, regardless of how calculations are done (on a computer or on a sheet of paper), an unphysical or overly simplistic model will yield rubbish, no matter how accurate the numbers. And of course, while the supporting calculations must obviously be carried out correctly, ultimately what renders a theoretical study worthwhile is the insight that it affords.
So, in many respects the distinction between “analytical” and “numerical” is more formal than substantial, and surely does not imply anything fundamentally different in the way the two think or operate. In fact, ideally one should be capable of utilizing whatever methodology is best suited for the problem at hand, whether that be analytical or numerical; in practice, most of us common mortals can at best do one thing well (or, passably), and it makes sense to focus on what we do best.
Anyone maintaining that numerical computation is a downright flawed approach to research, simply because some bad scientists make mistakes or study the wrong models, makes just about as much sense as someone suggesting, for example, that experimental neutron diffraction studies of condensed matter are “inherently flawed” because there are many poorly trained neutron scatterers and condensed matter samples are often of poor quality. It is akin to throwing out the baby with the bath water. It is sheer nonsense.
Likewise, any suggestion that computational physicists are somehow more prone to making technical mistakes, or to utilizing naive or “off-the-wall” models, than their analytical counterparts, is either incredibly ill-informed or maliciously disingenuous. Sloppy work is no more common in computation than in analytical or experimental research — and last time I checked, all major cases of downright fraud involved experimental, not computational, work.
I can see how a non-scientist, or even a scientist not involved in theoretical or computational research, may be confused about points 1) and 2) above. But when people who should know better seemingly fall victim to the same intellectual fallacy, one is left speechless.
Having said that…
In fairness, though, computational scientists themselves are guilty of generating confusion, ultimately making things harder for all of us, for reasons that are best left to sociologists to explain. I am referring to the relentless insistence, on the part of some, on labelling computation a “third approach” to scientific research, “fundamentally different” from theory and sharing some commonalities with experiment. Maybe that line of argument benefited some, but for most of us it has proven deleterious.
Besides being utterly nonsensical (it would be like contending that experimental scientists who utilize a particular investigative technique are “fundamentally different” from the rest of experimentalists), that claim has had the effect of confusing people about the role of computation, in the process making pariahs of those of us who use computers as nothing other than an effective, convenient research tool — we end up being seen as “not quite theorists”, or as “technicians” (a connotation typically used with a pejorative slant). It has offered on a silver platter the perfect excuse to those who have their own personal agenda against numerical computation [5] to pursue it in various settings [6].
I have been observing this dichotomy even within the division of computational physics (DCOMP) of the American Physical Society, in which I have been involved for two years now.
On one side, there are those who are genuinely convinced that DCOMP is the expression of a different way of doing science, and that its members should call themselves “computational”, rather than “theoretical”. Some go as far as advocating the creation of specialized journals and conferences, almost in defiance of a scientific establishment that fails to show due appreciation for the progress afforded by simulation [7].
On the other side, there are those of us who basically regard ourselves as theoretical physicists, and who use computers to tackle physical problems for which numerical approaches can provide, in our view, more accurate information and/or greater qualitative insight than analytical techniques. We see DCOMP as a unit whose existence reflects primarily this particular moment in time, as newer methodologies are introduced that face resistance and skepticism on the part of many, and whose main goal is to promote their acceptance — in short, to render itself useless in the long run. It will be interesting to see which side prevails in this ongoing intellectual argument.
[1] Unless, of course, the deliberate choice is made to leave something out, precisely in order to establish whether or not the physics of interest requires that it be present. That is a very valuable exercise too, insofar as one is trying to understand what the minimal ingredients are that lead to a specific observable outcome.
[2] An interesting example, just to stay within condensed matter theory, is provided by the Hubbard model of correlated electrons, for which exact analytical results were obtained (in the one-dimensional case) decades before they could be reproduced by computer simulations.
[3] This would be one not relying on computing devices or facilities of any kind (to carry out computations, that is — email and Facebook are allowed, no worries). Honestly, sometimes I wonder if such “traditional” type theorists actually still exist nowadays, but let us assume they do, just for the sake of argument. I am referring to them as “analytical” solely for lack of a better word.
[4] I am leaving out of this discussion a minuscule (albeit extremely influential) fraction of ostensibly gifted, very successful theorists who do not need 1) because they do not do calculations — of any kind. The fact that they get away with it points to their brilliance. I never know what to say about them, other than “more power to them”.
[5] Why do they have such an agenda? Well, computation can be pretty brutal when it comes to exposing the flimsiness of trendy theoretical scenarios. There is no question that the fact that one’s approximate results will sooner or later be subjected to a cold, impartial comparison with numerical ones has raised the bar substantially on analytical work.
[6] For example, as members of search committees, they can argue that applications from computational scientists for tenure-track theory positions should be discarded because “these people are not really theorists”. As anonymous referees, they can recommend that a theory paper submitted to a high-profile journal be rejected, on the grounds that “there is no theory here, just computation”.
[7] Personally, I regard that as a crazy direction to take. It is like openly stating to the rest of the community that “computational condensed matter physics”, for example, is inherently different from the “non-computational” kind. Readers of my blog know that one of my pet peeves is journal editors trying to change the title of my manuscript. One of the changes that I resist most vehemently (of course I resist any change, but not all with equal ardor) is when a title like “Phase diagram of xyz” is changed into “Phase diagram of xyz from Monte Carlo simulations”, as if somehow the reader should be warned upfront, or given a reason not to read at all.