*… physicists ruin physics*

(bumper sticker — I doubt it exists, but it should)

The efficacy of a computer simulation (or of any other numerical computation) in predicting the behaviour of a physical system crucially hinges on two ingredients:

1) The reliability of the underlying mathematical model adopted to describe the physical system of interest.

2) The accuracy of the numerical technique utilized.

This is just as true for physics as it is for any other field of inquiry — biology, economics, sociology, engineering, in fact for any research endeavour that relies on complex mathematical models, too intricate to be studied analytically.

A “computational scientist” aiming to make a contribution to the theoretical understanding of a specific outstanding problem in her area is tasked with selecting an appropriate model of the system of interest, *as well as* with carrying out a careful, rigorous and accurate calculation, using the most effective numerical tool(s). Both aspects are necessary, even though, as in any human endeavour, most of us are better versed in just one of the two.

In a normal world, there would be no need to add anything to what was stated above, which seems obvious, almost self-evident, hardly justifying a blog post.

As it turns out, however, the world is not normal, and that is what makes blogging such a worthwhile and indeed necessary activity. In particular, it is almost stunning how often one runs into seemingly competent scientists and/or intelligent individuals, sometimes computationally inclined or even referring to themselves as “computational physicists”, who act, write and talk as if they were in a state of utter confusion regarding points 1) and 2) above, as well as their interdependence.

Let me explain what I mean by that.

**There is the model, and there is the computation**

First of all, point 1) can be rephrased in layperson’s terms by saying that “the model should make sense”. Really, it’s as simple as that. The mathematical model should have solid physical underpinnings, motivated by what is known experimentally about the system of interest. If one is going to leave anything out of the model, that person had better be ready to justify such a decision, typically by providing quantitatively plausible arguments to the effect that what is being neglected is irrelevant to the phenomenon which one is attempting to understand [0].

Of course, that is all well understood by any scientist worthy of this name. We all know that the world is not two-dimensional, that H_{2} molecules are not perfectly spherical, that air resistance may be small but never *exactly zero*, that not all cars travel at the same speed, etc. However, for specific purposes all of these approximations turn out to be perfectly valid, often excellent — obviously things have to be done with care. This is not “rocket science”, really, just common sense.

Hear me now: *the requirement that the model make sense has nothing to do with the computational aspect.* In other words, 1) and 2) are entirely separate issues. What makes a scientist “computational” is really point 2). There is nothing “numerical” about point 1). Mathematical models of physical systems are formulated, and exist, independently of how they will be studied.

In condensed matter physics, the field with which I am most familiar, I can comfortably state that the vast majority of models (either “first principles” or “toy” models) which have kept computational condensed matter physicists busy over the past six decades, were introduced by scientists who knew *nothing* of numerics — typically because computers did not even exist in those days (at least for practical purposes). There are, to be sure, some models that almost naturally seem to lend themselves to investigation by computer simulation, also because their analytical treatment is so complicated. However, the converse is also true [1].

A “traditional”, analytical-type theorist [2] is also expected to base her investigation of the phenomenon of interest on a sound, acceptable mathematical description thereof, no less than her numerically inclined counterpart. The difference between the two is only in how calculations are done [3]. In both types of computation there are many subtle technical issues involved, there is the possibility of making mistakes, of misusing certain approaches or of overestimating the power of the method. This is no more true for numerics than it is for analytics. And obviously, regardless of how calculations are done (on a computer or on a sheet of paper), an unphysical or overly simplistic model will yield rubbish, no matter how accurate the numbers. And of course, while the supporting calculations must obviously be carried out correctly, ultimately what renders a theoretical study worthwhile is the insight that it affords.

So, in many respects the distinction between “analytical” and “numerical” is more formal than substantial, and surely does not imply anything fundamentally different in the way the two think or operate. In fact, ideally one should be capable of utilizing whatever methodology is best suited for the problem at hand, whether that be analytical or numerical; in practice, most of us common mortals can at best do one thing well (or, passably), and it makes sense to focus on what we do best.

Anyone maintaining that numerical computation is a downright flawed approach to research, simply because some bad scientists make mistakes or study the wrong models, makes just about as much sense as someone suggesting, for example, that experimental neutron diffraction studies of condensed matter are “inherently flawed” because there are many poorly trained neutron scatterers and condensed matter samples are often of poor quality. It is akin to throwing out the baby with the bath water. It is sheer nonsense.

Likewise, any suggestion that computational physicists are somehow more prone to making technical mistakes, or utilizing naive or “off-the-wall” models than their analytical counterparts, is either incredibly ill-informed, or maliciously disingenuous. There is no more sloppy computational work than there is analytical or experimental — and last time I checked, all major cases of downright fraud involved experimental, not computational work.

I can see how a non-scientist, or even a scientist not involved in theoretical or computational research, may be confused about points 1) and 2) above. But when people who should know better seemingly fall victim to the same intellectual fallacy, one is left speechless.

**Having said that…**

In fairness, though, computational scientists are themselves guilty of generating confusion, ultimately making things harder for all of us, for reasons that are best left to sociologists to explain. I am referring to the relentless insistence, on the part of some, on labelling computation a “third approach” to scientific research, “fundamentally different” from theory and sharing some commonalities with experiment. Maybe that line of argument did some good for a few, but for most of us it has proven deleterious.

Besides being utterly nonsensical (it would be like contending that experimental scientists who utilize a particular investigative technique are “fundamentally different” from the rest of experimentalists), that claim has had the effect of confusing people about the role of computation, in the process making *pariahs* of those of us who use computers as nothing other than an effective, convenient research *tool* — we end up being seen as “not quite theorists”, or as “technicians” (a connotation typically used with a pejorative slant). It has offered on a silver platter the perfect excuse to those who have their own personal agenda against numerical computation [4] to pursue it in various settings [5].

I have been observing this dichotomy even within the Division of Computational Physics (DCOMP) of the American Physical Society, in which I have been involved for two years now.

On one side, there are those who are genuinely convinced that DCOMP is the expression of a different way of doing science, and that its members should call themselves “computational”, rather than “theoretical”. Some go as far as advocating the creation of specialized journals and conferences, almost in defiance of a scientific establishment that fails to show due appreciation for the progress afforded by simulation [6].

On the other side, there are those of us who basically regard ourselves as theoretical physicists, and who use computers to tackle physical problems for which numerical approaches can provide, in our view, more accurate information and/or greater qualitative insight than analytical techniques. We see DCOMP as a unit whose existence reflects primarily this particular moment in time, as newer methodologies are introduced that face resistance and skepticism on the part of many, and whose main goal is to promote their acceptance — in short, to render itself useless in the long run. It will be interesting to see which side prevails in this ongoing intellectual argument.

**Notes**

[0] Unless, of course, the *deliberate* choice is made to leave something out, *precisely* in order to establish whether or not the physics of interest *requires* that it be present. That is a very valuable exercise too, insofar as one is trying to understand what the *minimal* ingredients are that lead to a specific observable outcome.

[1] An interesting example, just to stay within condensed matter theory, is provided by the Hubbard model of correlated electrons, for which exact analytical results were obtained (in the one-dimensional case) *decades* before they could be reproduced by computer simulations.
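For concreteness, here is a sketch of the one-band Hubbard Hamiltonian in its standard textbook form (my addition, not part of the original footnote):

```latex
% One-band Hubbard model: hopping amplitude t between nearest-neighbour
% sites <i,j>, on-site repulsion U between electrons of opposite spin.
H = -t \sum_{\langle i,j \rangle,\sigma}
      \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```

The exact one-dimensional results alluded to above are those of the Bethe-ansatz (Lieb-Wu) solution.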

[2] This would be one not relying on computing devices or facilities of any kind (to carry out computations, that is — email and Facebook are allowed, no worries). Honestly, sometimes I wonder if such “traditional” type theorists actually still exist nowadays, but let us assume they do, just for the sake of argument. I am referring to them as “analytical” solely for lack of a better word.

[3] I am leaving out of this discussion a minuscule (albeit *extremely* influential) fraction of ostensibly gifted, very successful theorists who do not need 1) because they do not do calculations — *of any kind*. The fact that they get away with it points to their brilliance. I never know what to say about them, other than “more power to them”.

[4] Why do they have such an agenda? Well, computation can be pretty brutal when it comes to exposing the flimsiness of trendy theoretical scenarios. There is no question that the fact that one’s approximate results will sooner or later be subjected to a cold, impartial comparison with numerical ones has raised the bar substantially on analytical work.

[5] For example, as members of search committees, they can argue that applications from computational scientists for tenure-track theory positions should be discarded because “these people are not really theorists”. As anonymous referees, they can recommend that a theory paper submitted to a high-profile journal be rejected, on the grounds that “there is no theory here, just computation”.

[6] Personally, I regard that as a crazy direction to take. It is like openly stating to the rest of the community that “computational condensed matter physics”, for example, is inherently different from the “non-computational” one. Readers of my blog know that one of my pet peeves is with journal editors trying to change the title of my manuscript. One of the changes that I resist most vehemently (of course I resist *any* change, but not all with equal ardor), is when a title like “phase diagram of xyz” is changed into “phase diagram of xyz from Monte Carlo simulations”, as if somehow the reader should be warned upfront, or given a reason not to read at all.

Tags: Physics, Research, Science, Theoretical Physics

August 19, 2011 at 10:38 am

That is correct, there is no flaw with experimental condensed matter physics. The problem is that you guys can’t model sh***y crystals.

Kidding aside, I would be interested in examples that you have seen of the sketchy approximations made by the analytical theorists.

August 19, 2011 at 10:47 am

Are you asking about “approximations” in the model or in the calculations? Or are you still confusing 1) and 2) 😉?

There is no such thing as a “computational model”. The Hubbard model studied by analytical theorists is the same Hubbard model studied by computational ones. Same goes for any other model. There is not a single model introduced for the sole purpose of being studied by computers.

As for “sketchy approximations” made by analytical theorists, well, practically all of the calculations that can be done analytically for non-trivial problems are approximate, through no fault of anyone — problems are just too hard to be solved exactly. Some people are more careful than others about trying to assess quantitatively the reliability of their approximations, either by comparing their predictions to numerical results, or to experiment, or by providing some reasonably impartial arguments in support of their calculations.

August 19, 2011 at 11:45 am

@interested

Actually, the only way to accurately predict the properties of shitty crystals (a.k.a. the so-called “dirty limit”) is numerical. I work on the physics of semiconductors, and all I model are systems with lots of disorder.

I can tell you that you get excellent quantitative data for the performance of the vast majority of electronic and many optoelectronic nanostructures by starting from the Boltzmann equation (that is the theoretical model). Now, the Boltzmann equation is integro-differential and in principle impossible to solve analytically, except in a few simple limits. It also lends itself to a few approximate analytical solutions (e.g. the relaxation time approximation) and a few approximate derivative techniques (hydrodynamic equations). But if you really want an accurate solution of the full Boltzmann equation, then you have to go numerical (there are several techniques, the best being ensemble Monte Carlo; for those who don’t like stochastic techniques, there is the spherical harmonic expansion).
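For reference, a sketch of the standard semiclassical form of the equation being discussed (my addition, the usual textbook notation for the carrier distribution function f(r, k, t)):

```latex
% Semiclassical Boltzmann transport equation:
\frac{\partial f}{\partial t}
  + \mathbf{v}(\mathbf{k}) \cdot \nabla_{\mathbf{r}} f
  + \frac{\mathbf{F}}{\hbar} \cdot \nabla_{\mathbf{k}} f
  = \left( \frac{\partial f}{\partial t} \right)_{\mathrm{coll}}

% The relaxation-time approximation mentioned above replaces the
% collision integral on the right-hand side by
\left( \frac{\partial f}{\partial t} \right)_{\mathrm{coll}}
  \approx -\,\frac{f - f_0}{\tau}
% where f_0 is the equilibrium distribution and tau a scattering time.
```

The collision term couples f at different momenta, which is what makes the full equation integro-differential and, in general, analytically intractable.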

Now, there are cases in which you are not supposed to use the Boltzmann equation at all, such as in highly coherent transport (e.g. low temp transport in nanostructures). If you do, your results are shit, but it’s not the numerics to blame, it’s your fault as a physicist for not knowing the limits of the underlying model.

August 20, 2011 at 7:28 am

Amen to all that. I just wish to add that the same applies to theorists carrying out analytical, “mean field”-type calculations and selling the conclusions as if they were the last word on the subject. There is nothing wrong with mean field per se (heck, it works great for trivial problems…), but there are limitations to it of which a good physicist should be aware.

August 20, 2011 at 9:52 am

I guess it would be better to ask for examples of when models are used (by any physicist, not just the model’s author) that are quite outside of their proper domain.

For example, when does it become ridiculous to talk about the effective mass of an electron?

August 20, 2011 at 9:59 am

Wait. Before you say “for which model?”, just pick one like GMP does.

August 20, 2011 at 10:23 am

I am really not following you. The last thing I would ask, in response to the question “when does it become ridiculous to talk about the effective mass of an electron?”, is “for which model?”. Maybe there is some underlying confusion as to what a “model” is.

The notion of “effective mass” is independent of any model; it is a well-defined physical concept, applicable whenever one can meaningfully work within the independent-electron approximation (OK, I suppose you can regard that as a model) and when the dispersion curve can be reasonably accurately regarded as quadratic. Each and every time either one of these two assumptions breaks down beyond the required level of accuracy, one has to be careful.
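As a sketch of the textbook definition being invoked here (my addition): near a band extremum at k = k_0, the dispersion is expanded as

```latex
% Quadratic expansion of the band dispersion near an extremum at k = k_0:
E(k) \approx E(k_0) + \frac{\hbar^2 (k - k_0)^2}{2 m^{\ast}},
\qquad
\frac{1}{m^{\ast}} = \frac{1}{\hbar^2}
  \left. \frac{\partial^2 E}{\partial k^2} \right|_{k = k_0}
```

The effective mass m* stops being meaningful once higher-order terms in the dispersion, or interactions beyond the independent-electron picture, matter at the accuracy required.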

But… What do “modelling” or “computation” have to do with any of that?

August 20, 2011 at 1:58 pm

Okay, I shouldn’t have chosen an example. Basically, I want to know if you know of interesting examples of when criterion 1) wasn’t satisfied.

August 20, 2011 at 3:36 pm

Well, you may listen to the talk to which I have created a link in the body of my post — the speaker is referring precisely to that scenario as a “failure of computational physics”, in my opinion inaccurately, and provides a few examples of bad physics originated by wrong models and/or assumptions. Like I said, I do not see how that is a failure of “computation”. Wrong models and/or assumptions lead to bad physics regardless of how one does the calculations.

The first example that comes to my mind, even though it is still far from clear what the final answer will be, is the high-temperature superconductivity controversy.

Some believe that the strongly correlated electronic models which a lot of us have studied (such as the t-J or Hubbard) as simple models of the doped copper-oxide planes do not “have the right stuff”, i.e., some important physical mechanisms have been left out.

On the other hand, there are some (like me) who are not quite ready to buy that conclusion yet, because we do not believe that the absence of superconductivity in these models has yet been established convincingly, primarily due to the extreme difficulty of getting reliable results out of them (analytically or numerically). But, it may well turn out that these models are indeed “wrong”, that is, inadequate.

August 25, 2011 at 8:50 pm

Nice post.

IMHO, blaming computers for bad physics is as silly as blaming typewriters for bad poetry.

I agree that the computer is nothing more than another useful new tool for physicists to conduct gedanken experiments (in the broad sense). As I perceive it, bad physics comes about simply because someone wrongly believes that computers and/or numerical algorithms can give straightforward answers, and gives up seeking genuine understanding.