The Division of Computational Physics (DCOMP) of the American Physical Society will be holding its annual meeting in conjunction with the March Meeting, namely that of the Division of Condensed Matter Physics (DCMP), by far the largest division of the APS. That is, of course, no accident. Condensed matter physics is arguably the one area of research in physics in which the use of large-scale computing facilities has had, and continues to have, the greatest impact.
One of the symposia planned for the meeting is entitled Computational Physics: Past, Present and Future. The goal of this symposium, of which I am one of the organizers, is to drive home the point that computation in physics is not just about confirming whatever prediction some other (“real”) theorists have already made, merely “going to the next decimal digit”, as many in my discipline are fond of saying (see this old post of mine for some background). On the contrary, genuine conceptual advances can be made in this way.
A few world-class experts from very different areas of physics will lecture on fundamental contributions to physical science uniquely made by computers over the past few decades. By “uniquely”, one means theoretical advances that could not have been achieved in any known way other than by computation.
What is the point of such a symposium? Well, in order to attempt to provide an answer, maybe I shall take a step or two back, and ask an even more basic question, namely:
What exactly is the point of DCOMP?
Why is there a “Division of Computational Physics”?
Is there really such a thing as an area of physics called “computational physics”? Or is it an “oxymoron”?
You would think that I should know, given that I am a member of its Executive Committee. I suppose I could put a few boilerplate statements together, and who knows, they might even sound believable, but the truth is that I myself ultimately feel that, in an ideal world, there would be no need for such a division.
Its official aim is “[to] explore the use of computers in physics research and education as well as the role of physics in the development of computer technology […] promote research and development in computational physics, enhance the prestige and professional standing of its members, encourage scholarly publication, and promote international cooperation in these activities.”
Personally, I think that “enhancing the prestige and professional standing of its members” is the best reason for having such a Division, except that I might rephrase it as “promoting the use of computers as acceptable, legitimate research tools in theoretical physics”, or “removing the stigma from practitioners of computer simulations”.
In my humble opinion, the other stated goals are either obvious, or unclear, and possibly even pernicious.
Take, for example, “promote research and development in computational physics”. What does it mean? To me, it gives the impression of people spending their time trying to develop generic, all-purpose numerical tools, capable in principle of solving essentially mathematical problems of relevance to various areas of physics. In that sense, a “computational physicist” would be similar to a “mathematical physicist” or an “instrumentation physicist”: not really trying to investigate a physical phenomenon, to learn something new about Nature, but rather building tools to enable others to do it.
For one thing, that tends to perpetuate the notion of computational physicists as glorified technicians: possibly valuable contributors to group efforts, but narrowly trained (and narrowly thinking), usually lacking the knowledge, vision and imagination to suggest or carry out an original research project independently. Absurd and unfair as that sounds, it is how many of us often end up being labeled.
Secondly, while in principle generic method development sounds like a reasonable thing to do, in my experience things do not quite work that way. Again, I do not want to repeat myself, but my observation, over more than two decades in this profession, is that building an all-purpose “mouse trap” and then looking for mice (or other animals) to catch is not nearly as effective as developing a new computational tool, instrument, or even mathematical technique with a specific, narrowly defined physical context in mind, and then, after establishing unambiguously that it is effective in that context, thinking about its application to other problems. In fact, I think most mathematics has originated in that way, not the other way around: I doubt that Hilbert woke up one day thinking “Maybe I’ll make some spaces today… someone might need them”.
But then… why DCOMP?
Truth be told, there is no such thing as a “computational physicist”. Any worthwhile endeavour in theoretical physics, any “Theory” with a capital T, must be supported by accurate, rigorous calculations; otherwise it is just hot air. And there are not many physicists out there who only do calculations, without understanding what they are done for.
There are theoretical physicists who perform those calculations on a computer, instead of using pencil and paper. This has advantages and disadvantages, like any other theoretical approach, and does not make these scientists any less or more worthy of respect than those who do things with pencil and paper.
At the same time, attempting to draw a distinction, to speak of a “third way” as if those of us who do physics this way were fundamentally different scientists, ultimately does more harm than good.
I think that it is on this front that DCOMP can make its most important contribution. Promoting the notion, especially among graduate students and postdocs, that it is OK to use computers to do physics and that there is interesting basic science that can be done that way; clearing out the collective confusion; overcoming the bizarre but very real bias that exists against numerics: these are all worthwhile objectives, and in my mind the main reason for the existence of the Division at this time.
It is precisely in this context that the above symposium is placed.
One mistake that many of us who make heavy use of computation often make is perhaps that of placing excessive emphasis (in talks, articles, and even conference sessions) on the technical advances: on the accuracy of the estimates produced, on how large a matrix we can diagonalize, on the ever more realistic mathematical models that can be studied. Valuable and important as all of this undoubtedly is, drawing attention to the scientific understanding afforded by computation should probably be the main concern. Yes, it is possible to gain original insight into complex physical systems by means of heavily computational approaches.
The examples are numerous. In condensed matter physics, one need only think of the first Molecular Dynamics simulations of hard spheres, and of the theoretical understanding of the physics of simple liquids that this approach has made possible. Or, one can point to Path Integral Monte Carlo simulations of superfluid helium, which, more than any analytical study, have elucidated the relationship between Bose-Einstein Condensation and Superfluidity.
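For readers who have never seen what such a calculation looks like in practice, here is a minimal, purely illustrative sketch: a Metropolis Monte Carlo simulation of hard disks in two dimensions, the simplest relative of the hard-sphere studies mentioned above (the historical Molecular Dynamics work was event-driven, which is a different algorithm; every name and parameter below is invented for illustration, not taken from any of those papers).

```python
import math
import random

def hard_disk_mc(n_disks=16, box=8.0, sigma=1.0,
                 n_steps=2000, max_move=0.3, seed=42):
    """Metropolis Monte Carlo for hard disks in a periodic square box.

    For hard cores the Metropolis rule reduces to: accept a trial move
    if and only if it creates no overlap (the configurational 'energy'
    is either zero or infinite).
    """
    rng = random.Random(seed)
    # Start from a square lattice, which is guaranteed overlap-free
    # at this (low) density.
    side = int(math.ceil(math.sqrt(n_disks)))
    pos = [((i % side + 0.5) * box / side, (i // side + 0.5) * box / side)
           for i in range(n_disks)]

    def dist2(a, b):
        # Squared distance with the minimum-image convention.
        dx = a[0] - b[0]
        dy = a[1] - b[1]
        dx -= box * round(dx / box)
        dy -= box * round(dy / box)
        return dx * dx + dy * dy

    accepted = 0
    for _ in range(n_steps):
        i = rng.randrange(n_disks)
        old = pos[i]
        new = ((old[0] + rng.uniform(-max_move, max_move)) % box,
               (old[1] + rng.uniform(-max_move, max_move)) % box)
        # Accept only if the displaced disk overlaps no other disk.
        if all(dist2(new, pos[j]) >= sigma * sigma
               for j in range(n_disks) if j != i):
            pos[i] = new
            accepted += 1
    return pos, accepted / n_steps
```

In a real study one would of course accumulate observables (pair correlation function, equation of state) over many such sweeps; the point here is only how little machinery separates a graduate student from a working simulation.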
What about outside Condensed Matter Physics? Well, one may look at Astrophysics, a field whose theorists have come to depend on large-scale simulations as the only realistic way to advance knowledge on such subjects as cosmic structure, galaxy formation and dynamics, and dark matter. Other examples come from virtually every other field of physics, as well as from interdisciplinary areas such as non-linear science (which would likely not even exist without computers).
So, the idea is that DCOMP could host a series of symposia such as the one described above, at major international meetings. The target audience, in my mind, consists primarily of junior colleagues (graduate students and postdocs), who are not yet (completely) wedded to a specific way of thinking or doing research, and might be interested in learning about what one can achieve in theoretical physics using numerical computation, regardless of whether they elect later on to make use of it themselves.
Obligatory, if unnecessary and likely useless, disclaimer: no, I do not believe that numerics is the “end of it all”, and that all theoretical physics should be just running computer codes. I just think that it is a legitimate, powerful and flexible methodology for doing calculations. While theoretical physics cannot be reduced to mere calculating, it does require it. Why do I say that such a disclaimer is “unnecessary and likely useless”? Well, it is unnecessary because no sane person would ever say that computation is all there is to theoretical physics; it is useless because this kind of straw-man argument is simply too easy and convenient to use.