An important part of the activity of a theoretical physicist consists of performing calculations. A first-principles mathematical model of a physical system of interest (e.g., a magnet) is assumed, and the value of some experimentally measurable physical quantity is computed under well-defined (thermodynamic) conditions. The comparison of the calculated values of that quantity with those experimentally measured serves as a fundamental validation of the mathematical model, which can then be utilized to predict the outcome of experiments yet to be performed on the same physical system.
It is typically the case that calculations cannot be carried out exactly, due to the intrinsic difficulty of the mathematical model, which reflects the intricacy of the underlying physical system of interest. In my own research area for example, condensed matter physics, one studies the physical properties of large assemblies of particles, all mutually interacting; this renders exact solutions, expressed as complete, closed-form analytical expressions, essentially unachievable for all but a few, relatively simple textbook cases. One can often go a bit further numerically, but even this type of approach has limitations of its own.
What does one do, then? Typically, one resorts to approximations, virtually all of which consist of simplifying the initial mathematical model (e.g., by eliminating some terms), in order to render it analytically or numerically more tractable. Results obtained for such a “simplified” model can be useful, as long as they may still be regarded as relevant to the original problem, i.e., at least qualitatively similar to those which one would have obtained, had one been able to tackle the initial model.
But how does one know that this is indeed the case? After all, if the physics of the simplified model were known to be close enough to that of the initial “first principles” model, why bother with that difficult model in the first place?
Physical intuition, analogy with similar systems for which some basic results are known, as well as fundamental theoretical considerations are typically invoked in order to justify the approximations made. Ultimately, however, the fact remains that there is no a priori reason to buy approximate results; in fact, the most interesting cases are precisely those for which the most obvious approximations fail, either because they are not workable (i.e., no piece of the initial model can be eliminated without severely altering it) or because they ostensibly yield unphysical predictions.
There is only one way of lending some credibility to approximate calculations, and that consists of assessing the reliability of the approximation by comparing the results that it yields to exact results, in those specific, non-trivial and cogent cases for which the latter are known.
In condensed matter physics, there are two general types of exact results against which approximate approaches can be benchmarked:
- Exact analytical solutions. As mentioned above, these are unfortunately available for very few cases only, typically not very relevant to the ones of interest, due to the crudeness of the model for which they can be obtained.
- Numerical results, normally obtained for the full model, but on a system of limited size (i.e., only a few particles), due to either technical or technological limitations. The usefulness of these results stems directly from the fact that they are obtained for the full-fledged model, without any approximation.
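To make the benchmarking idea concrete, here is a minimal sketch (in Python with NumPy; the model, coupling, and temperatures are chosen purely for illustration, not taken from any particular paper) comparing a mean-field approximation of the one-dimensional Ising chain against the exact transfer-matrix result:

```python
import numpy as np

J = 1.0  # nearest-neighbour coupling (illustrative units, k_B = 1)

def exact_energy(T):
    """Exact energy per spin of the 1D Ising chain (transfer-matrix result, zero field)."""
    return -J * np.tanh(J / T)

def mean_field_energy(T):
    """Mean-field approximation: solve m = tanh(z*J*m/T), with z = 2 neighbours in 1D."""
    m = 1.0
    for _ in range(2000):             # simple fixed-point iteration
        m = np.tanh(2.0 * J * m / T)
    return -J * m * m                 # E_MF = -(z/2) * J * m^2

for T in (0.5, 1.5, 3.0):
    print(f"T = {T}: exact = {exact_energy(T):+.4f}, "
          f"mean field = {mean_field_energy(T):+.4f}")
```

The comparison against the exact solution immediately exposes the approximation's failure: mean-field theory predicts a spurious ordered phase below T = 2J (the exact chain never orders at finite temperature), and above that temperature it misses the short-range correlations entirely, giving zero energy. This is precisely the kind of stringent test that known exact results make possible.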
Now, you would think that the following should be widely accepted, and regarded as obvious: the first, fundamental test of any new computational methodology, or approximate analytical approach, should be precisely that of reproducing known results for those few, specific instances for which they are available. How could an approach failing to do so otherwise be regarded as a legitimate, reliable methodology in the general case? How could any prediction based on it be taken seriously?
And yet, much to my surprise, this all-important validating process appears to be often all but skipped. Increasingly I see manuscripts published with bold predictions based on computational procedures whose reliability has not been assessed, and in fact is virtually unknown. [0]
I have recently had a number of conversations with students and postdocs working on new computational techniques, who are consistently told by their advisors or principal investigators to focus on getting new results out with the new technique as quickly as possible, without bothering to reproduce known results — after all, they are known; what would one be doing that for…
Excuse me? Has competition among scientists reached the point that the reliability of the methodology adopted, or even the accuracy of the results obtained, is no longer an issue? Can we just publish some numbers pulled out of somewhere, and be done with it?
I do not think so. In fact, let us get one thing straight: anyone who uses the argument that “results for that case are already known” in order to skip performing a stringent test of the accuracy of a proposed new method, is disingenuous. (S)he is simply trying to oversell the new method, by avoiding an assessment that could possibly yield a disappointing outcome. There is no other possible reason for acting that way, and yes, it is bad scientific practice [1].
Notes
[0] Even more irritating and nonsensical is the following occurrence, which I observe very frequently. Someone does a half-assed, crude calculation, obtaining some predictions that no sane person would take at face value. Subsequently, someone else using a more accurate method, more difficult to implement but yielding more robust and reliable results, happens to confirm, even if only in part, the original crude predictions. The first author will be credited with the discovery, even though the initial prediction is little more than a lucky guess, whereas the more reliable results obtained by the second author will be dismissed as “mere confirmation of known physics”. An unfortunate consequence of this misguided conception is that it encourages scientists to sacrifice rigor and hard work (which require time) in favor of crude, hand-waving arguments that allow one to publish quickly results that eventually, unfairly, come to be regarded as the first correct ones.
[1] Equally unacceptable is to carry out misleading and bogus comparisons, e.g., with other approximate calculations, or with whatever one may have done in the past — improving upon one’s past work need not mean doing things well.
April 12, 2009 at 6:32 pm
HA! Yes, indeed. And what happened to the old fashioned practice of “finite size scaling”, if I may ask? When I was a student, we did finite size scaling in the snow and uphill! And were grateful for it! (I know that finite size scaling is sometimes not practical and sometimes not necessary. These situations are not what I mean.)
And what I also hate is “modelling” with so many parameters and so sloppy methods that one can reproduce (but not predict) ANYTHING! Hint: if you have considerably more parameters than data points, don’t count on my being very impressed.
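For what it is worth, the “more parameters than data points” pathology is trivial to demonstrate with a throwaway sketch (Python/NumPy; the data here are invented noise, purely for illustration): fit six random numbers with a six-parameter polynomial, and the “model” reproduces them perfectly while predicting nothing.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 6)
y = rng.normal(size=6)               # pure noise: there is nothing real to model

coeffs = np.polyfit(x, y, deg=5)     # 6 parameters for 6 data points
residual = np.max(np.abs(np.polyval(coeffs, x) - y))
print(f"max residual on the data: {residual:.2e}")  # essentially zero
print(f"'prediction' at x = 1.5:   {np.polyval(coeffs, 1.5):+.2f}")
```

Perfect reproduction of the input, zero predictive content.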
BTW, I am fairly sure that the alternative expression for “slave driver” is “principal investigator”, not “principle investigator”. Especially if we are talking about as unprincipled specimens as the ones mentioned by you.
April 13, 2009 at 3:40 am
Thank you — typo fixed. I think modeling in condensed matter physics seldom suffers from the abundance of parameters that you rightfully lament. In other areas of physics that may be more of an issue, I sense.
April 13, 2009 at 7:02 am
I guess you are right that hyperparametric disorder is even more of an issue in other areas of physics. Only, since I read more CM papers, it bothers me more in CM….. (Yes, I may be a leeeeettle bit self centered.)
April 13, 2009 at 10:20 am
Hey, you and I need to have one of our bitchin’ sessions, at some point, regarding a recent preprint of which I am sure you are aware…
April 13, 2009 at 4:29 am
What surprises me is that someone could get something like this past a journal editor.
April 13, 2009 at 4:45 am
You’d be amazed at what the right affiliation and appropriate connections can do for you.
April 13, 2009 at 6:30 am
All this makes me happy to be an experimentalist. Especially when it comes to biology!
April 13, 2009 at 8:11 am
Because crappy thinking and implausible models are never an issue in biology…
April 13, 2009 at 10:30 am
No, but at least most often there’s no false sense of accuracy conjoined with a whole mess of putative significant digits.