An important part of the activity of a theoretical physicist consists of performing calculations. A first-principles mathematical model of a physical system of interest (e.g., a magnet) is assumed, and the value of some experimentally measurable physical quantity is computed under well-defined (thermodynamic) conditions. The comparison of the calculated values of that quantity with those experimentally measured serves as a fundamental validation of the mathematical model, which can then be used to predict the outcome of experiments yet to be performed on the same physical system.
It is typically the case that calculations cannot be carried out exactly, due to the intrinsic difficulty of the mathematical model, which reflects the intricacy of the underlying physical system of interest. In my own research area, condensed matter physics, for example, one studies the physical properties of large assemblies of particles, all mutually interacting; this renders exact solutions, expressed as complete, closed-form analytical expressions, essentially unachievable for all but a few relatively simple textbook cases. One can often go a bit further numerically, but even this type of approach has limitations of its own.
What does one do then? Typically, one resorts to approximations, virtually all of which consist of simplifying the initial mathematical model (e.g., by eliminating some terms) in order to render it analytically or numerically more tractable. Results obtained for such a “simplified” model can be useful, as long as they may still be regarded as relevant to the original problem, i.e., at least qualitatively similar to those which one would have obtained, had one been able to tackle the initial model.
But how does one know that that is indeed the case? After all, if the physics of the simplified model were known to be close enough to that of the initial “first principles” model, why bother with that difficult model in the first place?
Physical intuition, analogy with similar systems for which some basic results are known, as well as fundamental theoretical considerations are typically invoked in order to justify the approximations made. Ultimately, however, the fact remains that there is no a priori reason to buy approximate results; in fact, the most interesting cases are precisely those for which the most obvious approximations fail, either because they are not workable (i.e., no piece of the initial model can be eliminated without severely altering it) or because they manifestly yield unphysical predictions.
There is only one way of lending credibility to approximate calculations, and that consists of assessing the reliability of the approximation by comparing the results that it yields to exact results, in specific, non-trivial, and cogent cases for which the latter are known.
In condensed matter physics, there are two general types of exact results against which approximate approaches can be benchmarked:
- Exact analytical solutions. As mentioned above, these are unfortunately available for very few cases only, typically not very relevant to the ones of interest, due to the crudeness of the models for which they can be obtained.
- Numerical results, normally obtained for the full model but on a system of limited size (i.e., only a few particles), due to either technical or technological limitations. The usefulness of these results stems directly from the fact that they are obtained for the full-fledged model, without any approximation.
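To make the idea of such a benchmark concrete, here is a minimal toy sketch of my own (not drawn from any specific methodology discussed here): the mean-field approximation to the one-dimensional Ising chain, compared against the exact internal energy per spin, which is known in closed form. The function names and parameter choices are purely illustrative.

```python
import math

def exact_energy(J, T):
    """Exact internal energy per spin of the 1D Ising chain
    (zero field, k_B = 1): u = -J * tanh(J / T)."""
    return -J * math.tanh(J / T)

def mean_field_energy(J, T, z=2, iters=2000):
    """Mean-field estimate for the same chain (coordination z = 2):
    solve the self-consistency condition m = tanh(z*J*m / T) by
    fixed-point iteration, then u = -(z*J/2) * m**2."""
    m = 1.0  # start from the fully polarized state
    for _ in range(iters):
        m = math.tanh(z * J * m / T)
    return -0.5 * z * J * m * m

if __name__ == "__main__":
    J = 1.0
    for T in (0.5, 1.0, 3.0):
        print(f"T = {T}: exact = {exact_energy(J, T):+.4f}, "
              f"mean field = {mean_field_energy(J, T):+.4f}")
```

At low temperature the two agree reasonably well, but above the spurious mean-field transition (here at T = zJ = 2J, whereas the exact chain only orders at T = 0) the mean-field energy collapses to zero while the exact one does not. This is exactly what a benchmark against an exact result buys you: it exposes where, and how badly, the approximation breaks down.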
Now, you would think that the following should be widely accepted, and regarded as obvious: the first, fundamental test of any new computational methodology, or approximate analytical approach, should be precisely that of reproducing known results for those few, specific instances for which they are available. How, otherwise, could an approach failing to do so be regarded as a legitimate, reliable methodology in the general case? How could any prediction based on it be taken seriously?
And yet, much to my surprise, this all-important validation process appears to be often all but skipped. Increasingly, I see manuscripts published with bold predictions based on computational procedures whose reliability has not been assessed, and is in fact virtually unknown.
I have recently had a number of conversations with students and postdocs working on new computational techniques, who are consistently told by their advisors or principal investigators to focus on getting new results out with the new technique as quickly as possible, without bothering to reproduce known results — after all, they are known, what would one be doing that for…
Excuse me? Has competition among scientists reached the point where the reliability of the methodology adopted, or even the accuracy of the results obtained, is no longer an issue? Can we just publish some numbers pulled out of somewhere, and be done with it?
I do not think so. In fact, let us get one thing straight: anyone who uses the argument that “results for that case are already known” in order to skip performing a stringent test of the accuracy of a proposed new method is being disingenuous. (S)he is simply trying to oversell the new method, by avoiding an assessment that could possibly yield a disappointing outcome. There is no other possible reason for acting that way, and yes, it is bad scientific practice.
Even more irritating and nonsensical is the following occurrence, which I observe very frequently. Someone does a half-assed, crude calculation, obtaining predictions that no sane person would take at face value. Subsequently, someone else, using a more accurate method (more difficult to implement, but yielding more robust and reliable results), happens to confirm, even if only in part, the original crude predictions. The first author will be credited with the discovery, even though the initial prediction is little more than a lucky guess, whereas the more reliable results obtained by the second author will be dismissed as “mere confirmation of known physics”. An unfortunate consequence of this misguided attitude is that it encourages scientists to sacrifice rigor and hard work (which require time) in favor of crude, hand-waving arguments that allow one to publish quickly results that eventually, and unfairly, come to be regarded as the first correct ones.
Equally unacceptable is to carry out misleading and bogus comparisons, e.g., with other approximate calculations, or with whatever one may have done in the past — improving upon one’s past work need not mean doing things well.