In one of his recent posts, Doug Natelson describes the difference between first-principles calculations and those instead based on so-called “toy models”.
First-principles calculations aim to incorporate as much of reality as is known, down to the most fundamental constituents and interactions required (in condensed matter physics, for example, that reasonably amounts to regarding a crystalline solid as an assembly of electrons and ions, all interacting electrostatically).
But theoretical physicists like to play with “toy models” as well, i.e., highly idealized representations of physical reality that cannot (and are not even designed to) provide a quantitatively accurate description of a particular physical system, but rather attempt to capture only its bare essentials.
What is the point of such an exercise?
It is not at all obvious why there would be any use in studying mathematical models in which, say, the world is two-dimensional, or atoms are assumed to be “hard spheres”, or a caricature description of a ferromagnet is adopted, one in which many simple elementary magnets are arranged on a regular lattice, each magnet interacting only with those nearby.
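That lattice-of-elementary-magnets caricature is the Ising model, and its bare-bones character shows in how little is needed to simulate it. Here is a minimal sketch (a nearest-neighbor square lattice with coupling J = 1, sampled with the standard Metropolis algorithm; the lattice size, temperature, and sweep count are arbitrary illustrative choices):

```python
import math
import random

def ising_magnetization(L=16, T=1.0, sweeps=200, seed=1):
    """Metropolis simulation of the 2D Ising model (J = 1, periodic
    boundaries). Returns the magnetization per spin, |m|."""
    rng = random.Random(seed)
    # Start fully magnetized, so low-temperature runs are not trapped
    # in a multi-domain pattern.
    spins = [[1] * L for _ in range(L)]
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            # Energy cost of flipping spin (i, j): dE = 2 * s_ij * (sum of neighbors)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb
            # Metropolis rule: accept the flip with probability min(1, e^(-dE/T))
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] *= -1
    return abs(sum(map(sum, spins))) / (L * L)
```

Run well below the critical temperature (T_c ≈ 2.27 in these units), the magnetization per spin stays close to 1; run well above it, it drops toward 0. That is ferromagnetism in qualitative outline, with no atomic detail whatsoever.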
Should a theoretical physicist not always attempt calculations in which accurate, realistic descriptions of the physical system of interest are used, and predictions are made that can be directly compared to experimental measurement? If that is not yet possible, should it not be the goal of research in the various branches of theoretical physics? Why would anyone waste time with toy models?
If one could always come up with accurate numbers, allowing reliable quantitative predictions for the outcome of any experiment, should the goal of theoretical physics not be considered achieved? Would we not know, at that point, all that there is to know? No.
Theoretical physics research amounts to far more than producing numbers to compare to experiment. What one is after is physical insight, which consists mostly of identifying the main physical mechanism underlying a particular phenomenon, while eliminating everything that makes only a minor quantitative contribution (e.g., shifting the value of a transition temperature by a few percent). Insight arises as a simple starting point is chosen and complexity is gradually built on top of it, for as long as observed physical features remain unaccounted for. This type of insight, of understanding, cannot be afforded by any first-principles calculation, no matter how accurate.
That is precisely the purpose of a toy model, which makes it far from devoid of meaning. In fact, the use of toy models mimics how our mind works, creating complexity by assembling simple objects and concepts.
This is why a considerable fraction of all the theoretical work in areas such as statistical mechanics and materials science takes as its object simple models, apparently unrelated to the complex phenomena they aim to describe.
A useful toy model should be:
- Simple to treat mathematically and/or numerically
- Capable of describing (mostly qualitatively) the fundamental physics of a particular system under examination, or of a particular class of phenomena, which may not depend crucially on the microscopic details of the system (e.g., the interaction between atoms or molecules).
It is mainly the second point, formally summarized in the idea of universality, that provides a major justification for studying artificial, highly simplified models. This is typically the case in the study of phase transitions and critical phenomena.
Consider the following example: the figure above shows the experimentally determined liquid-gas coexistence curve for eight different fluids. Clearly, despite the very different atomic and molecular structures, presumably resulting in vastly different elementary interactions, all of the coexistence curves essentially collapse onto one another upon a simple rescaling, once experimental uncertainties are taken into account.
This leads one to believe that, at least as far as the shape of the coexistence curve is concerned, the interactions cannot play too important a role, and that the details of the above curve can probably be understood by studying a simple model in which particles interact via some crude, artificial potential.
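The “simple rescaling” in question is just a change to reduced variables, T/T_c and ρ/ρ_c (or V/V_c). The oldest illustration of why such a collapse can occur comes from the van der Waals equation, itself a toy model: written in reduced variables it contains no material constants at all, so every van der Waals fluid obeys the same reduced equation of state. A minimal sketch (the a and b values below are illustrative placeholders, not tabulated data for any real fluid):

```python
R = 8.314  # gas constant, J/(mol K)

def vdw_pressure(T, V, a, b):
    """van der Waals equation of state: p = RT/(V - b) - a/V^2."""
    return R * T / (V - b) - a / V**2

def critical_point(a, b):
    """Critical constants of the van der Waals fluid: Tc, Vc, pc."""
    return 8 * a / (27 * R * b), 3 * b, a / (27 * b**2)

def reduced_isotherm(a, b, T_r, V_r_values):
    """Reduced pressure p/pc along the isotherm T/Tc = T_r."""
    Tc, Vc, pc = critical_point(a, b)
    return [vdw_pressure(T_r * Tc, V_r * Vc, a, b) / pc for V_r in V_r_values]

# Two "fluids" with very different (illustrative) interaction constants...
V_r = [0.6, 1.0, 2.0, 5.0]
curve1 = reduced_isotherm(a=0.14, b=3.2e-5, T_r=0.9, V_r_values=V_r)
curve2 = reduced_isotherm(a=1.40, b=3.2e-4, T_r=0.9, V_r_values=V_r)
# ...trace the same reduced curve: p/pc = 8*T_r/(3*V_r - 1) - 3/V_r**2
```

Both isotherms coincide point by point: dividing p, V, and T by their critical values eliminates a and b from the equation entirely, which is the (exact, model-level) analogue of the empirical collapse in the figure.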
There are specific cases in which a remarkably simple, toy-like microscopic model can be formulated that very nearly reproduces, even quantitatively, all experimental observations: theoretical high-energy physics (where the simplicity of the model is the result of the quest for the ultimate building blocks), or, in condensed matter physics, liquid helium and assemblies of cold atoms in optical lattices. Such cases aside, first-principles calculations and theoretical work based on toy models differ in scope, philosophy and methodology. Comparing them is akin to comparing apples and oranges.
It is worth noting that the use of artificial models, seemingly too naive to be relevant to any system of interest, is not a peculiar feature of theoretical physics. In all scientific disciplines, including biology and economics, toy models are vastly utilized, and their practitioners are subjected to the same kind of criticism that theoretical physicists suffer, namely that of studying objects much too far removed from the reality to which they claim to have some relevance. It is a criticism that simply misses the point.
The major advantage of first-principles calculations, namely that they include (nearly) everything, is also their most significant pitfall, as it makes it very difficult to disentangle the various physical mechanisms and their relative relevance to a particular effect.
Taken from E. A. Guggenheim, “The principle of corresponding states”, J. Chem. Phys. 13, 253 (1945).