A long and tiring term is coming to a close. Time to celebrate the holidays, then head out to Vancouver for a few days, to end 2011, and then it will be a new year and a new term. The Winter term of 2012 is also going to be very intense, but for different reasons — I have quite a bit of traveling ahead of me. Indeed, it looks as if I shall be in Europe (Germany and Italy) until Summer.
Archive for the ‘Theoretical’ Category
… physicists ruin physics
(bumper sticker — I doubt if it exists, but it should)
The efficacy of a computer simulation (or of any other numerical computation) in predicting the behaviour of a physical system crucially hinges on two ingredients:
1) The reliability of the underlying mathematical model adopted to describe the physical system of interest.
2) The accuracy of the numerical technique utilized.
This is just as true for physics as it is for any other field of inquiry — biology, economics, sociology, engineering, in fact for any research endeavour that relies on complex mathematical models, too intricate to be studied analytically.
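The second ingredient is easy to underestimate. As a minimal illustration (my own toy example, not one from the post): take the same mathematical model, a simple harmonic oscillator, and integrate it with two numerical schemes of identical cost per step. Forward Euler makes the energy grow without bound; the semi-implicit (symplectic) variant keeps it bounded. The model is fine in both cases; only the numerical technique differs.

```python
# Same physical model (harmonic oscillator, x'' = -x, exact energy E = 0.5
# for x(0)=1, v(0)=0), two integration schemes of equal cost per step.

def integrate(method, steps=10000, dt=0.01):
    """Return the final energy E = (x^2 + v^2)/2 after `steps` steps."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        if method == "forward":
            # forward Euler: both updates use the old values
            x, v = x + dt * v, v - dt * x
        else:
            # semi-implicit (symplectic) Euler: x update uses the new v
            v = v - dt * x
            x = x + dt * v
    return 0.5 * (x * x + v * v)

print(integrate("forward"))     # energy drifts well above the exact 0.5
print(integrate("symplectic"))  # energy stays close to 0.5
```

For forward Euler one can show that the energy is multiplied by (1 + dt²) at every step, so the drift is systematic, not mere round-off; the symplectic scheme instead conserves a slightly modified energy exactly, which is why it stays put.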
Given that every other day we seem to be telling each other appalling stories of the disgraceful article refereeing to which we are subjected, I think we should all try and agree, first of all among ourselves, on what constitutes “bad refereeing”, and pledge to each other not to do it. Ever.
After all, it is as good a starting point as any other. If we are successful at convincing others to agree to our code of conduct, we might see some difference in the future.
The Division of Computational Physics of the American Physical Society now has its own blog, brought to you by its very own executive committee, of which I am presently a “member-at-large” (soon to be apprehended and taken into custody, I guess). The latest post advertises the upcoming conference on computational physics (CCP 2011), to be held in Gatlinburg (TN), October 30th – November 3rd, 2011.
Now, what exactly is this conference all about?
One common feature that seems to characterize many prominent scientists in the prime of their careers is that they direct large research groups. Loners are rare.
OK, first of all, what does “large” mean, in such a context?
Well, it varies across disciplines. For someone like me, a condensed matter physicist engaged in theoretical research, a large group is one with more than three graduate students and two postdocs; indeed, one with three graduate students and two postdocs is one I would already call fairly large.
I have seen theory groups that are (much) larger than that, though, with a number of graduate students hovering around ten, maybe five or six postdocs of different seniority, a few undergraduate students, maybe one or two visiting scientists, and possibly even a secretary and a technician (e.g., a computer system administrator).
The Division of Computational Physics (DCOMP) of the American Physical Society will be holding its annual meeting in conjunction with the March meeting, namely that of the Division of Condensed Matter Physics (DCMP), by far the largest division of the APS. That is, of course, no accident. Arguably, condensed matter physics is the one area of research in physics in which the use of large-scale computing facilities has had (and continues to have) the greatest impact.
What is the job of a theoretical physicist? Isn’t physics an experimental science? Should discoveries not always occur as a result of reproducible, controlled laboratory observations? Perhaps in no other scientific discipline is the division between “theorists” and “experimentalists” so well-defined and rigid as in physics.
In one of his latest posts, Doug Natelson describes the difference between first-principles calculations and those based on so-called “toy models”.
First-principles calculations aim to incorporate as much of reality as is known, down to the most fundamental constituents and interactions required (for example, in condensed matter physics that reasonably amounts to regarding, e.g., a crystalline solid as an assembly of electrons and ions, all interacting electrostatically).
But theoretical physicists like to play with “toy models” as well, i.e., highly idealized representations of physical reality that cannot (and are not even designed to) provide a quantitatively accurate description of a particular physical system, but rather attempt to capture only its bare essentials.
What is the point of such an exercise?
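To make the idea concrete with an example of my own choosing (the excerpt names no specific model): the one-dimensional Ising chain, H = -J Σᵢ sᵢsᵢ₊₁ with spins sᵢ = ±1, is perhaps the archetypal toy model. No real magnet looks like this, yet it already captures the competition between interactions and temperature, and for a short chain every configuration can be enumerated exactly.

```python
# Toy model example: a periodic 1D Ising chain of n spins, s_i = +/-1,
# with H = -J * sum_i s_i s_{i+1}.  Small enough to enumerate exactly.

from itertools import product
from math import cosh, sinh, exp

def partition_function(n, K):
    """Exact Z for an n-spin periodic Ising chain; K = J/(k_B T)."""
    Z = 0.0
    for spins in product((-1, 1), repeat=n):
        # energy in units of J, with periodic boundary conditions
        energy = -sum(spins[i] * spins[(i + 1) % n] for i in range(n))
        Z += exp(-K * energy)
    return Z

# The brute-force sum over all 2^n configurations reproduces the known
# closed form Z = (2 cosh K)^n + (2 sinh K)^n for the periodic chain.
n, K = 6, 0.7
print(partition_function(n, K))
print((2 * cosh(K)) ** n + (2 * sinh(K)) ** n)
```

The payoff of a toy model is visible even here: because the model is so stripped down, an exact analytical answer exists against which any numerical scheme can be checked, something a first-principles description of a real magnet could never offer.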
Much of the current research work in theoretical physics involves numerical computation, because the calculations themselves are too complicated to be tackled analytically (i.e., with pencil and paper). Writing a code suitable to carry out calculations of a specific type, especially one that is flexible, easy enough to use and relatively general in scope, is a major undertaking, one that can consume the better part of a doctoral thesis, for example.
Once the code is functioning, and the project for which it was initially written has been completed, the researcher who developed it often finds him/herself in the somewhat enviable position of having a tool that may be of interest to others, with which different, relevant and important problems may be investigated.
One issue that often comes up in conversation is: what is the accepted protocol for sharing a code with other investigators or groups?
Amidst the global recession, funding for basic scientific research is suffering cuts just about everywhere (especially worrisome is the situation in my country). As these cuts trickle down all the way to individual research grants, academics and other professional scientists have to take a hard look at their budgets, and decide which expenses will have to be forgone.