Doug Natelson has done an outstanding job of debunking a ridiculous charge of confirmation bias allegedly affecting a recent study of climate change. The charge was put forth in an article published in the popular press (in a very prominent venue). While ostensibly aimed at educating the general public about some aspects of how science works, the article sneakily rehashes one of the most common and dangerous misconceptions about science, namely that in the end it is not as “objective” as its practitioners claim.
The argument goes as follows: a scientist is a human being; she has her own axe to grind, and her judgment can be clouded by her own preconceived idea of how a natural phenomenon should be interpreted. Consequently, what is peddled to the public as impartial, rigorous experimental observation is nothing but someone’s personal, biased view of how the world works. The supposedly objective “data” upon which the scientist’s conclusions are based are merely the restricted subset thereof that supports her beliefs; her interpretation is just one of very many possible, and the decision to make hers the “canonical” one, as opposed to any other, is essentially ideological, there being no accepted criterion to ascribe more credence to her theory than to a competing one.
Wildly off base and pernicious as it is, this contention is unfortunately (at least in my experience) popular among non-scientists, including otherwise highly educated individuals. And in a way, this is scarcely surprising.
Undermining the respect that people have for science, spreading the false belief that experimental data are themselves “subjective”, that picking one of the many competing theories as the correct one ultimately entails an act of faith — this rhetoric serves the interests of easily identifiable, very influential groups.
It helps set the stage for a political discourse and societal debate in which facts are irrelevant, any attempt to evaluate an argument on its merits is vacuous, in which careful, objective scrutiny is forgotten and free thinking is replaced by ideological conformity, i.e., one’s stance on any issue must reflect a choice of camp, or party affiliation. In such an environment, it becomes easier for those who have the means and the resources to “scream the loudest” to advance their own agenda.
Make no mistake, science is not perfect. Yes, its practitioners are human, we all have been guilty of sloppy thinking at times, and yes, on occasion one succumbs to the seduction of possible fame, success and recognition, failing to exercise the required thoroughness and skepticism, especially when a “breakthrough” seems at hand.
Or, as in the case of research fraught with broad potential consequences for society as a whole, such as climate change, a researcher may be so wedded to, so deeply convinced of the validity of, a specific hypothesis as to be effectively unable to examine the data objectively, consciously or unconsciously giving greater emphasis to those that appear to confirm her preconceptions.
That all of the above can happen is not even debatable. But going from that to positing that scientific research suffers from a built-in credibility gap due to confirmation bias requires a great deal of ignorance (or, alternatively, a deliberate intent to mislead).
What is “confirmation bias”?
Let us consider a very simple example. Say I am a theoretical condensed matter physicist, and I develop my own model of the interaction between two identical molecules — water, for instance. Based on this model, I perform microscopic calculations (e.g., on a computer, using Molecular Dynamics) and compute the equation of state of water — you give me the temperature and the pressure at which the water is held, and I shall predict its density for you, at thermodynamic equilibrium, in a given range of values of these parameters.
Obviously, in order for my work to be useful and/or taken seriously, my prediction has to agree with experiment, i.e., my computed equilibrium density must be reasonably close to that experimentally measured, to within an accepted degree of precision. So, I look up experimental data, and for a given temperature I find what is illustrated in the figure below:
It is clear what the problem is, is it not? My theory (red line) seems to be doing a pretty good job, in that it goes through (agrees with) most of the experimentally measured density–pressure data points (green circles). In fact, it goes through all of them except one (the one with a question mark by it), which falls noticeably off the curve.
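To make the comparison concrete, here is a minimal Python sketch of the check just described, with entirely made-up numbers: a toy model equation of state is evaluated at each measured pressure, and any point whose measured density falls outside an accepted tolerance of the theoretical curve gets flagged, like the point with the question mark. The model, the data, and the tolerance are all hypothetical, chosen only for illustration.

```python
def model_density(pressure):
    """Toy 'theoretical' equation of state: density as a function of pressure.
    A hypothetical linear model, not a real equation of state for water."""
    return 1.0 + 0.02 * pressure

# Invented (pressure, measured density) points standing in for the green circles.
measurements = [
    (1.0, 1.02),
    (2.0, 1.04),
    (3.0, 1.15),  # the odd point with the question mark
    (4.0, 1.08),
    (5.0, 1.10),
]

tolerance = 0.02  # accepted degree of agreement between theory and experiment

for p, rho_exp in measurements:
    rho_th = model_density(p)
    off_curve = abs(rho_th - rho_exp) > tolerance
    print(f"P={p}: theory={rho_th:.3f}, experiment={rho_exp:.3f}"
          + ("  <-- off the curve" if off_curve else ""))
```

Run as written, every point agrees with the toy curve except the one at P=3.0, which is exactly the situation depicted in the figure.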
Up to this point, there is no bias whatsoever on my part. It would be downright silly to accuse me of “bias” because I try to verify if my own calculated equation of state reproduces the one measured experimentally by others, as opposed to that computed by someone else, or some other curve picked at random out of the many possible that could go through the green points. This is nothing but validation of a scientific hypothesis.
If my curve badly missed the mark, not going through any or most of the green points, clearly there would be something wrong with the model (i.e., my proposed inter-molecular interaction), which would have to be rejected outright as inaccurate (unless I had made a mistake in the computer calculation). That would be progress — nothing to be excited about, to be sure, but some understanding would have been generated nonetheless.
If, on the other hand, the red curve did go through all of the points, that would still not necessarily mean that mine is the “best possible equation of state” — other models may be even more precise. It would mean that my model affords a certain degree of precision in reproducing the experimental equation of state (in the range of pressure and density to which the points shown pertain). And that, of course, would also be progress.
OK, so, what do I do now? I am staring at the above figure, thinking:
“Darn, look at that… I almost have all the points right except for that one… funny, both the one above and the one below it are fine but… that one is not… how can that be… mmmm… that point seems odd though… could it be that maybe that one measurement is wrong? Maybe I should contact the experimenters and see if this datum was not recorded properly… it must be a mistake… maybe I should just take that one point out of the figure… after all, it is just one point… why generate doubts among people, undermining my own work, when surely the problem must be with that one experimental datum.”
OK, see the last part of the monologue above? That is what confirmation bias is. I am so convinced that my theory must be correct that I am simply discarding experimental evidence that is not consistent with it. By publishing my results without including that “strange” datum, I am at least in part misleading the community, because I am trying to convey the impression that my model is more accurate than it might be. If the experimental datum which I have arbitrarily discarded should turn out to be correct, then clearly there is something in the actual equation of state of water that my model does not reproduce.
What is important to remember, however, is that science is fairly robust against anyone’s ego, sloppiness, delusion, or even fraudulence. Experimental data are put under the microscope, theoretical scenarios carefully scrutinized, and calculations thoroughly checked by competitors eager to prove us wrong. It may take a while at times, but eventually the truth will emerge. If I were to do what I described above, i.e., omit that puzzling datum, it would only be a matter of time before someone else pointed it out and made me look silly. It is simply not in my best interest.
So, while it exists, confirmation bias in science is, in my opinion, not an issue.
Contending that, because of confirmation bias, there are many possible answers to a scientific question, and that picking one over another is a subjective proposition, is nonsensical. Trying to suggest that any scientific theory largely reflects the bias of its authors, and that there is no objective, accepted way of selecting the one that best fits experimental data; that no real consensus can therefore be achieved on any scientific question; that the prevailing explanation of a given phenomenon is simply that which happens to enjoy the support of the greater fraction of the community; all of that is nothing but hogwash. Only a crackpot or a con man can make such a contention, and only someone blissfully ignorant of how science operates can take it seriously.
If that were true, the technological advances that we owe exclusively to our scientific progress would not have taken (and be taking) place.
 To be sure, this has also been the theme of a whole school of thought, very critical and skeptical of the scientific method, of which Paul Feyerabend was one of the most prestigious and radical exponents.
 All of this can be stated more precisely, but it is not necessary here. Suffice it to say that experimental data come with a stated level of precision, and that the size of the green dots, in a plot like the one shown, should be equal to the experimental uncertainty (“error bar”).
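 The footnote’s point can be stated as a one-line test. In this hedged Python sketch (with invented numbers), a measurement “agrees” with theory only if the theoretical value falls within the stated error bar:

```python
def agrees(theory, measured, error_bar):
    """True if the theoretical value lies within the experimental error bar.
    Illustrative only; real comparisons may combine several uncertainties."""
    return abs(theory - measured) <= error_bar

print(agrees(1.060, 1.062, 0.005))  # theory inside the error bar -> True
print(agrees(1.060, 1.150, 0.005))  # theory well outside -> False
```

 On a plot, this is exactly the statement that the red curve should pass through each green dot, where the dot’s size represents the error bar.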
 That is very different from actually moving the one green dot up, in order to make it fall on the curve. That would be fraud, not confirmation bias. Totally different ball game.
 As opposed to publishing everything, including all available data in the plot. I can simply state explicitly in the article that there seems to be a problem with that particular datum, propose that maybe that measurement should be repeated, and let the community decide what to make of it. Who knows, maybe the referee(s) will be able to explain what the problem is. Maybe there is a problem with that measurement, and this is the proper way to bring it to the attention of interested researchers.