“The public is the only critic whose opinion is worth anything at all”
An interesting debate has been taking place over the past few weeks in the blogosphere. The matter of contention is whether science-inspired (possibly, but not necessarily, anonymous) blogs should be regarded as an appropriate venue in which to discuss, and provide criticism of, peer-reviewed scientific articles published in academic journals (a summary of the discussion, together with an eminently reasonable opinion on the subject, can be found in this post). Aside from the specific article and blog post that originated the debate, the relationship between “official” scientific journals and relatively new, interactive media such as weblogs is undoubtedly an interesting topic and a timely issue.
It has long been the position of the academic establishment that peer review provides the necessary, official “vetting” of a manuscript, i.e., of the scientific quality of the work expounded therein, and that the orthodox way for a scientist to criticize, or attempt to debunk, its results or conclusions is to submit a competing manuscript for publication, thereby undergoing the same type of scrutiny as the original article.
Indeed, I remember reading this very point some fourteen years ago, made by physicist David Pines in a letter to the editor of Physics Today, written in response to strong scientific criticism of his work levied by Nobel laureate Philip W. Anderson in another letter to the same issue of that journal [1]. In general, I can see the merits of such a position, but I cannot help regarding it as rapidly becoming obsolete.
Obviously, much like scientific journals, not all science blogs are good. However, I think that we had all better get used to the idea of defending our work on the web, including on blogs, and to the direct, interactive discussion of scholarly content that takes place on this type of medium. It is not just a fad — science blogs are here to stay, for a number of reasons.
First off, there is the issue of peer review, and its often overstated value. As eloquently stated by P. Z. Myers,
“Passing peer review and getting published does not mean that your work is right. Some incredibly awful papers get through the review process, somehow. Getting published only means that now your paper is going to be opened up to wider criticism.”
I have to agree.
An article that is accepted for publication, even in a high-profile journal, has been assessed by a number of scientists that is seldom greater than five or six (and typically just one or two). With everyone pressed for time these days, with the high degree of specialization of every scientific field, and with the fierce competition among scientists, the goal of a fair, thorough, timely and unbiased review of a manuscript submitted for publication is often little more than a pipe dream.
While peer review probably remains an effective device to weed out the (relatively small) fraction of submitted manuscripts that ostensibly fail to meet accepted standards of accuracy and scientific rigor, the fate of the majority of articles submitted for publication, chiefly to the most prestigious journals, is all too often determined by less than scientific, highly subjective (sometimes borderline whimsical) and certainly fallible criteria.
And there are fundamental reasons to believe that anonymous peer review, once regarded as a cornerstone of the scientific enterprise, may become ill-suited in the years to come, especially vis-à-vis the predicted growth in size of the scientific community [2].
Indeed, it has long been recognized that, due to the unreliability of peer review, the actual impact of a scientific paper ought to be ultimately assessed by the number of citations that the paper receives over the years (as opposed to, e.g., its publication venue) — hence the introduction of indices such as Hirsch’s.
Then there is the issue of timeliness. These days, research advances are made at a breakneck pace and scientific communication takes place at a speed inconceivable only two decades ago.
Online article repositories such as arXiv.org are rapidly supplanting journals as the method of choice for scientists to keep abreast of developments in their fields. Manuscripts are uploaded every day by the hundreds, freely accessed and downloaded by scientists all over the world, discussed not just in private but at conferences, as well as on blogs and web-based journal clubs, and extensively cited in the “regular” literature even before their actual journal publication (in fact, oftentimes long before that) [3].
I think it is fair to say that, for the most part, by the time a paper is published (in the traditional sense), it is not “novel” anymore, and in some cases peer review and eventual “regular” publication are an almost frivolous exercise, formally sanctioning a judgment that the community has already collectively expressed about a particular piece of work. I can easily see such a formal appraisal becoming irrelevant in the future. Interesting pieces of work will be cited and will elicit discussion, even without being published in the traditional sense; uninteresting ones will be forgotten, regardless of whether they eventually get published somewhere “formal”.
It is true that a paper appearing in a repository has not been officially “vetted” by peer review. But, as mentioned above, peer review is no surefire guarantee of quality. Moreover, much like the number of citations offers a more robust indication (admittedly one not free of bias) of the impact of a journal article, measures made available by online media (such as the number of downloads and hits) appear capable of offering an at least comparably reliable assessment.
Meanwhile, scientific journals are struggling, especially second-tier ones: frantically exploring novel business models (none of which seems to work), overwhelmed by exploding costs, and increasingly unable to offer a satisfactory product to authors eager to claim priority and unwilling to let their manuscripts languish for weeks inside the electronic in-box of lazy or malicious referees (read: competitors).
Finally, there is the issue of practicality. It is impossible to prevent anyone from writing anything about anything. While this obviously gives no green light to anyone wanting to indulge in slander, disrespect, threat or any other behavior that is either illegal or otherwise not tolerated by society (on the internet or elsewhere), any discussion or attempt to regulate what, by its own nature, cannot be truly controlled, seems a waste of time. Much better is to think of a way to coexist peacefully.
Anonymous peer review, with reviewers selected (more or less carefully) by an editor, dates back to a time when the internet was not even a word, and the scientific world was a far more elitist place than it is nowadays. Blogs allow virtually anyone to be a reviewer, and while this requires that good judgment be exercised by everyone, I tend to regard it as a good thing.
I am afraid that those who wish to preserve the traditional modus operandi and its solemnity — with evaluation of scholarly work confined to the editorial offices of established journals and conforming to tried-and-true practices, rituals and hierarchies — and who dread the thought of scientific articles not being reviewed in this way, but instead discussed in the informal setting provided by a blog, are in for a rough ride.
[1] Physics Today 47 (2), 1994. Although both letters discussed in technical terms a well-defined scientific topic (high-temperature superconductivity), neither could be regarded as a regular article of the type normally cited in the scientific literature, nor had either been peer reviewed before being formally accepted for publication.
[2] In his 1994 article The Big Crunch, physicist David Goodstein pointed to the inherent conflict of interest of an anonymous reviewer as an increasing problem and potential Achilles’ heel of this practice, especially in a time of diminished funding opportunities and increased competition among scientists.
[3] Some websites, such as Naboj, are introducing interesting new concepts such as open review of manuscripts uploaded to public repositories like arXiv. While this experiment is clearly at a very early stage, and it may be a while before it becomes common practice, it does not seem too far-fetched to see this as a plausible direction in the evolution of scientific evaluation.