Tuesday, January 04, 2011

Not real scientists

Picked up by our forum members and others - albeit somewhat late, as it bears a publication date of 13 December - is an important article in the New Yorker magazine, written by Jonah Lehrer under the title "The Truth Wears Off".

Its importance stems from its detailed treatment of a subject that modern science would prefer to ignore: the bias that creeps into research studies. The article is marred, though, by its silly title and the equally silly strapline, which reads: "Is there something wrong with the scientific method?"

The silliness is evident from the reading, as there is nothing wrong with the scientific method. The narrative confirms this ... eventually. Somewhat laboriously, it leads you to the main thesis about bias – bringing us then to a biologist at the University of Alberta called Richard Palmer.

His concern is the effect of selective reporting of results - the classic, so-called "publication bias", whereby journals will normally publish only positive results, creating an unconscious pressure to steer findings in the "right" direction in order to secure publication.
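The mechanism is easy to demonstrate. The sketch below - my own illustration, not anything taken from Lehrer's article - simulates thousands of small studies of a weak but real effect, then "publishes" only those whose results are positive and statistically significant. The published record comes out several times larger than the truth, with no fraud anywhere in sight:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.1   # the weak, real effect every study is measuring
SIGMA = 1.0         # per-subject noise
N = 20              # subjects per study
STUDIES = 10_000

# Approximate two-sided p < .05 cut-off for the sample mean
THRESHOLD = 1.96 * SIGMA / N ** 0.5

all_effects = []    # what every study actually found
published = []      # what the journals print

for _ in range(STUDIES):
    sample = [random.gauss(TRUE_EFFECT, SIGMA) for _ in range(N)]
    effect = statistics.mean(sample)
    all_effects.append(effect)
    if effect > THRESHOLD:   # only positive, "significant" results get written up
        published.append(effect)

print(f"true effect:              {TRUE_EFFECT}")
print(f"mean across all studies:  {statistics.mean(all_effects):.3f}")
print(f"mean of published subset: {statistics.mean(published):.3f}")
```

The average across all studies recovers the true effect almost exactly; the "literature" made up only of the significant subset overstates it several-fold. No individual researcher need have done anything dishonest for the published record to be badly wrong.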

Palmer emphasises that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results.

There are also cultural issues involved. For instance, workers in Asian countries are far more likely to report successful trials using acupuncture than their Western counterparts. Palmer notes that this wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don't want to see. "Our beliefs," he says, "are a form of blindness".

Another toiler in the vineyard is John Ioannidis, an epidemiologist at Stanford University. One of his most cited papers has a deliberately provocative title: "Why Most Published Research Findings Are False," where he notes that the problem of selective reporting is rooted in a fundamental cognitive flaw.
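Ioannidis's point is not merely rhetorical: his paper models the probability that a claimed finding is actually true, given the significance level, the statistical power, the pre-study odds that the hypothesis is right, and a bias term. A minimal rendering of that framework - with illustrative numbers of my own choosing, not Ioannidis's own code - shows how quickly even a modest bias erodes the value of a "positive" result:

```python
def ppv(alpha, beta, R, u=0.0):
    """Post-study probability that a claimed positive finding is true,
    following the framework of Ioannidis (2005).
    alpha: significance level; beta: type II error rate (power = 1 - beta);
    R: pre-study odds that the tested relationship is real;
    u: proportion of analyses that bias turns into positive reports."""
    num = (1 - beta) * R + u * beta * R
    den = R + alpha - beta * R + u * (1 - alpha) + u * beta * R
    return num / den

# Well-powered confirmatory study, no bias: most positives are real.
print(ppv(alpha=0.05, beta=0.2, R=0.25))          # 0.80

# Same study with modest bias (u = 0.3): PPV falls below one half.
print(ppv(alpha=0.05, beta=0.2, R=0.25, u=0.3))

# Long-shot exploratory claim (R = 0.05), even with no bias at all:
print(ppv(alpha=0.05, beta=0.2, R=0.05))          # below 0.5
```

On those illustrative numbers, a claim from a reasonably powered, unbiased study is true four times out of five; add a modest dose of bias, or start from a long-shot hypothesis, and a "statistically significant" finding becomes more likely false than true - which is the arithmetic behind the paper's title.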

We like proving ourselves right and hate being wrong. "It feels good to validate a hypothesis," he says. "It feels even better when you've got a financial interest in the idea or your career depends upon it. That's why, even after a claim has been systematically disproven" - he cites, for instance, the early work on hormone replacement therapy, or claims involving various vitamins - "you still see some stubborn researchers citing the first few studies that show a strong effect. They really want to believe that it's true."

Although not specifically mentioned, the application to "climate change" is self-evident, which underlines the importance of the article. However, the theme needs to be taken further in order to have an effect.

The essential problem is getting researchers (or those who read their work) even to recognise the existence of bias. This point I make in the Booker column comments, adding that the authority and "prestige" afforded to "the science" make it very difficult to change a settled view.

Furthermore, people who tend to be unduly influenced by authoritative views are the most likely to be affected by such biases, the least likely to recognise them ... and will indeed be offended by the very suggestion of a bias (or many). Rarely, inside the circle of specialists who claim authority in the field of climate change, do we see any serious discussion of the possible effects of bias; most often, it is workers outside the field who are best able to detect it.

It is my view, however, that in the "climate change" field there are several biases at play, not least one which my PhD supervisor and I coined, labelling it "acceptable diagnosis bias" - a propensity to detect or report results which are acceptable to the peer group.

This phenomenon is not new. In Ceylon between 1943 and 1946, the most common diagnosis for pyrexias of unknown origin was "malaria", accounting for some 35 percent of hospital and dispensary attendance. After the successful completion of a mosquito eradication programme, however, it was no longer acceptable to report such illness as malaria. Physicians therefore took to labelling pyrexias of unknown origin as "influenza", the overall rate of such reporting remaining remarkably constant.

This exerts its effect in the publication of papers, where researchers tend to steer their results in a direction which will ensure peer approval. Perversely, therefore, peer group review in this context reinforces the likelihood and effects of this bias, to the extent that peer review is a major distorting factor ... alongside publication bias and several others.

So prevalent in scientific research are various biases – a point which emerges clearly from Jonah Lehrer's piece – that any person purporting to offer scientific work, who is not aware of the role, nature and potential effects of bias, and who has not scrutinised their work for the possibility of being affected by it, is not a serious scientist.

Lehrer cites Palmer, who summarises the impact of that one bias of selective reporting on his field: "We cannot escape the troubling conclusion that some - perhaps many - cherished generalities are at best exaggerated in their biological significance and at worst a collective illusion nurtured by strong a priori beliefs often repeated," he says.

One almost shrieks with approval – this describes "climate change" to a tee. The absence of any serious discussion in the field on the effects of bias further confirms that practitioners – whatever their titles and pretensions – are not real scientists.