Thursday, January 06, 2011

Well, well ...

... The Truth Wears Off: Is there something wrong with the scientific method?
(Hat tip, Joseph Chovanes.)

I saw this when it first came out and wanted to link to it, but it wasn't online yet. I found it fascinating, though the reasoning seemed muddy. For instance:

... even when numerous variables were controlled for—Jennions knew, for instance, that the same author might publish several critical papers, which could distort his analysis—there was still a significant decrease in the validity of the hypothesis, often within a year of publication.

Leigh Simmons, a biologist at the University of Western Australia, suggested one explanation when he told me about his initial enthusiasm for the theory: “I was really excited by fluctuating asymmetry. The early studies made the effect look very robust.” He decided to conduct a few experiments of his own, investigating symmetry in male horned beetles. “Unfortunately, I couldn’t find the effect,” he said. “But the worst part was that when I submitted these null results I had difficulty getting them published. The journals only wanted confirming data. It was too exciting an idea to disprove, at least back then.”

For Simmons, the steep rise and slow fall of fluctuating asymmetry is a clear example of a scientific paradigm, one of those intellectual fads that both guide and constrain research: after a new paradigm is proposed, the peer-review process is tilted toward positive results.


Something here doesn't compute. The "process is tilted toward positive results." OK, I get that. So disconfirming papers may have trouble getting published. But the disconfirmation is there, whether published or not. The reluctance of journals to publish such results has no bearing on the results themselves, and what those results show is that, over time, the initially reported effects steadily decline. Lehrer may think that "publication bias almost certainly plays a role in the decline effect, [but] it remains an incomplete explanation," but I can't see how it could have any effect at all. It certainly "fails to explain the experience of people like Schooler, who have been unable to replicate their initial data despite their best efforts."

Schooler argues that scientists need to become more rigorous about data collection before they publish. "We're wasting too much time chasing after bad studies and underpowered experiments," he says. The current "obsession" with replicability distracts from the real problem, which is faulty design. ...


But Schooler's own experiments don't seem to have been faultily designed. He simply couldn't get the same results when he replicated them: "His first attempt at replicating the 1990 study, in 1995, resulted in an effect that was thirty per cent smaller. The next year, the size of the effect shrank another thirty per cent. When other labs repeated Schooler's experiments, they got a similar spread of data, with a distinct downward trend." This certainly does "demonstrate the slipperiness of empiricism."

There is a serious epistemological problem here that most of the people involved -- Ioannidis would seem the exception -- have either failed to notice or are unwilling to acknowledge, though the title of the article makes it clear: If truth "wears off," it is transient indeed, and if that is what the scientific method is producing, the method deserves some serious scrutiny. What, in the long run, is gained from a second generation of drugs that looks great initially but soon turns out to be no significant improvement on its predecessors? And if the scientific method isn't all it's cracked up to be, we should take that into consideration, rather than deferring to it as reflexively as we tend to.
