Monday, November 30, 2009

Oh, this is nice ...

... No One Peer-Reviews Scientific Software.

Today, each instance of custom-written scientific software is like an unknown, novel piece of scientific hardware. Each piece of software might as well be an “amazing wozzlescope” for all that anyone has experience with its accuracy and precision. No one can even tell if it has subtly malfunctioned. As a result, the peer review of scientific software does not indicate even a whisper of the same level of external objective scrutiny that the peer review of scientific hardware indicates.

So we really have no idea if the highly touted climate models are at all reliable, right? Just asking.


  1. Actually, the article overstates its case, and I think is a bit misleading. There IS peer review of scientific software, because anyone who ends up using it is going to review it and make recommendations for improvements and changes. Maybe there needs to be more peer review, but to claim that there is (almost) none is quite inaccurate.

    For one thing, the models, equations, and formulations, based on observed and documented physics, chemistry, etc., have all been thoroughly peer-reviewed. The results that scientific software produces are peer-reviewed, and when results are not verifiable, perhaps because the software was poorly written, the software gets rewritten until it meets the standard of review and repeatability.

    I know you're agnostic about climate change, :) but please don't go hunting for an excuse to find fault with climate modeling software, which has been in operation and development for almost 40 years. One branch of chaos mathematics, itself peer-reviewed, developed out of climate and weather modeling: Lorenz was trying to model weather from initial conditions when he discovered the strange attractor. Lorenz was a weather guy, not a chaos mathematician, although his work helped to found that field.

    Lorenz basically proved that it's impossible to accurately forecast local weather more than a few days in advance; beyond about four days it's basically estimates and guesswork, which is why the TV weather folks are so often wrong. LOL

    But weather and climate are two different, if interrelated, things. Climate is much larger-scale overall than local weather, and is studied in terms of observations recorded over decades, not days.

    So all that weather software and climate software and the math around it has been rather extensively peer-reviewed, and continues to be so.

    What we have pretty much figured out is that the "butterfly effect" is real: starting from what appear to be the same initial conditions, very different results can follow, because of very small perturbations and chaotic elements within the conditions themselves.
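    The sensitivity described above is easy to demonstrate. Below is a minimal sketch (mine, not from the original discussion) of the Lorenz system, the toy model in which Lorenz first observed this behavior; the parameter values are the standard textbook ones, and the crude forward-Euler integrator is chosen only for brevity:

```python
# Sketch: sensitive dependence on initial conditions in the Lorenz system.
# Two trajectories that start almost identically end up far apart.

def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations (standard parameters)."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def trajectory(x0, y0, z0, steps=6000):
    """Integrate from (x0, y0, z0) and return the final state."""
    x, y, z = x0, y0, z0
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

# Two starting points differing by one part in a million in x ...
a = trajectory(1.0, 1.0, 1.0)
b = trajectory(1.000001, 1.0, 1.0)
# ... diverge onto entirely different parts of the attractor.
print("trajectory a ends at:", a)
print("trajectory b ends at:", b)
```

    The same effect is why the comment above about a roughly four-day forecast horizon holds: the tiny measurement errors always present in the initial conditions grow exponentially until they swamp the forecast.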

    In fact, the reason that so many scientists are convinced that climate change is occurring is because it HAS been so thoroughly monitored, documented AND peer-reviewed, for at least 30 or 40 years. (The revolution in earth sciences brought about in the 60s and 70s was made possible in part by new improvements in measuring technologies starting from that period.) That climate change is happening is no longer really at issue; the causes and exact nature, and what if anything can be done about it, are where the questions remain. But something IS happening; whether mankind caused it, or it's part of a natural solar/atmospheric cycle, is yet to be clearly settled. Probably "all of the above" is the most likely true answer.

  2. To be precise, Art, I am not agnostic about climate change, because climate is a process of continuous change. Nor do I doubt that human activity has exerted some influence on climate; hard to imagine that it hasn't. I am agnostic as to whether recent warming was out of the ordinary (the last ice age ended only 10,000 years ago, a moment of time geologically), whether it would continue to increase, and whether it was necessarily bad (history suggests that warm times are good times). As to the climate models, the prima facie evidence in the CRU case is that those models were rigged to achieve certain results, that data were being fitted to the theory rather than the other way around. The dumping of primary temperature data is especially worrisome. People who have confidence in their work do not do this sort of thing.

  3. Thanks for the clarifications. I appreciate them.

    I agree that it's unclear if recent warming is out of the ordinary, but I do think humans have had an impact on it, possibly a minor one, but also possibly a pushing-it-over-the-threshold impact. I'm not convinced either way.

    In terms of rigging the data selection to skew the outcome towards a desired result, obviously that goes against the very grain of the scientific method of inquiry, and I agree should never be done. Yet I would point out that some of the more strident opponents of climate change theories—those paid for by think tanks funded by the oil industry, for example—have a very dismal record of also doing biased data selection, and even skewing their statistical results, in order to shore up their arguments.

    The problem is when science as a method of inquiry gets subverted by ideological agendas. On the whole, however, I've seen more data-fudging coming from the climate-change deniers' side of the argument than from the proponents', sometimes rather blatantly, and sometimes rather blatantly in support of the oil-biz agenda. So this does not reflect well on their reliability as scientists, and for me it seriously undermines any legitimate aspects of their case.

    I realize that no side of this argument has been completely immune from fudging their data. Nonetheless, I haven't changed my opinion that this article is misleading; if it were more accurate reporting, it might have been more even-handed in its condemnation of fudged data across the board.

    For whatever that's worth. As the saying goes, "Statistics can be made to prove anything you want them to prove."

  4. Something I read about this pointed out that while climatologists make use of statistics, they are not statisticians, and their use of statistics has often proved sub-par, especially when it comes to forecasting, as the Wharton School's Scott Armstrong et al. pointed out a couple of years ago. Their paper can be read here.