Friday, April 17, 2020
Hmm …
… Coronavirus Models Aren't Supposed to Be Right - The Atlantic. (Hat tip, Jeff Mauvais.)
I’m not sure I get this. Presumably, the models aren’t supposed to be wrong, either. But if they’re not supposed to be right, why should we make policy decisions based on them, including decisions that can be seriously disruptive to people’s lives? The piece may well be correct, and the problem may well lie with my faulty understanding.
Maybe the author of the Atlantic piece should get together with the author of this piece, who is a board-certified trauma & emergency specialist with over 15 years of clinical experience and a member of the Nat’l Preparedness Leadership Initiative, a combined effort of the Harvard School of Public Health and the Kennedy School of Gov’t to develop “meta-leaders” for national disaster preparedness & response.
A model is simply an hypothesis. All hypotheses share certain features. First, they arise from a set of observations. Second, they offer a testable prediction based on those observations. Third, a properly stated hypothesis can never be proven true, but can be proven false. So requiring a model to be true is, formally speaking, asking the impossible (don't blame me, blame Karl Popper!). The proper question about any hypothesis is: Can it be used to make accurate predictions? Of course, the accuracy of predictions can only be determined post hoc, and we aren't anywhere near post hoc with this epidemic. Scientists refer to any hypothesis that repeatedly generates accurate predictions as "robust", not as "true".
Hypotheses in the form of mathematical models generally describe
a relationship between input (or independent) variables and output (or dependent) variables. Input variables for an infectious disease model would include a measure of transmissibility (R-naught), the length of the pre-symptomatic infectivity period, case fatality rate, persistence of virions in the environment, estimated effectiveness of various mitigation measures, how early mitigation measures were adopted, and dozens of other factors. Output variables would include final numbers for overall cases, complex cases, deaths, etc. The output takes the form of a probabilistic distribution of potential outcomes, along with a measure of the sensitivity of those outcome predictions to the range of values for each input variable. That's why the much-maligned Imperial College model predicted anywhere from 5,600 to 550,000 deaths in the UK, depending on the range of input values fed into the model (including then-hypothetical mitigation measures). There have been almost 15,000 deaths in the UK during the one-month interval since the model was published, so I don't see any predictive inaccuracy.
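To see why a single model legitimately produces such a wide output band, here is a minimal sketch of input sensitivity. To be clear, this is not the Imperial College model: the parameter ranges and the deliberately crude outcome formula are invented purely for illustration.

```python
import random

random.seed(0)

POPULATION = 66_000_000  # rough UK population

def toy_deaths(r0, cfr, mitigation):
    """Crude toy outcome formula: mitigation scales transmission down;
    if the effective reproduction number stays above 1, use 1 - 1/R as a
    rough attack rate; multiply by the case fatality rate to get deaths.
    Illustrative only, not a real epidemiological model."""
    effective_r = r0 * (1 - mitigation)
    attack_rate = 0.001 if effective_r <= 1 else 1 - 1 / effective_r
    return POPULATION * attack_rate * cfr

# Sample each input over a plausible (invented) range, many times.
deaths = sorted(
    toy_deaths(
        r0=random.uniform(2.0, 3.5),
        cfr=random.uniform(0.002, 0.01),
        mitigation=random.uniform(0.0, 0.7),
    )
    for _ in range(10_000)
)

print(f" 5th percentile: {deaths[500]:>9,.0f} deaths")
print(f"95th percentile: {deaths[9500]:>9,.0f} deaths")
```

Even this toy, run honestly over the plausible ranges of its inputs, spans several orders of magnitude of predicted deaths, which is roughly what the 5,600-to-550,000 band reflects.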
The various epidemiological models in use today (person-to-person, single point source, multiple point source, vector-borne) are quite robust, having been refined through thousands of disease outbreaks during the past 150 years. But, early in any outbreak, the accuracy of model output is limited by the amount and quality of input values. What is R-naught for SARS-CoV-2? How long is the pre-symptomatic infective period? Those values are determined with greater and greater precision as the epidemic progresses, and the models are run repeatedly, with ever more precise input values, resulting in ever more accurate predictions of cases, deaths, etc. This has been demonstrated in epidemic after epidemic.
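As a rough illustration of how precision improves with data, here is a minimal sketch of how an estimate of R-naught tightens as more cases are observed. The "true" value, the Poisson assumption, and the sample sizes are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_R0 = 3.0  # assumed "true" value, for illustration only

# Secondary infections per observed case, drawn from a Poisson
# distribution with mean TRUE_R0 (a toy assumption, not real data).
for n_cases in (10, 100, 10_000):
    estimates = [rng.poisson(TRUE_R0, n_cases).mean() for _ in range(1_000)]
    low, high = np.percentile(estimates, [5, 95])
    print(f"{n_cases:>6} cases observed: R-naught estimated in [{low:.2f}, {high:.2f}]")
```

Early in an outbreak the model is fed the wide interval; later it is fed the narrow one, and the band of predicted outcomes narrows accordingly.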
One final point, which falls into the "fun with math" category. In an epidemiological model, small changes to the numerical value of an input variable can have a huge impact on output values. Take R-naught, the measure of viral transmissibility. R-naught for seasonal flu, confirmed over decades, is 1.3. Each person with the flu infects, on average, 1.3 other people over the course of his case. Each of those 1.3 people infects 1.3 others, and so on. In the tenth round of transmission, approximately 14 people will be infected. The most recent estimate of R-naught for COVID-19, based on confirmed cases in isolated communities like cruise ships and Italian hill towns, is 3.0. The difference between 1.3 and 3.0 doesn't look like it should have much of an effect on the final case rate. But if one person infects three others, and each of those three infects three more, over 59,000 people will be infected in the tenth round of transmission. You can easily check the math with a calculator.
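Or, for those without a calculator handy, a two-line check of just the arithmetic in the paragraph above:

```python
# People infected in the nth round of transmission from one index
# case is R-naught raised to the nth power.
for r0 in (1.3, 3.0):
    print(f"R-naught = {r0}: {r0 ** 10:,.0f} infected in round ten")
```

This prints roughly 14 for R-naught of 1.3 and exactly 59,049 for R-naught of 3.0.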
Good heavens! Short version --
The basic epidemiological models, mathematical equations that describe the relationships between input variables like transmission rate, and output variables like final case rate, have been continually refined through thousands of disease outbreaks over the past 150 years. The particular model type used for COVID-19, person-to-person, has been proven remarkably robust, having guided the elimination of smallpox, near-elimination of polio, and containment of outbreaks of nasty diseases like Ebola.
But early in any epidemic, the predictive accuracy of models is limited by the low quantity and quality of input data. As the epidemic progresses, more abundant and precise input data is collected and fed into the models. As this happens, history has demonstrated that the predictive accuracy of any model increases. The value of models is in providing guidance to policymakers in real time; that guidance may change as model outputs change in response to more precise input data. The alternative, waiting until the model is making perfectly accurate predictions, which only occurs as the epidemic nears its end, makes no sense to me.
But doesn't that at least suggest that we should be cautious about imposing draconian policies sooner rather than later? Distancing, masks, etc. might well be advised, but a lockdown of the economy, with all its consequences for individuals and families and businesses, should perhaps be postponed until we have information we are more certain of.
With regard to timing, earlier is much better than later. The particle dynamics of snow and ice crystals in an avalanche are very similar to those of pathogens in an epidemic. Snow barriers high on avalanche slopes are very effective at preventing or limiting the severity of slides, but useless further down the slope where the mass of snow and ice crystals can be hundreds of yards wide, hundreds of feet thick, and moving at 80 mph. The same is true of mitigation measures in an epidemic. The comparison of outbreak severity in Lombardy and the Veneto, mentioned in the original Atlantic piece, is a good example.
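The timing point can be made concrete with a minimal discrete-time SIR sketch. Every parameter here is invented for illustration (this is not any published COVID-19 model); the same intervention is applied early in one run and late in the other.

```python
def sir_total_infected(days, beta, gamma, intervention_day, reduced_beta, n=1_000_000):
    """Discrete-time SIR model; the daily transmission rate drops from
    beta to reduced_beta on intervention_day. Returns total ever infected."""
    s, i, r = n - 1, 1, 0
    for day in range(days):
        b = reduced_beta if day >= intervention_day else beta
        new_infections = b * s * i / n
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
    return n - s

# The identical intervention (daily transmission cut from 0.4 to 0.1,
# recovery rate 0.1/day), imposed on day 30 versus day 90 of a year.
early = sir_total_infected(365, beta=0.4, gamma=0.1, intervention_day=30, reduced_beta=0.1)
late = sir_total_infected(365, beta=0.4, gamma=0.1, intervention_day=90, reduced_beta=0.1)
print(f"Intervene on day 30: {early:,.0f} total infections")
print(f"Intervene on day 90: {late:,.0f} total infections")
```

With these made-up numbers, the early barrier holds the outbreak to a small fraction of the population, while the late one changes almost nothing, because by day 90 the mass is already moving down the slope.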
With regard to the particular form that mitigation measures take, culture seems to play a critical role. Japan and Taiwan have not had to impose severe restrictions because their populations, with a high level of voluntary compliance, have adopted social distancing, self-quarantine of those with even minor symptoms, and wearing of masks. I've worked in both countries, and found the people to be self-disciplined, responsible toward others, and well-educated, to a degree far exceeding anything I've ever seen in this country. Because of the strong sense of personal responsibility in those cultures, shutdowns have been less draconian and damage to economies has been less severe. Were the average American less self-centered and more self-disciplined, government would not have to enact such heavy-handed measures. Our defective national character is our greatest enemy.