If you look at the report itself, even the authors hedge on this point: they prefer to present it as a "planning scenario" rather than a prediction. OK, I'll buy this, but what is a prediction but a planning scenario? True, the actual toll could be lower. But if we sit around and simply argue about the number, it could end up a lot worse.
Let me tell you a little bit about why and how we create these predictions (or planning scenarios, your choice). The why is pretty self-evident: we want to be prepared. Preparation entails having an idea, vague though it may be, of what's to come. Why do you care about tomorrow's weather forecast? So that you can plan what to wear, how long to allocate to your commute, or whether to cancel that trip to Florida during hurricane season. Similarly, policy makers need some idea of how to allocate resources in the future. So we make predictions, forecasts, or planning scenarios.
The irony is that all this future planning is built on past events as we know them in the present. (So you can see how a rapidly evolving pandemic can throw a monkey wrench into any estimates.) What makes us think that there are likely to be 30,000-90,000 fatalities associated with H1N1 in the US this flu season? Its past behavior, of course -- we look at what has happened to date, both in the US and, in this case, in the Southern hemisphere, where the flu season is in full swing, and we apply those numbers to our population. The result is a mathematical formula which, when populated with these carefully evaluated assumptions, spits out the desired estimate. Now, it is also possible to build other assumptions into the model, such as what happens at different levels of vaccine availability, efficacy, and penetration. However, since all of these factors are as yet unknown, putting them into the model would introduce a tremendous amount of uncertainty and render the model useless.
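To make the arithmetic concrete, here is a minimal sketch of that kind of extrapolation in Python. The population figure is approximate, and the attack rate, symptomatic fraction, and case fatality ratio are hypothetical placeholders chosen only to illustrate the mechanics; they are not the actual inputs of the report or of our model.

```python
# Minimal sketch: extrapolating a fatality estimate from assumptions
# grounded in the virus's past behavior. All rates below are hypothetical,
# for illustration only.

US_POPULATION = 307_000_000   # approximate 2009 US population

attack_rate = 0.30            # assumed fraction of the population infected
symptomatic_fraction = 0.50   # assumed fraction of infections causing illness
case_fatality_ratio = 0.001   # assumed deaths per symptomatic case

cases = US_POPULATION * attack_rate * symptomatic_fraction
fatalities = cases * case_fatality_ratio

print(f"Estimated symptomatic cases: {cases:,.0f}")  # ~46 million
print(f"Estimated fatalities: {fatalities:,.0f}")    # ~46,000
```

Plug in different assumed rates and the "desired estimate" moves accordingly, which is exactly why the assumptions deserve as much scrutiny as the output.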
But even without introducing conjecture about potential modifiers, these estimates carry a large degree of uncertainty. This is why, whenever you hear a single number reported as the estimate from a prediction model, you should ask for more: there is usually a confidence interval calculated around that number, based on varying the assumptions across some justifiable range. Generally, the tighter the confidence interval, the more useful the prediction. For example, in our recent study we estimated that on average we can expect ~300,000 cases of acute respiratory failure associated with H1N1 in the US, with the confidence interval ranging from ~225,000 to ~450,000. This is a wide range, and it betrays our uncertainty about the assumptions that went into the model.
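One common way to produce such an interval is to re-run the model many times while drawing each assumption from its justifiable range and then read off the spread of the results. The sketch below illustrates that mechanic; the ranges are hypothetical stand-ins, not the actual method or inputs of our study.

```python
# Sketch: varying the assumptions across plausible ranges to get an interval
# around the point estimate. All ranges below are hypothetical.
import random

US_POPULATION = 307_000_000  # approximate 2009 US population

def one_run():
    # Draw each assumption uniformly from a justifiable (here, invented) range.
    attack_rate = random.uniform(0.20, 0.40)
    symptomatic_fraction = random.uniform(0.40, 0.60)
    resp_failure_per_case = random.uniform(0.004, 0.010)  # acute respiratory
                                                          # failure per case
    cases = US_POPULATION * attack_rate * symptomatic_fraction
    return cases * resp_failure_per_case

runs = sorted(one_run() for _ in range(10_000))
lo, mid, hi = runs[249], runs[4999], runs[9749]  # 2.5th, 50th, 97.5th pctile
print(f"Acute respiratory failure cases: ~{mid:,.0f} ({lo:,.0f} - {hi:,.0f})")
```

The wider the ranges you feed in, the wider the interval that comes out; that is the sense in which a wide confidence interval "betrays" uncertainty about the inputs.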
One final note. The estimates provided by the White House advisors and the ones we derived do not necessarily account for the effect of an all-out effort at prevention. So, if we sit by and wait for the virus to hit, and if it remains neither more nor less virulent than its current incarnation, we will see the predicted number of fatalities. That is a lot of "ifs", some of which are certainly within our control -- aggressive education, prevention, and containment, to name a few.
So, with all of the pitfalls of planning scenarios, they are necessary for us to appreciate the potential magnitude of the problem and to plan for it rationally. Sweeping them under the rug and throwing their creators under the bus is just playing politics with an uncertain situation: these numbers evolve, and we need to acknowledge that. After all, as that populist philosopher Yogi Berra once pointed out: "Predictions are very hard, especially about the future".