The seemingly absurd question in the title of today's post becomes much less absurd upon reading Rothman and Greenland's paper in the AJPH Supplement from 2005, found here. This paper, which is really an abridged version of a chapter in their textbook Modern Epidemiology, is creating a lot of angst among my MPH students this semester. As well it should: the authors make a compelling case for not trusting any assumptions whatsoever, and thus seem to imply that every time we think we know something, we should think again.
First, they talk about the multicausality of any phenomenon in human biology. Pouring water on the dry ice of our cognitive abilities, they delight in vaporizing our most coveted ideas, such as the notion that the contributions of all causes to a condition cannot add up to more than 100%:
Their discussion of causal inference is quotable for the context it provides for the various approaches to scientific inquiry. I feel that the ideas in this entire section, with its apt name, are completely lost in our day-to-day scientific discourse:
Impossibility of Proof
Vigorous debate is a characteristic of modern scientific philosophy, no less in epidemiology than in other areas. Perhaps the most important common thread that emerges from the debated philosophies stems from 18th-century empiricist David Hume’s observation that proof is impossible in empirical science. This simple fact is especially important to epidemiologists, who often face the criticism that proof is impossible in epidemiology, with the implication that it is possible in other scientific disciplines. Such criticism may stem from a view that experiments are the definitive source of scientific knowledge. Such a view is mistaken on at least two counts. First, the nonexperimental nature of a science does not preclude impressive scientific discoveries; the myriad examples include plate tectonics, the evolution of species, planets orbiting other stars, and the effects of cigarette smoking on human health. Even when they are possible, experiments (including randomized trials) do not provide anything approaching proof, and in fact may be controversial, contradictory, or irreproducible. The cold-fusion debacle demonstrates well that neither physical nor experimental science is immune to such problems.
They further proceed to demolish Hill's criteria for confirming causality. In fact, in their usual scholarly fashion, having gone back to the original source, the authors convey Hill's own ambivalence:
Indeed, they even poke gaping holes in Bayesian thinking, which I myself am quite partial to. And incidentally, I am a big fan of Hill's "criteria", as well.
So, what is the point of assigning this paper to my students early in a course that addresses evidence generation and evaluation in policy? The paper serves as a reminder to address our assumptions systematically and regularly. By its nature, our science is imprecise. While most physical and biological processes are subject to mathematical rules, clinical sciences rely heavily on statistics, which is simply mathematics with uncertainty introduced into it. This uncertainty magnifies with every assumption that we make, and for this reason we need to be quite circumspect when insisting that we know something. In this respect, our science is more similar to civil litigation, where a preponderance of the evidence is enough to prevail, than to the standard of "beyond a reasonable doubt" in criminal law.
Now, let's bring this back to the real world of clinical practice, where we do not have the luxury of ruminating on these uncertainties. This is the true intent of EBM: understand what the preponderance of evidence tells us about a phenomenon at the level of the pertinent population, but do not forget that these are mere approximations of what really happens, some closer to and some further from reality. So, while systematically observed measures of central tendency should give us comfort in the current gestalt at the bedside, the uncertainty around them is an invitation to tailor our approach to the individual before us.