You know that joke about the farmer whose cows are not producing enough milk? A university panel gathers under the leadership of a theoretical physicist. They analyze each aspect of the problem thoroughly and carefully, and after much deliberation produce a report, the first line of which is "First, assume a spherical cow in a vacuum." This joke has become shorthand for some of the reductionist thinking in theoretical physics, but it can just as easily apply to the field of statistics, upon which we rely so heavily to inform our evidence-based practices.
Here is what I mean. Let us look at four common applications of statistical principles. First, descriptive statistics, usually represented by Table 1 in a paper, where we are interested in the measures of central tendency such as the mean and median values. I would argue that both measures are somewhat flawed in the real world. As we all learned, the mean is a bona fide measure of central tendency in a normal, or Gaussian, distribution, and in such a distribution roughly 95% of the values fall within about two standard deviations (1.96, to be exact) of the mean. Although this is very convenient, few things about human biology are actually distributed normally; many cluster to the left or to the right of the center, creating a tail on the opposite side. For these skewed distributions the median is the recommended measure, bracketed by the 25th and 75th percentiles of the values, the interquartile range. But in a skewed distribution it is exactly the tail that is the most telling feature, as described so eloquently and personally by Stephen Jay Gould in his essay "The Median Isn't the Message." This is especially true when more specific characterization causes the numbers to dwindle; e.g., if instead of such a general measure as blood pressure among all comers you want to focus on morning blood pressures in a specific group, say African-American males with a history of diabetes and hypercholesterolemia who attend a smoking cessation program. And this is important because, at the level of the office encounter, each patient is a universe of his/her own risk factors and parameters that does not necessarily obey the rules of the herd.
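To make this concrete, here is a minimal sketch in Python (using NumPy; the log-normal sample is an invented stand-in for a skewed biological measurement, not real data) of how the mean and median part ways once a tail appears:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated right-skewed "biological" measurement; log-normal is a
# common stand-in for such data (e.g., biomarker concentrations).
values = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

mean = values.mean()                         # dragged rightward by the tail
median = np.median(values)                   # robust to the skew
q25, q75 = np.percentile(values, [25, 75])   # interquartile range

print(f"mean   = {mean:.2f}")
print(f"median = {median:.2f}  (IQR {q25:.2f} to {q75:.2f})")
print(f"share of values above the mean: {(values > mean).mean():.0%}")
```

On a sample like this only about a third of the values sit above the mean, which is exactly why the median, and a hard look at the tail, tell the truer story.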
Second, analytic modeling relies on the assumption of normality. To overcome this limitation we transform certain non-normal distributions, taking their logarithms, for example, to force them to play nicely. Once normalized, we perform the prestidigitation of a regression, reverse-transform the outcome to its anti-log, and, voilà, we have God's own truth! And even though statisticians tend to argue about such techniques, in the real world of research, where evidence cannot wait for perfection, we accept this legerdemain as the best we can do.
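And here is what that sleight of hand looks like in code: a minimal sketch in Python (the data, outcome and coefficients are all hypothetical, invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical skewed outcome (say, hospital length of stay) vs. a
# predictor (say, age); the true relationship is multiplicative.
age = rng.uniform(30, 80, size=500)
length_of_stay = np.exp(0.02 * age + rng.normal(0, 0.5, size=500))

# The "prestidigitation": regress on the log scale, where the
# residuals behave normally...
slope, intercept = np.polyfit(age, np.log(length_of_stay), deg=1)

# ...then reverse-transform the prediction with the anti-log.
predicted_los_at_60 = np.exp(intercept + slope * 60)
print(f"predicted length of stay at age 60: {predicted_los_at_60:.2f} days")
```

The quarrel the statisticians have with this: the anti-logged prediction estimates the median (geometric mean) of the outcome, not its arithmetic mean, so the "God's own truth" we report is subtly not the quantity most readers assume it is.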
The next example I will use is that of pooled analyses, specifically meta-analyses. The intent of a meta-analysis is to present the totality of evidence in a single convenient and easily comprehended value. One specific circumstance in which a meta-analysis is considered useful is when different studies conflict with one another; for example, when one study demonstrates that an intervention is effective while another does not. In my humble opinion, it is one thing to pool data when studies suffer from Type II error due to small sample sizes. But what if the studies simply show opposite results? That is, what if one study indicates a therapeutic advantage of treatment T over placebo P, but another shows the exact opposite? Is it still valid to combine these studies to get at the "true" story? Or is it better to leave them separate and try to understand the potential inherent differences in the study designs, populations, interventions, measurements, etc.? I am not saying that all meta-analyses mislead us, but I do think that in the wrong hands this technique can be dangerous, smoothing out differences that are potentially critical. This is one spherical cow that needs to be milked before it is bought.
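To see how pooling can average away a genuine conflict, here is a minimal sketch in Python of a fixed-effect, inverse-variance meta-analysis of two hypothetical studies with opposite results (the effect sizes are invented for illustration, not drawn from any real trials):

```python
import numpy as np

# Two hypothetical trials with opposite results: log risk ratios and
# their standard errors (illustrative numbers, not from real studies).
log_rr = np.array([-0.40, 0.35])  # study 1 favors T, study 2 favors P
se = np.array([0.15, 0.15])

# Fixed-effect (inverse-variance) pooling.
w = 1.0 / se**2
pooled = np.sum(w * log_rr) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled log RR = {pooled:+.3f} +/- {1.96 * pooled_se:.3f}")
# -> about -0.025: a near-null "average" that describes neither study

# Cochran's Q flags the heterogeneity the pooled number papers over.
Q = np.sum(w * (log_rr - pooled)**2)
print(f"heterogeneity Q = {Q:.1f} on 1 df")  # large Q -> do not just pool
```

The large Q is the arithmetic telling us what I argued above: examine the differences in designs, populations and interventions before trusting the single pooled number.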
The last cow is truly spherical, and that is the one-on-one patient-clinician encounter, wherein the doctor needs to cram the individual patient with his/her binary predispositions into the continuous container of evidence. It is here that all of the other cows are magnified to obscene proportions to create a cumbersome, at times incomprehensible and frequently useless pile of manure. It is one thing for a clinician to ignore evidence willfully; it is entirely another to be a conscientious objector to what is known but not applicable to the individual in the office.
But let's remember that I, as a health services researcher and a self-acknowledged sustainability zealot, am in awe of manure's life-giving properties. Extending the metaphor, then, this pile of manure can and should be used to fertilize our field of knowledge by viewing each therapeutic encounter systematically as its own experiment. The middle way lies between the cynicism of discarding all evidence and the blind acceptance of any and all of it. That is where the art of medicine must come in, and the emerging payment systems must take it into account. Doctors need time, intellectual curiosity and skills to conduct these "n of 1" trials in their offices. If the fire hose of new data insists on staying on, this is the surest way to direct the "conscientious and judicious" application of the resulting oceans of population data in the service of the public's health. And that should make the farmer happy, even if his cows are spherical.
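For those curious what such an office-based experiment might look like, here is one minimal sketch in Python (simulated symptom scores and a simple paired comparison; a real "n of 1" design must also attend to washout periods, carryover and blinding):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical "n of 1" trial: one patient alternates treatment (T)
# and placebo (P) across paired periods, recording a symptom score.
n_pairs = 6
placebo = rng.normal(6.0, 1.0, size=n_pairs)    # symptom score on P
treatment = rng.normal(4.5, 1.0, size=n_pairs)  # symptom score on T

# Paired comparison of periods within this single patient.
t_stat, p_value = stats.ttest_rel(treatment, placebo)
print(f"mean within-patient difference = {(treatment - placebo).mean():+.2f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```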
That is an excellent and eloquent description of the challenges inherent in applying research evidence to clinical practice. "Spherical cow" = gold. Expect more comments as I browse through the back catalogue.
Hello, Dr. Jenner, nice to see you! Hope the smallpox business is quiet. Thanks for your lovely comment. Looking forward to seeing you again soon.