Last week I wrote about our quality improvement, or QI, efforts in healthcare. Although there is a burgeoning field representing itself as the "science" of QI, I question much of its scientific validity. As always, VAP is my poster child for these discussions, since neither the definition of the condition itself nor the efforts at its prevention are subject to much scientific scrutiny. This gives VAP a surreal, ghost-like quality: now you see it, now you don't. That alone makes prevention efforts difficult to assess. Much as in the heated mammography debate, where passionate anecdote prevails, the sanctity of the QI rubric blunts the usual critical approach to the data.
So, the central point of that post was essentially to devalue VAP eradication efforts as not grounded in solid scientific evidence. What has occurred to me since, however, is that this position may in fact be at odds with a realization I blogged about here and here, where I agreed with Dan Ariely's suggestion that outcomes in the real world, influenced as they are by so much randomness, are not the thing to reward. It would be much more rational to reward best efforts toward best results: the process rather than the outcome. So here is the apparent contradiction: on the one hand, I agree that outcomes may be too unpredictable, shaped by too many factors outside our control; on the other, I am advocating that we start measuring such outcomes as VAP-associated antibiotic use and its reduction. What gives?
Well, on the one hand, I am OK with contradiction; life is full of instances where we have to hold conflicting information and feelings together. But as a scientist it is my predisposition to analyze (which literally means splitting into smaller, more manageable chunks), so I have given this ostensible paradox more thought. What I came up with is that measuring process is the right thing to do, but only under very specific conditions. Avedis Donabedian, considered the father of quality science, introduced the structure-process-outcome triad as its backbone. This relationship certainly lends validity to "process" metrics as surrogates for "outcome." But one condition has to be met: there must be an actual correlation between the process and the outcome in question. Absent such a solid correlation, we are simply going through the motions, doing a rain dance to cause rain.
So, what I have said about VAP prevention in particular is that we are nowhere near being able to say that the recommended processes correlate with any changes in meaningful clinical outcomes. And because the data on these interventions are so weak, throwing massive resources behind implementing them is irrational and resembles religious fervor more than scientific pragmatism.
It is entirely understandable that we would jump on this bandwagon so rapidly, given the magnitude of harm in our healthcare system combined with the need to rein in healthcare spending. But there is a more subtle point to be made here too. It relates to the fertile soil of our American psyche, where doing something is always perceived as better than thinking about our course of action, which is frequently referred to with contempt as "doing nothing." This crisis-response mentality is good in a crisis, but potentially detrimental in the long term: we are unlikely to be altering meaningful outcomes, and we are spending billions of dollars on interventions lacking evidence.
So, I stand behind both of my assertions and maintain that they are not mutually exclusive. Yes, outcomes are subject to much randomness; yes, processes known to alter these outcomes are the sensible measures of our efforts to improve quality; and yes, these processes need first to be rigorously validated for their impact on the outcomes in question. Anything short of this pathway is not just a waste of our collective resources, but a manipulation of the public trust. And that is as far from the intent of science as it can get.