Addendum #2:
So, here is the whole story. Stephanie Desmon, the author of the Johns Hopkins press release, e-mailed me back and pointed me to Peter Pronovost as the source of the 10% reduction figure. I e-mailed Peter, and he got back to me, confirming that
"The 10 percent is the rounded differences in differences in odds ratios"Moral of the story: The devil is in the details.
And speaking of details, I must admit to an error of my own. If you look at the figure reproduced below, I called out the wrong points. For the adjusted data, you need to look at the open circles (for the intervention group) and the squares (for the control group). In fact, adjusted mortality went from about 20% at baseline to 16% in the 13-22 month interval for the Keystone cohort, while for the control group it went from a little over 20% to a little under 18%. This makes the absolute reduction a tad more impressive, though the difference between the reductions in the intervention and control groups is still under 2 percentage points, leaving all of my other points in need of addressing.
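To make that correction concrete, here is a quick back-of-the-envelope check; the inputs are my eyeballed readings of the figure, not values reported in the paper:

```python
# Eyeballed adjusted mortality from the figure (baseline -> 13-22 months).
# These are my rough visual estimates, not the paper's reported values.
keystone_baseline, keystone_final = 0.20, 0.16   # intervention (open circles)
control_baseline, control_final = 0.205, 0.178   # control (squares)

keystone_drop = keystone_baseline - keystone_final   # ~4 percentage points
control_drop = control_baseline - control_final      # ~2.7 percentage points

# The difference in differences: the extra reduction the intervention
# group saw beyond the control group's own improvement.
print(f"Keystone absolute drop: {keystone_drop:.1%}")
print(f"Control absolute drop:  {control_drop:.1%}")
print(f"Difference in differences: {keystone_drop - control_drop:.1%}")
# -> roughly 1.3 percentage points, i.e. under 2
```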
Addendum #1, 11:00 AM EST, 2/2/11:
I just found what I think is the origin of the 10% mortality reduction rumor in this press release from Johns Hopkins. I just e-mailed Stephanie Desmon, the author of the release, to see where the 10% came from. Will update again should I hear from either Maggie Fox or Stephanie Desmon.
Remember the Keystone project? A number of years ago, when we started to pay close attention to healthcare-associated infections (HAI) and hospitals started to take introspective looks at their records, it turned out that the ICUs in the state of Michigan, for one reason or another, had very high rates of HAIs. As this information percolated through our collective consciousness, the stars aligned in such a way as to release funding from the AHRQ in Washington, DC, for a group of ICU investigators at the Johns Hopkins University School of Medicine in Baltimore, MD, headed by Peter Pronovost, to design and implement a study employing IHI-style (Boston, MA) bundled interventions to prevent catheter-associated bloodstream infections (CABSI) and ventilator-associated pneumonia (VAP) across the consortium of ICUs in MI. Whew! This poly-geographic collaboration resulted in a landmark paper in 2006 in the New England Journal of Medicine, wherein the authors showed that the bundled interventions, directed by a checklist aimed at CABSI, were indeed associated with a satisfying reduction in CABSI. Since 2006 the ICU community has been eagerly awaiting the results of the VAP intervention from Keystone, but none have come out. When there is a void of information, rumors fill it, and plenty of rumors have circulated about the alleged failure of the VAP trial.
I do not want to belabor here what I have written before about VAP and its prevention, what makes the latter so difficult, and how little evidence there really is that the IHI bundle actually does anything. You can find at least some of my thoughts on that here. But why am I bringing up the Keystone project again anyway? Well, it is because Pronovost's group has just published a new paper in BMJ, and this time their aim was even more ambitious: to show the impact of this state-wide QI intervention on hospital mortality and length of stay. This is a really reasonable question, mind you, since we could argue that, if the intervention reduces HAI, it should also do something to those important downstream events driven by the particular HAI, namely mortality and LOS. But here are a couple of issues that I found of great interest.
First, as we have discussed before, whether VAP itself causes death in the ICU population (that is, patients die from VAP), or whether VAP tends to attack those who are sicker and therefore more likely to die anyway (patients die with VAP), remains unclear in our literature. There is some evidence that late VAP, but not early VAP, may be associated with an attributable increase in mortality, but these data need to be confirmed. Why is this important? Because if VAP does not impart an increase in mortality, then trying to decrease mortality by reducing VAP is just tilting at windmills.
So, let's talk about the study and what it showed as reported in the BMJ paper. You will be pleased that I will not go through the traditional list of potential threats to validity here, but will take the data at face value (well, almost). The authors took an interesting approach: they compared the performance of all eligible ICUs regardless of whether they actually chose to take part in the project. Of all the admissions examined in the intervention group, 88% came from Keystone participants. This is a really sound way to define the intervention cohort, and it actually biases the data away from showing an effect. So, kudos to the investigators. The comparator cohort came from ICUs in the hospitals surrounding Michigan, those that were not eligible for Keystone participation. One point about these institutions requires clarification: I did not see in the paper whether the authors actually looked at the control hospitals' QI initiatives. Why is this important? Well, if many of the comparator hospitals had successful QI initiatives of their own, then one would expect to see even less difference between the Keystone intervention and the control group. So, again, good on them that they biased the data against themselves.
This is the line of thinking that brings me to my second point. Reuters' Maggie Fox covered this paper in an article a couple of days ago, an article whose lead paragraph read:
(Reuters) - A U.S. program to help make sure hospital staff maintain strict hygiene standards lowered death rates in intensive care units by 10 percent, U.S. researchers reported on Monday.

Mind you, I read the article before delving into the peer-reviewed paper, so my surprise came from simply knowing how supremely difficult it is to reduce ICU mortality by 10% with any intervention. In the ICU we celebrate when we see even a 2% absolute mortality reduction. So, it became obvious to me that something got lost in translation here. And indeed, it did. Here is how I read the data.
There are multiple places to look for the mortality data. One is found in this figure:
Now, look at the top panel and focus on the solid circles -- these depict the adjusted mortality in the Keystone intervention group. What do you see? I see mortality going from about 14% at baseline, to about 13.5% during the implementation phase, to about 13% at 13-22 months post-implementation. I do not see a 10% reduction, but at best about a 1% mortality advantage. What is also of interest is that the adjusted mortality in the control group (solid squares) also went down, albeit not by as much. But at almost every measurement point it was already lower than in the intervention group.
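A note on units, since this is exactly where translations tend to break down: a drop from 14% to 13% is 1 percentage point in absolute terms, and only about a 7% relative reduction. A minimal sketch, using my eyeballed readings of the graph:

```python
# Eyeballed adjusted mortality in the Keystone group (my rough reading
# of the graph, not the paper's reported values).
baseline, final = 0.14, 0.13

absolute_reduction = baseline - final                # 1 percentage point
relative_reduction = absolute_reduction / baseline   # ~7% relative

print(f"Absolute reduction: {absolute_reduction * 100:.1f} percentage points")
print(f"Relative reduction: {relative_reduction:.1%}")
# Neither figure is anywhere near the "10 percent" in the Reuters piece.
```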
Then there is this table, where the adjusted odds ratios of death are given for the two groups at various time points:
And this is where things get interesting. If you look at the last line of the table, the adjusted odds ratios indeed look impressive, and, furthermore, the AOR for the intervention group is lower than that for the control group. And this is pleasing to any investigator. But what does it mean? Well, it means that the odds of death in the intervention group went down by roughly 24% (give or take the 95% confidence interval) and by 16% in the control group, each compared to itself at baseline. This is impressive, no?
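Incidentally, this appears to be where the press release's figure comes from (see the addendum above, where Pronovost describes the 10 percent as "the rounded differences in differences in odds ratios"). The odds ratios below are my approximations of the point estimates just quoted, so take this as a sketch of the calculation rather than the authors' actual numbers:

```python
# Approximate adjusted odds ratios (final period vs. baseline), read off
# the table; rough point estimates, not the paper's exact values.
aor_keystone = 0.76   # odds of death down ~24% in the intervention group
aor_control = 0.84    # odds of death down ~16% in the control group

odds_reduction_keystone = 1 - aor_keystone   # 0.24
odds_reduction_control = 1 - aor_control     # 0.16

# The "difference in differences in odds ratios": the extra reduction in
# the ODDS of death beyond what the control group saw.
diff_in_diff = odds_reduction_keystone - odds_reduction_control
print(f"{diff_in_diff:.0%}")  # -> 8%, in the ballpark of the rounded 10
# Note: this is a difference in odds, not a 10% drop in mortality itself.
```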
Well, yes, it is. But not as impressive as it sounds. A relative reduction of 24% with a baseline mortality of 14% means an absolute reduction in mortality of 14% x 24% = 3.4%. But, you will notice, we did not actually observe even this magnitude of mortality reduction in the graph. What gives? There is an excellent explanation for this. It is a little-known fact to the average reader (and only slightly better known to the average researcher and peer reviewer) that the odds ratio, while a fairly solid way to express risk when the absolute risk is small (say, under 10%), tends to overestimate the effect when the risk is higher than that. I know we have not yet covered the ins and outs of odds ratios, relative risks and the like in the "reviewing literature" series, but let me explain briefly. The difference between odds and risk is in the denominator. While the denominator for the latter is the entire cohort at risk for the event (here, all patients at risk of dying in the hospital), that for the former is the part of the cohort that did not experience the event. See the difference? By definition, the denominator for the odds is smaller than that for the risk calculation, thus yielding a more impressive, yet inaccurate, reduction in mortality.
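To put a number on how much the odds ratio flatters the result, one can convert it to a relative risk with the standard correction RR = OR / (1 - p0 + p0 x OR), where p0 is the baseline risk (the Zhang and Yu formula). A minimal sketch, again using the approximate numbers from the paragraphs above:

```python
def or_to_rr(odds_ratio: float, baseline_risk: float) -> float:
    """Convert an odds ratio to a relative risk (Zhang & Yu correction)."""
    return odds_ratio / (1 - baseline_risk + baseline_risk * odds_ratio)

p0 = 0.14    # approximate baseline mortality
aor = 0.76   # approximate adjusted odds ratio at 13-22 months

rr = or_to_rr(aor, p0)
naive_arr = p0 * (1 - aor)    # treating the OR as a relative risk: ~3.4%
implied_arr = p0 * (1 - rr)   # absolute reduction actually implied by the OR

print(f"Relative risk implied by the OR: {rr:.2f}")   # ~0.79, not 0.76
print(f"Naive absolute reduction:   {naive_arr:.1%}")    # ~3.4%
print(f"Implied absolute reduction: {implied_arr:.1%}")  # ~3.0%
# Even at a 14% baseline the odds ratio overstates the relative risk,
# and the gap widens as the baseline risk grows.
```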
Bottom line? Interesting results. It is not clear that the actual intervention is what produced the 1% mortality reduction -- it could have been secular trends, regression to the mean or the Hawthorne effect, to name just a few alternatives. But regardless, preventing death is good. The question is whether these improvements in mortality were sustained after hospital discharge, or whether these patients were merely kept alive so that they could die elsewhere. Also, what is the value balance here, in terms of the resources expended on the intervention versus results that may not even be due to the particular intervention in question?
All of this is to say that I am really not sure what the data are showing. What I am sure of is that I did not find any evidence of the 10% reduction in mortality reported by Reuters (I did e-mail Maggie Fox and am at this time still awaiting a reply; will update if and when I get it). In this time of aggressive efforts to bend the healthcare expenditure curve, we need to pay attention to what we invest in and the return on that investment, even if the intervention is all "motherhood and apple pie."