
Tuesday, September 20, 2011

Eminence or evidence, or how not to look like a fool when reporting your own data

A study presented at the ICAAC meeting and reported by Family Practice News piqued my interest. Firstly, it is a study of C. difficile infection treatment, and secondly, it runs counter to the evidence accumulated to date. So I read the story very carefully, as, alas, the actual study presentation does not appear to be available.

Before I launch into the deconstruction of the data, I need to state that I do have a potential conflict of interest here. I am very involved in the CDI research from the health services and epidemiology perspective. But equally importantly, I have received research and consulting funding from ViroPharma, the manufacturer of oral Vancocin that is used to treat severe CDI.

And here is an important piece of background information: the reason the study was done. The recent evidence-based guideline on CDI developed jointly by SHEA and IDSA recommends initial treatment with metronidazole in the case of an infection that does not meet severe criteria, while advocating the use of vancomycin for severe disease. We will get into the reasons for this recommendation below.  

OK, with that out of the way, let us consider the information at hand.

My first contention is that this is a great example of how NOT to conduct a study (or how not to report it, or both). The study was a retrospective chart review at a single VA hospital in Chicago. All patients admitted between 1/09 and 3/10 who had tested positive for C. difficile toxin were identified and their hospitalization records reviewed. A total of 147 patients were thus studied, of whom 25 (17%) received vancomycin and 122 (83%) metronidazole. It is worth mentioning that of the 122 initially treated with metronidazole, 28 (23%) were switched over to vancomycin. The reasons for the switch as well as their outcomes remain obscure.

The treatment groups were stratified based on disease severity. Though the abstract states that severity was judged based on "temperature, white blood cell count, serum creatinine, serum albumin, acute mental status changes, systolic blood pressure<90, requirement for pressors," the thresholds for most of these variables are not stated. One can only assume that this stratification was done consistently and comported with the guideline.

Here is how the severity played out:

Nowhere can I find where those patients who were switched from metronidazole to vancomycin fell in these categories. And this is obviously important.

Now, for the outcomes. Those assessed were "need for colonoscopy, presence of pseudomembranes, adynamic ileus, recurrence within 30 days, reinfection > 30 days post therapy, number of recurrences >1, shock, megacolon, colon perforation, emergent colectomy, death." But what was reported? The only outcome to be reported in detail is recurrence in 30 days. And here is how it looks:

The other outcomes are reported merely as "M was equivalent to V irrespective of severity of illness (p=0.14). There was no difference in rate of recurrence (p= 0.41) nor in rate of complications between the groups (p=0.77)."
What the heck does this mean? Is the implication that the p-value tells the whole story? This is absurd! In addition, it does not appear to me from the abstract or the Family Practice News report as if the authors bothered to do any adjusting for potential confounders. Granted, their minuscule sample size did not leave much room for that, but the lack of an attempt alone invalidates the conclusion.

Oh, but if this were only the biggest of the problems! I'll start with what I think is the least of the threats to validity and work my way to the top of that heap, skipping much in the middle, as I do not have the time and the information available is full of holes. First, in any observational study of treatment there is a very strong possibility of confounding by indication. I have talked about this phenomenon previously here. I think of it as a clinician noticing something about the patient's severity of illness that does not manifest as a clear physiologic or laboratory sign, yet is very much present. A patient with this characteristic, although looking on paper much like one whose disease is not that severe, will be treated as someone at a higher threat level. In this case it may translate into treatment with vancomycin of patients who do not meet our criteria for severe disease, but nevertheless are severely ill. If present, this type of confounding blunts the observed differences between groups.

The lack of adjustment for potential confounding of any sort is a huge issue that negates any possibility of drawing a valid conclusion. Simply comparing groups based on severity of CDI does not eliminate the need to compare based on other factors that may be related to both the exposure and the outcome. This is pretty elementary. But again, this is minor compared to the fatal flaw.

And here it is, the final nail in the coffin of this study for me: sample size and superiority design. Firstly, the abstract and the write-up say nothing of what the study was powered to show. At least if this information had been available, we could make slightly more sense of the p-values presented. But no, it is nowhere to be found. As we all know, finding statistical significance depends on the effect size and the variation within the population: the smaller the effect size and the greater the variation, the more subjects are needed to show a meaningful difference. Note, I said meaningful, NOT significant, and this they likewise neglect. What would be a clinically meaningful difference in the outcome(s)? Could an 11% difference in recurrence rates be clinically important? I think so. But it is not statistically significant, you say! Bah-humbug, I say, go back and read all about the bunk that p-values represent!
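For perspective, a standard normal-approximation power calculation shows how far 147 patients falls short. The recurrence rates below (25% vs. 14%, an 11-point difference) are purely illustrative assumptions of mine, not the study's numbers:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per arm to detect a difference between two
    proportions, using the standard normal approximation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided alpha
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2                       # pooled proportion
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Illustrative recurrence rates: 25% vs. 14%, an 11-point difference
n = n_per_group(0.25, 0.14)
# Roughly 200 patients PER ARM, i.e., over 400 total -- far more
# than the 147 in the entire study
```

Under these assumptions the study would have needed about 200 patients per arm for 80% power, which makes its "no difference" p-values uninterpretable.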

One final issue: a superiority design is simply wrong here, in the absence of a placebo arm. In fact, the appropriate design is a non-inferiority study, with a very explicit development of valid non-inferiority margins that have to be met. It is true that a non-inferiority study may signal a superior result, but only if it is properly designed and executed, which this one is not.

So, am I surprised that the study found "no differences" as supported by the p-values between the two treatments? Absolutely not. The sample size, the design and other issues touched on above preclude any meaningful conclusions being made. Yet this does not seem to stop the authors from doing exactly that, and the press from parroting them. Here is what the lead author states with aplomb:
              "There is a need for a prospective, head-to-head trial of these two medications, but I’m not sure who’s going to fund that study," Dr. Saleheen said in an interview at the meeting, which was sponsored by the American Society for Microbiology. "There is a paucity of data on this topic so it’s hard to say which antibiotic is better. We’re not jumping to any conclusions. There is no fixed management. We have to individualize each patient and treat accordingly."
OK, so I cannot disagree with the individualized treatment recommendation. But do we really need a "prospective head-to-head trial of these two medications"? I would say "yes," were there not already not one but two randomized controlled trials addressing this very question: one by Zar and colleagues, and another done as a regulatory study of the failed Genzyme drug tolevamer. Both trials contained separate arms for metronidazole and vancomycin (the Genzyme trial also had a tolevamer arm), and both stratified by disease severity. Zar and colleagues reported that in the severe CDI group the rate of clinical response was 76% in the metronidazole-treated patients versus 97% in the vancomycin group (p=0.02). In the tolevamer trial, presented as a poster at the 2007 ICAAC, there was an 85% clinical response rate to vancomycin and 65% to metronidazole (p=0.04).

We can always desire a better trial with better designs and different outcomes, but at some point practical considerations have to enter the equation. These are painstakingly performed studies that show a fairly convincing and consistent result. So, to put the current deeply flawed study against these findings is foolish, which is why I suspect the investigators failed to mention anything about these RCTs.

Why do I seem so incensed by this report? I am really getting impatient with both scientists and reporters for willfully misrepresenting the strength and validity of data. This makes everyone look like idiots, but more importantly such detritus clogs the gears of real science and clinical decision-making.

Thursday, March 3, 2011

The value of a test

Reading this vintage paper on C diff from the Archives of Pediatrics & Adolescent Medicine, I came upon this irresistible conclusion:

Priceless!

Friday, December 3, 2010

What is the harm in repeat CDI testing?

This is cross-posted on my new False Positive blog where I plan to curate some of my writing specific to research methods. 


I recently peer-reviewed a manuscript on Clostridium difficile surveillance for a journal and was prompted to reread this paper by Lance Peterson and Ari Robicsek from the Annals of Internal Medicine August 2009. The paper is a great reminder of the high risk of false positive results from repeating a moderately sensitive yet highly specific test for a disease of low prevalence.
The case in point is C. difficile testing, of specific interest to me with the enzyme immunoassays (EIA) for toxins A and/or B. A frequent clinical dilemma is what to do when a patient has symptoms and signs suggestive of C. diff infection (CDI), yet the initial test comes back negative. Frequently the clinician will repeat testing, and possibly more than once. And on occasion, the repeat testing will yield a positive result. But how accurate is this positive result?
To answer this question, Peterson and Robicsek constructed a simple model, depicted in Table 2 of the paper (I have reproduced a part of it below). The authors derived EIA characteristics from the literature to be: sensitivity = 73.3%, specificity = 97.6%. They also assumed a true prevalence of disease of 10% among patients with suspected CDI. So, starting out with a hypothetical 1,000 patient cohort with suspected CDI, the results look as follows:
What comes out loud and clear is that with every round of repeat testing, the positive predictive value degrades dramatically, until the result becomes completely useless. But to go even further, the result may be harmful, since we have now gone from 1/4 of all detected positives being false to a whopping > 1/2! And now we need to think long and hard about what to do with these results. One other point: merely going from a single test to a second round, while adding a substantial number of true positives (18), ends up labeling an even greater number of people without the disease as having CDI (22)!
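The arithmetic behind the table is easy to reproduce. Here is a minimal Python sketch of the model (the function name and structure are mine, not the authors'), assuming only the patients who have tested negative on all prior rounds are retested:

```python
def repeat_test_ppv(prevalence, sensitivity, specificity, cohort, rounds):
    """Positive predictive value of each successive testing round,
    when only patients negative on all prior rounds are retested."""
    diseased = cohort * prevalence          # truly infected
    healthy = cohort * (1 - prevalence)     # truly uninfected
    ppvs = []
    for _ in range(rounds):
        tp = diseased * sensitivity         # true positives this round
        fp = healthy * (1 - specificity)    # false positives this round
        ppvs.append(tp / (tp + fp))
        diseased -= tp                      # only negatives go on to the next round
        healthy -= fp
    return ppvs

# EIA characteristics and 10% prevalence as in the paper
ppvs = repeat_test_ppv(prevalence=0.10, sensitivity=0.733,
                       specificity=0.976, cohort=1000, rounds=3)
# PPV falls from roughly 77% on the first test to under 50% on the
# second round and to about 20% on the third
```

Each round drains far more truly infected patients than uninfected ones from the retested pool, so the false positives quickly swamp the true ones.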
This illustrates the great potential for misclassification of CDI that we run into in surveillance studies, leading to inflated disease estimates. The moral of the story is that we always need to know what proportion of positive tests are in fact repeat tests following a negative assay. And I am even beginning to think that we need to limit ourselves to at most two consecutive tests when talking about incidence and prevalence of CDI.

Wednesday, February 10, 2010

Kids, schools and superbugs

What do the three have in common, you might wonder? Well, more than we used to think.

This story in today's UK Telegraph reports on a school child who developed diarrhea and tested positive for C difficile. The alarming thing is that there did not seem to be any explicit risk factors for this. The appalling thing is the misinformation in the story that
Children rarely become ill with C-diff, which normally strikes elderly people in hospital. 
This is how things used to be, before the BI/NAP1/027 bug evolved in the early first decade of the millennium. We and others have shown that kids are not immune from it, and neither are other people previously thought not to possess any risk factors for it. In fact, we have a paper coming out shortly in the CDC's journal Emerging Infectious Diseases showing that in the US the rate of pediatric hospitalizations with C diff rose from 7.2 cases per 10,000 hospitalizations in 1997 to 12.8 cases/10,000 in 2006.

So why is this happening? Well, there are a couple of ways to answer this question. The proximal answer is that the new bug is better equipped to propagate. Its spore possesses greater stickiness than the old pathogenic version which we all knew and loved in the '90s, and thus is more difficult to eradicate from fomites and anatomic surfaces. It also produces on the order of 20 times more of the toxins responsible for wreaking havoc in the colon. So, clearly, this is a bug for the new millennium.

But here is the real reason for this, albeit a little more removed: antibiotic overuse. All physicians are aware of this, and we all get the connection. The way this works is that C diff is impervious to many of the antibiotics employed to treat other infections, while its neighbors in the gut are decimated. Thus, C diff proliferates to fill the void and takes a firm hold under the right circumstances. The new superbug is, of course, very likely the result of the all-too-familiar saga of resistance evolution.

Here is the frightening part: we are still overusing antibiotics! I frequently hear from my friends that their MDs offered antibiotics for something that to me is clearly a non-bacterial issue. The most frustrating situation is when I am convinced that the friend has a post-viral reactive airways cough and needs an inhaler, but instead comes home with a handful of antibacterial pills. And no one wants to take the chance. We are so risk averse that even when we are well educated about the perils of antibiotic overuse, we are still likely to take them if our doctor prescribes them. And for a doctor, with the shrinking appointment times, what is the most expedient course, particularly with a patient with an entitled attitude? You guessed it, antibiotics!

So, what do we do? My feeling is that the action has to be multi-pronged. Yes, physicians need to be held accountable for their treatment choices, but so do patients. We need to do a much better job educating the public about the dark underbelly of antibiotics, so that they can be partners in these decisions. In my opinion, this may be the most critical healthcare issue of our time, given the concerns raised by both the WHO and the FDA that, if resistance emergence continues at this pace, we will be back to the dark ages of the pre-antibiotic era.

To this end, The Surgeon General should pick up the banner of antibiotic education. In fact, I recently sent her a letter outlining why she might want to make this a part of her Public Health agenda. If anyone is interested in making it a more public effort, I am happy to share it and resend with more signatures than just my own. She after all puts the "Public" into Public Health.

We have got to start talking to the people about this. The resistance train needs to reverse direction. And now!        
  

Tuesday, September 22, 2009

My study covered on Reuters Health hosted on thedoctorschannel.com

Here is a story from Reuters Health covering my recent study in Chest. There is a video there too!
NEW YORK (Reuters Health) – Patients on prolonged acute mechanical ventilation face an increased risk of Clostridium difficile-associated disease, according to a report in the September Chest.

“C. difficile is frequent in this population, and we need to be vigilant in our prevention efforts,” Dr. Marya D. Zilberberg from University of Massachusetts, Amherst told Reuters Health by email. “Once present, it is critical to contain the spread of C. difficile to other patients.”

Dr. Zilberberg and colleagues used 2005 data from the Health Care Utilization Project/Nationwide Inpatient Sample to examine the rates and outcomes of C. difficile-associated disease among 64,910 hospitalized adults who received prolonged treatment with acute mechanical ventilation.

Among these patients, 5.3% had a concomitant diagnosis of C. difficile-associated disease at death or discharge.

On unadjusted analysis, the primary outcome of the study – hospital mortality – was 32.6% in patients with C. difficile-associated disease and 33.0% in patients without it. In the adjusted analysis, there was “a small but statistically significant difference” in mortality risk, with patients who had C. difficile-associated disease “somewhat less likely” to die in the hospital (adjusted relative risk, 0.89).

In addition, C. difficile-associated disease in patients with prolonged ventilation was associated with substantially greater median hospital length of stay, higher median costs, a greater likelihood of being discharged to a skilled nursing facility, and a lower likelihood of being discharged to home.

“This infection rate of 530 cases per 10,000 hospital admissions is strikingly high compared with that noted in the general hospitalized population (11.2 cases per 10,000 hospitalizations),” the investigators say.

“The most important thing is to minimize the use of antibiotics as much as possible, particularly the classes that are known to select for C. difficile (e.g., quinolones, etc.),” Dr. Zilberberg said. “We are finding that there is a fair bit of community-acquired C. difficile, which may act as a reservoir for hospital-acquired disease. So limiting antibiotics should be a community-wide effort, not just that in the hospital.”

Reference:
Chest 2009;136:752-758.