
Friday, July 20, 2012

Early radical prostatectomy trial: Does it mean what you think it means?

Another study this week added to the controversy about early prostate cancer treatment. The press, as usual, stopped at citing the conclusion: Early prostatectomy does not reduce all-cause mortality. But the really interesting stuff is buried in the paper. Let's deconstruct.

This was a randomized controlled trial of early radical prostatectomy versus observation. The study was done mostly within the Veterans Affairs system and took 8 years to enroll a little over 700 men. This alone should give us pause. Figure 1 of the paper gives the breakdown of the enrollment process: 5,023 men were eligible for the study, yet 4,292 declined, leaving 731 (15% of those eligible) to participate. This is a problem, since there is no way of knowing whether these 731 men are actually representative of the 5,023 who were eligible. Perhaps there was something unusual about them that made them and their physicians agree to enroll in this trial. Perhaps they were generally sicker than those who declined and were apprehensive about the prospect of observation. Or perhaps it was the opposite, and they felt confident in either treatment. We can make up all kinds of stories about those who did and did not agree to participate, but the reality is that we just don't know. This creates a problem with the generalizability of the data, raising the question of exactly which patients these data apply to.

The next issue is what might be called "protocol violation," though I don't believe the investigators actually called it that. Here is what I mean: 364 men were randomized to the prostatectomy group, yet only 281 of them actually underwent a prostatectomy, leaving nearly one-quarter of the group free of the main exposure of interest. Similarly, of the 367 men randomized to observation, 36 (10%) underwent a radical prostatectomy. We might call this inadvertent cross-over, which does tend to happen in RCTs, but it needs to be minimized in order to get at the real answer. What this kind of cross-over does, as is intuitively obvious, is blend the groups' exposures, shrinking whatever difference in the outcome may actually exist. So, when you don't get a difference, as happened in this trial, you don't know whether it is because of these protocol violations or because the treatments are essentially equivalent.
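To see how much dilution cross-over can cause, here is a minimal simulation sketch. The cross-over rates mirror the trial's (roughly 23% and 10%), but the underlying death probabilities are invented for illustration; this is not a reanalysis of the trial's data.

```python
import random

random.seed(0)

def itt_mortality(n=100_000, p_death_surgery=0.44, p_death_observation=0.50,
                  skip_surgery=0.23, crossover_to_surgery=0.10):
    """Intention-to-treat death rates when some men do not receive
    their assigned treatment. All probabilities are hypothetical."""
    deaths_surgery_arm = sum(
        random.random() < (p_death_observation
                           if random.random() < skip_surgery
                           else p_death_surgery)
        for _ in range(n))
    deaths_observation_arm = sum(
        random.random() < (p_death_surgery
                           if random.random() < crossover_to_surgery
                           else p_death_observation)
        for _ in range(n))
    return deaths_surgery_arm / n, deaths_observation_arm / n

surg, obs = itt_mortality()
print(f"true rates: 44% vs 50%; observed ITT: {surg:.1%} vs {obs:.1%}")
# A real 6-point difference shrinks to roughly 4 points from cross-over alone.
```

With these made-up numbers, a genuine 6-percentage-point mortality difference attenuates to about 4 points purely because of cross-over -- exactly the blending described above.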

And indeed, the study results indicated that there is really no difference between the two approaches in terms of the primary endpoint: all-cause mortality over a substantially long follow-up period was 47% in the prostatectomy group and 50% in the observation group (hazard ratio 0.88, 95% confidence interval 0.71 to 1.08, p=0.22). This means that the 12% relative difference in this outcome between the groups was more likely due to chance than to any benefit of the surgery. "But how can cancer surgery impact all-cause mortality?" you say. "It only claims to alter what happens to the cancer, no?" Well, yes, that is true. But can you really call a treatment successful if all it does is give you the opportunity to die of something else within the same period of time? I thought not. And anyway, looking at prostate cancer mortality, there was no statistically significant difference there either: 5.8% attributable mortality in the surgery group compared to 8.4% in the observation group (hazard ratio 0.63, 95% confidence interval 0.36 to 1.09, p=0.09).

The editorial accompanying this study raised some very interesting points (thanks to Dr. Bradley Flansbaum for pointing me to it). He and I both puzzled over this one particularly unclear statement:
...only 15% of the deaths were attributed to prostate cancer or its treatment. Although overall mortality is an appealing end point, in this context, the majority of end points would be noninformative for the comparison of interest. The expectation of a 25% relative reduction in mortality when 85% of the events are noninformative implies an enormous treatment effect with respect to the informative end points.
Huh? What does "noninformative" mean in this context? After thinking about it quite a bit, I came to the conclusion that the editorialists are saying that, since prostate cancer caused such a small proportion of all deaths, one cannot expect this treatment to impact all-cause mortality (certainly not the 25% relative reduction that the investigators targeted), the majority of the causes being non-prostate cancer related. Yeah, well, but then see my statement above about the problematic aspects of disease-specific mortality as an outcome measure.
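Here is the editorialists' arithmetic made explicit -- a back-of-the-envelope sketch using only the figures they quote:

```python
# If only 15% of deaths are "informative" (prostate cancer related),
# then a 25% relative reduction in ALL deaths has to come entirely
# out of that 15% slice.
informative_fraction = 0.15
target_all_cause_reduction = 0.25

required = target_all_cause_reduction / informative_fraction
print(f"required reduction in prostate cancer deaths: {required:.0%}")
# 167% -- surgery would have to prevent more prostate cancer deaths
# than actually occur, hence the "enormous treatment effect".
```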

The editorial authors did have a valid point, though, when it came to evaluating the precision of the effects. Directionally, there certainly seemed to be a reduction in both all-cause and prostate cancer mortality in the group randomized to surgery. On the other hand, the confidence intervals both crossed unity (I have an in-depth discussion of this in the book). On the third hand (erp!), the portion of the 95% CI below 1.0 was far greater than the portion above 1.0. This may imply that a study with greater precision (that is, narrower confidence intervals) might have found a statistically significant difference between the groups. But to get higher precision we would have needed either 1) a larger sample size (which the investigators were unable to obtain even over an 8-year enrollment period), or 2) fewer treatment cross-overs (a difficult proposition even in the context of an RCT), or 3) both. On the other hand (the fourth?), the 3% absolute reduction in all-cause mortality amounts to a number needed to treat of roughly 33, which may be clinically acceptable.
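For the record, the number needed to treat is just the reciprocal of the absolute risk reduction; a one-line check using the crude rates quoted above:

```python
# NNT = 1 / absolute risk reduction, using the crude 47% vs 50% rates.
arr = 0.50 - 0.47
nnt = 1 / arr
print(f"ARR = {arr:.0%}, NNT ≈ {nnt:.0f}")  # ARR = 3%, NNT ≈ 33
```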

So what does this study tell us? Not a whole lot, unfortunately. It throws an additional pinch of confusion into the cauldron already boiling over with contradiction and uncertainty. Will we ever get the definitive answer to the question raised in this work? I doubt it, given the obvious difficulties implementing this RCT.  
                  
If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!

Thursday, March 15, 2012

PSA screening: Does it or doesn't it?

A study in the NEJM reports that after 11 years of follow-up in a very large cohort of men randomized either to PSA screening every 4 years (~73,000 subjects) or to no screening (~89,000 subjects), there was both a reduction in prostate cancer deaths and no overall mortality advantage. How confusing can things get? Here is a screenshot of today's headlines about it from Google News:

[screenshot: Google News headlines on the PSA study]
How can the same test cut prostate cancer deaths and at the same time not save lives? This is counter-intuitive. Yet I hope that a regular reader of this blog is not surprised at all.  For the rest of you, here is a clue to the answer: competing risks.

What are competing risks? It's a mental model of life and death which says that multiple causes are competing to claim your life. If you are an obese smoker, you may die of a heart attack, or of diabetes complications, or of a cancer, or of something else altogether. So, if I put you on a statin and get you to lose weight, but you continue to smoke, I may save you from dying of a heart attack, but not from cancer. One major feature of the competing risks model that confounds the public and students of epidemiology alike is that these risks can add up to more than 100% for an individual. How is this possible? Well, the person I describe may have (and I am pulling these numbers out of thin air) a 50% risk of dying from a heart attack, 30% from lung cancer, 20% from head and neck cancer, and 30% from complications of diabetes. This adds up to 130%; how can this be? Each of these is a hypothetical risk, computed as though the others did not exist, so in the imaginary world of risk prediction anything is possible. In reality, the person will die of just one thing, and that one thing becomes his 100% cause of death.
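If you want to see the model in action, here is a toy simulation; every hazard in it is invented, and the point is only qualitative: eliminate one competing cause entirely, and overall mortality barely moves.

```python
import random

random.seed(1)

def cause_of_death(annual_pc=0.001, annual_other=0.05, years=30,
                   pc_eliminated=False):
    """Each year, prostate cancer and 'everything else' compete to
    claim a life. All hazards are made-up illustrative numbers."""
    for _ in range(years):
        if not pc_eliminated and random.random() < annual_pc:
            return "prostate cancer"
        if random.random() < annual_other:
            return "other"
    return "alive"

n = 100_000
for label, kwargs in [("usual care", {}),
                      ("PC deaths eliminated", {"pc_eliminated": True})]:
    outcomes = [cause_of_death(**kwargs) for _ in range(n)]
    total = sum(o != "alive" for o in outcomes) / n
    pc = sum(o == "prostate cancer" for o in outcomes) / n
    print(f"{label}: total mortality {total:.1%}, prostate cancer {pc:.2%}")
# Prostate cancer deaths go to zero, yet total mortality barely budges:
# the competing cause claims most of those men within the same horizon.
```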

Before I get to translating this to the PSA data, I want to say that I find the second paragraph of the Results section quite problematic. It tells me how many of the PSA tests were positive, how many screenings on average each man underwent, what percentage of those with a positive test underwent a biopsy, and how many of those biopsies turned up cancer. What I cannot tell from this is precisely how many men had a false positive test and still had to undergo a biopsy -- the denominators in this paragraph shape-shift from tests to men. The best I can do is estimate: 136,689 screening tests, of which 16.6% (15,856) were positive. Dividing this by the 2.27 average tests per subject yields 6,985 men with a positive PSA screen, of whom 6,963 had a biopsy-proven prostate cancer. And here is what's most unsettling: at a PSA cut-off of 4.0 or higher, the specificity of this test for cancer is only 60-70%, meaning that a positive PSA should be a false positive (a positive test in the absence of disease) 30-40% of the time. Yet if my calculations are anywhere in the ballpark, the false positive rate in this trial was only 0.3%. This makes me think that either I am reading the paragraph incorrectly, or there is some mistake. I am especially concerned since the PSA cut-off used in the current study was 3.0, which raises sensitivity at the expense of specificity and should therefore produce even more false positives. So this is indeed bothersome, but I am willing to write it off to poor reporting of the data.
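For what it's worth, here is that back-of-the-envelope estimate laid out step by step, using the numbers exactly as reported (so any inconsistency in the paper's figures carries through):

```python
# Estimating men with a false positive screen from the paper's figures.
positive_tests = 15_856        # positive screening tests, as reported
tests_per_man = 2.27           # average screens per participant
cancers = 6_963                # biopsy-proven cancers, as reported

men_with_positive_screen = positive_tests / tests_per_man   # ≈ 6,985
false_positive_fraction = 1 - cancers / men_with_positive_screen
print(f"≈ {false_positive_fraction:.1%} false positives")   # ≈ 0.3%
# Versus the 30-40% one would expect from the test's specificity --
# hence the suspicion of a reporting problem rather than a real result.
```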

Let's get to mortality. The authors state that the death rates from prostate cancer were 0.39 per 1,000 patient-years in the screening group and 0.50 per 1,000 patient-years in the control group. Recall from the meat post that patient-years are roughly the number of subjects observed multiplied by the number of years of observation. So, to put the numbers in perspective, for an individual over 10 years the risk drops from 0.5% to 0.39%, an absolute reduction of about 0.11 percentage points -- again microscopic. Nevertheless, the relative risk reduction was a significant 21%. But of course we are only talking about deaths from prostate cancer, not from all the other competitors. And this is the crux of the matter: a man in the screening group was just as likely to die as a similar man in the non-screening group; causes other than prostate cancer were simply more likely to claim his life.
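Translating those rates into individual risks is simple arithmetic; a quick sketch (the crude rate ratio comes out at 22%, close to the 21% relative reduction the paper reports after adjustment):

```python
# Death rates per 1,000 patient-years, as reported.
rate_screening = 0.39 / 1000
rate_control = 0.50 / 1000

years = 10
risk_screening = rate_screening * years    # ≈ 0.39% over a decade
risk_control = rate_control * years        # ≈ 0.50%
arr = risk_control - risk_screening        # ≈ 0.11 percentage points
rrr = 1 - rate_screening / rate_control    # crude 22%, near the paper's 21%
print(f"10-year risks: {risk_screening:.2%} vs {risk_control:.2%}; "
      f"ARR {arr:.2%}, RRR {rrr:.0%}")
```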

The authors go through the motions of calculating the number needed to invite for screening (NNI) in order to avoid a single prostate cancer death, and it turns out to be 1,055. But this number is only meaningful if we decide to get into death design, making something like an "I don't want to die of this, but that other cause is OK" kind of choice. And although I don't doubt that there may be takers for such a plan, I am pretty sure that my tax dollars should not pay for it. And thus I cast my vote for "doesn't."