A story in HealthDay titled "Two Studies Find Routine Mammography Saves Lives" talks about two studies presented at a recent annual meeting.
Cutting through all the definitive bravado, here is a little piece of science that was reported:
"Compared with the pre-screening period 1986 to 1988, deaths from breast cancer among women aged 55-79 fell by 31 percent in 2009," Jacques Fracheboud, a senior researcher at the Erasmus University Medical Center in Rotterdam, said in a meeting news release. "We found there was a significant change in the annual increase in breast cancer deaths: before the screening program began, deaths were increasing by 0.3 percent a year, but afterwards there was an annual decrease of 1.7 percent," he added. "This change also coincided with a significant decrease in the rates of breast cancers that were at an advanced stage when first detected."

Note, the reference is to deaths from breast cancer without any mention of all-cause mortality. (You can read about why the latter is important here.)
The report next states that over the first 20 years of the screening program:

... 13.2 million breast cancer screening examinations were performed among 2.9 million women (an average of 4.6 examinations per woman), resulting in nearly 180,000 referral recommendations, nearly 96,000 biopsies and more than 66,000 breast cancer diagnoses.
So, doing the math, I come up with about a 31% false-positive rate at the biopsy stage (that's 96,000 biopsies minus 66,000 positives for cancer, all divided by the 96,000 total biopsies). If we use the 180,000 "referral recommendations" as our denominator of all positive tests, and stick with the 66,000 true positives, then the false positives grow to (180,000 - 66,000)/180,000 = 0.63, or 63%. And if we spread the 30,000 biopsy-stage false positives over the 13.2 million examinations, that equates to roughly a 0.23% chance of a false positive per examination.
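To make the arithmetic transparent (my own dose of "show your work"), here is the same calculation as a short Python sketch; the variable names are mine, and the inputs are simply the rounded totals quoted in the report:

```python
# False-positive arithmetic from the reported 20-year aggregates
# (rounded figures as quoted in the HealthDay story).
examinations = 13_200_000  # screening examinations
referrals = 180_000        # referral recommendations (all positive screens)
biopsies = 96_000          # biopsies performed
cancers = 66_000           # confirmed breast cancer diagnoses (true positives)

fp_biopsy = biopsies - cancers            # 30,000 false positives at the biopsy stage
print(fp_biopsy / biopsies)               # ~0.31, i.e., ~31% of biopsies
print((referrals - cancers) / referrals)  # ~0.63, i.e., ~63% of referrals
print(fp_biopsy / examinations)           # ~0.0023, i.e., ~0.23% per examination
```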
Yet the report goes on to say that (emphasis mine):

For a woman who was 50 in 1990 and had 10 screenings over 20 years, the cumulative risk of a false-positive result (something being detected that turned out not to be breast cancer) was 6 percent.

Six percent? This is clearly a place where my high school math teacher's mantra of "show your work" is applicable.
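For what it is worth, here is one way a 6% figure could arise. This is my guess, not the researchers' stated method, and it assumes the 10 screens behave like independent trials with a constant per-screen false-positive rate:

```python
# Per-screen false-positive rate p implied by a 6% cumulative risk over
# 10 screens, assuming independence: 1 - (1 - p)**10 = 0.06.
cumulative = 0.06
screens = 10
p = 1 - (1 - cumulative) ** (1 / screens)
print(p)  # ~0.0062, i.e., roughly 0.6% per screen
```

That implied 0.6% per screen sits between the 0.23% per-examination figure above and the referral-level rate of about 0.86% per examination (114,000 divided by 13.2 million), so even this guess does not line up cleanly with the aggregates.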
The next piece of information that I would like to understand better is this:
Over-diagnosis (detection of breast tumors that would never have progressed to be a problem) occurred in 2.8 percent of all breast cancers diagnosed in the total female population and 8.9 percent of screening-detected breast cancers.

How exactly was this computed? Again, a case for "show your work."
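The report does not show the calculation, but the two percentages do constrain each other. If the same over-diagnosed cases sit in both numerators (my reading, not something the report states), then dividing one rate by the other tells us what share of all cancers was screen-detected:

```python
# Consistency check: if D over-diagnosed cases are 2.8% of all cancers and
# 8.9% of screen-detected cancers, then
# (D / all_cancers) / (D / screen_detected) = screen_detected / all_cancers.
rate_all = 0.028     # over-diagnosed share of all breast cancers
rate_screen = 0.089  # over-diagnosed share of screen-detected cancers
print(rate_all / rate_screen)  # ~0.31: screen-detected ~31% of all cancers
```

The report does not say whether screen-detected cancers really were about 31% of all diagnoses; that is exactly the kind of cross-check that showing the work would allow.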
And then there is this (emphasis mine):
Regular screening "decreases deaths by over 30 percent, [with] limited harm and reasonable costs. Additionally, cancers are detected at an earlier stage, which means not only decreased mortality but also morbidity; the patient may not have to have chemotherapy or a mastectomy," she noted.OK, so, if I got it right, it is breast cancer mortality that is decreased by 31%, not all-cause mortality. This really should have been spelled out more clearly, not to mention that the actual, or absolute, reduction likely pales in comparison to this relative drop. And what about diagnosing earlier stage disease? Lead time bias, anyone?
The second study was a computer model, and I will not go through it at this time as I need to move on to other work. But you get the picture: the numbers given in the report are limited and at times they don't add up. Mixing up cancer mortality with all-cause mortality leads to erroneous conclusions. And finally, forgoing reporting on the absolute risk reduction in favor of the inflated relative reduction is not helpful for understanding the true risks involved.
One final thought: Yes, I do have cognitive biases, and it is difficult for me to avoid them. I happen to fall into the camp that thinks screening for subclinical diseases, at least in our current technological setting, is disease mongering. At the same time, I would like to think that if the data really showed a significant benefit without great risks, I would give them a second look.
The bottom line is this: at least for me, the report confused the issue more than it clarified it. Perhaps the study, once published, will adequately answer all of the questions I have posed. But at this stage, it is a shame that such strong statements as...
and this..."These results show why mammography is such an effective screening tool," said one U.S. expert, Dr. Kristin Byrne, chief of breast imaging at Lenox Hill Hospital in New York City. She was not involved in the new research.
"We are convinced that the benefits of the screening program outweigh all the negative effects," Fracheboud said.

... are not backed up by appropriate evidence.
h/t to @ElaineSchattner for the story
If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have non-profit status.
Thank you for your support!
"Additionally, cancers are detected at an earlier stage, which means not only decreased mortality but also morbidity; the patient may not have to have chemotherapy or a mastectomy," she noted."
You responded with: "And what about diagnosing earlier stage disease? Lead time bias, anyone?"
Are these two different issues, namely morbidity avoidance vs. survival time?
Also, I am not sure I get the following:
*If we use the 180,000 "referral recommendations" as our denominator of all positive tests, and stick with the 66,000 true positives, then the false positives grow to (180,000 - 66,000)/180,000 = 0.63, or 63%.*
Are you assuming referrals are for discrete episodes? Isn't it possible that the same woman gets recalled 2 or 3 times in 6 weeks for the same mass? I am not sure how, on one hand, the false positives are 30K, and then the figure changes with the other calculation.
Thanks
Brad
Hi, Brad, thanks for commenting. I think this is exactly the point: the article cites a 6% false-positive rate, and there is just no way to "retrofit" it to the data that are given. So the point I tried to make (perhaps not that eloquently) is that when you cite such a low false-positive rate, you need to be transparent about how it was calculated. I am sure that the full paper, once it comes out, will make this plainer, but for the moment such a startlingly low figure remains obscure.
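To show what I mean by "no way to retrofit," here is the closest I can get from the aggregates. Both results are crude per-woman averages over the whole program, not the specific cohort of women who were 50 in 1990, which may be part of the discrepancy:

```python
# Naive attempts to retrofit the 6% cumulative figure to the aggregates.
women = 2_900_000
fp_biopsy = 96_000 - 66_000     # false positives at the biopsy stage
fp_referral = 180_000 - 66_000  # false positives at the referral stage

print(fp_biopsy / women)    # ~0.010, i.e., ~1% per woman
print(fp_referral / women)  # ~0.039, i.e., ~3.9% per woman
```

Neither reproduces the 6 percent.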