Friday, March 30, 2012

My solution to the healthcare crisis

Here is my talk from Ignite Boston last night -- I solved the healthcare crisis!

[Embedded video: Ignite Boston talk]

If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. 

Thank you for your support!

Wednesday, March 28, 2012

Why the PPACA hearing is the very definition of insanity

I haven't said much about the SCOTUS hearing of the PPACA, but it is time to break my silence. I have been listening to some of the details of the arguments, and I cannot help but be nauseated. It feels to me like the Justices are behaving like my teen: they take literalness to its absurd limit. Health insurance and cell phones, really? This betrays a complete disregard for probability. What are the chances that, within your lifetime in the US, you will need to dial 911 without a landline or another human with a cell phone within shouting distance? And what are the odds that you will at some point in your life require a medical consultation? I rest my case.

The more I think about it, the more convinced I am that the bill should have introduced single payer a priori -- funding our access to medicine through a tax. Yes, a tax. Perhaps the government is not the most efficient agent of this, and the overhaul could have been accomplished through some public-private hybrid model. In the end, as I have said here, our priorities are misaligned, as are our perceptions of what is important in this debate. We spend 97% of all the healthcare money on medicine, and we spend well over 97% of our national discussion about health on access to healthcare and medical interventions, which can only make a 10% difference in our health. The real money, so to speak, is in public health, which contributes 60% to our true health and gets only 3% of the expenditures and practically no conversational energy.

So, once again I find myself turning to the wisdom of Albert Einstein, who defined insanity as "doing the same thing over and over again, and expecting different results." The SCOTUS circus is the poster child for this insanity. Whatever the outcome, and I am not at all optimistic about the individual mandate, my sense is that nothing will change until we start paying attention to the root causes of our collective illness.

If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. 


Thank you for your support!

Tuesday, March 27, 2012

Coronary CT angiography: More and less

I am still scratching my head over this study that was just published in the NEJM on coronary CT angiography (CCTA) among patients presenting to the ED with a suspected acute coronary syndrome. There are many potentially confusing points about it and how it was reported. The three things that I found most confusing were:
1. The formulation of the null hypothesis
2. The definition of the outcomes
3. The flow of patients through the diagnostic algorithm.
Let's see if we can clear some of that confusion.

The intent of the study was to see how adding the CCTA to the usual diagnostic testing in the ED impacted cardiac mortality or a myocardial infarction within 30 days. The null hypothesis was posed in an interesting way:
The study was powered to test the null hypothesis that the rate of major cardiac events among patients who did not have clinically significant coronary artery disease as assessed by CCTA would exceed 1%. 
This is confusing, and I had to read and reread it several times to understand what it meant. I finally realized that what they were saying was that, in order to reject the null hypothesis (and thereby support the alternative hypothesis that CCTA is a useful rule-out test), they would have to show that the rate of the primary outcome did not exceed 1%. Make sense?
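
For the quantitatively inclined, here is a minimal sketch of that logic in Python. The 640 CCTA-negative patients and zero observed events are the figures discussed below; the exact binomial test is my illustration of the arithmetic, not necessarily the authors' method:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Null hypothesis: the true event rate among CCTA-negative patients is >= 1%.
# With n CCTA-negative patients and k observed events, we reject the null if
# seeing k or fewer events would be very unlikely under p0 = 0.01.
n, k, p0 = 640, 0, 0.01    # 640 CCTA-negative patients, 0 events (see below)
p_value = binom_cdf(k, n, p0)
print(f"P(X <= {k} | n={n}, p={p0}) = {p_value:.4f}")   # ~0.0016 -> reject the null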

The next issue I had trouble with, as I always do in cardiology studies, is their choice of endpoints. The primary endpoint was 30-day cardiac death OR MI. This means that anyone who either died of a cardiac cause or had a heart attack within this time period was counted as an event, which would argue against CCTA's usefulness if these events reached the critical mass of over 1%. Mind you, this was limited to those patients whose CCTA did not reveal significant disease. There were some secondary outcomes examined as well, all at the 30-day time point: death, MI, revascularization procedure, and resource utilization. Note that these outcomes were not combined, but were examined singly, and the pool of patients for these was all those randomized.

As an aside, cardiology studies frequently use combined outcomes, such as death and MI due to sample size considerations. That is both cardiac death and MI should be rare events in the group examined. When these events are rare, in order to get at their statistical significance exceedingly large sample sizes are needed. For this reason, these trials frequently combine several events, so as to enrich the frequency and commensurately drop the needed sample size.
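
To see why enrichment matters, here is a rough power calculation using the standard two-proportion approximation. The baseline rates and the 25% relative reduction are hypothetical numbers, chosen only to illustrate the trend:

```python
# Approximate sample size per arm (two-sided alpha = 0.05, power = 80%) to
# detect a 25% relative reduction, as a function of the baseline event rate.
Z_ALPHA, Z_BETA = 1.96, 0.84

def n_per_arm(p1, rrr=0.25):
    p2 = p1 * (1 - rrr)
    return (Z_ALPHA + Z_BETA) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

for p1 in (0.01, 0.03, 0.10):   # combining endpoints pushes the event rate up
    print(f"baseline event rate {p1:.0%}: ~{n_per_arm(p1):,.0f} patients per arm")
```

Note how a ten-fold enrichment of the event rate (1% to 10%) drops the required sample size roughly ten-fold, from about 22,000 per arm to about 2,000.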

But here is where it gets a little confusing. Looking at the "Safety" section of the Results, the authors state that no one who received a CCTA had a cardiac death or an MI within the 30-day period. However, 1% of the patients randomized to the CCTA arm did have an MI within 30 days. How can this be? Well, we need to keep in mind that not all those randomized to CCTA (n=908) actually got a CCTA (n=767), and that the majority of those who did have one did NOT have significant coronary disease (n=640). Thus, both the numerator and the denominator for the primary and the secondary outcomes are different.

Then there is this quick sentence in the "Efficiency and Use of Resources" section:
Coronary disease was more likely to be diagnosed in patients in the CCTA group than in patients in the traditional-care group (9.0% vs. 3.5%; difference, 5.6 percentage points; 95% CI, 0 to 11.2).
It is almost an afterthought, but it is important. It is puzzling that the outcomes in the two groups are completely identical, and this is not limited to those in the primary endpoint pool, namely people without significant disease. The question arises of what this means. Since there is no difference in cardiac death or MI, this increase in the diagnostic rate may imply overdiagnosis in the group receiving CCTA. There is a slight increase in the revascularization rate in the CCTA group (3%) over the standard care group (1%). This endpoint is much more subjective than most people realize, as a lot of judgment by the cardiologist and the surgeon goes into the decision. So this endpoint does not rule out overdiagnosis as a possibility. On the other hand, the study was not powered to detect a difference in the secondary outcomes, so it may be that the diagnoses are valid, but we do not have enough events to judge.

One final point of frustration with the study reporting was hitting dead ends in the flow of patients through the respective algorithms. I was, of course, looking for the rates of false positive CCTA tests and their outcomes. Unfortunately, I kept getting stuck at the catheterization and stress test steps, not knowing who exactly went on to have this testing. Since, of the 767 CCTA tests, 47 were indeterminate and 80 were indicative of moderate-to-severe coronary disease, the 37 catheterizations performed were likely among these patients, though we are not told for sure. Of these catheterizations, 9 were negative for a significant stenosis, but again we are not told how this reflects back on the CCTA results. In fairness, it is worth noting that fewer people in the CCTA arm (18%) underwent a follow-up test, compared to the standard care arm (62%). But again, this is just one test replacing another, or even being added on top of others. In addition, more of the former group were discharged from the ED (50%) than of the latter (23%) without being admitted.

So what does it all say to me? Well, CCTA seems to aid in the diagnosis of coronary disease among certain people presenting to the ED with chest pain. Given such a low pre-test probability of significant coronary disease (17%, derived by dividing the negative CCTAs [640] by the total CCTA tests [767] and subtracting that from 1), I have to wonder about the performance of a test that is quite sensitive but not that specific (high risk of a false positive in a low-risk population). And even though fewer patients needed to be admitted from the ED in the CCTA group, I wonder if this does not have more to do with the differential availability of the testing rather than with its superiority.
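
To make this concrete, here is a quick Bayes calculation. The 17% pre-test probability is the one derived above; the sensitivity and specificity figures are my assumptions for illustration, not numbers from the study:

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value via Bayes' rule."""
    true_pos  = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

pre_test = 1 - 640 / 767          # ~17%, as derived above
# Hypothetical CCTA characteristics: very sensitive, less specific.
for sens, spec in [(0.95, 0.80), (0.95, 0.90)]:
    print(f"sens {sens:.0%}, spec {spec:.0%} -> PPV {ppv(pre_test, sens, spec):.0%}")
```

Under these assumed characteristics, roughly one third to one half of positive CCTAs in this population would be false positives.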

Given the context that I laid out above for the results presented, I would not rush to adopt CCTA for all patients who present to the ED with a certain type of chest pain. And after all, though they are "black swan" events, catastrophes do occur during follow-up catheterization even when the patient is disease-free. This is a good reason to be careful and circumspect in our quest to honor primum non nocere.

If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have non-profit status. 
Thank you for your support!

Friday, March 23, 2012

How our healthcare spending is like that drunk joke

You know that joke about the drunk crawling around under a street light? A cop comes up to him and asks what he is doing. The drunk explains that he is looking for his wallet. The cop, getting ready to help the man, asks where exactly he dropped it. The drunk points to a distant corner of the dark side of the street. The cop, baffled, inquires why the man is looking here. With inimitable logic the drunk responds, "This is where the light is."

What does this story have to do with anything? Well, I went to a great HIT tweet-up in Cambridge yesterday, organized by Scratch Marketing and led by Janice McCallum. No, they did not at all remind me of the drunk in the joke. But the lively discussion about data by about two dozen attendees inspired by Janice's thoughtful presentation certainly made me realize that our healthcare policy is like that drunk. Here is what I mean.

Our healthcare expenditures are completely devoid of any attempt at probabilistic thinking. I thought about the old Rand Corporation data, which I have presented here before. Juxtaposing them with the data on our National Healthcare Expenditures really drives home my message that we need to get a whole lot better at applying probabilities to our decisions. And this specifically applies to policy.

Just look at the glaring imbalance: while fully 60% of all premature deaths are due to behavioral, social and environmental factors which reside in the realm of public health, 97% of all NHE is spent on the medical side. If I add the 2% of the total NHE spent on research into the public health piece of the pie (this is exceedingly generous, as public health research gets a bafflingly tiny portion of the total US research budget), we still have 95% spent on personal health and its administration and only 5% on public health. 

So, if the probability of premature death due to a public health-related condition is 60%, why are we only spending 3% of all the healthcare dollars on fixing it? Another way of posing this question is, if the probability of premature death from issues related to access to adequate medical care is 10%, why are we spending 97% of all the NHE on that piece of the pie?

If this isn't just like that joke, I don't know what is. Only in this case it is much less funny than in the case of the drunk.



If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. 


Thank you for your support!

Thursday, March 22, 2012

More on "mammography saves lives" story


A story in HealthDay with the title of "Two Studies Find Routine Mammography Saves Lives" talks about studies presented at the European Breast Cancer Conference (EBCC). The studies, both from the Netherlands, allegedly showed that mammography does indeed save lives. If true, these data would contrast with the preponderance of evidence that has stirred up such a ruckus recently about the utility of mammography, complete with references to rationing and death panels. But let's look at what's reported in today's story more closely.

Cutting through all the definitive bravado, here is a little piece of science that was reported:
"Compared with the pre-screening period 1986 to 1988, deaths from breast cancer among women aged 55-79 fell by 31 percent in 2009," Jacques Fracheboud, a senior researcher at the Erasmus University Medical Center in Rotterdam, said in a meeting news release. "We found there was a significant change in the annual increase in breast cancer deaths: before the screening program began, deaths were increasing by 0.3 percent a year, but afterwards there was an annual decrease of 1.7 percent," he added. "This change also coincided with a significant decrease in the rates of breast cancers that were at an advanced stage when first detected."
Note, the reference is to deaths from breast cancer without any mention of all-cause mortality. (You can read about why the latter is important here.)

The report next states that over the first 20 years of the screening program
... 13.2 million breast cancer screening examinations were performed among 2.9 million women (an average of 4.6 examinations per woman), resulting in nearly 180,000 referral recommendations, nearly 96,000 biopsies and more than 66,000 breast cancer diagnoses.
So, doing the math, I come up with about a 31% false positive rate at the biopsy stage (that's 96,000 biopsies minus 66,000 positives for cancer, all divided by the 96,000 total biopsies). If we use the 180,000 "referral recommendations" as our denominator of all positive tests, and stick with the 66,000 true positives, then the false positives grow to (180,000-66,000)/180,000 = 0.63, or 63%. If we spread the 30,000 biopsy-stage false positives over the 13.2 million examinations, that equates to roughly a 0.23% chance of a false positive per examination. Yet the report goes on to say that (emphasis mine):
For a woman who was 50 in 1990 and had 10 screenings over 20 years, the cumulative risk of a false-positive result (something being detected that turned out not to be breast cancer) was 6 percent.
Six percent? This is clearly a place where my high school math teacher's mantra of "show your work" is applicable.
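
Since the report does not show its work, here is mine. Compounding a per-examination false-positive risk over repeated screens is just 1 - (1 - p)^n:

```python
def cumulative_fp(p_per_screen, n_screens):
    """Chance of at least one false positive over n independent screens."""
    return 1 - (1 - p_per_screen) ** n_screens

# My per-examination estimate from above (~0.23%), compounded over 10 screens:
print(f"cumulative risk over 10 screens: {cumulative_fp(0.0023, 10):.1%}")  # ~2.3%

# Conversely, the per-screen rate that a 6% cumulative risk implies:
p_implied = 1 - (1 - 0.06) ** (1 / 10)
print(f"implied per-screen rate: {p_implied:.2%}")                          # ~0.62%
```

So my per-examination estimate compounds to about 2.3% over 10 screens, while the reported 6% cumulative risk implies a per-screen false-positive rate of roughly 0.6%. One of these sets of numbers needs explaining.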

The next piece of information that I would like to understand better is this:
Over-diagnosis (detection of breast tumors that would never have progressed to be a problem) occurred in 2.8 percent of all breast cancers diagnosed in the total female population and 8.9 percent of screening-detected breast cancers.
How exactly was this computed? Again a case for "show-your-work."

And then there is this (emphasis mine):
Regular screening "decreases deaths by over 30 percent, [with] limited harm and reasonable costs. Additionally, cancers are detected at an earlier stage, which means not only decreased mortality but also morbidity; the patient may not have to have chemotherapy or a mastectomy," she noted.
OK, so, if I got it right, it is breast cancer mortality that is decreased by 31%, not all-cause mortality. This really should have been spelled out more clearly, not to mention that the actual, or absolute, reduction likely pales in comparison to this relative drop. And what about diagnosing earlier stage disease? Lead time bias, anyone?

The second study was a computer model, and I will not go through it at this time as I need to move on to other work. But you get the picture: the numbers given in the report are limited and at times they don't add up. Mixing up cancer mortality with all-cause mortality leads to erroneous conclusions. And finally, forgoing reporting on the absolute risk reduction in favor of the inflated relative reduction is not helpful for understanding the true risks involved.

One final thought: Yes, I do have cognitive biases, and it is difficult for me to avoid them. I happen to fall into the camp that thinks screening for subclinical diseases, at least in our current technological setting, is disease mongering. At the same time, I would like to think that if the data really showed a significant benefit without great risks, I would give them a second look.

The bottom line is this: at least for me, the report confused the issue more than it clarified it. Perhaps the study, once published, will adequately answer all of the questions that I have posed. But at this stage, it is a shame that such strong statements as...
"These results show why mammography is such an effective screening tool," said one U.S. expert, Dr. Kristin Byrne, chief of breast imaging at Lenox Hill Hospital in New York City. She was not involved in the new research.
 and this...
"We are convinced that the benefits of the screening program outweigh all the negative effects," Fracheboud said.
 ... are not backed up by appropriate evidence.

h/t to @ElaineSchattner for the story


If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. 

Thank you for your support!

Wednesday, March 21, 2012

Aspirin or bias?

Update 4PM Eastern, 3/21/12

Wanted to append a couple of thoughts I tweeted earlier about these studies, in case you don't follow me on Twitter or just missed them:

Around 2:30 PM Eastern:

[Embedded tweet]

And then closer to 3:00 PM Eastern:

[Embedded tweet]

Thoughts?


There is a fascinating review by Cancer Research UK of the new and old aspirin data with respect to its effects on cancer and cardiovascular complications in the context of a heightened risk for bleeding. The review is full of fabulous information about what we know and the uncertainties that remain, all with practical suggestions at the end, so go there and read it.

But here is what I wanted to highlight in the graph that I am reproducing from a screen shot:

[Graph: risks of bleeding, incident cancer, and cardiac events over time on aspirin]
There is something very interesting going on here. Just as the risk of bleeding begins to drop, so does the risk of developing cancer. This could be a complete coincidence, but perhaps not. An alternative explanation is that people who already have cancer, though it may not yet be diagnosed, may be at a higher risk for a bleeding complication. Those who develop a bleeding complication presumably are taken off aspirin. But remember, they may already be harboring a cancer that will rear its head in the near future. But what about those who do not bleed and therefore are able to tolerate aspirin for a longer time? They also seem to have a drop in their risk of incident cancer. But of course this may have nothing to do with aspirin preventing cancer, so much as with its ability to unmask a cancer that is already present, essentially weeding those patients out of the future risk pool for cancer development. And when you weed out those at a higher risk for clinical cancer, by definition you have a group with a lower than standard risk, creating the potential for a selection bias. Make sense?
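
For those who like to see the mechanism, here is a toy Monte Carlo version of this selection story. Every number is invented purely for illustration, and, crucially, aspirin prevents nothing in this model:

```python
import random
random.seed(42)

N = 100_000
P_LATENT        = 0.03   # assumed: 3% harbor an undiagnosed cancer at baseline
P_BLEED_LATENT  = 0.30   # assumed: latent cancer raises bleeding risk on aspirin
P_BLEED_HEALTHY = 0.05   # assumed: background bleeding risk on aspirin

stayers, stayer_cancers = 0, 0
for _ in range(N):
    latent = random.random() < P_LATENT
    bleeds = random.random() < (P_BLEED_LATENT if latent else P_BLEED_HEALTHY)
    if not bleeds:                  # only non-bleeders remain long-term aspirin users
        stayers += 1
        stayer_cancers += latent    # latent cancers eventually become incident

print(f"cancer risk in the whole cohort:   {P_LATENT:.2%}")
print(f"cancer risk among long-term users: {stayer_cancers / stayers:.2%}")
```

Even though aspirin does nothing to cancer in this simulation, the long-term users (the non-bleeders) show a visibly lower cancer risk than the cohort as a whole: selection bias in action.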

Conversely, the risk of a cardiac event starts to increase roughly at the same time as the risks for cancer and bleeding begin to drop. This, to me, is consistent with aspirin preventing cardiovascular events early in the course of taking it. Furthermore, given my hypothesis above about aspirin weeding out those with an early cancer, perhaps its cardiovascular impact is for some reason limited to those with an early cancer or with another reason for aspirin-induced bleeding.

All in all, the data do not convince me to start taking aspirin -- I am still at odds with Dr. Agus on that. The selection bias that I described above may very well mean that aspirin's role is not as cancer prevention, but more likely as a sort of a stress test for those with a subclinical cancer. So we are left again with the chicken-and-egg question. But isn't that, after all, what makes science exciting?

Would love to know what others think -- does this make sense? Are there other possible explanations?      



If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. 


 Thank you for your support!

Tuesday, March 20, 2012

The probability dozen of participatory medicine

Yesterday my rant about uncertainty and probability got quite a bit of play in cyberspace, and I am glad.

Uncertainty is ubiquitous. We consider the odds of rain when choosing what to wear. We do (or at least we should do) a quick mental risk-benefit analysis before buying a burger at Quickie-Mart. We choose our driving routes to work based on the probability of encountering heavy traffic. We do this mental calculus subconsciously but reliably, mostly getting it right. What is odd, though, is that there are certain parts of our lives where we expect complete and utter certainty. I will not get into the political aspects of this fallacy, but I do want to continue down this line of reasoning about healthcare.

As I said yesterday, and many many times in the past, the only certain thing about medicine is uncertainty. And here is what I want you to understand deeply: the amount of uncertainty is much greater than you think. So, every time you say to yourself "I think there is a lot of uncertainty in this information," multiply it by 100, and then you may get close to just how uncertain most information is.

And again, I want to emphasize that this uncertainty gets magnified in the office encounter. So, what is the solution, short of having everyone understand the totality of evidence? Yesterday I said that the solution is to teach probability early and often, and this is indeed the best long-range answer. But is there anything we can do in the short-term? The answer, of course, is yes. And here is what it is.

Everyone needs to learn what questions to ask. Instead of nodding your head vigorously to everything your doctor says, put up your hand and ask how certain s/he is that s/he is on the right track. Here are a dozen questions to help you have this conversation:

1. What are the odds that we have the diagnosis wrong?
2. What are the odds that the test you are ordering will give us the right answer, given the odds of my having the condition that you are testing me for?
3. How are we going to interpret results that are equivocal?
4. What follow-up testing will need to happen if the results are equivocal?
5. What are the implications of further testing in terms of diagnostic certainty and invasiveness of follow-up testing?
6. If I need an invasive test, what are the odds that it will yield a useful diagnosis that will alter my care?
7. If I need an invasive test, what are the odds of an adverse event, such as infection, or even death?
8. What are the odds of missing something deadly if we forgo this diagnostic testing?
9. What are the odds that the treatment you are prescribing for this condition will improve the condition?
10. How much improvement can I expect with this treatment if there is to be improvement?
11. What are the odds that I will have an adverse event related to this treatment? What are the odds of a serious adverse event, such as death?
12. How much will all of this cost in the context of the benefit I am likely to derive from it?

And in the end, you need to understand where these odds are coming from -- the clinician's gut, the evidence, or both? I prefer it when they integrate both, which, I believe, was the original intent of evidence-based medicine.

Perhaps for some of us this is a stretch: we don't like numbers, we are intimidated by the setting, the doc may be unhappy with the interrogation. But it is truly incumbent on all of us to accept the responsibility for sharing in these clinical decisions. I believe that the docs of today are much more in tune with shared decision-making, and understand the value of participatory medicine. And if they are not, educate them. Ultimately, it is your own attitude to risk, and not just the naked data and the clinician's perception of your attitude, that should drive all of these decisions.

Knowledge is empowering, and empowerment is good for everyone, patient and clinician alike. As patients, taking control of what happens to us in a medical encounter can only bring higher odds of a desirable outcome. For physicians, a cogent conversation about their recommendations may help safeguard against future litigation, not to mention augment the satisfaction in the relationship.

And thus starting to discuss probabilities explicitly is very likely to get us to a better place in terms of both quality and costs of medical care. And in the process it may very well train us how to make better decisions in the rest of our lives.

I would love to hear about your experiences discussing probability, be it in a medical or non-medical setting. And as always, thanks for reading.


If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. 

Thank you for your support!

Monday, March 19, 2012

How medicine is like quantum physics

When a patient goes to his doctor to get fixed, a pivotal triad of presentation-diagnosis-treatment ensues. The three steps are as follows:
1. History, physical examination and a differential diagnosis
When the patient shows up with a complaint, a constellation of symptoms and signs, a good clinician collects this information and funnels it through a mesh of possibilities, ruling certain conditions in and others out to derive the initial differential diagnosis.
2. Diagnostic testing
Having gone through the exercise in step 1, the practitioner then decides on appropriate diagnostic testing in order to narrow down further the possible reasons for the person's state.
3. Treatment
Finally, having reviewed all the data, the clinician makes the therapeutic choice.

These three steps seem dead simple, and we have all experienced them, either as patients or clinicians or both. Yet the cause for the current catastrophic state of our healthcare system lies within the brackets of each of these three little domains.

The cause is our failure to acknowledge the vast universe of uncertainty dotted sparsely with the galaxies of definiteness, all shrouded in false confidence. And while the cause and the way to address it are conceptually simple, the remedy is not easy to implement. But I am jumping ahead; first I have to convince you that I have indeed discovered the cause of this ruin.

Let's examine what goes on in step 1, the compilation of history and physical to generate a differential diagnosis. This is usually an implicit process that takes place mostly at a subconscious level, where the mind makes connections between the current patient and what the clinician has learned and experienced. What does that mean? It means that the clinician, within the constraints of time and the incredible shrinking appointment, has to listen, examine, elicit and put together all of the data in such a way as to cram them into a few little diagnostic boxes, many of which contain much more material than a human brain can hold all at once, even if that brain is at the right tail of human cognition (or not). What takes over at this step is a bunch of heuristics and biases. Have we talked about those enough here? Just to review, heuristics are mental shortcuts that can serve us well, but can also lead us astray, particularly under conditions of extreme uncertainty, as in a healthcare encounter. If you want to learn more about this, read Kahneman, Slovic and Tversky's opus "Judgment under Uncertainty: Heuristics and Biases." As for cognitive biases, I will not belabor them, as there is enough material about them on this web site and elsewhere to overload a spaceship.

The picture that emerges at this step is one of fragments of information gathered being fit into fragments of studies and experience, stirred with mental shortcuts and poured into a bunch of baking tins shaped like specific diagnoses. Is there any room in this process for assigning objective probabilities to any of these events? Well, there is an illusion of doing so, but even this step is done by feel, rather than by computation. So while there is some awareness of a probabilistic hierarchy, it is more chaos than science. Given this picture, it's a wonder it actually works as well as it does, don't you think?

The next step in this recipe is the diagnostic workup. What ensues here is the utter Wild West, particularly as new technologies are adopted at breakneck speed without any thought to the interpretation of the data that they are capable of spitting out. Here the confusion of the first step gets magnified exponentially, just as it seduces us into a further illusion of certainty. The uncertainties in arriving at the differential get multiplied by the imperfections of diagnostic tests to give the encounter truly quantum properties: you may know the results or you may know the patient, but you may not know both at the same time. What I mean is what I have always said on this blog: no test is perfect, and because of this simple truth, unless we know the pre-test probability of the disease in a particular patient, as well as the characteristics of the test, we have no idea about the context of these results. Taking them at face value, as we know, is a grave error.

What follows these results is frequently more diagnostic hit-or-misses, as the likelihood of harm and escalating expenditures without any added value rises. Then comes the treatment, with its many uncertainties and the potential for adverse events, and what are we left with? A pile of costly and deadly steaming manure. So, what's a doc to do?

I think that there is a very simple solution to this, and in its simplicity it will be incredibly hard to implement: education. And I don't just mean medical education. Everything that I have talked about in this post echoes back to the concept of probability. In secondary education, at least as I remember it, probability is left to Advanced Math. By the time a student becomes eligible to take this course, she has been made to feel that she does not have the facility for math, and that, furthermore, math is boring and useless. So, while my friends in education may have a much better idea of what percentage of kids leave high school having been exposed to some probability, my guess is that it is woefully small. And those that do get exposure to it walk out of class perfectly able to bet on a game of craps or a horse race, but with no clue how to apply these ideas to the world they live in.

And so those who progress into healthcare and those who don't have heard the word "probability," but cannot quite understand how it impacts them beyond their chance of winning the lottery. And unfortunately, I have to tell you that, if I relied on what I learned in medical school about probability, well, let's just say it is highly improbable that we would be having this discussion right now. This is why I do now and will for the foreseeable future harp on all of these probabilities, so that when you are faced with your own medical decisions, you will at least know the right questions to ask.

I know I need to wrap this up -- I saw that yawn! Here is the bottom line. First, we need to acknowledge the colossal uncertainties in medicine. Once we have done so, we need to understand that such uncertainties require a probabilistic approach in order to optimize care. Finally, such a probabilistic approach has to be taught early and often. All of us, clinicians and patients alike, are responsible for creating this monster that we call healthcare in the 21st century. We will not train it to behave by adding more parts. The only way to train it is to train our brains to be much more critical and to engage in a conversation about probabilities. Without this shift a constructive change in how medicine is done in this country is, well, improbable.

If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. 

Thank you for your support!

Sunday, March 18, 2012

A good week

So, it has been a great week here at Healthcare, etc., and I am grateful. Here are the highlights:

1. I did a guest post on plagiarism at Retraction Watch, and not only did it generate a fantastic conversation there, but it also brought other stories of plagiarism to us right here on this site. The story is not finished, and I will update when I can.

2. Gary Schwitzer of Health News Review cited 2 posts from Healthcare, etc. on his extremely popular and well-respected site here and here. The corresponding stories that he referenced were "Unpacking the meat data" (which incidentally is the most popular post of all time on this blog) and "PSA screening: Does it or doesn't it" (a niche post with a lively discussion in the comments).

3. The crowning glory of the week was being cited by Mark Bittman in his New York Times column on March 16th, which drove hordes of readers to Healthcare, etc.

4. Finally, my book is progressing through editing and design, and it is all beginning to seem more real. It feels like I am realizing my dream of bringing a critical approach to evaluating medical evidence to all who are interested. If you want to get news about it in your inbox, please sign up here.

In the next few weeks you can expect to hear about my experience attending and presenting at the Ignite Boston conference at MIT on March 29, as well as coverage from TEDMED 2012 in DC, which I am thankful to be attending as a Frontline scholar.

Thanks again to everyone for visiting, tweeting, linking and sharing in all other ways. Looking forward to seeing you here again.

Friday, March 16, 2012

Time to get the goblins out of our healthcare system

Just when you thought you could predict my blog topics, I am going to foil you and not do a post on the rice study in BMJ. I feel like after the last two days with the "meat kills" and "PSA screening saves lives but does not reduce mortality" posts you are not only fed up, but also competent enough to do your own debunking. Let me know if you need any help with that, by the way.

No, in honor of Friday, I will do a fluff post about a show that my family and I are hooked on. Yes, we are addicted to the BBC series "Merlin." Guess what it's about. The new twist is that it is set when both Merlin and Arthur are young men, and it is Merlin's job to keep Prince Arthur from succumbing to his enemies or to his own folly, so that he can lead Camelot forever and ever amen. It has everything you would expect: castles, bloody sword-fight scenes, dragons. Its heroes and villains are an epidemiologist's dream, since they are all evil or all good with nothing in between. It has magic, though that mostly stays in the closet, as Arthur's misguided father King Uther is quite magic-phobic.

Well, last night we witnessed a first in Episode 3 of Season 3 -- a mischievous goblin got inadvertently released from his lead-lined wooden box and invaded the body of the court physician and Merlin's mentor and friend, Gaius. As we all know, of course, goblins are hungry for gold (almost literally -- you'll have to watch the episode). Normally a caring and reasonable man, the pillar of his community, the elderly physician became a raucous and self-centered party animal in search of gold coins at all costs, so to speak. In one scene, while making a house call to a sick peasant, he indicated that the man would die without the potion that Gaius was holding in his hands. Yet he would not part with the potion unless the sick man and his wife produced a gold coin as payment. "But we are so poor," the family explained, "and you have never charged us before." Well, the new and improved market-driven Gaius could not be swayed, and the family ended up giving up their meager wealth to "cure" the husband, who incidentally was merely suffering from a broken rib.

In the next scene, in his continued search for gold coins, Gaius-come-goblin is accosted by Guinevere (in this version she is merely a servant to Lady Morgana -- long story), who is suffering from some ill-defined but clearly innocuous symptom. He takes this opportunity to sell her a potion without which he claims she will succumb to the deadly infection that is gripping Camelot. And despite her obvious skepticism, she is so driven by the fear of the "what if," that she hands over her coin for the tonic and peace of mind.

If all of this seems familiar, but you just can't put your finger on how, I'll give you a hint: our US healthcare "system." It's like there is a goblin in it, who in his relentless pursuit of gold is bankrupting the nation and disease-mongering to increase profit. Of course this allegory has no nuance, and why should it: this is after all a TV series about a magical medieval kingdom whose name itself has come to mean "utopia." Yet it is jarring to realize just how close to our reality this story is.

I am sure you are all anxious to find out how the episode ended. Did the peasant survive? Was Gaius able to regain his body? Did the goblin get back into his box? For answers to all these questions you can go to Netflix. As for our healthcare system, isn't it time we chased the goblins out?


Thursday, March 15, 2012

PSA screening: Does it or doesn't it?

A study in the NEJM reports that after 11 years of follow up in a very large cohort of men randomized either to PSA screening every 4 years (~73,000 subjects) or to no screening (~89,000 subjects) there was both a reduction in death and no mortality advantage. How confusing can things get? Here is a screenshot of today's headlines about it from Google News:

[Screenshot: Google News headlines about the PSA study]

How can the same test cut prostate cancer deaths and at the same time not save lives? This is counter-intuitive. Yet I hope that a regular reader of this blog is not surprised at all.  For the rest of you, here is a clue to the answer: competing risks.

What's competing risks? It is a mental model of life and death that states that there are multiple causes competing to claim your life. If you are an obese smoker, you may die of a heart attack or diabetes complications or a cancer, or something altogether different. So, if I put you on a statin and get you to lose weight, but you continue to smoke, I may save you from dying from a heart attack, but not from cancer. One major feature of the competing risks model that confounds the public and students of epidemiology alike is that these risks can actually add up to over 100% for an individual. How is this possible? Well, the person I describe may have (and I am pulling these numbers out of thin air) a 50% risk of dying from a heart attack, 30% from lung cancer, 20% from head and neck cancer, and 30% from complications of diabetes. This adds up to 130%; how can this be? In an imaginary world of risk prediction anything is possible. The point is that he will likely die of one thing, and that is his 100% cause of death.
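
Here is a small simulation of that mental model. The standalone risks are the made-up numbers from the paragraph above; turning each into a cumulative hazard and letting the causes race via exponential event times is one simple (assumed) way to model the competition:

```python
import math, random
random.seed(1)

# Standalone lifetime risks from the example above (they sum to 130%); each is
# the chance that cause would claim the person if no other cause existed.
risks = {"heart attack": 0.50, "lung cancer": 0.30,
         "head & neck cancer": 0.20, "diabetes": 0.30}

# Turn each standalone risk into a cumulative hazard; draw exponential times
# to event for every cause; the earliest cause claims the person.
hazards = {c: -math.log(1 - r) for c, r in risks.items()}

N = 100_000
deaths = {c: 0 for c in risks}
for _ in range(N):
    times = {c: random.expovariate(h) for c, h in hazards.items()}
    deaths[min(times, key=times.get)] += 1

for cause, n in deaths.items():
    print(f"{cause}: {n / N:.1%} of actual deaths")
print("total:", f"{sum(deaths.values()) / N:.0%}")   # always 100% -- one death each
```

The standalone risks sum to 130%, but the actual causes of death always sum to exactly 100%: each simulated person dies of exactly one thing.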

Before I get to translating this to the PSA data, I want to say that I find the second paragraph in the Results section quite problematic. It tells me how many of the PSA tests were positive, how many screenings on average each man underwent, what percentage of those with a positive test underwent a biopsy, and how many of those biopsies turned up cancer. What I cannot tell from this is precisely how many of the men had a false positive test and still had to undergo a biopsy -- the denominators in this paragraph shape-shift from tests to men. The best I can do is estimate: 136,689 screening tests, of which 16.6% (15,856) were positive. Dividing this by 2.27 average tests per subject yields 6,985 men with a positive PSA screen, of whom 6,963 had a biopsy-proven prostate cancer. And here is what's most unsettling: at the cut-off for PSA level of 4.0 or higher, the specificity of this test for cancer is only 60-70%. What this means is that at this cut-off value, a positive PSA would be a false positive (positive test in the absence of disease) 30-40% of the time. But if my calculations are anywhere in the ballpark of correct, the false positive rate in this trial was only 0.3%. This makes me think that either I am reading this paragraph incorrectly, or there is some mistake. I am especially concerned since the PSA cut-off used in the current study was 3.0, which would result in a rise in the sensitivity with a concurrent decrease in specificity and therefore even more false positives. So this is indeed bothersome, but I am willing to write it off to poor reporting of the data.
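
For transparency, here is my back-of-envelope estimate spelled out, using the figures as I read them from the paper:

```python
tests         = 136_689        # screening tests in the screening arm (as reported)
positives     = 15_856         # positive tests
tests_per_man = 2.27           # average screens per subject
cancers       = 6_963          # biopsy-proven prostate cancers

men_positive   = positives / tests_per_man                   # ~6,985 men
false_pos_rate = (men_positive - cancers) / men_positive
print(f"men with a positive screen:  {men_positive:,.0f}")
print(f"implied false positive rate: {false_pos_rate:.1%}")  # ~0.3%, implausibly low
```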

Let's get to mortality. The authors state that the death rates from prostate cancer were 0.39 in the screening group and 0.50 in the control group per 1,000 patient-years. Recall from the meat post that patient-years are roughly the product of the number of subjects observed and the number of years of observation. So, again, to put the numbers in perspective, the absolute risk reduction here for an individual over 10 years is from 0.5% to 0.39%, again microscopic. Nevertheless, the relative risk reduction was a significant 21%. But of course we are only talking about deaths from prostate cancer, not from all the other competitors. And this is the crux of the matter: a man in the screening group was just as likely to die as a similar man in the non-screening group, only causes other than prostate cancer were more likely to claim his life.
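
The arithmetic, spelled out (the rates are as quoted from the trial; the 10-year translation and the NNT are my approximations):

```python
rate_screen, rate_control = 0.39, 0.50   # deaths per 1,000 patient-years, as quoted

rrr = (rate_control - rate_screen) / rate_control
print(f"relative risk reduction: {rrr:.0%}")                     # ~22% (paper: 21%)

arr_10y = (rate_control - rate_screen) / 1000 * 10               # over ~10 years
print(f"absolute risk reduction over 10 years: {arr_10y:.2%}")   # ~0.11%
print(f"implied NNT over 10 years: {1 / arr_10y:,.0f}")  # ~909; cf. the NNI below
```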

The authors go through the motions of calculating the number needed to invite for screening (NNI) in order to avoid a single prostate cancer death, and it turns out to be 1,055. But really this number is only meaningful if we decide to get into death design, in something like an "I don't want to die of this, but that other cause is OK" kind of choice. And although I don't doubt that there may be takers for such a plan, I am pretty sure that my tax dollars should not pay for it. And thus I cast my vote for "doesn't."

Tuesday, March 13, 2012

Unpacking the meat data

So, this story from Harvard on how red meat is bad for you deserves some unpacking. First, allow me to say that all meat is not created equal: the cows that graze on the farm around the corner from where I live make meat that is quite different from that reconstituted slime used by fast-"food" restaurants. Cows that are raised on CAFOs and fed corn-based diets are practically different species from those guernseys down the street.

But putting that aside, let's just look at what the paper reports and what the numbers add up to. The investigators examined two large observational cohort studies totaling over 100,000 subjects and tried to estimate the risk of death associated with red meat consumption. Now, first, it has been widely acknowledged that dietary habit surveys are a difficult beast, and that is how these two studies got at the food history. Next, let us look at some of the numerators and denominators. The paper reports 23,926 deaths among these >100,000 subjects over 22 to 28 years of observation. The denominator for this type of a study is person-years, where you simply multiply the number of persons observed by the corresponding number of years of observation. In this instance, this value is 2,960,000 person-years. So, the roughly 24,000 deaths occurred over 2.96 million person-years of observation, simplifying to 24,000/2,960,000 = 8 deaths per 1,000 person-years overall. If we were to translate this to an individual's risk for death over 1 year, it would be 0.008, or under 1%.

The study further reports that at its worst, meat increases this risk by 20% (95% confidence interval 15-24%, for processed meat). If we use this 0.8% risk per year as the baseline, and raise it by 20%, it brings us to 0.96% risk of death per year. Still, below 1%. Need a magnifying glass? Me too. Well, what if it's closer to the upper limit of the 95% confidence interval, or 24%? The risk still does not quite get up to 1%, but almost. And what if it is closer to the lower limit, 15%? Then we go from 0.8% to 0.92%.
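
Here is the whole calculation in one place. The deaths and person-years are from the paper; note that the prose above rounds the baseline to 0.8%:

```python
deaths, person_years = 23_926, 2_960_000     # from the paper, as unpacked above

baseline = deaths / person_years             # ~0.0081 deaths per person-year
print(f"baseline annual risk of death: {baseline:.2%}")

# Apply the reported relative risks for processed meat (lower CI, point, upper CI):
for rr in (1.15, 1.20, 1.24):
    print(f"RR {rr:.2f}: annual risk {baseline * rr:.2%}")
```

Even at the upper confidence limit, the annual risk barely crawls toward 1%.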

Does this effect size matter, even if statistically significant? What if this were a randomized controlled trial for a statin? What would we say to this result? Even if this is a real signal, which is questionable given the observational design (yes, despite holding a special affection for observational studies, I don't think that this cause-effect is completely unconfounded; and this matters greatly in view of this minuscule magnitude), I am far more likely to die next time I get into my car than from eating burgers, even if I do indulge in one a couple of times per week. I am certainly not advocating eating red meat 7 days per week, though this view is driven more by practical concerns for sustainable beef farming than by the data presented in this paper.

There are a few political issues to disentangle. I despise CAFOs and their product; I despise their contempt for the animals and for the environment; and I despise their disregard for human health. I would love to see a study that shows that CAFO-raised beef kills people, as my cognitive biases tell me it must. I would love to see them all shut down, period. And this goes for the meat packing and distributing oligopoly as well. This venom notwithstanding, the current paper gives us no ammunition to this end: it failed to explore this pivotal question. Pity!

Furthermore, a study like this is likely to feed extremist marketing messages to suit someone's agenda, which will likely drag us farther away from the moderation that is conducive to our health. But my local farmers should rest easy knowing that this is not by any means a game changer, that there is nothing in this paper to make us any less enthusiastic about their product, and that our New England pastures will not any time soon be devoid of the beautiful sight of these lovely ruminants. Moderation in everything, including meat consumption, is probably still the best course of action. If we focus our energies on what is genuinely good for our health, we will do right by the environment.

Monday, March 12, 2012

Plagiarism much?

Has everyone read my post on Retraction Watch about how I was plagiarized? There is a great discussion there, and I would love to get your comments.

Thursday, March 8, 2012

Three central questions about medical technologies

Every day my inbox gets filled with announcements for and invitations to attend all kinds of conferences. While many of them are of the traditional medical education sort, more and more I hear about meetings where new technologies and gadgetry in healthcare are the focus. And while the former feature healthcare professionals speaking medicalese from the stage to rapt audiences of other healthcare professionals in dimly lit halls, the latter capture their audiences' imaginations with the promise of the future, in all its glitter and glory. And naturally, the latter are what attract techies and patients alike, steamrolling over our staid medieval medical conventions. I heard that the HIMSS conference in Las Vegas last month attracted 37,000 attendees! And the excitement was palpable even through Twitter feeds. This is clearly the preferred way to effect public engagement with healthcare.

But here is the thing: medical advances happen much more slowly than the speed of technology development. That is why, year after year, we go to our professional society meetings and have deja vus all over again. Year after year we hear the same people present the same studies, sometimes, if we are lucky, with a slightly different twist. The last group of breakthroughs I heard about at one of our critical care meetings was over a decade ago. And we have had to backpedal from that quite a bit with the removal of Xigris from the market, and with the realization that tight glucose control had to be used with extreme caution, so as not to kill more critically ill patients than it was meant to save.

This disconnect between the glacial pace of true progress in the clinical sciences and the lightning speed of technological progress raises some obvious questions about our assumptions. If we are not making dramatic breakthroughs in medicine every day, what is this breakneck pace of technology innovation delivering, save for the glitter and a seductive promise of health and wealth? Is there evidence supporting this promise? You might counter by saying that we have years, maybe even decades, of translational catching up to do, bringing all the advances from the bench to the bedside. I would have to say that the magnitude of such advances, as well as clinicians' resistance to them, may have been overstated. You might also point out that the vast advances in computing capabilities have not penetrated sufficiently into our healthcare system, and there I cannot disagree. But whether bringing these advances into clinic without careful planning will improve our health or our healthcare finances remains in question.

Moreover, I see numerous downsides to rushing ahead without thinking through what we are rushing toward. Ostensibly, the light at the end of this bright technological tunnel is better health. What concerns me is that the journey has become a sort of an end in itself: the sheer beauty of the tunnel has itself become the prize. To regain our compass, we need to ask three tough questions, each of them central to understanding the impending advent of too much out-of-context information:
1. Do we really want to walk around with sensors (pdf document, see page 16)? Personally, I find 24/7 monitoring of our vital functions to be a depressing prospect. Furthermore, I sincerely doubt that this is a better (or more cost-effective) way to achieve health than through focus on public health and socioeconomic equity.
2. What is the use of having your genome in your pocket?
How is that going to help us at the stage where all we can do is identify certain levels of risk (bracketed by broad intervals of uncertainty) in isolation from all the influences that modify that risk?
3. Do we want an epidemic of false positive findings and pseudo-disease?
We have enough trouble interpreting positive findings from medical screening tests. Do we really want the public falling prey to the anxiety and over-testing that false positives bring? As a healthcare system, can we sustain such an avalanche? As clinicians, are we able to manage these screening snafus?
(I have done so many posts on this issue that you can barely navigate this site without stumbling over their debris).

All of the above is not sexy or shiny, and it brings in the ugly four-letter word "risk" to balance the discussion of the holy grail of benefit. So, how do we sell it to people so hypnotized by technology? I am asking this question quite seriously. I, and many of you out there, really would like people to become more cognizant of these nuances, but how do we accomplish this? Do we need to change our byzantine approach to medical meetings to start attracting a broader audience for our messages? If so, let's get started. Perhaps other forums that are already exceedingly successful can teach us how. I commend TEDMED (which I am attending as a Front-Line Scholar) for starting to bring this important viewpoint to their audiences. Now, how about having HIMSS, Health 2.0 and others join to clarify this other edge of the medical sword? Perhaps we can prevent the wild pendulum swings by thinking things through now. And think how much more credibility we all will have if, by thinking and debating now, we minimize the unintended consequences later?                            

Monday, March 5, 2012

Goldfinger's pan-man-scan: Available at a hospital near you

As my regular readers know all too well, I think a lot about our proneness to cognitive biases and how they impact our healthcare choices. I have been reading Sam Harris's book The Moral Landscape (Free Press, 2010), where he tries to lay a scientific foundation for our values and beliefs. It is rather a dense book, full of linguistic convolution and intellectual contortions. And although I do not agree with Harris on some of his points (I think he would say that my disagreement stems more from my preconceived notions and values than from the weakness of his arguments), there are some very enlightening ideas in the book.

On page 123 he discusses bias:
...we know that people often acquire their beliefs about the world for reasons that are more emotional and social than strictly cognitive. Wishful thinking, self-serving bias, in-group loyalties, and frank self-deception can lead to monstrous departures from the norms of rationality. Most beliefs are evaluated against a background of other beliefs and often in the context of an ideology that a person shares with others.
Doesn't this echo my assertion that scientific knowledge acquisition tends to be unidirectional? What occurs to me is that the combination of these emotional and ideological factors along with our cognitive predispositions gangs up to steer us toward doom. Too nihilistic? Well, that's not what I am going for. Bear with me, and I will show you how it is driving us into bankruptcy, particularly where the healthcare system is concerned.

We know a lot about how the human brain works now. Creatures of habit, for the purpose of conserving energy, we develop many mental shortcuts that drive our decisions. Think about them like the impact of talking on your cell phone while driving: most of the time, under usual circumstances, you can pay enough attention to both driving and talking. But on occasion something unexpected comes along, and you find yourself plowed into the backside of an eighteen-wheeler, your face firmly pressed into an airbag, if you are lucky. So it is with these mental shortcuts: most of the time they serve us well, or at least don't get us into trouble, but just when we least expect it, the truth ambushes us and we suffer from an error in this habitual way of deciding.

Let's bring it around to medicine. Back in 1978, the dark ages, an intriguing paper was published by Casscells and colleagues in the New England Journal of Medicine. Here is what they did:
We asked 20 house officers, 20 fourth-year medical students and 20 attending physicians, selected in 67 consecutive hallway encounters at four Harvard Medical School teaching hospitals, the following question: "If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5 per cent, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing about the person's symptoms or signs?"
Here are the results:
Eleven of 60 participants, or 18 per cent, gave the correct answer. These participants included four of 20 fourth-year students, three of 20 residents in internal medicine and four of 20 attending physicians. The most common answer, given by 27, was 95 per cent, with a range of 0.095 to 99 per cent. The average of all answers was 55.9 per cent, a 30-fold overestimation of disease likelihood.
Since we are all experts on the positive predictive value of a test, I hardly have to go through the calculation. Briefly, though, and just to be complete: among 1,000 people, 1 has the disease and an additional 50 test falsely positive. The PPV then is 1/51 = 2%. This is the chance that a person found to have a positive result actually has the disease. Yet nearly half of the responders said 95%. What did you estimate? Be honest. And this was Harvard! What chance do the rest of us, mere mortals, stand before such formidable cognitive traps? We might as well throw in the towel and prepare for an onslaught of tail-chasing physicians and patients spinning their gears in pursuit of false positive results. Because once you let a positive finding out of the bag... Well, you get the picture. I would wager that this is at least in part what has hijacked our healthcare system, its finances and our reason. After all, as Sam Harris pointed out, it doesn't take much for us to form beliefs: no rational thinking required. How can I be so sure, you ask?
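
For completeness, here is the Casscells calculation. The question does not state the test's sensitivity; like most readings of the problem, I assume the test misses essentially no true cases:

```python
population = 1_000
diseased   = 1                                  # prevalence 1/1,000
false_pos  = (population - diseased) * 0.05     # 5% of the healthy test positive

ppv = diseased / (diseased + false_pos)
print(f"PPV = {ppv:.1%}")    # ~2%, not the 95% that most respondents answered
```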

Casscells and his team in their paper bring up the phrase "the pan-man scan." They use it to refer to a battery of 25 screening tests and the chances that all of them come back normal if the person is perfectly healthy. Care to take a guess? Twenty-eight per cent. That's 28%! This means that out of 10 patients who walk into the office healthy, 7 will walk out with an abnormal finding and either be assigned a disease or be sent for further testing, even if it is only a repeat of the previously abnormal test. Now think back to your last "routine physical." Did you get "routine" blood work, cholesterol test, urinalysis? How many separate values were run? What was your risk of having an abnormal value, and what ensued when you did?
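
The arithmetic behind that 28% is a single line, assuming each test independently calls 5% of healthy people "abnormal" (the usual convention for laboratory reference ranges):

```python
p_all_normal = 0.95 ** 25    # each test "normal" in 95% of healthy people
print(f"all 25 tests normal:   {p_all_normal:.0%}")      # ~28%
print(f"at least one abnormal: {1 - p_all_normal:.0%}")  # ~72%
```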

The point here is once again to think about the pre-test probability of the disease and the characteristics of the test. Without this information there is no way to make an informed decision about a) whether or not to bother getting the test, and b) what to make of the result. But how do you rewire clinicians' and the public's brains to get out of these value-laden habitual errors that drive us to all kinds of bad behaviors?

Well, a psychologist from Northwestern University in Chicago by the name of Bodenhausen studies the human brain's proclivity for stereotyping. How is this relevant to medical decision making? Well, in medicine we tend to think in stereotypes. What I mean is that when a middle-aged male smoker with diabetes and hypercholesterolemia comes to the ED with a crushing substernal chest pain, we invoke the stereotype of an acute MI patient, and in this case such stereotyping allows us to make some good choices very rapidly. Where we fall into a trap is in the example above, where we stereotype a patient with a positive screening test into a group with the disease. While it is natural for our habit-addicted brains to do so, it lands us in hot water, individually and as a society. Well, Bodenhausen in this paper (subscription required) in the journal Medical Decision Making states plainly that there are certain circumstances that predispose us to stereotyping: complex decisions, cognitive overload, and happy, angry and anxious moods. Do you recognize any of these risk factors for stereotyping in medicine? I thought so.

So, what do we do? Here are 5 potential solutions to consider:
1. Go back and think some more.
Metacognition training early and often would be a great addition to the medical curriculum.
2. Get our medical appointment out of the microwave
Give the clinician more cognitive time. Not possible, you say? Pity! That is the simplest answer.
3. Educate the public about these cognitive thought-traps and rational approaches to making medical decisions.
4. If we are so committed to the proliferation of medical stuff, then we really need better HIT infrastructure. We need bedside decision aids that automatically calculate the patient's pre-test probability of having a disease, as well as the posterior probability once the test characteristics are added into the mix (a minimal sketch follows this list). Having these data before ordering tests might reduce some of the waste and tail-chasing. It may also decrease some of the harm that comes with over-utilization in our dogged pursuit of a definitive diagnosis.
5. Finally, the cost-benefit equation of innovations in medicine must begin to incorporate these costs to cognition. They may be higher than we have admitted so far.
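
To close, here is a minimal sketch of what the bedside decision aid from item 4 might compute. The function and the example numbers are hypothetical; the math is just Bayes' rule in odds form:

```python
def post_test_probability(pre_test, sensitivity, specificity, positive=True):
    """Pre-test probability + test characteristics -> post-test probability,
    via likelihood ratios (Bayes' rule in odds form)."""
    lr = sensitivity / (1 - specificity) if positive else (1 - sensitivity) / specificity
    post_odds = (pre_test / (1 - pre_test)) * lr
    return post_odds / (1 + post_odds)

# Hypothetical patient: 5% pre-test probability; a test with 90% sensitivity
# and 85% specificity.
for result in (True, False):
    p = post_test_probability(0.05, 0.90, 0.85, positive=result)
    print(f"{'positive' if result else 'negative'} result -> {p:.1%}")
```

A positive result takes this hypothetical patient from 5% to about 24%; a negative result takes her to well under 1%. That is the kind of context no raw test result can provide on its own.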