
Monday, March 5, 2012

Goldfinger's pan-man-scan: Available at a hospital near you

As my regular readers know all too well, I think a lot about our proneness to cognitive biases and how they impact our healthcare choices. I have been reading Sam Harris's book The Moral Landscape (Free Press, 2010), where he tries to lay a scientific foundation for our values and beliefs. It is a rather dense book, full of linguistic convolution and intellectual contortions. And although I do not agree with Harris on some of his points (I think he would say that my disagreement stems more from my preconceived notions and values than from the weakness of his arguments), there are some very enlightening ideas in the book.

On page 123 he discusses bias:
...we know that people often acquire their beliefs about the world for reasons that are more emotional and social than strictly cognitive. Wishful thinking, self-serving bias, in-group loyalties, and frank self-deception can lead to monstrous departures from the norms of rationality. Most beliefs are evaluated against a background of other beliefs and often in the context of an ideology that a person shares with others.
Doesn't this echo my assertion that scientific knowledge acquisition tends to be unidirectional? What occurs to me is that the combination of these emotional and ideological factors along with our cognitive predispositions gangs up to steer us toward doom. Too nihilistic? Well, that's not what I am going for. Bear with me, and I will show you how it is driving us into bankruptcy, particularly where the healthcare system is concerned.

We now know a lot about how the human brain works. Creatures of habit, we develop many mental shortcuts to conserve energy, and these shortcuts drive our decisions. Think of them like talking on your cell phone while driving: most of the time, under usual circumstances, you can pay enough attention to both driving and talking. But on occasion something unexpected comes along, and you find yourself plowed into the backside of an eighteen-wheeler, your face firmly pressed into an airbag, if you are lucky. So it is with these mental shortcuts: most of the time they serve us well, or at least don't get us into trouble, but just when we least expect it, the truth ambushes us and we suffer an error born of this habitual way of deciding.

Let's bring it around to medicine. Back in 1978, the dark ages, an intriguing paper was published by Casscells and colleagues in the New England Journal of Medicine. Here is what they did:
We asked 20 house officers, 20 fourth-year medical students and 20 attending physicians, selected in 67 consecutive hallway encounters at four Harvard Medical School teaching hospitals, the following question: "If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5 per cent, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing about the person's symptoms or signs?"
Here are the results:
Eleven of 60 participants, or 18 per cent, gave the correct answer. These participants included four of 20 fourth-year students, three of 20 residents in internal medicine and four of 20 attending physicians. The most common answer, given by 27, was 95 per cent, with a range of 0.095 to 99 per cent. The average of all answers was 55.9 per cent, a 30-fold overestimation of disease likelihood.
Since we are all experts on the positive predictive value of a test, I hardly have to go through the calculation. Briefly, though, and just to be complete: among 1,000 people, 1 has the disease and an additional 50 test falsely positive. The PPV is then 1/51 = 2%. This is the chance that a person found to have a positive result actually has the disease. Yet nearly half of the respondents said 95%. What did you estimate? Be honest. And this was Harvard! What chance do the rest of us, mere mortals, stand before such formidable cognitive traps? We might as well throw in the towel and prepare for an onslaught of tail-chasing physicians and patients spinning their gears in pursuit of false positive results. Because once you let a positive finding out of the bag... Well, you get the picture. I would wager that this is at least in part what has hijacked our healthcare system, its finances and our reason. After all, as Sam Harris pointed out, it doesn't take much for us to form beliefs: no rational thinking required. How can I be so sure, you ask?
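Before I answer, here is that PPV arithmetic as a minimal Python sketch, for those who want to check it; note that a sensitivity of 100% is my explicit assumption, since the original question is silent on it:

```python
# Positive predictive value (PPV) for the Casscells question:
# prevalence 1/1000, false positive rate 5%, and sensitivity assumed
# to be 100% (the question does not state it; this is the usual
# simplification).

prevalence = 1 / 1000
false_positive_rate = 0.05
sensitivity = 1.0  # assumption, not stated in the original question

true_positive = prevalence * sensitivity
false_positive = (1 - prevalence) * false_positive_rate

ppv = true_positive / (true_positive + false_positive)
print(f"PPV = {ppv:.1%}")  # PPV = 2.0% -- not 95%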

Casscells and his team bring up in their paper the phrase "the pan-man scan." They use it to refer to a battery of 25 screening tests and ask what the chances are that all of them come back normal if the person is perfectly healthy. Care to take a guess? Twenty-eight per cent. That's 28%! (Each test's reference range is conventionally defined to include 95% of healthy people, so the chance of 25 independent tests all coming back normal is 0.95^25, or about 28%.) This means that out of 10 patients who walk into the office healthy, 7 will walk out with an abnormal finding and either be assigned a disease or be sent for further testing, even if it is only a repeat of the previously abnormal test. Now think back to your last "routine physical." Did you get "routine" blood work, a cholesterol test, a urinalysis? How many separate values were run? What was your risk of having an abnormal value, and what ensued when you did?
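A quick sketch of that arithmetic, under the same independence assumption:

```python
# Chance that a perfectly healthy person clears a battery of tests,
# assuming each test's "normal" range covers 95% of healthy people
# and the 25 tests are independent.

p_normal = 0.95
n_tests = 25

p_all_normal = p_normal ** n_tests
print(f"P(all {n_tests} results normal) = {p_all_normal:.0%}")  # ~28%
print(f"P(at least one abnormal) = {1 - p_all_normal:.0%}")     # ~72%
```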

The point here is, once again, to think about the pre-test probability of the disease and the characteristics of the test. Without this information there is no way to make an informed decision about (a) whether or not to bother getting the test, and (b) what to make of the result. But how do you rewire clinicians' and the public's brains to get out of these value-laden habitual errors that drive us to all kinds of bad behaviors?

Well, a psychologist at Northwestern University in Chicago by the name of Bodenhausen studies the human brain's proclivity for stereotyping. How is this relevant to medical decision making? In medicine we tend to think in stereotypes. What I mean is that when a middle-aged male smoker with diabetes and hypercholesterolemia comes to the ED with crushing substernal chest pain, we invoke the stereotype of an acute MI patient, and in this case such stereotyping allows us to make some good choices very rapidly. Where we fall into a trap is in the example above, where we stereotype a patient with a positive screening test into the group with the disease. While it is natural for our habit-addicted brains to do so, it lands us in hot water, individually and as a society. Bodenhausen, in this paper (subscription required) in the journal Medical Decision Making, states plainly that certain circumstances predispose us to stereotyping: complex decisions, cognitive overload, and happy, angry and anxious moods. Do you recognize any of these risk factors for stereotyping in medicine? I thought so.

So, what do we do? Here are five potential solutions to consider:
1. Go back and think some more.
Metacognition training early and often would be a great addition to the medical curriculum.
2. Get our medical appointments out of the microwave.
Give the clinician more cognitive time. Not possible, you say? Pity! That is the simplest answer.
3. Educate the public about these cognitive thought-traps and rational approaches to making medical decisions.
4. If we are so committed to the proliferation of medical stuff, then we really need better HIT infrastructure. We need bedside decision aids that automatically calculate the patient's probability of having a disease given their pre-test probability, as well as the posterior probability once the test characteristics are added into the mix (a minimal sketch of such a calculation follows this list). Having these data before ordering tests might reduce some of the waste and tail-chasing. It may also decrease some of the harm that comes with over-utilization in our dogged pursuit of a definitive diagnosis.
5. Finally, the cost-benefit equation of innovations in medicine must begin to incorporate these costs to cognition. They may be higher than we have admitted so far.
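To make point 4 concrete, here is a minimal sketch of the kind of calculation such a decision aid would run. The function and its example numbers are mine, purely illustrative, and not drawn from any existing system:

```python
# A hypothetical bedside decision aid: straight Bayes' theorem, turning
# a pre-test probability plus the test's sensitivity and specificity
# into post-test probabilities of disease.

def post_test_probabilities(pretest, sensitivity, specificity):
    # Overall probability of a positive result (diseased + healthy false positives)
    p_positive = sensitivity * pretest + (1 - specificity) * (1 - pretest)
    p_disease_given_pos = sensitivity * pretest / p_positive
    p_disease_given_neg = (1 - sensitivity) * pretest / (1 - p_positive)
    return p_disease_given_pos, p_disease_given_neg

# The Casscells scenario: prevalence 1/1000, specificity 95%,
# sensitivity assumed to be 100%, as in the calculation above.
pos, neg = post_test_probabilities(pretest=0.001, sensitivity=1.0, specificity=0.95)
print(f"P(disease | positive test) = {pos:.1%}")  # ~2.0%
print(f"P(disease | negative test) = {neg:.1%}")  # 0.0% under 100% sensitivity
```

Even this toy version makes the key point: for a rare disease, a positive result from a quite decent test still leaves the probability of disease in the low single digits.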


Thursday, February 25, 2010

Does lactose intolerance really need an NIH panel?

I will go out on a limb: I think that lactose intolerance should not be medicalized.

I stumbled upon this story on WebMD discussing the NIH panel on lactose intolerance. First of all, I was shocked that my tax dollars are even spent on an NIH panel on lactose intolerance. Reading the rest of the story provided ample opportunity for further shock.

For example, did you know that we have no idea what the prevalence of this scourge is? So, clearly, we need large representative studies to establish this. OK, gathering evidence is never a bad idea. But in an odd juxtaposition to the call for evidence was this statement:
"The numbers may be elusive, but outcomes of a dairy-poor diet are easy to predict."
Really? Is this statement evidence-based, or is it setting up the argument that some associations are just too obvious to need evidence behind them? Because if it is the latter, I for one do not appreciate the double standard. In fact, the statement, though apparently not a direct quote from an "expert," seems rather irresponsible to me, implying that we should feel free to apply opinion-based and consensus-based principles to this question.

My final outrage came when reading that (I paraphrase) for a bona fide diagnosis one should really undergo a breath test, and other (by implication) more serious conditions need to be ruled out, such as irritable bowel syndrome and celiac disease.

Why, you might wonder, does this engender such a visceral reaction from me? Surely it is not because I do not feel compassion for those who suffer from these conditions. And it is not because I do not want to learn more about them. What worries me is that having a diagnosis demands treatment, usually with a drug directed at the symptom. I am very concerned that, instead of understanding and dealing with the underlying causes of, say, lactose intolerance symptoms, we will slap on the band-aid of a pill, a course much more expedient, though potentially far more detrimental, than looking for a preventive solution. In the case of lactose intolerance, the non-pharmacologic solution may have something to do with the way our milk is produced and processed: our terror of things microbial has driven us literally to sterilize milk prior to consumption. Some people feel that this "deadness", the lack of organisms that through their own lactase production might help us digest milk, exacerbates the symptoms of the intolerance.

But this answer would be neither simple nor politically palatable. Who would support this type of research? Milk manufacturers, who would have to overhaul their operations completely? The government whose regulations drive our milk production? The small community of committed farmers who produce raw milk, but do not have corporate muscle behind them? Not likely. And what about public opinion, so durably skewed by the establishment to fear all microorganisms?

So, when an NIH panel begins looking at an issue like this, I naturally worry. And while I do want to know more about it, I am skeptical of the end result. Are we headed toward a 100% prevalence of chronic disease requiring 100% penetration of prescription drugs in the US?