
Monday, March 5, 2012

Goldfinger's pan-man-scan: Available at a hospital near you

As my regular readers know all too well, I think a lot about our proneness to cognitive biases and how they impact our healthcare choices. I have been reading Sam Harris's book The Moral Landscape (Free Press, 2010), in which he tries to lay a scientific foundation for our values and beliefs. It is rather a dense book, full of linguistic convolution and intellectual contortions. And although I do not agree with Harris on some of his points (I suspect he would say that my disagreement stems more from my preconceived notions and values than from any weakness of his arguments), there are some very enlightening ideas in the book.

On page 123 he discusses bias:
...we know that people often acquire their beliefs about the world for reasons that are more emotional and social than strictly cognitive. Wishful thinking, self-serving bias, in-group loyalties, and frank self-deception can lead to monstrous departures from the norms of rationality. Most beliefs are evaluated against a background of other beliefs and often in the context of an ideology that a person shares with others.
Doesn't this echo my assertion that scientific knowledge acquisition tends to be unidirectional? What occurs to me is that these emotional and ideological factors gang up with our cognitive predispositions to steer us toward doom. Too nihilistic? Well, that is not what I am going for. Bear with me, and I will show you how this combination is driving us into bankruptcy, particularly where the healthcare system is concerned.

We now know a lot about how the human brain works. Creatures of habit that we are, we develop many mental shortcuts to conserve energy, and these shortcuts drive our decisions. Think of them like talking on your cell phone while driving: most of the time, under usual circumstances, you can pay enough attention to both driving and talking. But on occasion something unexpected comes along, and you find yourself plowed into the backside of an eighteen-wheeler, your face firmly pressed into an airbag, if you are lucky. So it is with these mental shortcuts: most of the time they serve us well, or at least don't get us into trouble, but just when we least expect it, reality ambushes us and our habitual way of deciding leads us into error.

Let's bring it around to medicine. Back in 1978, the dark ages, an intriguing paper was published by Casscells and colleagues in the New England Journal of Medicine. Here is what they did:
We asked 20 house officers, 20 fourth-year medical students and 20 attending physicians, selected in 67 consecutive hallway encounters at four Harvard Medical School teaching hospitals, the following question: "If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5 per cent, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing about the person's symptoms or signs?"
Here are the results:
Eleven of 60 participants, or 18 per cent, gave the correct answer. These participants included four of 20 fourth-year students, three of 20 residents in internal medicine and four of 20 attending physicians. The most common answer, given by 27, was 95 per cent, with a range of 0.095 to 99 per cent. The average of all answers was 55.9 per cent, a 30-fold overestimation of disease likelihood.
Since we are all experts on the positive predictive value of a test, I hardly have to go through the calculation. Briefly, though, and just to be complete: among 1,000 people, 1 has the disease, and an additional 50 (5% of the remaining 999, give or take) test falsely positive. The PPV is then 1/51 = 2%. That is the chance that a person found to have a positive result actually has the disease. Yet nearly half of the respondents said 95%. What did you estimate? Be honest. And this was Harvard! What chance do the rest of us, mere mortals, stand before such formidable cognitive traps? We might as well throw in the towel and prepare for an onslaught of tail-chasing physicians and patients spinning their gears in pursuit of false positive results. Because once you let a positive finding out of the bag... Well, you get the picture. I would wager that this is at least in part what has hijacked our healthcare system, its finances and our reason. After all, as Sam Harris pointed out, it doesn't take much for us to form beliefs: no rational thinking required. How can I be so sure, you ask?
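For those who want the arithmetic spelled out, here is a minimal sketch in Python of the Casscells calculation. It assumes, as the question implies, a perfectly sensitive test; the numbers are the paper's, not mine.

```python
# Positive predictive value for the Casscells question:
# prevalence 1/1000, false positive rate 5%, sensitivity
# assumed to be 100% (the question says nothing about misses).
prevalence = 1 / 1000
false_positive_rate = 0.05

true_positives = prevalence * 1.0                        # perfect sensitivity
false_positives = (1 - prevalence) * false_positive_rate

ppv = true_positives / (true_positives + false_positives)
print(f"Chance a positive result means disease: {ppv:.1%}")  # ~2.0%
```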

Casscells and his team bring up the phrase "the pan-man scan" in their paper. They use it to refer to a battery of 25 screening tests and the chance that all of them come back normal if the person is perfectly healthy. Care to take a guess? Twenty-eight per cent. That's 28%! This means that out of every 10 patients who walk into the office healthy, roughly 7 will walk out with an abnormal finding and either be assigned a disease or be sent for further testing, even if it is only a repeat of the previously abnormal test. Now think back to your last "routine physical." Did you get "routine" blood work, a cholesterol test, a urinalysis? How many separate values were run? What was your risk of having an abnormal value, and what ensued when you did?
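The pan-man-scan figure is easy to reproduce. Here is a quick sketch, under the same simplifying assumption the back-of-the-envelope figure relies on: 25 statistically independent tests, each with a 5% false positive rate in a healthy person.

```python
# Probability that all 25 screening tests come back normal
# in a perfectly healthy person, assuming independent tests
# with a 5% false positive rate each.
n_tests = 25
p_normal_each = 0.95

p_all_normal = p_normal_each ** n_tests
print(f"All {n_tests} tests normal: {p_all_normal:.0%}")   # ~28%
print(f"At least one abnormal: {1 - p_all_normal:.0%}")    # ~72%
```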

The point here is once again to think about the pre-test probability of the disease and the characteristics of the test. Without this information there is no way to make an informed decision about (a) whether or not to bother getting the test, and (b) what to make of the result. But how do you rewire clinicians' and the public's brains to get them out of these value-laden habitual errors that drive us to all kinds of bad behaviors?

Well, a psychologist at Northwestern University in Chicago by the name of Bodenhausen studies the human brain's proclivity for stereotyping. How is this relevant to medical decision making? In medicine we tend to think in stereotypes. What I mean is that when a middle-aged male smoker with diabetes and hypercholesterolemia comes to the ED with crushing substernal chest pain, we invoke the stereotype of an acute MI patient, and in this case such stereotyping allows us to make some good choices very rapidly. Where we fall into a trap is in the example above, where we stereotype a patient with a positive screening test into the group with the disease. While it is natural for our habit-addicted brains to do so, it lands us in hot water, individually and as a society. Bodenhausen, in this paper (subscription required) in the journal Medical Decision Making, states plainly that certain circumstances predispose us to stereotyping: complex decisions, cognitive overload, and happy, angry and anxious moods. Do you recognize any of these risk factors for stereotyping in medicine? I thought so.

So, what do we do? Here are 5 potential solutions to consider:
1. Go back and think some more.
Metacognition training early and often would be a great addition to the medical curriculum.
2. Get our medical appointment out of the microwave.
Give the clinician more cognitive time. Not possible, you say? Pity! That is the simplest answer.
3. Educate the public about these cognitive traps and about rational approaches to making medical decisions.
4. If we are so committed to the proliferation of medical stuff, then we really need a better HIT infrastructure. We need bedside decision aids that automatically calculate the patient's probability of having a disease given the pre-test probability, as well as the posterior probability once the test characteristics are added into the mix (a sketch of this calculation follows this list). Having these numbers in hand before ordering tests might reduce some of the waste and tail-chasing. It may also decrease some of the harm that comes with over-utilization in our dogged pursuit of a definitive diagnosis.
5. Finally, the cost-benefit equation for innovations in medicine must begin to incorporate these costs to cognition. They may be higher than we have admitted so far.
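To make item 4 concrete, here is a minimal sketch of the core calculation such a decision aid would run: Bayes' theorem applied to a pre-test probability and the test's sensitivity and specificity. The function and the numbers are illustrative assumptions, not a description of any existing product.

```python
def post_test_probability(pretest: float, sensitivity: float,
                          specificity: float, positive: bool) -> float:
    """Probability of disease after a test result, via Bayes' theorem."""
    if positive:
        true_pos = pretest * sensitivity
        false_pos = (1 - pretest) * (1 - specificity)
        return true_pos / (true_pos + false_pos)
    false_neg = pretest * (1 - sensitivity)
    true_neg = (1 - pretest) * specificity
    return false_neg / (false_neg + true_neg)

# The Casscells scenario: prevalence 1/1000, specificity 95%,
# sensitivity taken as 100%.
print(f"{post_test_probability(0.001, 1.0, 0.95, positive=True):.1%}")  # ~2.0%
```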