
Tuesday, April 24, 2012

Do complex problems always require complex solutions?

A mind-blowing talk by Gerd Gigerenzer on making decisions under conditions of uncertainty -- this is for you, heuristics mavens! (Note an interesting nuance about medical decisions at about 16:15).



A big h/t to @Medskep

If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!

Monday, March 19, 2012

How medicine is like quantum physics

When a patient goes to his doctor to get fixed, a pivotal triad of presentation-diagnosis-treatment ensues. The three steps are as follows:
1. History, physical examination and a differential diagnosis
When the patient shows up with a complaint, a constellation of symptoms and signs, a good clinician collects this information and funnels it through a mesh of possibilities, ruling certain conditions in and others out to derive the initial differential diagnosis.
2. Diagnostic testing
Having gone through the exercise in step 1, the practitioner then decides on appropriate diagnostic testing in order to narrow down further the possible reasons for the person's state.
3. Treatment
Finally, having reviewed all the data, the clinician makes the therapeutic choice.

These three steps seem dead simple, and we have all experienced them, either as patients or clinicians or both. Yet the cause of the current catastrophic state of our healthcare system lies within the brackets of each of these three little domains.

The cause is our failure to acknowledge the vast universe of uncertainty dotted sparsely with the galaxies of definiteness, all shrouded in false confidence. And while the cause and the way to address it are conceptually simple, the remedy is not easy to implement. But I am jumping ahead; first I have to convince you that I have indeed discovered the cause of this ruin.

Let's examine what goes on in step 1, the compilation of history and physical to generate a differential diagnosis. This is usually an implicit process that takes place mostly at a subconscious level, where the mind makes connections between the current patient and what the clinician has learned and experienced. What does that mean? It means that the clinician, within the constraints of time and the incredible shrinking appointment, has to listen, examine, elicit and put together all of the data in such a way as to cram them into a few little diagnostic boxes, many of which contain much more material than a human brain can hold all at once, even if that brain is at the right tail of human cognition (or not). What takes over at this step is a bunch of heuristics and biases. Have we talked about those enough here? Just to review, heuristics are mental shortcuts that can serve us well, but can also lead us astray, particularly under conditions of extreme uncertainty, as in a healthcare encounter. If you want to learn more about this, read Kahneman, Slovic and Tversky's opus "Judgment under Uncertainty: Heuristics and Biases." As for cognitive biases, I will not belabor them, as there is enough material about them on this web site and elsewhere to overload a spaceship.

The picture that emerges at this step is one of fragments of information gathered being fit into fragments of studies and experience, stirred with mental shortcuts and poured into a bunch of baking tins shaped like specific diagnoses. Is there any room in this process for assigning objective probabilities to any of these events? Well, there is an illusion of doing so, but even this step is done by feel, rather than by computation. So while there is some awareness of a probabilistic hierarchy, it is more chaos than science. Given this picture, it's a wonder it actually works as well as it does, don't you think?

The next step in this recipe is the diagnostic workup. What ensues here is the utter Wild West, particularly as new technologies are adopted at breakneck speed without any thought to the interpretation of the data they are capable of spitting out. Here the confusion of the first step gets magnified exponentially, even as it seduces us into a further illusion of certainty. The uncertainties in arriving at the differential get multiplied by the imperfections of diagnostic tests to give the encounter truly quantum properties: you may know the results or you may know the patient, but you may not know both at the same time. What I mean is what I have always said on this blog: no test is perfect, and because of this simple truth, unless we know the pre-test probability of the disease in a particular patient, as well as the characteristics of the test, we have no idea about the context of these results. Taking them at face value, as we know, is a grave error.
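
To put some numbers on this, here is a minimal Python sketch of how pre-test probability and test characteristics combine; the function name and the example figures are mine, chosen purely for illustration:

    # Bayes' rule for a positive test result: the same result means very
    # different things depending on the pre-test probability.

    def post_test_probability(pre_test_prob, sensitivity, specificity):
        """Probability of disease given a positive test result."""
        true_pos = pre_test_prob * sensitivity
        false_pos = (1 - pre_test_prob) * (1 - specificity)
        return true_pos / (true_pos + false_pos)

    # Illustrative test: 90% sensitive, 95% specific.
    print(round(post_test_probability(0.01, 0.90, 0.95), 2))  # 0.15 in a low-risk patient
    print(round(post_test_probability(0.50, 0.90, 0.95), 2))  # 0.95 in a high-risk patient

The very same positive result that is near-definitive in one patient is little better than a long shot in another. That is the face-value error in a nutshell.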

What follows these results is frequently more diagnostic hit-or-miss, as the likelihood of harm and of escalating expenditures without any added value rises. Then comes the treatment, with its many uncertainties and its potential for adverse events, and what are we left with? A pile of costly and deadly steaming manure. So, what's a doc to do?

I think that there is a very simple solution to this, and in its simplicity it will be incredibly hard to implement: education. And I don't just mean medical education. Everything that I have talked about in this post echoes back to the concept of probability. In secondary education, at least as I remember it, probability is left to Advanced Math. By the time a student becomes eligible to take this course, she has been made to feel that she does not have the facility for math, and that, furthermore, math is boring and useless. So, while my friends in education may have a much better idea of what percentage of kids leave high school having been exposed to some probability, my guess is that it is woefully small. And those who do get exposure to it walk out of class perfectly able to bet on a game of craps or a horse race, but with no clue how to apply these ideas to the world they live in.

And so those who progress into healthcare and those who don't have heard the word "probability," but cannot quite understand how it impacts them beyond their chance of winning the lottery. And unfortunately, I have to tell you that, if I relied on what I learned in medical school about probability, well, let's just say it is highly improbable that we would be having this discussion right now. This is why I do now and will for the foreseeable future harp on all of these probabilities, so that when you are faced with your own medical decisions, you will at least know the right questions to ask.

I know I need to wrap this up -- I saw that yawn! Here is the bottom line. First, we need to acknowledge the colossal uncertainties in medicine. Once we have done so, we need to understand that such uncertainties require a probabilistic approach in order to optimize care. Finally, such a probabilistic approach has to be taught early and often. All of us, clinicians and patients alike, are responsible for creating this monster that we call healthcare in the 21st century. We will not train it to behave by adding more parts. The only way to train it is to train our brains to be much more critical and to engage in a conversation about probabilities. Without this shift a constructive change in how medicine is done in this country is, well, improbable.


Monday, March 5, 2012

Goldfinger's pan-man-scan: Available at a hospital near you

As my regular readers know all too well, I think a lot about our proneness to cognitive biases and how they impact our healthcare choices. I have been reading Sam Harris's book The Moral Landscape (Free Press, 2010), where he tries to lay a scientific foundation for our values and beliefs. It is rather a dense book, full of linguistic convolution and intellectual contortions. And although I do not agree with Harris on some of his points (I think he would say that my disagreement stems more from my preconceived notions and values than from the weakness of his arguments), there are some very enlightening ideas in the book.

On page 123 he discusses bias:
...we know that people often acquire their beliefs about the world for reasons that are more emotional and social than strictly cognitive. Wishful thinking, self-serving bias, in-group loyalties, and frank self-deception can lead to monstrous departures from the norms of rationality. Most beliefs are evaluated against a background of other beliefs and often in the context of an ideology that a person shares with others.
Doesn't this echo my assertion that scientific knowledge acquisition tends to be unidirectional? What occurs to me is that the combination of these emotional and ideological factors along with our cognitive predispositions gangs up to steer us toward doom. Too nihilistic? Well, that's not what I am going for. Bear with me, and I will show you how it is driving us into bankruptcy, particularly where the healthcare system is concerned.

We know a lot about how the human brain works now. Creatures of habit, for the purpose of conserving energy, we develop many mental shortcuts that drive our decisions. Think of them like talking on your cell phone while driving: most of the time, under usual circumstances, you can pay enough attention to both driving and talking. But on occasion something unexpected comes along, and you find yourself plowed into the backside of an eighteen-wheeler, your face firmly pressed into an airbag, if you are lucky. So it is with these mental shortcuts: most of the time they serve us well, or at least don't get us into trouble, but just when we least expect it, the truth ambushes us and we suffer from an error in this habitual way of deciding.

Let's bring it around to medicine. Back in 1978, the dark ages, an intriguing paper was published by Casscells and colleagues in the New England Journal of Medicine. Here is what they did:
We asked 20 house officers, 20 fourth-year medical students and 20 attending physicians, selected in 67 consecutive hallway encounters at four Harvard Medical School teaching hospitals, the following question: "If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5 per cent, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing about the person's symptoms or signs?"
Here are the results:
Eleven of 60 participants, or 18 per cent, gave the correct answer. These participants included four of 20 fourth-year students, three of 20 residents in internal medicine and four of 20 attending physicians. The most common answer, given by 27, was 95 per cent, with a range of 0.095 to 99 per cent. The average of all answers was 55.9 per cent, a 30-fold overestimation of disease likelihood.
Since we are all experts on the positive predictive value of a test, I hardly have to go through the calculation. Briefly, though, and just to be complete: among 1,000 people, 1 has the disease and an additional 50 test falsely positive. The PPV then is 1/51 = 2%. This is the chance that a person found to have a positive result actually has the disease. Yet nearly half of the respondents said 95%. What did you estimate? Be honest. And this was Harvard! What chance do the rest of us, mere mortals, stand before such formidable cognitive traps? We might as well throw in the towel and prepare for an onslaught of tail-chasing physicians and patients spinning their gears in pursuit of false positive results. Because once you let a positive finding out of the bag... Well, you get the picture. I would wager that this is at least in part what has hijacked our healthcare system, its finances and our reason. After all, as Sam Harris pointed out, it doesn't take much for us to form beliefs: no rational thinking required. How can I be so sure, you ask?
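
If you want to check the arithmetic yourself, here is a quick Python sketch using the numbers from the Casscells question (assuming, as the question implies, that the test catches the one true case):

    # Numbers from the Casscells question: prevalence 1/1000, 5% false positives.
    cohort = 1000
    prevalence = 1 / 1000
    false_positive_rate = 0.05

    true_positives = cohort * prevalence                               # 1 person
    false_positives = cohort * (1 - prevalence) * false_positive_rate  # ~50 people

    ppv = true_positives / (true_positives + false_positives)
    print(round(ppv, 3))  # 0.02 -- about 2%, nowhere near the popular answer of 95%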

Casscells and his team in their paper bring up the phrase "the pan-man scan." They use it to refer to a battery of 25 screening tests and the chance that all of them come back normal if the person is perfectly healthy. Care to take a guess? Twenty-eight per cent. That's 28%! This means that out of 10 patients who walk into the office healthy, 7 will walk out with an abnormal finding and either be assigned a disease or be sent for further testing, even if it is only a repeat of the previously abnormal test. Now think back to your last "routine physical." Did you get "routine" blood work, a cholesterol test, a urinalysis? How many separate values were run? What was your risk of having an abnormal value, and what ensued when you did?
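
The arithmetic behind that startling number is nothing more exotic than multiplying independent probabilities. Here is a sketch, assuming each test independently comes back "normal" 95% of the time in a healthy person (the conventional cutoff for laboratory reference ranges):

    # Probability that all 25 independent screening tests come back normal
    # in a perfectly healthy person, each test "normal" 95% of the time.
    n_tests = 25
    p_normal_each = 0.95

    p_all_normal = p_normal_each ** n_tests
    print(round(p_all_normal, 2))      # 0.28 -- only ~28% escape with no abnormal result
    print(round(1 - p_all_normal, 2))  # 0.72 -- roughly 7 of 10 healthy people get flagged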

The point here is once again to think about the pre-test probability of the disease and the characteristics of the test. Without this information there is no way to make an informed decision about (a) whether or not to bother getting the test, and (b) what to make of the result. But how do you rewire clinicians' and the public's brains to get out of these value-laden habitual errors that drive us to all kinds of bad behaviors?

A psychologist at Northwestern University in Chicago by the name of Bodenhausen studies the human brain's proclivity for stereotyping. How is this relevant to medical decision making? Well, in medicine we tend to think in stereotypes. What I mean is that when a middle-aged male smoker with diabetes and hypercholesterolemia comes to the ED with crushing substernal chest pain, we invoke the stereotype of an acute MI patient, and in this case such stereotyping allows us to make some good choices very rapidly. Where we fall into a trap is in the example above, where we stereotype a patient with a positive screening test into a group with the disease. While it is natural for our habit-addicted brains to do so, it lands us in hot water, individually and as a society. Bodenhausen in this paper (subscription required) in the journal Medical Decision Making states plainly that there are certain circumstances that predispose us to stereotyping: complex decisions, cognitive overload, and happy, angry and anxious moods. Do you recognize any of these risk factors for stereotyping in medicine? I thought so.

So, what do we do? Here are 5 potential solutions to consider:
1. Go back and think some more.
Metacognition training early and often would be a great addition to the medical curriculum.
2. Get our medical appointment out of the microwave
Give the clinician more cognitive time. Not possible, you say? Pity! That is the simplest answer.
3. Educate the public about these cognitive thought-traps and rational approaches to making medical decisions.
4. If we are so committed to the proliferation of medical stuff, then we really need better HIT infrastructure. We need bedside decision aids that automatically calculate the patient's probability of having a disease from the pre-test probability, and then the posterior probability once the test characteristics are added into the mix (see the sketch after this list). Having these data before ordering tests might reduce some of the waste and tail-chasing. It may also decrease some of the harm that comes with over-utilization in our dogged pursuit of a definitive diagnosis.
5. Finally, the cost-benefit equation of innovations in medicine must begin to incorporate these costs to cognition. They may be higher than we have admitted so far.
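
To expand on point 4, here is a hypothetical sketch in Python of what the core of such a decision aid might compute; the function, the test characteristics and the simplifying assumption of independent tests are all mine, for illustration only, and describe no existing product:

    # Hypothetical core of a bedside decision aid: update the probability of
    # disease as each test result comes in, using likelihood ratios.

    def update(prob, sensitivity, specificity, positive):
        """One Bayes update of disease probability after a single test result."""
        odds = prob / (1 - prob)
        if positive:
            lr = sensitivity / (1 - specificity)  # positive likelihood ratio
        else:
            lr = (1 - sensitivity) / specificity  # negative likelihood ratio
        post_odds = odds * lr
        return post_odds / (1 + post_odds)

    # Invented numbers: start from a 2% pre-test probability, then fold in
    # two test results (treated, simplistically, as independent).
    p = 0.02
    p = update(p, sensitivity=0.85, specificity=0.90, positive=True)   # ~0.15
    p = update(p, sensitivity=0.95, specificity=0.98, positive=False)  # ~0.01
    print(round(p, 3))

Seeing the probability move with each result, rather than trusting the gut, is precisely the kind of context that keeps a positive finding from being taken at face value.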

 

Tuesday, January 4, 2011

Guest post: How our brains are wired to advance science

We have a treat today. Today I am featuring a guest post from my brilliant 17-year-old niece Katherine Dana. She is currently applying to colleges, and this is one of her brief essays. Kathy is interested in animal communication specifically, but, as you can see, also spends a lot of time thinking about science in general. And oddly, she seems to be contemplating similar themes to the ones we address here. 
While it is hard for me to stop waxing poetic about how proud I am of her, I will now cut myself short, so that you can enjoy her lucid commentary.

By Katherine E. Dana

Marcel Proust once wrote, "The real voyage of discovery consists not in seeking new landscapes, but in having new eyes." Thus goes the song of science, humanity's great unifier. Science is not merely the means for collecting random information—it is the means through which we make sense of our world. It is messier than mathematics, less exact. And yet in some ways, it is this very inexactitude that gives science its potency, and allows it to cut to the very heart of nature's chaotic randomness. It works by taking the givens of nature and churning out elegant guesses, which predict as effectively as they describe.

One quality that distinguishes mind from machine is that leap of thought that psychologists term "heuristics"—mental shortcuts, expressly designed to help us connect the dots without having to consciously traverse the spaces between. This is our organic advantage.

While today's machines, no matter how complex, are restricted to lengthy algorithms, we may leap from branch to branch. Nowhere in human endeavors is this cognitive edge more apparent than in the combined efforts of humans seeking to find new truth. For before we can know, we must question; and this is where insight is most crucial. It is not enough to investigate the familiar. We must find the courage to ask uncomfortable questions, and be willing to uproot even our most cherished beliefs, all in the name of a deeper understanding.