Monday, April 16, 2012
Here is the video of my Ignite Boston 9 talk from March 29.
Wednesday, April 4, 2012
How to make safer decisions in medicine
I love when an article I read first thing in the morning keeps me thinking all through my morning chores and then erupts into a blog post. So it was with this little gem in the statistics magazine "Significance." The author suggests making gambling safer by placing realistic odds estimates right on the poker machines in casinos. He even walks through generating the odds of winning and losing, and by how much, based on fully transparent assumptions. In fact, what he has in effect constructed is a cost-benefit model for the decision to engage in the game of poker on these machines. Seems pretty simple, right? Just a few assumptions about how long the person will play, some objective inputs about the probabilities, and PRESTO, you have a transparent and realistic model of what is probable.
In medicine, there is a discipline known as Medical Decision Making, and what it does is exactly what you see in the "Significance" article: its practitioners construct risk- (and, hence, cost-) benefit models for decisions that we make in medicine. To be sure, these turn out to be rather more complex, since the inputs for them have to come from a large and complete sampling of the clinical literature addressing the risks and the benefits. But that's the meat; the skeleton upon which this meat hangs is a simple decision tree with "if this, then that" arguments. In this way these models synthesize everything that we know about a specific course of action and put it together into a number driven by probability.
They usually go something like this. We have a group of women between 40 and 49 years of age with no apparent risk factors for breast cancer. What is the risk-benefit balance of mammography screening in this specific age group? One way to approach this is to take a hypothetical cohort of 1,000 women who fit this description and put it through a decision tree. The first decision node here is whether to perform a screening or not. What follows are limbs stretching out toward particular outcomes. Obviously, some of these outcomes will be desirable (e.g., saving lives), while some will be undesirable, ranging from worry about false positive results to unnecessary surgery, chemotherapy, radiation, and even death. Because these outcomes are so heterogeneous, we try to convert everything to monetary costs per unit of quality-adjusted life (quality, because there are outcomes worse than death, as it turns out). But what underlies all of these models is mathematics derived from clinical studies, not pulled out of thin air. This is the most useful synthesis of the best evidence available.
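To make this concrete, here is a minimal sketch of such a decision tree in Python. Every probability, cost, and quality-of-life weight in it is invented purely for illustration; a real Medical Decision Making model would pull each of these inputs from the clinical literature.

```python
# A toy decision-tree comparison of "screen" vs. "don't screen" for a
# hypothetical cohort. All probabilities, costs, and quality-of-life
# weights below are made up for illustration; real MDM models derive
# these inputs from systematic reviews of the clinical evidence.

COHORT = 1_000

# Each branch of the tree: (probability, cost in dollars, quality-adjusted life-years)
strategies = {
    "screen": [
        (0.008, 55_000, 14.0),  # cancer caught early and treated
        (0.002, 90_000, 6.0),   # cancer missed despite screening
        (0.090, 2_500, 19.9),   # false positive: worry, biopsy, no cancer
        (0.900, 300, 20.0),     # true negative: cost of the test only
    ],
    "no_screen": [
        (0.010, 95_000, 8.0),   # cancer found later, at a worse stage
        (0.990, 0, 20.0),       # no cancer, no test
    ],
}

for name, branches in strategies.items():
    # Probabilities on the limbs leaving a node must sum to 1.
    assert abs(sum(p for p, _, _ in branches) - 1.0) < 1e-9
    exp_cost = sum(p * cost for p, cost, _ in branches)
    exp_qaly = sum(p * q for p, _, q in branches)
    print(f"{name:10s} expected cost ${exp_cost:>8,.0f} per woman, "
          f"expected QALYs {exp_qaly:.2f} "
          f"(cohort of {COHORT:,}: ${exp_cost * COHORT:,.0f})")
```

The model itself is just expected-value arithmetic over the limbs of the tree; all of the hard work, and all of the controversy, lives in where those input numbers come from.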
To be sure, MDM models are rather more complicated than the poker example. They require a little more undivided attention to follow and understand. Furthermore, I personally did not get a whole lot of exposure to them in my training, but perhaps that has changed. Like anything to do with probability, these models tend to be off-putting in a society that has consigned itself to widespread innumeracy. And doctors are certainly not immune from misunderstanding probability. Yet without them, perceptions rule, and our healthcare becomes a reckless gamble. In our ignorance we collude to build the profits that come with medicalizing small deviations from perceived normality. Sadly, the primary interests that drive these profits are not usually doing so with probabilistic forethought either, but rather on the basis of red-hot conviction that they are right.
Doctors and e-patients need to lead a radical transformation in how we handle decisions in healthcare. It is very clear that willful ignorance has not served us well, and we are all too easily led into panic about every pimple. Resilience can only come when we question our assumptions. Alas, our intuitive brain is almost certain to mislead us when faced with complex information; why else would we need explicit odds listed on poker machines? The absurd complexity of information in medicine deserves no less. It's time to start the probability revolution!
Friday, March 30, 2012
My solution to the healthcare crisis
Here is my talk from Ignite Boston last night -- I solved the healthcare crisis!
[Embedded slides: "Zilberberg igniteboston2012," from murzee]
Tuesday, March 20, 2012
The probability dozen of participatory medicine
Yesterday my rant about uncertainty and probability got quite a bit of play in cyberspace, and I am glad.
Uncertainty is ubiquitous. We consider the odds of rain when choosing what to wear. We do (or at least we should do) a quick mental risk-benefit analysis before buying a burger at Quickie-Mart. We choose our driving routes to work based on the probability of encountering heavy traffic. We do this mental calculus subconsciously but reliably, mostly getting it right. What is odd, though, is that there are certain parts of our lives where we expect complete and utter certainty. I will not get into the political aspects of this fallacy, but I do want to continue down this line of reasoning about healthcare.
As I said yesterday, and many, many times in the past, the only certain thing about medicine is uncertainty. And here is what I want you to understand deeply: the amount of uncertainty is much greater than you think. So, every time you say to yourself, "I think there is a lot of uncertainty in this information," multiply it by 100, and then you may get close to just how uncertain most information is.
And again, I want to emphasize that this uncertainty gets magnified in the office encounter. So, what is the solution, short of having everyone understand the totality of evidence? Yesterday I said that the solution is to teach probability early and often, and this is indeed the best long-range answer. But is there anything we can do in the short-term? The answer, of course, is yes. And here is what it is.
Everyone needs to learn what questions to ask. Instead of nodding your head vigorously to everything your doctor says, put up your hand and ask how certain s/he is that s/he is on the right track. Here are a dozen questions to help you have this conversation:
1. What are the odds that we have the diagnosis wrong?
2. What are the odds that the test you are ordering will give us the right answer, given the odds of my having the condition that you are testing me for?
3. How are we going to interpret results that are equivocal?
4. What follow-up testing will need to happen if the results are equivocal?
5. What are the implications of further testing in terms of diagnostic certainty and invasiveness of follow-up testing?
6. If I need an invasive test, what are the odds that it will yield a useful diagnosis that will alter my care?
7. If I need an invasive test, what are the odds of an adverse event, such as infection, or even death?
8. What are the odds of missing something deadly if we forgo this diagnostic testing?
9. What are the odds that the treatment you are prescribing for this condition will improve the condition?
10. How much improvement can I expect with this treatment if there is to be improvement?
11. What are the odds that I will have an adverse event related to this treatment? What are the odds of a serious adverse event, such as death?
12. How much will all of this cost in the context of the benefit I am likely to derive from it?
And in the end, you need to understand where these odds are coming from -- the clinician's gut, the evidence, or both? I prefer an answer that integrates both, which, I believe, was the original intent of evidence-based medicine.
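If you want to see how gut and evidence can be combined explicitly, here is a small Python sketch of the likelihood-ratio form of Bayes' rule, which is the machinery behind question #2. The clinician's gut supplies the pre-test probability; the evidence supplies the test's sensitivity and specificity. All the numbers in the example are hypothetical.

```python
# Combine a pre-test probability (the clinician's gut, or prevalence data)
# with a test's characteristics (the evidence) via likelihood ratios.

def post_test_probability(pre_test_prob, sensitivity, specificity, positive=True):
    """Return the probability of disease after a test result (Bayes' rule)."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    if positive:
        lr = sensitivity / (1 - specificity)  # likelihood ratio of a positive test
    else:
        lr = (1 - sensitivity) / specificity  # likelihood ratio of a negative test
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Hypothetical example: the doctor estimates a 5% chance of the condition,
# and the test is 90% sensitive and 85% specific.
print(post_test_probability(0.05, 0.90, 0.85, positive=True))   # ~0.24
print(post_test_probability(0.05, 0.90, 0.85, positive=False))  # ~0.006
```

Notice what this little exercise shows: even a positive result from a decent test, taken against a low pre-test probability, leaves the diagnosis far from certain.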
Perhaps for some of us this is a stretch: we don't like numbers, we are intimidated by the setting, the doc may be unhappy with the interrogation. But it is truly incumbent on all of us to accept the responsibility for sharing in these clinical decisions. I believe that the docs of today are much more in tune with shared decision-making and understand the value of participatory medicine. And if they are not, educate them. Ultimately, it is your own attitude to risk -- and not just the naked data or the clinician's perception of your attitude -- that should drive all of these decisions.
Knowledge is empowering, and empowerment is good for everyone, patient and clinician alike. As patients, taking control of what happens to us in a medical encounter can only bring higher odds of a desirable outcome. For physicians, a cogent conversation about their recommendations may help safeguard against future litigation, not to mention augment the satisfaction in the relationship.
And thus starting to discuss probabilities explicitly is very likely to get us to a better place in terms of both quality and costs of medical care. And in the process it may very well train us how to make better decisions in the rest of our lives.
I would love to hear about your experiences discussing probability, be it in a medical or non-medical setting. And as always, thanks for reading.
Monday, March 19, 2012
How medicine is like quantum physics
When a patient goes to the doctor to get fixed, a pivotal triad of presentation-diagnosis-treatment ensues. The three steps are as follows:
1. History, physical examination and a differential diagnosis
When the patient shows up with a complaint, a constellation of symptoms and signs, a good clinician collects this information and funnels it through a mesh of possibilities, ruling certain conditions in and others out to derive the initial differential diagnosis.
2. Diagnostic testing
Having gone through the exercise in step 1, the practitioner then decides on appropriate diagnostic testing in order to narrow down further the possible reasons for the person's state.
3. Treatment
Finally, having reviewed all the data, the clinician makes the therapeutic choice.
These three steps seem dead simple, and we have all experienced them, either as patients or clinicians or both. Yet the cause for the current catastrophic state of our healthcare system lies within the brackets of each of these three little domains.
The cause is our failure to acknowledge the vast universe of uncertainty dotted sparsely with the galaxies of definiteness, all shrouded in false confidence. And while the cause and the way to address it are conceptually simple, the remedy is not easy to implement. But I am jumping ahead; first I have to convince you that I have indeed discovered the cause of this ruin.
Let's examine what goes on in step 1, the compilation of the history and physical to generate a differential diagnosis. This is usually an implicit process that takes place mostly at a subconscious level, where the mind makes connections between the current patient and what the clinician has learned and experienced. What does that mean? It means that the clinician, within the constraints of time and the incredible shrinking appointment, has to listen, examine, elicit and put together all of the data in such a way as to cram them into a few little diagnostic boxes, many of which contain much more material than a human brain can hold all at once, even if that brain is at the right tail of human cognition (or not). What takes over at this step is a bunch of heuristics and biases. Have we talked about those enough here? Just to review, heuristics are mental shortcuts that can serve us well, but can also lead us astray, particularly under conditions of extreme uncertainty, as in a healthcare encounter. If you want to learn more about this, read Kahneman, Slovic and Tversky's opus "Judgment under Uncertainty: Heuristics and Biases." As for cognitive biases, I will not belabor them, as there is enough material about them on this web site and elsewhere to overload a spaceship.
The picture that emerges at this step is one of fragments of information gathered being fit into fragments of studies and experience, stirred with mental shortcuts and poured into a bunch of baking tins shaped like specific diagnoses. Is there any room in this process for assigning objective probabilities to any of these events? Well, there is an illusion of doing so, but even this step is done by feel, rather than by computation. So while there is some awareness of a probabilistic hierarchy, it is more chaos than science. Given this picture, it's a wonder it actually works as well as it does, don't you think?
The next step in this recipe is the diagnostic workup. What ensues here is the Wild West, particularly as new technologies are adopted at breakneck speed without any thought to the interpretation of the data that they are capable of spitting out. Here the confusion of the first step gets magnified exponentially, even as it seduces us into a further illusion of certainty. The uncertainties in arriving at the differential get multiplied by the imperfections of diagnostic tests to give the encounter truly quantum properties: you may know the results or you may know the patient, but you may not know both at the same time. What I mean is what I have always said on this blog: no test is perfect, and because of this simple truth, unless we know the pre-test probability of the disease in a particular patient, as well as the characteristics of the test, we have no idea about the context of these results. Taking them at face value, as we know, is a grave error.
What follows these results is frequently more diagnostic hit-or-miss, as the likelihood of harm and of escalating expenditures without any added value rises. Then comes the treatment, with its many uncertainties and its potential for adverse events, and what are we left with? A pile of costly and deadly steaming manure. So, what's a doc to do?
I think that there is a very simple solution to this, and in its simplicity it will be incredibly hard to implement: education. And I don't just mean medical education. Everything that I have talked about in this post echoes back to the concept of probability. In secondary education, at least as I remember it, probability is left to Advanced Math. By the time a student becomes eligible to take this course, she has been made to feel that she does not have the facility for math, and that, furthermore, math is boring and useless. So, while my friends in education may have a much better idea of what percentage of kids leave high school having been exposed to some probability, my guess is that it is woefully small. And those who do get exposure to it walk out of class perfectly able to bet on a game of craps or a horse race, but with no clue how to apply these ideas to the world they live in.
And so those who progress into healthcare and those who don't have heard the word "probability," but cannot quite understand how it impacts them beyond their chance of winning the lottery. And unfortunately, I have to tell you that, if I relied on what I learned in medical school about probability, well, let's just say it is highly improbable that we would be having this discussion right now. This is why I do now and will for the foreseeable future harp on all of these probabilities, so that when you are faced with your own medical decisions, you will at least know the right questions to ask.
I know I need to wrap this up -- I saw that yawn! Here is the bottom line. First, we need to acknowledge the colossal uncertainties in medicine. Once we have done so, we need to understand that such uncertainties require a probabilistic approach in order to optimize care. Finally, such a probabilistic approach has to be taught early and often. All of us, clinicians and patients alike, are responsible for creating this monster that we call healthcare in the 21st century. We will not train it to behave by adding more parts. The only way to train it is to train our brains to be much more critical and to engage in a conversation about probabilities. Without this shift, a constructive change in how medicine is done in this country is, well, improbable.
Wednesday, December 15, 2010
Why medical testing is never a simple decision
A couple of days ago, Archives of Internal Medicine published a case report online. Now, it is rather unusual for a high-impact journal to publish even a case series, let alone a case report. Yet this was done in the vein of highlighting their theme of "less is more" in medicine. This motif was announced by Rita Redberg many months ago, when she solicited papers to shed light on the potential harms that we perpetrate in healthcare with errors of commission.
The case in question is one of a middle-aged woman presenting to the emergency room with vague symptoms of chest pain. Although from reading the paper it becomes clear that the pain is highly unlikely to represent heart disease, the doctors caring for the patient elect to do a non-invasive CT angiography test, just to "reassure" the patient, as the authors put it. Well, lo and behold, the test comes back positive, the woman goes for an invasive cardiac catheterization, where, though no disease is found, she suffers a very rare but devastating tear of one of the arteries in her heart. As you can imagine, she gets very ill, requires bypass surgery and ultimately an urgent heart transplant. Yup, from healthy to a heart transplant patient in just a few weeks. Nice, huh?
The case illustrates the pitfalls of getting a seemingly innocuous test for what appears to be a humanistic reason -- patient reassurance. Yet, look at the tsunami of harm that followed this one decision. But what is done is done. The big question is, can cases like this be prevented in the future? And if so, how? I will submit to you that Bayesian approaches to testing can and should reduce such complications. Here is how.
First, what is Bayesian thinking? Bayesian thinking, formalized mathematically through Bayes' theorem, refers to taking the probability that disease is present into account when interpreting subsequent test results. What does this mean? Well, let us take the much embattled example of mammography and put some numbers to the probabilities. Let us assume that an otherwise healthy woman between 40 and 50 years of age has a 1% chance of developing breast cancer (that is 1 out of every 100 such women, or 100 out of 10,000 undergoing screening). Now, let's say that a screening mammogram is able to pick up 80% of all cancers that are actually there (true positives), meaning that 20% go unnoticed by this technology. So, among the 100 women with actual breast cancer of the 10,000 women screened, 80 will be diagnosed as having cancer, while 20 will be missed. OK so far? Let's go on. Let us also assume that, in a certain fraction of the screenings, mammography will merely imagine that a cancer is present, when in fact there is no cancer. Let us say that this happens about 10% of the time. So, going back to the 10,000 women we are screening, of 9,900 who do NOT have cancer (remember that only 100 can have a true cancer), 10%, or 990 individuals, will still be diagnosed as having cancer. So, tallying up all of the positive mammograms, we are now faced with 1,070 women diagnosed with breast cancer. But of course, of these women only 80 actually have the cancer, so what's the deal? Well, we have arrived at the very important idea of the value of a positive test: this roughly tells us how sure we should be that a positive test actually means that the disease is present. It is a simple ratio of the real positives (true positives, in our case the 80 women with true cancer) to all of the positives obtained with the test (in our case 1,070). This is called the positive predictive value of a test, and in our mammography example for women between the ages of 40 and 50 it turns out to be 7.5%. So, what this means is that over 90% of the positive mammograms in this population will turn out to be false positives.
Now, let us look at the flip side of this equation, or the value of a negative test. Of the 8,930 negative mammograms, only 20 will be false negatives (remember that in our case mammography will only pick up 80 out of 100 true cancers). This means that the other 8,910 negative results are true negatives, making the value of a negative test, or negative predictive value, 8,910/8,930 = 99.8%, or just fantastic! So, if the test is negative, we can be pretty darn sure that there is no cancer. However, if the test is positive, while cancer is present in 80 women, 990 others will undergo unnecessary further testing. And for every subsequent test a similar calculus applies, since all tests are fallible.
Let's do one more maneuver. Let's say that now we have a population of 10,000 women who have a 10% chance of having breast cancer (as is the case with an older population). The sensitivity and specificity of mammography do not change, yet the positive and negative predictive values do. So, among these 10,000 women, 1,000 are expected to have cancer, of which 800 will be picked up on mammography. Among the 9,000 without cancer, a mammogram will "find" a cancer in 900. So, the total positive mammograms add up to 1,700, of which nearly 50% are true positives (800/1,700 = 47.1%). Interestingly, the negative predictive value does not change a whole lot (8,100/[8,100 + 200] = 97.6%, or still quite acceptably high). So, while among younger women at a lower risk for breast cancer a positive mammogram indicates the presence of disease in only 7.5% of the cases, for older women it is about 50% correct.
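For anyone who wants to check this arithmetic, here is a short Python script that reproduces both scenarios. It uses exactly the numbers from the text: 80% sensitivity, a 10% false-positive rate (i.e., 90% specificity), and a cohort of 10,000 women at 1% and then 10% prevalence.

```python
# Reproduce the two mammography scenarios above: sensitivity 80%,
# false-positive rate 10%, cohort of 10,000, at two prevalences.

def screen(n, prevalence, sensitivity, false_pos_rate):
    with_cancer = n * prevalence
    without_cancer = n - with_cancer
    tp = with_cancer * sensitivity        # cancers correctly flagged
    fn = with_cancer - tp                 # cancers missed
    fp = without_cancer * false_pos_rate  # healthy women flagged anyway
    tn = without_cancer - fp
    ppv = tp / (tp + fp)                  # value of a positive test
    npv = tn / (tn + fn)                  # value of a negative test
    return tp, fp, fn, tn, ppv, npv

for prevalence in (0.01, 0.10):
    tp, fp, fn, tn, ppv, npv = screen(10_000, prevalence, 0.80, 0.10)
    print(f"prevalence {prevalence:.0%}: {tp + fp:,.0f} positives "
          f"({tp:,.0f} true), PPV {ppv:.1%}, NPV {npv:.1%}")

# prevalence 1%:  1,070 positives (80 true),  PPV 7.5%,  NPV 99.8%
# prevalence 10%: 1,700 positives (800 true), PPV 47.1%, NPV 97.6%
```

The test never changes; only the population does. That is the whole Bayesian point.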
These two examples illustrate how exquisitely sensitive an interpretation of any test result is to the pre-test probability that a patient has the disease. Applying this to the woman in the case report in the Archives, some back-of-the-napkin calculations based on the numbers in the report suggest that, while a negative CT angiogram would indeed have been reassuring, a positive one would only create confusion, as it, in fact, did.
To be sure, if we had a perfect test, or one that picked up disease 100% of the time when it was present and did not mislabel people without the disease as having it, we would not need to apply this type of Bayesian accounting. However, to the best of my knowledge, no such test exists in today's clinical practice. Therefore, engaging in explicit calculations of what results can be expected in a particular patient from a particular test before ordering such a test can save a lot of headaches, and perhaps even lives. In fact, I do hope that the developers of our new electronic medical environments are giving this serious thought, as these simple algorithms should be built into all decision support systems. Bayes' theorem is an idea whose time has surely come.
Tuesday, October 19, 2010
"The illusion of certainty"
This post was in part inspired by my recent exchange with e-Patient Dave. But really I have been thinking about this for quite some time now, and Dave provided the necessary nudge for me to write about it. Hence, his words, "the illusion of certainty".
As I like to tell my students, in science we are always on the way. We never really reach the end of the road because science is advanced through curiosity and constant questioning. For this reason, even when we think that we know something, think again. This reality is anathema not only to our model of medicine, but to our current model of society as well. What do I mean by that?
Well, let us reflect on our system of education. My son has a science workbook developed by what is considered to be a fairly progressive curriculum group. One of the early sections deals with the Linnaean classification of the animal kingdom (yes, the King Philip Came Over From Germany... well, not exactly sober). The worksheet asks the student to fill in blanks in a paragraph with the appropriate gradation of the classification. And believe it or not, the top of the page sports the full list of these strata. So, what is the message? The message, as I see it, is that this classification is so important and so solid that it requires rote memorization, as it is the only correct answer. Is this really true? Almost every day we hear about organisms requiring reclassification because we learn something new about their features that excludes them from their previous cubby. Bacteria have gained their own domain, which they did not have when I studied them in college. So, what is important here? What is important is why we need these classifications, as well as how they are developed. My son is learning more from our explorations of what animals do not fit neatly into this system than from regurgitating the seven vocabulary words. He is also learning that science is messy, and that the illusion of certainty our educational systems foster is a house of cards. We could extend this argument to the way we test our children's knowledge in school as well, reducing what they learn to only one neatly packaged correct answer, with all of the rest being wrong.
Now, let us wander into other parts of this American life. Take, for instance, the advertising industry. Cleverly, advertisers have devised market segmentation schemes that allow them to target products with very specific characteristics to a very specific population of consumers. (I wish we could be as good at doing this kind of subgroup analysis in clinical sciences!) We are told with certainty that Charmin is better than Scotties, that Jif is more natural than Peter Pan, that buying a Ford is more patriotic than buying a Toyota. And even though we are constantly faced with an absurd volume of choices, we are somehow under the impression that there is only one that is right for us.
My final example is in politics, which these days cannot be separated from religion in the US. Our politicians have realized that reducing every important issue to two ways of addressing it, and from those two ways choosing the one that can get them the most votes, is indeed the way to get elected and re-elected, so that they can have a constant wedding banquet without ever getting bogged down in the hard work of marriage. This falsely reductionist and dichotomous approach has led us to a deeply divided nation, where, although we are more alike than different, it is indeed these differences that are amplified to a deafening hysterical pitch. Yet, we are all secure in our certainty of being right.
I used these examples to illustrate our dependence on certainty in some major aspects of our everyday lives. This paternalism of certainty in education, politics and trusted brands likely extends to the sense of entitlement to certainty in healthcare as well. When I was in practice and on call for my group one Saturday morning, with a line out the door and the ancillary personnel about to go home for the day, I walked into one of the examining rooms to greet a woman in her 40s, well dressed and groomed, and not appearing in the least distressed. Immediately she let me know that she was being seen for a cough that had been going on for a couple of days (no, she was not a smoker and did not have any chronic conditions), and wanted to nip it in the bud with a prescription for an antibiotic. After listening to her thoroughly and auscultating her lungs, I came to the conclusion that her ailment was overwhelmingly more likely to be viral than bacterial, and that an antibiotic was not indicated. I shared my thinking and probability analysis with her, and following my thorough explanation of why she was going to leave my office empty-handed, she responded, "But my doctor always prescribes me an antibiotic!" I looked at her with incredulity, glanced at the growing line outside my door, and then at the clock, and, though I am not proud of it to this day, I pulled out my prescription pad. Her expectation of certainty did not leave any room for probability-based thinking, and her physician, and now I, were colluding in continuing this deception.
Today I look at medicine from a completely different perspective. As a clinical researcher, I am keenly aware of all the uncertainties inherent in what we do. The richness of research is in asking the question; getting the answer, though it gets us published, is but a fleeting pleasure on the way to the next question. Science is the classic example of "there is no there there"; it is the road that is the gift. In the clinician's office, this attitude becomes difficult to hold together with the need for answers and certainty. Yet hold it we must. Yes, we have to give patients the best care possible based on what we know. But we cannot for a moment forget that all knowledge is evolving. This epiphany is an invitation to physicians to actually work with individual patients and their values and wishes to the extent possible. Dogmatic certainty, the alternative, is tantamount to worshipping at the altar of a false god.