Thursday, April 26, 2012

Fast science: No time for uncertainty

Reading Barbara Ehrenreich's "Bright-Sided" has been liberating in that it has given me permission to let my pessimistic nature out of the closet. Well, it's not exactly that I am pessimistic, but certainly I am not given over to brightness and cheer all the time. My poison is worry. Yes, I am a worrier, in case you had not noticed. So, imagine how satisfying it is for me to find new things to worry about. As if climate change were not enough, lately I have started to worry about science.

No, my anxiety about how we do clinical science overall is not new; this blog is overrun with it. However, the new branch of that anxiety relates to something I have termed "fast science." Like fast food, it fills us up, but the calories are at best empty and at worst detrimental. What I mean is that science is a process more than it is a result, and this process cannot and should not be microwaved. Don't believe me? Let me give you a couple of instances where slow science may be the answer to our woes.

1. Lies and damned lies
Remember this story in the Atlantic that rattled us with its incendiary message? Researcher John Ioannidis has been making headlines with his assertion that most, if not all, of what we know in medicine is in doubt, given how we do and publish research. And how we do and publish research has everything to do with the speed of "progress." Academic careers are made with positive results, the media demand positive results to sell news, and to respond to this demand academic journals prefer to publish only positive results (this last phenomenon is referred to as "publication bias," and is something Ben Goldacre rails against at length). A further manifestation of this fast science is that "no replicators need apply." I am, of course, referring to an extension of the publication bias, whereby journals are not interested in publishing even a positive study that replicates a previous finding -- this is simply not sexy. Thus, results have to be quick and positive to grab a share of our attention and sell academic prestige, journals and news.

2. Science output to drive business profits
In his book Supercapitalism, Robert Reich describes the growing demand by investors over the last several decades to squeeze ever-growing profits. It is clear that this chase after short-term profits has resulted in job loss in the US through outsourcing, the widening of the economic gap, and even the crash of the world economy following the collapse of the mortgage-backed securities house of cards. Much of the profit can be counted on to come through scientific innovations which may or may not improve our quality of life.
In medicine, where scientific progress is applied to our fragile bodies, being reasonably sure of our findings seems pretty important. Yet speed is once again the order of the day. I will grant you that speed is of importance in such diseases as advanced cancer, for example, where we may and should accept a level of uncertainty that we would ordinarily run away from in other circumstances. But doesn't it make sense to be much more cautious before broadly accepting an intervention applied before one gets sick, one that is meant to diagnose either early disease or a precursor to one? Should we not demand slower science before we allow anyone to medicalize such normal events in life as menopause and aging? Should this caution not also apply to screening for diseases that may or may not impact us in the long term, when the chase could hurt us substantially in the immediate future?
But this is not the way to stimulate the economy or to make a profit. The half-life of a medical device, for example, is less than 1 year. After that a new "improved" version of the device is expected, whether it does or does not improve outcomes. For decades we were told to get screening mammography after the age of 40, only to find out now that the risks of this may well outweigh its benefits for many. The American Lung Association has just endorsed CT screening for lung cancer among current or former heavy smokers, yet the jury on its risk-benefit-uncertainty equation should still be in the thick of deliberations.

3. Science denialism
We hear a lot about how people are turning away from science. The state of Tennessee is about to descend back into the dark ages when superstitions instead of scientific theories dominated the classroom. A strong and largely anti-scientific lobby wants to bury any mention of human-driven climate change; fortunately, it looks like they are not succeeding. The anti-vaccination groups are getting more instead of less vocal following repeated debunking of any link between vaccination and autism. Science denialism is so rampant that there was even a need for a conference on how to address it. What gives?
While blaming everything on fast science alone may be reductionist, fast science in the setting of our growing societal innumeracy is a recipe for disaster, as we are seeing unfold. Our schools have failed spectacularly in their duty to educate kids about the process of science, while at the same time arming them with the "single-right-answer-to-every-question" attitude toward knowledge. This pernicious combination, along with the publication and reporting of sexy science at the expense of the more thorough analytic and introspective approach, seals the impression not that the roller coaster of scientific knowledge is the very essence of how science should be done, but that science (and scientists) have failed.
Is slow science the answer to this fiasco? Only in part, I am afraid. Without altering fundamentally how we teach science at all levels, it would not be the cure, even if it were possible to execute. No, I am afraid that without teaching what science is, it is not even possible to get it to slow down.

Let me reiterate: the pace of scientific discovery is slow. This does not mean that we need to hide every step of it from view until we get the results that we deem worthy of sharing. On the contrary, I agree with those who think that sharing at interim steps can only improve what we do. Yet innumeracy, fame and fortune are forces that put such free sharing in peril by misrepresenting it as the final answer to everything. And when the answer changes, which is not only expected but indeed desired in scientific pursuits, public opinion punishes science.

Let me end with a quote I read on one of my favorite web sites, Brain Pickings, in a review of the book boldly entitled Ignorance: How It Drives Science:
Are we too enthralled with the answers these days? Are we afraid of questions, especially those that linger too long? We seem to have come to a phase in civilization marked by a voracious appetite for knowledge, in which the growth of information is exponential and, perhaps more important, its availability easier and faster than ever.
[...]There are a lot of facts to be known in order to be a professional anything — lawyer, doctor, engineer, accountant, teacher. But with science there is one important difference. The facts serve mainly to access the ignorance… Scientists don’t concentrate on what they know, which is considerable but minuscule, but rather on what they don’t know…. Science traffics in ignorance, cultivates it, and is driven by it. Mucking about in the unknown is an adventure; doing it for a living is something most scientists consider a privilege. 
So, let's celebrate uncertainty. Let's take time to question, answer and question again. Slow down, take a deep breath, cook a slow meal and think.

If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!

Tuesday, April 24, 2012

How to avoid the "Titanic effect" in Pharma

Today I was going to tell you the tale of my son's broken wrist (he is fine now, this happened in January, but the insurance issues are fascinating), but I got distracted thinking about another fascinating subject that many do not understand well: confounding by indication. I especially started thinking about it in the context of how decisions and policies are made, and how not having the right data at the right time leads to this "Titanic effect" for a technology. What do I mean by this? Well, let me explain.

Some say the Titanic sank simply because of poor preparation -- not enough lifeboats, not enough training on the evacuation procedure, in other words "not enough imagination" to plan for a catastrophe. It was derailed in its course by an entirely predictable natural calamity that had not been planned for adequately, even though the risk was obvious in retrospect. Was this just one of those "unintended consequences" that could have been avoided with clearer vision? Perhaps, but the Titanic is, ahem, water under the bridge. We can, however, focus on some more mundane and current potential missteps and make some guesses.

Let's talk about medical technologies, and drugs in particular. Let us say that there is a new sepsis drug that has been tested among patients with sepsis but without organ failure. This drug appears to prevent organ failure in a fraction of the treated patients, and also reduces mortality by 6%. The only obstacle to widespread use of this drug is its acquisition cost, which is much higher than what the hospital's critical care pharmacist is used to paying for other drugs. Because of this high cost, the drug, despite being on the formulary, gets administered only to those patients who have developed not one, but two organ failures. The savvy pharmacist looks at the outcomes of these patients and, after comparing them to those of the patients who did not receive the drug, concludes that the new sepsis drug, instead of saving lives, actually kills. The P&T committee discusses this, dumps the drug from the formulary and other hospitals follow suit. What's wrong with this picture?

Several fallacies are at work here, including an overly broad inference of causality and bias. But the most important lesson has to do with confounding: because of its apparent expense, the drug has been niched into a population of patients who a). were not the ones that exhibited the evidence of benefit in the trials, and b). have a very high risk of mortality at baseline. So, not only is it not valid to conclude that the drug killed these patients, but it is not even valid to say that the drug does not work -- it may well work in the populations in which it was shown to work, but not in this much more ill population. You see the difference? It is like saying that your umbrella failed to keep you dry when you opened it only after you had already gotten soaked.
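This effect is easy to demonstrate with a little arithmetic. Here is a minimal simulation of the pharmacist's mistake, with every number made up for illustration (the baseline risks, the 30% severe fraction, and the 6% absolute risk reduction echo the hypothetical sepsis drug above, not any real trial): the drug lowers mortality in everyone who gets it, yet because it is reserved for the sickest patients, the crude comparison makes it look like a killer.

```python
import random

random.seed(0)

# Hypothetical baseline mortality risks by severity (illustrative only).
BASELINE_MORTALITY = {"mild": 0.10, "severe": 0.60}
DRUG_RISK_REDUCTION = 0.06  # absolute mortality reduction, as in the example

def simulate(n=100_000):
    treated_deaths = treated = untreated_deaths = untreated = 0
    for _ in range(n):
        severity = "severe" if random.random() < 0.3 else "mild"
        got_drug = severity == "severe"  # the drug is niched to the sickest
        risk = BASELINE_MORTALITY[severity]
        if got_drug:
            risk -= DRUG_RISK_REDUCTION  # the drug genuinely helps
        died = random.random() < risk
        if got_drug:
            treated += 1
            treated_deaths += died
        else:
            untreated += 1
            untreated_deaths += died
    return treated_deaths / treated, untreated_deaths / untreated

treated_rate, untreated_rate = simulate()
# The drug lowers risk within every severity stratum, yet the naive
# crude comparison shows far higher mortality among treated patients.
print(f"treated: {treated_rate:.2f}, untreated: {untreated_rate:.2f}")
```

The naive comparison confuses "who got the drug" with "how sick they were to begin with" -- the same fallacy the savvy pharmacist commits.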

So confounding by indication is one reason that drugs "fail" -- they are given to people who are by definition not going to do well, and confirmation bias pushes us to say, "See, it's expensive and it doesn't work." So how do we overcome this phenomenon and make sure that appropriate patients get access to useful technologies? I believe I have a very simple answer: don't squeeze the toothpaste out of the tube if you don't want to have to cram it back in. Huh?

In other words, do what I always advocate: be ready with the relevant data before the train leaves the station, before the cat gets out of the bag, before the horse gets out of the barn. It is very well known that cognitive biases, once established, are difficult to overcome. The pharmacist's first concern is to use his very limited resources efficiently, and to guard against spending his monthly budget on a potentially useless intervention in a single patient only to be left with no resources to care for all of the other patients. Yet many manufacturers at launch send their reps to the pharmacist with two virtually unrelated stories: one about efficacy and the other about the acquisition price and its impact on his budget. When the drug is expensive, the efficacy pales in comparison to the price tag, and the pharmacist has no choice but to restrict the use of the drug, thereby consigning it to failure by confounding by indication. Sound familiar?

Is there a way to avoid this scenario? I think so. It is self-evident that you have to have good data. The surprising thing is that good data are necessary, but not sufficient: the timing of these data is critical as well. It is easier to help people form an opinion where none exists than to change one that is already there. So, to be successful, the manufacturer with a good technology must have a coherent effectiveness and cost-effectiveness proposition right out of the gate. Not only that, but it is imperative to help the clinician understand which patients might benefit from the technology (no, not all patients should be on your drug). This is the kind of collaboration that will ultimately benefit all stakeholders: 1). Appropriate patients will get the opportunity for better outcomes, 2). The pharmacist will understand up front the value proposition and the potential scope of use, and 3). The manufacturer will profit from providing a beneficial service. Isn't this the intent of all this drug development?

If all this seems all too obvious, it is because this is not rocket science. But why, then, do I see so many companies get into trouble with this very scenario? Is it just a case of "best laid plans," or is it a real blind spot that needs to be illuminated? You tell me. Given the investment that goes into drug development, I think it makes sense to approach this gap earnestly, instead of just shuffling the deck chairs on the Titanic.


Do complex problems always require complex solutions?

A mind-blowing talk by Gerd Gigerenzer on making decisions under conditions of uncertainty -- this is for you, heuristics mavens! (Note an interesting nuance about medical decisions at about 16:15).

A big h/t to @Medskep


Friday, April 20, 2012

To solve complex problems, tinker!

A fabulous TED talk by the economist and writer Tim Harford (@TimHarford) about the virtue of making good mistakes (aka tinkering, which I have written about in the past). This is why science, at least in theory, works so well -- trial and error move our knowledge forward. Also a great reminder of why our educational system is failing.

A big h/t to Kent Anderson of The Scholarly Kitchen for posting this on their web site.


Monday, April 16, 2012

Can probability solve the healthcare crisis?

Here is the video of my Ignite Boston 9 talk from March 29.


Saturday, April 14, 2012

How I fell in love at TEDMED

Over the exhilarating four days this past week, we all fell in love a little bit -- with the city, the Center, the meeting, the ideas, and one another. The city was Washington, DC, a touch past its cherry-blossom blush; the meeting was, of course, TEDMED. The ideas were about honoring our health, environment, and food, and about making health and healthcare efficient and kind for all.

I fell in love with dreamers. Though their dreams were varied, their paths to fulfilling them all converged into the same stream. Like the trip down the Amazon that the biggest dreamer of all, Jay Walker, the curator and force behind the meeting, used as a metaphor for TEDMED 2012, they accepted their tortuous and demanding journeys and, much to our delight and benefit, made a stop at the Kennedy Center. And although I will only mention a few, many others will stay with and inspire me for the months to come until TEDMED 2013.

I fell in love with Bryan Stevenson, who spoke about his grandmother and identity and justice.

I fell in love with Rebecca Onie, who, while transforming the care of the urban poor, is also transforming the face of student activism.

I fell in love with Traces, a Montreal performance group who made my heart stop with their daring acts of precision. Our healthcare system can learn a lot from these young people.

I fell in love with Jacob Scott and Sandeep Kishore, both of them young, energetic and passionately committed to changing the face of medical education.

I fell in love with Ed Gavagan, who told the story of his confrontation with death with courage, humor and honesty.

And yes, I fell in love with and was made to weep by Robert Gupta's transcendent violin and Stephen Petronio's defiant vulnerability.

TEDMED 2012 was a feast, and now I am back to the journey of my real life: calls to make, e-mails to return, analyses to do, papers to write, talks to give, a book to get to market. It all seems just a little drab compared to the four days I spent in this intellectual and emotional climax. But like a great yoga session, TEDMED was restorative, rejuvenating, and remarkably inspirational. The mix of hard core science, the arts, history and frank curiosity sparked personal ideas and renewed personal commitments to executing my dreams for a better society. Spurred by Sekou Andrews' and Steve Connell's raw poetry performance, like a youngster in love for the first time, I am ready to GO! So I am off to do what E.O. Wilson suggested a scientist needs to do: think like a poet and work like a bookkeeper.              


Monday, April 9, 2012

Five ways to tame the risk-benefit-uncertainty troika

There was a story on NPR this morning that sent me in a radical direction. It discussed the increase in use of brachytherapy for localized breast cancer. The idea is that this is a concentrated dose delivered much more locally and rapidly (over 5 days) than the conventional external beam radiation (over 6 weeks). The issue is that it has not yet been tested rigorously in a randomized controlled trial, and some oncologists are concerned about how its outcomes compare to the conventional approach. One of the concerns stems from an increase in the rates of subsequent mastectomies, which are double with brachytherapy relative to the conventional approach. At the same time, there are clear benefits, not the least of which is avoiding the prolonged period of exposure and the hassle associated with daily trips for radiation for 6 weeks.

Several points popped up in my head in response to the story:
1. When discussing this predominantly women's disease, the expert voices heard from were mostly male (5 of the 6 doctors quoted). Does this matter? Not sure.
2. The priorities addressed by the experts were the traditional outcomes -- survival, recurrence, metastasis. The priorities described by the patient were about her time and convenience today.
3. The concern about the procedure stems from the observed doubling of the need for mastectomy within 5 years among patients treated with brachytherapy, an event "rare no matter what kind of radiation women got."

So what's my point? Do I think that more rigorous testing is not indicated? Not at all; we need a more rigorous evaluation of the technique. No, what I am wondering is at what point should a procedure like this (or any intervention, for that matter) become available to patients as an option to be considered? Should its availability be determined in a dark room by bespectacled men around a conference table, or should it be put on the menu of choices, along with its risks, benefits and uncertainties, as soon as it looks safe enough, whatever that looks like?

The larger question this raises is what is the degree of uncertainty that we are willing to accept around interventions that become available, be it a drug or a procedure or a device? How do we incorporate patients' priorities for outcomes that are important to them into these decisions? Remember ACT-UP and how they moved the FDA to make more rapid decisions about treatments for HIV/AIDS? Have we swung too far in the opposite direction today, whereby we want a virtual guarantee of safety before a technology is approved?

I offer these 5 potential questions to help with making these decisions:
1. How severe/deadly is the disease in question?
2. What is (are) the known potential benefit(s) of the intervention?
3. What is (are) the known potential risk(s) of the intervention?
4. What uncertainties bracket this risk-benefit equation?
5. How does the patient feel about the extent of this risk-benefit-uncertainty balance in the context of her condition?

I think that the first four are the questions that the FDA struggles with every day. They are the gatekeepers for the availability of new technologies, and, therefore, for the relevance of the fifth question. What I am wondering is whether it is not better to start bringing a lot more of the public perspective to the discussion much earlier, so that the patient can have the option of evaluating more choices sooner. I know I may be treading on thin ice here, but I am ignoring any market forces or special interests for the moment. The question I am asking is "In the best of all possible worlds, where no one is trying to sell you anything, when is the best time to give the patient an opportunity to accept or reject an intervention, given the risk-benefit-uncertainty profile?"

Bottom line: There are no guarantees. Just because something is available on the market does not mean that it is completely safe or completely effective. Most importantly, it does not mean that we come even close to being certain about these attributes. As a corollary, just because there are uncertainties about the risks and benefits of an intervention, does that mean it should not be available as an option for a patient? My guess is that there is a balance of this troika of properties that may be optimal on average, but I am also guessing that this average balance will miss a substantial volume of outliers. Just as some people thrive on the thrill of bungee jumping while others freeze at the mere thought of it, so some patients may surprise us with their position on this risk-benefit-uncertainty continuum.

I apologize if my argument is not clear -- I am definitely thinking about this stuff actively. The one thing I am absolutely sure of is this: Unless the public and clinicians are educated about how to have these conversations, we will always have to rely on and, consequently, blame someone else for making decisions for us.


Friday, April 6, 2012

Infographics: devil in details

I got an e-mail yesterday asking me if I would be interested in displaying an infographic on my blog. After a few back-and-forths, I decided to do it. After all, a picture is worth a thousand words. But as with everything, buyer beware: the devil is in the details.

A couple of caveats:
1. I cannot back up all of the numbers myself, and the references at the bottom seem to represent single source data, rather than the totality of the evidence. So take with a grain of salt.
2. There are a couple of notable errors in the graphics:
a. The graphic on other first-world nations' spending says that the US spends 2x what they do in Japan. What the graph shows is that this is the case as a proportion of GDP (and the x-axis is not labeled as such). So, the statement is not entirely accurate.
b. In the section on hospitals overcharging, the caption states that hospitals charge 200% more for meds compared to ex-US prices. The number actually adds up to 100% more, which is 2x.
3. In general, there are actually some valid reasons why we pay more for stuff in the US. I do not, for example, know if the international data are adjusted for various economic factors, such as purchasing power.
4. The statement that doctors are overpaid is a laugh -- take it from someone who has been in the trenches. I worked dawn till dusk and beyond, nearly every day of every week. I was making barely enough to keep food on the table and a roof over our heads. No lavish vacations, no BMWs, etc. It is not the MDs that are overpaid. Go to the C-suites and corporations, and then you are getting warmer.

Despite the disagreements that I have with the data, I thought it might be a good point of discussion. Would love to hear your thoughts about it.

Created by: Medical Billing and Coding Certification

Thursday, April 5, 2012

Happy to be in the "top three"

So, I must be in the "top three" then :)


Wednesday, April 4, 2012

How to make safer decisions in medicine

I love it when an article I read first thing in the morning gets me to think about itself all through my morning chores and then erupts into a blog post. So it was with this little gem in the statistics publication "Significance." The author suggests making gambling safer by placing realistic odds estimates right on the poker machines in casinos. He even goes through the generation of the odds of winning and losing, and by how much, based on really transparent assumptions. In fact, what he has in effect constructed is a cost-benefit model for the decision to engage in the game of poker on these machines. Seems pretty simple, right? Just a few assumptions about how long the person will play, some objective inputs about the probabilities, and PRESTO, you have a transparent and realistic model of what is probable.
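The arithmetic behind such a label really is that simple. Here is a minimal sketch of the kind of transparent expected-value calculation the Significance article describes; the house edge, wager size, and pace of play below are all hypothetical numbers of my own, not figures from the article.

```python
# All inputs are assumed for illustration; a real machine's label would
# use that machine's actual payout table and observed pace of play.
HOUSE_EDGE = 0.10      # assumed: machine keeps 10% of each wager on average
WAGER_PER_GAME = 0.50  # assumed wager, in dollars per play
GAMES_PER_HOUR = 600   # assumed pace of play

# Expected loss is just edge x wager x number of plays.
expected_hourly_loss = HOUSE_EDGE * WAGER_PER_GAME * GAMES_PER_HOUR
print(f"Expected loss per hour of play: ${expected_hourly_loss:.2f}")
```

With these assumptions, the label would read "expect to lose $30 per hour of play" -- which is exactly the sort of plainly stated probability the author wants posted on the machine.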

In medicine, there is a discipline known as Medical Decision Making, and what it does is exactly what you see in the "Significance" article: its practitioners construct risk- (and, hence, cost-) benefit models for decisions that we make in medicine. To be sure, these turn out to be rather more complex, since the inputs for them have to come from a large and complete sampling of the clinical literature addressing the risks and the benefits. But that's the meat; the skeleton upon which this meat hangs is a simple decision tree with "if this, then that" arguments. In this way these models synthesize everything that we know about a specific course of action and put it together into a number driven by probability.

They usually go something like this. We have a group of women between 40 and 49 years of age with no apparent risk factors for breast cancer. What is the risk-benefit balance for mammography screening in this specific age stratum? One way to approach this is to take a hypothetical cohort of 1,000 women who fit this description and put it through a decision tree. The first decision node here is whether to perform a screening or not. What follow are branches stretching out toward particular outcomes. Obviously, some of these outcomes will be desirable (e.g., saving lives), while some will be undesirable, ranging from worry about false positive results to unnecessary surgery, chemotherapy, radiation, and even death. Because these outcomes are so heterogeneous, we try to convert everything to monetary costs per quality-adjusted life year (quality because there are outcomes worse than death, as it turns out). But what underlies all of these models is mathematics derived from clinical studies, not pulled out of thin air. This is the most useful synthesis of the best evidence available.
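To make the skeleton concrete, here is a toy version of such a decision tree for the hypothetical 1,000-woman cohort. Every probability in it is invented purely for illustration (the cancer rate, false-positive rate, and mortality figures are my placeholders, not inputs from the clinical literature, which is where a real MDM model would get them):

```python
# Toy decision tree: run a hypothetical cohort of 1,000 women through
# "screen" vs. "no screen". All probabilities are illustrative only.
COHORT = 1000

def expected_outcomes(screen: bool) -> dict:
    p_cancer = 0.004                                  # assumed disease rate
    p_false_positive = 0.10 if screen else 0.0        # assumed
    p_death_given_cancer = 0.15 if screen else 0.25   # assumed benefit
    return {
        "cancers": COHORT * p_cancer,
        "cancer_deaths": COHORT * p_cancer * p_death_given_cancer,
        "false_positives": COHORT * p_false_positive,
    }

screened = expected_outcomes(screen=True)
unscreened = expected_outcomes(screen=False)
deaths_averted = unscreened["cancer_deaths"] - screened["cancer_deaths"]
print(f"Expected deaths averted per 1,000 women: {deaths_averted:.1f}")
print(f"Expected false positives per 1,000 women: {screened['false_positives']:.0f}")
```

Even this crude sketch makes the trade-off visible: under these made-up inputs, a fraction of a death is averted at the price of a hundred false-positive scares per 1,000 women. A real model would attach costs and quality weights to each branch before summing.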

To be sure, MDM models are rather more complicated than the poker example. They require a little more undivided attention to follow and understand. Furthermore, I personally did not get a whole lot of exposure to them in my training, but perhaps that has changed. Like anything to do with probability, these models tend to be off-putting in a society that has consigned itself to widespread innumeracy. And doctors are certainly not immune from misunderstanding probability. Yet without these models perceptions rule, and our healthcare becomes a reckless gamble. In our ignorance we collude to build profits that come with medicalizing small deviations from perceived normality. Sadly, the primary interests that drive these profits are not usually doing so with probabilistic forethought either, but rather on the basis of red-hot conviction that they are right.

Doctors and e-patients need to lead a radical transformation in how we handle decisions in healthcare. It is very clear that willful ignorance has not served us well, and we are all too easily led into panic about every pimple. Resilience can only come when we question our assumptions. Alas, our intuitive brain is almost certain to mislead us when faced with complex information; why else would we need explicit odds listed on poker machines? The absurd complexity of information in medicine deserves no less. It's time to start the probability revolution!
