Friday, February 26, 2010

Would you spend your entire paycheck on healthcare?

It is an absurd question, right? Well, not really. Bear with me for a bit.

As I half-listened to the White House healthcare summit yesterday, I was feeling the familiar sensation of nausea rising in response to the usual excuses and talking points from both sides. Talking at each other, parroting old memorized lines about the process rather than the substance, the illustrious group came away with no consensus. Nevertheless, this lack of results seems to be pushing the Democrats to plow through the opposition and unilaterally pass the legislation through the process of reconciliation. But will it be done well? This has always been the question.

I think this WSJ article from today calls our attention explicitly to something that has been hampering any sensible approach to reform, the selfish blame game:
Insurers contend that they must pass on ever-higher bills from hospitals and doctors. Hospitals say they are struggling with more uninsured patients, demands by doctors for top salaries, and underpayments from Medicare and Medicaid.
And doctors say they are strong-armed by insurance monopolies and hampered by medical malpractice costs.
This is the ultimate case of missing the forest for the trees. Let's think about this in slightly different terms. Suppose you notice that your food bill is growing faster than your income, your family, and the national inflation rate. Do you not stop and ask what is going on? And if the answer is that, without any other changes, your family is now consuming 20 pounds of potatoes per week, while 6 months ago you made do with 4 pounds, do you not stop and ask why? And if on top of that observation you have also noticed that your family is becoming obese, do you not stop and rethink what is going on? And if, as you stop and think, you notice that your obese family members are in fact making even more frequent trips to the kitchen to feed their increased hunger, do you not want to break the cycle?

Well, we are "eating" a lot more healthcare than we did 10 years ago, and in the next 10 years we are likely to be "eating" even more. If healthcare costs continue to grow at the current pace, we will be spending 100% of our income on healthcare before we know it; and on healthcare that continues to fall short of meeting our needs, to boot. The levels of chronic disease in our society have never been higher, and the number of diseases that "require" treatment has never been greater (think mild depression, erectile dysfunction, lactose intolerance, and many other marginal human woes, most of them best addressed by introspection and lifestyle modification). Reflecting this growth, our preoccupation with health and disease seems to be edging out other, far healthier pursuits. And the rhetoric of the "best healthcare system in the world", emanating from the Republican opposition yesterday, is not only disingenuous, it is just stupid, especially coming from someone, in fact many someones, who should know better.
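To see why "100% of our income" is not pure hyperbole, here is a minimal compounding sketch. The starting share and growth rates below are my own illustrative assumptions, not official projections; the point is what compounding does whenever costs outgrow income:

```python
# Illustrative only: the starting share and growth rates are assumptions,
# not official figures. The point is the compounding dynamic.
share = 0.16           # assumed share of income spent on healthcare today
hc_growth = 0.08       # assumed annual growth of healthcare costs
income_growth = 0.03   # assumed annual growth of income

years = 0
while share < 1.0:
    # Each year the share grows by the ratio of the two growth rates
    share *= (1 + hc_growth) / (1 + income_growth)
    years += 1

print(f"At these rates, healthcare swallows the whole paycheck in ~{years} years")
# -> about 39 years under these assumptions
```

Change the assumptions as you like; as long as healthcare costs grow faster than incomes, the share reaches 100% eventually. The only question is when.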

So, let's look at the big picture, folks. Let's walk away from the divisiveness of the blame game and get back to basics. We are not the best; we are the least accessible, and we are the most expensive. In the world! Yes, we are special, we are Americans. But it is time to exercise that specialness by admitting the abject failure of this market experiment, getting off our high horse of individuality, and looking to other nations and systems for a sensible solution. I for one am not willing to spend all of my time or money obsessing about my health. How about you?
 

Thursday, February 25, 2010

Does lactose intolerance really need an NIH panel?

I will go out on a limb: I think that lactose intolerance should not be medicalized.

I stumbled upon this story on WebMD discussing the NIH panel on lactose intolerance. First of all, I was shocked that my tax dollars are even spent on an NIH panel on lactose intolerance. Reading the rest of the story provided ample opportunity for further shock.

For example, did you know that we have no idea what the prevalence of this scourge is? So, clearly, we need large representative studies to establish it. OK, gathering evidence is never a bad idea. But in an odd juxtaposition to the call for evidence was this statement:
"The numbers may be elusive, but outcomes of a dairy-poor diet are easy to predict."
Really? Is this statement evidence-based, or is it setting up the argument that some associations are just too obvious to need evidence behind them? Because if it is the latter, I for one do not appreciate the double standard. In fact, the statement, though ostensibly not a direct quote from an "expert", seems rather irresponsible to me, implying that we should feel free to apply opinion-based and consensus-based principles to this question.

My final outrage came when I read that (I paraphrase) for a bona fide diagnosis one should really undergo a breath test, and that other (by implication) more serious conditions, such as irritable bowel syndrome and celiac disease, need to be ruled out.

Why, you might wonder, does this engender such a visceral reaction from me? Surely it is not because I do not feel compassion for people who suffer from these conditions. And it is not because I do not want to learn more about them. What worries me is that having a diagnosis demands a treatment, usually a drug directed at the symptom. I am very concerned that, instead of understanding and dealing with the underlying causes of, say, lactose intolerance symptoms, we will slap on the band-aid of a pill, a course much more expedient, though potentially far more detrimental, than looking for a preventive solution. In the case of lactose intolerance, the non-pharmacologic solution may have something to do with the way our milk is produced and processed: our terror of things microbial has driven us literally to sterilize milk prior to consumption. Some people feel that this "deadness", the absence of organisms whose own lactase production might help us digest milk, exacerbates the symptoms of the intolerance.

But this answer would be neither simple nor politically palatable. Who would support this type of research? Milk manufacturers, who would have to overhaul their operations completely? The government whose regulations drive our milk production? The small community of committed farmers who produce raw milk, but do not have corporate muscle behind them? Not likely. And what about public opinion, so durably skewed by the establishment to fear all microorganisms?

So, when an NIH panel begins looking at an issue like this, I naturally worry. And while I do want to know more about it, I am skeptical of the end result. Are we headed toward a 100% prevalence of chronic disease requiring 100% penetration of prescription drugs in the US?

Tuesday, February 16, 2010

Buddhism and antibiotic resistance

There is a concept called "samatha" in Buddhist meditation. It has to do with sitting quietly, doing nothing. The opposite of mindless action, samatha is the cornerstone of the mindfulness practice. But what does it have to do with antibiotic resistance?

Well, I came across this interesting slide presentation by Dick Zoutman from Canada. Starting at the top of page 5, the talk goes into several fascinating surveys about what influences antibiotic prescribing for upper respiratory tract infections (URTIs). The first survey, of 316 family MDs in Ontario, identified, among other factors, the "physician's desire to act" as a risk factor for prescribing an antibiotic. The next survey, of 313 patients, identified the patient's expectation to receive an antibiotic as the most important driver of prescribing behavior. The latter can be interpreted as (a) the patient's preference for some kind of action or (b) the patient's expectation of action on the part of the MD. Either way, "action" is the operative word.

So, what does this mean? Well, at the simplest, most immediate level, this finding confirms that education of both physicians and patients is a potentially fruitful target for antibiotic stewardship programs. But at a deeper level, perhaps something as fundamental as a re-evaluation of our approach to life is what is needed. Antibiotic overprescribing is a clear example where the philosophy that doing something is better than doing nothing is not just wrong, but threatens to send us back to the dark age of the pre-antibiotic era.

Western medicine in general promotes rapid decision-making as its paradigm. In fact, when I was in practice, there used to be tremendous political capital in the bravado of rapid assessment and planning. But let's not kid ourselves: the majority of treatment decisions made in the outpatient setting do not need to be rushed in the way our expectations have driven them to be. So, let's take the sage advice to "don't just do something, sit there". It is time for some samatha in our decision-making as both physicians and patients, lest we continue knee-jerking our way into this escalating resistance catastrophe.

 

Thursday, February 11, 2010

Evidence-based... inquisition?

Remember the trial from a couple of years ago showing that participation in a support group was associated with prolonged survival among women with metastatic breast cancer? I've thought a lot about that over the years. Isn't it interesting that something as simple as a supportive environment can make a difference in what researchers consider the hardest endpoint there is: survival? In our dualistic view of the human organism, we think of support as acting in the realm of the psyche, not the physical. And yet here is evidence of a psychological exposure somehow making a tangible physiologic difference.

Now, how do we do evidence-based medicine? Well, we look for clinical studies that tell us whether and how well a treatment works for a particular condition. For the rabid evidenistas among us, the most valid design to provide such evidence is a randomized controlled trial, since it has the most internal validity (i.e., we are in fact likely to be studying what we think we are studying). When we pat ourselves on the back for a randomization well done, we cite the balance in the fairly obvious demographic and clinical characteristics of the two (or more) comparator groups, namely age, gender, comorbidities, the burden of acute illness, and the like. We rarely bother with their social or psychological milieu; in fact, edging up to evaluating that may be viewed by some as engaging in quackery. True, these exposures are ephemeral and somewhat abstract, but look at the breast cancer study... Just because they are difficult to study, and we do not currently have validated tools for them, does not mean that we can ignore, or worse yet disparage, their potential influence. Isn't there a saying to the effect that we cannot discover that which we do not yet have the tools to understand?

And speaking of inadequate tools, a related sticky wicket comes to mind: heterogeneity. I say it is related because, as I mentioned above, we do not even dare look at the underlying non-physiologic heterogeneity. What may surprise the uninitiated more, however, is the fact that we do not have good tools to identify physiologic heterogeneity either. And as most appreciate, heterogeneity demands large numbers of subjects to detect an effect. In fact, our research enterprise is set up to do mammoth studies for what is often a minuscule difference (think cardiology trials requiring 20,000 patients to demonstrate a fall in mortality from 0.5% to 0.25%). It is very likely that by using this sledgehammer method to craft the fine jewel of evidence we are missing huge chunks of useful information.
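To make the sledgehammer concrete, here is a minimal sketch of the standard sample-size formula for comparing two proportions, plugging in the hypothetical mortality figures above. The significance level and power are the usual conventions (5% two-sided, 80% power), my assumptions rather than the parameters of any actual trial:

```python
# Normal-approximation sample size for comparing two proportions.
# The mortality rates mirror the hypothetical in the text; alpha and
# power are conventional assumptions, not from any specific trial.
from scipy.stats import norm

p1, p2 = 0.005, 0.0025       # control vs. treatment mortality
alpha, power = 0.05, 0.80    # two-sided significance level and power

z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96
z_beta = norm.ppf(power)            # ~0.84

n_per_arm = ((z_alpha + z_beta) ** 2
             * (p1 * (1 - p1) + p2 * (1 - p2))
             / (p1 - p2) ** 2)

print(f"~{n_per_arm:,.0f} per arm, ~{2 * n_per_arm:,.0f} patients total")
# -> roughly 9,400 per arm, about 19,000 patients overall
```

Halving an already tiny risk takes on the order of 19,000 patients; detecting a subtler effect within a heterogeneous subgroup takes far more, which is exactly the sledgehammer problem.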

And if this is the case for our Western paradigm of medical treatment, how does it play out in our study of Eastern and other non-traditional modalities? Don't get me wrong; I am not suggesting we have blind faith in homeopathy, for example. But I am curious about how cultural psychology may influence responses to treatments such as Ayurvedic medicine. Perhaps it only "works" in conjunction with meditation and yoga? An "Eastern bundle", anyone?

The point is, I do not know the answers to these questions. What I do know is that with our approach to evidence building we are looking at a vast castle through a keyhole: we are only seeing small swaths of reality. My final point is this: because so much remains in the dark, we need to be humble when exploring the evidentiary basis for any intervention. A parochial attitude equating gaps in our understanding with lack of effectiveness makes us seem like the Inquisition persecuting Galileo for defining an alternate reality, one which turned out in the long run to be the truth we live by.

Wednesday, February 10, 2010

Kids, schools and superbugs

What do the three have in common, you might wonder? Well, more than we used to think.

This story in today's UK Telegraph reports on a school child who developed diarrhea and tested positive for C difficile. The alarming thing is that there did not seem to be any explicit risk factors for this. The appalling thing is the misinformation in the story that
Children rarely become ill with C-diff, which normally strikes elderly people in hospital. 
This is how things used to be, before the BI/NAP1/027 bug evolved early in the first decade of the millennium. We and others have shown that kids are not immune to it, and neither are other people previously thought not to possess any risk factors for it. In fact, we have a paper coming out shortly in the CDC's journal Emerging Infectious Diseases showing that in the US the rate of pediatric hospitalizations with C diff rose from 7.2 cases per 10,000 hospitalizations in 1997 to 12.8 per 10,000 in 2006.

So why is this happening? Well, there are a couple of ways to answer this question. The proximal answer is that the new bug is better equipped to propagate. Its spore is stickier than that of the old pathogenic version we all knew and loved in the '90s, and it is thus more difficult to eradicate from fomites and anatomic surfaces. It also produces on the order of 20 times more of the toxins responsible for wreaking havoc in the colon. So, clearly, this is a bug for the new millennium.

But here is the real reason for this, albeit a little more removed: antibiotic overuse. All physicians are aware of this, and we all get the connection. The way this works is that C diff is impervious to many of the antibiotics employed to treat other infections, while its neighbors in the gut are decimated. Thus, C diff proliferates to fill the void and takes a firm hold under the right circumstances. The new superbug is, of course, very likely the result of the all-too-familiar saga of resistance evolution.

Here is the frightening part: we are still overusing antibiotics! I frequently hear from friends that their MDs offered antibiotics for something that is, to me, clearly a non-bacterial issue. The most frustrating situation is when I am convinced that the friend has a post-viral reactive airways cough and needs an inhaler, but instead comes home with a handful of antibacterial pills. And no one wants to take a chance: we are so risk-averse that, even when we are well educated about the perils of antibiotic overuse, we are still likely to take them if our doctor prescribes them. And for a doctor, with shrinking appointment times, what is the most expedient course, particularly with a patient with an entitled attitude? You guessed it, antibiotics!

So, what do we do? My feeling is that the action has to be multi-pronged. Yes, physicians need to be held accountable for their treatment choices, but so do patients. We need to do a much better job of educating the public about the dark underbelly of antibiotics, so that they can be partners in these decisions. In my opinion, this may be the most critical healthcare issue of our time, given the concerns raised by both the WHO and the FDA that, if resistance emergence continues at this pace, we will be back to the dark ages of the pre-antibiotic era.

To this end, the Surgeon General should pick up the banner of antibiotic education. In fact, I recently sent her a letter outlining why she might want to make this a part of her Public Health agenda. If anyone is interested in making it a more public effort, I am happy to share it and resend it with more signatures than just my own. She, after all, puts the "Public" into Public Health.

We have got to start talking to the people about this. The resistance train needs to reverse direction. And now!        
  

Wednesday, February 3, 2010

HEOR as a non-clinical option

Read my post on HEOR as an option for former clinicians at the Non-Clinical Healthcare Professionals Ning network blog here.

Monday, February 1, 2010

Health Economics and Outcomes Research: Too little, too late?

So I went to this meeting in Washington, DC, last week to be a part of the conversation on the value of HEOR in the industry. It was a great meeting, with about 50 attendees, most of whom are intimately involved in HEOR in their everyday lives. It was also somewhat spooky: none of the presenters had shared their thoughts prior to the meeting, yet everyone's message was oddly aligned. We need more quality HEOR studies earlier in technology development.

There was broad consensus that most companies do not have a good understanding of the role, methodologies, or value of HEOR within their development programs. And while clinical trialists are a well-accepted asset to the industry, HEOR groups still tend to be the red-headed stepchildren. They have little buy-in from other departments and minimal support from the leadership, and their output is viewed with suspicion. To be sure, there are companies that understand the role of HEOR, and these are the success stories. But the majority are still in the dark.

This situation must change, and here are some of the compelling reasons why. While 20 years ago all of the emphasis in drug development was on FDA approval, today, in our economically constrained healthcare system, no approved technology can succeed without an understanding of what value it brings to the table over what is already available on the market. No longer can marketers employ smoke and mirrors to develop the "winning" proposition. I would argue that, in general, the industry cannot afford to lag behind the payor community in its understanding of the economic arguments.

I have always argued that manufacturers need to be the biggest experts on the diseases they are pursuing and on their treatments. This by necessity must include the value proposition of their technologies beyond the statistically significant improvements over placebo required by the FDA for approval. We must develop objective milestones by which to judge the worthiness of technologies at every point in the development process. Those who do will adapt to and succeed in this atmosphere of cost controls; those who do not will proceed at their own peril. As scientists, citizens, consumers and investors, we should make sure that manufacturers are engaging in this ongoing evaluation of their wares with a critical eye to what value they intend to bring to society.