Tuesday, November 9, 2010

Could our application of EBM be unethical?

As you may have noticed, I have been thinking a lot about the nature of our research enterprise and its output as it relates to the practical decisions that physicians and policy makers must make in the real world. I have come to the conclusion that it is woefully inadequate in many ways, and may even be borderline unethical. My thinking on this has been influenced, at least in part, by the work of the Tufts group led by David M. Kent, who has written extensively about the impact of heterogeneous treatment response on the central measures we report in trials -- I urge you to look their work up on Medline. (My potential COI here is that I did all of my Internal Medicine and Pulmonary and Critical Care training at Tufts in the 1990s, but I did not know or work with Kent or his group.) Admittedly, I have been thinking about much of this on my own as well, as I have advocated risk stratification for quite some time. So, here is what I have been thinking.

The randomized controlled trial is what puts the "evidence" into evidence-based medicine. It is the sine qua non of EBM, and the preferred way to resolve the question "does it work?" The rigorous experimental design, blinding, and placebo controls are all meant to maximize what we call internal validity, that is, to assure that what we are detecting is not due to bias, chance, confounding, the placebo effect, or some other invalidating factor. But the one downfall of these massive undertakings is that we arrive at a measure that tells us what the response was on average, and how that average differed from placebo. The beauty of such a measure of central tendency is also its curse: it smooths out what we call "noise" in the signal, but in doing so it cannot differentiate noise from important population differences in both baseline characteristics and responses. Therefore, I contend that it lacks what I have termed "individualizability," a threat to validity felt most acutely at the bedside.

Take, for example, a cholesterol-lowering medication. Let us assume that the mean or median lowering it provides is 20 points over 6 months. This number conflates patients who respond vigorously (say, with a 40-point lowering) with those whose responses are sluggish or absent. So, while the total patient group may represent a broad spectrum of the population, we walk away from the data not really understanding who is likely to respond well and who is not. Make sense?
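To make this concrete, here is a toy simulation (the numbers are my own invention, meant only to mirror the hypothetical above, not any real trial): a subgroup of vigorous responders and a subgroup of non-responders pool into exactly the kind of 20-point average a trial would report.

```python
import random

random.seed(0)

# Hypothetical illustration: two subpopulations with very different
# responses to a cholesterol-lowering drug, pooled into one trial arm.
strong_responders = [random.gauss(40, 5) for _ in range(500)]  # ~40-point drop
non_responders = [random.gauss(0, 5) for _ in range(500)]      # ~0-point drop

pooled = strong_responders + non_responders
mean_drop = sum(pooled) / len(pooled)
print(f"reported average lowering: {mean_drop:.1f} points")  # roughly 20

# The trial reports a ~20-point average, yet hardly any individual patient
# actually experiences a 20-point response: half see ~40, half see ~0.
```

Note that the reported mean describes almost nobody in this sketch: the distribution is bimodal, and the "average patient" it implies does not exist.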

Now, this becomes potentially even more important when we look at adverse responses to interventions. Even if, on average, we detect no more deaths in the treatment arm than in the placebo arm, there may be important differences in who is susceptible to this outcome in the two groups. That is, while in the placebo group the deaths may be due to the disease under investigation, in the treatment group the deaths may be directly attributable to the experimental treatment, and may thus befall different subjects. And although the accounting balances between groups, we may be creating a variant of the trolley problem.

What would be helpful, but is less accepted by statisticians and regulators, is exploration of subgroups of patients within any given trial. I will go even further and say that trial design should a priori include randomization stratified by known potential effect modifiers, so as to provide the power and validity needed to understand the continuum of efficacy and harm. Because we rarely do this, the instrument of evidence becomes a blender of forced homogenization, lumping patients together to create results we like to call generalizable, but lacking the individualizable data that a clinician needs in the office. And it is this lack of individualizability that, to me, creates the possibility of an ethical breach at the point of treatment.

It has been asserted here that 90% of all medicines work in only 30-50% of the people who qualify for them. So, for the majority of drugs, we have at best a coin-toss chance that any given one will work. While at 50% we have equipoise, below 50% equipoise disappears. And may I remind you that equipoise is a requirement for an experimental protocol. In other words, if we have data that lean in either direction away from the 50-50 proposition of efficacy, the ethics of an investigation are brought into question. In the current situation, where the probability of a drug working is less than or equal to 50%, does not the office trial of such a therapy qualify as investigational at best and potentially unethical at worst? Of course, this would depend, as always, on the risk-benefit balance of treating versus not treating a particular condition. But my overarching question remains: should such a drug not be considered investigational in every office patient when more individualizable data are not available?

Some of these issues may be remedied by the adaptive trial designs that the FDA is exploring, but, knowing the glacial pace of progress at the agency, that solution is not imminent. I am by no means a bioethicist, but I do spend a lot of time thinking about clinical research as it applies to patient care, and this situation seems ripe for a better-informed, multidisciplinary discussion. As always, my ruminations here are just a manifestation of how I am thinking through issues I consider important. None of this is written in stone, and all of it is evolving. Your thoughtful comments help my evolution.

7 comments:

  1. Helpful post, Dr Z, thanks.
    Decisions that I make are all informed by evidence of all kinds and sorts from complex people and environments. In FM, we deal with a lot of small number possibilities with one or two elements supported by larger number probabilities, making for fun in fog. Then all of our patients die. Maybe not the best of outcomes for outcome-based reimbursement.
    As a family physician, when I hear EBM, I think of hospital-based researchers and envision people being beaten with journals by the zealous possessors of the "only truth possible." I next think of medical directors of health plans who have MBAs and invoke EBM in letters explaining why certain bonuses won't be coming to my practice. The evidence they use for these decisions is flawed and usually not statistically significant. EBM is a misused and misguided term that needs an upgrade so it is not construed as a weapon for know-it-alls and claim deniers.
    I believe that ALL information is evidence. It feels better that way. And it isn't ethically sensitive. It just is.

    ReplyDelete
  2. @ Dr Synon aaaargh that would be aggravating, receiving letters telling you what medicine based on evidence should be!

    Since I got into this blogging stuff I've realised how much resistance there has always been to oversimplification of the complex and interesting reality of research findings. I should have known there are critical thinkers everywhere but it just doesn't seem that way at face value when EBM is discussed, at least round these parts. Good on Dr Z and Healthcare etc for being contemporary flagbearers of critical thought.

    ReplyDelete
  3. I appreciate, from the bottom of my heart, your 'thinking out loud' about these issues. Everybody wants a seat on the EBM train, but very few give much thought to what that actually means, and even fewer have the ability to critically read and evaluate the quality of a research study. It seems like lots of docs read the abstract and the conclusion and (with pride and confidence) whip out the prescription pad. You are so right that ALL information IS evidence -- and it's disheartening to see the amount of cherry-picking going on. Even nursing touts evidence-based practice, and (believe me) the state of research in nursing is dismal at best. (I'm a nurse, so I can say that.)

    Having spent the last decade as a critical care nurse at a major teaching/research facility, I have also been a data collector for many studies (funny that nursing isn't compensated for the extra work or acknowledged in the papers). One requirement of a study had us (among other things, of course) measure the CVP every 4 hours -- even if the central line was in the groin (and we did not record the location of the central line). I think that was sloppy. But what is worse is that I could generally accurately predict which patients would be presented the "option" of participating in the study. And I'm sorry to say this, but it's true: the lower your SES/education, the higher the chance you would be a good candidate for the study. Individuals might have one thing (the study topic, like sepsis) in common but in terms of co-morbidities and pre-existing conditions, they were radically different.

    I was an active member of the Hospital Ethics Committee and when I raised my concerns privately with more powerful members, they would always say I should contact the research board with my concerns. I needed the job and I'm not stupid, so I kept my mouth shut.

    I love the concept of "individualizability" and know that you are on to something VERY important. You say you are not a bio-ethicist but if you are doing research on human subjects (that can potentially modify practice), high ethical standards must be your top priority. You should join ASBH and definitely attend the next conference. There is potentially an important (Very Important) paper within your questions. I encourage you to pursue this. (In any case, the conference is a blast. So many interesting ideas and brilliant people that you will have a 'full brain' headache every afternoon.)

    Thank you again for writing this. I was feeling rather alone with my belief that EBM is not the super-hero that it seems...and possibly a villain at times. Keep fighting the good fight.

    ReplyDelete
  4. I really appreciate everyone's comments, thanks! Dr. S's pragmatic approach is sensible, and I agree that there must be other ways of "knowing" than from empiric evidence. If there were not, what role would philosophy play in the evolution of thought, Eddy? And aidel, thanks for your honest account of how research is really done -- very disheartening!

    Thanks for the ASBH suggestion, will look into it. So many meetings, so little time... :)

    Would love to hear some (politely) dissenting views as well as need to stretch my thinking.

    ReplyDelete
  5. Someone on Facebook brought this article to my attention. Your posting reminds me of a quote I included in my last newsletter, from the Greek physician Galen (c. 130-210 A.D.): "All who drink of this remedy recover in a short time, except those whom it does not help, who all die. Therefore, it is obvious that it fails only in incurable cases." As quoted at www.johndcook.com/blog/2008/04/15/galen-and-clinical-trials/

    There is a kernel of truth in your comments, and the best reference I have seen is the Stephen Jay Gould essay, The Median Isn't the Message, which should be required reading for anyone interested in EBM. http://cancerguide.org/median_not_msg.html

    But I think you take this idea too far. Admittedly, some drugs and therapies only help a subgroup of patients. But expecting that the subgroup of patients who are helped will have a common demographic feature is a bit naive. Maybe genetics will eventually find a key feature in our genes that helps explain the tendency for many drugs to help only a specific subgroup. But there aren't a lot of successes yet.

    Do you think that race/ethnicity is going to be a good subgroup to look at when there is such a fuzzy line separating white, black, and Hispanic patients? How about socioeconomic status as a subgroup? It makes sense, but there sure is a lot of measurement error associated with determining socioeconomic status.

    I could go on and on, but the basic idea is that the heterogeneity in the definition of who belongs to what subgroup is a serious enough problem that we should be cautious about reporting risk levels stratified by demographic subgroups.

    We know that a model that treats all patients alike is an oversimplification, but is it a serious enough oversimplification to warrant a more complex model?

    To say that research that ignores subgroups is borderline unethical is effectively to state that a complex statistical model is always superior to a simple statistical model.

    That being said, I'm all for looking at subgroups where there is some a priori evidence of differential effect and there is a plausible scientific mechanism. That's a pretty high bar to cross, but we need to keep that bar high because so many subgroup findings fail to replicate.

    I'm also in favor of looking at individual patient characteristics and modifying research recommendations based on those characteristics.

    The most difficult part of individualization is not the subgroups, per se, but rather the individual patient beliefs and values that need to be factored in. Consider a drug that has, for men, a side effect of sterility. I know some people who would never consider such a drug. They would sacrifice their right arm before they would sacrifice their ability to have a family. For others, the side effect would be an irrelevancy. Still others might perceive the side effect as a side benefit. So how can you do an appropriate risk benefit analysis when there is such a range of perceptions about the risk?

    So my general comment would be that research is complicated. While risk stratification may make sense in some studies, in other studies, a good old simple randomized trial may provide a sufficient approximation to reality. Let's not get too critical of the simple randomized trial just yet.

    But thanks for writing the article. Your point is important to remember, even if it is a bit too broadly applied. I also appreciate the suggestion to review the work of David M. Kent; I've already done a PubMed search on articles he has written.

    ReplyDelete
  6. Steve, thank you for reading and commenting! I agree with many of your points, but I also think we need something now, even before genetics gives us all the answers (if that ever happens, which I doubt). Elsewhere on this blog I spend a lot of time talking about the harms produced by our healthcare system, the avoidable complications and deaths. Although we tend to attribute these mostly to poor systemic adoption of evidence, we have not even asked how much may be due to this phenomenon of non-individualizable data. I may be casting a broad net, but we really need to start with an open mind here in order to generate a productive research agenda.

    ReplyDelete
  7. I think that all you are saying is that there is a very large number of things that are not known in medicine. That is sad, but true. Perhaps it isn't surprising: serious medical research has been going on for only a short time, and the complications have turned out to be great.

    While I have the greatest sympathy with the clinician who has so often to make a decision based on wholly inadequate knowledge, the answer cannot be to give up on doing proper randomized experiments. All that would do is to delay still further the advancement of knowledge.

    In the absence of good evidence, one has to say "I don't know". Any other response would return us to an age when 'clinical experience' kept bloodletting in business for so long.

    I suppose that impatience is natural, but if it leads to substituting hunches for evidence, it is counterproductive.

    ReplyDelete