
Sunday, March 18, 2012

A good week

So, it has been a great week here at Healthcare, etc, and I am grateful.  Here are the highlights:

1. I did a guest post on plagiarism at Retraction Watch, and not only did it generate a fantastic conversation there, but it also brought other stories of plagiarism to us right here on this site. The story is not finished, and I will update when I can.

2. Gary Schwitzer of Health News Review cited 2 posts from Healthcare, etc. on his extremely popular and well-respected site here and here. The corresponding stories that he referenced were "Unpacking the meat data" (which incidentally is the most popular post of all time on this blog) and "PSA screening: Does it or doesn't it" (a niche post with a lively discussion in the comments).

3. The crowning glory of the week was being cited by Mark Bittman in his New York Times column, which drove hordes of readers to Healthcare, etc.

4. Finally, my book is progressing through editing and design, and it is all beginning to seem more real. It feels like I am realizing my dream of bringing a critical approach to evaluating medical evidence to all who are interested. If you want to get news about it in your inbox, please sign up here.

In the next few weeks you can expect to hear about my experience attending and presenting at the Ignite Boston conference at MIT on March 29, as well as coverage from TEDMED 2012 in DC, which I am thankful to be attending as a Frontline scholar.

Thanks again to everyone for visiting, tweeting, linking and sharing in all other ways. Looking forward to seeing you here again.

Friday, February 3, 2012

Teach one thing, or the rule of thirds

When I was a medical student, I did a lot of rotations at the Boston VA in JP. I loved my patients there -- they were patient and kind and stoic. One of the best rotations I did was Hematology, where Lou Fiore was my preceptor. Lou was not only an excellent teacher, but also a terrific doctor and a good human being all around. He used to start our days together by saying, "I'm gonna teach you one thing today." And teach us he did, at least one thing per day. Now I teach. And on occasion I have used the Lou Fiore "I'm gonna teach you one thing today" promise. Well, today is one of those days: I'm gonna teach you one thing.

And here is that thing. I am sure I am not the first one to notice this, but I still think of it as the "Zilberberg rule of thirds." The gist of it is that, for clinical research purposes, one can think of patient populations crudely in thirds: one third who are too sick to benefit from any of our interventions, one third who are too healthy, so that no matter how we tweak their treatment their outcomes will not change, and the middle third, which comprises the "sweet spot" for intervention. So it is a fool's errand to pursue proof-of-concept studies in either of the bracketing thirds, since it is only the middle third that is likely to show a signal.
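To make this concrete, here is a minimal simulation sketch of my own (the risk numbers and trial size are made up purely for illustration): in the healthiest third events are too rare for a trial of this size to show a difference, in the sickest third the hypothetical therapy barely moves the risk, and only the middle third gives the trial a realistic chance of detecting a signal.

import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
n_per_arm = 200    # hypothetical trial size per arm
n_trials = 1000    # simulated trials per population

# (risk without treatment, risk with treatment) for each hypothetical third
populations = {
    "too healthy": (0.02, 0.01),   # events too rare for a trial this size to show a difference
    "sweet spot":  (0.30, 0.15),   # risk halved, with enough events to see it
    "too sick":    (0.90, 0.88),   # too far gone; the therapy barely moves the risk
}

for name, (p_control, p_treated) in populations.items():
    significant = 0
    for _ in range(n_trials):
        events_t = rng.binomial(n_per_arm, p_treated)
        events_c = rng.binomial(n_per_arm, p_control)
        table = [[events_t, n_per_arm - events_t],
                 [events_c, n_per_arm - events_c]]
        _, p_value = fisher_exact(table)   # two-sided Fisher's exact test
        significant += p_value < 0.05
    print(f"{name}: share of simulated trials reaching p < 0.05 ~ {significant / n_trials:.2f}")

With these made-up inputs, only the middle third gives the trial a decent chance of showing a statistically significant result; the bracketing thirds mostly do not, which is the whole point.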

Pharmaceutical manufacturers do not always appreciate this trichotomy. Look at Vioxx, for example: when used in patients who were essentially healthy, an unacceptable safety signal arose that drove the drug off the market. Same for SSRIs, where the ill-conceived enthusiasm for treating marginal cases of depression now seems to be undermining the entire serotonin hypothesis. The flip side is sepsis research: septic shock patients are so far gone that it is difficult for any single therapy to alter their outcomes. Just look at the Xigris story, as well as the myriad other therapies that have been tried and failed. This is the rule of thirds at its most pronounced.

In health economics and outcomes research (HEOR), the rule of thirds holds as well. To make the case for cost-effectiveness, the following questions need to be asked:
1. Is the disease in question prevalent?
2. Is the economic impact of the disease known and substantial?
3. Does the diagnostic or therapy in question alter the course of the disease to a clinically meaningful degree?
If the answer to any of the questions above is "no," you really need to think carefully about the value proposition.
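As a toy illustration of how one might operationalize this screen (my own sketch; the field names and every threshold below are hypothetical and arbitrary, not established HEOR conventions):

from dataclasses import dataclass

@dataclass
class ValueProposition:
    prevalence_per_100k: float     # question 1: is the disease prevalent?
    annual_cost_billions: float    # question 2: is the economic impact known and substantial?
    outcome_improvement: float     # question 3: relative improvement in outcomes (0 to 1)

def passes_screen(vp: ValueProposition) -> bool:
    """Crude three-question screen; a single 'no' is a red flag."""
    answers = [
        vp.prevalence_per_100k >= 100,   # "prevalent" (arbitrary cutoff)
        vp.annual_cost_billions >= 1.0,  # "substantial" economic impact (arbitrary)
        vp.outcome_improvement >= 0.10,  # "meaningful" change in the disease course (arbitrary)
    ]
    return all(answers)

# A hypothetical diagnostic for a common, costly disease:
print(passes_screen(ValueProposition(350, 4.2, 0.15)))  # True: worth a closer look
print(passes_screen(ValueProposition(350, 4.2, 0.02)))  # False: question 3 fails

The point of the sketch is only the structure: three explicit questions, answered before any formal modeling, with any single "no" treated as a red flag.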

Some of you will bring up inter-individual differences, heterogeneity of treatment effect, and the like. And yes, these are supremely important. The framework I propose here is simplistic, but we have to start somewhere. To be sure, there is a more nuanced approach to this beast, but generally one will not go wrong by asking these questions before committing huge resources to a project, particularly if the answer to question 2 or 3 is a resounding "no." So, even in health economics it behooves one to know the Zilberberg rule of thirds: choose the right population, where the diagnostic or therapeutic advance and its costs can be justified by a substantial gain in outcomes.

And that is your one thing for today.    

Wednesday, February 9, 2011

Evidence and profit: An unhealthy alliance

My JAMA Commentary came out this week, and I am getting e-mail about it. It seems to have resonated with many docs who feel that the research enterprise is broken and its output fails them at the office. But what I want to do is tie a few ideas together, ideas that I have been exploring on this blog and elsewhere, ideas that may hold the key to our devastating healthcare safety problem.

The last four decades can be viewed as a nexus between the growth of evidence-based medicine (EBM) on the one hand and the unbridled proliferation of the biopharmaceutical industry and its technologies on the other. The result has been rapid development, maximization of profit, and a juggernaut of poorly thought-out and completely uncoordinated research geared initially toward regulatory approval and subsequently toward market growth. It is not that the clinical research has been of poor quality, no. It is that our research tools are primitive and allow us to see only slivers of reality. And these slivers are prone to many of our cognitive biases to boot. So, the drive to produce evidence and the drive to grow business colluded to bring us to where we are today: inundated with evidence of unclear validity, unbalanced with regard to where the biggest difference to public health can be made. Yet we are constantly poked and prodded by an eager bureaucracy to do better at implementing this evidence, while the system continues to perform in a devastatingly suboptimal fashion, causing more deaths every year than stroke.

A byproduct of this technological and financial race has been the rapid escalation of healthcare spending, with the consequent drive to contain it. The containment measures have, of course, had the "unintended consequence" of increased patient volume for providers and of the incredible shrinking appointment, all just to make a living. The end result for clinicians and patients is the relentless pressure of time and the straitjacket of "evidence-based" interventions in the name of quality improvement. And in this mad race against the clock and demoralization, very few have had the opportunity to think rationally and holistically about the root causes of our status quo. The reality is that we are now madly spinning our wheels at the margins, getting bogged down in infinitesimal details and losing the forest for the trees (pardon all the metaphor mixing). Our evidence-based quality improvement efforts, while commendable, are like trying to plug holes in a ship's hull with Band-Aids: costly and, overall, making little if any difference.

But if we step back and stop squinting, we can see the big picture: a stagnant and outdated research enterprise still rewarding spending over substance, embattled clinicians trying to stay afloat, and a $2.5 trillion healthcare gorilla feeding the economy at the expense of human lives. Will technology fix this mess? Not by itself, no. Will more "evidence" be the answer? No, not if we continue to generate it as usual. Is throwing more money at the HHS the solution? I doubt it. A radical change of course is in order. Take profit out of evidence generation, or at least blunt its influence (this would reduce the clutter of marginal, hair-splitting technologies occupying clinicians' collective consciousness); develop new tools for better patient care rather than for maximizing the bottom line; give clinicians more time to think about their patients' needs rather than about how to maintain enough income to pay for the overhead. These are some of the obvious yet challenging solutions to the current crisis. Challenging because there needs to be political will to implement them. And because we are currently so invested in the path we are on that it is difficult, and perhaps impossible, to stray without losing face. But what is the alternative?

Thursday, October 21, 2010

Lies and more lies: Are all lies created equal?


The Atlantic article about John Ioannidis' research has sparked a lively debate about the trustworthiness of much of the evidence generated in science. Much of what is referenced is his 2005 PLoS paper, where, through a modeling exercise, he concludes that a shotgun approach to hypothesis generation is a formula for garbage-in, garbage-out data. The Atlantic article compelled me to search out the primary paper and, despite its dense language, get through it and see what it really says. Below is my attempt at a synthesis and interpretation of the salient points. My conclusion, as you will see, is that not all lies are in fact created equal.

As if the title, “Why Most Published Research Findings Are False,” were not incendiary enough, Ioannidis goes on in this oft-cited 2005 PLoS paper to provoke further with

“There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims [6–8]. However, this should not be surprising. It can be proven that most claimed research findings are false.”

He develops a model simulation to prove to us mathematically that this is the case. But first he rightly criticizes the fact that we place no value on replicating prior findings:

“Several methodologists have pointed out [9–11] that the high rate of nonreplication (lack of confirmation) of research discoveries is a consequence of the convenient, yet ill-founded strategy of claiming conclusive research findings solely on the basis of a single study assessed by formal statistical significance, typically for a p-value less than 0.05. Research is not most appropriately represented and summarized by p-values, but, unfortunately, there is a widespread notion that medical research articles should be interpreted based only on p-values.”

He then briefly touches upon the fact that negative findings are also of importance (this is the huge issue of publication bias, which gets amplified in meta-analyses), but forgoes an extensive discussion of this in favor of explaining why we cannot trust the bulk of modern research findings.

“As has been shown previously, the probability that a research finding is indeed true depends on the prior probability of it being true (before doing the study), the statistical power of the study, and the level of statistical significance [10,11].”

He then uses the tedious, yet effective and familiar lingo of Epidemiology and Biostatistics methods to develop his idea:

“Consider a 2 x 2 table in which research findings are compared against the gold standard of true relationships in a scientific field. In a research field both true and false hypotheses can be made about the presence of relationships. Let R be the ratio of the number of “true relationships” to “no relationships” among those tested in the field.”

So, let’s look at something that I know about – healthcare-associated pneumonia. We, and others, have shown that administering empiric antibiotics that do not cover the likely pathogens within the first 24 hours of hospitalization in this population is associated with a 2- to 3-fold increase in the risk of hospital death. So, the association in question is between antibiotic choice and hospital survival. Any clinician will tell you that this idea has a lot of biologic plausibility: get the bug with the right drug and you improve the outcome. It is also easy to justify based on the germ theory. Finally, it does not get any more “gold standard” than death. We also look at the bugs themselves to see whether some are worse than others, at some of the process measures, and at how sick the patient is, both acutely and chronically. Again, it is not unreasonable to hypothesize that all of these factors influence the biology of host-pathogen interaction. So, again, if you are a Bayesian, you are comfortable with the prior probability.

The next idea he puts forth is in my opinion the critical piece of the puzzle:   

“R is characteristic of the field and can vary a lot depending on whether the field targets highly likely relationships or searches for only one or a few true relationships among thousands and millions of hypotheses that may be postulated.”

To me what this says is that the more carefully we in any given field define what is probable prior to torturing the data, the more chance we have of being correct. Following through on his computation he derives this:

“Since usually the vast majority of investigators depend on α = 0.05, this means that a research finding is more likely true than false if (1 − β)R > 0.05.”
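For readers who want the intermediate step, here is my paraphrase of the paper's algebra (a reconstruction, not a quote). The post-study probability that a claimed finding is true, its positive predictive value (PPV), is

\[
\mathrm{PPV} \;=\; \frac{(1-\beta)\,R}{R - \beta R + \alpha} \;=\; \frac{(1-\beta)\,R}{(1-\beta)\,R + \alpha},
\]

and asking when a finding is more likely true than false, PPV > 1/2, reduces to

\[
(1-\beta)\,R > \alpha,
\]

which at α = 0.05 is exactly the condition quoted above.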

So, at the conventional power of 80 percent (that is, β = 0.2), R has to exceed 0.05/0.8, or about 0.06, to meet this threshold: roughly one true relationship for every sixteen null ones tested in a given field. (At a power of only 20 percent, which is not unusual in practice, R would have to exceed 0.25, or one true relationship for every four null ones.) This does not seem like an unreasonable proportion if we are invoking a priori probabilities instead of plunging into analyses head first. In the field of genomics, as I understand it, the shotgun approach to finding associations drives R toward a very small value (a handful of true relationships among an enormous number of tested hypotheses), thus making the probability that any claimed association is real much lower.
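A quick numerical check, in a short Python sketch of my own (the formula is the paper's PPV expression; the R values are illustrative, chosen only to make the point):

# Probability that a "positive" finding is true (PPV) as a function of the
# pre-study odds R, per Ioannidis' formula, at alpha = 0.05 and 80% power.
def ppv(R: float, alpha: float = 0.05, beta: float = 0.2) -> float:
    return (1 - beta) * R / (R - beta * R + alpha)

examples = [
    ("focused field, 1 true per 4 null hypotheses", 0.25),
    ("right at the threshold", 0.0625),
    ("shotgun search, 1 true per 10,000 tested", 1e-4),
]
for label, R in examples:
    print(f"{label}: R = {R:g}, PPV = {ppv(R):.3f}")

With these inputs the focused field lands at a PPV of about 0.8, the threshold case at exactly 0.5, and the shotgun search below 0.002, which is the genomics point in numerical form.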

Do I disagree with his assertions about multiple comparisons, biases, fabrication, etc.? Of course not – these are well known and difficult to quantify or remedy. Should we work to get rid of them with more transparency? Absolutely! But does his paper really mean that all of what we think we know is garbage, based on his mathematical model? I do not think so.

As everyone who reads this blog by now knows, I do not believe that we can ever arrive at the absolute truth. We can only inch as close as our methods and interpretations will allow. That is why I am so enamored of Dave deBronkart’s term “illusion of certainty”. I do, however, think that we need to examine these criticisms reasonably and not throw the baby out with the bathwater.

I would be grateful to hear others’ interpretations of Ioannidis’ paper, as it is quite possible that I have missed or misinterpreted something very important. After all, we are all learning.