1. Interpretation bias: Our interpretation of the validity of data and of the data themselves relies upon our own preconceived judgments and notions of the field.
2. Confirmation bias: We tend to be less skeptical, and therefore less critical, of a study that supports what we think we know and understand than of one that goes against it.
3. Rescue bias: When a study does not conform to our preconceived ideas, we tend to find selective faults with the study in order to rescue our notions.
4. Auxiliary hypothesis bias: To summarize this briefly, it is the "wrong patient population and wrong dose bias".
And although none of these is particularly surprising, especially given our predictably irrational human nature, they are certainly worth examining critically, particularly in the context of how we and our colleagues interpret data and how data are communicated to the public at large. It is necessary to acknowledge that these biases exist in order to understand the complexities of clinical science and its uses and misuses. So, let's move on to the three remaining types of bias that Kaptchuk talks about in his paper.
As if it were not enough that we will defend our preconceived notions tooth and nail, that we will make up stories for why they may have been refuted, and that we will maintain that a different design would fix the universe, we also fall prey to the so-called plausibility or mechanism bias. What the heck is this? Simply put, "evidence is more easily accepted when supported by accepted scientific mechanisms". Well, no duh, as my kids would say. I fear that I cannot improve on the examples Ted cites in the paper, so I will just quote them:
For example, the early negative evidence for hormone replacement therapy would have undoubtedly been judged less cautiously if a biological rationale had not already created a strong expectation that oestrogens would benefit the cardiovascular system. Similarly, the rationale for antiarrhythmic drugs for myocardial infarction was so imbedded that each of three antiarrhythmic drugs had to be proved harmful individually before each trial could be terminated. And the link between Helicobacter pylori and peptic ulcer was rejected initially because the stomach was considered to be too acidic to support bacterial growth.

Of course, you say, this is how science continues on its evolutionary path. And of course, you say, we as real scientists are the first to acknowledge this fact. And while this is true, and we should certainly be saluted for this forthright admission and willingness to be corrected, the trouble is that this tendency does not become clear until it can be visualized through the 20/20 lens of the retrospectoscope. When we are in the midst of our debates, we frequently cite biologic plausibility, or our current understanding of underlying mechanisms, as a barrier to accepting not just a divergent result, but even a new hypothesis. I touched upon this when mentioning equipoise here, as well as in my exploration of Bayesian vs. frequentist views here. In fact, John Ioannidis' entire argument as to why most of clinical research is invalid, which I blogged about a few weeks ago, relies on this Bayesian kind of thinking about the established prior probability of an event. So entrenched are we in this type of thinking that we feel scientifically justified to pooh-pooh anything without an apparent plausible mechanistic explanation. And the experience cited above provides evidence that at least some of the time we will have to get used to how we look with egg on our faces. Of course, the trouble is that we cannot prospectively know what will turn out to be explainable and what will remain complete hoo-ha. My preferred way of dealing with this, one of many uncertainties, is just to acknowledge the possibility. To do otherwise seems somehow, I don't know, arrogant?
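(A brief aside for the quantitatively inclined: here is a little back-of-the-envelope sketch of that prior-probability point. It is mine, not Kaptchuk's; it uses the positive predictive value formula from Ioannidis' 2005 essay, and the pre-study odds, significance threshold and power are simply illustrative numbers I plugged in.)

```python
# Toy sketch of the Bayesian point above: how the pre-study odds (the "prior")
# drive the chance that a statistically significant finding is actually true.
# Formula from Ioannidis (2005): PPV = (1 - beta) * R / (R - beta * R + alpha)

def positive_predictive_value(R, alpha=0.05, power=0.80):
    """R = assumed pre-study odds that the hypothesis is true; power = 1 - beta."""
    beta = 1.0 - power
    return (power * R) / (R - beta * R + alpha)

# A mechanistically "plausible" hypothesis (decent prior odds) vs. a long shot.
for prior_odds in (1.0, 0.25, 0.02):
    ppv = positive_predictive_value(prior_odds)
    print(f"pre-study odds {prior_odds:>5}: P(true | significant) ~ {ppv:.2f}")
```

The punch line is that the very same "statistically significant" result warrants very different confidence depending on how plausible the hypothesis was going in, which is exactly why mechanistic plausibility is so seductive and so hard to calibrate.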
All right, point made, let's move on to the colorfully named "time will tell" bias. This refers to the different thresholds for the amount of evidence we require before accepting something as valid. This in and of itself would not seem all that objectionable, were it not for a couple of features. The first is the feature of extremes (extremism?). An "evangelist" jumps on the bandwagon of the data as soon as they are out. The only issue here is that an evangelist is likely to have a conflict of interest, either financial or professional or intellectual, where there is some vested interest in the data. One of these vested interests, the intellectual, is very difficult to detect and measure, to the point that peer-reviewed journals do not even ask authors to disclose whether a potential for it exists. Yet how can it not exist, when we build our careers on moving in a particular direction of research (yes, another illustration of its unidirectionality), and our entire professional (and occasionally personal) selves may be invested in proving it right? Anyhow, at the other extreme is the naysayer, who needs large heaps of supporting data before accepting the evidence. And here as well, the same conflicts of interest abound, but in the direction away from what is being shown. To illustrate this, Kaptchuk gives one of my favorite quotes from Max Planck:
Max Planck described the “time will tell” bias cynically: “a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

The final bias we will explore is hypothesis and orientation bias. This is where the researcher's bias in the hypothesis affects how data are gathered in the study. The way I understand this type of bias makes me think that the infamous placebo effect fits nicely into this category, where, if not blinded to experimental conditions, we tend to see the effects that we want to see. You would think that blinding would alleviate this type of bias, yet, as Kaptchuk cites, even when blinded, RCTs sponsored by pharmaceutical companies, for example, are more likely to be positive than those that are not. To be fair (and he is), the anatomy of this discrepancy is not well understood. As he mentions, this may have something to do with publication bias, where negative trials do not come to publication either because of methodological flaws or due to suppression of (or sometimes, with less malfeasance, simply apathy about) negative data. I am not convinced that this finding is not simply due to the fact that the folks who design trials for pharma are better at it than non-pharma folks, but that is a different discussion altogether.
So here we are, at the end of this particular lesson. And lest we walk away thinking, on the one hand, that we know nothing, or, on the other, that this is just a nutty guy and we should not pay attention, here is the concluding paragraph from the paper, which should be eerily familiar; I will let it be the final word.
I have argued that research data must necessarily undergo a tacit quality control system of scientific scepticism and judgment that is prone to bias. Nonetheless, I do not mean to reduce science to a naive relativism or argue that all claims to knowledge are to be judged equally valid because of potential subjectivity in science. Recognition of an interpretative process does not contradict the fact that the pressure of additional unambiguous evidence acts as a self regulating mechanism that eventually corrects systematic error. Ultimately, brute data are coercive. However, a view that science is totally objective is mythical, and ignores the human element of medical inquiry. Awareness of subjectivity will make assessment of evidence more honest, rational, and reasonable.
It has been a very interesting and thought-provoking few days. Thanks for your effort.
While plausibility bias might have delayed the interpretation of trials (HRT, H. pylori, etc.), these hypotheses were being tested precisely because they were plausible. Recognition of plausibility bias should not result in chasing every conceivable thought with equipoise. This would be fruitless, expensive, and perhaps even unethical.
Sorry, but again my comment was rejected, so I'll break it into 2 halves and see if that works....
Thanks, very interesting.
The reading I've done about these cognitive biases has been in relation to teaching 'clinical reasoning', i.e. bedside problem-solving, and to looking at differences in medical vs. legal reasoning in court cases where doctors give expert evidence and recurring difficulties arise in how that evidence is interpreted by the court.
I'm sure this is no revelation to some, but it's striking how much these real-life biases echo difficulties which Popper tried to address in stating his demarcation criteria for what science is.
-interpretation bias- he rejected the idea of 'objective observation' and stated that all observations are made through the spectrum of preconceived theories
-confirmation bias- he rejected the utility of verification of theories as unscientific and advocated attempts to falsify established theories as the path to progress
-rescue/auxiliary hypothesis bias- he stated that in the face of evidence in conflict with an established theory, the only way to scientifically rescue that theory was with an auxiliary hypothesis which clearly rendered that theory more falsifiable, i.e. testable with concrete evidence
What has struck me when I've read numerous psychology texts and articles is that none have referenced Popper's work, which predates theirs by half a century. Popper had a very ambivalent relationship with psychology throughout his career, rejecting it for years after writing a doctorate in the field. Academic psychology reportedly remains quite antipathetic to Popper's work, though I haven't asked psychologists myself if this is true.
part 2.....
I've been ambivalent reading discussions about these biases or 'heuristics', at least when they are used to describe flaws in clinical reasoning (which I realise is a different context to the one you discuss). They definitely exist, and they occur frequently and often unnoticed. But what troubles me is the question of whether merely being aware of their recurring existence is helpful in avoiding their effect; I'm skeptical about that. This is partly because it seems to me that these mental shortcuts, which are called biases when they have been shown to lead to wrong conclusions, surely must also be part of the mechanics of successful reasoning, especially real-life reasoning like medical work, which must be conducted under time constraints and amid lots of white-noise information. So it seems to me there is some hindsight bias about what reasoning has been biased and what has simply been effective. Unless there is a way to tell, at the time of the reasoning, which of these it is, this self-knowledge doesn't necessarily improve our reasoning ability. It is of course still a very useful idea in discussion between individuals, e.g. about published research, but some difficulty may apply if one can be accused of suffering one of these biases with regard to a position someone disagrees with, while similar mental leaps are applauded with regard to agreed positions (whether someone's point of reasoning is biased would be a difficult point to falsify with concrete evidence).
With regard to clinical reasoning, this old paper stated, "One technique that has proven to be absolutely worthless is telling people what a particular bias is and then telling them not to be influenced by it" (Arkes H, "Impediments to Accurate Clinical Judgment and Possible Ways to Minimize Their Impact", Journal of Consulting and Clinical Psychology 1981, Vol. 49, No. 3, 323-330).
The approaches they advise include actively considering alternative hypotheses and properly arguing against your own beliefs (similar to the story of the Wright brothers being directed to swap sides mid-argument which one of your wise commenters recounted). They also discuss the importance of understanding Bayesian analysis of probabilities.
Hello, halfbaked, and thanks for your comment. Would love to hear from others what they think about your view. Are there other ideas?
Hi, Eddy, thanks for your considered thoughts. I sometimes wonder if several people might not have come up with these concepts independently -- not impossible, I suppose. I think that we need to be aware of these potential foibles of the mind, so that we can be open to new possibilities.