1. Interpretation bias: Our interpretation of the validity of data, and of the data themselves, relies upon our own preconceived judgments and notions of the field.
2. Confirmation bias: We tend to be less skeptical, and therefore less critical, of a study that supports what we think we know and understand than of one that goes against it.
3. Rescue bias: When a study does not conform to our preconceived ideas, we tend to find selective faults with the study in order to rescue our notions.
4. Auxiliary hypothesis bias: To summarize this briefly, it is the "wrong patient population and wrong dose" bias: when a study refutes our hypothesis, we argue that it would have come out differently with a different patient population or a different dose.
And although none of these is particularly surprising, especially given our predictably irrational human nature, they are certainly worth examining critically, particularly in the context of how we and our colleagues interpret data and how data are communicated to the public at large. It is necessary to acknowledge that these exist in order to understand the complexities of clinical science and its uses and misuses. So, let's move on to the three remaining types of bias that Kaptchuk talks about in his paper.
As if it were not enough that we will defend our preconceived notions tooth and nail, that we will make up stories for why they may have been refuted, and maintain that a different design would fix the universe, we also fall prey to the so-called plausibility or mechanism bias. What the heck is this? Simply put, "evidence is more easily accepted when supported by accepted scientific mechanisms". Well, no duh, as my kids would say. I fear that I cannot improve on the examples Ted cites in the paper, so I will just quote them:
For example, the early negative evidence for hormone replacement therapy would have undoubtedly been judged less cautiously if a biological rationale had not already created a strong expectation that oestrogens would benefit the cardiovascular system. Similarly, the rationale for antiarrhythmic drugs for myocardial infarction was so imbedded that each of three antiarrhythmic drugs had to be proved harmful individually before each trial could be terminated. And the link between Helicobacter pylori and peptic ulcer was rejected initially because the stomach was considered to be too acidic to support bacterial growth.

Of course, you say, this is how science continues on its evolutionary path. And of course, you say, we as real scientists are the first to acknowledge this fact. And while this is true, and we should certainly be saluted for this forthright admission and willingness to be corrected, the trouble is that this tendency does not become clear until it can be visualized through the 20/20 lens of the retrospectoscope. When we are in the midst of our debates, we frequently cite biologic plausibility or our current understanding of underlying mechanisms as a barrier to accepting not just a divergent result, but even a new hypothesis. I touched upon this when mentioning equipoise here, as well as in my exploration of Bayesian vs. frequentist views here. In fact, John Ioannidis' entire argument as to why most of clinical research is invalid, which I blogged about a few weeks ago, relies on this Bayesian kind of thinking about the established prior probability of an event. So entrenched are we in this type of thinking that we feel scientifically justified to pooh-pooh anything without an apparent plausible mechanistic explanation. And the experience cited above provides evidence that at least some of the time we will have to get used to how we look with egg on our faces.
Of course, the trouble is that we cannot prospectively know what will turn out to be explainable and what will remain complete hoo-ha. My preferred way of dealing with this, one of many uncertainties, is just to acknowledge the possibility. To do otherwise seems somehow, I don't know, arrogant?
All right, point made, let's move on to the colorfully named "time will tell" bias. This refers to the different thresholds we have for the amount of evidence required before accepting something as valid. This in and of itself would not seem all that objectionable, were it not for a couple of features. The first is the feature of extremes (extremism?). An "evangelist" jumps on the bandwagon of the data as soon as they are out. The only issue here is that an evangelist is likely to have a conflict of interest, whether financial, professional, or intellectual, where there is some vested interest in the data. One of these vested interests, the intellectual, is very difficult to detect and measure, to the point that peer-reviewed journals do not even ask authors to disclose whether a potential for it exists. Yet, how can it not exist, when we build our careers on moving in a particular direction of research (yes, another illustration of its unidirectionality), and our entire professional (and occasionally personal) selves may be invested in proving it right? Anyhow, at the other extreme is the naysayer, who needs large heaps of supporting data before accepting the evidence. And here as well, the same conflicts of interest abound, but in the direction away from what is being shown. To illustrate this, Kaptchuk gives one of my favorite quotes from Max Planck:
Max Planck described the “time will tell” bias cynically: “a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

The final bias we will explore is hypothesis and orientation bias. This is where the researcher's bias in the hypothesis affects how data are gathered in the study. The way I understand this type of bias makes me think that the infamous placebo effect fits nicely into this category, where, if not blinded to experimental conditions, we tend to see effects that we want to see. You would think that blinding would alleviate this type of bias, yet, as Kaptchuk cites, even among blinded studies, RCTs sponsored by pharmaceutical companies, for example, are more likely to be positive than those that are not. To be fair (and he is), the anatomy of this discrepancy is not well understood. As he mentions, this may have something to do with publication bias, where negative trials do not come to publication either because of methodological flaws or due to suppression of (or sometimes, with less malfeasance, even apathy about) negative data. I am not convinced that this finding is not simply because the folks who design trials for pharma are better at it than non-pharma folks, but that is a different discussion altogether.
So here we are at the end of this particular lesson. And lest we walk away thinking, on the one hand, that we know nothing, or on the other, that this is just a nutty guy and we should not pay attention, here is the concluding paragraph from the paper, which should be eerily familiar; I will let it be the final word.
I have argued that research data must necessarily undergo a tacit quality control system of scientific scepticism and judgment that is prone to bias. Nonetheless, I do not mean to reduce science to a naive relativism or argue that all claims to knowledge are to be judged equally valid because of potential subjectivity in science. Recognition of an interpretative process does not contradict the fact that the pressure of additional unambiguous evidence acts as a self regulating mechanism that eventually corrects systematic error. Ultimately, brute data are coercive. However, a view that science is totally objective is mythical, and ignores the human element of medical inquiry. Awareness of subjectivity will make assessment of evidence more honest, rational, and reasonable.