Wednesday, August 31, 2011

My top 5 reasons for rejecting a manuscript

Here are five manuscript transgressions that make me hit "Reject" faster than you can blink. The first four in particular do not instill confidence in what you actually did in the study.

1. Matched cohort masquerading as a case-control
This happens quite a bit with papers submitted to third and fourth tier journals, but watch out for it anywhere. The authors claim to have done a matched case-control study, and there is indeed matching. However, the selection of participants is based on the exposure variable rather than the outcome. Why is this important? Well, for one, the design informs the structure of the analyses. But even more fundamentally, I am really into definitions in science because they allow us to make sure we are talking about the same thing. And the definition of a case-control study is that it starts with the end -- that is to say, the outcome defines the case. So, if you are exploring whether a Cox-2 inhibitor is associated with mortality from heart disease, do not tell me that your "cases" were defined by taking the Cox-2 inhibitor and your controls were the ones who did not take it. If you are enrolling based on exposure, even if you are matching on such variables as age, gender, etc., this is still a COHORT STUDY! Whether this is the most efficient way to answer our example question is a separate issue; a real case-control study might well be better. To call your study a case-control, you need to define your cases as those who experienced the outcome -- death, in our example -- making the controls those who did not die. I know that this explanation leaves a thick shroud over some of the very important details of how to choose the counterfactual, etc., but that is outside the scope here. Just get the design right, for crissakes.
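To make the distinction concrete, here is a minimal Python sketch, with invented records and field names; the point is simply that whichever variable defines enrollment defines the design.

```python
# Hypothetical sketch (invented data): select on exposure and you have a
# cohort; select on outcome and you have a case-control study.

subjects = [
    {"id": 1, "took_cox2": True,  "died_cardiac": True},
    {"id": 2, "took_cox2": True,  "died_cardiac": False},
    {"id": 3, "took_cox2": False, "died_cardiac": True},
    {"id": 4, "took_cox2": False, "died_cardiac": False},
]

# Cohort study: enrollment is defined by the EXPOSURE; subjects are then
# followed forward for the outcome.
exposed   = [s for s in subjects if s["took_cox2"]]
unexposed = [s for s in subjects if not s["took_cox2"]]

# Case-control study: enrollment is defined by the OUTCOME; exposure is
# then ascertained looking backward.
cases    = [s for s in subjects if s["died_cardiac"]]
controls = [s for s in subjects if not s["died_cardiac"]]
```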

2. Incidence or prevalence, and where is the denominator?
I cannot tell you how annoying it is to see someone cite the incidence of something as a percentage. But as annoying as this is, it alone does not get an automatic rejection. What does is when someone reports this "incidence" in a study that is in fact a matched cohort. By definition, matching means that you are not including the entire denominator of the population of interest, so whatever the prevalence of the exposure may seem to be in a matched cohort is the direct result of your muscling it into this particular mold. In other words, say you are matching 2:1 unexposed to exposed, the exposure is smoking, and the outcome of interest is the development of lung disease. First, if you are telling me that 10% of the smokers developed lung disease in the time frame, please, please, call it a prevalence and not an incidence. Incidence must incorporate a uniform time factor in the denominator (e.g., per year). And second, do not tell me what the "incidence" of smoking was based on your cohort -- by definition, in your group of subjects smoking will be experienced by 1/3 of the group. Unless you have measured the prevalence of smoking in the parent cohort BEFORE you did your matching, I am not interested. This is just stupid and thoughtless, so it definitely gets an automatic reject (or a strong question mark at the very least).
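A minimal numerical sketch of both points, using invented round numbers:

```python
# Invented round numbers, for illustration only.

# Prevalence: a simple proportion -- no time in the denominator.
smokers_with_disease, smokers = 30, 300
prevalence = smokers_with_disease / smokers        # 0.10, i.e., 10%

# Incidence: new events per unit of person-time (e.g., per person-year).
new_cases, person_years = 30, 1200
incidence_rate = new_cases / person_years          # 0.025 per person-year

# In a 2:1 unexposed-to-exposed matched cohort, the apparent "prevalence"
# of smoking is fixed by design at 1/3 and says nothing about the parent
# population.
n_exposed = 100
n_unexposed = 2 * n_exposed
print(n_exposed / (n_exposed + n_unexposed))       # 0.333..., no matter what
```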
  
3. Analysis that does not explore the stated hypothesis
I just reviewed a paper that initially asked an interesting question (this is how they get you to agree to review), but it turned the hypothesis on its head and ended up being completely inane. Broadly, the investigators claimed to be interested in how a certain exposure impacts mortality, a legitimate question to ask. As I was reading through the paper, and as I could not make heads or tails of the Methods section, it slowly began to dawn on me that the authors went after the opposite of what they promised: they started to look for the predictors of what they had set up as the exposure variable! Now, this can sometimes still be legit, but the exposure variable needs to be already recognized as somehow relating to the outcome of interest (hey, surrogate endpoints, anyone?). That was not the case here. So, please, authors, do look back at your hypothesis once in a while as you are actually performing the study and writing up your results.

4. Stick to the hypothesis, can the advertising
I recently rejected a paper that asked a legitimate question but, in addition to doing a shoddy job with the analyses and the reporting, did the one thing that is an absolute no-no: it reported on a specific analysis of the impact of a single drug on the outcome of interest. And yes, you guessed it, the sponsor of the study was the manufacturer of the drug in question. And naturally, the drug looked particularly good in the analysis. I am not against manufacturer-sponsored studies, or even against those that end up shedding positive light on their products. What I am against is random results of random analyses that look positive for the sponsor's drug without any justification or planning. The situation might have been tolerable had the authors made a credible case for why it was reasonable to expect this drug to have the salutary effect, citing either theoretical considerations or prior evidence. They of course would have had to incorporate it into their a priori hypothesis. Otherwise this is just advertising, a random shot in the dark, not an academic pursuit of knowledge.

5. Language is a fraught but important issue
I do not want to get into the argument about whether publishing in English-language journals brings more status than publishing in non-English-language ones. This is not the issue. What I do want to point out, and this is true for both native and non-native English speakers, is that if you cannot make yourself understood, I have neither the time nor the ability to read your mind. If you are sending a paper to an English-language journal, do make your arguments clearly, do make sure that your sentence structure is correct, and do use constructions that I will understand. It is not that I do not want to read foreign studies, no. In fact, you have no idea just how important it is to have data from geopolitically diverse areas. No, what I am saying is that I volunteer my time to be on Editorial Boards and as a peer reviewer, and I just do not have the leisure to spend hours unraveling the hidden meaning of a linguistically encrypted paper. And even if I did, I assure you, you are leaving a lot to the reviewer's idiosyncratic interpretation. So, please, if you do not write in English well, give your data a chance by having an editor take a look at your manuscript BEFORE you hit the submit button.

Friday, August 26, 2011

Botox and empathy: Less is more

I am kind of stuck on this whole Botox-empathy thing. A recent study from researchers at Duke and UCLA implied that people who get Botox to attenuate their wrinkles also seem to attenuate their empathic ability. Somehow their inability to mimic others' facial expressions impairs the firing of their mirror neurons, and they stop feeling empathy. Wow!

But think of it -- Botulinum toxin, arguably one of the most potent poisons known to humans, is being used essentially recreationally as a drug, quite possibly an addictive one. Who thought this was a good idea? OK, don't answer that.

To be sure, the same toxin in a therapeutic preparation can help people with paralysis release painful contractures, and this is a wonderful advance. Just as morphine is a terrific pain reliever under the right circumstances. But used recreationally? Everyone is aware of the havoc it can wreak, both personally and societally. So, how did we justify allowing this most potent of all poisons to be injected into perfectly healthy (and beautiful, I might add) aging faces?

File this under "Go figure." Another opportunity for "less is more."

Thursday, August 25, 2011

Side effects: The subject must become the scientist

A few weeks ago someone I know, a normally robust and energetic woman, began to feel fatigued and listless, and had some strange sensations in her chest. She presented to her primary care MD, who obtained an EKG and a full panel of blood tests. The former showed some non-specific changes, while the latter was entirely normal. Although reassured, she continued to experience malaise. When she fetched her EKG, she received a copy with the computer interpretation indicating that, in its wisdom, the program could not rule out a heart attack. Given that her symptoms continued, and now anxiety was piled on top, she presented to the ED, where a heart attack was excluded, and she was scheduled for a stress test. In the subsequent weeks the symptoms continued off and on, and the stress test turned out to be negative for coronary disease. Great, mazel tov!

What I failed to mention was that just prior to the onset of her symptoms, she had been started on 5-fluorouracil cream for a basal cell skin cancer. And while she did not commit my current device of omission with her doctors (including the dermatologist who prescribed the drug), all dismissed her constellation of symptoms as a potential side effect. And granted, when I looked it up, there was no mention of anything like fatigue and listlessness. So, does this mean that it is not within the realm of the possible that this drug was responsible?

Not at all. And here is why. Our adverse event reporting is essentially a discretionary system. Here is what the FDA says about their Adverse Event Reporting System (AERS):
Reporting of adverse events from the point of care is voluntary in the United States. FDA receives some adverse event and medication error reports directly from health care professionals (such as physicians, pharmacists, nurses and others) and consumers (such as patients, family members, lawyers and others). Healthcare professionals and consumers may also report these events to the products’ manufacturers. If a manufacturer receives an adverse event report, it is required to send the report to FDA as specified by regulations. 
What this means is that, when a patient complains to a doctor of a symptom, even when its onset is in obvious proximity to the start of a particular medication, the doctor is not compelled to report it. The most an average physician will do is look up the known AE profile of the drug and, at best, check its interactions with other medications. But one is not generally inclined to use one's imagination (and the constraints of shrinking appointment times spread across exponentially growing cognitive loads conspire against it too) to entertain the possibility that the current problem is drug-related. And yet, since many AEs are quite rare, the knowledge about them must necessarily rely on scrupulous reporting by the prescribers into a central repository. This is what is missing: not the repository, but the impetus to report.

So, when we go looking up side effects of a given medication, we must take the information for what it is: a woefully incomplete list of what has been experienced by other patients. And when someone asks "Do statins make you stupid?", instead of denying the possibility, we should just admit that we don't know. Because once drugs are released by the FDA into the wild of our modern healthcare, by relying on others' reports of AEs we become inadvertent enablers of our ignorance about them.

My friend's symptoms abated after she finished the course of the 5-FU cream. None of the MDs bothered to report her symptoms to the AERS, nor did she. I am not even sure that any of the players were aware of the possibility. Oh, well, an opportunity lost. We need to feel responsible for gathering this knowledge. The subject must be empowered to become the scientist; this is the only way we can get the full picture of the harm-benefit balance of our considerable and unruly pharmacopeia.

If you want to report a possible side effect of a medication, this FDA web page will guide you through the process.

Wednesday, August 17, 2011

Counterfactuals: I know you are, but what am I?

It occurs to me that as we talk more and more about personalized medicine, the tension between the need for individual vs. group data is likely to intensify. With that, it becomes important to have the vocabulary to articulate the role of each.

The scientific method, in order to disprove the null hypothesis, demands highly controlled experimental conditions, where only a single exposure is altered. While this is feasible when dealing with chemical reactions in a beaker, and even, to a great extent, with bacteria and single cells in a petri dish, the proposition becomes a whole lot more complicated in higher-order biology. Here the phrase "all things being equal" must really apply to the individuals or groups under study.

We call this formulation "the theory of counterfactuals," and it is defined in the following way by researchers at the University of North Carolina (see slide #3 in the presentation):
Theory of Counterfactuals
The fact is that some people receive treatment.
The counterfactual question is: “What would have happened to those who, in fact, did receive treatment, if they had not received treatment (or the converse)?”
Counterfactuals cannot be seen or heard—we can only create an estimate of them.
Take care to utilize appropriate counterfactual
So, essentially what it means is figuring out what would have happened to, for example, Uncle Joe if he had not smoked 2 packs of cigarettes per day for 30 years. Now, the complexity of the human organism makes it impossible (so far) to replicate Uncle Joe precisely in the laboratory, so we must settle for individuals or groups of individuals that resemble Uncle Joe in most if not all identifiable ways in order to understand the isolated effect of heavy smoking on his health outcomes.
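As a toy illustration -- invented numbers and a deliberately crude matching rule, nothing more -- estimating Uncle Joe's counterfactual might look something like this:

```python
# Toy sketch: we never observe Uncle Joe's unexposed outcome, so we
# estimate it from similar individuals who were not exposed.
# All values and the matching rule are invented for illustration.

# (age at baseline, sex, heavy_smoker, years_lived)
people = [
    (60, "M", True,  68),   # "Uncle Joe" -- the observed, factual outcome
    (61, "M", False, 79),
    (59, "M", False, 81),
    (60, "F", False, 84),
]
joe = people[0]

# Crude counterfactual estimate: average outcome among non-smokers who
# resemble Joe on the measured characteristics (here, age and sex).
matches = [p for p in people
           if not p[2] and p[1] == joe[1] and abs(p[0] - joe[0]) <= 2]
counterfactual = sum(p[3] for p in matches) / len(matches)   # 80.0 years

effect_of_smoking = joe[3] - counterfactual                  # -12.0 years
print(f"Estimated effect on lifespan: {effect_of_smoking} years")
```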

So, you see the challenge? This is why we argue about the validity of study designs to answer clinical questions. This is why a randomized controlled trial is viewed as the pinnacle of validity: in it, just by the sheer force of randomness in the Universe, we expect to get two groups that match in every way except the exposure in question, such as a drug or another therapy. This is why we work so hard statistically in observational studies to ensure that the outcome under examination is really due to the exposure of interest (e.g., smoking), "all other things being equal."
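A quick simulation (an invented age distribution, pure illustration) shows why randomization is trusted to balance even unmeasured characteristics when the groups are large:

```python
# Sketch: randomization balances a covariate (here, age) between arms
# purely by chance, without anyone measuring or matching on it.
import random

random.seed(0)
ages = [random.gauss(55, 10) for _ in range(100_000)]
treated = [random.random() < 0.5 for _ in ages]

mean_t = sum(a for a, t in zip(ages, treated) if t) / sum(treated)
mean_c = sum(a for a, t in zip(ages, treated) if not t) / (len(treated) - sum(treated))

# With large n, the two arm means differ only by sampling noise.
print(round(mean_t, 2), round(mean_c, 2))
```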

But no matter how we slice this pie, this equality can only be approached, but never truly reached. And this asymptotic relationship of our experimental design to reality may be OK in some instances, yet not nearly precise enough in others. We just cannot know the complete picture, since we only have partial information on how the human animal really works. And this is precisely what makes our struggle to infer causality problematic, and precisely what introduces uncertainty into our conclusions.

What is the answer? Is it better to rely on individual experience or group data? As always, I find myself leaning toward the middle. Because an individual's experience is prone to many influences, both internal, such as cognitive biases, and external, such as variations in response under different circumstances, it is not valid to extrapolate this experience to a group. In the same vein, because groups represent a conglomeration of individual experiences, smoothing out the inherent variabilities which ultimately determine the individual results, study data are also difficult to apply to individuals. For this reason medicine should be a hybrid of the two: the make-up of the patient can partly fit into the larger set of persons with similar characteristics, yet also jut out into the perilous territory of idiosyncratic individuality. This is precisely what makes medicine so imprecise. This is precisely the tension between the science and the art of medicine. Because "counterfactuals cannot be seen or heard," Uncle Joe!

Tuesday, August 16, 2011

Medicine and the internet: Harnessing the yottabytes

What if medicine in the US is just like the internet? What if it is just as difficult to separate the chaff from the wheat in medicine as it is on the web?

Both the curse and the blessing of the web is its accessibility. This means that anyone's voice can be heard. And it also means that anyone's voice can be heard. So, we are just as likely to stumble upon drivel as we are upon information gold. And what takes time and skill is separating the two into neat piles, one to be ruthlessly discarded, and the other cherished for how it enriches us. To be sure, without the web we might not have had access to either, and it is the egalitarian nature of the internet that gives us such a variety of sources in our information diet.

Now, let's look at medicine. Every day we hear about how much noise there is in the field, and this noise is difficult, if not impossible, to separate from the signal. Some signals are becoming much clearer, and they tell us that by being too egalitarian in medicine, we have likely been causing great harm. Take, for example, PSA and mammography screenings. The drumbeat of harm associated with these highly non-specific tests and the resultant chase after false positive results is getting deafening, and rightfully so. Every day we hear that researchers have uncovered a breakthrough mechanism or treatment, and we hear with increasing frequency that a treatment previously thought to be sacrosanct is a bunch of rubbish. What gets lost among all this noise is the possibility of a true breakthrough in disease management or treatment or cure.
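To see why a non-specific screening test generates a chase after false positives, a back-of-the-envelope calculation helps; the numbers below are invented round figures, not actual performance data for PSA or mammography:

```python
# Invented round numbers: a rare disease plus an imperfect test means most
# positive results are false positives.
prevalence  = 0.005    # 5 per 1,000 screened truly have the disease
sensitivity = 0.90     # positives among the diseased
specificity = 0.90     # negatives among the healthy

n = 100_000
diseased = n * prevalence                # 500
healthy  = n - diseased                  # 99,500

true_pos  = diseased * sensitivity       # 450
false_pos = healthy * (1 - specificity)  # 9,950

ppv = true_pos / (true_pos + false_pos)
print(f"Chance a positive result is real: {ppv:.1%}")   # ~4.3%
```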

Think how hard it is to separate genuinely valuable content from bunk on the web. Now, think of the orders-of-magnitude increase in the difficulty of this task in medicine, where difficult concepts are further shrouded in the opaque cloth of arcane and obfuscating terminology. In fact, it is so difficult that the class previously designated as the interpreters of this information for the lay public, physicians, are unable to keep up. There is a need for a whole new class of interpreters now -- researchers and patient advocates. And while this is good for the market and the economy, since it creates jobs that had not existed before, it demands a more critical evaluation vis-a-vis its impact on the public's health. It also raises the question of the value of this gadgetry and information glut in medicine -- what is truly the wheat and what is the chaff? And what happens when you continuously try to drink from a fire hose? And do we turn down the stream, or is there another way?

Is it feasible to limit this stream of idea and information generation? Furthermore, is it sensible to do so? Many worry that putting limitations on this is tantamount to stifling innovation. But what is innovation? The definition most pertinent to the current discussion in the Merriam-Webster dictionary is "a new idea, method or device." Nowhere does the definition incorporate the value of this idea, method or device. Perhaps it is left to the free market to determine the value and ultimate use of such innovation. Well, in a market that claims to be free, but is filled with cynical machinations in the form of favoritism, subsidies and pricing games, is objective value really what is valued? And indeed, given the complexity of these "innovations", is it even possible for the end-user to judge their value, even if the market were free?

Yet, despite all these challenges to establishing the value of innovation on the back end, I am not sure that centrally limiting idea generation is either feasible or right. In the case of ideas on the web, I have come to the conclusion that such microblogging platforms as Twitter can be invaluable filters of information, where my network of favorite tweeters, whom I follow faithfully, provides me with the wheat that has already been cleaned, yet not always overprocessed. Is this possible in medicine? I know that the FDA and CMS are supposed to provide some filtration for such medical information and interventions, but each is statutorily handcuffed and gagged, unable to stray beyond its legislative agenda. Therefore, a value filter should not be a body beholden to the letter of the law, or to political or financial interests. It needs to be driven by the spirit of scientific curiosity, objective evaluation and pragmatism. Most importantly, it must be open to a conversation that incorporates respectful dissent and many different perspectives.

Twitter arose out of the drive to share information, and it has shaped itself as a tool for developing value in the gargantuan and ever-growing world of yottabytes. Perhaps it is citizen bloggers and tweeters, including e-patients and clinicians and researchers and writers and others, who will ultimately solve this information glut in medicine by extracting the kernel of usefulness from this morass of vegetation. Harnessing this power systematically and accurately is the next challenge of our information age.

Because ultimately, for human cognition and health, less is more. And we are still human.