Friday, June 29, 2012

Molecular diagnostics: Making the uncertainties more certain?

Scott Hensley over at NPR's Shots blog posted a story about the recently approved molecular diagnostic test that can rapidly identify several gram-positive bacteria that cause bloodstream infections. This is indeed important, since conventional microbiologic techniques rely on bacterial growth, which can take 2 to 3 days. This is too long to wait to identify the bug that is causing a serious infection. What doctors have done to date is make a best guess, based on several factors, including the type of patient, the source of the infection and the patterns of bacterial resistance at their site, to tailor empiric antibiotic coverage. The sicker the patient, the broader the coverage, until the culture results come back, at which point the doctor is meant to alter this treatment accordingly, either by narrowing or broadening the spectrum. The pitfalls of this workflow are obvious -- too many points where error can enter the equation. So on the surface the new tests are a great advance. And they actually are, but they are not free of problems, and we need to be very explicit in confronting them.

Each diagnostic test can be evaluated on its sensitivity (how well it identifies the problem when the problem exists), its specificity (how rarely it identifies a problem when the problem does NOT exist), its positive predictive value (what proportion of all positive tests represents a true problem) and its negative predictive value (what proportion of all negative tests represents a true absence of the problem). Sensitivity and specificity are intrinsic properties of the test and can be altered only by how the test is performed. Positive and negative predictive values depend not only on the test and how it is done, but also on the population that is getting tested.
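For those who like to see it in symbols, this dependence on the population is nothing more than Bayes' theorem. Writing Se for sensitivity, Sp for specificity, and p for the prevalence of the condition among those tested, the predictive values work out to:

$$PPV = \frac{Se \cdot p}{Se \cdot p + (1 - Sp)(1 - p)} \qquad NPV = \frac{Sp \, (1 - p)}{Sp \, (1 - p) + (1 - Se) \, p}$$

Shrink p and the PPV falls, no matter how good Se and Sp are -- which is exactly what the worked example below will show.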

Let's take Nanosphere's test in Scott's story. If you trawl the company's web site, you will find that the sensitivity and specificity of this technology are close to 100%, if not 100%, the "gold standard" for comparison being conventional microbiologic culture. And perhaps this is really the case in the very specialized hands that tested the diagnostic. If these characteristics remain at 100%, disregard the rest of this post, please. However, the odds that they will remain at 100% in the wild of clinical practice are slim. But I am willing to give them 99% on each of these characteristics nevertheless.

OK, so now we have a near-perfect test that is available for anyone to use. Imagine that you are an ED doc at the beginning of your shift. An ambulance pulls up and rolls a septic patient into an empty bay. The astute ED nurses rush in to settle the patient and, as part of the protocol, take a sample of blood to identify the pathogen that is making your patient sick. You quickly start the patient on broad-spectrum antibiotics and walk away to take care of the next patient, who has just rolled in with a heart attack. A few hours later, the septic patient, who is still in the ED because there are no ICU beds for him yet, is pretty stable, and you get the lab result back: he has MRSA sepsis. You pat yourself on the back, because one of the antibiotics you ordered was vancomycin, which should cover this bug quite adequately. You had also put him on ceftazidime to cover any gram-negative critters that might be lurking within. Now that you have the data, though, you can stop the ceftaz and just continue the vanc. The patient finally gets a bed upstairs, your shift is over, and you go home with a sense of accomplishment.

The next morning you come in refreshed with your double-venti iced macchiato in your hand, sit at the computer and check on the septic patient. You are shocked to find out that last night he decompensated, went into shock and is now requiring breathing assistance and 3 vasopressors to maintain his blood pressure. You scratch your head wondering what happened. Then you come upon this crazy blog post that tells you.

Here is what happened. What you (and these tests) did not take into account is the likelihood of MRSA being the real problem rather than just a decoy false positive. Let's run some numbers. The literature tells us that the likelihood of MRSA causing sepsis is on the order of 5%. Let's create a 2x2 table to figure out what this means for the value of a positive test, shall we?


              MRSA present    MRSA absent    Total
Test +        495             95             590
Test -        5               9,405          9,410
Total         500             9,500          10,000

What this says is the following. We have 10,000 patients roll into our ED with sepsis (in reality there are about 1/2 million to 1 million sepsis cases in the US annually), and we test them all with this great new test that has 99% sensitivity and 99% specificity. Of these 10,000, five hundred (thanks, Brad, for noticing this error!) are expected to have MRSA. Given this situation, we are likely to get 590 positive tests, of which 95, or 16%, will be false positives. Face-palm: you drop your head on the desk, realizing that Mr. Sepsis from yesterday was probably one of those 16 in 100 false positives, and MRSA is probably not the cause of his infection.

You begin to wonder: what if your lab did not really achieve the sensitivity and specificity of 99%, but more like 98%? Still pretty generous, but what if? You start writing madly on a napkin that you grabbed at Starbucks, and your jaw drops when you see your 2x2:


              MRSA present    MRSA absent    Total
Test +        490             190            680
Test -        10              9,310          9,320
Total         500             9,500          10,000

Wow, you think, this false positive rate is now nearly 30% (190/680)! You can't believe that you could be jeopardizing your patients' lives 3 times out of 10 because you are under the mistaken impression that they have MRSA sepsis. This is unacceptable. But can you really trust yourself with these calculations? You have to do one more thing to convince yourself. What if your lab only gets 97% sensitivity and specificity? What then? You choke when you see the numbers:




              MRSA present    MRSA absent    Total
Test +        485             285            770
Test -        15              9,215          9,230
Total         500             9,500          10,000


It's an OMG moment -- nearly 40% of positives (285/770) would be treated for MRSA when they potentially have something else.

But you, my dear reader, realize that in the real world docs are not that likely to remove gram-negative coverage if MRSA shows up as the culprit pathogen. Why should you think otherwise, when there is so much evidence that people are not that great about de-escalating antimicrobial coverage in response to culture data? But then I have to ask: what's the use of this new test if no one will act on it anyway? In other words, how is it expected to help curb the rise of resistance? In fact, given the false-positive MRSA rates we see above, might there not even be a paradoxical increase in the proliferation of resistance?

The point is this: we are about to see many new molecular diagnostic technologies on the market that have really, really high sensitivity and specificity. The fly in this ointment, of course, is the pre-test probability of the bug causing the problem. Look at how, in a very low-risk group (5% MRSA prevalence), even a near-perfect test's positive predictive value is degraded to an almost ridiculous degree. Do feel free to check my math.
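If you would rather let a computer check it, here is a minimal sketch in Python; the 10,000-patient cohort and 5% MRSA prevalence are the same assumptions used in the tables above:

    def two_by_two(n, prevalence, sensitivity, specificity):
        """Build the 2x2 counts for a diagnostic test applied to n patients."""
        diseased = n * prevalence
        healthy = n - diseased
        tp = sensitivity * diseased      # true positives
        fn = diseased - tp               # false negatives
        tn = specificity * healthy       # true negatives
        fp = healthy - tn                # false positives
        ppv = tp / (tp + fp)             # positive predictive value
        return tp, fp, ppv

    for acc in (0.99, 0.98, 0.97):       # sensitivity = specificity = acc
        tp, fp, ppv = two_by_two(10_000, 0.05, acc, acc)
        print(f"{acc:.0%}: {fp:.0f} of {tp + fp:.0f} positives are false ({1 - ppv:.0%})")

Running it prints 95 of 590 (16%), 190 of 680 (28%), and 285 of 770 (37%) -- the same false-positive proportions as in the three tables.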

So you trudge into the hospital the next day for your shift and check on Mr. Sepsis one more time. Sure enough, his conventional blood culture grew out E. coli, a gram-negative bug. You notice that he is turning around, though, ceftazidime having been restarted by the astute intensivist (well, I am a bit biased here, of course). All is well in the world once again. Except you hear an ambulance pull up and the nurse talking on the phone to the EMTs -- it's another sepsis on your hands. What are you going to do now?
              

If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!

Thursday, June 28, 2012

ACA: The reports of its death were greatly exaggerated

This is a big day for President Obama's signature legislation, the Affordable Care Act. The Supreme Court upheld its constitutionality, and the punditdom thinks that further challenges are unlikely. On the other hand, if Romney takes the White House in the next election... Well, you can guess what will happen then.

It has been interesting to watch the run-up to this decision. Most recently I have been amused by surveys finding that, on the one hand, many Americans are in favor of the pre-existing condition provision (the part of the bill that forbids insurance companies from discriminating against people with prior health conditions), as well as the provision that allows young adults to stay on their parents' insurance policies through a certain age. On the other hand, reportedly the majority of Americans are against the healthcare law, and most also oppose the individual mandate provision (the part where everyone has to buy insurance or pay a tax). Given this imbalance in public opinion, a more pertinent survey would have assessed how well people understand these provisions in the first place. And this would have had to establish how well the public gets our whole healthcare "system."

To start from the beginning, any healthcare system can be judged on three criteria:
1. How accessible is it?
2. Is it of adequate quality?
3. How expensive is it?

The answer to the first question provides one of the rationales for the individual mandate. Currently there are about 50 million people without health insurance in the US, and, hence, without adequate access to the system. Many of these people are the young and the healthy who gamble on staying young and healthy. And many are consigned to relying on expensive emergency care when this gamble fails. Some of them go bankrupt trying to pay for it, while others become "safety net" cases, where the institution that cares for them swallows the costs. These institutions do get some public dollars for providing safety net care, but not nearly enough to break even. Since many of the 50 million don't buy health insurance because they cannot afford it, the healthcare bill provides a way to create more affordable insurance products.

The answer to the second question is not related directly to the individual mandate. Since much of this blog is devoted to the issues of healthcare-associated harm, I do not wish to belabor this point here. Suffice it to say that the bill does try to address this catastrophic situation, though it remains to be seen if it will succeed.

The third question is the crux of the story. Many have said that the escalation of healthcare costs is unsustainable, and I subscribe to this notion: I am not sure how much more than $2.6 trillion per year we want to pay for this insatiable beast. Yet judging by the near-revolt that the "death panels" rhetoric caused, the citizenry is not interested in being thoughtful about what services make sense. The vehement knee-jerk reaction to the "R" word shuts down the discussion before it even starts. So, OK, how do we pay this ever-increasing bill? Moreover, since we are all happy with the government mandate for all insurance to cover pre-existing conditions, how do we propose to pay for this additional coverage? Short of printing money (not generally a good idea) or creating a single-payer system that regulates these expenditures, the only way is to broaden the pool of revenue. The way the ACA has proposed to broaden this pool is through the very individual mandate that is anathema to our American way of life. But without it there is no broadening of coverage, and there is no paying for every intervention that we seem to feel entitled to.

I doubt very much that the ACA will substantively contain healthcare costs. I even doubt that it will solve the quality problems, but I am willing to wait and see on that. This bill is but a band-aid on an arterial bleed. However, I do believe that upholding this legislation allows us to take the first steps toward a reasonable national dialog about the kind of healthcare system we need. This dialog will not be helped by stupid surveys that reinforce our willful ignorance. We have the opportunity to move this conversation to a higher level, where people begin to understand the issues we are up against more deeply. Let's take it.

If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!

Tuesday, June 26, 2012

Peeling the cabbage of "works" in treatment interventions

What exactly does it mean when we say that a treatment works? Do we mean the same thing for all treatments? Are there different ways of assessing whether and how well a treatment works? I am sure you've guessed that I wouldn't be asking this question if the answer were simple. And indeed, the answer is "it depends."

What I am talking about is examining outcomes. I did a post a couple of years ago here, where I used the following quote from a Pharma scientist:
"The vast majority of drugs - more than 90 per cent - only work in 30 or 50 per cent of the people," Dr Roses said. "I wouldn't say that most drugs don't work. I would say that most drugs work in 30 to 50 per cent of people. Drugs out there on the market work, but they don't work in everybody."
Here is that word "work" again. What does it mean? Well, let's take such a common condition as heart disease. What does heart disease do to a person? Well, it can do many things, including giving him/her such symptoms as chest pain, shortness of breath, dizziness and palpitations, to name a few. These symptoms may have at least two sets of implications: 1) they are bothersome to the individual, and in this way may impair his/her enjoyment of life, and 2) they may signal either a present or a future risk of a heart attack. Why are heart attacks important? Well, they are important because one may kill the person who is having it, or one (or several) may weaken the heart to the point of substantial disability and thus a deterioration in the quality of life. So, there certainly seems to be a good rationale to prevent heart disease either from happening in the first place or from worsening when it's already established.


Now, what's available to us to prevent heart disease? Well, some think that lowering one's cholesterol is a good thing. OK, let's go with that. What is the sign that the statins (cholesterol-lowering drugs) "work"? What would it look like if "working" were simply about lowering the cholesterol? Say your total cholesterol is 240. You go on a statin, and in 6 months your total cholesterol is 238. Your cholesterol was lowered; it worked! Well, yes, but if you are asking what this 2-point drop really accomplishes, you are beginning to understand the meaning of "work." So, just intuitively, we can say that there needs to be a certain, perhaps "clinically significant," drop in the total cholesterol in order for us to say that the drug "worked."


Great! Now we are sidling up to the real issue: What constitutes a "clinically significant" drop in cholesterol? Is it some arbitrary number that looks high enough? Probably not. How about a drop that correlates with a drop in the risk of the actual condition we are trying to impact, heart disease? Say, a 40-point drop, or getting to below 200, may be the right threshold for the "works" judgment. Ah, but there is yet another question to ask: How often does this type of a drop lead to a reduction in heart disease? Is it always (not likely), the majority of the time (rarely), or at least some of the time (most likely in clinical medicine)? And what portion of that time do we consider satisfactory -- 60%? 40%? 20%? 2%?


Let me bring just one more layer into this discussion. Many people walk around with heart disease and don't know that they have it. Some of these people are destined to have a heart attack and/or die from it. Many others are likely to die from something else before they ever experience any symptoms or signs of their heart disease. This raises the question of whether the statins' ability to reduce cholesterol, and hence reduce the risk of heart disease, is enough to say that the drugs "work." Perhaps "work" means that by lowering cholesterol (say, in the majority of those who take it) they reduce the risk of heart disease in some proportion of those who are at risk for it, and among that proportion whose risk is reduced they also reduce the risk of a heart attack in a few, and of death in even fewer.
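To make this dwindling arithmetic concrete, here is a back-of-the-envelope sketch in Python. Every percentage in it is invented purely for illustration -- none comes from an actual trial -- but the multiplication is the point:

    # Made-up numbers to illustrate the "dwindle" effect of chained proportions.
    patients            = 1_000
    cholesterol_lowered = patients * 0.70             # drug meaningfully lowers cholesterol in 70%
    risk_reduced        = cholesterol_lowered * 0.30  # heart-disease risk actually falls in 30% of those
    mi_prevented        = risk_reduced * 0.10         # a heart attack is averted in 10% of those
    deaths_prevented    = mi_prevented * 0.30         # a death is averted in 30% of those

    print(mi_prevented, deaths_prevented)             # -> 21.0 6.3 per 1,000 treated

On numbers like these, a drug can honestly "work" on its surrogate outcome in hundreds of patients while changing the outcomes we actually care about in a handful.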


So, to sum up, "works" is a loaded term. For the case we are discussing, there is what I call a "dwindle" effect, where the main outcome, cholesterol lowering, is likely to show a somewhat robust result. On the other hand, this (surrogate) outcome itself is not all that interesting when divorced from what we really care about -- symptoms, heart attacks and death. And I haven't even gone into the side of the equation where the patient gets to decide what "work" means for him/herself. The layers of the possible "works" are a cabbage that we all need to peel when discussing treatment plans with our clinicians and when reading news about new technologies.                     
 

If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!

Monday, June 25, 2012

Thank you for your support

A few months ago I quietly put a "Donate" button in the right margin of my blog. It was an experiment to see what it could do just by sitting there. But now that I have had a few donations come my way, I feel the need to explain why I decided to solicit donations.

I have now been blogging for a little over three years, and have a small but loyal following. One of the ways I have considered to support my blogging is by allowing ads and sponsorships on the site. Having considered that, I have ruled it out as a possibility for the moment. The reason is simple: even if I manage not to give in to the cognitive biases created by a financial relationship, there will still be the hazard of a perception of conflict. And I prefer to keep my blogging free of such real or imagined conflicts.

At the same time, when I do a meaty post, I spend considerable time on it, and often wish I had more time and resources to spend. So, I thought that direct donation through the site might be a good way to support my blogging.

The posts that seem most valuable, those deconstructing studies, take the longest time and the most effort to do. And it's no wonder: The amount of misinformation that is available to us is staggering, and it is not always clear how to filter it. Although I acknowledge that my interpretations are still informed by my own (conscious or unconscious) cognitive biases, I try to be as transparent as possible about where I am coming from.

Thank you all for your support and for encouraging me to continue this work.        

So, if you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!

Thursday, June 21, 2012

Unstructured medical record: Another case of "not everything that counts can be counted"?

Have you ever wished that instead of choosing a single answer on a multiple-choice exam you could write an essay to show how you are thinking about the question? It happened to me many times, particularly on my medical board exams, where the object seemed more to guess what the question writers were thinking than to get at the depth of my knowledge. And even though each question typically had a menu of 5 possible answers, the message was binary: right vs. wrong. There was never room for anything between these two extremes. Yet this middle ground is where most of our lives take place.

This "yes/no" is a digital philosophy, where strings of 0s and 1s act as switches for the information that runs our world. These answers are easily quantifiable because they are easily counted. But what are we quantifying? What are we counting? Has the proliferation of easily quantifiable standardized testing led us to more and deeper knowledge? I think we all know the answer to that question. Yet are heading in the same direction with electronic medical data? Let me explain what I mean.  

There was an interesting discussion yesterday on a listserv I am a part of about structured vs. unstructured (narrative) clinical data. I don't often jump into these discussions (believe it or not), but this time I had to make my views heard, because I believe they are similar to the views of many clinicians.

My understanding of one of the comments was that the narrative portions of the medical record are less valuable, since they are more difficult to digitize and, therefore, do not represent data per se. The narrative, it was stated, serves more as a memory and communication aid for the clinician, and furthermore it frequently misses the patient perspective. Right or wrong, what I walked away with is that "narrative parts of the record are not as valuable as the structured parts."

I responded:
Isn't the point of the narration to synthesize the structured data into a coherent whole that feeds into the subsequent plan? I completely agree that the patient perspective is often missing, but the two are not mutually exclusive. In fact, I see the synthesis of both narratives giving rise to most valuable interpretation and channeling of the "objective" data. 

When I was in practice, I often had patients referred to me in consultation for lung issues. And while the S and O parts of the SOAP note (Subjective, Objective, Assessment, Plan) lent themselves relatively naturally to structured data, the A and P parts really did not. In the A I waxed poetic about how the S and the O fit together for me and why (why I thought that diagnosis X was more likely than diagnosis Y), and in the P I discussed why I was making the recommendations I was making. And while... this was a way to communicate with other docs and to jog my memory about the patient when I saw him/her next, this was the most valuable part for me, since it already contained the cognitive piece of the process. And yet this was the part that was most challenging to fit into the predetermined structure of an EMR. 

The richness of patient participation would come in a dialogue between clinicians and their patients, which I still think requires a narrative, no? 
And then someone else piped in and talked about two-dimensional holes being forced to hold the multidimensionality of our health and being, and the disruption that this represents. And I have to say this is exactly my recollection of my interactions with the (very early) EMR that my group adopted. I loved to sit and write my SOAPs by hand, even the S and the O, but especially the A and the P. There was something about the process that solidified and imprinted the particular patient for me. The printed dictation was somehow secondary to that, and I always tended to refer to my hand-written note. This is how I knew my patients. That was slow medicine.

Perhaps this was a byproduct of how I grew up -- largely analog, not anywhere near as digital as the younger generations, and much more inclined to put my thoughts on paper through the interaction of my brain, hand and pen. Strangely, today I have a hard time writing by hand, and my preferred method is to type on my keyboard. So, perhaps some brain rewiring has taken place. All I can say is this: for me the act of writing down my thoughts about the patient made the encounter real and memorable. Even more, I think it lent a certain level of respect to the individuality of each encounter. It is possible that if I were in practice today, I could achieve the same by typing my note.

But the writing medium is somewhat separate from the "structured vs. unstructured" question. The "ideal" structured record is all about yes/no checkboxes. Having to fit our analog thoughts into such a limited and limiting digital environment is distracting. And a lot of data from brain science suggest that it takes us a while to get back to the same level of focus on a task or a thought once we have been distracted. A clinical encounter is woefully brief, and these distractions can and do reduce its efficiency and integrity.
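To make the contrast concrete, here is a hypothetical sketch in Python -- the field names and clinical details are entirely invented -- of what happens when a SOAP note is forced into structure. The S and the O decompose into discrete, countable fields readily; the A and the P resist, because they are synthesis and reasoning:

    # Hypothetical example; invented fields and clinical details.
    soap_note = {
        # S and O map naturally onto structured, countable fields...
        "subjective": {"chief_complaint": "dyspnea", "duration_weeks": 6, "smoker": False},
        "objective": {"spo2_percent": 91, "crackles_on_exam": True, "resp_rate": 22},
        # ...while A and P are narrative: free text or nothing.
        "assessment": ("Subacute dyspnea with hypoxemia and bibasilar crackles; "
                       "an interstitial process seems more likely than heart "
                       "failure here because..."),
        "plan": ("HRCT of the chest and full PFTs; deferring biopsy for now; "
                 "discussed the reasoning and trade-offs with the patient."),
    }

Checkboxes can count the first two sections; no checkbox can hold the "why" in the last two.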

As a health services researcher, I rely on digital data for my work. Yet as a clinician I railed against it, as many clinicians continue to do today. Is the answer to abandon all electronic pursuits? Of course not. But I am baffled by the attitude that it is the clinician's duty to adapt to the digital way of thinking rather than the other way around. There does not seem to be a recognition that the purpose and meaning of a clinical record is different for a clinician from what it is for a researcher or a policy maker. Yet current EMR development seems to focus on the latter two constituencies, virtually ignoring the clinical setting. Given our vastly improved computing capabilities over the last decade, why does a clinician still have to think like a computer? Moreover, if medicine is indeed a mix of art and science, do we really want our doctors to fit strictly into the digital model?

I believe this attitude is a lazy way out for developers. They are the ones who need to step up and create an electronic record that does not gratuitously disrupt the clinical encounter. This record needs to fit the workflow of clinical medicine like a glove. We cannot wait for some miracle to come along and magically transform our sick healthcare system. Today's EMR will not succeed unless it takes into account the art of medicine. The narrative parts of the record must be preserved and enriched by patient collaboration, not eliminated in the interest of easy bean counting. Because, as several smart people have told us, "not everything that counts can be counted, and not everything that can be counted counts." It's time to pay attention.

If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!

Friday, June 15, 2012

Plagiarism and advertising and COI, oh my!

Update #4, 7:45 AM Eastern, Sunday, June 17 (Happy Father's Day, everyone!)
As of this morning, the HealthWorks Collective has taken down the story as well. What is interesting to me is that neither OneMedPlace nor the HealthWorks Collective has put any explanation on its respective site, and the reader is essentially consigned to finding "ERROR 404." I sure hope that this is not an attempt by either organization to sweep the whole thing under the rug. 


Update #3, 6:00 PM Eastern (and last for tonight, I hope)
Another e-mail from Matt Margolis, in which he requested that I let my readers know that 
...we took down our piece after an internal discussion, before you and I made any contact. Hence my request to remove a reference to us.
Still no answer on Aviisha -- perhaps it will appear in the "similar piece" they are planning to publish next week. Looking forward to it. 


Update #2, 4:35 PM Eastern
Another exciting update for y'all. A few hours ago I got a message from a Matt Margolis at OneMedPlace informing me that the company had taken the post down after realizing their mistake. Well, why don't I just share the whole message (emphasis mine)?
Hello Marya,

Thank you for reaching out to us. We quickly realized there was an error in attribution and have since pulled the story. We would appreciate at this time that you do not include our name or our article in your post. However, we will be publishing a similar piece next week that will incorporate some of the facts in the NYT story. At which time, I am happy to answer any questions to help give depth to your piece.


Best,
Matt Margolis
Managing Editor, OneMedPlace
I responded asking for "more depth" with respect to whether Aviisha is a client of OneMedPlace. I will let you know what I hear when I hear it. And by the way, the HealthWorks Collective still has the story on its page.


Update #1, 1:45 PM Eastern
As of right now, the OneMedPlace story page has been taken down. No one from OneMedPlace has communicated with me at this point. The story is still up on the HealthWorks Collective site here.

About four years ago I decided that it was time to learn more about the recent abundant advances in brain science. Since then, I have read avidly on neurobiology, behavioral economics and decision science. I have learned about our predictable irrationality, biophilia, heuristics and biases, and our drive to create linear explanations for phenomena where none exist. I also learned about priming, where subtle messages delivered before a task (did you know that you can, for example, get kids to improve more on their exams by commending their hard work rather than their native brilliance?) influence the outcome of the task.

It is in this context of (some) understanding about how the human brain assembles its information pathways that I read this story from yesterday's OneMedPlace News (also reprinted by The HealthWorks Collective here under the byline of Herina Ayot, the Managing Editor for OneMedPlace). This story, about a correlation between severe sleep apnea and cancer, starts out thusly:
Two new studies have found that people with sleep apnea, a common disorder that causes snoring, fatigue and dangerous pauses in breathing at night, have a higher risk of cancer. The new research marks the first time that sleep apnea has been linked to cancer in humans.
About 28 million Americans have some form of sleep apnea, though many cases go undiagnosed.
[...] For sleep doctors, the condition is a top concern because it deprives the body of oxygen at night and often coincides with cardiovascular disease, obesity, and diabetes.
All of this is true and truly concerning. The next two paragraphs state
In light of the recent studies, Aviisha Medical Institute, LLC is taking $200 off the cost of its home sleep test, which was originally $449.49, and offering free assessments for the duration of May. The special offer is intended to encourage the public to get tested for sleep apnea and raise awareness about the deadly consequences of untreated apnea. Studies estimate that 85% of sleep apnea sufferers don’t know they have the condition.
One may speculate that other diagnostic technologies developers may promote offers in light of this newfound cancer correlation, as well.
This is when I got a little uncomfortable thinking that this is an advertisement rather than a story. And the final statement really got my hackles up:
Although the study did not look for it, study author Dr. Miguel Angel Martinez-Garcia, of La Fe University and Polytechnic Hospital in Spain, speculated that treatments for sleep apnea like continuous positive airway pressure, or CPAP, which keeps the airways open at night, might reduce the association.
And how does sleep apnea cause cancer?
Lead author Dr. F. Javier Nieto, chair of the Department of Population Health Sciences at the University of Wisconsin School of Medicine and Public Health, commented that five times the risk of cancer is more than just a statistical anomaly. Previous studies in animals have shown similar results, while other studies have linked cancer to possible lack of oxygen or anaerobic cell activity over long periods of time, therefore, it’s possible poor breathing fails to oxygenate the cells sufficiently. 
But then even more happened. From the very beginning of the article, I had sensed something familiar in it. I recalled having seen this story before, about the two studies presented at the American Thoracic Society meeting last month. Dutifully clicking on the link provided in the first paragraph of Ayot's story, I found myself on the NYT's "Well" blog, reading the post from May 20, 2012, by Anahad O'Connor. Here is how it starts:
Two new studies have found that people with sleep apnea, a common disorder that causes snoring, fatigue and dangerous pauses in breathing at night, have a higher risk of cancer. The new research marks the first time that sleep apnea has been linked to cancer in humans.
About 28 million Americans have some form of sleep apnea, though many cases go undiagnosed. For sleep doctors, the condition is a top concern because it deprives the body of oxygen at night and often coincides with cardiovascular disease, obesity and diabetes.
And then, disappointingly, toward the end of the post:
Although the study did not look for it, Dr. Martinez-Garcia speculated that treatments for sleep apnea like continuous positive airway pressure, or CPAP, which keeps the airways open at night, might reduce the association.
A couple of things shocked me (in addition to the final statement about CPAP):
1). Ayot's story was reprinted almost verbatim (with the exception of the Aviisha advertisement) from O'Connor's story. There did not seem to be an attribution, unless linking to the original post counts as one. Please, someone who is well versed in this, tell me if this is an acceptable way to attribute. I'll tell you now that in academic circles this would be (and has been) called plagiarism. As you may recall, I am pretty sensitive to this, having had my work plagiarized recently.
2). The thinly veiled advertisement (inserted into the body of a story which is practically copied from the NYT word for word). The advertisement would not bother me if it had stayed on the OneMedPlace web site -- after all, they seem to be a PR agency. But it was reprinted on the HealthWorks Collective's site as a legitimate news item, and I don't believe that HWC is a purveyor of advertorials.

I browsed the web site of Ayot's employer OneMedPlace to see whether there was evidence that Aviisha is a client, but did not find any, though my search was admittedly perfunctory. I have reached out to the company for comment and to understand whether a conflict of interest may exist with Aviisha, but have not heard back at this time. I will update the post if and when I hear from them.

So drawing on my limited understanding of how the brain works, here is what I am thinking this piece aims to accomplish:
1). Create an awareness of a condition that is apparently common and largely undiagnosed, and do it using words from a high-impact publication
2). Prime the reader with the idea that we understand how it causes cancer (if only in laboratory animals, maybe)
3). Set up Aviisha (and "other diagnostic technology developers") as solution providers
4). Create a linear path to CPAP as the answer

But this is just my uneducated guess at how the human brain may perceive this story. Of course, I could just be playing right into my cognitive biases.   

If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!

Wednesday, June 13, 2012

A FORCE against disease mongering

Have you been over to The Oransky Journal lately? If not, go and see what is happening there. What is happening is a microcosm of the larger debate we are having about detection and diagnosis of real disease versus overdiagnosis of phantom conditions whose treatment is worse than anything that the potential disease may deliver.

The issue is as follows. In his talk at TEDMED in April, Ivan gave an excellent and measured presentation about the folly of pre-disease classifications and the harm they can bring. As my readers are well aware, this is a subject of great interest to me -- after all, it is a travesty that contact with the so-called "healthcare" system is the third leading cause of death in the US, and that overtreatment costs us at least 10 cents of each healthcare dollar, and probably much more (you will find a slice of my posts on this issue here). So, Ivan's talk was timely and cogent.

After he posted the talk on his blog, he received a letter from a group called FORCE (Facing Our Risk of Cancer Empowered), which, as it turns out, coined the word "previvor," one of the many words Ivan used to illustrate the philosophy of disease mongering. The letter voiced a vigorous objection, attributing Ivan's use of the word to a "misunderstanding" of its meaning. But what really happened?

Apparently, "previvor" defines a group of people who are at a heightened risk for cancer, but have not yet been diagnosed. It seems that the majority of FORCE's constituency consists of women with the BRCA gene mutations, which put them at an extraordinarily high risk of several cancers, most notably breast and ovarian. Moreover, these cancers tend to occur at an early age, and are generally quite a bit more aggressive than those not associated with these mutations. We are not talking a trivial rise in the risk either; BRCA1, for example, raises one's lifetime risk for breast cancer to about 80%! To mitigate this risk, many women with these types of mutations undergo prophylactic mastectomies and oophorectomies. These are life-changing events, and their genetic make-up hangs like a Damocles' sword over the offspring of these women as well. So, what's the problem with using whatever word suits them?

The issue is the group's definition of this neologism "previvor." As quoted in Oransky's post (italics mine):
“Cancer previvors” are individuals who are survivors of a predisposition to cancer but who haven’t had the disease. This group includes people who carry a hereditary mutation, a family history of cancer, or some other predisposing factor. The cancer previvor term evolved from a challenge on the FORCE main message board by Jordan, a website regular, who posted, “I need a label!” As a result, the term cancer previvor was chosen to identify those living with risk. The term specifically applies to the portion of our community which has its own unique needs and concerns separate from the general population, but different from those already diagnosed with cancer.
So, the definition is quite broad, as you can see, especially the "some other predisposing factor." Who doesn't have one? Just by virtue of being alive we have predisposing factors to many diseases, including cancer. And aging is one of the strongest predisposing factors to cancer as well. The concern is that a broadly defined term like this plays right into our national paranoia about our health and our enthusiasm for screening as the primary mode of prevention. And if you really don't feel well informed about why screening is not all it's cracked up to be, I urge you to dig through the annals of this site thoroughly (if you don't have much time, you can get a solid primer on the issue from my book). In my view, given the extent of the harm from overdiagnosis and overtreatment, Oransky's call-out of this word in the ultra-visible forum of TEDMED was a public service.

And indeed, it turned out this public service has gone well beyond just delivering the information. The discussion that ensued over the last couple of days with FORCE has shown what this organization is made of. An 80% lifetime risk of breast cancer is a grave matter, and the group is an important force in advocating for these patients and supporting their families. But as it turns out, it stands for even more than that. I commend Dr. Friedman, the Executive Director of the group, for being open to narrowing the definition of the term "previvor." This willingness signifies a real desire to do the right thing not only for her constituency, but also for the public at large. Even more, she should be proud that her organization is taking a stand against disease mongering.

If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!

Tuesday, June 12, 2012

Healthfinder.gov: Education or indoctrination?

Ever heard of healthfinder.gov? It's a web site from the US Department of Health and Human Services
...where you will find information and tools to help you and those you care about stay healthy.
Sounds like a laudable goal, right? Great! Now, help me! Here is the "help" that I found when I went to the page called "Colorectal Cancer Screening: Questions for the doctor":

What do I ask the doctor?

It helps to have questions for the doctor written down ahead of time. Print out these questions and take them to your next appointment. You may want to ask a family member or close friend to come with you to take notes.
So far so good. But here is the list that follows:
  • What puts me at risk for colorectal cancer?
  • When do I need to start getting tested?
  • How often do I need to get tested?
  • What screening test do you recommend? Why?
  • What’s involved in screening? How do I prepare?
  • Are there any dangers or side effects involved?
  • How long will it take to get the results?
  • What can I do to reduce my risk of colorectal cancer?
Note the wording: "When do I need to start getting tested?" "How often do I need to get tested?" And these "needs" come well before the "why?" In fact, the "why" never really comes. The oblique "why" about which test is recommended is too little too late. The real "why" is why, or even whether, I need to get tested in the first place. I am happy to see a question on the dangers of screening, but again it leaves plenty of room for the clinician to minimize and patronize.

The list of questions is built upon one (erroneous) assumption: that everyone is bound to perceive the risk-benefit equation of colorectal cancer screening the same way. We know this is false, and each person needs to make an individual decision based on what we know today and according to the values he/she places on the outcomes. The way the questions are written, they simply reinforce the bullying attitude of the screening bias, making those who swim against this tide feel irrational and unreasonable. But may I point out that some of us spoke out against universal mammography screening even before a more cautious approach became the mainstream recommendation? So perhaps there are good reasons to be more cautious with screening for everything, even colon cancer.

Science evolves; our knowledge evolves. What we think we know today will be modified tomorrow. I take strong exception to this dogmatic and one-sided formulation of how to have a discussion about testing whose risk-benefit profile may not (and should not) elicit the same unbridled enthusiasm from everyone. So please, healthfinder.gov, rethink your "helpful" questions so as to educate rather than indoctrinate.

Hat tip to @DCPatient for pointing me to this page.

If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!

Monday, June 4, 2012

Why "efficacy" does not equal "effectiveness"

As you may have noticed, one of the biggest medical conferences is in progress right now in Chicago -- the annual meeting of the American Society of Clinical Oncology. It is the premier meeting of cancer doctors, where new data are the order of the day. This stream of data is reflected in a constant barrage of media coverage. One of the stories I saw today serves as a great illustration of the title of this post: why efficacy does not equal effectiveness. Although I do discuss this in my book, I thought it might be nice to apply these ideas to a current story.

The MedPage story opens with this:
Older patients with advanced cancers treated in the community had survival that fell short of that for patients who received the same regimens in clinical trials, analysis of a large government database showed.
Most of the differences were small in absolute terms, except for patients with stage IV colorectal cancer treated with the FOLFIRI chemotherapy regimen. Patients in the community had a median survival of 16.1 months, which was 30% lower than that of patients treated with the same regimen in a cooperative-group randomized clinical trial.
Moreover, survival among FOLFIRI-treated older patients was 4 months shorter than that of older patients treated with FOLFOX chemotherapy, Elizabeth B. Lamont, MD, reported here at the American Society of Clinical Oncology meeting. 
This is a common conundrum, and a poster child for the distinction between efficacy and effectiveness. So, what are these two "e"s? Efficacy is essentially a statistical comparison of the outcomes of interest between patients who undergo a particular treatment and those who do not, or who are treated with a placebo. Efficacy is measured under the controlled circumstances of a clinical trial, where only very specific patients are enrolled, very specific treatments are administered, and very specific outcomes are monitored. These trials usually randomize patients to either the treatment or the placebo, thus ensuring that treated and untreated patients are the same in all ways save for the treatment under examination. Efficacy is frequently represented by what we call surrogate outcomes, such as laboratory measurements or radiographic tests. In cancer studies, for example, a frequent surrogate outcome is so-called progression-free survival: the duration of time from treatment that a person is alive AND the tumor has not shown any evidence of growth on a scan. Another frequently used surrogate measure is blood pressure as a marker for heart disease. These are surrogates because, although correlated with such important outcomes as death and heart attacks, respectively, they themselves do not tell us with any precision how our interventions impact these ultimate measures. I will not belabor at this time why we rely on efficacy and surrogate outcomes, since I go into detail about that in the book.

Effectiveness, on the other hand, is something altogether different. Effectiveness tells us what actually happens in the real, messy world to outcomes that matter, such as death and quality of life, in conjunction with the treatment in question. We have known for a long time that the outcomes we see in naturalistic studies are often much less spectacular than those reported in RCTs of efficacy. Why is this? And, more importantly, which do we believe? The second question is easier to answer than the first: we believe what happens in the real world, because it is precisely what happens in the real world, rather than in the laboratory of clinical research, that matters. As to why this difference exists, there are many reasons, most of which I have discussed elsewhere on this blog and in Between the Lines. Some have to do with patient selection, which in real life tends to be less restrictive than in RCTs. For example, individuals who are more ill may get an intervention that was intended for those with lesser illness severity. In this population the intervention may not prove as effective as in those who are not as ill. This is called "confounding by indication," and we talked about it most recently here. Another reason may be that comorbid conditions patients have in real life, which tend to be excluded from RCT populations, attenuate the impact of the intervention. When looking at mortality in cancer, patients with end-stage heart disease may be excluded from the RCT but treated in the wilds of clinical practice. And this treatment may give us a Pyrrhic victory, where the cancer is indeed held at bay but the patient dies of his heart disease. And here is yet another reason for the efficacy-effectiveness disconnect: attribution. In clinical trials there is a meticulous process that has to be followed in order to attribute a death to a particular disease. In real life -- not so: death certificates are a notoriously dicey source of information on causes of death.
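To see the dilution mechanism in action, here is a toy simulation in Python. Every number in it is invented (the survival distributions, the 40% comorbidity share); it is meant only to show how enrolling real-world patients with competing risks shrinks a survival benefit that an RCT would measure cleanly:

    import random

    random.seed(0)

    def survival_months(on_drug, comorbid):
        """Toy model: the drug lengthens cancer survival, but a competing
        illness (the kind an RCT would exclude) can cut life short anyway."""
        cancer = random.gauss(20 if on_drug else 14, 4)  # drug "works" on the cancer
        if not comorbid:
            return max(cancer, 0)
        other = random.gauss(12, 4)                      # competing cause of death
        return max(min(cancer, other), 0)

    def median(values):
        values = sorted(values)
        return values[len(values) // 2]

    # Efficacy: trial population, comorbid patients excluded
    trial = [survival_months(True, False) for _ in range(10_000)]
    # Effectiveness: community population with, say, 40% serious comorbidity
    community = [survival_months(True, random.random() < 0.4) for _ in range(10_000)]

    print(f"trial median:     {median(trial):.1f} months")
    print(f"community median: {median(community):.1f} months")

With these made-up numbers, the community median lands a few months short of the trial median, even though the drug's effect on the cancer itself is identical in both groups; the entire shortfall is patient mix.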

So here are some of the challenges with applying RCT data to the real world, illustrated so palpably in the story we started out with. Please, do not misunderstand my message: I am not saying that RCTs are useless. What I am saying (I must sound like a broken record by now) is that we need different types of studies to see the whole picture. RCTs by their nature are exclusive undertakings whose findings are only narrowly translatable to the real world. Naturalistic observational data are key components of building the entire jigsaw puzzle of how our interventions really work.                

If you like Healthcare, etc., please consider a donation (button in the right margin) to support development of this content. But just to be clear, it is not tax-deductible, as we do not have a non-profit status. Thank you for your support!