
Thursday, June 21, 2012

Unstructured medical record: Another case of "not everything that counts can be counted"?

Have you ever wished that instead of choosing a single answer on a multiple choice exam you could write an essay to show how you are thinking about the question? It happened to me many times, particularly on my medical board exams, where the object seemed to be to guess what the question writers were thinking rather than to plumb the depth of my knowledge. And even though each question typically had a menu of 5 possible answers, the message was binary: right vs. wrong. There was never room for anything between these two extremes. Yet this middle ground is where most of our lives take place.

This "yes/no" is a digital philosophy, where strings of 0s and 1s act as switches for the information that runs our world. These answers are easily quantifiable because they are easily counted. But what are we quantifying? What are we counting? Has the proliferation of easily quantifiable standardized testing led us to more and deeper knowledge? I think we all know the answer to that question. Yet are heading in the same direction with electronic medical data? Let me explain what I mean.  

There was an interesting discussion yesterday on a listserv I am a part of about structured vs. unstructured (narrative) clinical data. I don't often jump into these discussions (believe it or not), but this time I had to make my views heard, because I believe they are similar to the views of many clinicians.

My understanding of one of the comments was that the narrative portions of the medical record are less valuable, since they are more difficult to digitize and, therefore, do not represent data per se. The narrative, it was stated, serves more as a memory and communication aid for the clinician, and furthermore it frequently misses the patient perspective. Right or wrong, what I walked away with is that "narrative parts of the record are not as valuable as the structured parts."

I responded:
Isn't the point of the narration to synthesize the structured data into a coherent whole that feeds into the subsequent plan? I completely agree that the patient perspective is often missing, but the two are not mutually exclusive. In fact, I see the synthesis of both narratives giving rise to the most valuable interpretation and channeling of the "objective" data.

When I was in practice, I often had patients referred to me in consultation for lung issues. And while the S and O parts of the SOAP note (Subjective, Objective, Assessment, Plan) lent themselves relatively naturally to structured data, the A and P parts really did not. In the A I waxed poetic about how the S and the O fit together for me and why (why I thought that diagnosis X was more likely than diagnosis Y), and in the P I discussed why I was making the recommendations I was making. And while this was a way to communicate with other docs and to jog my memory about the patient when I saw him/her next, it was also the most valuable part for me, since it already contained the cognitive piece of the process. And yet this was the part that was most challenging to fit into the predetermined structure of an EMR.

The richness of patient participation would come in a dialogue between clinicians and their patients, which I still think requires a narrative, no?

And then someone else piped in and talked about two-dimensional holes being forced to hold the multidimensionality of our health and being, and the disruption that this represents. And I have to say this is exactly my recollection of my interactions with the (very early) EMR that my group adopted. I loved to sit and write my SOAPs by hand, even the S and the O, but especially the A and the P. There was something about the process that solidified and imprinted the particular patient for me. The printed dictation was somehow secondary to that, and I always tended to refer back to my hand-written note. This is how I knew my patients. That was slow medicine.

Perhaps this was a byproduct of how I grew up -- largely analog, not anywhere near as digital as the younger generations, and much more inclined to put my thoughts on paper through the interaction of my brain, hand and pen. Strangely, today I have a hard time writing by hand, and my preferred method is to type on my keyboard. So, perhaps some brain rewiring has taken place. All I can say is this: for me the act of writing down my thoughts about the patient made the encounter real and memorable. Even more, I think it lent a certain level of respect to the individuality of each encounter. It is possible that if I were in practice today, I could achieve the same by typing my note.

But the writing medium is somewhat separate from the "structured vs. unstructured" data question. The "ideal" structured record is all about yes/no checkboxes. Having to fit our analog thoughts into such a limited and limiting digital environment is distracting. And a lot of data from brain science suggest that it takes us a while to get back to the same level of focus on a task or a thought once we have been distracted. A clinical encounter is woefully brief, and these distractions can and do reduce its efficiency and integrity.

As a health services researcher, I rely on digital data for my work. Yet as a clinician I railed against it, as many clinicians continue to do today. Is the answer to abandon all electronic pursuits? Of course not. But I am baffled by the attitude that it is the clinicians' duty to adapt to the digital way of thinking rather than the other way around. There does not seem to be a recognition that the purpose and meaning of a clinical record are different for a clinician than for a researcher or a policy maker. Yet current EMR development seems to focus on the latter two constituencies, virtually ignoring the clinical setting. Given our vastly improved computing capabilities over the last decade, why does a clinician still have to think like a computer? Moreover, if medicine is indeed a mix of art and science, do we really want our doctors to fit strictly into the digital model?

I believe that this is the lazy way out for developers. They are the ones who need to step up and create an electronic record that does not gratuitously disrupt the clinical encounter. This record needs to fit the work flow of clinical medicine like a glove. We cannot wait for some miracle to come along and magically transform our sick healthcare system. Today's EMR will not succeed unless it takes into account the art of medicine. The narrative parts of the record must be preserved and enriched by patient collaboration, not eliminated in the interest of easy bean counting. Because, as several smart people tell us, "not everything that counts can be counted, and not everything that can be counted counts." It's time to pay attention.


Monday, March 5, 2012

Goldfinger's pan-man-scan: Available at a hospital near you

As my regular readers know all too well, I think a lot about our proneness to cognitive biases and how they impact our healthcare choices. I have been reading Sam Harris's book The Moral Landscape (Free Press, 2010), where he tries to lay a scientific foundation for our values and beliefs. It is rather a dense book, full of linguistic convolution and intellectual contortions. And although I do not agree with Harris on some of his points (I think he would say that my disagreement stems more from my preconceived notions and values than from the weakness of his arguments), there are some very enlightening ideas in the book.

On page 123 he discusses bias:
...we know that people often acquire their beliefs about the world for reasons that are more emotional and social than strictly cognitive. Wishful thinking, self-serving bias, in-group loyalties, and frank self-deception can lead to monstrous departures from the norms of rationality. Most beliefs are evaluated against a background of other beliefs and often in the context of an ideology that a person shares with others.
Doesn't this echo my assertion that scientific knowledge acquisition tends to be unidirectional? What occurs to me is that the combination of these emotional and ideological factors along with our cognitive predispositions gangs up to steer us toward doom. Too nihilistic? Well, that's not what I am going for. Bear with me, and I will show you how it is driving us into bankruptcy, particularly where the healthcare system is concerned.

We know a lot about how the human brain works now. Creatures of habit, bent on conserving energy, we develop many mental shortcuts that drive our decisions. Think of them like talking on your cell phone while driving: most of the time, under usual circumstances, you can pay enough attention to both driving and talking. But on occasion something unexpected comes along, and you find yourself plowed into the backside of an eighteen-wheeler, your face firmly pressed into an airbag, if you are lucky. So it is with these mental shortcuts: most of the time they serve us well, or at least don't get us into trouble, but just when we least expect it, the truth ambushes us and we suffer from an error in this habitual way of deciding.

Let's bring it around to medicine. Back in 1978, the dark ages, an intriguing paper was published by Casscells and colleagues in the New England Journal of Medicine. Here is what they did:
We asked 20 house officers, 20 fourth-year medical students and 20 attending physicians, selected in 67 consecutive hallway encounters at four Harvard Medical School teaching hospitals, the following question: "If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5 per cent, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing about the person's symptoms or signs?"
Here are the results:
Eleven of 60 participants, or 18 per cent, gave the correct answer. These participants included four of 20 fourth-year students, three of 20 residents in internal medicine and four of 20 attending physicians. The most common answer, given by 27, was 95 per cent, with a range of 0.095 to 99 per cent. The average of all answers was 55.9 per cent, a 30-fold overestimation of disease likelihood.
Since we are all experts on the positive predictive value of a test, I hardly have to go through the calculation. Briefly, though, and just to be complete: among 1,000 people, 1 has the disease and an additional 50 test falsely positive. The PPV then is 1/51, or about 2%. This is the chance that a person found to have a positive result actually has the disease. Yet nearly half of the respondents said 95%. What did you estimate? Be honest. And this was Harvard! What chance do the rest of us, mere mortals, stand before such formidable cognitive traps? We might as well throw in the towel and prepare for an onslaught of tail-chasing physicians and patients spinning their gears in pursuit of false positive results. Because once you let a positive finding out of the bag... Well, you get the picture. I would wager that this is at least in part what has hijacked our healthcare system, its finances and our reason. After all, as Sam Harris pointed out, it doesn't take much for us to form beliefs: no rational thinking required. How can I be so sure, you ask?
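If you want to check the arithmetic yourself, here is a minimal sketch of the calculation, assuming (as the original question implies) a perfectly sensitive test; the variable names and the snippet itself are mine, not part of the Casscells paper:

```python
# Positive predictive value (PPV) for the Casscells question:
# prevalence 1/1000, false positive rate 5%, sensitivity assumed to be 100%.
prevalence = 1 / 1000
false_positive_rate = 0.05
sensitivity = 1.0  # assumption: the question ignores false negatives

true_positives = prevalence * sensitivity
false_positives = (1 - prevalence) * false_positive_rate
ppv = true_positives / (true_positives + false_positives)

print(f"PPV = {ppv:.1%}")  # about 2%, not 95%
```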

Casscells and his team in their paper bring up the phrase "the pan-man scan." They use it to refer to a battery of 25 screening tests and the chance that all of them come back normal if the person is perfectly healthy. Care to take a guess? Twenty-eight per cent. That's 28%! This means that out of 10 patients who walk into the office healthy, about 7 will walk out with an abnormal finding and either be assigned a disease or be sent for further testing, even if it is only a repeat of the previously abnormal test. Now think back to your last "routine physical." Did you get "routine" blood work, a cholesterol test, a urinalysis? How many separate values were run? What was your risk of having an abnormal value, and what ensued when you did?
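The arithmetic behind that 28% is straightforward if we assume 25 independent tests, each with a 5% false positive rate in a healthy person (the independence assumption is mine, to keep the sketch simple):

```python
# Chance that a healthy person's 25-test "pan-man scan" comes back entirely normal,
# assuming independent tests, each with 95% specificity (5% false positive rate).
n_tests = 25
specificity = 0.95

all_normal = specificity ** n_tests        # about 0.28
at_least_one_abnormal = 1 - all_normal     # about 0.72

print(f"All tests normal: {all_normal:.0%}")
print(f"At least one abnormal: {at_least_one_abnormal:.0%}")
```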

The point here is once again to think about the pre-test probability of the disease and the characteristics of the test. Without this information there is no way to make an informed decision about a) whether or not to bother getting the test, and b) what to make of the result. But how do you rewire clinicians' and the public's brains to get out of these value-laden habitual errors that drive us to all kinds of bad behaviors?

A psychologist from Northwestern University in Chicago by the name of Bodenhausen studies the human brain's proclivity for stereotyping. How is this relevant to medical decision making? Well, in medicine we tend to think in stereotypes. What I mean is that when a middle-aged male smoker with diabetes and hypercholesterolemia comes to the ED with crushing substernal chest pain, we invoke the stereotype of an acute MI patient, and in this case such stereotyping allows us to make some good choices very rapidly. Where we fall into a trap is in the example above, where we stereotype a patient with a positive screening test into the group with the disease. While it is natural for our habit-addicted brains to do so, it lands us in hot water, individually and as a society. Bodenhausen, in this paper (subscription required) in the journal Medical Decision Making, states plainly that certain circumstances predispose us to stereotyping: complex decisions, cognitive overload, and happy, angry and anxious moods. Do you recognize any of these risk factors for stereotyping in medicine? I thought so.

So, what do we do? Here are 5 potential solutions to consider:
1. Go back and think some more.
Metacognition training early and often would be a great addition to the medical curriculum.
2. Get our medical appointments out of the microwave.
Give the clinician more cognitive time. Not possible, you say? Pity! That is the simplest answer.
3. Educate the public about these cognitive thought-traps and rational approaches to making medical decisions.
4. If we are so committed to the proliferation of medical stuff, then we really need better HIT infrastructure. We need bedside decision aids that automatically calculate the patient's post-test probability of having a disease, combining the pre-test probability with the test's characteristics (a sketch of such a calculation follows this list). Having these data before ordering tests might reduce some of the waste and tail-chasing. It may also decrease some of the harm that comes with over-utilization in our dogged pursuit of a definitive diagnosis.
5. Finally, the cost-benefit equation of innovations in medicine must begin to incorporate these costs to cognition. They may be higher than we have admitted so far.
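As a very rough sketch of what such a bedside aid (point 4 above) might compute, here is a hypothetical post-test probability calculator; the function name and example numbers are mine, not part of any existing EMR:

```python
def post_test_probability(pre_test, sensitivity, specificity, positive_result=True):
    """Update a pre-test probability of disease with the test's characteristics
    (Bayes' theorem), for either a positive or a negative result."""
    if positive_result:
        true_pos = pre_test * sensitivity
        false_pos = (1 - pre_test) * (1 - specificity)
        return true_pos / (true_pos + false_pos)
    false_neg = pre_test * (1 - sensitivity)
    true_neg = (1 - pre_test) * specificity
    return false_neg / (false_neg + true_neg)

# Hypothetical example: 1-in-1000 pre-test probability and a test that is
# 100% sensitive and 95% specific -- the Casscells scenario again.
print(f"{post_test_probability(0.001, 1.0, 0.95):.1%}")  # about 2.0%
```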

 

Tuesday, February 8, 2011

Medical decision making: More signal less noise, please!

It's official: I'm a country bumpkin! Driving in Boston last week I was distracted, annoyed, made anxious and confused by the constant traffic, billboards and signs. Even the highway markings confused me, particularly one indicating a detour to Storrow Drive East, which never materialized. Despite the fact that I know the geography of Boston like the back of my hand, I nearly went down the wrong streets multiple times, including driving the wrong way on some one-way roads. Yes, I am now the menace I used to save my prized driving language for in my younger days.

But it seems that over the years of my living away, there has been a sharp increase in the information thrown at me from all directions, accompanied by a decline in places to rest my gaze without suffering the perseveration of conscious processing. And while the value of this information is at best questionable, the sum total of this overstimulation is clearly confusion, wrong road choices and possibly a reduction in the safety of my driving. This whole experience reminded me of Thomas Goetz's distaste for how medical results are reported. If you have not seen him preach about it, you really should. Here is his excellent TED talk on the subject.


It is ironic that during this overwhelming city visit I also had the chance to speak to a doctor about "routine" preoperative testing and its value. Before surgery, it is recommended that a patient get a screening evaluation. Yet the components of this evaluation vary widely, and may include blood work, urinalysis, an electrocardiogram, a chest X-ray and the like. Although evidence suggests that most of these tests are useless at best, many institutions continue to order a shotgun panel of preoperative testing for everyone. This one-size-fits-all medicine results in reams of useless and distracting information, a high frequency of abnormal findings of questionable significance, a potential for harm, worry and needless healthcare spending.

In my particular conversation I asked the anesthesiologist what the pre-test probability was, for someone with my characteristics, of a useful chest X-ray result, and whether the fancy electronic medical record used by the hospital could help her determine this. While the answer to the former question was "probably exceedingly low," the answer to the latter was a definitive "no." So, with some elementary thinking, it became clear that a patient like me should not in fact be subjected to a chest X-ray, since any pathology found on one would likely represent a false positive finding, which would nevertheless require potentially invasive follow-up. And guess what? By focusing on the particular individual in the office, rather than on all comers, we could have gone through the entire menu of preoperative tests "routinely" ordered and eliminated most if not all of them. But my bet is that not all patients, not even all e-patients, know how to or are able to initiate this type of critical discussion. And yet what tests to obtain, if any, should always be a thoughtful and individualized decision. To approach testing in any other way is to risk generating noise, distraction and harm.

And this brings me back to Thomas Goetz's idea of redesigning how test results are reported. I love his idea. But to me what needs to happen before making the data patient-friendly, is making the decision-making provider-friendly. So, great idea, Mr. Goetz, but let us move it upstream, to the office, where the decision to get chest X-rays, cholesterols and urinalyses is made, and help the doctor visualize her patient's risk for a disease being present, the characteristics of the test about to be ordered, the probability of a positive test result, and all the downstream probabilities that stem from this testing, so as to put a positive test result in the context of the individual's risk for having the disease. Because getting the results of tests that perhaps should never have been obtained in the first place is following the GIGO principle. It is generating noise, distraction and detours going wrong way down one-way roads. And when applied to medicine, these are definitely unwelcome metaphors.