
Overdiagnosis: Fact vs. Fiction



Why the growing gap between diagnosis and mortality is a problem EM should be fighting

Next September, the Dartmouth Institute for Health Policy and Clinical Practice will hold a conference called “Preventing Overdiagnosis,” a collaboration with the British Medical Journal and Bond University. The Dartmouth Institute, creator of the Dartmouth Atlas, was the first organization to highlight significant practice variability in the United States. EP Monthly has highlighted the issue of practice heterogeneity over the last four years (see columns by Mark Plaster on the Dartmouth Atlas study, on breaking down the healthcare reform bill, and on why ACEP should join the Choosing Wisely campaign), but the universe of emergency medicine has largely resisted efforts to admit and address this problem. Why? Do we not believe it to be true? Do we assume that it is unavoidable? Or is “overdiagnosis” simply ivory tower nonsense with no connection to the day-to-day practice of emergency medicine? To understand these issues, we first need to define our terms.


What is Overdiagnosis?
Overdiagnosis occurs when patients without symptoms are diagnosed with a “disease or condition” that, if left untreated, would not cause them to experience significant untoward effects or early death. For example, a patient might have a clinically insignificant PE; everybody probably has PEs at some point. The million-dollar question is this: if the PE does not cause significant symptoms, do we need a screening test to make the diagnosis? Undoubtedly, some would argue that this does not apply to EM because we do not perform screening tests; we perform diagnostic tests, since all of our patients have symptoms. I would argue that overdiagnosis still applies to us for two reasons:

  • Just having a diagnosis doesn’t always explain the cause of the symptoms for which the patient came to the ED, let alone predict prognosis (i.e., what to tell the patient to expect in the future).
  • Diagnostic accuracy for clinical intuition, physical exam, lab tests, and imaging is often overestimated and misunderstood.

In 2011, Dr. Gil Welch published “Overdiagnosed: Making People Sick in the Pursuit of Health.” Even though the book was written from a primary care perspective, Welch made observations applicable to both screening and diagnostic testing. He pointed out (and others such as Drs. Ray Moynihan and Jerry Hoffman have concurred) that advancing technology and higher screening rates result in evolving definitions of “abnormal.” For example, hypercholesterolemia was defined as >240 mg/dL in 1995, but since 1998 has been defined as >200 mg/dL. Lowering the threshold that distinguishes “disease” from “no disease” increases the rate of reported disease. If the cases captured by the lower threshold were clinically consequential, we would expect mortality to increase as the number of cases increases. This concept is depicted in Figure 1a. However, what we have actually seen for breast, thyroid, and prostate cancer, ADHD, and gestational diabetes is depicted in Figure 1b: increases in the diagnosis of these diseases have not been associated with more symptoms of disease, just more awareness and more treatment. Some respond that death rates have remained steady despite increased diagnosis simply because treatment options have improved concurrently with diagnostic prowess. Is this plausible? Perhaps if we’re talking about one or two diagnoses, but when Figure 1b is repeated across a spectrum of diseases, it seems unlikely. I submit that the greater part of those new diagnoses fall within the blue portion of Figure 1c and are actually overdiagnosis.
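To put rough numbers on that threshold effect, here is a back-of-the-envelope sketch in Python. The distribution below is an illustrative assumption (a bell-shaped spread of total cholesterol values chosen for demonstration), not epidemiologic data; the point is only that moving the cutoff from 240 to 200 mg/dL multiplies the number of people labeled “diseased” without any change in the population itself.

```python
# Back-of-the-envelope illustration (not epidemiologic data): how lowering a
# diagnostic threshold inflates the number of people labeled "diseased" even
# though the underlying population has not changed at all.
from statistics import NormalDist

# Hypothetical, illustrative distribution of total cholesterol in adults.
# The mean and standard deviation are assumptions chosen for demonstration only.
cholesterol = NormalDist(mu=200, sigma=40)  # mg/dL

old_threshold = 240  # "hypercholesterolemia" before the guideline change
new_threshold = 200  # after the guideline change

fraction_old = 1 - cholesterol.cdf(old_threshold)
fraction_new = 1 - cholesterol.cdf(new_threshold)

print(f"Labeled diseased at >{old_threshold} mg/dL: {fraction_old:.0%}")
print(f"Labeled diseased at >{new_threshold} mg/dL: {fraction_new:.0%}")
print(f"Relative increase in 'disease': {fraction_new / fraction_old:.1f}x")
```

With these assumed numbers, the lower cutoff roughly triples the reported “disease,” even though nobody’s physiology changed.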

[Figure 1a-c]

Is Overdiagnosis Pertinent to Emergency Medicine?
I believe it is. In 1959, when Barritt published the first (and only) randomized controlled trial assessing heparin for pulmonary embolism (35 patients, no blinding, no placebo), the reported mortality rate was 20%. Most contemporary PE trials report about 0.2% mortality. If the treatment now is the same as in 1959 (heparin), why is mortality so much lower in 2013? I contend that we are diagnosing more cases of PE, many of which are clinically inconsequential. Researchers debate whether there has been a real decrease in mortality, but nobody argues that mortality has increased, so the increased number of PE diagnoses is represented by the yellow portion of Figure 1c. Additionally, there may be increases in cancer risk from CT radiation, as well as contrast-induced nephropathy. As stated by Newman and Schriger, “When the goal is to detect all clots, the harms of testing outpace the threat of disease.” In fact, they quantified that, in one study, testing for PE prevented six PE deaths and 24 major non-fatal PE events but caused 36 deaths and 37 non-fatal major medical events. Translation: many of the PEs detected by CT are clinically inconsequential in that they will not increase mortality and may not even be the cause of the patient’s symptoms. Treating these inconsequential PEs is an all-risk, no-benefit proposition, since the PE would have resolved without heparin.


How is this possible? PE-protocol CTs are the current diagnostic standard of care, and they detect more PEs than V/Q scans. Based on PIOPED II data, PE CTs are neither 100% sensitive nor 100% specific (see http://tinyurl.com/bk7d8vh for a more detailed analysis), and the risks of CT include radiation-related cancer and contrast nephropathy. The trick is in knowing which patients have an inconsequential PE. Since that question has no answer today, the next most appropriate action is to seek a PE diagnosis only in those most likely to benefit from therapy when a PE is identified.
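To make the pretest-probability argument concrete, here is a minimal Bayesian sketch. The sensitivity and specificity below approximate the commonly cited PIOPED II figures for CT angiography, and the three pretest probabilities are illustrative assumptions, not clinical recommendations: when the pretest probability is very low, a substantial share of positive CTs are false positives, before even counting the true-but-inconsequential clots.

```python
# Minimal Bayesian sketch: probability of PE given a positive CT at different
# pretest probabilities. Sensitivity/specificity approximate PIOPED II figures;
# the pretest probabilities are illustrative assumptions.

def post_test_probability(pretest: float, sensitivity: float, specificity: float) -> float:
    """Probability of disease given a positive test (Bayes' theorem)."""
    true_pos = pretest * sensitivity
    false_pos = (1 - pretest) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

SENSITIVITY = 0.83  # approximate CT angiography sensitivity (PIOPED II)
SPECIFICITY = 0.96  # approximate CT angiography specificity (PIOPED II)

for pretest in (0.02, 0.15, 0.50):  # assumed low / moderate / high pretest probabilities
    ppv = post_test_probability(pretest, SENSITIVITY, SPECIFICITY)
    print(f"Pretest {pretest:>4.0%} -> probability of PE after a positive CT: {ppv:.0%}")
```

Under these assumptions, a positive CT in a 2% pretest-probability patient leaves roughly a 70% chance that no PE is present at all, whereas the same result in a 50% pretest-probability patient is reassuringly close to definitive. That asymmetry is the quantitative case for testing only those most likely to benefit.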

Why Does Overdiagnosis Occur in Emergency Medicine?
If you agree with the premise that overdiagnosis occurs in EM, you probably have no problem listing off many reasons why this is so. Here is my short list:

Litigation. In the US, malpractice suits are real and costly, and no jury will ever reward me for being frugal with my test ordering. However, the malpractice quagmire is largely unique to the US, while overdiagnosis is a global issue in medicine. Therefore, this argument alone is probably not enough to explain overtesting.
Money. The US healthcare system reimburses by the widget produced. Thus, more testing and more diagnosis usually means more money. Change the incentives and there will be less of this activity.


Education. Most EM practitioners were never trained to perform diagnostic critical appraisals or to use Bayesian (probability-based) logic. Rick Bukata noted in these pages recently (Less ‘Art’, More Medicine) that EM GME training in the use of clinical decision rules, diagnostic reasoning, and cognitive biases remains profoundly heterogeneous. This seems to be due to an entrenched resistance to evidence-based medicine (Christensen-Szalanski 1982, Young 1984, Asch 1990, DeKay 1998, Phelps 2004, Croskerry 2008). I contend that EM is the expert in acute disease diagnostics: 24/7, we evaluate the acute chest pain patient, not Cardiology; we evaluate the newly cognitively impaired patient, not Neurology; we evaluate the acutely elevated blood pressure, not Internal Medicine. Teaching our future EM faculty how to find, appraise, and incorporate diagnostic research into bedside decision making ought to be our strength, but it is a Catch-22 because somebody has to teach the teachers. As Jerry Hoffman has famously declared, “Don’t just do something! Stand there!” More specifically, “We must question the notion that as technology advances, it always provides improved solutions to clinical problems.”

Evidence. Diagnostic research is underwhelming. Traditionally, NIH funders and journal editors have focused on and rewarded the principles of therapeutic research. Diagnostic research is not therapeutic, and its designs are frequently observational; the unique biases and constraints of diagnostic studies are underappreciated. Consequently, the volume and quality of diagnostic research are decades behind therapeutic studies. For example, the Cochrane Collaboration is an exceptional resource for interventional systematic reviews on thousands of topics across all medical specialties, but it is a poor source of evidence for diagnostics. Useful sources of diagnostic summaries do exist in the JAMA Rational Clinical Exam series and, more recently, thennt.com, but these are often of indirect relevance to EM, leaving much work to be done. The STARD criteria guide the reporting of diagnostic research and can improve its overall quality for healthcare consumers, but they remain largely underutilized by investigators and journal editorial boards.

All of these issues extend well beyond EM, as anyone can attest who remembers a consultant’s request for an additional test (BNP, D-dimer, etc.) before accepting a patient for admission. We are indeed a small piece of the financial pie compared with other specialties’ spending patterns. For now, however, we can only control our own clinical realm, and there is room for improvement.

How Do We Confront, Evaluate and Avoid Overdiagnosis?
Emergency physicians possess unique expertise to shape actionable solutions to the overdiagnosis crisis. Our specialty has derived, validated, and sometimes refuted most currently used clinical decision rules (CDRs), including the Wells criteria for DVT and PE, the PERC rule, the NEXUS criteria, and recent chest pain decision aids. However, while CDRs are an effective tool to reduce practice heterogeneity and unnecessary testing without compromising patient safety, they aren’t perfect. All diagnostic tests fail sometimes, which leaves us constantly balancing patient risk (false positives and false negatives) against the desire for perfection; for example, see the EPM subarachnoid hemorrhage debate between Drs. David Newman and Kevin Klauer. Therefore, CDRs should not be construed as “rules” but as decision instruments to augment gestalt when diagnoses remain elusive.
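As one concrete illustration of the “decision instrument, not rule” framing, below is a minimal sketch of how the PERC rule might be encoded for bedside use. The eight criteria follow the published rule, but the field names, thresholds, and the explicit gestalt gate are written for illustration only; this is not a validated clinical tool.

```python
# Minimal sketch of a clinical decision rule encoded as a decision instrument
# rather than a mandate. The eight criteria follow the published PERC rule,
# but this toy function is illustrative only and assumes the clinician's
# gestalt already places the patient in a low-probability group.
from dataclasses import dataclass

@dataclass
class PercInputs:
    age: int
    heart_rate: int
    room_air_sao2: float            # percent saturation on room air
    hemoptysis: bool
    estrogen_use: bool
    prior_dvt_or_pe: bool
    unilateral_leg_swelling: bool
    recent_surgery_or_trauma: bool  # requiring hospitalization within 4 weeks

def perc_negative(p: PercInputs, gestalt_low_probability: bool) -> bool:
    """True only when all eight PERC criteria are met AND gestalt is low risk."""
    if not gestalt_low_probability:
        return False  # the rule does not apply outside low-probability patients
    return (
        p.age < 50
        and p.heart_rate < 100
        and p.room_air_sao2 >= 95
        and not p.hemoptysis
        and not p.estrogen_use
        and not p.prior_dvt_or_pe
        and not p.unilateral_leg_swelling
        and not p.recent_surgery_or_trauma
    )
```

The design point is the gestalt gate: the instrument augments clinical judgment rather than replacing it, which is exactly how CDRs should be used at the bedside.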


That being said, CDRs remain underused (Brehaut 2006, Smith 2008, Crichlow 2012). Why? Some research suggests that patients devalue or denigrate physicians who use CDRs. Rather than throwing the baby out with the bathwater, this research suggests an enhanced role for shared decision making: giving patients health-literacy-appropriate interpretations of CDRs for their clinical situation. However, to do this in real time, these instruments need to be accessible at the bedside so as not to impede ED throughput. Dorsata, a group led by Dr. Daniel Gibson at Washington University in St. Louis, is developing one such product to do just that.

Diagnostic researchers and educators need to provide busy clinicians with resources to find, critically appraise, and incorporate practice-enhancing diagnostic research into practice. Pearl described a hierarchy of research for EM clinicians (Figure 2), in which the top of the pyramid represents the most compelling level of evidence for change.

Unfortunately, the vast majority of diagnostic research in 2013 sits in the second tier: studies assessing only diagnostic accuracy without assessing clinician- or patient-centered outcomes. This distinction matters. For example, even though BNP has been shown to increase diagnostic accuracy, when multiple randomized controlled trials assessed whether making BNP results available to emergency clinicians changed care, BNP did not reduce ED length of stay, admission rates, hospital length of stay, or ancillary diagnostic testing (see Carpenter 2011).

Diagnostic tests are not developed under the same watchful eye of the FDA as new pharmaceuticals and medical devices. Prasad recently recommended an overhaul of the diagnostic research funding infrastructure to break the vicious circle of inadequately researched tests followed by aggressive marketing, overdiagnosis, and overtreatment. In the future, clinicians, educators, and journal editors must contemplate where a diagnostic test falls on the hierarchy and whether more compelling data are needed before incorporating the test into practice. When more compelling evidence is indicated, non-research clinicians must provide a voice to the NIH and other funding organizations to support studies that have traditionally gone unfunded.

In 2013, clinicians, educators, journal editors, and policy makers have access to at least three key resources to propel the science of EM diagnostics forward. First, the University of California, San Francisco offers an annual CME workshop entitled “Evidence Based Diagnosis: Advanced Workshop on Evaluating and Using Medical Tests for Clinicians, Educators, Editors, and Policy Makers,” which will occur June 20-21 this year. Drs. Tom Newman and Michael Kohn, who wrote the book “Evidence-Based Diagnosis,” organize this course, which provides learners with hands-on sessions to appraise and incorporate diagnostic research into clinical practice. Second, Academic Emergency Medicine launched an ongoing series of diagnostic systematic reviews and meta-analyses in 2011. In addition to exploring diagnostic questions that are more pertinent to EM than those found in JAMA’s Rational Clinical Exam, this series quantifies EM disease prevalence (pre-test probability), test-treatment thresholds, and implications for future research. No other source currently provides this level of EM-relevant, peer-reviewed original diagnostic research. Thus far, the series has examined septic arthritis and extremity fractures, and many more topics are in the pipeline. Third, I co-authored the textbook “Evidence-Based Emergency Care: Diagnostic Testing and Clinical Decision Rules, 2nd Edition” with Drs. Jesse Pines, Ali Raja, and Jay Schuur in 2013. This book provides readers with one contemporary resource for learning about the concepts and implications of diagnostic research bias while highlighting high-yield evidence to use at the busy ED bedside.

As the House of Medicine contemplates, defines, and continues to study “overdiagnosis” at the Dartmouth conference in September and beyond, emergency medicine will undoubtedly be challenged again to justify its diagnostic processes. Our world of high-stakes acute care medicine is unique, and stakeholders need to realize that negative tests are not inherently worthless, unnecessary, or wasteful spending. Tort reform is essential to removing one large obstacle to more reasonable decision making. The science of diagnostic research needs to improve and evolve using concepts like disruptive innovation, appropriate GME and editorial training, and publication guidelines. Overdiagnosis is a valid concept, not intellectual chatter, but the pressure to make the right diagnosis quickly, accurately, and economically will create quite a storm. Are you ready?

Want to learn more about the Overdiagnosis conference? See this interview of Dr. Carpenter by Dr. Plaster.
