EHR usability needs to improve, but we need to focus on patient safety issues rather than waste time complaining about click counts or citing flawed research
This article is one of two concerning the measurement of EHR effectiveness. See the companion article by Richard Bukata, MD.
What are we talking about, when we’re talking about EHR?
For years, in these pages and elsewhere, the question was, “Should we, or shouldn’t we?” And we turned to anecdotes and horror stories, philosophical stances and rationalizations, for endorsing a system – paper or electronic – that we liked. There was precious little research to guide us on our choices.
But now, thanks to the Meaningful Use program, hospitals have too many financial incentives (and soon, penalties for delays) to avoid implementing EHR in the manner the government wants. So adoption of EHR has surged. And, thankfully, we also have some well-conducted research about the benefits and drawbacks of implementing EHR in the ED.
Because of the high adoption, however, the conversation should be changing toward usability – “How can we improve these electronic systems we have?” Or even “How can we convince the powers-that-be that our frustrations and concerns about usability are important?”
Hospital risk management, and stifling ‘gag clauses’ between vendors and clients, make it challenging to discuss EHR safety issues in an open forum. You’ll be hard-pressed to even find screenshots of ED information systems online. So when we’re talking about improving usability, we’re more reliant on peer-reviewed research than ever.
Some recent research has tried to shed light on EHR’s impact on ED productivity, throughput, and patient satisfaction. Rick Bukata has cited a few papers on the facing page. Over the years, Rick and I have clashed on the benefits of EHR. After reading his first paragraph this month, I think he’s come a long way in accepting EHRs and even acknowledging some potential benefits. For my part, over time, I’ve become increasingly frustrated at the slow pace of improvement of EHR usability, and placed increasing stock in Rick’s view that ED attendings should be as unencumbered as possible while working.
Still, there’s considerable daylight between us – and that becomes glaringly apparent when you consider our differing takes on the literature, or even what’s worth citing.
Take, for example, the “4000 Clicks” piece by Hill et al. from November 2013. I first learned of this paper in January, when Jim Augustine and John Holstein cited it in these pages. It took me a few minutes of reading the original article to conclude just how bad it was – full of faulty assumptions and unclear methods. For instance:
1) Hill et al. assert that other studies, in which EHRs improved ED throughput metrics and reduced errors, are biased because they draw data from “the most innovative institutions” with presumably customized systems. They also note that “software vendor” has been the leading factor in determining ED improvement and provider satisfaction. So I’d expect they’d want to pick a system with average scores, implemented at a fairly typical ED. Yet the authors don’t mention any ED characteristics, and they chose one of the worst-ranked ED information systems (McKesson) to evaluate. In KLAS surveys of ED physicians, McKesson scored at or near the bottom in many fields, including provider satisfaction, perceived workflow integration, and speed of charting. The authors justified choosing McKesson’s ED software because many hospitals use McKesson products; by the same rationale we should evaluate GE software because the hallways are lit with GE bulbs. McKesson has a popular billing service, but its ED software has a small and declining market share. Given that the majority of EHR systems scored better – many much better – the choice undermines the authors’ goal of a more accurate representation of the ED EHR experience.
2) The authors looked at the behavior of 16 providers, a mix of attendings, residents and PAs/NPs. Lumping different provider types together can be risky, depending on the environment and documentation expectations. Yet the authors don’t describe the environment, and the insane four-fold variability in time spent charting (17.5% to 67.6%) suggests very different roles were being combined haphazardly. In another section, the authors confusingly refer to their “30 subjects” – but wait, wasn’t it sixteen? Nowhere do the authors describe their subjects’ experience in emergency medicine, or with the system. Were they newly minted physicians? New hires? Was this day one of the system installation?
This paper always seemed to me more like sensationalism than a scholarly analysis of EHR’s impact on ED productivity. So it rankles when this lousy paper keeps getting cited as evidence of the onerous burden of EHRs. Anyone who uses an EHR knows the software should be better, but we need good research to really inform the debate and drive the field forward.
Fortunately, the new paper by Ward, Froehle, et al. is thorough and well executed. It details the practice environment (a suburban, academic, 34,000-visit ED) and the EHR (Epic). It also describes the pre-implementation workflow (paper charts for notes and orders, an electronic track board, and telephone dictation), including paper-based order sets that were faithfully reproduced in the EHR post-implementation, without any computerized decision support to influence test ordering.
The results are pretty clear, too.
Critics of EHR have long assumed that patients would not react well to their providers spending time at the PC workstation instead of at the bedside. But Ward’s study showed no sustained change in patient satisfaction. Complaints went down immediately post-implementation and stayed down, compared to the pre-implementation baseline. Revisits and AMAs also dropped post-EHR.
Another significant finding of Ward’s research concerns length-of-stay (LOS). Patient LOS (after the brief transition period) was actually slightly lower by the end of the post-implementation study period, for both admitted and discharged patients. Someone new to the debate may wonder why anyone is surprised by this – moving from paper to electronic systems has led to greater efficiencies in many other industries. But it’s an axiom in some circles that going electronic makes EDs inefficient.
In fact, Ward published another study in Annals just a few months after this one, which showed EHR implementation across 23 community EDs had no sustained meaningful difference in any of eight commonly accepted operational performance metrics, including LOS and patient satisfaction.
Of course this is just one group of community EDs, but the urban academic ED I work in similarly showed throughput improvements, ten years ago, when we adopted EHR (Baumlin et al. Jt Comm J Qual Patient Saf. 2010;36(4)).
Critics of that 2010 manuscript argued that this academic ED’s pre-EHR baseline throughput times were so slow that any investment or attention to the problem would have yielded improvements, and EHR was the beneficiary of unwarranted accolades.
Rick argues that, since Ward was studying 23 community EDs run by the successful Schumacher Group, the applicability of this research is limited. The reasoning goes, Schumacher had so many resources at their disposal that it was inevitable they’d see no decline in operational characteristics after adopting EHR; regular EDs could still suffer from EHR’s inefficient effects.
But if Occam were told that resource-strapped EDs like ours, and resource-rich EDs like Schumacher’s, both show that EHR implementation doesn’t harm throughput, he might use his razor to favor the simple argument: EHR isn’t necessarily harmful for ED throughput. I’m not sure how much more research will be required to satisfy critics with alternative explanations for the results. But I do think we’ve passed the point where EHR’s benign or harmless impact on ED throughput should cause surprise.
That leaves us with the last major finding of Ward’s research on the single suburban academic ED: the impact on ordering rates after EHR implementation. Ward’s group showed that order rates went up – doubling for meds, rising substantially for labs and EKGs, and rising slightly for radiology tests. The increases were sustained and significant.
The result fits well with the criticism that EHR encourages over-testing. Certainly, advocates of Choosing Wisely and of reducing unnecessary workups and medications may find it hard to embrace EHR in light of Ward’s data.
The result is particularly noteworthy because the ED he studied had paper order sets prior to EHR implementation. This refutes Rick’s inference that creating comprehensive order sets is the culprit, but does suggest that simply putting the same checkboxes on a computer screen makes the doctors more likely to place orders.
That’s certainly possible, but I think there’s another explanation.
Ward’s group was wise to count only administered meds, only resulted labs, only interpreted radiology studies. But I suspect that the EHR facilitated capture of meds that, on the old paper system, hadn’t been well documented.
I’d love to know how much of the med increase observed here was from oxygen, saline and albuterol. I have a hunch that at least some of the lab testing increase came from fingersticks and urine pregnancy tests. We know a whole lot of meds and tests are routinely performed in the ED, either through verbal orders, per protocol, or by skilled nurses who anticipate what the EP will want. Maybe the old paper system just failed to capture a lot of routine care, but the new EHR didn’t. It’s notable that the smallest increase post-EHR was in radiology studies, which is the kind of order that’s least likely to go undocumented on a paper system.
Think of it this way: If you suddenly doubled the amount of meds your ED is dispensing and dramatically increased the number of tests you’re running, wouldn’t patient length of stay go up? Yet Ward showed it didn’t. I think what’s going on is better capture of meds and tests that were already being performed.
Regardless, we have enough information to draw some conclusions. It’s time to put LOS and patient satisfaction concerns to rest. And yes, the increase in testing seen here needs follow-up – to show whether benefit or harm came to patients (all we know is that the post-EHR death rate declined, without achieving statistical significance). This ED didn’t implement any clinical decision support (CDS) tools for ordering labs, rads or meds. Like other aspects of EHR, poorly implemented CDS can lead to the useless frustration of alert fatigue, but done right, CDS can curtail unnecessary testing, prevent medication errors, and improve outcomes. To qualify for Meaningful Use, hospitals have to implement some CDS, and CMS is expected to encourage its use to monitor and reduce practice variation.
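To make the “done right” distinction concrete, here is a minimal sketch of the kind of narrowly targeted CDS rule I have in mind – one that stays silent on routine orders and fires only when a new medication duplicates a therapeutic class the patient is already receiving. This is purely illustrative Python, not any vendor’s actual logic, and the small drug-class table is a hypothetical stand-in for a curated terminology like RxNorm:

    # Illustrative sketch only: flag duplicate therapeutic classes
    # (e.g., two beta-blockers) rather than alerting on every order.
    # This class map is a hypothetical sample; a real system would
    # draw on a curated drug terminology such as RxNorm.
    DRUG_CLASS = {
        "atenolol": "beta-blocker",
        "metoprolol": "beta-blocker",
        "amlodipine": "calcium-channel blocker",
        "verapamil": "calcium-channel blocker",
        "vancomycin": "glycopeptide antibiotic",
    }

    def duplicate_therapy_alerts(active_meds, new_order):
        """Alert only when the new order repeats a therapeutic
        class already active for this patient."""
        new_class = DRUG_CLASS.get(new_order.lower())
        if new_class is None:
            return []  # unknown drug: stay silent rather than over-alert
        return [
            f"Duplicate therapy: {new_order} and {med} are both {new_class}s."
            for med in active_meds
            if DRUG_CLASS.get(med.lower()) == new_class
        ]

    # A patient already on atenolol, now being ordered metoprolol:
    print(duplicate_therapy_alerts(["Atenolol"], "Metoprolol"))
    # -> ['Duplicate therapy: Metoprolol and Atenolol are both beta-blockers.']

The point of the sketch is the design choice: one specific, actionable alert when it matters, and silence the rest of the time – the opposite of alert fatigue.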
Where does that leave us? Hospital administration doesn’t care too much about doctors’ click counts or frustrations. They care about the bottom line: are patients happy? Are they moving quickly through the ED? The new research is loud and clear: EHR doesn’t adversely affect patient satisfaction or length of stay.
I think the best way to get administrators, vendors and the feds motivated to improve EHR usability is to highlight the risks to patient safety. Yet we’re stymied at every turn, unable to really talk about the inefficient design or dangerous settings in open forums, thanks to contracts with gag clauses, and fears of liability.
Last year, ACEP’s QIPS (Quality Improvement and Patient Safety) and Informatics sections published a white paper (Farley HL, Baumlin KM, et al. Quality and Safety Implications of Emergency Department Information Systems. Ann Emerg Med. 2013;62:399-407) with concrete examples of the risks of poor usability, and specific recommendations for improving EHR safety. In response to the white paper, ACEP met with vendors and started to develop a framework for securely but publicly reporting safety concerns, and letting prospective customers know what safety issues vendors had addressed (and what issues were still outstanding). We need to put pressure on ACEP to get this system up and running, and we need to make use of it. That’s going to be far more productive for improving EHR usability than complaining about click counts or citing flawed research.
Nicholas Genes, MD, PhD, is a senior editor at EPM and a clinical informaticist at Mount Sinai Medical Center.
3 Comments
Reading through Dr. N.G.’s articles, I get the distinct feeling, or an even stronger impression, that what he criticizes is what he practices in his own article, pretty much like teenagers do in a school debate.
Software is dumped on physicians. The decision about which EHR will be bought and how it will look on screen – whether a physician-friendly, medically logical system or a legally strong and safe one – comes down to one criterion: it looks good ‘legally’ and can keep CMS happy for incentives and avoid penalties. Whatever lengthy arguments are put forward up front, I find the discussion is frequently a gimmick; it’s money, not patient safety, that is in the forefront ever so often when choosing an EHR.
A simple example: MARs are displayed in beautiful alphabetical order when physicians think in an organ- or system-wise manner. My own hospital’s EHR director is a pathologist, and after nearly 5 years I heard him say ‘yes’ to organ- or system-wise grouping in the MAR. Still: 0.9% saline 10 mL push every so often, followed by 25% dextrose prn if blood sugar falls below…, or 0.45% saline with 5% dextrose at 30 mL per hour… and this goes on and on for 5, 6, 7, 8 or even 10 drugs. Atenolol at the top of the list and metoprolol far down, ‘invisible,’ with the patient getting two beta-blockers; or amlodipine and verapamil – two CCBs – in one patient; or vancomycin, the life-saver for my Staph bacteremic patient, relegated to the 6th page, way down out of sight on my computer screen!
I could not see the logic in this that my hospital’s EHR vendor and administration could see. Yes, legally speaking, it’s all there.
I am sure there will be a lot of noise generated by my acidic comments, but EHRs need to be a whole lot better before being shoved down physicians’ throats.
I have not even alluded to the interesting icons that mean ‘save’ in one EHR and ‘send’ in another. Iconic language – a new language every 18-24 months. You scroll the mouse over the iconic symbol and, if you’re lucky, find out what the icon means – or, frequently, you don’t.
I am appalled at the concept of needing ‘education’ and ‘training’ in EHR, so much so that I can even earn CME points! WOW!
The EHR’s language and screens change every few months, such that the cumulative lost work-hours have never been realistically explored.
I could not see the wisdom in this EHR.
Or, in a chronic kidney clinic, I saw only BUN but not creatinine in a serial or graphic format to evaluate progressive decline in renal function over time…
Or…could labs be shown
14 Oct 2014
Dear Emergency Physicians Monthly:
I read Dr Bukata’s article on EHRs with interest.
I skipped the article by Dr. Genes. Alas, during my 32-year career as an emergency physician, I have found reading articles by MD-PhD’s to be unrewarding.
The MD-PhD species lives in a different world than I do.
I’m a grunt-EP, and I’ve seen more than 40,000 patients in my career. So if I find that EHRs impede me in doing my work (and I do), no 100,000 words by an MD-PhD is going to change my mind.
What I would really like to see is an article by a grunt-EP – with experience in the trenches comparable to my own – saying that EHRs are wonderful, help his or her productivity, improve safety, and so on.
That would be a unique article.
I am still waiting.
Sincerely,
Peter Nelson MD
Always happy to see comments to my article, even if they’re incomprehensible or from people who can’t wait to share their views but can’t be bothered to read mine.
Hey, EPs: you’re going to get the systems you fight for. Hopefully your IT advocates aren’t like the people commenting above.